• @kbotc
    15 points · 7 months ago

    Dynamically sized but stored contiguously makes the systems performance engineer in me weep. If the lists get big, the kernel is going to do so much churn.

    • @[email protected]
      15 points · 7 months ago

      Contiguous storage is very fast to iterate over, though, which often offsets the cost of allocation.
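
      Just to put a number on that, a quick throwaway sketch (standard library only; run with `--release`, exact ratios vary by machine):

      ```rust
      use std::collections::LinkedList;
      use std::time::Instant;

      fn main() {
          const N: u64 = 10_000_000;

          // Contiguous storage: one big allocation, cache-friendly traversal.
          let vec: Vec<u64> = (0..N).collect();
          // Node-based storage: one allocation per element, pointer chasing.
          let list: LinkedList<u64> = (0..N).collect();

          let t = Instant::now();
          let vec_sum: u64 = vec.iter().sum();
          println!("Vec:        sum {vec_sum} in {:?}", t.elapsed());

          let t = Instant::now();
          let list_sum: u64 = list.iter().sum();
          println!("LinkedList: sum {list_sum} in {:?}", t.elapsed());
      }
      ```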

      • @[email protected]
        7 points · 7 months ago

        Modern CPUs are also extremely efficient at dealing with contiguous data structures. Branch prediction and caching get to shine on them.

        Avoiding memory access, or helping the CPU fetch it all upfront, shifts the computation into a different physical domain: you end up bound by the caches and prefetcher instead of by DRAM latency.
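
        To make the caching point concrete, a small sketch: same `Vec`, same additions, only the access order changes (the "shuffle" is a multiply-by-odd-constant permutation so no external crates are needed; timings are machine-dependent):

        ```rust
        use std::time::Instant;

        fn main() {
            const N: usize = 1 << 24; // ~16M elements, far larger than cache
            let data: Vec<u64> = (0..N as u64).collect();

            // Sequential order: the hardware prefetcher streams the array in.
            let t = Instant::now();
            let mut sum = 0u64;
            for i in 0..N {
                sum += data[i];
            }
            println!("sequential: {sum} in {:?}", t.elapsed());

            // Permuted order: identical work, but nearly every access is a
            // cache miss. Multiplying by an odd constant mod 2^24 is a
            // bijection, so every index is still visited exactly once and
            // the sums must match.
            let t = Instant::now();
            let mut sum2 = 0u64;
            for i in 0..N {
                sum2 += data[i.wrapping_mul(2_654_435_761) & (N - 1)];
            }
            println!("permuted:   {sum2} in {:?}", t.elapsed());
            assert_eq!(sum, sum2);
        }
        ```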

    • :3 3: :3 3: :3 3: :3
      7 points · edited · 7 months ago

      Which is why you should:

      1. Preallocate the vector if you can guesstimate the size (see the sketch after this list)
      2. Use a vector library that won’t reallocate the entire vector on every single addition (like Rust, whose Vec roughly doubles its capacity whenever it runs out of space)

      Memory is fairly cheap. Allocation time not so much.
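
      A minimal sketch of both points against Rust’s `Vec` (note the doubling policy is an implementation detail, not a documented guarantee):

      ```rust
      fn main() {
          // No size hint: watch the capacity jump as the Vec reallocates.
          let mut grown: Vec<u32> = Vec::new();
          let mut last_cap = grown.capacity();
          for i in 0..1_000 {
              grown.push(i);
              if grown.capacity() != last_cap {
                  last_cap = grown.capacity();
                  println!("reallocated: capacity is now {last_cap}");
              }
          }

          // Guesstimated size up front: one allocation, no churn at all.
          let mut hinted: Vec<u32> = Vec::with_capacity(1_000);
          for i in 0..1_000 {
              hinted.push(i);
          }
          assert!(hinted.capacity() >= 1_000);
      }
      ```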

    • @yetiftw
      4 points · 7 months ago

      MATLAB likes to pick the smallest available spot in memory to store an array, so for loops that grow a matrix it’s recommended to preallocate the space using a matrix full of zeros!
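
      The same zero-fill pattern translates outside MATLAB too; a sketch in Rust (illustrative only): allocate the zero-filled buffer once, then write by index instead of growing inside the loop.

      ```rust
      fn main() {
          let n = 1_000;

          // Growing element by element, like appending to a MATLAB matrix
          // in a loop: the buffer may be reallocated many times.
          let mut grown: Vec<f64> = Vec::new();
          for i in 0..n {
              grown.push(i as f64 * 0.5);
          }

          // Preallocating a zero-filled buffer (the zeros(n, 1) equivalent)
          // and writing by index: a single allocation, no churn.
          let mut pre = vec![0.0_f64; n];
          for i in 0..n {
              pre[i] = i as f64 * 0.5;
          }

          assert_eq!(grown, pre);
      }
      ```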

    • @tamal3
      2 points · 7 months ago

      Is that churn or chum? (RN or M)