Assuming our simulation is not designed to auto-scale (and our Admins don’t know how to download more RAM), what kind of side effects could we see in the world if the underlying system hosting our simulation began running out of resources?

  • @kromem
    11 months ago

    The assumption that it isn’t designed around memory constraints isn’t reasonable.

    We have limits on speed, so you can't go fast enough to cause pop-in.

    As you speed up, things move more slowly for you, so there needs to be less processing in spite of more stuff (kind of like a frame-rate drop, but with a fixed number of frames produced).

    As you get closer to denser collections of stuff, the same thing happens.
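
    A toy sketch of that idea (purely illustrative; the budget number, the Region fields, and the scaling factors are all made up to show the analogy, not a claim about how any real simulation would work): give each region a fixed per-tick compute budget, and slow its local clock whenever speed or density would otherwise demand more work than the budget allows.

    ```python
    from dataclasses import dataclass

    BUDGET_PER_TICK = 1000  # arbitrary amount of work the host can afford per tick

    @dataclass
    class Region:
        name: str
        velocity: float  # fraction of the speed limit, 0..1
        density: float   # how much stuff is packed nearby, arbitrary scale

        def work_demanded(self) -> float:
            # Naively, more speed and more nearby stuff would need more updates.
            return BUDGET_PER_TICK * (1 + 10 * self.velocity + 5 * self.density)

        def local_time_rate(self) -> float:
            # Instead of exceeding the budget, slow local time so the work
            # actually done each tick stays constant (a frame-rate drop with
            # a fixed number of frames produced).
            return min(1.0, BUDGET_PER_TICK / self.work_demanded())

    for r in (Region("deep space", 0.0, 0.0),
              Region("near light speed", 0.99, 0.0),
              Region("next to dense mass", 0.0, 3.0)):
        print(f"{r.name}: local clock runs at {r.local_time_rate():.2f}x")
    ```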

    And even at the lowest levels, when the generative function gets converted into discrete units to track stateful interactions, those discrete units are discarded if the permanent information about the interaction is erased, which is indicative of low-level optimizations.
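
    That reads a lot like lazy state plus garbage collection: only keep the discrete record of an interaction if something permanent depends on it. A minimal sketch of that framing (my own analogy; Interaction, record, and settle are made-up names):

    ```python
    class Interaction:
        """A discrete unit spun up to track one stateful interaction."""
        def __init__(self, label: str):
            self.label = label
            self.recorded = False  # has any permanent information been written?

        def record(self):
            # Something durable now depends on this interaction's outcome.
            self.recorded = True

    def settle(interactions):
        # Discrete units whose outcome left no permanent record are discarded;
        # only interactions with lasting information stay tracked.
        return [i for i in interactions if i.recorded]

    a, b, c = Interaction("a"), Interaction("b"), Interaction("c")
    b.record()
    print([i.label for i in settle([a, b, c])])  # ['b'] -- a and c are dropped
    ```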

    The scale is unbelievable, but it's very memory-considerate.