It's basically like cloud storage, and your local storage (your brain) gets wiped every loop. You can edit this file any time you want using your brain (you can be tied up and it still works). 1024 bytes is all you get. Yes, you read that right: BYTES, not KB, MB, or GB: 1024 BYTES.

Let's say, for this example, that the loop is 7 days: from a Monday at 6 AM to the next Monday at 5:59 AM.

How do you best use these 1024 Bytes to your advantage?

How would your strategy be different if every human on Earth also gets the same 1024 Bytes “memory buffer”?

  • Kairos · 2 days ago

    That won't work, bro. The page gets reset. I guess URLs could work as a way to externalise information, like linking to the IMDb page for Groundhog Day.

    • thermal_shock · 2 days ago

      I see, I missed the part where everything is reset and not just your memory.

    • @owatnext · 2 days ago

      Library of Babel. Look it up. It contains every possible string of characters. Just link to the string that says what you want to say.

      • Kairos · 2 days ago

        The link would be as long as the string you're trying to store.

        • @QuadratureSurfer · 2 days ago

          Only for the first word, but you don't have to include the full link every single time. The first entry would carry it, and then you'd swap out just the final bits at the end for however many words you want.
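
          A rough sketch of the prefix-sharing idea (Python; the URL format and page IDs here are completely made up, just to show how reusing one common prefix saves bytes):

          ```python
          # Hypothetical Library of Babel links: the prefix and page IDs below are
          # invented for illustration, not the site's real URL scheme.
          prefix = "https://example-babel.info/book?loc="
          pages = ["a1x9q", "77qkz", "zzb21"]  # one short ID per word/page you want

          # Storing every link in full vs. storing the prefix once plus short suffixes.
          full = sum(len((prefix + p).encode("utf-8")) for p in pages)
          shared = len(prefix.encode("utf-8")) + sum(len(p.encode("utf-8")) + 1 for p in pages)
          print(full, "bytes in full vs", shared, "bytes with a shared prefix")
          ```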

        • @owatnext · 2 days ago

          Okay, I think I have a solution. You can extract the Location, Wall, Shelf, Volume, and Page. So then your 1024 bytes only need to contain, for example, "Lib of Babel Hexagon Wall 2 Shelf 1 pg. 210". You should be able to sort it out from there.
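
          Just as a sanity check on the byte budget (a quick sketch in Python, using the example address from this comment):

          ```python
          # The whole address is well under the 1024-byte limit when UTF-8 encoded.
          address = "Lib of Babel Hexagon Wall 2 Shelf 1 pg. 210"
          print(len(address.encode("utf-8")), "of 1024 bytes used")  # 43 of 1024 bytes used
          ```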

          • Kairos · 2 days ago

            That's not how entropy works. GZIP compression will already take care of any redundancies in a subset of UTF-8 (edit: and in UTF-8 itself).

            • @owatnext · 2 days ago

              I don’t want to sound dumb but I have read this several times since you responded and I have no clue what this means. Like I know what all of these words mean but I can’t put them into context with what I said. I wasn’t talking about compression algorithms. I’m so sorry. )=

              • Kairos · 2 days ago

                UTF-8 text is inherently wasteful.

                Say you have binary data and you want to encode it with UTF-8. For simplicity let’s say the spec goes up to 2^16 codepoints.

                Now, each one of these codepoints (a unique character) could be encoded directly in 2 bytes, but because UTF-8 encoding is inherently wasteful, it needs more bytes than that on average. The reason is that UTF-8 ensures a valid string never contains a null byte (eight 0 bits, byte-aligned), which is useful for things like filenames, databases, etc. This means that some bit strings are nonsensical to UTF-8. The majority of them, actually.
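
                A quick way to see the overhead (a minimal sketch in Python; the codepoints are arbitrary picks, they just show how many bytes UTF-8 spends on values that would fit in 2 raw bytes):

                ```python
                # UTF-8 byte cost per codepoint in the range up to 2^16.
                # Raw binary could store any of these values in exactly 2 bytes.
                for cp in (0x0041, 0x07FF, 0x0800, 0xFFFF):
                    encoded = chr(cp).encode("utf-8")
                    print(f"U+{cp:04X} -> {len(encoded)} byte(s): {encoded!r}")
                # U+0041 -> 1 byte(s): b'A'
                # U+07FF -> 2 byte(s): b'\xdf\xbf'
                # U+0800 -> 3 byte(s): b'\xe0\xa0\x80'
                # U+FFFF -> 3 byte(s): b'\xef\xbf\xbf'
                ```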

                It's the same thing with English text, except instead of 1s and 0s we have letters and punctuation. English uses multiple letters per syllable, and certain combinations of letters are nonsensical even though they're still valid strings. It's inherently wasteful, but it's nice for reading.

                GZip compression will minimize the effects of both of these. Although, because of the laws of entropy, you will always need to store some kind of information that lets you decompress it back into the original English text + UTF-8 string.

                Basically, it's a fancy computer science way to store UTF-8 + English text in fewer bytes, using "this is UTF-8 text" and "this is English text" plus some information to untangle it all. Although it isn't literally stored that way. It's all just bits to GZip, both input and output. Both UTF-8 and English text inherently create patterns, and GZip compresses away patterns. Rather well, too.

                This also means that random data is incompressible, because there's no pattern. Unless you're willing to compress lossily, which is literally the only reason Internet video streaming works so well.
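
                Both points are easy to check (a rough sketch in Python with the standard gzip module; exact byte counts will vary a bit):

                ```python
                import gzip
                import os

                # Repetitive English-like UTF-8 text compresses heavily...
                text = ("the quick brown fox jumps over the lazy dog. " * 40).encode("utf-8")
                print(len(text), "->", len(gzip.compress(text)))

                # ...while random bytes of the same length do not compress at all
                # (the output is even slightly larger, because of gzip's own header).
                noise = os.urandom(len(text))
                print(len(noise), "->", len(gzip.compress(noise)))
                ```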

      • ERROR: Earth.exe has crashed (OP) · 2 days ago

        LOL, I saw another user commenting about data compression, then I remembered the Vsauce video I watched a long time ago about the Library of Babel. I missed your comment and didn't even realize you mentioned it 2 hours before I did. 😁