• @AbouBenAdhem

    I wonder if this is actually comparable to the way our brains store long-term memory?

    • @abhi9uOP

      Interesting. I’m just thinking aloud to understand this.

      In this case, the models are looking at a short sequence of bytes in their context and are able to predict the next byte(s) with good accuracy, which allows efficient encoding. Most of our memories are associative, i.e. we associate them with some concept/name/idea. So do you mean our brain uses the concept to predict a token, which gets decoded in the form of a memory?
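
      To make that concrete, here's a rough sketch of the prediction-to-bits link (my own toy code, not from the paper): an entropy coder like arithmetic coding needs about -log2(p) bits for a symbol the model predicted with probability p, so better prediction directly means a smaller encoding.

      ```python
      import math

      def compressed_bits(data: bytes, predict) -> float:
          """predict(context) -> dict mapping next-byte values to probabilities."""
          total = 0.0
          for i, b in enumerate(data):
              p = predict(data[:i]).get(b, 1 / 256)  # fall back to uniform
              total += -math.log2(p)                 # ideal code length for this byte
          return total

      # A uniform "model" needs 8 bits per byte; a model that put, say, 0.9 on
      # each correct next byte would need only ~0.15 bits per byte.
      uniform = lambda ctx: {b: 1 / 256 for b in range(256)}
      print(compressed_bits(b"hello world", uniform))  # 88.0
      ```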

      • @AbouBenAdhem

        Firstly, maybe what we consider an “association” is actually an indicator that our brains are using the same internal tokens to store/compress the memories.

        But what I was thinking of specifically is narrative memories: our brains don’t store them frame-by-frame like video, but rather, they probably store only key elements and use their predictive ability to extrapolate the omitted elements on demand.

        • @abhi9uOP

          Yes, that makes much more sense.

        • @[email protected]

          No, because our brains also use hierarchical activation for association, which is why, if we’re talking about bugs and I say “I got a B”, you assume it’s a stinging insect, not a passing grade.

          If it were simple word2vec, we wouldn’t have that additional means of noise suppression.
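
          As a toy illustration of what I mean (made-up vectors, nothing rigorous): a static word2vec-style lookup gives “B” the same vector regardless of context, whereas anything that conditions on the surrounding words can pick the sense closest to the context.

          ```python
          import numpy as np

          # one static vector per sense; a plain word2vec lookup for "B" couldn't
          # separate them, because it has no access to the surrounding words
          senses = {"bee": np.array([0.9, 0.1]), "grade": np.array([0.1, 0.9])}
          contexts = {"bugs": np.array([0.8, 0.2]), "exam": np.array([0.2, 0.8])}

          def cosine(a, b):
              return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

          def disambiguate(context_word):
              # pick the sense of "B" most similar to the current context
              return max(senses, key=lambda s: cosine(senses[s], contexts[context_word]))

          print(disambiguate("bugs"))  # bee
          print(disambiguate("exam"))  # grade
          ```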

  • @[email protected]

    Does anyone know whether these results were obtained while taking the size of the dictionary into account?

    • @abhi9uOP

      Do you mean the number of tokens in the LLM’s tokenizer, or the dictionary size of the compression algorithm?

      The vocab size of the pretrained models is not mentioned anywhere in the paper, although they did conduct an experiment measuring compression performance with tokenizers of different vocabulary sizes.

      If you meant the dictionary size of the compression algorithm, then there was no dictionary: they only used arithmetic coding to do the compression, which doesn’t use a dictionary.
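
      In case it helps, this is roughly what the coder does (a heavily stripped-down sketch I wrote for illustration, floating point and no renormalization, so only good for short inputs, and not the paper’s implementation): the model’s predicted probabilities narrow an interval step by step, and any number inside the final interval identifies the data. There is no dictionary of previously seen strings anywhere, unlike LZ-style compressors.

      ```python
      def encode(data: bytes, predict):
          # predict(context) must return a full distribution over byte values 0..255
          low, high = 0.0, 1.0
          for i, b in enumerate(data):
              probs = predict(data[:i])          # model's distribution over the next byte
              width = high - low
              cum = 0.0
              for symbol in range(256):
                  p = probs.get(symbol, 0.0)
                  if symbol == b:
                      # shrink the interval to the slice assigned to this byte
                      high = low + width * (cum + p)
                      low = low + width * cum
                      break
                  cum += p
          return (low + high) / 2                # any value in [low, high) identifies `data`

      uniform = lambda ctx: {s: 1 / 256 for s in range(256)}
      print(encode(b"hi", uniform))
      ```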

    • @AbouBenAdhem

      It looks like they did it both ways (“raw rate” vs “adjusted rate”):

      In the case of the adjusted compression rate, the model’s size is also added to the compressed size, i.e., it becomes (compressed size + number of model parameters) / raw size. This metric lets us see the impact of model parameters on compression performance: a very large model might compress the data better than a smaller model, but once its size is taken into account, the smaller model might be doing better.
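
      A quick toy calculation makes the difference obvious (numbers are made up, not from the paper):

      ```python
      def raw_rate(compressed_size, raw_size):
          return compressed_size / raw_size

      def adjusted_rate(compressed_size, raw_size, model_params):
          # adjusted rate = (compressed size + number of model parameters) / raw size
          return (compressed_size + model_params) / raw_size

      raw_size = 1_000_000_000  # hypothetical 1 GB of data

      # hypothetical big vs small model
      print(raw_rate(150_000_000, raw_size),                       # 0.15
            adjusted_rate(150_000_000, raw_size, 70_000_000_000))  # 70.15
      print(raw_rate(300_000_000, raw_size),                       # 0.3
            adjusted_rate(300_000_000, raw_size, 200_000_000))     # 0.5
      # The big model wins on raw rate but loses badly once its parameters count.
      ```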

      • @abhi9uOP

        Yes. They also mention that using such large models for compression is not practical because their size dwarfs any amount of data you might want to compress. But this result gives a good picture of how general such large models are, and how well they can predict the next tokens for image/audio data with high accuracy.