• @AbouBenAdhem · 1 year ago

    I wonder if this is actually comparable to the way our brains store long-term memory?

    • @abhi9uOP · 1 year ago

      Interesting. I’m just thinking aloud to understand this.

      In this case, the models look at a short sequence of bytes in their context and predict the next byte(s) with good accuracy, which allows efficient encoding. Most of our memories are associative, i.e. we associate them with some concept/name/idea. So do you mean our brain uses the concept to predict a token, which then gets decoded in the form of a memory?
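
      To make that concrete, here's a toy sketch (my own illustration, not the actual setup from the paper) of why better prediction means better compression: an entropy coder such as arithmetic coding can store each byte in roughly -log2(p) bits, where p is the probability the predictor assigned to it. A simple order-1 byte-frequency model stands in for the LLM here:

      ```python
      # Toy sketch: a predictor assigns a probability to each next byte; an
      # entropy coder (e.g. arithmetic coding) can then store that byte in
      # about -log2(p) bits, so better prediction means fewer bits. An
      # order-1 frequency model stands in for the LLM; names are illustrative.
      import math
      from collections import defaultdict

      def code_length_bits(data: bytes) -> float:
          """Total bits if each byte costs -log2 p(byte | previous byte)."""
          counts = defaultdict(lambda: defaultdict(int))  # prev -> next -> count
          totals = defaultdict(int)
          bits = 0.0
          prev = None
          for b in data:
              # Laplace-smoothed probability of this byte given the previous one.
              p = (counts[prev][b] + 1) / (totals[prev] + 256)
              bits += -math.log2(p)  # ideal entropy-coder cost for this byte
              counts[prev][b] += 1
              totals[prev] += 1
              prev = b
          return bits

      text = b"the cat sat on the mat, " * 40
      print(f"raw: {len(text) * 8} bits, coded: {code_length_bits(text):.0f} bits")
      ```

      Once the model has seen a few repetitions, its predictions get sharp and the cost drops far below 8 bits/byte; a stronger predictor (an LLM) pushes p higher and the cost lower still.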

      • @AbouBenAdhem · 1 year ago

        Firstly—maybe what we consider an “association” is actually an indicator that our brains are using the same internal tokens to store/compress the memories.

        But what I was thinking of specifically is narrative memories: our brains don’t store them frame-by-frame like video, but rather, they probably store only key elements and use their predictive ability to extrapolate the omitted elements on demand.
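
        As a loose analogy (my own toy sketch, nothing to do with actual neuroscience): a codec that stores only keyframes and predicts the samples in between can recover the full sequence from a fraction of the data, as long as the predictor is good:

        ```python
        # Toy analogy: store only "key frames" and reconstruct the omitted
        # samples by prediction (here, simple linear interpolation).
        signal = [x * 0.5 for x in range(20)]  # the full "experience"
        # Keep every 5th sample plus the last one; drop the rest.
        keyframes = {i: v for i, v in enumerate(signal)
                     if i % 5 == 0 or i == len(signal) - 1}

        def recall(i: int) -> float:
            """Reconstruct sample i from the nearest stored keyframes."""
            if i in keyframes:
                return keyframes[i]
            lo = max(k for k in keyframes if k < i)
            hi = min(k for k in keyframes if k > i)
            t = (i - lo) / (hi - lo)
            return keyframes[lo] * (1 - t) + keyframes[hi] * t  # predicted detail

        print(recall(7), signal[7])  # reconstructed vs. original: 3.5 3.5
        ```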

        • @abhi9uOP · 1 year ago

          Yes, that makes much more sense.

        • @[email protected] · 1 year ago

          No, because our brains also use hierarchical activation for association, which is why, if we're talking about bugs and I say "I got a B," you assume it's a stinging insect, not a passing grade.

          If it were simple word2vec, we wouldn't have that additional means of noise suppression.
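
          A minimal sketch of that point, using made-up toy vectors rather than real word2vec weights: a static embedding gives "B" one fixed vector, so it sits equally close to both senses regardless of topic, and suppressing the wrong sense requires conditioning the representation on context:

          ```python
          # Toy vectors (made up, not real word2vec weights) showing why a
          # static embedding can't disambiguate: "B" has one fixed vector in
          # every sentence, so context can't suppress the wrong sense.
          import numpy as np

          def cosine(a, b):
              return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

          static = {
              "bee":   np.array([0.9, 0.1]),
              "grade": np.array([0.1, 0.9]),
              "B":     np.array([0.5, 0.5]),  # same vector, whatever the topic
          }

          print(cosine(static["B"], static["bee"]))    # ~0.78: equally close
          print(cosine(static["B"], static["grade"]))  # ~0.78: to both senses

          # A contextual model can instead shift "B" toward the active topic,
          # e.g. by mixing in a context vector (the "bugs" conversation):
          context = static["bee"]  # stand-in for the topic representation
          contextual_B = 0.5 * static["B"] + 0.5 * context
          print(cosine(contextual_B, static["bee"]))   # ~0.96: insect sense wins
          ```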