• FaceDeer
    1 year ago

    It’s not repeating its training data verbatim because it can’t do that; it doesn’t have the training data stored away inside itself. If it did, the big news wouldn’t be AI, it would be the insanely magical compression algorithm that had been discovered, one that squeezes many terabytes of data down into just a few gigabytes.
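    A rough back-of-envelope sketch of the ratio that claim implies. The corpus and weight sizes below are purely illustrative assumptions, not measured values:

    ```python
    # Back-of-envelope sketch of the "compression" ratio the comment implies.
    # Both sizes are assumed, illustrative numbers.

    training_data_bytes = 10 * 1024**4   # assume ~10 TB of training text
    model_weights_bytes = 4 * 1024**3    # assume ~4 GB of model weights

    ratio = training_data_bytes / model_weights_bytes
    print(f"Implied compression ratio: {ratio:.0f}:1")
    # ~2560:1 -- far beyond what lossless text compressors achieve
    # (gzip/zstd manage roughly 3-5:1 on prose), which is the point:
    # the weights cannot contain the training data verbatim.
    ```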

    • Hello Hotel
      1 year ago

      Do you remember quotes in English ASCII? /s

      Tokens are even denser than ASCII, similar to word “chunking”. My guess is it’s like lossy video compression, but for text: [Attacked] with [lasers] by [Death Eaters] upon [Margaret]; [has flowery language]; word [Margaret] [comes first] (this theoretical example has 7 “tokens”; see the sketch below).

      It may actually have internalized a really good copy of that book, as it’s likely read it lots of times.
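      A minimal sketch of the token-density claim, using OpenAI’s tiktoken library as one concrete BPE tokenizer. The encoding name and the sample sentence are illustrative assumptions, not anything a real model was trained on:

      ```python
      # Sketch: tokens cover multi-character chunks, so a token sequence
      # is much shorter than the raw ASCII byte sequence it encodes.
      import tiktoken

      enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era BPE vocabulary

      # Illustrative sentence based on the hypothetical example above.
      text = "Attacked with lasers by Death Eaters upon Margaret"
      tokens = enc.encode(text)

      print(f"{len(text)} ASCII bytes -> {len(tokens)} tokens")
      for t in tokens:
          # Each token ID maps back to a multi-character "chunk" of text.
          print(t, repr(enc.decode([t])))
      ```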

      • FaceDeer
        1 year ago

        If it’s lossy enough then it’s just a high-level conceptual memory, and that’s not copyrightable.

        • Hello Hotel
          1 year ago

          It varies based on how much time it’s been given with the media.