• @Jordan117

    IIRC based on the source paper the “verbatim” text is common stuff like legal boilerplate, shared code snippets, book jacket blurbs, alphabetical lists of countries, and other text repeated countless times across the web. It’s the text equivalent of DALL-E “memorizing” a meme template or a stock image – it doesn’t mean all or even most of the training data is stored within the model, just that certain pieces of highly duplicated data have ascended to the level of concept and can be reproduced under unusual circumstances.
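To make the "highly duplicated data" point concrete, here's a minimal sketch (hypothetical mini-corpus, not from the paper) showing how a single piece of boilerplate can appear thousands of times across otherwise-unique documents. Text with duplication counts like this is exactly what an LLM is most likely to reproduce verbatim:

```python
from collections import Counter

# Hypothetical mini-corpus: the same license boilerplate is prepended to
# every "document", while the rest of each document is unique prose.
boilerplate = "This work is licensed under a Creative Commons license."
documents = [boilerplate + " Unique article text number %d." % i
             for i in range(1000)]

# Count how often each sentence occurs across the whole corpus.
counts = Counter()
for doc in documents:
    for sentence in doc.split(". "):
        counts[sentence.strip(".")] += 1

# The boilerplate sentence is duplicated ~1000x; every unique sentence
# appears exactly once. Memorization risk tracks this duplication count.
top_sentence, top_count = counts.most_common(1)[0]
print(top_count)  # → 1000
```

The asymmetry is the point: a model doesn't need to store the corpus to regurgitate the one string it saw a thousand times.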

    • lemmyvore

      Problem is, they claimed none of it gets stored.

      • TWeaK

        They claim it’s not stored in the LLM, they admit to storing it in the training database but argue fair use under the research exemption.

        This almost makes it seem like the LLM can tap into the training database when it reaches some kind of limit. In which case the training database absolutely should not have a fair use exemption - it's not just research, but part of the finished commercial product.