• @[email protected]
    link
    fedilink
    English
    81 year ago

    It’s fundamentally a surveillance technology, because the technology fundamentally cannot function without a large dataset of language to begin with. It needs massive amounts of data, which can only be gathered through surveillance, because unless you’re Reddit or Facebook, your own site probably doesn’t contain enough data to meet the needs of an LLM. Thus you need to scrape the internet for more data in the hope of filling it out.

    I very much disagree with the characterization that training an LLM on a book is pirating said book. We might see copyright owners release their materials in the future under licenses that disallow this, which is their right (though it’s not clear to me that any copy is being made). In my opinion there’s not a lot of difference between me training an LLM on said book and me using the story as inspiration for my own book. I suspect we’ll never agree on that one.

    Pretty amusing that you think scraping published data somehow constitutes surveillance, though.

    • @[email protected]
      link
      fedilink
      English
      11 year ago

      What do you think happens to data when it’s scraped? Copying the data is a fundamental requirement for using it in training. These models are trained in big datacenters where the original work is split up and tokenized and used over and over again.
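
      For illustration, the “split up and tokenized” step might look something like this toy sketch (a hypothetical word-level tokenizer over a made-up corpus, not any real model’s tokenizer):

```python
# Toy illustration: the scraped text is copied into the training
# pipeline, split into tokens, and mapped to integer IDs.
text = "the quick brown fox jumps over the lazy dog"

# Build a vocabulary from the words seen in the corpus
# (dict.fromkeys keeps first-seen order and drops duplicates).
vocab = {word: idx for idx, word in enumerate(dict.fromkeys(text.split()))}

# "Tokenizing" here just replaces each word with its vocabulary ID;
# training then iterates over these IDs again and again.
token_ids = [vocab[word] for word in text.split()]
print(token_ids)  # [0, 1, 2, 3, 4, 5, 0, 6, 7]
```

      Real tokenizers work on subword pieces rather than whole words, but the point stands: it’s the text itself that gets copied and consumed.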

      The difference between you training a model and you reading a book (put online by its author in clear text, to avoid the obvious issue of actual piracy for human use) is that reading it on a website is the intention of the copyright holder, and you as a person have a fundamental right to remember things and be inspired. You don’t, however, have a right to copy and use the text for other purposes, whether that’s making a t-shirt with a memorable line, printing it out to give to someone else, or tokenizing it to train a computer algorithm.

      • @[email protected]
        link
        fedilink
        English
        11 year ago

        > What do you think happens to data when it’s scraped? Copying the data is a fundamental requirement for using it in training. These models are trained in big datacenters where the original work is split up and tokenized and used over and over again.

        Tokenizing and calculating vectors or whatever is not the same thing as distributing copies of said work.

        > The difference between you training a model and you reading a book (put online by its author in clear text, to avoid the obvious issue of actual piracy for human use) is that you reading on a website is the intention of the copyright holder and you as a person have a fundamental right to remember things and be inspired.

        Copyright holders can’t say what I do with their work, nor what I do with the knowledge of their book. They can only say how I copy and distribute it. I don’t need consent to burn an author’s book, create fan art around it, or quote characters in my blog. I do need their consent to copy and distribute their works directly.

        > You don’t however have a right to copy and use the text for other purposes, whether that’s making a t-shirt with a memorable line, printing it out to give to someone else, or tokenizing it to train a computer algorithm.

        And at some point the unit of text gets so small that it becomes uncopyrightable. You can’t copyright most phrases or individual words.

        • @[email protected]
          link
          fedilink
          English
          11 year ago

          > Tokenizing and calculating vectors or whatever is not the same thing as distributing copies of said work.

          It very much is. You can’t just run a cipher on a copyrighted work and say “it’s not the same, so I didn’t copy it”. Tokenization is reversible to the original text. And “distributing” is separate from violating copyright. It’s not distriburight, it’s copyright. Copying a work without authorization for private use is still violating copyright.
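
          To make the reversibility point concrete, here’s a toy sketch (a hypothetical word-level tokenizer, not any specific model’s): the ID mapping is invertible, so decoding recovers the original text exactly.

```python
# Toy word-level tokenizer (hypothetical, for illustration only).
text = "it is not distriburight it is copyright"

# Encode: map each distinct word to an integer ID.
vocab = {word: idx for idx, word in enumerate(dict.fromkeys(text.split()))}
ids = [vocab[word] for word in text.split()]

# Decode: invert the mapping and join the words back together.
inverse = {idx: word for word, idx in vocab.items()}
decoded = " ".join(inverse[i] for i in ids)
print(decoded == text)  # True
```

          Real subword tokenizers (e.g. BPE) are likewise designed to be lossless round trips.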

          • @[email protected]
            link
            fedilink
            English
            01 year ago

            > You can’t just run a cipher on a copyrighted work and say “it’s not the same, so I didn’t copy it”.

            Yes I can. I can download a Web page, encrypt it on my machine, and I’m not distributing said work.

            > And “distributing” is separate from violating copyright. It’s not distriburight, it’s copyright. Copying a work without authorization for private use is still violating copyright.

            That’s just false.

            • @[email protected]
              link
              fedilink
              English
              01 year ago

              You absolutely do not know what you’re talking about. This is just trivial copyright law, but there’s a weird internet mythology that if you can access something on the net you can take it as long as you don’t share it further. The reason the mass-sharers tended to get prosecuted is because they were easier and more valuable targets, not because the people they were sharing it with weren’t also breaking the law.

    • Hot Saucerman · -3 points · edit-2 · 1 year ago

      > on a book is pirating said book.

      If the source is literally a piracy website that serves up applications for removing DRM from ebooks, it’s absolutely piracy. You can’t just deny the source and say “it’s not piracy!” The data came into your hands illicitly, not legally, especially if DRM was circumvented and stripped before it reached you.

      They didn’t go out and buy copies of thousands of books.

      > Pretty amusing that you think scraping published data somehow constitutes surveillance, though.

      I don’t; I was making a point about how absurdly large the language models have to be. If they need that much data on top of thousands of pirated books, it means they fundamentally cannot make the models work without also scraping the internet for data, which is surveillance.

      • @[email protected]
        link
        fedilink
        English
        81 year ago

        > If the source is literally a piracy website that serves up applications on how to remove DRM from ebooks, it’s absolutely piracy. You can’t just deny the source and be like “it’s not piracy!”

        > They didn’t go out and buy copies of thousands of books.

        And if they went to a library and scanned all the books?

        > I don’t, I was making a point about how absurdly large the language models have to be, which is to say, if they have to have that much data on top of thousands of pirated books, it means they fundamentally cannot make the models work without also scraping the internet for data, which is surveillance.

        I mean, it’s just not surveillance, by definition. There’s no observation, just data ingestion. You’re deliberately trying to conflate the words to associate a negative behavior with LLM training to make your argument.

        I really don’t get why LLMs get everybody all riled up. People have been running Web crawlers since the dawn of the Web.