Two authors sued OpenAI, accusing the company of violating copyright law. They say OpenAI used their work to train ChatGPT without their consent.

  • @ClamDrinker · 1 point · edited · 1 year ago

    You realize LLMs are deliberately designed not to self-improve, right? It’s totally possible and has been tried - it’s just that it usually doesn’t end well when they do. And LLMs do learn new things; they’re just called new models, because it takes time and resources to retrain LLMs with new information in mind. It’s up to the human guiding the AI to steer it towards something that isn’t copyright infringement. AIs don’t just generate things on their own without being prompted to by a human.
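    To be concrete about what “learning new things” looks like in practice, here’s a minimal sketch of an offline fine-tuning run - the result is a brand-new checkpoint, not a deployed model updating itself mid-conversation. The base model, data, and paths are placeholders I picked for illustration, not anything OpenAI actually uses:

    ```python
    # Sketch: "learning" = a human-curated retraining run that produces a NEW model.
    # Assumptions: gpt2 stands in for "the old model"; the texts are stand-ins for
    # whatever data the curators chose to include.
    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              Trainer, TrainingArguments)

    base = "gpt2"
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(base)

    # "New information" enters as a curated dataset, prepared by humans.
    texts = ["a document the curators chose to include",
             "another document, vetted before training"]

    def encode(batch):
        enc = tok(batch["text"], truncation=True, max_length=128,
                  padding="max_length")
        enc["labels"] = enc["input_ids"].copy()  # causal LM: predict the same tokens
        return enc

    data = Dataset.from_dict({"text": texts}).map(encode, batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="new-model", num_train_epochs=1,
                               per_device_train_batch_size=1),
        train_dataset=data,
    )
    trainer.train()                  # the slow, expensive part
    trainer.save_model("new-model")  # the "new thing it learned" ships as a new model
    ```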

    You’re asking for a general artificial intelligence, which would most likely be composed of different specialized AIs working together - similar to how our brains have specific regions dedicated to specific tasks. That just doesn’t exist yet, but one of its parts now does.

    Also, you say “right” and “probable” are without difference, yet once again you bring something into the conversation which can only be “right”: code. You cannot create code that is incorrect, or it will not work. Text and creative works cannot be wrong; they can only be judged by opinions, not by rule books which say “it works” or “it doesn’t”.

    The last line is just a bit strange, honestly. The biggest users of AI are creative minds, which is why it’s important that AI models remain open source, so all creative minds can use them.

    • @[email protected]
      link
      fedilink
      English
      -21 year ago

      You realize LLMs are deliberately designed not to self-improve, right? It’s totally possible and has been tried - it’s just that it usually doesn’t end well when they do.

      Tay is yet another example of AI lacking comprehension and intelligence; it produced racist and antisemitic content because it had no concept of ethics or morality, and simply responded to the input given to it. It’s a display of “intelligence” on the same level as a slime mold seeking out the biggest nearby source of food–the input Tay received was largely racist/antisemitic, so its output became racist/antisemitic.
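      To illustrate the level of “intelligence” involved, here’s a toy bigram generator I wrote for this comment - not how Tay actually worked, just the general pattern of output mirroring input with zero comprehension:

      ```python
      # Toy sketch (mine, illustrative only): a model that parrots whatever it is
      # fed, with no notion of what any of it means.
      import random
      from collections import defaultdict

      def train(corpus):
          table = defaultdict(list)
          for line in corpus:
              words = line.split()
              for a, b in zip(words, words[1:]):
                  table[a].append(b)  # just co-occurrence counts; no ethics check
          return table

      def generate(table, start, n=8):
          out = [start]
          for _ in range(n):
              options = table.get(out[-1])
              if not options:
                  break
              out.append(random.choice(options))  # echoes whatever followed before
          return " ".join(out)

      feed = ["the bot repeats what the feed says",
              "the feed shapes what the bot repeats"]
      print(generate(train(feed), "the"))  # tone of output == tone of input
      ```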

      And LLMs do learn new things; they’re just called new models, because it takes time and resources to retrain LLMs with new information in mind. It’s up to the human guiding the AI to steer it towards something that isn’t copyright infringement.

      And the way that humans do that is by not using copyrighted material for the training dataset. Using copyrighted material to produce an AI model infringes on the rights of the people who created the material, the vast majority of whom are small-time authors, artists, and open-source projects composed of individuals contributing their time and effort. Full stop.

      Also, you say “right” and “probable” are without difference, yet once again you bring something into the conversation which can only be “right”: code. You cannot create code that is incorrect, or it will not work. Text and creative works cannot be wrong; they can only be judged by opinions, not by rule books which say “it works” or “it doesn’t”.

      Then why does ChatGPT invent PowerShell cmdlets out of whole cloth - cmdlets that don’t exist, yet would accomplish the exact task the prompter asked it to do?
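      To make the “probable vs. right” point concrete, here’s a toy generator I sketched (my own illustration, not how GPT works internally): recombine common verb and noun fragments and you get names that look like cmdlets whether or not they actually exist.

      ```python
      # Toy illustration (mine): plausible-looking names fall out of recombining
      # high-frequency fragments. Some results are real PowerShell cmdlets, others
      # are pure fabrication - the generator can't tell the difference, and neither
      # can a model that only optimizes for "probable".
      import random

      verbs = ["Get", "Set", "New", "Remove"]
      nouns = ["ADUser", "Mailbox", "NetRoute", "ItemProperty"]
      suffixes = ["", "Statistics", "Permission"]

      random.seed(7)
      for _ in range(5):
          print(random.choice(verbs) + "-" + random.choice(nouns)
                + random.choice(suffixes))
      ```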

      The last line is just a bit strange, honestly. The biggest users of AI are creative minds, which is why it’s important that AI models remain open source, so all creative minds can use them.

      The biggest users of AI are techbros who think that spending half an hour crafting a prompt to get Stable Diffusion to spit out the right blend of artists’ labor is anywhere near equivalent to the literal millions of collective man-hours artists have spent honing their skills to produce the content that AI companies took without consent or attribution and ran through a woodchipper. Oh, and corporations trying to use AI to replace artists, writers, call center employees, tech support agents…

      Frankly, I’m absolutely flabbergasted that the popular sentiment on Lemmy seems to be so heavily in favor of defending large corporations taking data produced en masse by individuals, without even the most cursory attribution (to say nothing of consent or compensation), and using it for the companies’ own profit. It’s no different morally or ethically than Meta hoovering up all of our personal data and reselling it to advertisers.

      • @ClamDrinker · 0 points · 1 year ago

        You’re shifting the goalposts. You wanted an AI that can learn while it’s being used, and now you’re unhappy that one existed which did so, in a primitive form. If you want a general artificial intelligence that can also understand the words it says, we are still decades away. For now it can only work off patterns, for which the training data needs to be curated. And as explained previously, it’s not copyright infringement to train models on published works. You are simply denying that fact because you don’t want it to be true, but it is - and that’s why your sentiment isn’t shared outside of the anti-AI circles you’re part of.

        The biggest users of AI are techbros who think that spending half an hour crafting a prompt to get Stable Diffusion to spit out the right blend of artists’ labor is anywhere near equivalent to the literal millions of collective man-hours artists have spent honing their skills to produce the content that AI companies took without consent or attribution and ran through a woodchipper. Oh, and corporations trying to use AI to replace artists, writers, call center employees, tech support agents…

        So because you don’t know any creative people who use the technology ethically, they don’t exist? Good to hear you’re sticking up for the little guy who isn’t making headlines or being provocative. I don’t necessarily see these as ethical uses either, but it would be incredibly disingenuous to insinuate they are the only and primary ways to use AI - they are not, and your ignorance is showing if you actually believe so.

        Frankly, I’m absolutely flabbergasted that the popular sentiment on Lemmy seems to be so heavily in favor of defending large corporations taking data produced en masse by individuals, without even the most cursory attribution (to say nothing of consent or compensation), and using it for the companies’ own profit. It’s no different morally or ethically than Meta hoovering up all of our personal data and reselling it to advertisers.

        I’m sorry, but you realize that this doesn’t make any sense, right? Large corporations are the ones with enough information and/or money at their disposal to train their own AIs without relying on published works. Should any kind of blockade be created to stop people from training AI models on public works, you would effectively be taking AI away from the masses in the form of open-source models, not from those corporations. So if anything, it’s you who is arguing for large corporations to have a monopoly on AI technology as it currently stands.

        Don’t think I actually like companies like OpenAI or Meta; it’s why I’ve been arguing about AI models in general, not their specific usage of the technology (as that is a whole different can of worms).

        • @[email protected]
          link
          fedilink
          English
          -11 year ago

          I’m not shifting the goalposts–I have been consistent in my position that AI does not truly “learn” in the way that humans do, and is incapable of the comprehension required for actual human creativity. Tay spouting racist rhetoric because that’s what was put into it supports that position, if anything; if it were capable of comprehending the language it was being fed, it wouldn’t have done that.

          You have stated that it’s not infringing on copyright to train a model on published works, yes. I wholeheartedly disagree because, as I have previously stated, AI models as they currently exist cannot produce new, derivative works based on their training data; they can only reconstitute that data in various combinations. This is important because one of the requirements for copyright protection, per the US Copyright Office, is that it is an independent creation, which “means that the author created the work without copying from other works.” An AI’s inability to create its own work without copying from other works means that it cannot produce copyrightable material.

          As a result, if you feed an infringing dataset into an AI’s training process, the resulting output is also infringing, because it is not, and cannot be, transformative to the level required to meet the minimal creativity threshold needed for copyright protection. At best, you can argue that the infringement in an AI’s output is acceptable under the de minimis doctrine (i.e. that the amount of the copyrighted work contained in an infringing work is so trivial as to not warrant protection). However, my belief is that if a hypothetical composite work took all of its source material from 100 different copyrighted sources, it wouldn’t qualify for de minimis protection, because the composite work is 100% infringing even though each individual source contributed only 1% of the total.
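          To put rough numbers on that (mine, purely illustrative):

          ```python
          # Back-of-the-envelope version of the de minimis point: each source is
          # individually "trivial", yet the composite is built entirely from
          # protected material.
          sources = 100
          share_per_source = 1 / sources               # 1% copied from each work
          copied_total = sources * share_per_source
          print(f"{copied_total:.0%} of the composite is copied material")  # -> 100%
          ```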

          To summarize, my line of thinking is as follows:

          • The specific output of an AI does not in and of itself qualify for copyright protection, because no human mind was involved in creating it except the one that gave the AI the prompt; and that involvement is not significant enough to meet the minimal creativity standard required for copyright protection. This is the position of the US Copyright Office (page 7, The Human Authorship Requirement):

          The U.S. Copyright Office will register an original work of authorship, provided that the work was created by a human being. The copyright law only protects “the fruits of intellectual labor” that “are founded in the creative powers of the mind.” Trade-Mark Cases, 100 U.S. 82, 94 (1879). Because copyright law is limited to “original intellectual conceptions of the author,” the Office will refuse to register a claim if it determines that a human being did not create the work.

          • Since the specific output of an AI model lacks any copyright protection, that output does not qualify for related defenses such as fair use, because these defenses require significant transformative effort in the work in question. If something cannot be transformative, novel, or new enough to qualify for copyright protection in the first place, it’s impossible for it to be transformative enough for a fair use defense. It also cannot qualify for copyright protection as a compilation or derivative work, as both must contain copyrightable subject matter–since the AI output is not copyrightable, it cannot be claimed as either a compilation or a derivative.

          • As a result, if the training dataset input to an AI model is infringing, then the output of that AI model is also infringing, since the output does not independently qualify for copyright protection, nor can it leverage related defenses.

          I’m sorry, but you realize that this doesn’t make any sense, right? Large corporations are the ones with enough information and/or money at their disposal to train their own AIs without relying on published works. Should any kind of blockade be created to stop people from training AI models on public works, you would effectively be taking AI away from the masses in the form of open-source models, not from those corporations. So if anything, it’s you who is arguing for large corporations to have a monopoly on AI technology as it currently stands.

          Large corporations and open-source AI models are scraping our IP without consent because they think they can get away with it, and because it’s easier to steal it than to properly obtain consent from the people whose content they are using. And to be clear, I don’t give a shit if preventing AI from stealing copyrighted content kills large open-source AI tools. If the only way they can be useful is by committing mass infringement, then they don’t deserve to exist. They can use their own internally developed datasets, use datasets that draw only from the public domain, obtain consent (which may or may not include royalties) from creators, or wither on the vine. That applies to both open-source and commercial AI technology.

          Finally, I want to make it 100% clear that I have no issue with AI models that do not use copyrighted material in their training datasets. My employer introduced an AI chatbot trained entirely on our internal and public knowledgebases, and I’m perfectly fine with that morally, ethically, and legally. (Personally, I think it’s a little useless, since the last time I used it the damn thing confidently gave me a false answer with fake links to nonexistent KB articles, but that’s beside the point.) My entire issue with AI is centered on the unlicensed use of copyrighted material by AI models without the creators’ consent, attribution, or compensation.
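          (For what it’s worth, the grounding check I’d have hoped for is simple to sketch. Everything below is hypothetical - I obviously don’t have our chatbot’s internals, and the URLs and titles are made up: refuse to answer when a cited link isn’t actually in the knowledgebase index.)

          ```python
          # Hypothetical sketch: validate that every KB link in a draft answer
          # exists in the knowledgebase index before showing it to the user.
          import re

          KB_INDEX = {  # assumption: internal knowledgebase keyed by URL
              "https://kb.example.com/kb-101": "How to reset your password",
              "https://kb.example.com/kb-204": "VPN setup for remote staff",
          }

          def fabricated_citations(answer: str) -> list[str]:
              """Return cited KB links that don't exist in the index."""
              cited = re.findall(r"https://kb\.example\.com/kb-\d+", answer)
              return [url for url in cited if url not in KB_INDEX]

          draft = ("Reset your password via https://kb.example.com/kb-101, "
                   "or see https://kb.example.com/kb-999 for details.")
          bad = fabricated_citations(draft)
          if bad:
              print("Refusing to answer; fabricated citations:", bad)
          ```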