Microsoft, OpenAI sued for copyright infringement by nonfiction book authors in class action claim

The new copyright infringement lawsuit against Microsoft and OpenAI comes a week after The New York Times filed a similar complaint in New York.

  • @CosmoNova · -2 points · 11 months ago

    I hear those kinds of arguments a lot, though usually from the exact same people who claimed nobody would be convicted of fraud for NFT and crypto scams when those were at their peak. The days of the wild west internet are long over.

    Theft in the digital space is a very real thing in the eyes of the law, especially when it comes to copyright infringement. It’s wild to me how many people seem to think Microsoft will just get a freebie here because they helped pioneer a new technology for personal gain. Copyright holders have a very real case here, and I’d argue even a strong one.

    Even using user data (that they legally own) for machine learning could get them into trouble in some parts of the developed world, because users 10 years ago couldn’t anticipate it would be used that way and so never gave full consent for it.

      • @[email protected] · 2 points · 11 months ago

        Personally, I think public info is fair game - consent or not, it’s public. They’re not sharing the source material, and the goal was never plagiarism. There was a period where it became coherent enough to get very close to plagiarism, but it’s been moving past that phase very quickly.

        Microsoft, especially with how they scraped private GitHub repos (and the things I’m sure Google and Facebook just haven’t gotten caught doing with private data), is way over the line for me. But I see that more as being bad stewards of private data: they shouldn’t be looking at it, their AI shouldn’t be looking at it, the public shouldn’t be able to see it, and they probably failed on all counts.

        Granted, I think copyright is a bullshit system. Normal people don’t get any protection, because you need to pay to play. Being unable to defend it means you lose it, and in most situations you’re going to spend way more on legal costs than you could possibly get back.

        I also think the most important thing is that this tech is spread everywhere, because we can’t have one group in charge of the miracle technology… It’s too powerful.

        Google has all the data they could need, they’ve bullied the web into submission… They don’t have to worry about copyright, they control the largest ad network and dominate search (at least for now).

        It sucks that you can take any artist’s visual work and fine-tune a network to churn out endless rough facsimiles in a few days. I genuinely get how violating that must feel.
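
        To make “fine-tune a network in a few days” concrete, here is a minimal, hypothetical sketch of the transfer-learning loop involved. Real style cloning typically fine-tunes a generative model (for example, a LoRA adapter on a diffusion model) rather than a classifier, but the basic shape is the same: start from pretrained weights, unfreeze a small set of parameters, and train briefly on a small folder of target images. The folder name and hyperparameters below are made up for illustration.

        ```python
        # Minimal sketch: fine-tune a pretrained vision model on a small folder of images.
        # "artist_images/" and all hyperparameters are hypothetical; real style cloning
        # would fine-tune a generative model instead, but the loop has the same shape.
        import torch
        from torch import nn
        from torch.utils.data import DataLoader
        from torchvision import datasets, models, transforms

        device = "cuda" if torch.cuda.is_available() else "cpu"

        # A handful of images organised as artist_images/<class_name>/*.jpg
        tfm = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])
        data = datasets.ImageFolder("artist_images", transform=tfm)
        loader = DataLoader(data, batch_size=8, shuffle=True)

        # Start from weights trained on millions of images; retrain only a small head.
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for p in model.parameters():
            p.requires_grad = False
        model.fc = nn.Linear(model.fc.in_features, len(data.classes))
        model.to(device)

        opt = torch.optim.AdamW(model.fc.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        for epoch in range(5):  # minutes on a single GPU, not days
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                opt.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                opt.step()
        ```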

        But they’re going to be screwed when the corporate work dries up in favor of a much cheaper option, and they’re going to have to deal with the flood of AI work… Copyright won’t help them; it’s too late for it to even slow things down.

        If companies did something wrong, have it out in court. My concern is that they’re going to pass laws on this that claim to be for the artists but effectively gatekeep AI to the tech giants.

    • @General_Effort · 3 points · 11 months ago

      “Even using user data (that they legally own) for machine learning could get them into trouble in some parts of the developed world, because users 10 years ago couldn’t anticipate it would be used that way and so never gave full consent for it.”

      Where, for example?

      • @CosmoNova · 1 point · 11 months ago

        The European Union, for example.

        • @General_Effort · 2 points · 11 months ago

          That’s not right. It is explicitly legal in the EU.

          • @CosmoNova · 0 points · 11 months ago

            That is not how the EU works. Member states can get together to tariff and sanction behavior, but just because the EU generally allows something doesn’t mean all member states have to abide; different constitutions and all. Besides, I’d like to know where exactly any EU resolution explicitly allows corporations to throw any data they have at any technology, or at LLMs specifically, when nobody ever gave consent to that. Corporations have to be quite specific about how they process your data, and broadly saying “machine learning stuff” 10 years ago isn’t really watertight.

            • @General_Effort · 1 point · 11 months ago

              No. EU legislation often has so-called opening clauses that allow member states to tune “EU laws” to their needs, but that’s not the default behavior.

              You seem to have the GDPR in mind. It regulates personal data, meaning data that can be tied to a person. If the data cannot be tied to a person, the GDPR has no objections.