The New York Times is suing OpenAI and Microsoft for copyright infringement, claiming the two companies built their AI models by “copying and using millions” of the publication’s articles and now “directly compete” with its content as a result.

As outlined in the lawsuit, the Times alleges OpenAI and Microsoft’s large language models (LLMs), which power ChatGPT and Copilot, “can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style.” This “undermine[s] and damage[s]” the Times’ relationship with readers, the outlet alleges, while also depriving it of “subscription, licensing, advertising, and affiliate revenue.”

The complaint also argues that these AI models “threaten high-quality journalism” by hurting the ability of news outlets to protect and monetize content. “Through Microsoft’s Bing Chat (recently rebranded as “Copilot”) and OpenAI’s ChatGPT, Defendants seek to free-ride on The Times’s massive investment in its journalism by using it to build substitutive products without permission or payment,” the lawsuit states.

The full text of the lawsuit can be found here.

  • ArugulaZ
    link
    fedilink
    13
    11 months ago

    What, did ChatGPT find an algorithm that writes irresponsible “both sides are equally as bad” news articles faster and better than the New York Times? I can see why that’d rattle their cage. You know, the bird cage lined with copies of the New York Times.

  • @buzz
    link
    4
    edit-2
    8 months ago

    Removed by mod

  • @[email protected]
    link
    fedilink
    2
    11 months ago

    If you use any form of copyrighted work to train your model, that model should not be used for any commercial purpose. That should be the rule for this.

  • @[email protected]
    link
    fedilink
    -14
    11 months ago

    If the garbage that comes out of ChatGPT can be considered legitimate competition, then the New York Times sucks at journalism.

    • Hillock
      link
      fedilink
      23
      11 months ago

      It’s not legitimate competition, that’s the entire point. The claim is that AI models rely on stealing content and changing it slightly or not at all. If a “regular” journalist did this, they would get into trouble. Just because the entity doing it is an AI company doesn’t make this business model legitimate.

      A few years ago there was a big plagiarism scandal on IGN because one of their “journalists” mostly took reviews of other people, changed a few words, and published it. Obviously that’s not fine.