OpenAI has publicly responded to a copyright lawsuit by The New York Times, calling the case “without merit” and saying it still hoped for a partnership with the media outlet.

In a blog post, OpenAI said the Times “is not telling the full story.” It took particular issue with claims that its ChatGPT AI tool reproduced Times stories verbatim, arguing that the Times had used manipulated prompts, often including lengthy excerpts of its own articles, to get the model to regurgitate them. “Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,” OpenAI said.

OpenAI claims it’s attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit. It said the verbatim examples “appear to be from years-old articles that have proliferated on multiple third-party websites.” The company did admit that it took down a ChatGPT feature, called Browse, that unintentionally reproduced content.

  • @abhibeckert
    1 year ago
    > it can’t tell you about what’s going on in the present (except by doing a web search, which, as I understand it, is not part of this lawsuit)

    It’s absolutely part of the lawsuit. NYT just isn’t emphasising it because they know OpenAI is perfectly within their rights to do web searches, and bringing it up would weaken NYT’s case.

    ChatGPT with web search is really good at telling you what’s going on right now. It won’t summarise NYT articles, because NYT has blocked it with robots.txt (see the sketch at the end of this comment), but it will summarise coverage of the same facts from other news organisations.

    The fundamental issue is that news and facts are not protected by copyright… and organisations like the NYT take advantage of that all the time by immediately plagiarising, rewriting, and republishing stories broken by thousands of other news organisations. This really is the pot calling the kettle black.

    When NYT loses this case, and I think they probably will, there’s a good chance OpenAI will stop checking robots.txt files.
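
    For anyone curious what that robots.txt blocking looks like in practice, here’s a minimal sketch using Python’s standard urllib.robotparser. The user-agent names are OpenAI’s published crawler identifiers (GPTBot for training, ChatGPT-User for browsing); which of them nytimes.com actually disallows at any given moment is an assumption you’d have to verify by fetching the file yourself, and the article URL is just a hypothetical example.

    ```python
    # Minimal sketch: ask a publisher's robots.txt whether a given crawler may fetch a page.
    # Assumes OpenAI's published crawler names (GPTBot, ChatGPT-User); the actual rules in
    # nytimes.com/robots.txt can change at any time, so treat this as a point-in-time check.
    from urllib import robotparser

    ROBOTS_URL = "https://www.nytimes.com/robots.txt"
    ARTICLE_URL = "https://www.nytimes.com/section/technology"  # hypothetical example page

    rp = robotparser.RobotFileParser()
    rp.set_url(ROBOTS_URL)
    rp.read()  # downloads and parses the robots.txt rules

    for agent in ("GPTBot", "ChatGPT-User", "Googlebot"):
        allowed = rp.can_fetch(agent, ARTICLE_URL)
        print(f"{agent}: {'allowed' if allowed else 'blocked'} for {ARTICLE_URL}")
    ```

    Worth remembering: robots.txt is a voluntary convention, not an enforcement mechanism, so if OpenAI ever stopped checking those files nothing technical on the publisher’s side would prevent the crawling.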