Which of the following sounds more reasonable?

  • I shouldn’t have to pay for the content that I use to tune my LLM and its algorithm.

  • We shouldn’t have to pay for the content we use to train and teach an AI.

By calling it AI, corporations can advocate for a position that’s blatantly pro-corporate and anti-writer/artist, and trick people into supporting it under the guise of technological development.

  • @[email protected]
    25 points · 1 year ago

    I’m not sure what you’re trying to say here; LLMs are absolutely under the umbrella of AI, they are 100% a form of AI. They are not AGI/STRONG AI, but they are absolutely a form of AI. There’s no “reframing” necessary.

    No matter how you frame it, though, there’s always going to be a battle between the entities that want to use a large amount of data for profit (corporations) and the people who produce said content.

    • @Silinde
      8 points · 1 year ago

      True, and this is the annoying thing about people who are unqualified to talk about AI giving their opinions online. People not involved in the industry hear “AI” and expect HAL-9000 or Ava from Ex Machina, rather than the software the weather service uses to predict whether it will rain tomorrow or the models your doctor uses to help determine your risk of heart disease.

      This is compounded further when someone makes a video simplifying what an LLM is and mentions that the latest models use one, which leads to a chorus of “bUt iT’S jUsT aN Llm BrO iTs nOt AI” and “ItS jUsT a LOaD oF DaTa aND aLGorItHMs, tHaTs NoT AI”. A little bit of knowledge is a dangerous thing.

      • @jumperalex
        3 points · 1 year ago

        Or that people are only exposed to trivial/childish, publicly available examples.

      • @assassin_aragornOP
        0 points · 1 year ago

        This is exactly what I mean. Most people hear AI and envision something much, much more complex. It’s easier to argue that HAL-9000 is like a human and should therefore be allowed to freely read books like a human than it is to make the same argument for a sophisticated LLM. That’s more so where I’m coming from. And politicians are stupid enough to pass laws that envision these systems as HAL-9000.

    • Peruvian_Skies
      7 points · edited · 1 year ago

      On the flip side, the same battle is also fought between giant corporations that amass intellectual property and the people who want to actually use that intellectual property instead of letting it sit in some patent troll’s hoard until a lawsuit opportunity presents itself. Seeing as there are quite a few reasonably decent open-source LLMs out there, like Koala and Alpaca, that also train on freely available data from the Internet, I’m actually rooting for the AI companies in this case, in the hope of establishing a disruptive precedent.

    • @assassin_aragornOP
      3 points · 1 year ago

      Right, where I’m coming from is that I don’t think the personhood arguments for why content should be free for it hold any water. Whatever the case regarding its intelligence, it isn’t comparable to a human for the purposes of copyright law.