Which of the following sounds more reasonable?

  • I shouldn’t have to pay for the content that I use to tune my LLM model and algorithm.

  • We shouldn’t have to pay for the content we use to train and teach an AI.

By calling it AI, the corporations are able to advocate for a position that’s blatantly pro corporate and anti writer/artist, and trick people into supporting it under the guise of a technological development.

  • @pensivepangolin · 98 points · 1 year ago

    I think it’s the same reason the CEOs of these corporations are clamoring about their own products being doomsday devices: it gives them massive power over crafting regulatory policy, thus letting them make sure it’s favorable to their business interests.

    It’s even more frustrating when you realize (and feel free to correct me if I’m wrong) that these new “AI” programs and LLMs aren’t really novel in terms of theoretical approach: the real revolution is the amount of computing power and data we can now throw at them.

    • @assassin_aragornOP · 59 points · 1 year ago

      The funniest thing I’ve seen on this is the ChatGPT CEO, Altman, talking about how he’s a bit afraid of what they’ve created and how it needs limitations – and then when the EU begins to look at regulations, he immediately rejects the concept, to the point of threatening to leave the European market. It’s incredibly transparent what they’re doing.

      Unfortunately I don’t know enough about the technology to say if the algorithms and concepts themselves are novel, but without a doubt they couldn’t exist without modern computing power capabilities.

      • FancyGUI · 20 points · edited · 1 year ago

        I can tell you for a fact that there’s nothing new going on, only the MASSIVE investment from Microsoft that allows them to train on an insane amount of data. I am no “expert” per se, but I’ve been studying and working with AI for over a decade, so feel free to judge my reply as you please.

        • @[email protected] · -2 points · 1 year ago

          nothing new going on

          Uhhh, the available models are improving by leaps and bounds every month, and there’s quite a bit of tangible advancement happening every week. Even more critically, the models that can be run on a single computer are very quickly catching up to those that, just a year or two ago, required some percentage of a hyperscaler’s datacenter to operate.

          Unless you mean to say that the current insane pace of advancement is all built off of decades of research, and that a lot of the specific recent advancements happen to be fairly small innovations on previous research, infused with a crapload of cash and hype (far more than most researchers could ever dream of).

          • FancyGUI · 9 points · edited · 1 year ago

            all built off of decades of research and a lot of the specific advancements recently happen to be fairly small innovations into previous research infused with a crapload of cash and hype

            That’s exactly what I mean! The research projects I worked on 5-7 years ago had already created LLMs like this that were as impressive as GPT. I don’t mean that the things going on now aren’t impressive, I just mean that there’s nothing actually new. That’s all. It’s similar to the previous hype wave that happened in AI with machine learning models, when Google was pushing deep learning. I really just want to point that out.

            EDIT: Typo

        • @SCB · -3 points · 1 year ago

          nothing new going on

          I can’t think of a less accurate thing to say about LLMs, other than maybe that they’re a world-ending threat.

          This is a bit like saying “The internet is a cute thing for tech nerds but will never go mainstream” in like 1995.

      • Peruvian_Skies · 11 points · 1 year ago

        The concepts themselves are some 30 years old, but storage capacity and processing speed have only recently reached a point where generative AI outperforms competing solutions.

        But regarding the regulation thing, I don’t know what was said or proposed, and this is just me playing devil’s advocate: could it be that the CEO simply doesn’t agree with the specifics of the proposed regulations, while still believing that some other, different kind of regulation should exist?

        • rainh · 15 points · 1 year ago

          Certainly could be, but probably an optimistic take. Most likely they’re just trying to do what corporations have been doing for ages, which is to weaponize government policy to prevent competition. They don’t want restrictions that will materially impact their product, they want restrictions that will materially impact startups to make it more difficult for them to intrude on the established space.

          • @jumperalex · 7 points · 1 year ago

            I think if you fed your response into ChatGPT and asked it to summarize it in two words, it would return:

            “Regulatory Capture”

      • MxM111 · -3 points · 1 year ago

        And what are they doing? As a reminder, OpenAI is a non-profit.

    • @[email protected] · 24 points · 1 year ago

      Even more frustrating when you realize, and feel free to correct me if I’m wrong, these new “AI” programs and LLMs aren’t really novel in terms of theoretical approach: the real revolution is the amount of computing power and data to throw at them.

      This is 100% true. LLMs, neural networks, Markov chains, gradient descent, etc., on down the line: none of it is particularly new. These techniques have collectively been studied academically for 30+ years. It’s only recently that we’ve been able to throw huge amounts of data, computing capacity, and tuning time at said models to achieve results unthinkable 10-ish years ago.

      There have been efficiencies, breakthroughs, tweaks, and changes over this time too, but that’s to be expected. Largely, though, it’s the sheer raw size/scale that has only recently become achievable.
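      The “decades old” point is easy to illustrate with gradient descent, the optimization workhorse named above. This is only a toy sketch (one scalar variable, a hand-written gradient), nothing like a production training loop, and the function names are illustrative:

```python
# Minimal gradient descent on a one-variable quadratic.
# Toy illustration of the decades-old technique; real neural-network
# training does the same thing over millions of parameters at once.

def grad_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step opposite the gradient to find a minimum."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = grad_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # converges toward 3.0
```

      The update rule here (step opposite the gradient, scaled by a learning rate) is the same one described in papers from decades ago; only the scale has changed.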

      • @[email protected] · 7 points · 1 year ago

        LLMs aren’t really novel in terms of theoretical approach: the real revolution is the amount of computing power and data to throw at them.

        This is 100% true. LLMs, neural networks, markov chains, gradient descent, etc. etc. on down the line is nothing particularly new. They’ve collectively been studied academically for 30+ years.

        Well, LLMs, and particularly GPT and its competitors, rely on Transformers, which are a relatively recent theoretical development in the machine learning field. Of course it’s based on prior research, and maybe there is even prior art buried in some obscure paper or 404 link, but if that’s your measure, then there is no “novel theoretical approach” to anything, ever.

        I mean I’ll grant that the available input data and compute for machine learning has increased exponentially, and that’s certainly an obvious factor in the improved output quality. But that’s not all there is to the current “AI” summer, general scientific progress played a non-minor part as well.

        In summary, I disagree on data/compute scale being the deciding factor here, it’s deep learning architecture IMHO. The former didn’t change that much over the last half decade, the latter did.

        • @pensivepangolin · 3 points · 1 year ago

          Now, as I stated in my first comment in this thread, I don’t know terribly much about the technical details behind current LLMs, and I’m basing my comments on my layman’s reading.

          Could you elaborate on what you mean about the development of deep learning architecture in recent years? I’m curious; I’m not trying to be argumentative.

          • @[email protected] · 2 points · 1 year ago

            Could you elaborate on what you mean about the development of deep learning architecture in recent years?

            Transformers. Fun fact, the T in GPT and BERT stands for “transformer”. They are a neural network architecture that was first proposed in 2017 (or 2014, depending on how you want to measure). Their key novelty is implementing an attention mechanism and a context window without recurrence, which is how most earlier NNs handled that.

            The wiki page I linked above is admittedly a bit technical; this article’s explanation might be a bit more friendly to the layperson.
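            For a rough feel of what that attention mechanism does, here is a minimal pure-Python sketch of single-head scaled dot-product attention, the core operation inside a Transformer. This is only an illustration under simplifying assumptions: real implementations use tensor libraries, learned projection matrices, and many attention heads in parallel, and the toy vectors below are made up:

```python
import math

def softmax(xs):
    """Turn raw scores into positive weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(queries, keys, values):
    """For each query, mix the value vectors, weighted by how well
    the query matches each key (scaled dot product + softmax)."""
    d_k = len(keys[0])
    out = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d_k) for k in keys]
        weights = softmax(scores)
        mixed = [sum(w * v[i] for w, v in zip(weights, values))
                 for i in range(len(values[0]))]
        out.append(mixed)
    return out

# Toy three-token sequence with 2-dimensional embeddings.
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
v = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(attention(q, k, v))  # one mixed 2-d vector
```

            The point of the design is that every token can look at every other token in one step, no matter the distance, instead of passing information along a chain the way recurrent networks do.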

            • @pensivepangolin · 1 point · 1 year ago

              Thanks for the reading material: I’m really not familiar with Transformers other than the most basic info. I’ll give it a read when I get a break from work.

      • @pensivepangolin · 5 points · 1 year ago

        Okay, I’m glad I’m not too far off the mark then (I’m not an AI expert/it’s not my field of study).

        I think this also points to, and is a great example of, another worrying trend: the consolidation of computing power in the hands of a few large companies. Without even factoring in the development of true AI (or whether that can or will happen anytime soon), the LLMs really show off the massive scale of both computational power consolidation and data harvesting by only a very few entities. I’m guessing I’m not alone here in finding that increasingly concerning, particularly since a lot of development is driving towards surveillance applications.

      • @jumperalex · 3 points · 1 year ago

        By that logic, there was nothing novel about solid-state transistors, since they just did the same thing as vacuum tubes; no innovation there, I guess. No new ideas came from finally having a way to pack cooler, less power-hungry, smaller components together.

    • @[email protected] · 8 points · 1 year ago

      LLMs are pretty novel. They were made possible by the invention of the Transformer model, which operates significantly differently compared to, say, an RNN.

    • @assassinatedbyCIA · 6 points · 1 year ago

      It also plays into the hype cycle they’re trying to create. Saying you’ve made an AI is more likely to capture the attention of the masses than saying you have an LLM. Ditto for the existential doomerism that the CEOs have. Saying your tech is so powerful that it might lead to humanity’s extinction does wonders in building hype.

      • @pensivepangolin · 4 points · 1 year ago

        Agreed. And all you really need to do is browse the headlines from even respectable news outlets to see how well it’s working: it’s just article after article uncritically parroting whatever claims these CEOs make, at face value, at least 50% of the time. It’s mind-numbing.

    • @[email protected] · 3 points · 1 year ago

      The fear mongering is pretty ridiculous.

      “AI could DESTROY HUMANITY. It’s like the ATOMIC BOMB! Look at its RAW POWER!”

      AI generates an image of cats playing canasta.

      “By God…”

    • RossoErcole · 0 points · 1 year ago

      We could say that the human brain isn’t novel in terms of biological composition: the real evolution is the size increase compared to the body.

      The fact that insects exist doesn’t make us less intelligent.

      But I agree with the sentiment of the argument.