• @3volver · 6 months ago

    You’re referring to something that is constantly changing and improving. In the long term, LLMs are going to be even better than they are now. It’s ridiculous to think they won’t be able to replace any of the workers who were fired. LLMs are going to allow one person to do the job of multiple people. Will they replace everyone? No. But even if they only let one person do the job of two, that’s 50% of that workforce unemployed. And that’s not even mentioning how good robotics has gotten over the past 10 years.

    • @[email protected]
      link
      fedilink
      English
      226 months ago

      You must have one person constantly checking for hallucinations in everything that is generated: how is that going to be faster?

      • @[email protected]
        link
        fedilink
        English
        -4
        edit-2
        6 months ago

        Sure, you sort of need that at the moment (not actually everything, but I get your hyperbole), but you seem to be working under the assumption that LLMs are not going to improve beyond what they are now. The tech is still very much in its infancy, and as it matures the amount of checking needed will shrink, until it only takes a few people to manage LLMs that handle the tasks of a much larger workforce.

        • @SupraMario · 6 months ago

          It’s hard to improve when the data going in is human-generated and the data coming out can’t be error-checked against anything except that same input. It’s like trying to solve a math problem with two calculators that both think 2 + 2 = 6 because the data they were given says it’s true.

        • @[email protected]
          link
          fedilink
          English
          26 months ago

          “(not actually everything, but I get your hyperbole)”

          How is it hyperbole? All artificial neural networks have “hallucinations”, no matter their size. What’s your magic way of knowing when that happens?

        • @[email protected]
          link
          fedilink
          English
          06 months ago

          LLMs are now being trained on data generated by other LLMs. If you look at the “writing prompt” stuff, 90% of it is machine-generated (or so bad that I assume it’s machine-generated), and that’s the data that is being bought right now.
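
          To make the feedback-loop point concrete, here’s a toy sketch (my own illustration, not an LLM: just a Gaussian estimator refit on its own samples) of what happens when each generation is trained only on the previous generation’s output:

          ```python
          import numpy as np

          # Toy "model collapse" demo: each generation is fit only on samples
          # drawn from the previous generation's fitted model, with no fresh
          # human data added back in.
          rng = np.random.default_rng(0)
          n = 50                               # samples per generation
          data = rng.normal(0.0, 1.0, n)       # generation 0: "human" data

          for gen in range(30):
              mu, sigma = data.mean(), data.std()    # fit a Gaussian (MLE)
              print(f"gen {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")
              data = rng.normal(mu, sigma, n)        # next generation sees
                                                     # only synthetic samples
          ```

          The fitted spread shrinks in expectation every round and the mean drifts, with nothing external to pull it back: that’s the 2 + 2 = 6 calculator problem a few comments up.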

    • @MeanEYE · 6 months ago

      There is a plateau to be hit at some point. How close it is depends on who you ask: some say we’re close, others say we’re not, but it definitely exists. LLMs, like other forms of machine learning, suffer from diminishing returns on data. You can’t keep feeding them ever more data and expect the results to keep getting better. ChatGPT’s models got famous because the reward signal used in training had humans in the loop who helped curate the quality of the responses.
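
      For what it’s worth, that human curation step is RLHF: people rank pairs of model responses, and a reward model is trained to score the preferred one higher, which then steers the LLM. A minimal sketch of the pairwise loss involved (the scores below are made-up numbers, not real training code):

      ```python
      import math

      def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
          """Bradley-Terry style pairwise loss used in reward modelling:
          it is small when the human-preferred response gets the higher score."""
          margin = reward_chosen - reward_rejected
          return -math.log(1.0 / (1.0 + math.exp(-margin)))

      # Hypothetical reward-model scores for two candidate answers
      print(preference_loss(2.1, 0.3))   # ~0.15: ranking agrees with the human
      print(preference_loss(0.3, 2.1))   # ~1.95: ranking disagrees, big penalty
      ```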