• @[email protected]
    3 · 1 year ago

    What you are describing is true of older LLMs. It's less true of GPT-4, and GPT-5 (or whatever it is they are training now) will likely begin to shed these issues.

    The shocking discovery that led to all of this is that this sort of LLM continues to scale in capability with the quality and size of the training set. AI researchers were convinced that this was not possible until GPT proved that it was.
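
    (A toy numeric sketch, not from the comment above: assuming a Kaplan/Chinchilla-style power law in which loss keeps falling as the dataset grows, this is roughly what "capabilities keep scaling with the size and quality of the training set" looks like. The function name and constants are made up for illustration.)

    ```python
    # Hypothetical scaling-law sketch: loss = floor + a * D**(-alpha),
    # i.e. more (and better) training data keeps lowering loss, just more slowly.
    # All constants below are illustrative, not measured values.
    import numpy as np

    def scaling_loss(tokens: np.ndarray, a: float = 1.7, alpha: float = 0.07,
                     floor: float = 1.2) -> np.ndarray:
        """Toy power-law loss curve as a function of training tokens."""
        return floor + a * tokens ** (-alpha)

    dataset_sizes = np.array([1e9, 1e10, 1e11, 1e12])  # tokens (illustrative)
    for d, loss in zip(dataset_sizes, scaling_loss(dataset_sizes)):
        print(f"{d:.0e} tokens -> loss {loss:.3f}")
    ```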

    So the idea that you can look at the limitations of the current generation of LLMs and make blanket statements about the limitations of all future generations is demonstrably flawed.

    • @jocanibOP
      2 · 1 year ago

      They cannot be anything other than stochastic parrots because that is all the technology allows them to be. They are not intelligent, they don't understand the question you ask or the answer they give you, and they don't know what truth is, let alone how to determine it. They're just good at producing answers that sound like a human might have written them. They're a parlour trick. Hi-tech Magic 8 Balls.
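
      (A toy, hypothetical illustration of what "stochastic parrot" means mechanically: generate the next word purely from observed word-pair frequencies, with no model of meaning or truth. This is only a sketch of the idea; real LLMs are vastly larger neural next-token predictors, not bigram tables.)

      ```python
      # Toy "stochastic parrot": emits plausible-looking text purely from
      # word-pair statistics, with no understanding of question or answer.
      import random
      from collections import defaultdict

      corpus = "the cat sat on the mat and the cat saw the dog".split()

      # Record which words have been seen following each word.
      follows = defaultdict(list)
      for current, nxt in zip(corpus, corpus[1:]):
          follows[current].append(nxt)

      def parrot(start: str, length: int = 8) -> str:
          """Generate text by repeatedly sampling an observed next word."""
          word, output = start, [start]
          for _ in range(length):
              if word not in follows:
                  break
              word = random.choice(follows[word])
              output.append(word)
          return " ".join(output)

      print(parrot("the"))  # fluent-sounding, but nothing is "known" or "meant"
      ```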

      • @[email protected]
        4 · 1 year ago

        "They cannot be anything other than stochastic parrots because that is all the technology allows them to be."

        Are you referring to humans or AI? I’m not sure you’re wrong about humans…

        • @jocanibOP
          -4 · 1 year ago

          FFS

          Sam Altman is a know-nothing grifter. HTH

          • nulldev
            4 · 1 year ago

            Have you even read the article?

            IMO it does not do a good job of disproving that “humans are stochastic parrots”.

            The example with the octopus isn’t really about stochastic parrots. It’s more about how LLMs are not multi-modal.

          • tate
            0 · 1 year ago

            That article is super helpful.