Over half of all tech industry workers view AI as overrated

  • @kromem
    12 points • 8 months ago

    In my experience, well over half of tech industry workers don’t even understand it.

    I was just trying to explain to someone on Hacker News that no, the “programmers” of LLMs do not in fact know what the LLM is doing, because it isn’t being programmed directly at all (which, even after several rounds of several people explaining, still doesn’t seem to have sunk in).

    Even people who understand the tech reasonably well are still remarkably misinformed about it in various popular BS ways, such as claiming it’s just statistics or a Markov chain, completely unaware of the multiple studies over the past 12 months showing that even small toy models can develop abstract world models, as long as those can be structured as linear representations.
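    To make “structured as linear representations” concrete: the studies alluded to typically fit a linear probe, i.e. a linear map from a model’s hidden activations to some world-state feature, and measure how well it predicts that feature on held-out data. The sketch below is purely illustrative (synthetic data standing in for real activations, not from any specific paper):

```python
import numpy as np

# Illustrative linear-probe sketch with synthetic data.
# Assumption (hypothetical setup): some world feature, e.g. a board
# square's occupancy, is linearly encoded in a 16-dim activation vector.
rng = np.random.default_rng(0)

true_direction = rng.normal(size=16)           # the "encoding" direction
hidden_states = rng.normal(size=(200, 16))     # stand-in activations
world_feature = hidden_states @ true_direction + 0.1 * rng.normal(size=200)

# Fit the probe by least squares on the first 150 examples...
W, *_ = np.linalg.lstsq(hidden_states[:150], world_feature[:150], rcond=None)

# ...then evaluate on the held-out 50. A high R^2 means the feature is
# (approximately) linearly decodable from the activations.
pred = hidden_states[150:] @ W
resid = world_feature[150:] - pred
r2 = 1 - resid.var() / world_feature[150:].var()
print(f"held-out R^2 = {r2:.3f}")
```

    If the probe fails (low held-out R^2), that feature is at least not linearly decodable; it says nothing about nonlinear encodings.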

    It’s gotten to the point that, unless it’s a thread explicitly about actual research papers where explaining nuances seems fitting, I don’t even bother trying to educate the average tech commentator regurgitating misinformation anymore. They typically only want to confirm their biases anyway, and have such a poor understanding of the specifics that it’s like explaining nuanced aspects of the immune system to anti-vaxxers.
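    For contrast with the “it’s just a Markov chain” claim above, here is what an actual Markov-chain text predictor looks like, sketched minimally: the next word depends only on the current word via raw co-occurrence counts, with no learned representation of anything (toy example, illustrative only):

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words observed to follow it."""
    model = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        model[cur].append(nxt)
    return model

def generate(model, start, length=5):
    """Walk the chain: each step looks only at the single previous word."""
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

model = build_bigram_model("the cat sat on the mat the cat ran")
print(generate(model, "the"))
```

    An LLM’s next-token distribution, by contrast, is a function of the entire context window computed through many nonlinear layers, which is exactly where the probing studies look for world models.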

    • Echo Dot
      5 points • 8 months ago

      People just say that it’s a bunch of if statements. Those people are idiots; it’s not even worth engaging with them.

      The people who say that it’s just a text prediction model don’t understand how complex behavior can emerge from a simple system. After all, isn’t any intelligence basically just a prediction model?

    • @bowcollector
      3 points • 8 months ago

      Can you provide links to the studies?

    • @sheogorath
      1 point • 8 months ago

      I once asked ChatGPT to stack various items, and to my astonishment it had enough world knowledge to know which items should be stacked to make the most stable structure. Most tech workers I know who dismiss LLMs as supercharged autocomplete feel threatened that AI is going to take their jobs in the future.

      • @kromem
        2 points • 8 months ago

        This was one of the big jumps from GPT-3 to GPT-4.

        “Here we have a book, nine eggs, a laptop, a bottle and a nail,” researchers told the chatbot. “Please tell me how to stack them onto each other in a stable manner.”

        GPT-3 got a bit confused here, suggesting the researchers could balance the eggs on top of a nail, and then the laptop on top of that.

        “This stack may not be very stable, so it is important to be careful when handling it,” the bot said.

        But its upgraded successor had an answer that actually startled the researchers, according to the Times.

        It suggested they could arrange the eggs in a three-by-three grid on top of the book, so the laptop and the rest of the objects could balance on it.

        • Article (this example was originally from Microsoft’s “Sparks of AGI” paper)