You know how Google’s new feature, AI Overviews, is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on their pizza to make sure the cheese wouldn’t slide off (pssst… please don’t do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of AI large language models (LLMs), the technology that drives AI Overviews, and this feature “is still an unsolved problem.”

  • @givesomefucks · 7 months ago

    The issue is, it’s much harder to figure out when it’s bullshit.

    Google AI suggested you put glue on your pizza because a troll said it on Reddit once…

    Not all scientific literature is perfect. Which is one of the many factors that will still make my plan expensive and time-consuming.

    You can’t throw a toddler in a library and expect them to come out knowing everything in all the books.

    AI needs that guided teaching too.

    • @RidcullyTheBrown · 7 months ago

      Google AI suggested you put glue on your pizza because a troll said it on Reddit once…

      Genuine question: do you know that’s what happened? This type of implementation can suggest things like this without any of it having to appear in the training data in that form.
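
      Here’s a rough sketch of what I mean (toy numbers and an invented vocabulary, nothing to do with Google’s actual model): a language model picks each next word by sampling from a probability distribution over likely continuations, so it can stitch together a fluent sentence like “add glue to the pizza” even if that exact sentence never appears anywhere in its training data.

      ```python
      import random

      # Toy next-word distributions, invented purely for illustration.
      # A real LLM learns probabilities like these from vast amounts of text.
      next_word_probs = {
          ("add",): {"cheese": 0.5, "sauce": 0.3, "glue": 0.2},
          ("add", "glue"): {"to": 0.9, "and": 0.1},
          ("add", "glue", "to"): {"the": 1.0},
          ("add", "glue", "to", "the"): {"pizza": 0.7, "sauce": 0.3},
      }

      def sample_continuation(prefix, max_steps=4):
          """Extend the prefix one sampled word at a time."""
          words = list(prefix)
          for _ in range(max_steps):
              dist = next_word_probs.get(tuple(words))
              if dist is None:  # no learned continuation: stop generating
                  break
              choices, weights = zip(*dist.items())
              words.append(random.choices(choices, weights=weights)[0])
          return " ".join(words)

      print(sample_continuation(("add",)))
      # Can print "add glue to the pizza" -- fluent, confident, and wrong,
      # without that exact string ever existing in the training data.
      ```

      The point is that generation is compositional sampling, not lookup of stored sentences, so a bad answer doesn’t require a matching bad sentence in the training set.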