You know how Google’s new AI Overviews feature is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to make sure the cheese doesn’t slide off (pssst…please don’t do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of AI large language models (LLMs), which is the technology that drives AI Overviews, and this feature “is still an unsolved problem.”

  • @Snapz

    So crazy that humanity has so far allowed the idea of “hallucinations”, even just the term, to be normalized and accepted to any degree in a product that’s being forced into every layer of our daily existence.

    Stop just going with it. Call hallucinations out for what they are.