- cross-posted to:
- fuck_ai
You know how Google’s new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to keep the cheese from sliding off (pssst…please don’t do this).
Well, according to an interview with Google CEO Sundar Pichai at The Verge, published earlier this week just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of the AI large language models (LLMs) that drive AI Overviews, and this feature “is still an unsolved problem.”
Journalists are also in a panic about LLMs; they feel their jobs are threatened by the technology’s potential. This is why (in my opinion) we’re seeing so many news stories focusing on any imperfection that can be found in LLMs.
They’re not threatened by its potential. They, like artists, are threatened by management who think that LLMs are good enough today to replace part or all of their staff.
There was a story from earlier this year about a company that owns 12-15 different gaming news outlets and fired about 80% of its writers and journalists, replacing the entire writing staff at the majority of the outlets with LLMs and leaving a skeleton crew at the rest.
What you’re seeing isn’t some slant trying to discredit LLMs. It’s the result of management using them wrong.
What I mean is that journalists feel threatened by it in some way (whether I use the word “potential” here is mostly irrelevant).
In the end, this is just a theory, but it makes sense to me.
I absolutely agree that management has greatly misunderstood how LLMs should be used. They should be used as a tool but treated like an intern who speaks confidently without citing any sources: all of their statements and work should be double-checked.