Because in a lot of applications you can work around hallucinations.
getting sources for something
as a jumping-off point for a topic
to get a second opinion
to help argue for or against your position on a topic
to get information in a specific format
In all these applications you can work around hallucinations because either the task is non-factual, or the output is verifiable while you're prompting, or you'll be able to verify it in whatever task comes next.
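The "get information in a specific format" case is the clearest example of verifying while prompting: since you supply the source text yourself, you can mechanically check that nothing in the output was invented. A rough sketch of what I mean, where llm_complete is just a stand-in for whatever model call you actually use:

    import json

    def extract_emails(source_text, llm_complete):
        # llm_complete is a placeholder for your real completion call
        prompt = ("List every email address that appears in the text below, "
                  "as a JSON array of strings.\n\nText:\n" + source_text)
        raw = llm_complete(prompt)
        emails = json.loads(raw)  # hallucinated formatting blows up right here
        # hallucinated content gets dropped here: keep only addresses that are
        # literally present in the text we supplied
        return [e for e in emails if e in source_text]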
Just because it makes shit up sometimes doesn't mean it's useless. It's like an idiot friend: you can still ask it for opinions or something and it will definitely start you off somewhere helpful.
All LLMs are text completion engines, no matter what fancy bells they tack on.
If your task is some kind of text completion or reworking of text already provided in the prompt context, LLMs perform wonderfully.
For everything else, you're wading into territory you could probably handle more easily using other methods.
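Even the chat interfaces are basically a prompt template wrapped around plain completion. Something like this, where complete is whatever raw text-completion call you have and the exact transcript format is just a guess (it varies by model):

    def chat_as_completion(history, user_msg, complete):
        # "chat" is just transcript-shaped text the model is asked to continue
        transcript = "".join(f"{role}: {text}\n" for role, text in history)
        prompt = transcript + f"User: {user_msg}\nAssistant:"
        return complete(prompt)

    # e.g. chat_as_completion([("User", "hi"), ("Assistant", "hello")],
    #                         "summarize the text above", complete=my_model)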
Also just searching the web in general.
Google is useless for searching the web today.
so, basically, even a broken clock is right twice a day?
No, maybe more like, even a functional clock is wrong every 0.8 days.
https://superuser.com/questions/759730/how-much-clock-drift-is-considered-normal-for-a-non-networked-windows-7-pc
The frequency is probably way higher for most LLMs though lol
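(Guessing at the arithmetic behind the 0.8 days: assume a drift of about 1.25 seconds per day and call the clock "wrong" once it's a full second off. Both numbers are assumptions, just to show where a figure like that could come from:)

    drift_seconds_per_day = 1.25   # assumed drift rate
    wrong_after_seconds = 1.0      # "wrong" = off by a full second
    print(wrong_after_seconds / drift_seconds_per_day)  # 0.8 days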