@phoneymouse to People [email protected] • 1 month ago

Why is no one talking about how unproductive it is to have to verify every "hallucination" ChatGPT gives you?
minus-square@[email protected]linkfedilink5•1 month ago referencing its data sources Have you actually checked whether those sources exist yourself? It’s been quite a while since I’ve used GPT, and I would be positively surprised if they’ve managed to prevent its generation of nonexistent citations.
@UnderpantsWeevil • 1 month ago

> Have you actually checked whether those sources exist yourself?

When I'm curious enough, yes. While you can find plenty of "AI lied to me" examples online, they're much harder to fish for in the application itself. 99 times out of 100, the references are good. But those cases aren't fun to dunk on.