@[email protected] to TechnologyEnglish • 7 months agoWe have to stop ignoring AI’s hallucination problemwww.theverge.comexternal-linkmessage-square208fedilinkarrow-up1532arrow-down129
@UnsavoryMollusk (English) • edited, 7 months ago
They are right, though. LLMs at their core are just determining what is statistically the most probable thing to spit out.
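A minimal sketch of the point being made: at each step, a language model assigns probabilities to candidate next tokens, and decoding just picks from that distribution. The vocabulary and probabilities below are invented for illustration, not taken from any real model:

```python
# Toy next-token distribution for a prompt like "the cat sat on the ...".
# These numbers are made up; a real LLM produces them from its weights.
next_token_probs = {
    "mat": 0.60,    # statistically most likely continuation
    "moon": 0.30,   # fluent but factually off -- probability, not truth
    "table": 0.07,
    "cat": 0.03,
}

def greedy_decode(probs: dict[str, float]) -> str:
    """Return the single most probable token (greedy decoding)."""
    return max(probs, key=probs.get)

print(greedy_decode(next_token_probs))  # -> mat
```

Nothing in this loop checks whether "mat" is true, only that it is likely, which is the commenter's point about hallucination.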
@Cyberflunk (English) • 7 months ago
Your one sentence makes more sense than the slop above.