@[email protected] to Technology (English) • 3 months ago
Apple study exposes deep cracks in LLMs’ “reasoning” capabilities (arstechnica.com)
109 comments • cross-posted to: [email protected]
HeyListenWatchOut • 11 points • 3 months ago
…a spellchecker on steroids. Ask literally any of the LLM chatbots out there still using a headless GPT instance from 2023 how many Rs there are in “strawberry,” and enjoy. 🍓
Semperverus • 11 points • 3 months ago
This problem is due to the fact that the AI isn’t using English words internally; it’s tokenizing. There are no Rs in {35006}.
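The tokenization point above can be sketched with a toy example. This is not a real tokenizer; the vocabulary and the ID 35006 (taken from the comment) are hypothetical, but the mechanism — greedy subword matching that turns text into opaque integer IDs — is how real BPE-style tokenizers behave, which is why letter counts aren’t directly visible to the model.

```python
# Toy illustration (NOT a real tokenizer): the model sees integer token
# IDs, not letters, so "how many Rs?" is not directly answerable from
# its input. Vocabulary and IDs below are hypothetical.
toy_vocab = {"straw": 35006, "berry": 19772}

def toy_tokenize(word):
    """Greedy longest-prefix match against the toy vocabulary."""
    ids = []
    while word:
        for piece in sorted(toy_vocab, key=len, reverse=True):
            if word.startswith(piece):
                ids.append(toy_vocab[piece])
                word = word[len(piece):]
                break
        else:
            raise ValueError(f"no vocabulary piece matches {word!r}")
    return ids

ids = toy_tokenize("strawberry")
print(ids)                         # [35006, 19772] -- no letters survive
print("strawberry".count("r"))     # 3 in the surface string...
print(str(ids).count("r"))         # ...but 0 in what the model "sees"
```

The model would have to have memorized spelling facts about each token during training to answer correctly, rather than reading the letters off its input.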
@Sterile_Technique • 5 points • 3 months ago
That was both hilarious and painful. And I don’t mean to always hate on it - the tech is useful in some contexts; I just can’t stand that we call it ‘intelligence’.
@Pieisawesome • 3 points • 3 months ago
LLMs don’t see words, they see tokens. They were always just guessing.