@[email protected] to TechnologyEnglish • 2 months agoApple study exposes deep cracks in LLMs’ “reasoning” capabilitiesarstechnica.comexternal-linkmessage-square109fedilinkarrow-up1513arrow-down119cross-posted to: apple_enthusiast[email protected]
@Chickenstalker (English) 1 point • edited • 2 months ago
Are we not flawed too? Does that not make AI…human?
minus-square@[email protected]linkfedilinkEnglish24•2 months agoHow dare you imply that humans just make shit up when they don’t know the truth
minus-square@[email protected]linkfedilinkEnglish6•2 months agoDid I misremember something, or is my memory easily influenced by external stimuli? No, the Mandela Effect must be real! /s