@[email protected] to TechnologyEnglish • 1 month agoApple study exposes deep cracks in LLMs’ “reasoning” capabilitiesarstechnica.comexternal-linkmessage-square109fedilinkarrow-up1513arrow-down119cross-posted to: apple_enthusiast[email protected]
arrow-up1494arrow-down1external-linkApple study exposes deep cracks in LLMs’ “reasoning” capabilitiesarstechnica.com@[email protected] to TechnologyEnglish • 1 month agomessage-square109fedilinkcross-posted to: apple_enthusiast[email protected]
@Chickenstalker • English • 1 month ago (edited)
Are we not flawed too? Does that not make AI…human?

@[email protected] • English • 1 month ago
How dare you imply that humans just make shit up when they don’t know the truth

@[email protected] • English • 1 month ago
Did I misremember something, or is my memory easily influenced by external stimuli? No, the Mandela Effect must be real! /s