@[email protected] to Technology • English • 2 months ago
Apple study exposes deep cracks in LLMs' "reasoning" capabilities (arstechnica.com)
cross-posted to: apple_enthusiast
@[email protected] • English • 18 points • 2 months ago
People working with these technologies have known this for quite a while. It's nice of Apple's researchers to formalize it, but nobody is really surprised, least of all the companies funnelling traincars of money into the LLM furnace.
@[email protected] • English • 1 point • 2 months ago
If they know about this, then they aren't thinking about the security implications.
Security implications?