@[email protected] to Technology • English • 1 month ago
Apple study exposes deep cracks in LLMs’ “reasoning” capabilities (arstechnica.com)
513↑ 19↓ • 109 comments • cross-posted to: [email protected]
@[email protected] • English • 18 points • 1 month ago
People working with these technologies have known this for quite a while. It’s nice of Apple’s researchers to formalize it, but nobody is really surprised, least of all the companies funnelling traincars of money into the LLM furnace.
@[email protected] • English • 1 point • 1 month ago
If they know about this, then they aren’t thinking of the security implications.
Security implications?