@[email protected] to Technology (English) • 1 month ago
Apple study exposes deep cracks in LLMs’ “reasoning” capabilities (arstechnica.com)
Cross-posted to: [email protected]
minus-square@[email protected]linkfedilinkEnglish-2•1 month agoWhat’s an example of a claim Altman has made that you’d consider bullshit?
minus-square@sinceasdflinkEnglish9•1 month agoThe entirety of “open” ai is complete bullshit. They’re no longer even pretending to be nonprofit at all and there is nothing “open” about them since like 2018.
minus-square@[email protected]linkfedilinkEnglish2•edit-21 month agoThat’s not a claim, it’s the name of the company. I’m not aware of Altman being the one who even came up with it.