@[email protected] to [email protected]English • 1 year agoOver just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study findsfortune.comexternal-linkmessage-square8fedilinkarrow-up173arrow-down15cross-posted to: [email protected][email protected][email protected]technologytechnology[email protected]
arrow-up168arrow-down1external-linkOver just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study findsfortune.com@[email protected] to [email protected]English • 1 year agomessage-square8fedilinkcross-posted to: [email protected][email protected][email protected]technologytechnology[email protected]
minus-square@[email protected]linkfedilinkEnglish18•1 year agoA system that has no idea if what it is saying is true or false or what true or false even mean is not very consistent in answering things truthfully?
minus-square@[email protected]OPlinkfedilinkEnglish4•edit-21 year agoWait for the next version which will be trained on data that includes gpt generated word salad
minus-square@[email protected]linkfedilink-1•1 year agoNo that is not the thesis of this story. If I’m reading the headline correctly, the rate of its being correct has changed from one stable distribution to another one.
Detroit: Become Human moment.