A prevailing sentiment online is that GPT-4 still does not understand what it talks about. We can argue semantics over what “understanding” truly means. I think it’s useful, at least today, to draw the line at whether GPT-4 has successfully modeled parts of the world. Is it just picking words and connecting them with correct grammar? Or does the token selection actually reflect parts of the physical world?
One of the most remarkable things I’ve heard about GPT-4 comes from an episode of This American Life titled “Greetings, People of Earth”.
So you’re claiming that it’s lying, and the broad expert consensus that it is indeed neither creative nor intelligent is also a lie? ok.
Lying would imply intelligence. But yes, it hallucinates/lies all the time.
What consensus are you talking about? Example link that says otherwise. And another one.
I mean, I started this conversation from an amateur article that did not and does not understand AI, so the fact that you can find more is not exactly surprising. You just need to do more research on this; feel free to talk to ChatGPT itself about this or Google for better sources.