A prevailing sentiment online is that GPT-4 still does not understand what it talks about. We can argue semantics over what “understanding” truly means. I think it’s useful, at least today, to draw the line at whether GPT-4 has successfully modeled parts of the world. Is it just picking words and connecting them with correct grammar? Or does the token selection actually reflect parts of the physical world?
One of the most remarkable things I’ve heard about GPT-4 comes from an episode of This American Life titled “Greetings, People of Earth”.
I wouldn’t put much stock in that either way, TBH. OpenAI has probably forced a particular type of response from it with RLHF for that question.
It is an accurate assessment of its capabilities and design. While LLMs obviously hallucinate, this is basically the expert consensus as well.
Can you point to any sort of expert consensus or support your claim in any way? Or are you just hallucinating like an LLM does?