A prevailing sentiment online is that GPT-4 still does not understand what it talks about. We can argue semantics over what “understanding” truly means. I think it’s useful, at least today, to draw the line at whether GPT-4 has successfully modeled parts of the world. Is it just picking words and connecting them with correct grammar? Or does the token selection actually reflect parts of the physical world?
One of the most remarkable things I’ve heard about GPT-4 comes from an episode of This American Life titled “Greetings, People of Earth”.
Oh good, so I can have a cockroach write code for me like ChatGPT can?
LLMs can reason and don’t just regurgitate their training data. Have you actually used one? It’s plainly obvious that they can reason. It’s primitive relative to what we’ll see in the future, because they’re only getting better, but they can do it today.
I have; something being “obvious” does not mean it is correct, and in this instance, you are wrong.
What exactly is the behavior that an LLM could display that would change your mind?