A prevailing sentiment online is that GPT-4 still does not understand what it talks about. We can argue semantics over what “understanding” truly means. I think it’s useful, at least today, to draw the line at whether GPT-4 has successfully modeled parts of the world. Is it just picking words and connecting them with correct grammar? Or does the token selection actually reflect parts of the physical world?
One of the most remarkable things I’ve heard about GPT-4 comes from an episode of This American Life titled “Greetings, People of Earth”.
It can come up with new solutions never seen before, so there is absolutely some intelligence. Hell, animals only have to do the simplest shit to get called intelligent.
Even animals are more intelligent than LLMs. An LLM is a text creation tool; if it comes up with a new solution, it’s not because it thought of it. It’s because there was already some connection for the solution in the text it ingested.
Oh good, so I can have a cockroach write code for me like ChatGPT can?
LLMs can reason and don’t just regurgitate their training data. Have you actually used one? It’s plainly obvious that they can reason. It’s primitive compared to what we’ll see in the future, because they’re only getting better, but they can do it today.
I have; it being “obvious” does not mean it is correct, and in fact, in this instance, you are wrong.
What exactly is the behavior that an LLM could display that would change your mind?
Nonsense. Just make something up yourself and ask it. If you then think “maybe that was on the internet before,” then try again with an even more absurd question. Ask it how to stack XYZ. Whatever you want.
Let’s do better and ask GPT-4 itself what it thinks about this!
I wouldn’t put much stock in that either way, TBH. OpenAI has probably forced a particular type of response from it with RLHF for that question.
It is an accurate assessment of its capabilities and design. While LLMs obviously hallucinate, this is basically the expert consensus as well.
Can you point to any sort of expert consensus or support your claim in any way? Or are you just hallucinating like an LLM does?
So you’re claiming that it’s lying, and the broad expert consensus that it is indeed neither creative nor intelligent is also a lie? OK.
Lying would imply intelligence. But yes, it hallucinates/lies all the time.
What consensus are you talking about? Here’s an example link that says otherwise, and another one.
I mean, I started this conversation from an amateur article that did not and does not understand AI, so the fact that you can find more is not exactly surprising. You just need to do more research on this; feel free to talk to ChatGPT itself or Google for better sources.
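For what it’s worth, the “just make something up yourself and ask it” test proposed in the thread above is easy to run. Below is a minimal sketch using the openai Python client (v1+); it assumes an OPENAI_API_KEY is set in the environment, and the stacking prompt is purely an illustrative stand-in for whatever absurd question you invent yourself.

# A minimal sketch of the "make something up and ask it" test.
# Assumes the `openai` Python package (v1+) is installed and
# OPENAI_API_KEY is set in the environment; the model name and
# prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# Invent a question unlikely to appear verbatim anywhere in the
# training data, then ask the model for a solution.
prompt = (
    "I have a book, nine eggs, a laptop, a bottle, and a nail. "
    "How can I stack them on top of each other in a stable manner?"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)

If the answer looks like it could have been memorized, swap in something stranger; the point of the exercise, as the commenter puts it, is to keep making the question more absurd until it plainly could not have been on the internet before.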