A prevailing sentiment online is that GPT-4 still does not understand what it talks about. We can argue semantics over what “understanding” truly means. I think it’s useful, at least today, to draw the line at whether GPT-4 has successfully modeled parts of the world. Is it just picking words and connecting them with correct grammar? Or does the token selection actually reflect parts of the physical world?
One of the most remarkable things I’ve heard about GPT-4 comes from an episode of This American Life titled “Greetings, People of Earth”.
To add to that: as you mentioned, GPT-4’s neurons are only a fraction of a human brain’s.
The entire human brain runs on 10-20 watts; that’s about a single lightbulb to do all the computing needed for conscious intelligence.
It’s crazy how optimized natural life is, and we have a lot left to learn.
It’s a fun balance of both excellent and terrible optimization. The higher amount of noise is a feature and may be a significant part of what shapes our personalities and ability to create novel things. We can do things with our meat-computers that are really hard to approximate in machines, despite having much slower and lossier interconnects (not to mention much less reliable memory and sensory systems).