Human brains also do processing of audio, video, self-learning, feelings, and many more things that are definitely not statistical text. There are even people without an “inner monologue” who function just fine.
Some research does use LLMs in combination with other AI to get better results overall, but a purely LLM-based approach isn’t going to work.
Yep, of course. We do more things. But language is a big thing in human intelligence and consciousness.
I don’t know, and I doubt that anyone really knows. But with people without an internal monologue, I have a feeling that they do have one but are simply not aware of it. Or maybe they talk so much that all the monologue is external.
Interesting that you focus on language, because that’s exactly what LLMs cannot understand. There’s no LLM that actually has a concept of the meaning of words. Here’s an excellent essay illustrating my point:
The fundamental problem is that deep learning ignores a core finding of cognitive science: sophisticated use of language relies upon world models and abstract representations. Systems like LLMs, which train on text-only data and use statistical learning to predict words, cannot understand language for two key reasons: first, even with vast scale, their training and data do not have the required information; and second, LLMs lack the world-modeling and symbolic reasoning systems that underpin the most important aspects of human language.
The data that LLMs rely upon has a fundamental problem: it is entirely linguistic. All LMs receive are streams of symbols detached from their referents, and all they can do is find predictive patterns in those streams. But critically, understanding language requires having a grasp of the situation in the external world, representing other agents with their emotions and motivations, and connecting all of these factors to syntactic structures and semantic terms. Since LLMs rely solely on text data that is not grounded in any external or extra-linguistic representation, the models are stuck within the system of language, and thus cannot understand it. This is the symbol grounding problem: with access to just a formal symbol system, one cannot figure out what these symbols are connected to outside the system (Harnad, 1990). Syntax alone is not enough to infer semantics. Training on just the form of language can allow LLMs to leverage artifacts in the data, but “cannot in principle lead to the learning of meaning” (Bender & Koller, 2020). Without any extralinguistic grounding, LLMs will inevitably misuse words, fail to pick up communicative intents, and misunderstand language.
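To make the essay’s point about “streams of symbols detached from their referents” concrete, here is a minimal toy sketch (made-up token IDs and a trivial bigram model, not anything from the essay): the “model” only ever sees integers and their co-occurrence statistics, never the things the integers stand for.

```python
from collections import Counter, defaultdict

# Toy corpus already reduced to integer token IDs: the model never sees
# what the IDs refer to in the world, only their order in the stream.
corpus_ids = [3, 7, 2, 3, 7, 9, 3, 7, 2, 5]

# Count how often each token follows each other token (a bigram model).
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus_ids, corpus_ids[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(prev_id: int) -> int:
    """Return the ID that most often followed prev_id in the stream."""
    return follow_counts[prev_id].most_common(1)[0][0]

# The "prediction" is just the statistically most frequent continuation;
# nothing here connects ID 7 to a table, a mother, or anything else
# outside the symbol stream itself.
print(predict_next(7))  # -> 2
```

A real LLM replaces the bigram counts with a transformer over billions of parameters, but the input and output are still only token IDs.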
One of the most successful applications of LLMs might actually be quite enlightening in that respect: language translation. B2 level seems to be little issue for LLMs, large cracks can be seen at C1, and forget everything about C2: things that require cultural context. Another area where they break down is spotting the need to reformulate, which is actually a B-level skill. Source: open a random page on deepl.com that’s not in English. Like this one:
“Durch weniger Zeitaufwand beim Übersetzen und Lektorieren können Wissensarbeitende ihre Produktivität steigern, sodass sich Teams besser auf andere wichtige Aufgaben konzentrieren können.” (Roughly: “Through less time spent on translating and proofreading, knowledge workers can increase their productivity, so that teams can better focus on other important tasks.”)
“Because less time required” cannot be a cause in idiomatic German; you’d say “by translating faster”. “Knowledge workers”… why are we using job descriptions, and an abstract category on top of that? Someone is a translator when they translate things, not when that’s their job description. How about plain and simple “employees” or “workers”? Then, “knowledge workers can increase their productivity”? That’s an S-tier Americanism; why should knowledge workers care? Why bring people into it in the first place? In the German way of thinking, the work getting easier is the sales pitch, not how much better employees can self-identify as a well-lubricated cog. “So that teams can better focus on other important tasks”? Why only teams? Do the improvements not apply if you’re working on your own? What the fuck do teams have to do with anything you’re saying, American PR guy who wrote this?
…I’ll believe that deepl understands stuff once I can’t tell, at a fucking glance, that the original was written in English, in particular, US English.
But these “concepts” of things are built on the relation and iteration of those concepts within our brain.
A baby isn’t born knowing that a table is a table. But they see a table, their parents say the word “table”, and they end up imprinting that what they have to say when they see that thing is the word “table”, which they can then relate to other things they know. I’ve watched some kids grow up and learn to talk lately, and it’s pretty evident how repetition precedes understanding. Many kids will just repeat words that their parents said in a certain situation whenever they happen to be in the same situation. It’s pretty obvious with small kids, but it’s a behavior you can also see a lot in adults: just repeating something they heard once they see that those particular words fit the context.
Also, it’s interesting that language can actually influence the way concepts are constructed in the brain. For instance, the ancient Greeks saw blue and green as the same colour, because they only had one word for both colours.
I’m not sure if you’re disagreeing with the essay or not. But in any case, what you’re describing is in the same vein: simply repeating a word without knowing what it actually means in context is exactly what LLMs do. They can get pretty good at getting it right most of the time, but without actually being able to learn the concept and context of ‘table’ they will never be able to use it correctly 100% of the time, or, even more importantly for AGI, apply reason and critical thinking. Much like a child repeating a word without much clue what it actually means.
Just for fun, this is what Gemini has to say:
Here’s a breakdown of why this “parrot-like” behavior hinders true AI:
Lack of Conceptual Grounding: LLMs excel at statistical associations. They learn to predict the next word in a sequence based on massive amounts of text data. However, this doesn’t translate to understanding the underlying meaning or implications of those words.
Limited Generalization: A child learning “table” can apply that knowledge to various scenarios – a dining table, a coffee table, a work table. LLMs struggle to generalize, often getting tripped up by subtle shifts in context or nuanced language.
Inability for Reasoning and Critical Thinking: True intelligence involves not just recognizing patterns but also applying logic, identifying cause and effect, and drawing inferences. LLMs, while impressive in their own right, fall short in these areas.
I mostly agree with it. What I’m saying is that the understanding of words comes from the self-dialogue made of those same words. How many times does a baby have to repeat the word “mom” before they understand what a mother is? I think that without that prior repetition, the more complex understanding is impossible; that human understanding of concepts, especially the more complex concepts that make us human, comes from us being able to have a dialogue with ourselves and with other humans.
But this dialogue starts out as a Parrot: non-intelligent animals with brains very similar to ours are parrots. Small children are parrots (so are even some adults). But it seems that after being a Parrot for some time, the ability to become a Human arrives. That parrot is needed, and it also keeps itself in our consciousness. If you don’t put a lot of effort into your thoughts and words, you’ll see that the Parrot is there, that you just express the most appropriate answer for that situation given what you know.
The “understanding” of concepts seems to be just a large, complex interconnection of neural-network-like outputs of different things (words, images, smells, sounds…). But language keeps feeling like the most important of those things for intelligent consciousness.
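Purely as an illustrative sketch of that “interconnection of outputs” idea (made-up vectors, nothing learned or measured here), a “concept” could be pictured as nothing more than outputs from different senses ending up associated with each other:

```python
# Toy outputs of pretend encoders for the same concept seen through
# different senses; the numbers are invented for illustration only.
word_vec  = [0.9, 0.1, 0.0]   # pretend text encoder output for "table"
image_vec = [0.8, 0.2, 0.1]   # pretend vision encoder output for a table photo
sound_vec = [0.1, 0.0, 0.7]   # pretend audio encoder output for a knock on wood

def cosine(a, b):
    """Cosine similarity: how strongly two outputs are associated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

# In this picture, the "concept" is just these vectors ending up close
# to each other for the same thing experienced through different senses.
print(cosine(word_vec, image_vec))  # high: same concept, different modality
print(cosine(word_vec, sound_vec))  # lower: weaker association
```

The point of the toy numbers is only that the association itself is the “concept”; no single modality carries the meaning on its own.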
I have yet to read another article that a different user posted, which explained why the jump from Parrot to Human is impossible with current AI architectures, but at a glance it seems valid. Still, that does not invalidate the idea of Parrots being the genesis of Humans, just that a different architecture is needed, and not in the statistical-answer department: the article I was linked was more about the size and topology of the “brain”.
A baby doesn’t learn concepts by repeating words over and over, and certainly knows what a mother is before it has any label or language to articulate the concept. The label gets associated with the concept later, and not purely by parroting; indeed, excessive parroting normally indicates speech development issues.
Many babies start saying “mama” and “papa” at barely 6 months. Do you really, actually think that a 6-12 month old infant has a concept in their mind of what a mother is, or of what kind of relationship there is between them and their mother? Do they know what the reproductive process is? Do they also know the family relationship with their great-aunt by marriage, or does that arrive conveniently at 15 months? Object recognition, and even recognition of other beings, is one thing; consciousness is something VERY different. Many animals do recognize other beings (this I like, this I don’t like), but understanding what another being is… only humans. And not right as they are born, obviously.
There are plenty of studies about why “mama” and “papa” are the most common first words: they are the easiest sounds to pronounce. It’s not that the baby thinks “Oh, I need my mother’s attention, I’d better call her right now, but I can’t quite remember her actual name, better call her mama”. No, no. They are just making the sound that’s easiest for them, and they get a positive reaction out of the sound they are making. Most of the time the being that is closest to them, and to whom they feel attached, is also making that sound, so they repeat it, get a positive reaction, and keep repeating the easy sound. It’s only later that they figure out that the sound they are making actually refers to another being. And at the beginning it is just a sound of recognition, which is not a sign of intelligence; some animals can make sounds of recognition too. Excessive parroting would obviously mean issues: as I said, parroting is the first stage toward human consciousness, so if they are stuck there, there’s obviously a problem. But without any parroting, your baby does indeed have a big issue.
Only when there is a developed chain of thoughts in some kind of language does the human start really thinking, start having what I call a consciousness (the ability to talk to yourself in order to modify your own behaviour). How would a being be able to talk to itself to heavily modify some sensory experience, or to modify its own behaviour, if not with speech of some sort?
I think we see this with one observation. Human beings are distinct from the rest of the animals because we have this ability (I’m working from the assumption that you think humans are otherwise the same as, or really close to, the rest of the animals). But an infant is not that different in behaviour from an animal, and it’s only later that they show this fundamental difference. So I think it’s safe to assume that this difference does not appear at conception or at birth, but some time after birth, when it starts developing until it is ready.
There are also plenty of studies of developmental issues in deaf children ( https://www.deafchildrenaustralia.org.au/wp-content/uploads/2021/06/language-development-deaf-children.pdf ). It is well documented that deafness in children greatly impairs development, and that other means of introducing a language to them are fundamental for their development. If language were not fundamental to the development of the human experience, deaf children would have no such problems: as you stated, they would “naturally” learn concepts before being introduced to the language needed to express those concepts. But this is proven false. Deaf children actually have severe issues learning and understanding concepts at these early stages. And the remedy, of course, is to introduce language to them in ways other than talking. That’s why this issue does not show up in deaf children born to deaf parents, as those parents are able to introduce language to their kids by means other than spoken speech.
“language is a big thing in human intelligence and consciousness”
But an LLM isn’t actually language. It’s numbers that represent tokens that build words. It doesn’t have the concept of a table, just the numerical weighting of other tokens related to “tab” & “le”.
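Purely as an illustration of that “just numbers” claim, here is a toy sketch with a hand-made vocabulary and made-up embedding values (real tokenizers such as BPE or SentencePiece split words differently, but the principle is the same):

```python
# Hypothetical subword vocabulary and embedding table, for illustration only.
vocab = {"tab": 0, "le": 1, "chair": 2, "wooden": 3}
embeddings = [
    [0.12, -0.40, 0.33],   # "tab"
    [-0.07, 0.25, 0.10],   # "le"
    [0.15, -0.38, 0.29],   # "chair"
    [0.01, 0.22, -0.18],   # "wooden"
]

def encode(word):
    """Greedily split a word into known subword pieces and return their IDs."""
    ids, rest = [], word
    while rest:
        for piece in sorted(vocab, key=len, reverse=True):
            if rest.startswith(piece):
                ids.append(vocab[piece])
                rest = rest[len(piece):]
                break
        else:
            raise ValueError(f"cannot tokenize {rest!r}")
    return ids

print(encode("table"))                           # -> [0, 1], i.e. "tab" + "le"
print([embeddings[i] for i in encode("table")])  # just lists of floats
```

Everything downstream of this, attention, weights, the lot, operates on those numbers; “table” as a thing in the world never enters the picture.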
I don’t know how to tell you this. But your brain does not have words imprinted in it…
Funnily enough, how such concepts form is something that has been studied as deriving from language. For instance, the ancient Greeks did not distinguish between green and blue, as both colours had the same word.
You said “language is a big thing in human intelligence and consciousness”.
You also said “your brain does not have words imprinted in it”.
You need to pick an argument and stick to it.
What do you not understand?
Words are not imprinted; they are a series of electrical impulses that we learn over time. That was in reference to the complaint that an LLM does not have words, only tokens that represent values within the network.
And those impulses, and how we generate them while we think, are of great importance to our consciousness.
So, are words important to the human brain or not? You are not consistent.
Sorry, I don’t know how to make my answer any simpler or clearer. Sorry you did not understand what I wrote.