1 - Well, it can simulate them, but we will probably never use simulation when what we want is intelligence. (We will for understanding the brain, though.)
2 - It doesn’t matter at all; intelligence doesn’t need to think like us.
3 - We are nowhere close to any general one, and the more investors bet all their money, and markets sell all their hardware, to the same few companies that will burn out at their local maximum, the further away we will get.
Agreed, but look at the history of how humans have thought about the presumed intelligence (or lack of it) in animals; we seem to be bad at recognizing intelligence that doesn’t mirror our own.
You think we won’t be able to use AI because we can’t recognize intelligence?
Those are two separate questions, I think.
“You think we won’t be able to use AI” – If there is some day actual artificial intelligence, I have no idea if humans can “use” it.
“we can’t recognize intelligence?” – I think you can make the case that historically we haven’t been great about recognizing non-human intelligence.
What I am saying is that if we ever invent an actual AGI, unless it thinks and, more importantly, speaks in a way we recognize, we won’t even realize what we invented.
Recognizing the intelligence is something you pushed into the discussion; I just want to know why you think it’s important.
Hm? I was agreeing with your 2nd point. I was merely adding that we’ve only recently begun to recognize non-human intelligence in species like crows (tool use), cetaceans (language), and higher primates (tool use, language, and social organization), which leaves me concerned that, if an AI were to “emerge” that was very different from human intelligence, we’d likely fail to notice it, potentially cutting off an otherwise promising development path.
Oh ok, you have a completely new concern.
I don’t think we will fail to spot intelligence in AIs, since they have advocates, something animals never had. But we have a problem in that “intelligence” seems to be a multidimensional continuum, so until we solve many different kinds of it, there will be things that fit some form of it but don’t really deserve the unqualified name.