I’ll keep presenting this challenge until someone meets it:
Anyone who thinks LLMs aren’t generally intelligent, can you name a text-processing task (i.e. text in, text out) that a general intelligence can do, that an LLM cannot?
Logic. As an example, non-textbook math questions. I asked ChatGPT 3.5 how many ways four friends (A, B, C, and D) can stand in line such that A and C are not next to each other.
It answered 20; the correct answer is 12.
All valid combinations:
abcd, abdc, adbc, adcb, cbad, cbda, cdba, cdab, bcda, badc, dcba, dabc
Its answer:
To solve this, let’s first consider the total number of combinations when there are no restrictions. Since there are 4 friends, there are 4! (4 factorial) ways to arrange them, which equals 24 combinations.
Now, let’s count the number of combinations where A and C are next to each other. Since A and C can be arranged in 2 ways, and for each arrangement, the other two friends (B and D) can be arranged in 2! ways, the total number of combinations where A and C are next to each other is 2 * 2! = 4.
So, the number of combinations where A and C cannot be next to each other is the total number of combinations minus the number of combinations where A and C are next to each other:
24 - 4 = 20 combinations.
The difference between 3.5 and 4 is substantial. Here is what 4 says:
To find the number of combinations in which four friends (A, B, C, and D) can stand in line such that A and C are not next to each other, we can use the following approach:
Total Combinations: First, calculate the total number of ways in which four people can be arranged in a line. This is 4! (4 factorial), since there are 4 slots to fill, each choice reducing the number of available choices by one for the next slot: 4! = 4 × 3 × 2 × 1 = 24.
Unwanted Combinations (where A and C are next to each other): Consider A and C as a single unit. This effectively reduces the number of units to arrange from 4 to 3 (the AC unit, B, and D). These three units can be arranged in 3! = 6 ways. However, within the AC unit, A and C can switch places, so there are 2 ways to arrange them. Therefore, the total number of arrangements where A and C are next to each other is 3! × 2 = 6 × 2 = 12.
Subtracting Unwanted Combinations: Subtract the number of unwanted combinations (where A and C are next to each other) from the total number of combinations: 24 − 12 = 12.
Thus, there are 12 combinations where A and C are not next to each other.
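The count is small enough to verify by brute force. A minimal Python sketch, enumerating all orderings with the standard library:

```python
from itertools import permutations

# Enumerate all 4! = 24 orderings of the four friends and keep only
# those in which A and C do not occupy adjacent positions.
valid = [
    "".join(p)
    for p in permutations("abcd")
    if abs(p.index("a") - p.index("c")) > 1
]

print(len(valid))  # 12
print(sorted(valid))
```

The resulting set matches the hand-written list of 12 arrangements above.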
It is true that newer models that have ingested more training data are better at this kind of thing, but not because they are using logic: they are copying and following examples they have already learnt, if that makes sense. I got the question from a test given to kids aged 12–13, so arguably it wasn’t really that challenging. If you want, you can try the more advanced problems from the same place I got it from, although it’s in Spanish, so pass it through Google Translate first.
If you ask programmers, they’ll tell you that AI usually makes mistakes no human would normally make, such as inventing variables that don’t exist. That is because, in the examples it learnt from, those variables mostly did exist.
What I mean to say is: if you give an AI a problem that is not in its training data and that can only be solved using logic (so you can’t apply what is used in other problems), it will be incapable of solving it. The Internet is so vast that almost everything has been written about, so AIs will seem to know how to solve any problem, but it is no more than an illusion.
HOWEVER, if we manage to integrate AIs and normal mathematical computation so closely that they function as one, that problem might be solved. It will probably have its own caveats, though.
I hear you. You make very good points.
I’m tempted to argue that many humans aren’t generally intelligent based on your definition of requiring original thought/solving things they haven’t been told/trained on, but we don’t have to go there. Lol
Can you expand on your last paragraph? You’re saying that if the model were trained on more theory and fewer examples of solved problems, it might be improved?
If I’m being completely honest, now that I’ve woken up with a fresh mind, I have no idea where I was going with that last part. Giving LLMs access to tools like running code so that they can fact check or whatever is a really good idea (that is already being tried) but I don’t think it has anything to do with the problem at hand.
The real key issue (I think) is getting AI to keep learning and iterating over itself past the training stage. Which is actually what many people call AGI/the “singularity”.
Text in: a statement.
Text out: confirmation of whether the statement is factually true or not.
To be honest, even the human mind doesn’t have this faculty in all cases.
Is that something a human can do consistently?
If it’s not, does that imply a human does not possess general intelligence?
It implies that “general intelligence” is so ill-defined in the question as to be essentially meaningless.
Even your original question was kind of ridiculous. “Ignoring everything LLMs aren’t designed to do, what’s the difference between an LLM and a general intelligence?”
I mean, if we follow that logic…Give me a math equation that proves my calculator isn’t a general intelligence.
Calculators don’t do anything with equations. They perform logical operations via substitution in order to determine the numerical value of terms.
But if you really want to go by that logic I agree: if you can represent a real world situation in terms of mathematical terms to be calculated into a final value, then a calculator is competent to navigate that situation.
I agree that the term “general intelligence” is poorly defined. My reason for posing a precise challenge is to put a spotlight on this fact.
We want to categorize LLMs as other than us, via this term “general intelligence”, because it is less terrifying than acknowledging that there’s a new intelligence operating next to us. That we have new neighbors, and that we are not keeping up with them.
My overall goal is to foster respect for the severity of our situation, by nullifying this “oh don’t worry it’s not real artificial intelligence”.
As for the calculator-vs-LLM question, I’d say LLMs are more likely to pose a threat to human hegemony, because it is easier to reduce the world to a textual narrative than it is to reduce it to a mathematical term to be calculated.
And I agree. For all our sake, we must stop using “yeah but is it a GeNeRaL InTeLliGeNcE” as our excuse for pretending the singularity isn’t happening.
I just…you seem to have a fair grasp of mathematics and logic (or you copied a portion of your reply from some other source that does), but either you don’t have a grasp of how LLMs work and are built, or you have an extremely naive view of consciousness, or I’m missing some prior assumption you used in coming to the conclusion that LLMs are anywhere near the level you seem to be implying, rather than statistical models. The input you provide to an LLM does not alter the underlying weights of the nodes in the network unless it is kept in training mode. When that happens, they quickly break down and all the output becomes garbage, because they have no reality-checking mechanism, and they don’t have context in the way people, or even animals we consider intelligent, do.
They may have a solid grasp on Mathematics, but they’ve got a poor grasp on Biology.
Human beings can push themselves well beyond the limits of needing to sleep; this is why sleep deprivation happens. It is not simply a mechanical matter of “there’s only so much time in the day”.
Cocaine exists.
Okay that’s a good point. LLMs, without retraining, are limited in the overall amount of complexity they can successfully navigate.
Sort of like a human who isn’t allowed to sleep, in my opinion. A human may be capable of designing an airplane, but not if the human never sleeps, because the complexity is beyond what a human can do in a single day without becoming exhausted and producing errors.
Do you believe that a series of LLMs, with each LLM being trained on the previous LLM’s training data plus the “input/output completions” that the previous iteration performed, would be a general intelligence?
If I sound naive it is because I am trying to apply Occam’s Razor to my own thinking, and minimize the conversation to the absolute minimum necessary set of involvements to move it forward. I’ll consider anything you ask me to, but so far I haven’t seen a reason to involve consciousness in questions of general intelligence. Do you think they are linked?
By the way, if you have a better definition of “general intelligence” than whatever definition was implied by my original challenge, I’m all ears.
It’s more than being limited in the overall complexity. The locked node weights mean that the LLM is fully deterministic…that is, it has no will or goals, no opinion, no sense of self/sense of the environment/sense of the separation between self/environment. It has no comprehension.
Iterative training cycles are already used with LLMs and don’t solve any of those issues.
From the standpoint of psychology, there’s not a wholly agreed-upon definition of ‘intelligence’, but most working definitions require the ability to learn from experience, to recognize problems, and to generalize and adapt that experience to solve them.
Theoretically, if an LLM had “intelligence,” you could ask it about a problem that was completely dereferenced in the training data. An intelligent LLM would be able to comprehend that problem, generalize it to a level that it could relate to some previous experience, then use details about that prior experience to come up with potential solutions to the new problem. LLMs can’t achieve any of those things individually, never mind all together. If someone pulled that off, it wouldn’t convince me their model was worth the level of concern you articulated earlier, but it would get my attention and would be something I’d watch pretty closely.
So you’re assuming determinism is incompatible with consciousness now? Comprehension? I might be “naive about the nature of consciousness” but you’re gullible about it if you think you know those things.
But at least you’ve made a definite claim now about a thing which an LLM cannot do, which is:
“Theoretically, if an LLM had ‘intelligence,’ you could ask it about a problem that was completely dereferenced in the training data.”
That brings me back to the original challenge: can you articulate such a problem? We can experiment with ChatGPT and see how it handles it.
deleted by creator
Is that the only problem?
ie, is that the only thing that’s not general about an LLM’s intelligence: that it lacks access to a certain set of data?
deleted by creator
Okay so assuming the camera’s output can be represented as a series of bits, and that the arms’ input can be represented as another series of bits, you have successfully identified a text processing task.
Your assertion then, is that this is a task outside the ability of an LLM to succeed at?
How do you know that an LLM-steered robot cannot perform that task? Has that been tried?