It’s weird that suddenly a lot of people outside the AI field say many things aren’t real AI because they’re not close to general intelligence. We’ve been calling things like chess engines and video game behavior trees AI for a long time, and now suddenly LLMs aren’t AI. Algorithms can be dumb and still be called AI. A fish has nowhere near the intelligence of an average human, yet people would still grant a fish some degree of intelligence.
I think you’ve misunderstood the debate. No one questions the term AI itself; the “artificial” part covers that.
It’s when pop-science articles, or people with a vested interest in the field, imply that current AI systems understand what they’re doing. They don’t; it’s clever mathematics.
I’ve never heard someone make that claim, or even imply it.
Gestures at whole thread.
Link a single example in this thread and I’ll eat my shoe.
It’s implied in the meme: LLMS JUST PARROT WHAT THEY CALCULATE WILL BRING APPROVAL WITHOUT UNDERSTANDING
Even_Adder debated under the same premise:
There are other examples too. I’ll let you off the shoe eating.
I’ve always thought of it as different uses of the same word. In games, AI refers to the logic behind NPCs, whether that’s a few lines of if-else or a complex machine-learning model. In more general tech, I’ve understood it as something more specific: a machine that learns and adapts based on input, rather than one that only changes when someone rewrites parts of it.
I still think that’s the more reasonable approach, since almost anything on the internet could be considered AI if held to the standard often used in video games. I’m in no way an AI expert, but I have implemented a ChatGPT integration solution at my day job. I’m not really opposed to LLMs being called AI colloquially, but I don’t think the arguments against them technically being AI are too far off. I’m not sure that matters to the general population as much as just understanding the limits of the technology.
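To make that contrast concrete, here’s a minimal sketch (the names and thresholds are all hypothetical) of the kind of if-else NPC logic games have always shipped under the label AI, with no learning involved:

```python
# A hypothetical guard NPC: a few lines of if-else that games
# have long called "AI" -- the function just maps game state to an action.
def guard_ai(distance_to_player: float, health: float) -> str:
    if health < 0.25:
        return "flee"    # self-preservation beats everything else
    if distance_to_player < 2.0:
        return "attack"  # close enough to swing
    if distance_to_player < 10.0:
        return "chase"   # player spotted; close the gap
    return "patrol"      # default behavior

print(guard_ai(distance_to_player=5.0, health=0.8))  # -> chase
```

Nothing here adapts to input over time, which is exactly why it fails the stricter “learns and adapts” definition while still counting as AI by the video-game standard.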
Interesting.
What I’ve noticed is that an “AI” guard brigade has appeared all over the internet, trying to enforce that AI only applies to LLMs. And another that has decided “AI” means “human-equivalent intelligence”.
Of course, both are loud. And dumb for brigading over words instead of ideas.
I’ve never seen that.
The two camps are:
LLMs aren’t AI because they aren’t human-equivalent. This camp is commonly associated with the meme, “LLMs are nothing but marketing hype.”
LLMs are one of many types of AI. This camp is associated with the meme, “LLMs can solve some problems.”
I’m in the “people are way too anthropocentric and don’t allow for the possibility that the language model IS understanding, albeit in a much more simplistic way than humans, in order to generate its output, since it needs an internal world model to even function and has been demonstrated as having one” camp.
People really aren’t ready to have the crown of human intelligence challenged and will cling to anything - like saying LLMs only parrot stuff unintelligently, or “it’s just really clever math” (your brain is also really clever math implemented in meatspace) - to maintain a level of comfort.
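On the “just clever math” point: the core next-token step both camps keep arguing over is easy to sketch. This is a toy illustration only - the vocabulary and logits are made up, standing in for what a real model would output - but it shows the mechanism in miniature: scores over a vocabulary get turned into a probability distribution, and the next token is sampled from it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up five-word vocabulary and logits, standing in for a real
# model's output scores over its (much larger) vocabulary.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([2.0, 0.5, 1.0, 0.1, 1.5])

def softmax(x):
    # Shift by the max for numerical stability, then normalize.
    z = np.exp(x - np.max(x))
    return z / z.sum()

probs = softmax(logits)
next_token = rng.choice(vocab, p=probs)  # sample the next token
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```

Whether the process that produces those logits counts as “understanding” is the actual disagreement; the sampling step itself is uncontroversial arithmetic.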