Seriously, I get that AI is annoying in how it’s being used these days, but has the second guy never heard of “anthropomorphizing”? Never seen Cast Away? Or played Portal?
Nobody actually thinks these things are conscious, and I’ve never heard even the most diehard fans of the technology claim AI is “conscious.”
(edit): I guess, to be fair, he did say “imagining” not “believing”. But now I’m even less sure what his point was, tbh.
My interpretation was that they’re talking about exactly that, anthropomorphization; it’s what we’re good at. Put googly eyes on a random object and people will immediately ascribe human properties to it, even though it’s just three objects in a certain arrangement.
In the case of LLMs, the googly eyes are our language and the chat interface it’s displayed in. The anthropomorphization isn’t inherently bad, but it does mean that people subconsciously ascribe human properties, like intelligence, to an object that’s just stringing words together in a certain way.
Ah, yeah you’re right. I guess the part I actually disagree with is that it’s the source of the hype, but I misconstrued the point because of the sub this was posted in lol.
Personally, when I first saw things like LLMs and image generators (before AI pervaded all the spaces it has no business being in), I just thought it was cool that we could make a machine imitate things that previously only humans could do. That, and LLMs are generally very impersonal, so I don’t think anthropomorphization is the real reason.
I mean, yeah, it’s possible that it’s not as important a factor in the hype. I’m a software engineer, and even before the advent of generative AI, we were riding a (smaller) wave of hype for discriminative AI.
Basically, we had a project which would detect when certain audio cues happened. And it had a very real limitation: if it fucked up even once every few minutes, it would cause a lot of problems.
But when you actually used it, when you’d snap your fingers and half a second later the screen turned green, it was all too easy to forget these objective problems, even though it didn’t really have any anthropomorphic features.
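For scale, here’s a quick back-of-the-envelope sketch (the numbers are made up, not the real project’s figures): even a detector that’s right on almost every single decision still screws up every few minutes if it’s deciding constantly.

```python
# Made-up numbers, just to show the scale of the problem
# (not the actual project's figures).
decisions_per_second = 2        # assumed: detector scores an audio window twice a second
per_decision_accuracy = 0.998   # assumed: 99.8% of windows classified correctly

# Expected errors per second, then the average gap between errors.
errors_per_second = (1 - per_decision_accuracy) * decisions_per_second
minutes_between_errors = 1 / errors_per_second / 60

print(f"~1 error every {minutes_between_errors:.1f} minutes")
# ~1 error every 4.2 minutes: "99.8% accurate" and "fucks up every
# few minutes" describe the same system, just from different angles.
```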
I’m guessing it was a combination of things: people are used to computers making objective decisions, so they’d be more willing to believe they just snapped badly or something.
But probably also just general optimism, because if the fuck-ups you notice are far enough apart, then you’ll forget about them.
Alas, that project got cancelled for political reasons before anyone realized that this very real limitation wasn’t solvable.
Most discussion I’ve seen about “ai” centers around what the programs are “trying” to do, or what they “know” or “hallucinate”. That’s a lot of agency being given to advanced word predictors.
That’s also anthropomorphizing.
Like, when describing how electricity or water flows, we’d say it “wants” to take the path of least resistance, but that doesn’t mean we think it has a mind or is conscious. It’s just a lot simpler than describing all the mechanisms behind how it behaves every single time.
Both my digital electronics and my geography teachers said stuff like that when I was in high school, and I’m fairly certain neither of them believes water molecules or electrons have agency.