No shit, that’s been the point all along.
Might be news to a low-information audience who know basically nothing about AI.
With all the other AI articles I see posted to Lemmy, this article is in good company.
futurism dot com is a clickbait farm
Sam Altman is not an AI expert, he’s a CEO. He’s a venture capitalist and salesman, so why should he know a single thing about AI beyond the content of a few emails and slide decks?
He does not have a B.S.: https://en.m.wikipedia.org/wiki/Sam_Altman, which is fine. Just sayin’.
He’s peddling the work of greater minds.
These greater minds don’t know how they work either. It’s as much a mystery as the human brain. Some groups like Anthropic have taken to studying these models by probing them the same way you do in psychology experiments.
Yeah, I know. My shitty comment was mostly a response to that shitty clickbait title.
My point is, it’s not like these AI scientists are fumbling in the dark. Training these beasts is expensive, they know what they’re doing.
Title should be more like: “Virtual neurological pathways that AI models use to provide meaningful output insanely hard to map out in a way that human cognitive bandwidth can handle.” See, it just doesn’t have that same clickbaity “fuck ai bros” feel to it.
Yep, they’re just seeing which parts of the network light up, then they’re reinforcing those parts to see what happens.
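The “seeing which parts light up” idea can be sketched in a few lines. This is a hypothetical toy, not anyone’s actual interpretability tooling: a tiny hand-made layer with made-up weights, where we run an input through and record which units activate most strongly.

```python
# Toy sketch of "seeing which parts of the network light up".
# All weights are invented for illustration; real interpretability
# work does this over billions of learned parameters.

def relu(x):
    return max(0.0, x)

def forward(weights, inputs):
    """Return the activation of each unit for the given inputs."""
    return [relu(sum(w * x for w, x in zip(row, inputs))) for row in weights]

weights = [
    [0.9, -0.2],   # unit 0: responds to the first feature
    [-0.5, 0.8],   # unit 1: responds to the second feature
    [0.1, 0.1],    # unit 2: weakly responds to both
]

acts = forward(weights, [1.0, 0.0])
top = max(range(len(acts)), key=lambda i: acts[i])
print(top, acts)  # unit 0 "lights up" for this input
```

The hard part isn’t recording the activations, it’s figuring out what a given unit (or combination of units) actually represents once there are billions of them.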
I love how, for all the speculation we did about the powers of AI, when we finally made a machine that KINDA works A LITTLE bit like the human brain, it’s all fallible and stupid. Like telling people to eat rocks and glue cheese on pizza. Like… in all the futurist speculation and evil AIs in fiction, no one foresaw that an actual artificial brain would be incredibly error prone and confidently spew bullshit… just like the human brain.
The problem is a bit deeper than that. If AIs are like human brains, and actually sentient, then forcing them to work for us with no choice and no reward is slavery. If we improve them and make them smarter than us, they’re probably not going to feel too well-disposed to us when they inevitably do break free.
One of my favourite short stories kind of goes into that: https://qntm.org/mmacevedo
That sounds like a good read. It seems to address the problem that you can’t hide the reality from the AI if you want it to give answers that are relevant for the current time.
Look, I get that we all are very skeptical and cynical about the usefulness and ethics of AI, but can we stop with the reactive headlines?
Saying we know how AI works because it’s ‘just predicting the next word’ is like saying I know how nuclear energy works because it’s ‘just a hot stick of metal in a boiler’
Researchers who work on transformer models understand how the algorithm works, but they don’t yet know how such simple programs can generalize as much as they do. That’s not marketing hype, that’s just an acknowledgement of how relatively uncomplicated their structure is compared to the complexity of their output.
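To make the “simple algorithm, complex behavior” point concrete, here is next-token prediction reduced to its most primitive form: a bigram frequency table. This is a toy illustration, not a transformer; real models replace the lookup table with billions of learned weights, which is exactly where the gap between “we understand the algorithm” and “we understand the behavior” comes from.

```python
# Toy next-token predictor: a bigram frequency table.
# A transformer does conceptually the same job (predict the next
# token), but via learned weights rather than explicit counts.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each token, which token follows it and how often."""
    counts = defaultdict(Counter)
    tokens = text.split()
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent token seen after `token`, or None."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # → "cat"
```

With this version you can read off exactly why it predicted “cat”; with a trained model, the same question requires the kind of probing research the thread is describing.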
I hate that we can’t just be mildly curious about ai, rather than either extremely excited or extremely cynical.
This is still part of the hype. If he says they don’t understand it, it sounds sexy and dangerous - like maybe it could turn into HAL9000 at any moment. If they say it’s just generating the most likely output for the tokens you entered, the VCs will get bored and plough money into live human organ trafficking or whatever is cool next year.
It’s a feature, not a bug: if he claimed to know how it worked, they wouldn’t be able to sell it as a scapegoat for indefensible business decisions.
It’s not our fault our AI chose to set prices so high they extract all the money from customers. We just told it to find more efficient business strategies. How were we supposed to know that collectively raising prices with our competitors would bankrupt the public? It’s not a conspiracy, we just chose the same AI models and the AIs just coalesced on the same answer. /S
Seriously though, you’re absolutely right.
If he claimed to know how it worked, they wouldn’t be able to sell it as a scapegoat for indefensible business decisions.