“In the same way you might use Google Maps to get everywhere and not know how to get there otherwise, AI might cause people to stop learning things they would have otherwise had to learn. Ironically, though, Rosen thinks this could cause more stress as people are inundated with AI and constantly shifting gears and not seeing anything quite clearly.”
This is one of the most concerning aspects of AI, IMO. Learning and thinking are some of the most fundamental aspects of being human. When you can outsource thinking to a machine, how is that going to affect your sense of self-worth? How are we going to keep kids motivated to learn in school when they know they'll never be able to learn things as well as AI can?
In the same way you might use Google Maps to get everywhere and not know how to get there otherwise, AI might cause people to stop learning things they would have otherwise had to learn.
The argument makes a lot of sense to me, but this particular example falls somewhat flat. When I started my new job, I used Google Maps to learn the optimal route for a couple of days until I had it committed to memory. Then I didn't need it anymore.
For a place I'm going to exactly once and probably never again, however, why would I want that information taking up valuable brain space? In a pre-Google Maps world, you'd probably spend twice as long taking a less optimal route that goes in that general direction, then drive around the area for a while longer until you hopefully stumble upon whatever it is you're looking for.
I mean, you're right, but I think there will be some time before everything becomes so integrated that we really don't have to learn anymore. Right now, for example, we have good translation, but learning a language is still immensely helpful when you're trying to communicate in another country. I think there will be things like that for quite a while.
I don't understand why they're trying to feed AI everything. There's too much garbage on the internet and elsewhere. If they can't make a good LLM with academic articles, Wikipedia, non-fiction books, newspapers, and magazines, then they need to go back to the drawing board.
They don’t want to make a “good” LLM. They want to make a profitable one that will lure people into buying products and shape their opinions when it comes to government policy.
This has nothing at all to do with AI; we're already living in a world filled with misinformation. AI doesn't fundamentally change anything. The reality is that people come up with narratives they want to believe in, and then seek out information that fits those narratives.
AI will definitely change the quantity and quality of the misinformation. Remember how effective Cambridge Analytica was? I’m guessing everyone in the non-EU country of England probably does.
It's like fighting a street fight against an opponent bigger than you, now with the looming threat of having to fight a professionally trained, Ivan Drago-esque behemoth dosed to shit on HGH and steroids.
They don't seem to be coping so well with a non-AI-saturated "post-truth world."
This article is making an alarmist case against AI by comparing it to technologies that also didn't fundamentally change human life. "In the same way you might use Google Maps to get everywhere and not know how to get there otherwise…" — but GPS navigation didn't ruin our ability to drive. It just made driving more convenient.
If anything, the presence of AI-generated fakes will make face-to-face interactions more valuable. Our most trusted source of news will be the people we trust and interact with directly. That's not a new and horrifying condition for humanity. It's how we've always lived.
No one is. Picture the following:
You have a friend you talk to online. You've never met in real life, but you talk every day. One day that connection is severed. You never know it, but a bot trained on all the data from your conversations is now talking to you instead, and the inverse has happened to your friend. You're now both talking to bots of each other, designed to seem just like the real thing while slowly influencing you toward whatever mode of thinking is desirable to the owner of the platform.

This is my big concern about AI. Not that it's going to be making the next Citizen Kane or painting the Mona Lisa, because as we've seen already, people don't give a shit about any of that. Instead, it's going to be mass-producing and delivering propaganda posters, memes, and posts with such overwhelming severity that we'll be stuck in a Russia-esque level of hypernormalization of falsehoods, until we eventually become completely cynical and nihilistic about finding the truth, choosing instead to side with the falsehoods that match our personal biases.
We are living in a post-truth world already; just check the news.