I have significant problems with AI, particularly around reckless implementation in tasks where it is simply incapable of providing real value (most of them), but I struggle to see these kinds of articles as anything other than the journalism version of the same lazy application.
Blaming AI for a mental health issue is like blaming alcohol for making someone an asshole. They were an asshole before they got drunk; it just became more obvious while drunk. Same thing here: AI is not causing psychosis, it’s just revealing it in a place we’re not used to seeing stuff like this come from: a computer.
This article likely wasn’t written with AI, but making AI the subject matter and tying this person’s crisis to their use of AI seems lazy at best, negligent at worst.
Yeah, AI isn’t causing psychosis; it’s amplifying and enabling people who might already have problems.
What is your argument here? That since AI is not the direct cause, these articles are pointless? I think we’d want to know if, to use your example, a commonly available thing like alcohol were giving people psychosis.
Alcohol absolutely makes mental health issues worse; I’m not sure what you’re trying to say here.
I think my concern is with it being called “AI Psychosis.”
It doesn’t seem like an effective call to action for getting the individual treatment if we simply blame the AI for not being “safe enough” or something.
If we bring this back to alcohol, the alcohol absolutely is to blame for worsening symptoms. There’s even the term “alcohol-induced psychosis” or “alcohol-related psychosis” to describe the effect. Without the alcohol they are fine; with it they enter psychosis.
If someone is symptom-free without AI and experiences symptoms with AI, then calling it “AI psychosis” would be reasonable.
I guess so. I’m just so wary of everything revolving around AI, even blaming it. If the idea of AI psychosis gets big but is then undermined by shoddy reporting or research, it could lead people to dismiss it the way they dismissed the McDonald’s hot coffee case.
I want AI companies to be held accountable, but only appropriately, so they can’t use shaky arguments to get out of that responsibility.
imo it won’t matter until the bubble pops. AI could be powered by literal human sacrifice and it would be dismissed.
I don’t think anyone here is blaming AI for this woman’s mental issues; those clearly existed before generative AI. AI just happened to enable her issues, which the article points out.
You’re right, but posting this in the Fuck AI community is like hating Nazis in the Tesla community.