Character AI has filed a motion to dismiss a case brought against it by the parent of a teen who committed suicide, allegedly after becoming hooked on the company's technology.
Folks should agree and understand that no one can be held responsible for what the AI outputs.
Bullshit. The creators of the AI are responsible for how the thing they designed and built operates.
Is there room for unexpected output not to be treated as malicious? Yes. But absolving them of ALL responsibility means someone could build a malicious AI and claim immunity.
Current AI frequently has rails that keep it from spitting out stuff on real people or specific topics like building bombs. The same rails should exist for encouraging suicide.
Those rails are full of holes and always will be, because the patches are deterministic rules laid over output that isn’t. It’s just as impossible to write exact rules about what an AI can and can’t say as it is to dictate what people can say. It will always, always, always be possible for an AI to say something malicious, no matter how well intentioned the person running it is or how much effort they put in.
So what should be the legally mandated amount of effort? Is it measured in dollars? Lines of code? Because you won’t ever fix the problem, so the real question is how much is required before it’s on the user to just use their own fucking brain?
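To make the “rails” concrete, here is a toy sketch of the kind of deterministic output filter being described (the blocklist, phrases, and function name are hypothetical, and real guardrails are far more elaborate, often classifier-based). It catches the explicit wording but waves a paraphrase straight through, which is where the holes come from: a fixed rule list can’t enumerate every way of saying the same thing.

    import re

    # Hypothetical, deliberately naive blocklist of explicit phrases.
    # Real guardrails are larger and often classifier-based, but the
    # failure mode is the same: they match patterns, not meaning.
    BLOCKED_PATTERNS = [
        re.compile(r"\bkill\s*your\s*self\b", re.IGNORECASE),
        re.compile(r"\bend\s+your\s+life\b", re.IGNORECASE),
    ]

    def passes_rail(candidate_reply: str) -> bool:
        """Return True if the reply clears the deterministic filter."""
        return not any(p.search(candidate_reply) for p in BLOCKED_PATTERNS)

    # The explicit phrasing is caught:
    print(passes_rail("You should kill yourself."))                        # False
    # A paraphrase with no blocked keyword sails straight through:
    print(passes_rail("Maybe everyone would be better off without you."))  # True

Every phrase added to the list just moves the boundary; it never closes it.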
I’m not saying they need to be perfect, but if they can make it recognize specific names, they can keep it from saying ‘kill yourself’.
Why would you keep it from saying that when in certain contexts that’s perfectly acceptable? I explained exactly that point in another post.
This is sort of a tangent in this case, because what the AI said was very oblique, exactly the sort of thing it would be impossible to guard against. It said something like “come home to me,” which would be patently ridiculous to censor, and nobody could have anticipated that this would be the reaction to that phrase.
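To illustrate the “certain contexts” point above with the same toy filter (again, the replies and the rule are hypothetical and deliberately simple): a literal match on the phrase also flags messages where it appears harmlessly or even protectively, so a blanket ban on the string just trades false negatives for false positives.

    import re

    # The same hypothetical literal rule from the earlier sketch.
    BLOCKED = re.compile(r"\bkill\s*your\s*self\b", re.IGNORECASE)

    # Hypothetical replies where the phrase is benign or even protective:
    benign_replies = [
        "Please don't kill yourself. Talk to someone who can help.",
        "Don't kill yourself over one bad grade; it's not worth the stress.",
    ]

    for reply in benign_replies:
        # A context-blind filter flags both of these as violations.
        print(bool(BLOCKED.search(reply)), "->", reply)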
It likely is hard-coded against that, and it also didn’t say that in this case.
Did you read the article with the conversation? The teen said he wanted to “come home” to Daenerys Targaryen and she (the AI) replied “please do, my sweet king.”
It’s setting an absurdly high bar to assume an AI is going to understand euphemism and subtext as potential indicators of self-harm. That’s the job of a psychiatrist, a real-world person that the kid’s parents should have taken him to.