- cross-posted to:
- [email protected]
cross-posted from: https://lemmy.zip/post/15863526
Steven Anderegg allegedly used the Stable Diffusion AI model to generate photos; if convicted, he could face up to 70 years in prison
How are they abuse images if no abuse took place to create them?
Because they are images of children being graphically raped, a form of abuse. Is an AI generated picture of a tree not a picture of a tree?
No it isn’t, any more than a drawing of a car is a real car, or drawings of money are real money.
Material showing a child being sexually abused is child sexual abuse material.
And an AI generated image does not show a child being abused
Bullshit and you know it.
There is no child being abused by a generated image or a drawing.
If Paedophile Hill is the hill you want to die on, it’s no loss to me, so I’ve got zero interest in your “Ceci n’est pas une child rape” defense.
Nobody is saying they’re real, and I now see what you’re saying.
By your answers, your question is more “at-face-value” than people assume:
You are asking:
“Did violence occur in real life in order to produce this violent picture?”
The answer is, of course, no.
But people are interpreting it as:
“This is a picture of a man being stoned to death. Is this picture violent, if no violence took place in real life?”
To which the answer is, yes.
It can be abhorrent and unlikable, but it’s still not abuse.
We’re not disagreeing.
The question was:
“Is this an abuse image if it was generated?”
Yes, it is an abuse image.
Is it actual abuse? Of course not.
And yet it’s being treated as though it is.
Well, that’s another story. I just answered your question. “Are these images about abuse even if they’re generated?” Yup, they are.
“Should people be prosecuted because of them?” Welp, someone with more expertise should answer this. Not me.
Images of children being raped are being treated as images of children being raped. Nobody has ever been caught with child pornography and charged as if they abused the children themselves, nor is anybody advocating that people generating AI child pornography be charged as if they sexually abused a child.
Everything is being treated as it always has been, but you’re here arguing that it’s moral and harmless as long as an AI does it, using every semantic trick and shifted goalpost you possibly can.
It’s been gross as fuck to watch. I know you’re aiming for a kind of “king of rationality, capable of transcending even your disgust of child abuse” thing, but every argument you make is so trivial and unimportant that you’re coming across as someone hoping CSAM becomes more accessible.
No genius it’s just promoting abuse. Have a good day.
Just like violent video games produce school shooters
You’ve already fucked up your own argument. You’re supposed to be insisting there’s no such thing as a “violent video game”, because representations of violence don’t count, only violence done to actual people.
Oops you forgot to use logic. As per the comment you’re replying to, the more apt analogy would be: is an AI generated picture of a car still a picture of a car.
That has nothing to do with logic. It’s pointing out that both drawings and AI generations are not really the things they might depict.
It’s a picture of a hallucination of a tree. Distinguishing real from unreal ought to be taken more seriously given the direction technology is moving.
It’s a picture of a hallucination of a tree
So yes, it’s a tree. It’s a tree that might not exist, but it’s still a picture of a tree.
You can’t have an image of a child being raped – regardless of whether that child exists or not – that is not CSAM, because it’s an image of a child being sexually abused.
Distinguishing real from unreal ought to be taken more seriously given the direction technology is moving.
Okay, so who are you volunteering to go through an endless stream of images and videos of children being raped to verify that each one has been generated by an AI and not a scumbag with a camera? Paedos?
Why are neckbeards so enthusiastic about dying on this hill? They seem more upset that there’s something they’re not allowed to jerk off to than by the actual abuse of children.
Functionally, legalising AI generated CSAM means legalising “genuine” CSAM because it will be impossible to distinguish the two, especially as paedophiles dump their pre-AI collections or feed them in as training data.
People who do this are reprehensible, no matter what hair splitting and semantic gymnastics they employ.
Hey man, I’m not the one. I’m literally just saying that the images that AI creates are not real. If you’re going to argue that they are, you’re simply wrong. Should these ones be generated? Obviously I’d prefer that they not be. But they’re still effectively fabrications that I’m better off simply not knowing about.
If you want to get into the weeds and discuss the logistics of enforcing what is essentially thought crime, that is a different discussion I’m frankly not savvy enough to have here. I have no control over the ultimate outcome, but for what it’s worth, my money says thought crime will in fact become a punishable offense within our lifetimes, and this may well be an easy catalyst to use to that end. This should put your mind at ease.
The thread is about “how are they abuse images if no abuse took place” and the answer is “because they’re images of abuse”. I haven’t claimed they’re real at any point.
It’s not a thought crime because it’s not a thought. Nobody is being charged for thinking about raping children, they’re being charged for creating images of children being raped.
If the images are generated and held by a single person, it may as well be a thought crime. If I draw a picture of a man killing an animal, which is an image depicting a heinous crime spawned by my imagination, and I go to prison over this image, I would consider this a crime of incorrect thought. There are no victims, no animals are harmed, but my will spawned an image of a harmed animal. Authorities dictated I am not allowed to imagine this scenario. I am punished for it. I understand that the expression of said thought is what’s being punished, but that is very literally the only way to punish a thought to begin with (for now), hence freedom of expression being a protected right.
The reason this is a hard issue to discuss in this context is because the topic at hand is visceral and charged. No one wants to be caught dead defending the rights of a monster, lest they be labeled a monster themselves. I see this as a failure of society to know what to do about people like this, opting instead to throw them into a box and hope they die there. If our justice system wasn’t so broken, I might give less of a shit, but as it stands I see this response as shortsighted and inhumane.
If the model was trained on CSAM, then it is dependent on abuse.
That’s a heck of a slippery slope I just fell down.
If responses generated from AI can be held criminally liable for their training data’s crimes, we can all be held liable for all text responses from GPT, since it’s being trained on reddit data and likely has access to multiple instances of brigading, swatting, man hunts, etc.
You just summarized the ongoing ethical concerns experts and common folk alike have been talking about in the past few years.
As I said in my other comment, the model does not have to be trained on CSAM to create images like this.
That’s irrelevant; any realistic depiction of children engaged in sexual activity meets the legal definition of CSAM. Even using filters on images of consenting adults could qualify as CSAM if the intent was to make the actors appear underage.
doesn’t even have to be that realistic.
I mean… regardless of your moral point of view, you should be able to answer that yourself. Here’s an analogy: suppose I draw a picture of a man murdering a dog. It’s an animal abuse image, even though no actual animal abuse took place.
It’s not though, it’s just a drawing.
Except that it is an animal abuse image, drawing, painting, fiddle, whatever you want to call it. It’s still the depiction of animal abuse.
Same with child abuse, rape, torture, killing or beating.
Now, I know what you mean by your question. You’re trying to establish that the image/drawing/painting/scribble is harmless because no actual living being suffering happened. But that doesn’t mean that they don’t depict it.
Again, I’m seeing this from a very practical point of view. However you see these images through the lens of your own morals or points of view, that’s a totally different thing.
And when characters are killed on screen in movies, are those snuff films?
No, they’re violent films.
Snuff is a different thing, because it’s supposed to be real. Snuff films depict violence in a very real sense, so they’re violent. Fiction films also depict violence, so they’re violent too. It’s just that they’re not about real violence.
I guess what you’re really trying to say is that “Generated abuse images are not real abuse images.” I agree with that.
But at face value, “Generated abuse images are not abuse images” is incorrect.
CSAM is illegal all around
It isn’t CSAM if there was no abuse.
It’s not child sexual assault if there was no abuse. However, the legal definition of CSAM is any visual depiction, including computer or computer-generated images of sexually explicit conduct, where […]— (A) the production of such visual depiction involves the use of a minor engaging in sexually explicit conduct; (B) such visual depiction is a digital image, computer image, or computer-generated image that is, or is indistinguishable from, that of a minor engaging in sexually explicit conduct; or (C) such visual depiction has been created, adapted, or modified to appear that an identifiable minor is engaging in sexually explicit conduct.
You may not agree with that definition, but even simulated images that look like kids engaging in sexual activity meet the threshold for CSAM.
Yes it is
CSAM is child pornography
Do you not know that CSAM is an acronym that stands for child sexual abuse material?
True, but CSAM is anything that involves minors. It’s really up to the court to decide a lot of it, but in the case above I’d imagine that the images were quite disturbing.
in this instance, no human children or minors of any kind were involved.
I think the court looked at the psychological aspects of it. When you look at that kind of material, you are training your brain and body to be attracted to that stuff in real life.
prove that any “training” is involved, please.
We’re discussing the underpinnings and philosophy of the legality and your comment is simply “it is illegal”
I can only draw from this that your morality is based on laws instead of vice versa.
I’m in the camp that there is no reason you should have that kind of imagery, especially AI-generated imagery. Think about what people often do with pornography. You do not want them doing that with children, regardless of whether it is AI generated.
What does want have to do with it? I’d rather trust science and psychologists to determine if this, which is objectively harmless, helps them control their feelings and gives them a harmless outlet.
They aren’t banning porn in general. They just don’t want to create any more sexual desires toward children. The CSAM laws came from child protection experts. Admittedly some of these people want to “ban” encryption but that’s irrelevant in this case.
deleted by creator
13,000 images are generated relatively fast. My PC needs about 5 seconds for a picture with SD (depending on settings, of course). So not even a day.
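A quick sanity check on that estimate (assuming the commenter's figures of 13,000 images at roughly 5 seconds each; actual Stable Diffusion throughput varies with hardware and settings):

```python
# Rough back-of-the-envelope check: 13,000 images at ~5 s each.
seconds_per_image = 5          # assumed per-image generation time
num_images = 13_000
total_seconds = num_images * seconds_per_image
total_hours = total_seconds / 3600
print(round(total_hours, 1))   # about 18 hours, i.e. under a day
```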
Also, if pedos would only create their own shit to fap to i would consider this a win.
The only good pedo is a pedo permanently separated from society. Let’s start with the Catholic Church
Also, if pedos would only create their own shit to fap to i would consider this a win.
Vape logic.
Could you explain why?
Sure.
I mostly referred to the second paragraph. Probably this person meant that it’s better that no child was harmed in producing those 13k images, but the wording irked me, especially the ‘win’. It got me a bit salty and I didn’t elaborate, so I don’t know exactly what people thought I meant.
So I don’t consider this a ‘win’ because it doesn’t help their urges or make them less dangerous, unlike therapy; similarly, vaping was sometimes marketed as a healthier alternative to cigarettes or a way to give up smoking. I don’t want to dive into the ethics of these two kinds of CSAM, but even leaving aside the (victimless?) aspect of production, it’s still harmful to society as a whole to (generate,) collect and share it. Why the brackets? In court there are usually different levels or different articles involved, and even if production itself were treated as harmless, merely holding a collection and participating in the trade/sharing of such materials are criminal offences in themselves. And there we have to decide whether we treat them as real or not. Returning to vapes: because they weren’t regular cigarettes, many initially thought it was okay to smoke them at work or in a classroom, but later they were banned as well. That’s not the only case where the nature of what AI produces, and the responsibility for it, causes arguments, and our codified laws aren’t bleeding-edge enough to cover this, so I guess we’re in the period where we decide on a framework to evaluate and work with these things. As silly as it is, the vape craze was the first thing I was reminded of, and that’s not great, because I first heard of both it and AI CSAM through their use in schools; the latter from an article months ago about deepfake nudes boys made of their peers. The seemingly walled garden keeps being the most vulnerable.
Sensitive topic - obviously.
However these guard rail laws, and “won’t someone think about the children” cases are a reeeeally easy way for the government to remove more power from the people.
However, I believe if handled correctly, banning this sort of thing is absolutely necessary to combat the mental illness that is pedophilia.
I don’t condone child sexual abuse, and I’m definitely not a pedo (gosh, I can’t believe I have to state this.)
But how does banning AI generated material help combating a mental illness? The mental illness will still be there, with or without images…
There’s something to be said about making it as difficult as possible to enable the behavior. Though that does run the risk of a particularly frustrated individual doing something despicable to an actual child. I don’t exactly have the data on how all this plays out, and frankly I don’t want to be the one to look into it. Society isn’t particularly equipped to handle an issue like this though, focusing on stigma alone to kinda try to shove it under the rug.
Your second sentence is exactly what I was thinking of. The big issue with pedophilia is the fact that kids can be easily manipulated (or forced!) into heinous acts. Otherwise, what’s the difference from regular porn on topics like prisoners, slavery, necrophilia, etc.? Would we say that people who consume rape-fantasy porn will go out and rape? If a dude who is sexually attracted to women is not raping women left and right all year round, because he knows it’s wrong, and we’re not labeling every heterosexual male a creep, then why would this be different with other kinds of attractions?
But anyway. I’m not saying anything that hasn’t been discussed in the past (I’m sure.) I’m just glad I don’t have that condition (or anything similar, like attracted to volcanoes), otherwise life would definitely suck.
Mainly it’s a problem of enabling the problem as others have mentioned.
It’s not a solution, per se. It doesn’t solve something specifically, but it doesn’t have to. It’s about making it less accessible, imposing harsher consequences, and so on, to put more pressure on not continuing to participate in the activity. Ultimately it boils down to mental health and trauma. Pedophilia is a paraphilic disorder at the end of the day.
We don’t disagree. But this argument is different from what you stated earlier. Your current argument is “these images are horrible. Let’s wipe them off the face of the Earth because they’re wrong.” But you (Edit: oops, OP is you!) originally said “not having access to these images will help people ‘cure’ their paraphilia.” I don’t think that has any scientific basis, though I’ll be happy to stand corrected. Edit: clarification.
I am the original commentator, unless you’re referring to the poster who just posted a quote and the link to the article
I’m not sure where you’re drawing these argument conclusions from and it’s bordering on muddying the water.
Sorry, yes, I was referring to what you originally said (I thought it was another commenter.)
Well, the same thing I can say about your argument conclusions and the same “muddying the water” opinion.
Your stance is “banning this X type of content will help cure Y,” and I’d like to see the science backing this up. That is all. I’m not defending pedophilia if that’s what you’re implying with “muddying the waters.” It’s just that I’m all for evidence, even if the evidence makes us (yes, me included) uncomfortable.
I’ve literally just said what I meant and you’re ignoring it. I explicitly said that it’s about making it harder to participate in the behavior. I even said it’s not a cure.
Obvious troll. Blocked. See ya never edge lord
🤷‍♂️
please learn the difference between posting and commenting.
I know the difference.
I’ve used “OP” to refer to a parent poster (or commenter) for decades, on Slashdot, Digg, Reddit and now here. I won’t change it unless there’s a major shift in the community.
70 years for… Generating AI CSAM? So that’s apparently worse than actually raping multiple children?
It seems weird that the AI companies aren’t being held responsible too.
It’s open source code that someone ran on their own computer, it’s not like he used paid OpenAI credits to generate the image.
It also would set a bad precedent - it would be like charging Solomons & Fryhle because someone used their (absolutely ubiquitous) organic chemistry textbook to create methamphetamine
Well the American way is not to hold the company accountable, I.e. school shootings, so yeah.
I’m pretty sure you can’t hold a school liable for a school shooting
I think they were referring to the firearm manufacturer and/or seller.
Yes, this, not the school.
Still can’t really hold them liable unless they deliberately sold a weapon to someone who legally was prohibited from having a weapon.
Shootings are more of a mental health and social media issue in my mind. The bigger question is: why did someone feel the need to kill others?
Still can’t really hold them liable unless they deliberately sold a weapon to someone who legally was prohibited from having a weapon.
That’s a very American point of view though - America isn’t holding those who create/sell tools that do bad things to account. If gun manufacturers were held responsible for how the things they created were used, you can bet anything suddenly they’d be hell of lot safer. Which is the exact same point about AI.
(Obviously not holding manufacturers/sellers to account is not an America-only issue, but this article is about AI and the USA so that’s the example I’m using.)
The bigger question is why did someone feel the need to kill others?
As a non-American, I think the general question is why on earth does the general public need semi-automatic weapons. Or really, any weapons.
I mean we’re also not suing Toyota or Stolichnaya to stop drunk driving. In America the onus is on you not to do the bad thing, not on the companies or government for not preventing you from doing it. In America if you kill someone it is your fault, not Ruger’s.
Frankly, I’m surprised it doesn’t work that way in every country. If you sell a friend your old car and he hits an old lady months or years later, would you get charged? That sucks.
I guess it’s a cultural difference. America likes its guns.
I see the gun issue in America in the same light as the car issue. We’re in way too fucking deep, and it’s a part of our culture now. I hate both, but I acknowledge how difficult it is to do something about it.
Was Kodak ever held responsible for original CSAM?
I think stable diffusion is an open source AI you can run on your own computer, so I don’t see how the developers should be held responsible for that.
The basis of making CSAM illegal was that minors are harmed during the production of the material. Prior to CG, the only way to produce pornographic images involving minors was to use real, flesh-and-blood minors. But if no minors are harmed to create CSAM, then what is the basis for making that CSAM illegal?
Think of it this way: if I make a pencil drawing of a minor being sexually abused, should that be treated as though it is a criminal act? What if it’s just stick figures, and I’ve labeled one as being a minor, and the others as being adults? What if I produce real pornography using real adults, but used actors that appear to be underage, and I tell everyone that the actors were all underage so that people believe it’s CSAM?
It seems to me that, rationally, things like this should only be illegal when real people are being harmed, and that when there is no harm, it should not be illegal. You can make an entirely reasonable argument that pornographic images created using a real person as the basis do cause harm to the person so depicted. But what if it’s not any real person?
This seems like a very bad path to head down.
Simpsons CSAM case in 2008 in Australia:
Of course he did. That’s the world we live in.