Under-moderated and under-administered instances have ended up with child porn on them. If that shit gets federated out, it's a real mess for everyone. I think screening tools are more advanced now, thankfully, because it's been a while since the last incident.
Surely filtering out child porn is something that I can do for myself.
That means the CSAM (it's not 'child porn', it's child abuse) remains on the server, which means the instance owner is legally liable. I don't know about you, but if I were an instance owner I wouldn't want the shame and legal consequences of leaving CSAM up on a server I control.
Make a reliable way to automate that, and you’ll make a lot of money.
Rely on doing it for yourself, and… well, good luck with your mental health in a few years' time.
AI would be able to do a good first pass on it. Except that an AI that was able to reliably recognize child porn would be a useful tool for creating child porn, so maybe don’t advertise that you’ve got one on the job.
So that's the indispensable service the admin provides: child porn filtering.
I didn't realize it was such a large job. So large that it justifies the presence of a cop in every conversation? I dunno.
I’ve read through a few of your replies, and they generally contain a “so, …” and a generally inaccurate summary of what the conversation thread is about. I don’t know whether there’s a language barrier here or you’re being deliberately obtuse.
It would appear that your desire for a community without moderators is so strong that a platform like Lemmy is not suitable for what you want, and as such you are likely not going to find the answer you want here and will spend your time arguing against the flow.
Good luck finding what you’re looking for 👍
I notice that you have yet to point out even one such inaccuracy.
If your questions are concrete and in the context of Lemmy or the Fediverse more broadly, admins provide the service of paying for and operating the servers in addition to moderation.
If it's more abstract, i.e. "can people talk to each other over the internet without moderators?", then my experience is that they usually can when the group is small, but things deteriorate as it grows larger. The threshold for where that happens is higher if the group has a purpose or if the people already know each other.
That sounds right.
So the deterioration needs to be managed.
Even if that were a viable solution (it isn't), humans who are employed to filter out this disgusting content (and worse) are frequently psychologically damaged by the exposure. This includes online content moderation companies and those in law enforcement who have to deal with that stuff for evidentiary reasons.
The reason it's not a viable solution: if YOU block it because YOU don't want to see it, but it's still there, it becomes a magnet for those who DO want to see it, because they know it's allowed. The value of the remaining legitimate content goes down because more of your time is spent blocking the objectionable material yourself, until it's too much for anyone who doesn't want that stuff and they leave. Then the community dies.
Personal child porn filtering automation and a shared blacklist. That would take care of the problem. No moderator required. A minimal sketch of what I mean is below.
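For what it's worth, here's a rough sketch of what "personal filtering automation plus a shared blacklist" could look like, assuming the blacklist is just a text file of known-bad file hashes. The file names and the use of plain SHA-256 are illustrative assumptions only; real systems like PhotoDNA use perceptual hashes, because an exact hash is defeated by simply re-encoding the image.

```python
import hashlib
from pathlib import Path

def load_blocklist(path: str) -> set[str]:
    """Load a shared blocklist of known-bad hashes, one hex digest per line."""
    return {line.strip().lower() for line in Path(path).read_text().splitlines() if line.strip()}

def is_blocked(image_bytes: bytes, blocklist: set[str]) -> bool:
    """Return True if this exact file matches an entry on the shared blocklist."""
    return hashlib.sha256(image_bytes).hexdigest() in blocklist

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    blocklist = load_blocklist("shared-blocklist.txt")
    incoming = Path("incoming-image.jpg").read_bytes()
    if is_blocked(incoming, blocklist):
        print("rejected before display")
```

Which is also the weak point: the matching is trivial, the hard and expensive part is building and maintaining the list of known material and catching anything new that isn't on it yet.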
If you can write an automated filter to block CSAM then Apple, Meta, Alphabet and others would happily shovel billions at you. Blocking CSAM is a constant and highly expensive job… and when they fuck up it’s a PR shit storm.
Maybe keeping it off the network is a lost cause. If we each block it with personal filtering then that changes the face of the issue.
If Lemmy is a hub for those who want to trade CSAM, then it will be taken down by the government. This isn't something that can be allowed onto the system.
Oh just those, eh?
Just goes to show how little idea you have of how difficult this problem is.
This is starting to sound like, “we need constant control and surveillance to protect us from the big bad”.
You know, for the children.
Mate, if you don’t like the way we run things, go somewhere else. You’re not forced to be here.
But you see my point, right?
Of course I see the point you're trying to make, but I also think you're naive and don't understand the repercussions of what you're suggesting.
Bro is talking smack to their own instance host 💀
It is illegal in most countries to host child sexual abuse material.
It still means too much legal trouble for the admin if the offending data is on the server.