Hello everyone,

We unfortunately have to close the !lemmyshitpost community for the time being. We have been fighting the CSAM (Child Sexual Abuse Material) posts all day, but there is nothing we can do: since we changed our registration policy, they just post from another instance.

We keep working on a solution; we have a few things in the works, but they won’t help us right now.

Thank you for your understanding and apologies to our users, moderators and admins of other instances who had to deal with this.

Edit: @[email protected], the moderator of the affected community, made a post apologizing for what happened. But this could not have been stopped even with 10 moderators. And if it wasn’t his community, it would have been another one. And it is clear this could happen on any instance.

But we will not give up. We are lucky to have a very dedicated team and we can hopefully make an announcement about what’s next very soon.

Edit 2: removed that bit about the moderator tools. That came out a bit harsher than how we meant it. It’s been a long day and having to deal with this kind of stuff got some of us a bit salty to say the least. Remember we also had to deal with people posting scat not too long ago so this isn’t the first time we felt helpless. Anyway, I hope we can announce something more positive soon.

  • @ptrckstr
    link
    62
    edit-2
    1 year ago

    I’m afraid the fediverse will need a crowdsec-like decentralized banning platform. Get banned on one platform for this shit, get banned everywhere.

    I’m willing to participate in fleshing that out.

    Edit: it’s just an idea, I do not have all the answers, otherwise I’d be building it.

    • @Katana314
      link
      English
      19
      1 year ago

      What you’re basically talking about is centralization. And, as much as it has tremendous benefits of convenience, I think a lot of people here can cite their own feelings as to why that’s generally bad. It’s a hard call to make.

      • @rbar
        link
        7
        1 year ago

        They didn’t say anything about implementation. Why couldn’t you build tooling to keep it decentralized? Servers or even communities could choose to ban from their own communities based on a heuristic over the moderation actions published by other communities. At the end of the day it is still individual communities making their own decisions.

        I just wouldn’t be so quick to shoot this down.
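The heuristic-over-published-moderation-actions idea could be sketched roughly like this; every name, field, and threshold below is an assumption made up for illustration, not anything Lemmy actually implements:

```python
# Hypothetical sketch: ban locally once enough *trusted* peer servers have
# banned the same actor. Server names, fields, and the threshold are made up.
TRUSTED_SERVERS = {"alpha.example", "beta.example", "gamma.example"}
BAN_THRESHOLD = 2  # local ban once this many trusted servers have banned the actor

def should_ban_locally(actor: str, published_bans: list[dict]) -> bool:
    """published_bans: moderation actions federated from other instances,
    e.g. {"actor": "@troll@bad.example", "server": "alpha.example"}."""
    servers = {b["server"] for b in published_bans
               if b["actor"] == actor and b["server"] in TRUSTED_SERVERS}
    return len(servers) >= BAN_THRESHOLD

bans = [
    {"actor": "@troll@bad.example", "server": "alpha.example"},
    {"actor": "@troll@bad.example", "server": "beta.example"},
    {"actor": "@troll@bad.example", "server": "rogue.example"},  # untrusted; ignored
]
```

Because each instance picks its own trusted set and threshold, the final decision stays with the individual server, which is the decentralization point being argued here.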

        • @[email protected]
          link
          fedilink
          1
          1 year ago

          There is something similar for Minecraft servers: a website/plugin that collects people’s bans, so other admins can look up a username, see whether someone is a known troll or whatever, and ban them straight away before they cause issues. So it’s definitely possible to do in a decentralised way.

    • @BradleyUffner
      link
      English
      10
      edit-2
      1 year ago

      There is no way that could get abused… Like say, by hosting your own instance and banning anyone you want.

      • @ptrckstr
        link
        5
        1 year ago

        Anything can be abused, but you can also build proper safeguards.

    • @CreeperODeath
      link
      English
      3
      1 year ago

      I feel like this would be difficult to enforce

      • Whitehat Hacker
        link
        English
        1
        1 year ago

        Do people think that someone has to use the same email address, or the same username? If someone uses a different email, username, and IP address (don’t try to argue semantics; it can be done, always could be, and always has been), then whatever you put into the list can’t be applied to them.

        Even if you ask for IDs, people can fake those. Sure, that’s illegal, but so is what these assholes did, and it didn’t really stop them, now did it?

      • @ptrckstr
        link
        3
        edit-2
        1 year ago

        You can have a local banlist supplemented by a shared banlist containing these CSAM individuals for example.
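A minimal sketch of that layering, assuming simple sets of federated handles (all names here are hypothetical), with a local allowlist so each instance keeps the final say:

```python
# Hypothetical sketch: a shared banlist layered under the instance's own lists.
shared_banlist = {"@abuser@evil.example", "@spammer@bot.example"}  # federated list
local_banlist = {"@troll@here.example"}                            # instance's own bans
local_allowlist = {"@spammer@bot.example"}  # admin reviewed this entry and overruled it

def is_banned(actor: str) -> bool:
    if actor in local_allowlist:
        return False  # local decisions always win over the shared list
    return actor in local_banlist or actor in shared_banlist
```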

        • @thisisawayoflife
          link
          5
          1 year ago

          That ban list could be a set of rich objects: the user that was banned, date of action, community it happened in, reason, and the server it happened at. Sysops could choose not to accept any bans from a particular site. Make things fairly granular so there’s flexibility to account for bad-actor sysops.
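One way those rich objects and the per-server filter could look; the field names and server names are assumptions for illustration, not a spec:

```python
# Hypothetical sketch of a "rich" ban record plus a sysop-controlled filter.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class BanRecord:
    user: str        # federated handle of the banned account
    banned_on: date  # date of the moderation action
    community: str   # community where it happened
    reason: str
    server: str      # instance that issued the ban

DISTRUSTED_SERVERS = {"rogue.example"}  # maintained per instance by its sysops

def accepted_bans(records: list[BanRecord]) -> list[BanRecord]:
    """Drop bans issued by servers this instance has chosen not to trust."""
    return [r for r in records if r.server not in DISTRUSTED_SERVERS]

records = [
    BanRecord("@abuser@evil.example", date(2023, 8, 27), "memes", "CSAM", "alpha.example"),
    BanRecord("@innocent@ok.example", date(2023, 8, 27), "memes", "spite", "rogue.example"),
]
```

Keeping the distrust list per instance is what addresses the bad-actor-sysop concern: a rogue server’s bans simply never take effect elsewhere.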

        • newIdentity
          link
          fedilink
          4
          1 year ago

          But how do you know that these people actually spread CSAM and someone isn’t abusing their power?

    • Draconic NEO
      link
      2
      1 year ago

      We already have that, it’s called prison. Can’t go on the internet from prison (at least I’d assume so; it wouldn’t make much sense if people could). That’s not 100%, since people need to be caught for it to work, but once they are, it certainly is.

      Though other global ban solutions don’t really work well, because they require a certain level of compliance that criminals aren’t going to follow through with (i.e. not committing identity theft). They can also be abused by malicious actors to falsely ban people (especially with the whole identity theft thing).

    • @atticus88th
      link
      2
      1 year ago

      [ptrck has been permanently banned from all social media]

    • Hello Hotel
      link
      English
      1
      1 year ago

      Maybe FIDO for identity purposes is a good idea. Maybe some process that takes a week to calculate an identity token, plus an approval and rejection system for known tokens.