Hi All,

You may have seen the issues occurring on some lemmy.world communities regarding CSAM spamming. Thankfully it appears they have taken the right steps to reduce these attacks, but there are some issues with the way Lemmy caches images from other servers, and the potential for CSAM to make its way onto our image server without us being aware.

Therefore we’re taking a balanced approach to the situation and trying to deal with these issues in the least impactful way.

As you read this we’re using AI (with thanks to @db0’s fantastic lemmy-safety script) to scan our entire image server and automatically delete any possible CSAM. This does come with caveats, in that there will absolutely be false positives (think memes with children in them), but this is preferable to nuking the entire image database or stopping people from uploading images altogether.
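For the curious, the scan works roughly like this sketch. To be clear, this is not db0’s actual lemmy-safety code - `classifier` here stands in for the real model it uses, and the deletion behaviour is simplified:

```python
# Rough illustration of a scan-and-delete pass over an image store.
# NOT db0's lemmy-safety script: `classifier` is a stand-in for the
# real detection model, and false positives are accepted as the cost
# of being thorough.
from pathlib import Path
from typing import Callable

def scan_store(root: Path, classifier: Callable[[Path], bool],
               dry_run: bool = True) -> list[Path]:
    """Run every stored file through the classifier; collect (and,
    unless dry_run is set, immediately delete) anything flagged."""
    flagged = []
    for image in sorted(root.rglob("*")):
        if image.is_file() and classifier(image):
            flagged.append(image)
            if not dry_run:
                image.unlink()  # delete on the spot, no manual review
    return flagged
```

Running with `dry_run=True` first gives an idea of how many files (and false positives) would be affected before anything is actually deleted.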

This won’t be a 100% guarantee, but it’s far better than doing nothing to ensure CSAM is removed from the server.

We have a pretty good track record locally with account bans (maybe one or two total), which is great. But if we notice an uptick in spam accounts, we’ll look to introduce measures to stop bots that slip past the registration process - for example, ZippyBot can already take over community creation, which would stop any new account from creating communities; only accounts with a high enough score would be able to do so.
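As a purely hypothetical sketch of what such a gate might look like - the field names and thresholds below are made up for illustration and are not ZippyBot’s actual logic:

```python
# Hypothetical "trusted account" check mixing account age and score.
# All fields and thresholds are illustrative only, not ZippyBot's
# real implementation.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int  # days since registration
    score: int     # combined post + comment score
    posts: int     # number of posts made locally

def can_create_community(acct: Account) -> bool:
    """Brand-new accounts are always refused; older accounts need
    either some score or some posting history."""
    if acct.age_days < 7:
        return False
    return acct.score >= 10 or acct.posts >= 5
```

A gate like this would block a wave of freshly registered bots from creating communities without affecting established users.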

We don’t need (or want) to enable this yet, but just want you all to know we have tools available to help keep this instance safe if we need to use them.

Any questions please let me know.

Thanks all
Demigodrick

  • DemigodrickOPM · 12 · 1 year ago

    Currently no - once it is gone, it is gone. Obviously, it being CSAM, we don’t want to view it ourselves (for our own sanity, plus we’d probably be breaking local laws), and there is a lot to sift through - we have over 600k image files now - so some level of automated detection needs to take place.

    This is why many admins are requesting that the devs add better moderation tools. Currently the options we have are severely limited, leading to situations like this where we’re having to run a script (10 hours running so far and not finished yet) to try and manage these attacks. We also don’t know who posted it, or which communities it was posted in, because of how Lemmy currently handles images.

    I agree karma farming becomes an issue if we make it a barrier. We do need some kind of qualifier for the bot to determine who is a legitimate user and who is just spamming content, though - it can be a mix of things, including account age, score, and any other metric we can pull together from the API.

    • Possibly linux · 2 · 1 year ago

      What if I don’t want to have a social score? I don’t like bots but I also don’t want to feel like I’m constantly being judged (even if I am)

      The best way to spot bots is to have people report them.

      • DemigodrickOPM · 6 · 1 year ago

        By the time a bot posts, it is already too late (especially with CSAM stuff). Protecting the instance is the number 1 priority, and the Lemmy API already provides one score per user (maybe two after a recent update) that can be leveraged to determine whether someone is active and legitimate, or brand new and a potential spam account.

        Like I said, it’s not something I want to use, but the tool exists and is ready should we ever be in a position where the registration method is defeated and we’re hit by a wave of bots creating communities.

        • Possibly linux · 1 · 1 year ago

          Why would a bot create a new community? That doesn’t make sense to me.

      • some_guy · 1 · 1 year ago

        You can go to a different instance if this one is curated in a way you don’t like.

        • Possibly linux · 1 · 1 year ago

          True, but that’s not really a great option for everything. It’s better to make suggestions for non-critical things.

      • Draconic NEO · 1 · 1 year ago

        When it comes to CSAM, it’s important to stop them before they get the chance to post and host it on this instance, so reporting isn’t really going to work here.

        Though I do agree that using score is a bad idea, as it’ll foster more of the toxic Reddit culture around differing opinions. On Reddit, low karma actually did punish you: it locked you out of communities, rate-limited you, and even increased the chances of being shadowbanned (yes, you can be shadowbanned for low karma, or at one point you could have been). We really don’t want to bring that kind of thing to Lemmy. Not only does it invite toxicity, it’s also less effective here, since votes can’t be policed as easily as they can on Reddit, due to the federated nature of the platform and the smaller amount of data admins hold or collect about users.

        • Possibly linux · 2 · 1 year ago

          My concern is that bots will start karma farming. Many communities are just taking off, and it’s not inconceivable that a bot could start posting Reddit content.

          • Draconic NEO · 1 · edited · 1 year ago

            That’s also another big concern of mine. As it currently stands, there’s no real incentive for upvote farming on Lemmy, but that would change if score were used to gatekeep people and accounts the way karma is on Reddit. And unlike Reddit, you can’t really police upvotes and downvotes here, because for users on other servers you won’t have access to their emails, IP addresses, or cookies.