Today, a bunch of new instances appeared at the top of the user count list. It appears that these instances are all being bombarded by bot sign-ups.

For now, it seems that the bots are especially targeting instances that have:

  • Open sign-ups
  • No captcha
  • No e-mail verification

I have put together a spreadsheet of some of the most suspicious cases here.

If this is affecting you, I would highly recommend considering one of the following options:

  1. Close sign-ups entirely
  2. Only allow sign-ups with applications
  3. Enable e-mail verification + captcha for sign-ups

Additionally, I would recommend pre-emptively banning as many bot accounts as possible, before they start posting spam!

Please comment below if you have any questions or anything useful to add.


Update: on lemm.ee, I have defederated the most suspicious spambot-infested instances.

To clarify: this means small instances with an unnaturally fast explosion in user counts over the past day and very little organic activity. I plan to federate again if any of these instances get cleaned up. I have heard that other instances are planning (or already doing) this as well.

It’s not a decision I took lightly, but I think protecting users from spam is a very important task for admins. Full info here: https://lemm.ee/post/197715

If you’re an admin of an instance that’s defederated from lemm.ee but wish to DM me, you can find me on Matrix: @sunaurus:matrix.org

  • Lvxferre · 33 points · 1 year ago

    This might be related but I’ve noticed that someone is [likely automatically] following my posts and downvoting them. Kind of funny in a 'verse without karma.

    • Galloog1 · 10 points · 1 year ago

      Karma may mean nothing but the information space is a strategic domain.

      • Lvxferre · 3 points · 1 year ago

        I don’t think it’s the case here, as I’ve noticed this after posts in small communities:

        • c/linguistics (~240 members)
        • c/parana (1 member - new comm)

        I think that the person/bot/whatever is following specific people.

    • @[email protected] OP · 12 points · 1 year ago

      Nice! Would be cool if you could also include current statuses of captchas, emails, and application requirements.

      • db0 · 9 points · 1 year ago

        Tell me how to fetch them and it will. ;)

        • @[email protected] OP · 9 points · 1 year ago

          I think the easiest option is to just iterate through the list of suspicious instances, and then check {instance_url}/api/v3/site for each of them. Relevant keys of the response json are site_view.local_site.captcha_enabled, site_view.local_site.registration_mode, and site_view.local_site.require_email_verification.

          Since it’s a bunch of separate requests, it probably makes sense to do these in parallel, and probably also to cache the results at least for a while.
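          A minimal sketch of that flow in Python (the endpoint and response keys are the ones named above; the thread pool size, timeout, and error handling are assumptions):

```python
import json
import concurrent.futures
import urllib.request


def fetch_site_info(host, timeout=10):
    """Fetch /api/v3/site for one instance and return the parsed JSON."""
    with urllib.request.urlopen(f"https://{host}/api/v3/site", timeout=timeout) as resp:
        return json.load(resp)


def extract_signup_settings(site_json):
    """Pull the three registration-related settings out of the response."""
    local_site = site_json["site_view"]["local_site"]
    return {
        "captcha_enabled": local_site["captcha_enabled"],
        "registration_mode": local_site["registration_mode"],
        "require_email_verification": local_site["require_email_verification"],
    }


def check_instances(hosts, max_workers=8):
    """Query all instances in parallel; record None for ones that error out."""
    results = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fetch_site_info, h): h for h in hosts}
        for fut in concurrent.futures.as_completed(futures):
            host = futures[fut]
            try:
                results[host] = extract_signup_settings(fut.result())
            except Exception:
                results[host] = None  # unreachable or unexpected response shape
    return results
```

          Caching could be layered on top of `check_instances` by keeping its result dict around with a timestamp and only re-running after it goes stale.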

          • db0 · 6 points · 1 year ago

            It occurs to me that this kind of thing is better left to Fediverse Observer, as it’s already set up to poll instances and gather data. I would suggest you ask them to ingest and expose this data as well.

    • @couragethebravedog · 19 points · 1 year ago

      I highly doubt spez did this. Reddit is currently doing fine. Even if it all goes away he’s sitting on over a decade of genuine human conversations he can sell to AI companies and make millions. He isn’t worried.

      • TWeaK · 4 points · 1 year ago

        Steve Huffman doesn’t do anything, he’s a greedy little pigboy who profits off of the creation of his dead “friend”. He claims ownership of your ideas, for reddit’s exclusive profit, at no benefit (if anything, at penalty) to yourself.

        However it would be naive to assume that he hasn’t directed at least some shade towards reddit. Almost as naive as to think that Google doesn’t create bots to target websites that don’t use their own captcha services.

        PSA: When “proving you’re human”, always try to poison the data. They’re using your input to train visual AI, without paying you for your efforts. With Google, they will typically put the training up front - there will be one or two images that a bot isn’t sure about. If you give the unexpected response, the next test will be one that the machine knows, to check that you’re a human who knows what they’re talking about. With hcaptcha or some others, they might put the obvious one first, then check your guesses are human after.

        The services will determine that you’re human by other means anyway (eg mouse movements) and eventually let you through, but by giving them the wrong answer when they don’t know but the right answer when they do, you can make their AI less effective.

        They should be paying you for your input into their commercial enterprise, so fuck them.

    • YMS · 16 points · 1 year ago

      Or it’s just the unavoidable spam bot accounts that will keep coming as long as sign-up is easy and instance operators are still unprepared.

  • @[email protected] · 15 points · 1 year ago

    I’m noobish, but could they be defederated until they get their act together before they spam everybody?

  • @[email protected] · 14 points · 1 year ago

    99% of fedi instances should require sign-ups with applications and email verification. It does not make sense to let users in indiscriminately unless you have staff moderating around the clock.

    • db0 · 35 points · 1 year ago

      We’re trying to capture the reddit refugees as well. It’s a fine line to walk.

        • db0 · 18 points · 1 year ago

          Yes, that’s the bare minimum until we get a better toolset.

      • hightrix · 10 points · 1 year ago

        Agreed. An application that must be human-reviewed is a very large gate; many people will see it and simply close the site. Myself included.

        • @[email protected] · 1 point · 1 year ago

          Nothing against you, but that is a good thing. The fact that human-reviewed applications scare off some users means fewer low-hanging-fruit trolls and shit-posters.

    • AlmightySnoo 🐢🇮🇱🇺🇦 · 14 points · 1 year ago

      Email verification + captcha should be enough. The application part is cringe and a bad idea, unless you really want to be your own small high school clique and don’t have any growth ambitions, which is perfectly fine but again should not be expected from general instances looking to welcome Redditors.

    • lixus98 · 4 points · 1 year ago

      Are you already defederating from suspicious instances? If not, what are you planning to do?

    • SysAdmin · 3 points · edited 1 year ago

      FYI, startrek.website has purged the bot accounts and enabled CAPTCHA. Just letting you know because they were showing up on your list.

      • db0 · 3 points · 1 year ago

        Good to hear. Once Fediverse Observer updates your user count, this should clear itself out. Let me know soon if it doesn’t.

  • Somoon · 11 points · 1 year ago

    It was brought to my attention that my instance was hit with the spam bot registrations. I’ve disabled registration and deleted the accounts from the DB. Is there anything else I can do to clear the user stats in the sidebar?

  • SysAdmin · 11 points · 1 year ago

    Thanks for the heads up, StarTrek.website has enabled CAPTCHA and purged the bots from our database.

  • bread · 11 points · 1 year ago

    Maybe this is what’s implied, or I’m just being silly, but what is to stop a bad actor from spinning up a Lemmy instance, creating a bunch of bot accounts with no restrictions, and spamming other instances? Would the only course of action be for the non-spam instances to individually defederate the spam ones? That seems like a bit of a cat-and-mouse situation. I’m not too familiar with the inner workings of Lemmy or the tools it has that would be useful here.

    • @PriorProject · 17 points · 1 year ago

      They can do this, and it is cat and mouse. But…

      1. It generally costs money to stand up an instance. It often requires a credit card, which reduces anonymity. This will dissuade many folks.
      2. A malicious instance can be defederated, so it might not be all that useful.
      3. People can contact the security team at the host providing infra/internet to the spammer. Reputable hosts will kill the account of a spammer, which again is harder to duplicate if the host requires payment and identity info.
      4. Malicious hosts that fail to address repeated abuse reports can be IP-blocked.
      5. Eventually, Lemmy features can be built to protect against this kind of thing by delaying federation from new instances, requiring admin approval, or shadow-banning them during a trial period.

      Email has shown us that there’s a playbook that kind of works here, but it’s not easy or pleasant.
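      Point 5 doesn’t exist in Lemmy today, but the trial-period idea can be sketched. Everything below, from the state names to the 14-day window, is hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical sketch: a new peer instance federates in a restricted state
# until an admin approves it or a quiet trial period passes with no abuse.
TRIAL_PERIOD = timedelta(days=14)


@dataclass
class PeerInstance:
    domain: str
    first_seen: datetime
    admin_approved: bool = False
    abuse_reports: int = 0


def federation_state(peer, now):
    """Decide how to treat activities arriving from a peer instance."""
    if peer.admin_approved:
        return "full"      # federate normally
    if peer.abuse_reports > 0:
        return "blocked"   # unresolved abuse: drop activities
    if now - peer.first_seen < TRIAL_PERIOD:
        return "shadow"    # accept but don't surface publicly yet
    return "full"          # quiet trial period elapsed
```

      The point of the shadow state is that a spam instance reveals itself during the trial without its content ever reaching users.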

      • katie · 4 points · 1 year ago

        I’ve been thinking about this as well. As a user of a private instance, I would be absolutely fine with a “federation application” much like user registration is currently.

      • @[email protected] · 1 point · 1 year ago

        Eventually Lemmy features can be built to protect against this kind of thing by delaying federation, requiring admin approval, or shadow-banning them during a trial period.

        That sounds like the best way to handle it.

  • @[email protected] · 8 points · 1 year ago

    I know from talking to admins back when phpBB was really popular that fighting spammers and unsavory bots was the big workload in running a forum. I’d expect the same for Fediverse instances. I hope a system can be worked out to make it manageable.

    As a user I don’t have a big problem with mechanisms like applications for the sake of spam control. It’s hugely more convenient when an account can be created instantaneously, but I understand the need.

    I do wonder how the fediverse is going to deal with self-hosting bad actors. I would think some kind of vetting process for federation would need to exist. I suppose you could rely on each admin to deal with that locally, but that does not sound like an efficient or particularly effective solution.

  • tal · 6 points · edited 1 year ago

    I suspect that there’s going to need to be some analysis software that can run on the kbin and lemmy server logs looking for suspicious stuff.

    Say, for instance, a ton of accounts come from one IP. That’s not a guarantee that they’re malicious – like, could be some institution that NATs connections or something. But it’s probably worth at least looking at, and if someone signed up 50 accounts from a single IP, that’s probably at least worth red-flagging to see if they’re actually acting like a normal account. Especially if the email provider is identical (i.e. they’re all from one domain).
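    A rough sketch of that kind of red-flagging, assuming the sign-up records have already been parsed out of the server logs (the field names and the threshold of 20 are assumptions):

```python
from collections import defaultdict


def flag_suspicious_signups(signups, threshold=20):
    """Group sign-up records by source IP and email domain; flag any group
    at or above the threshold for manual review. Not proof of abuse on its
    own (a NATing institution could trip it), just a starting point."""
    by_ip = defaultdict(list)
    by_domain = defaultdict(list)
    for s in signups:
        by_ip[s["ip"]].append(s["username"])
        by_domain[s["email"].rsplit("@", 1)[-1]].append(s["username"])
    return {
        "ips": {ip: u for ip, u in by_ip.items() if len(u) >= threshold},
        "email_domains": {d: u for d, u in by_domain.items() if len(u) >= threshold},
    }
```

    An account that trips both filters at once (shared IP and shared email domain) would be an especially strong candidate for review.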

    Might also want to have some kind of clearinghouse for sharing information among instance admins about abuse cases.

    One other point:

    I would recommend pre-emptively banning as many bot accounts as possible,

    A bot is not intrinsically a bad thing. For example, I was suggesting yesterday that it would be neat if there was a bot running that posted equivalent nitter.net links in response to comments providing twitter.com links, for people who want to use those. There were a number of legitimately-helpful bots that ran on Reddit – I personally got a kick out of the haiku bot, that mentioned to a user when their comment was a haiku – and legitimately-helpful bots that run on IRC.

    Though perhaps it would be a good idea to either adopt a naming convention (e.g. bot usernames must end in “Bot”) or have some other way for bots to disclose that they are bots and to provide contact information for a human, in case they malfunction and start causing problems.
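    If such a convention existed, checking it would be cheap. The convention itself and the profile field below are both hypothetical:

```python
def classify_account(username, profile):
    """Classify an account under a hypothetical disclosure convention:
    bot usernames end in "Bot" and list a human contact in the profile."""
    if not username.endswith("Bot"):
        return "human-or-undisclosed"
    if profile.get("contact"):
        return "disclosed-bot"
    return "bot-missing-contact"  # malfunctioning bot with no one to ping
```

    An instance could then surface `bot-missing-contact` accounts to admins for follow-up.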

    But if someone is signing up hordes of them, then, yeah, that’s probably not a good actor. Shouldn’t need a ton of accounts for any legit reason.