Here you can see a two-day-old post warning about the dangers of not using email/captcha verification: https://lemmy.ml/post/1345031

And here are the stats for the Lemmy platform, which show that we gained 200 000 users in 2 days: https://lemmy.fediverse.observer/dailystats

Another tracking site showing the same explosion in users: https://the-federation.info/platform/73

What do you think? Is it some sort of a bug or do people run bot farms?

Edit2: It’s now been 3 days, and we went from 150 000 user accounts 3 days ago to 700 000 user accounts today, making it 550 000+ bot accounts and counting. Almost 80% of the accounts on Lemmy are now bots, and this may end up being a very serious issue for the platform once they become active.

Edit3: It’s now the 4th day of the attack, and the number of accounts on Lemmy has almost reached 1 200 000. Almost 90% of the total userbase are now bots.

Edit 3.1: My numbers are outdated; there are currently 1 700 000 accounts, which makes it even worse: https://fedidb.org/software/lemmy
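The bot-share percentages in the edits above follow from simple arithmetic, assuming the ~150 000 accounts that existed before the signup explosion are organic. A quick sketch to check them:

```python
# Sanity check of the bot-share figures above, assuming the ~150 000
# accounts that existed before the attack are organic users.
BASELINE = 150_000  # accounts before the signup explosion

def bot_share(total_accounts: int, baseline: int = BASELINE) -> float:
    """Fraction of accounts presumed to be bots: everything above baseline."""
    return (total_accounts - baseline) / total_accounts

print(f"Day 3: {bot_share(700_000):.1%}")    # ~78.6%, i.e. "almost 80%"
print(f"Day 4: {bot_share(1_200_000):.1%}")  # ~87.5%, i.e. "almost 90%"
print(f"Later: {bot_share(1_700_000):.1%}")  # ~91.2%
```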

  • @eric5949 · 24 points · 1 year ago

    Are they doing anything to solve this? Because if not, this platform will die.

    • @[email protected] · 17 points · 1 year ago

      More robust instances will have to defederate instances with a high concentration of bots and monitor their own new users. Maybe also implement email verification or captchas.

      • @[email protected] (OP) · 7 points · edited · 1 year ago

        Instances already have the ability to turn on both captchas and email verification.

        • @ewe · 9 points · 1 year ago

          There are almost 1000 Lemmy instances already. Getting individual admins to fix their signup settings so that they mandate a CAPTCHA will likely have to be driven at the Lemmy product-update level, together with an agreed-upon defederation list for non-conformant instances.

          And bot farms would be able to spin up new instances themselves, so a blacklist-based federation model (federate with all by default except x, y, and z) isn’t going to be viable. There’s going to have to be a whitelist (federate only with a, b, and c), and maintaining that as new instances get added will be problematic without an overarching way of pushing updates of known “good” instances automatically.
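The blacklist vs. whitelist distinction the comment draws can be sketched in a few lines. This is only an illustration of the two policies, not Lemmy's actual federation code, and the instance names are made up:

```python
# Minimal sketch of the two federation models discussed above.
# Instance names are hypothetical; this is not Lemmy's implementation.

def blocklist_allows(instance: str, blocked: set[str]) -> bool:
    """Blacklist model: federate with everyone except explicitly blocked instances."""
    return instance not in blocked

def allowlist_allows(instance: str, allowed: set[str]) -> bool:
    """Whitelist model: federate only with explicitly approved instances."""
    return instance in allowed

blocked = {"botfarm.example"}
allowed = {"lemmy.ml", "beehaw.org"}

# A brand-new bot instance slips through a blocklist by default...
print(blocklist_allows("fresh-bots.example", blocked))  # True
# ...but is rejected by an allowlist until an admin approves it.
print(allowlist_allows("fresh-bots.example", allowed))  # False
```

This is exactly why the comment argues the blacklist model can't keep up: the attacker only needs a new domain, while the defenders need to notice and list it.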

          • @gyro · 1 point · edited · 1 year ago

            Yeah, this is exactly what has to be done, and ASAP. Hope people on GitHub have the same mindset.

            • @[email protected] · 1 point · 1 year ago

              I do not, and that comment might put you on the list of people behind this attack.

              It could put me on that list as well.

              Are bots creating accounts? Yeah, most definitely. Do we have the tools to fight it? No, we do not.

              The only tool would be predicting behavior and, for example, adding an additional question/action a few days after registering.

    • @[email protected] · 4 points · edited · 1 year ago

      Anyone can spin up an instance and create a bajillion bots. That doesn’t matter at all. You can’t solve that while being open source.

      The question is: is whoever is doing this USING the bots? Doesn’t seem like it yet. And doing it this way would be stupid as well; those bot instances would just get insta-blocked.