Social media platforms like Twitter and Reddit are increasingly infested with bots and fake accounts, leading to significant manipulation of public discourse. These bots don’t just annoy users—they skew visibility through vote manipulation. Fake accounts and automated scripts systematically downvote posts opposing certain viewpoints, distorting the content that surfaces and amplifying specific agendas.

Before coming to Lemmy, I was systematically downvoted by bots on Reddit for completely normal comments that were neutral and not at all controversial. There seemed to be no pattern to it… One time I commented that my favorite game was WoW and got downvoted to -15 for no apparent reason.

For example, a bot on Twitter that made API calls to GPT-4o ran out of funding and started posting its prompts and system information publicly.

https://www.dailydot.com/debug/chatgpt-bot-x-russian-campaign-meme/

Example shown here

Bots like these probably number in the tens or hundreds of thousands. Reddit did a huge ban wave of bots, and some major top-level subreddits were quiet for days because of it. Unbelievable…

How do we even fix this issue or prevent it from affecting Lemmy??

  • @Metz · 2 points · 11 days ago

    Long before cryptocurrencies existed, proof-of-work was already being used to hinder bots. For every post, vote, etc., a cryptographic task has to be solved by the device used for it. Imperceptibly fast for the normal user, but for a bot trying to perform hundreds or thousands of actions in a row, a really annoying speed bump.

    See e.g. https://wikipedia.org/wiki/Hashcash
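
    A minimal sketch of how such a check could work (hypothetical names and numbers, not anything Lemmy actually implements): the client burns CPU finding a nonce, while the server verifies it with a single hash.

    ```python
    # Hashcash-style proof-of-work sketch (illustrative only).
    # The client must find a nonce so that sha256(payload + nonce)
    # has at least `difficulty` leading zero bits; the server only
    # has to compute one hash to verify it.
    import hashlib
    import itertools

    def leading_zero_bits(data: bytes) -> int:
        digest = int.from_bytes(hashlib.sha256(data).digest(), "big")
        return 256 - digest.bit_length()

    def solve(payload: bytes, difficulty: int) -> int:
        """Client side: brute-force a nonce (cost grows exponentially with difficulty)."""
        for nonce in itertools.count():
            if leading_zero_bits(payload + str(nonce).encode()) >= difficulty:
                return nonce

    def verify(payload: bytes, nonce: int, difficulty: int) -> bool:
        """Server side: a single hash, essentially free to check."""
        return leading_zero_bits(payload + str(nonce).encode()) >= difficulty

    nonce = solve(b"post:hello-world", difficulty=16)   # ~65k hashes on average
    assert verify(b"post:hello-world", nonce, difficulty=16)
    ```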

    Proof-of-work combined with more classic barriers such as CAPTCHAs (especially image recognition, which is still expensive at scale despite the advances in AI) should at least present a first major obstacle.

    • @[email protected]
      link
      fedilink
      English
      611 days ago

      Why resort to an expensive decentralized mechanism when we already have a client-server model? We can just implement rate-limiting on the server.
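
      To make that concrete, here’s a rough per-user token-bucket sketch (made-up names and limits, not Lemmy’s actual code):

      ```python
      # Hypothetical per-user token bucket for posts/votes on the server.
      import time

      class TokenBucket:
          def __init__(self, rate_per_min: float, burst: int):
              self.rate = rate_per_min / 60.0          # tokens refilled per second
              self.capacity = burst                    # short bursts are allowed
              self.tokens = float(burst)
              self.last = time.monotonic()

          def allow(self) -> bool:
              now = time.monotonic()
              self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
              self.last = now
              if self.tokens >= 1:
                  self.tokens -= 1
                  return True
              return False                             # reject or delay the action

      buckets: dict[str, TokenBucket] = {}             # user_id -> bucket

      def allow_action(user_id: str) -> bool:
          bucket = buckets.setdefault(user_id, TokenBucket(rate_per_min=10, burst=5))
          return bucket.allow()
      ```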

      • @Metz · 5 points · 11 days ago

        Can’t this simply be circumvented by the attackers operating several Lemmy servers of their own? That way they can pump as many messages into the network as they want. But with PoW, the network would only accept messages that work was done for.

        • @[email protected]
          link
          fedilink
          English
          110 days ago

          Rate-limiting could also be applied at the federation level, but I’m less sure of what the implementation would look like. Requiring filters on a per-account basis might be resource intensive.
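
          One way I could imagine it (just a sketch, field names and limits are guesses): key the limit on the sending instance’s domain at the ActivityPub inbox rather than on individual accounts.

          ```python
          # Sliding-window rate limit per sending instance at the
          # federation inbox (illustrative only).
          import time
          from collections import defaultdict, deque
          from urllib.parse import urlparse

          WINDOW_SECONDS = 60
          MAX_ACTIVITIES_PER_WINDOW = 600                  # made-up limit

          recent: dict[str, deque] = defaultdict(deque)    # domain -> recent timestamps

          def accept_activity(activity: dict) -> bool:
              domain = urlparse(activity["actor"]).netloc  # e.g. "lemmy.example"
              now = time.monotonic()
              window = recent[domain]
              while window and now - window[0] > WINDOW_SECONDS:
                  window.popleft()                         # forget old deliveries
              if len(window) >= MAX_ACTIVITIES_PER_WINDOW:
                  return False                             # drop or defer this activity
              window.append(now)
              return True
          ```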

    • @[email protected]
      link
      fedilink
      English
      510 days ago

      The issue I have with this is that users now need to “pay” (with compute time) to speak their mind. It would be similar to having to pay to vote in political elections: it favors the rich. A poor user might not be able to afford an additional $20 a month on their electricity bill, but a large agency (state-sponsored, or pushing corporate agendas) might have $1,000,000.

      • @Metz · 1 point · edited · 10 days ago

        We’re talking about fractions of a cent per post here. Of course, this all needs to be worked out in detail, and the variables and scaling need to be calculated. For someone who posts only 2–3 times a day, cost and delay are practically immeasurably low. But if you start pushing out 100 posts per minute, the difficulty of the PoW calculation goes up.

        A delay of a fraction of a second to do the PoW for a single post is not a problem. But a spam bot that is suddenly limited to making 1 post per minute instead of 100 makes a huge difference, and the scaling could drive up the price even for someone with deep pockets.
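
        To make the scaling concrete, something like this (every number here is a pure guess):

        ```python
        # Brainstorm-level sketch: PoW difficulty scales with recent activity.
        def pow_difficulty(posts_last_hour: int) -> int:
            base = 16                                    # sub-second for a normal user
            if posts_last_hour <= 5:
                return base
            # one extra bit (doubling the work) per ~10 additional posts per hour
            return base + min(16, (posts_last_hour - 5) // 10 + 1)

        # 3 posts/hour   -> 16 bits (cheap)
        # 100 posts/hour -> 26 bits, i.e. ~1000x more work per post
        ```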

        But I’m not an expert in this field. I only know that spam bots and the like are a problem almost as old as the Internet, and that there have been countless attempts to solve it so far, all of which have more or less failed. But maybe we can find a combination that works for our specific case.

        Of course, there are still a lot of things to clarify. How do we stop someone from constantly creating new accounts, for example?

        Would we have to start with a harder difficulty for new users to counteract this?

        Do we need some kind of reputation system?

        How do we tune these parameters accurately enough not to drive away new users while still fulfilling their purpose?

        But as said, not an expert. Just brainstorming here.

      • y0kai · 1 point · 10 days ago

        I see it more as a tax. While you can evade taxes in a political system, you’re still supposed to pay them if you want to vote.