The best part of the fediverse is that anyone can run their own server. The downside of this is that anyone can easily create hordes of fake accounts, as I will now demonstrate.

Fighting fake accounts is hard, and most fediverse implementations currently have no effective way of filtering them out. I’m sure the developers will step in if this becomes a bigger problem. Until then, remember that votes are just a number.

  • @simple · 42 points · 1 year ago

    Reddit had ways to automatically catch people trying to manipulate votes though, at least the obvious ones. A friend of mine posted a Reddit link in our group for everyone to upvote and got temporarily suspended for vote manipulation about an hour later. I don’t know if something like that can be implemented in the fediverse, but some people on GitHub suggested a way for instances to share with other instances how trusted or distrusted a user or instance is.

    • @cynar · 37 points · 1 year ago

      An automated trust rating will be critical for Lemmy in the longer term. It’s the same arms race email has to fight. There should be a linked trust system of both instances and users: the instance ‘vouches’ for its users’ trust scores, but if other instances collectively disagree, the trust score of the instance itself takes a hit. Other instances can then use this information to judge how much to allow from users on that instance.
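
      In code, that vouch-and-penalize loop might look roughly like the sketch below. This is a minimal toy model assuming a single scalar trust score per user and per instance; every name and constant is hypothetical, not anything Lemmy implements:

      ```python
      # Hypothetical sketch of a linked trust system: instances vouch for
      # their users, and vouching dishonestly costs the instance its own
      # trust score, which discounts everything else it vouches for.

      class Instance:
          def __init__(self, name: str, trust: float = 0.5):
              self.name = name
              self.trust = trust       # 0.0 = distrusted .. 1.0 = trusted
              self.user_scores = {}    # user -> score this instance vouches for

          def vouch(self, user: str) -> float:
              """Trust score the home instance claims for one of its users."""
              return self.user_scores.get(user, 0.5)

      def effective_trust(home: Instance, user: str) -> float:
          """What a remote instance believes: the vouched score,
          discounted by how much the home instance itself is trusted."""
          return home.vouch(user) * home.trust

      def report_disagreement(home: Instance, user: str, observed: float,
                              rate: float = 0.1) -> None:
          """If other instances collectively observe behaviour that differs
          from the vouched score, the home instance's own trust takes a hit."""
          gap = abs(home.vouch(user) - observed)
          home.trust = max(0.0, home.trust - rate * gap)

      # Example: vouching highly for a spammer costs the instance trust,
      # which in turn discounts every score it vouches for.
      home = Instance("example.social")
      home.user_scores["spammer"] = 0.9
      report_disagreement(home, "spammer", observed=0.1)
      print(effective_trust(home, "spammer"))  # well below the vouched 0.9
      ```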

      • @[email protected] · 3 points · 1 year ago

        LLM bots have made this approach much less effective, though. I can just leave my bots alone for a few months or a year to build up reputation, then automate them so they’re completely indistinguishable from 200 natural-looking users, making my opinion carry 200x the weight. Mostly for free. A person with money could do so much more.

        • @cynar · 1 point · 1 year ago

          It’s the same game as email: an arms race between spam detection and spam detector evasion. The goal isn’t to catch every bot, but to clear out the low-hanging fruit.

          In your case, if another server noticed a large number of accounts working in lockstep, that’s fairly obvious bot-like behaviour (a toy sketch of such a detector follows at the end of this comment). If their home server also noticed the pattern and reported it, lowering those users’ trust ratings, then it won’t be dinged as harshly. If it reports that all is fine, then the instance itself is assumed to be involved.

          If you control the instance, you can make it lie, but that downgrades the instance’s score. If it’s someone else’s, there is an incentive not to become a bot farm, or at least to report honestly to the rest.

          This is basically what happens with email. It’s FAR from perfect, but a lot better than nothing. I believe 99+% of all emails sent are spam; almost all get blocked, and the spammers have to work to get them through.
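
          To make the lockstep case concrete, a detector could compare vote histories pairwise and flag near-identical ones. A toy sketch; the data shape and threshold are illustrative assumptions, and a real system would also need to weight by timing and volume, since two genuine heavy users of a small community can overlap a lot too:

          ```python
          # Toy lockstep detector: flag account pairs whose vote histories
          # overlap almost completely. Threshold and data shape are
          # illustrative assumptions only.
          from itertools import combinations

          def lockstep_pairs(votes: dict[str, set[str]], threshold: float = 0.9):
              """votes maps account -> set of "post:direction" strings it cast.
              Returns account pairs whose Jaccard similarity is suspicious."""
              flagged = []
              for a, b in combinations(votes, 2):
                  union = votes[a] | votes[b]
                  if not union:
                      continue
                  similarity = len(votes[a] & votes[b]) / len(union)
                  if similarity >= threshold:
                      flagged.append((a, b, similarity))
              return flagged
          ```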

      • fmstrat · 1 point · 1 year ago

        This will be very difficult. With Lemmy being open source (which is good), bot makers can just avoid the pitfalls they see in the system (which is bad).

    • @70ms · 23 points · edited · 1 year ago

      deleted by creator

      • @[email protected] · 8 points · 1 year ago

        That’s such a hilariously bad metric for detecting a bot network too. It wouldn’t even work to detect a real one, so all that policy ever did was annoy real users.

      • TheSaneWriter · 5 points · 1 year ago

        Hearing that, I wonder if they were using an IP-address-based system. That would cause real problems for people using a VPN, but it wouldn’t surprise me.
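
        As a rough illustration of what an IP-based check might look like (purely speculative; not Reddit’s or Lemmy’s actual logic), it only takes a few lines to flag posts where several accounts vote from one address:

        ```python
        # Naive same-IP vote check, as speculated above. Illustrative only;
        # it would flag a shared household or VPN exit node just as readily
        # as an actual bot farm.
        from collections import defaultdict

        def same_ip_voters(vote_log, min_accounts: int = 3):
            """vote_log rows are (post_id, account, ip) tuples.
            Returns the (post_id, ip) groups with suspiciously many accounts."""
            groups = defaultdict(set)
            for post_id, account, ip in vote_log:
                groups[(post_id, ip)].add(account)
            return {key: accounts for key, accounts in groups.items()
                    if len(accounts) >= min_accounts}
        ```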

      • esty · 7 points · 1 year ago

        Nope, I tried manipulating votes from Apollo once and got a warning.

    • @[email protected] · 5 points · 1 year ago

      I got that message too when switching accounts to vote several times. They can probably see it’s all coming from the same IP.