The best part of the fediverse is that anyone can run their own server. The downside of this is that anyone can easily create hordes of fake accounts, as I will now demonstrate.
Fighting fake accounts is hard and most implementations do not currently have an effective way of filtering out fake accounts. I’m sure that the developers will step in if this becomes a bigger problem. Until then, remember that votes are just a number.
The nice thing about the federated universe is that, yes, you can bulk-create user accounts on your own instance - and that server can then be defederated by other servers once it becomes obvious that it's going to create problems.
It’s not a perfect fix and, as this post demonstrated, it’s only really effective after a problem has been identified. For cross-server vote manipulation, though, a server could act proactively: if it detects that, say, 99% of new upvotes are coming from an instance created yesterday with one post, it could at least flag that for a human to review.
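That kind of flagging heuristic is simple enough to sketch. Here's a rough Python version, assuming a hypothetical shape for the vote data (the function name, thresholds, and record format are all made up for illustration, not anything Lemmy actually implements):

```python
from datetime import datetime, timedelta

def flag_suspicious_votes(votes, now, min_age_days=7, threshold=0.9):
    """Flag a post for human review when almost all of its upvotes
    come from instances younger than min_age_days.

    votes: list of (voter_instance, instance_created_at) tuples.
    """
    if not votes:
        return False
    young = sum(
        1 for _, created_at in votes
        if now - created_at < timedelta(days=min_age_days)
    )
    return young / len(votes) >= threshold

# Toy example: 99 votes from a day-old instance, 1 from an old one.
now = datetime(2023, 7, 1)
votes = [("sock.example", datetime(2023, 6, 30))] * 99 + \
        [("lemmy.world", datetime(2020, 1, 1))]
print(flag_suspicious_votes(votes, now))  # True
```

Real detection would want rate baselines per community rather than a fixed threshold, but the idea is the same: the skew itself is the signal.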
It actually seems like an interesting problem to solve. Instance runners have the SQL database with all the voting records; finding manipulative instances seems a bit like a machine learning problem to me.
There’s an XKCD for that: https://xkcd.com/810/
Don’t really need machine learning, just well-built queries. Probably peek at the top pairs of users that comment on the same posts. If the same users are always commenting/voting on each other’s posts across every community, it points to manipulation.
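The pair-counting part of that query is basically a self-join on the votes table. A rough Python sketch of the same idea, with a made-up data shape (a dict of post id to the set of users who voted on it):

```python
from collections import Counter
from itertools import combinations

def top_covoting_pairs(post_voters, n=5):
    """Count, for every pair of users, how many posts both voted on.
    Pairs that co-occur on nearly every post are manipulation candidates.

    post_voters: dict mapping post id -> set of usernames.
    """
    pair_counts = Counter()
    for voters in post_voters.values():
        # sorted() so ("alice", "bob") and ("bob", "alice") count as one pair
        for pair in combinations(sorted(voters), 2):
            pair_counts[pair] += 1
    return pair_counts.most_common(n)

# Toy data: alice and bob vote together on every post.
post_voters = {
    1: {"alice", "bob", "carol"},
    2: {"alice", "bob"},
    3: {"alice", "bob", "dave"},
}
print(top_covoting_pairs(post_voters, 1))  # [(('alice', 'bob'), 3)]
```

In practice you'd normalize by how active each user is, otherwise the top pairs are just the two most prolific voters on the instance.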
One other thing is that you can bulk-create your own instances, and that’s a lot more effort to defederate. People could be creating those instances right now and just start using them after a year; at least they have incurred some costs during that…
I believe abuse management in openly federated systems (e.g. Lemmy, Mastodon, Matrix) is still an unsolved problem. I doubt good solutions will arrive before they become popular enough to attract commercial spammers.
Well if all the upvotes are coming from tiny instances, that’d be a good indicator. You can’t stop it entirely, but you can at least make it a lot harder. I do wish that instances could’ve been somewhat more “randomly” assigned to new users, as that would make any bias in voting sources a huge and obvious red flag.
Then they will just distribute their bots across otherwise legit servers, and at that point defederation is no longer a viable solution.
One other problem is real human troll farms.
If they can do that, they could’ve done it on a traditional site anyway
“Legit” instances are able to moderate/control the spam coming from their users.