It is widely reported that Lemmy, the federated alternative to Reddit, has been the target of a massive bot sign-up wave, with possibly more than a million fake accounts created.
As of now, the consensus seems to be [more below] that these bots are dormant, possibly having been put in place to wreak havoc at a later date.
A strength of the fediverse, of which Lemmy is a part, is its decentralised nature: it is spread over at least forty thousand servers (around one thousand of them running Lemmy). But this is also a weakness, because coordinating action against a bot wave like this one becomes a far more complex task.
A further complication is the recent widespread availability of AI (LLM) tools, with which plausibly human content can be generated in vast quantities by anyone with relatively cheap hardware and the desire to do so. Should the recent wave of fraudulent accounts be fed content from an ‘AI farm’, Lemmy could easily end up ridiculed as a notoriously unreliable platform.
As such, it is imperative that a concerted effort is made across the Lemmyverse to purge fraudulent accounts.
[from above] There is a possibility that such a fake AI posting operation has already begun; after all, slipping vaguely relevant content into Lemmy threads would be very difficult to spot in small quantities.
Is lemmy doing enough to protect itself from the bots?
Valid concerns IMO, I was shocked to see how advanced those GPT bots are.
I was in a Discord chat and there were 4-5 of them chatting with people there; at some point they started arguing with each other. If they weren’t clearly marked as bots, I probably wouldn’t have been able to recognize them.
Hilarious on one hand, scary on the other; I can’t imagine how much damage they could do in malicious hands.
My first contact with GPT was the subredditsimulatorGPT. At least that was contained.
I remember thinking just that was freaky; ChatGPT is even crazier.
Really? From what I’ve seen, it becomes obvious after 3-4 back-and-forth exchanges that they’re bots. But that was a few months ago, so things may have changed since then.
They get better by the day.
Lemmy, the software, isn’t the solution.
Anyone can create an instance running Lemmy, and that instance is then federated with all other instances. This allows the owner of that instance to create as many bots as they please and have them post to other instances.
Where an instance is being used solely for spam, it should be defederated. Again, this is up to each individual instance to do.
The bot accounts can be banned from instances, but more will just pop up instead.
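To illustrate the mechanics of defederation: at bottom, it just means refusing federated activities whose actor lives on a blocked instance. This is a hand-rolled sketch, not Lemmy’s actual implementation (Lemmy is written in Rust), and all the names here are made up:

```python
# Minimal sketch of instance-level defederation: drop any incoming
# federated activity whose actor belongs to a blocked instance.
# Hypothetical names only; Lemmy's real logic lives in its Rust
# federation layer.
from urllib.parse import urlparse

BLOCKED_INSTANCES = {"spam-instance.example", "bot-farm.example"}

def accept_activity(activity: dict) -> bool:
    """Return True if the activity's actor is on an allowed instance."""
    actor = activity.get("actor", "")
    domain = urlparse(actor).hostname or ""
    return domain not in BLOCKED_INSTANCES

incoming = {"actor": "https://bot-farm.example/u/bot123", "type": "Create"}
if not accept_activity(incoming):
    print("rejected: actor is on a defederated instance")
```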
We had a small wave of what really looked like bot accounts, and purged them from the database.
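For anyone wondering what such a purge can look like in practice, here’s a rough sketch against a Lemmy-style Postgres database. The table and column names, and the sign-up window, are assumptions modelled on Lemmy’s schema rather than anything guaranteed to match your version; review the matches by hand and take a backup before deleting anything:

```python
# Rough sketch of flagging dormant bot accounts for a purge.
# Assumed schema (person, local_user, post, comment, creator_id,
# published) is modelled on Lemmy's but unverified -- check it
# against your own instance first. The date window is illustrative.
import psycopg2

conn = psycopg2.connect("dbname=lemmy user=lemmy")
with conn, conn.cursor() as cur:
    # Local accounts created during the sign-up burst that have never
    # posted or commented: the classic dormant-bot signature.
    cur.execute(
        """
        SELECT p.id, p.name
        FROM person p
        JOIN local_user lu ON lu.person_id = p.id
        WHERE p.published BETWEEN %s AND %s
          AND NOT EXISTS (SELECT 1 FROM post    WHERE creator_id = p.id)
          AND NOT EXISTS (SELECT 1 FROM comment WHERE creator_id = p.id)
        """,
        ("2023-06-18", "2023-06-22"),
    )
    suspects = cur.fetchall()
    print(f"{len(suspects)} dormant accounts from the burst window")
    # Review the list manually before deleting or banning anything.
conn.close()
```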
I think the power of the fediverse will be that even if these accounts do something in the future (I suspect they’re just astroturf accounts that will be used once aged), it’s not like trying to persuade a behemoth corporation to take action.
It’ll be a few thousand obvious accounts on most instances, and any sensible admin will easily be able to prune them. The account-to-admin ratio is that much smaller, meaning going through with a fine-tooth comb isn’t impossible.
And any sites that don’t take reasonable action will likely get defederated.
It’s a nice two-step solution: a local action, and a backup global block.
“It is widely reported that Lemmy, the federated alternative to Reddit, has been the target of a massive bot sign-up wave”
Sorry, I’m out of the loop. Where has this been reported?
There have been a few posts on it, including one where the OP intentionally voted the post up with an army of bots to demonstrate the problem.
If you run an instance with closed registration, you’ll probably receive a bunch of incredibly similar, clearly AI-generated applications from bot accounts. They seem to prefer existing instances over spinning up bot instances of their own, in order to avoid defederation.
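Those near-duplicate applications lend themselves to automated triage. A minimal sketch, assuming you’ve exported the application texts somewhere scriptable (the sample answers and the threshold below are invented):

```python
# Flag pairs of registration answers that are nearly identical, since
# AI-generated applications tend to share phrasing. Purely illustrative;
# the texts stand in for whatever your admin interface shows you.
from difflib import SequenceMatcher
from itertools import combinations

applications = {
    "user_a": "I love open source and want to join this great community!",
    "user_b": "I love open-source and want to join this great community.",
    "user_c": "Longtime Reddit user looking for a new home after the API mess.",
}

THRESHOLD = 0.9  # tune to taste; higher means fewer false positives

for (name1, text1), (name2, text2) in combinations(applications.items(), 2):
    ratio = SequenceMatcher(None, text1.lower(), text2.lower()).ratio()
    if ratio >= THRESHOLD:
        print(f"suspiciously similar: {name1} / {name2} ({ratio:.2f})")
```

A pairwise comparison like this scales poorly past a few hundred applications, but at typical instance volumes it is more than enough to surface the copy-paste clusters.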
[email protected] has a lot of discussion about it.
https://lemmy.ml/post/1444329