Reddit's CEO says facial verification may be introduced, ostensibly to prevent bots.
We all know how dangerous that can be, but most Reddit users will probably just accept it.
And that's even though a great free alternative, Lemmy, is right under their noses, one that's many times better than its competitor.
I wish more people would discover Lemmy, but that’s unlikely.


I don’t know where this is coming from. Nobody is being hired. If anything I’m becoming more anti-mod lately. I feel like mods put boxes around things, and that sucks the oxygen out of the room fast. But that’s a different discussion.
Maybe I’m reading this wrong, but to clarify: I am not saying we need to build our own bot detection, though it would be a nice-to-have eventually. I am saying we should be crowdsourcing our collective anger and ADHD or autism or whatever drives us to post bean moth lemmy slop, and instead focus it on cataloguing the worst bot infestations. There are patterns. Bots are not random enough that they can’t be identified with large crowdsourced efforts. They’re also in their infancy, which means it will only get harder going forward.
You or I aren’t able to accurately tell right now. Have you ever seen the Sinclair news video? The one where every news station repeats the same dialogue. Could you or I flip on the news any given day and call that out? Unlikely. But we can logically understand that it is something that happens. It becomes obvious there is a script only when you collect the data and begin to analyze it. That is what I’m saying we need to figure out and gamify.
Name generation, text, patterns. At the start it won’t be accurate. But as more data is collected it’ll become obvious. If the bots were that good, these websites would have left their APIs open. But they closed them so we can’t collect this data. I’m the type of person who, when powerful people do something like that, wants to know why and work around it. It’s not a coincidence that they locked their sites up right when people were given tools where anyone could collect data and feed it into AI for analysis.
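As a toy illustration of the kind of name-generation pattern a crowdsourced dataset could surface: a minimal Python sketch, where the regex and sample usernames are made up for illustration (real auto-generated username formats vary, and real detection would combine many signals, never a single regex):

```python
import re

# Hypothetical signal: many platforms suggest usernames shaped like
# Word_Word1234. This regex matches that one made-up style only.
GENERATED = re.compile(r"^[A-Z][a-z]+[-_][A-Z][a-z]+[-_]?\d{2,4}$")

def name_score(names):
    """Fraction of names matching the auto-generated pattern."""
    hits = sum(1 for n in names if GENERATED.match(n))
    return hits / len(names)

# Made-up example accounts from a hypothetical flagged thread:
suspect = ["Ornery_Platypus9913", "Calm_Cactus_4421", "dave", "Witty_Walrus22"]
print(name_score(suspect))  # 3 of 4 match -> 0.75
```

On its own a score like this proves nothing; the point is that once enough flagged accounts are pooled, even crude signals like this start separating clusters from background noise.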
Our failure to act while the greatest opportunities are right in front of us, slipping away, is a tragedy of this generation.
That’s what “being a moderator” is, mate. You want hundreds of thousands of moderators.
You’re wrong.
You just said:
So, which is it?
There’s a massive difference between local news stations receiving a script to read out, and a bot farm having a “be negative, unfriendly, sow chaos” instruction.
So, it just won’t work? Got it.
I don’t think you understand what you’re talking about. Don’t get me wrong, I’m not trying to be contrarian here, I just honestly think that your idea of “AI bots” is kind of like “we have prepared one million sentences, and now our bots will be picking between them to generate whole posts on social networks”.
I mean, sure, there can be patterns, like the whole “LinkedIn post” style, where most of the time it’s fairly obvious that you’re reading AI-generated slop… But that’s not what state entities or even just hackers use. They have access to much more sophisticated content.
Reddit’s API is no longer open. Didn’t do a thing to stop bots.
You don’t need however many API keys to collect that kind of data. At least not from Reddit.
Your proposed action is the equivalent of Sisyphus and his stone. Because you really seem to be forgetting that the AI tech is getting better all the time. And that any AI-detection actions you take feed that process. “Oh, they’ve detected these posts? OK, let’s tweak the algo until we get through and then flood them with our content”.
Let’s even assume that you somehow pull it off and get a 100% detection rate as of right now. Six months down the line that will go down to 20%. Etc. etc. And you’ll be catching thousands of legitimate users in the crossfire.
An anonymous “proof of humanity” token solves all AI issues without anyone having to spend billions on research and manpower.
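For context, the usual cryptographic building block behind an anonymous token like that is a blind signature: the issuer signs a token it never sees, so later verification can’t be linked back to who was issued what. A toy RSA blind-signature sketch, with textbook-sized numbers purely for illustration (real systems use 2048-bit keys and padded hashes):

```python
# Issuer's RSA key (tiny textbook parameters, NOT secure)
p, q = 61, 53
n = p * q                     # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent

m = 42                        # user's token value, kept secret from issuer
r = 7                         # random blinding factor, coprime with n

blinded = (m * pow(r, e, n)) % n        # user sends this to the issuer
blind_sig = pow(blinded, d, n)          # issuer signs without seeing m
sig = (blind_sig * pow(r, -1, n)) % n   # user unblinds the signature

# Anyone can now verify the token without learning who it was issued to:
assert pow(sig, e, n) == m
```

Whether this actually "solves all AI issues" is another question, since someone still has to attest humanity at issuance time, but the unlinkability part is well-understood machinery, not billion-dollar research.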