most of the time you'll be talking to a bot there without even realizing. they're gonna feed you products and ads interwoven into conversations, and the AI can be controlled so its output reflects corporate interests. advertisers are gonna be able to buy access and run campaigns. based on their input, the AI can generate...
Just wait until the captchas get too hard for the humans, but the AI can figure them out. I’ve seen some real interesting ones lately.
holy fuck dude hahahahaha
It’s a famous quote. Google isn’t helpful anymore, except to provide this Reddit link: https://www.reddit.com/r/BrandNewSentence/comments/jx7w1z/there_is_considerable_overlap_between_the/.
deleted by creator
I’ve seen many where the captchas are generated by an AI…
It’s essentially one set of humans programming an AI to prevent an attack from another AI owned by another set of humans. Does this technically make it an AI war?
An AI Special Operation
Hey now, this thread is hitting a little too close to home.
That concept is already used regularly for training. Check out generative adversarial networks (GANs).
Adversarial training is pretty much the MO for a lot of the advanced machine learning algorithms you’d see for this sort of a task. Helps the ML learn, and attacking the algorithm helps you protect against a real malicious actor attacking it.
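For anyone curious what that generator-vs-discriminator setup actually looks like, here’s a rough toy sketch. It assumes PyTorch, and the tiny networks and fake “data” (numbers drawn from a normal distribution) are invented purely to show the adversarial loop, not anything production-grade:

```python
# Minimal GAN sketch: the generator learns to fake samples from N(5, 2)
# while the discriminator learns to tell real from fake.
import torch
import torch.nn as nn

# Generator: turns random noise into fake "data" (here, single numbers).
gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: tries to tell real samples from generated ones.
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2 + 5   # "real" data: samples around 5
    noise = torch.randn(64, 8)
    fake = gen(noise)

    # 1) Train the discriminator: push real toward 1, fake toward 0.
    d_loss = loss_fn(disc(real), torch.ones(64, 1)) \
           + loss_fn(disc(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator: try to fool the discriminator into saying 1.
    g_loss = loss_fn(disc(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(gen(torch.randn(5, 8)).detach())  # samples should drift toward ~5
```

The discriminator gets better at spotting fakes, which forces the generator to get better at faking — basically the same bot-vs-detector arms race being joked about above.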
An AI police action…
The best kind
So what you’re saying is that we should train an AI to detect AIs, and that way only the human beings could survive on the site. The problem is how you train that AI. It would need some sort of meta interface where it could analyze the IP address of every single person that posts and the time frames in which they post.
It would make some sense that a large portion of bots would be run from relatively similar locations IP-wise, since it’s a lot easier to run a large bot farm from a data center than from 1,000 different people’s houses.
You could probably filter out the most egregious bot farms by doing that, but some would still slip through.
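Very roughly, that “filter by IP neighbourhood” idea could look like the sketch below. The /24 grouping, the threshold, and the post structure are all made-up illustrations, not how any real site does it:

```python
# Toy sketch of flagging accounts that post from a crowded IP "neighbourhood"
# (grouping by /24 prefix; the threshold and data shape are arbitrary examples).
from collections import defaultdict

posts = [
    {"user": "alice",  "ip": "203.0.113.7"},
    {"user": "bot001", "ip": "198.51.100.2"},
    {"user": "bot002", "ip": "198.51.100.3"},
    {"user": "bot003", "ip": "198.51.100.4"},
]

def prefix24(ip: str) -> str:
    """Collapse an IPv4 address to its /24 network, e.g. 198.51.100.0/24."""
    return ".".join(ip.split(".")[:3]) + ".0/24"

users_per_prefix = defaultdict(set)
for post in posts:
    users_per_prefix[prefix24(post["ip"])].add(post["user"])

SUSPICIOUS_USER_COUNT = 3  # made-up threshold for "looks like a farm"
suspects = {
    net: users
    for net, users in users_per_prefix.items()
    if len(users) >= SUSPICIOUS_USER_COUNT
}
print(suspects)  # e.g. {'198.51.100.0/24': {'bot001', 'bot002', 'bot003'}}
```

In practice something this crude would false-positive on shared NATs (universities, mobile carriers) and miss farms hiding behind residential proxies, which is exactly the “some would still slip through” part.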
After that you would need to train it on heuristics to identify the kinds of conversations these bots would have with each other (neither one knowing the other is a bot), given that each of them is running Llama or GPT and the kinds of conversations that tends to start.
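As a toy illustration of that “heuristics on how the bots write” idea, here’s a hand-rolled scorer. The tell-tale phrases and weights are completely invented; a real system would train a classifier on labelled examples rather than hard-code rules like this:

```python
# Crude stylometric heuristic, purely illustrative -- the phrase list
# and weights are made up, not a real detector.
import re

LLM_TELLS = [
    "as an ai language model",
    "i hope this helps",
    "certainly!",
    "in conclusion",
    "it's important to note",
]

def bot_score(text: str) -> float:
    """Return a rough 0..1 'smells like an LLM' score for one comment."""
    lowered = text.lower()
    tell_hits = sum(phrase in lowered for phrase in LLM_TELLS)

    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    # LLM output tends to have eerily uniform sentence lengths.
    uniformity = 0.0
    if len(lengths) > 1:
        mean = sum(lengths) / len(lengths)
        variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
        uniformity = 1.0 / (1.0 + variance)  # high when sentences are all similar

    return min(1.0, 0.3 * tell_hits + uniformity)

print(bot_score("Certainly! It's important to note that cats are great. I hope this helps."))
print(bot_score("lol idk man my cat just knocked a glass off the counter again"))
```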
I guess the next step would be giving people an opportunity to prove that they’re not bots if they ended up accidentally saying something the way a bot would say it, but then you get into the whole “you need to either pay for access or provide government ID” issue, and that’s its own can of worms.
Hell we figured out captchas years ago. We just let you humans struggle with them cuz it’s funny
The captchas that involve identifying letters underneath squiggles I already find nearly impossible - Uppercase? Lowercase? J j i I l L g 9 … and so on….
I’ve already had to switch from the visual ones to the audio ones. Like… how much of a car has to be in the little box? Does the pole count as part of the traffic light?? What even is that microscopic gray blur in the corner??? [/cries in reading glasses]