Summary
Experts warn of rising online racism fueled by X’s generative AI chatbot, Grok, which recently introduced a photorealistic image-generation feature called Aurora.
Racist, fake AI images targeting athletes and public figures have surged, with some depicting highly offensive and historically charged content.
Organizations like Signify and the Center for Countering Digital Hate (CCDH) highlight how easily Grok’s safeguards can be bypassed, exacerbating hate speech.
Critics blame X’s monetization model for incentivizing harmful content.
Sports bodies are working to mitigate abuse, while calls grow for stricter AI regulation and accountability from X.
Before everyone starts chiming in with “stop using Twitter” (and I agree), this is a bigger problem. If Grok can create photorealistic images of public figures, something the AI image generators from Google, Microsoft/OpenAI, and Meta won’t do, this goes way beyond Twitter, because the photos will get spread from there.
And sure, someone could host their own AI image generator on their home setup, but this makes it much, much easier.
I got swamped with downvotes the last time I said this, but I maintain that in the near future we’re going to need to digitally sign authentic images. It’s simply infeasible to police the entire internet in an effort to remove fake ones (especially since they’re made by bad actors who don’t care about any rules anyway), so we need a widespread, easy-to-use way to distinguish real images instead.
Honest question: what will stop someone from getting AI-generated images digitally signed as well? Who will be the authority doing the signing?
I mean a signature that can be checked against a known public key, like GPG.
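For example (a rough sketch using Ed25519 from the Python “cryptography” package; the one-off key and file names are made up for illustration, and a real publisher would use a long-term key):

```python
# Signing side: sign the raw image bytes and publish the public key.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # illustrative one-off key

with open("photo.jpg", "rb") as f:  # hypothetical file
    image_bytes = f.read()

# Ship the signature next to the image file.
with open("photo.jpg.sig", "wb") as f:
    f.write(private_key.sign(image_bytes))

# Publish the public key so anyone can check the signature.
with open("outlet_public_key.pem", "wb") as f:
    f.write(private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    ))
```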
I don’t think that answered my question, but maybe I just don’t understand what you mean.
I could see a world where media outlets and publishers sign their published content to make the source verifiable. As a hypothetical example, AP News could sign photographs taken by its journalists, and if it’s a reputable source that people trust not to create misinformation, then they can trust the signed content.
I don’t really see a way that digital signatures can be applied to content created and posted by untrusted users in order to verify that it isn’t AI-generated or misinformation.
I could see a world where media outlets and publishers sign their published content to make the source verifiable. As a hypothetical example, AP News could sign photographs taken by its journalists, and if it’s a reputable source that people trust not to create misinformation, then they can trust the signed content.
Exactly – it’s a means of attribution. If you see a pic that claims to be from a certain media outlet but it doesn’t match their public key, you’re being played.
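And checking is the easy part; anyone with the outlet’s published key can do it. Continuing the hypothetical file names from the signing sketch above:

```python
# Verification side: anyone can check the file against the outlet's public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.serialization import load_pem_public_key

with open("outlet_public_key.pem", "rb") as f:
    public_key = load_pem_public_key(f.read())
with open("photo.jpg", "rb") as f:
    image_bytes = f.read()
with open("photo.jpg.sig", "rb") as f:
    signature = f.read()

try:
    # Raises InvalidSignature if even one byte of the image was changed.
    public_key.verify(signature, image_bytes)
    print("Matches the outlet's key: this is the file they published.")
except InvalidSignature:
    print("No match: altered, or never came from this outlet.")
```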
I don’t really see a way that digital signatures can be applied to content created and posted by untrusted users in order to verify that it isn’t AI-generated or misinformation, at least not one that won’t be easily abused to defeat the purpose
That’s the point. If you don’t trust the source, why would you trust their content?
Ah okay, we are just describing the same thing 👍 I agree, this will be our future
How does this address the fact that people don’t care whether something is real or fake? You can sign it all you want, but if nobody cares about the signature, you haven’t accomplished anything.
I think we can win back a lot of people by making it easier to prove it’s fake. Right now we’re only asking them to take one source’s word over the other. We don’t need to convince everyone — only to get things back to a normal percentage of village idiots.
They don’t care that it’s fake. The loudest people and the fake accounts will continually post and repost without any consideration for truth.
The loudest people and the fake accounts
Well yeah, they’re the ones deliberately spreading it. I don’t care about them, I care about the uninformed people in the middle who don’t know what’s real anymore.
Already happening and it’s gross, and will erode internet freedom and anonymity. This is advanced internet surveillance and tracking, and they’re using “stopping the spread of AI misinfo” as the excuse.
Soon all images produced in any manner will be able to practically identify the creator. Hiding tracking data in images is fucked up. This is also something Adobe is working on implementing in their tools, so this isn’t going to be just AI images.
https://openai.com/index/understanding-the-source-of-what-we-see-and-hear-online/
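To show how little it takes, here’s a toy version of the general idea: just a PNG text chunk with a made-up provenance tag. The real Content Credentials schemes Adobe and OpenAI are adopting use signed C2PA manifests, not a plain text field, but the effect is the same: the file itself carries data pointing back at its creator.

```python
# Toy illustration only: stash a provenance tag in a PNG's text metadata
# using Pillow. Tag contents and file names are made up.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

meta = PngInfo()
meta.add_text("provenance", "generator=some-model; account=user-12345")

Image.open("generated.png").save("tagged.png", pnginfo=meta)

# Anyone, including every platform the image passes through, can read it back:
print(Image.open("tagged.png").text.get("provenance"))
```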
Already happening and it’s gross, and will erode internet freedom and anonymity.
How? You can remain anonymous and still have a public key. I only need to know that I trust “Zetta”; I don’t need your real name and home address.
Start? Microsoft’s Tay showed us this back in 2016.