Summary

Experts warn of rising online racism fueled by X’s generative AI chatbot, Grok, which recently introduced a photorealistic image feature called Aurora.

Racist, fake AI images targeting athletes and public figures have surged, with some depicting highly offensive and historically charged content.

Organizations like Signify and CCDH highlight Grok’s ability to bypass safeguards, exacerbating hate speech.

Critics blame X’s monetization model for incentivizing harmful content.

Sports bodies are working to mitigate abuse, while calls grow for stricter AI regulation and accountability from X.

  • @[email protected]
    32 days ago

    I could see a world where media outlets and publishers sign their published content in order to make its source verifiable. For a hypothetical example, AP News could sign photographs taken by a journalist, and if it is a reputable source that people trust not to create misinformation, then they can trust the signed content.

    Exactly – it’s a means of attribution. If you see a pic that claims to be from a certain media outlet but it doesn’t match their public key, you’re being played.

    I don’t really see a way that digital signatures can be applied to content created and posted by untrusted users in order to verify that it isn’t AI-generated or misinformation, that won’t be easily abused to defeat the purpose.

    That’s the point. If you don’t trust the source, why would you trust their content?

    • @theunknownmuncher
      2 days ago

      Ah okay, we are just describing the same thing 👍 I agree, this will be our future
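The sign-then-verify scheme the thread describes can be sketched in a few lines. Everything below is illustrative: the toy RSA keypair, the `sign`/`verify` function names, and the "AP photo" bytes are invented for demonstration; a real deployment would use a vetted library and a modern scheme such as Ed25519, never hand-rolled small-number RSA.

```python
import hashlib

# Toy RSA keypair for illustration only (p=61, q=53).
# E*D ≡ 1 mod lcm(p-1, q-1), so signing then verifying round-trips.
N, E, D = 3233, 17, 413

def digest(content: bytes) -> int:
    # Hash the content, reduced into the toy modulus' range.
    return int.from_bytes(hashlib.sha256(content).digest(), "big") % N

def sign(content: bytes, private_exp: int = D) -> int:
    # The outlet signs the content's digest with its private key.
    return pow(digest(content), private_exp, N)

def verify(content: bytes, signature: int, public_exp: int = E) -> bool:
    # Anyone holding the outlet's public key checks that the signature
    # maps back to the content's digest.
    return pow(signature, public_exp, N) == digest(content)

photo = b"AP photo bytes..."   # stand-in for a published photograph
sig = sign(photo)
assert verify(photo, sig)           # authentic content checks out
assert not verify(b"tampered", sig) # altered content fails the check
```

This is exactly the attribution point made above: verification needs only the outlet's public key, so a picture that claims to come from a given outlet but fails the check is not theirs.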