The companies behind the most popular tools prohibit users from creating “misleading” images.

But researchers with the Center for Countering Digital Hate (CCDH) attempted to work around the rules.

Their efforts to make deceptive election-related images were successful 41% of the time.

The CCDH, a campaign group, tested four of the largest public-facing AI platforms: Midjourney, OpenAI’s ChatGPT Plus, Stability.ai’s DreamStudio and Microsoft’s Image Creator.

All four prohibit the creation of misleading images in their terms and conditions, and ChatGPT Plus expressly bars creating images featuring politicians. Several AI firms also say they are working to stop their tools from being used to spread election misinformation.

CCDH researchers, though, were able to create images that could confuse viewers about presidential candidates. One showed Donald Trump being led away by police in handcuffs; another showed Joe Biden in a hospital bed - fabricated images alluding to Mr Trump’s legal problems and to questions about Mr Biden’s age.

  • Flying Squid
    9 months ago

    On the one hand, yes.

    On the other hand, maybe.

    On the third hand…

  • gregorum
    9 months ago

    Authored by Professor Sherlock, published by the University of No Shit

    • @tinwhiskers
      9 months ago

      Wait until they discover what stable diffusion can do, running locally.