An in-depth police report obtained by 404 Media shows how a school, and then the police, investigated a wave of AI-powered “nudify” apps in a high school.

  • @RememberTheApollo
    -6
    9 months ago

    Digitally watermark the image for identification purposes. Hash the creator’s hardware ID, MAC address, and IP address and insert that via steganography or similar means. Just as printers embed a Machine Identification Code (MIC) in every document they print, AI-generated imagery should do the same at this point. It’s not perfect and probably has some undesirable consequences, but it’s better than nothing when trying to track down deepfake creators.

    • @abhibeckert
      5
      edit-2
      9 months ago

      The difference is that it costs billions of dollars to run a company manufacturing printers, and it’s easy for law enforcement to pressure them into not printing money.

      It costs nothing to produce an AI image; you can run this stuff on a cheap gaming PC or laptop.

      And you can do it with open source software. If the software has restrictions on creating abusive material, you can find a fork with that feature disabled. If it has steganography, you can find one with that disabled too.

      You can tag an image to prove a certain person (or camera) took a photo. You can’t stop people from removing that.

    • @General_Effort
      3
      9 months ago

      Honestly, I can’t tell if this is sarcasm or not.

      • @yamanii
        4
        9 months ago

        Samsung’s AI does watermark its images in the EXIF data. Yes, it’s trivial to strip the EXIF, but it’s enough to catch these low-effort bad actors.

        • @General_Effort
          2
          9 months ago

          I think all the main AI services watermark their images (invisibly, not in the metadata). A nudify service might not, I imagine.

          I was rather wondering about the support for extensive surveillance.
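The steganographic watermark proposed in the top comment can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual scheme: the raw pixel buffer, the fingerprint inputs (a stand-in MAC address and a TEST-NET IP), and the least-significant-bit layout are all assumptions for demonstration.

```python
# Minimal LSB-steganography sketch: embed a creator fingerprint into the
# low bit of each pixel value. Pixel buffer and fingerprint are illustrative.

import hashlib

def embed(pixels: list[int], payload: bytes) -> list[int]:
    """Hide payload bits in the least significant bit of each pixel value."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # clear the low bit, then set it
    return out

def extract(pixels: list[int], n_bytes: int) -> bytes:
    """Recover n_bytes hidden by embed()."""
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(n_bytes)
    )

# Hypothetical creator fingerprint: hash of MAC + IP (stand-in values only).
fingerprint = hashlib.sha256(b"aa:bb:cc:dd:ee:ff|203.0.113.7").digest()[:8]
pixels = list(range(256)) * 4          # fake 32x32 grayscale image
marked = embed(pixels, fingerprint)
assert extract(marked, 8) == fingerprint
```

As the replies note, the weakness is exactly what this sketch makes visible: anyone who controls the software can skip the `embed()` call, and anyone who controls the file can overwrite the low bits.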
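@yamanii’s point that an EXIF watermark is trivial to remove can be shown at the byte level: EXIF lives in a JPEG APP1 segment, so dropping that one segment strips the tag without touching pixel data. This is a sketch under simplifying assumptions; the toy JPEG is hand-built, and real files carry more segments (XMP, thumbnails) than this handles.

```python
# Byte-level sketch of stripping EXIF: walk JPEG marker segments and drop
# the APP1 segment whose payload starts with "Exif\0\0".

def strip_exif(jpeg: bytes) -> bytes:
    """Remove the Exif APP1 segment from a (simplified) JPEG byte stream."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(jpeg[:2])
    i = 2
    while i < len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xDA:              # start of scan: copy the rest verbatim
            out += jpeg[i:]
            break
        length = int.from_bytes(jpeg[i + 2 : i + 4], "big")
        segment = jpeg[i : i + 2 + length]
        if not (marker == 0xE1 and segment[4:10] == b"Exif\x00\x00"):
            out += segment              # keep every segment except Exif APP1
        i += 2 + length
    return bytes(out)

# Hand-built toy JPEG: SOI, an Exif APP1 segment, then SOS + data + EOI.
exif_seg = b"\xff\xe1\x00\x11Exif\x00\x00WATERMARK"
toy = b"\xff\xd8" + exif_seg + b"\xff\xda\x00\x02PIXELS\xff\xd9"
assert b"WATERMARK" not in strip_exif(toy)
```

Invisible watermarks of the kind the last reply mentions are embedded in the pixels themselves, which is why they survive this kind of metadata stripping (though not pixel-level edits).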