Google’s DeepMind unit is unveiling today a new method it says can invisibly and permanently label images that have been generated by artificial intelligence.
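
How the watermark is embedded hasn’t been made public, and a production scheme would need to survive cropping, resizing, and compression. Purely as a hedged illustration of the general idea, here is a toy invisible watermark in Python using least-significant-bit (LSB) embedding; the payload string is a made-up marker, not anything Google uses:

```python
# Toy LSB watermark: hides a byte payload in the low bit of each pixel
# channel. Invisible to the eye, but (unlike a production scheme) it
# would not survive re-encoding or editing.
import numpy as np
from PIL import Image

PAYLOAD = b"AI-GENERATED"  # hypothetical marker, 12 bytes = 96 bits

def embed(img: Image.Image, payload: bytes = PAYLOAD) -> Image.Image:
    pixels = np.array(img.convert("RGB"), dtype=np.uint8)
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.reshape(-1)
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # set LSBs
    return Image.fromarray(flat.reshape(pixels.shape))

def extract(img: Image.Image, n_bytes: int = len(PAYLOAD)) -> bytes:
    flat = np.array(img.convert("RGB"), dtype=np.uint8).reshape(-1)
    bits = flat[: n_bytes * 8] & 1          # read the LSBs back
    return np.packbits(bits).tobytes()
```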

  • @[email protected]
    35
    1 year ago

    TBF, I don’t think the purpose of this watermark is to prevent bad actors from passing AI images off as real. That would be a welcome side effect, but it’s not why Google wants this. Ultimately it’s supposed to prevent AI training data from being contaminated with other AI-generated content. Imagine a training set containing a million images from previous models with mangled fingers and crooked eyes; it would be hard to train a good AI on that. Garbage in, garbage out.
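
    For example, a pipeline could use the watermark to screen images before they ever reach a training set. A minimal sketch, assuming a hypothetical detect_watermark function (a real pipeline would call whatever detector ships with the scheme):

    ```python
    # Drop watermarked (i.e. AI-generated) images before training.
    from pathlib import Path
    from PIL import Image

    def detect_watermark(img: Image.Image) -> bool:
        """Placeholder: wire in the scheme's real detector here."""
        return False  # assume unwatermarked until a detector exists

    def clean_dataset(src: Path, dst: Path) -> None:
        dst.mkdir(parents=True, exist_ok=True)
        for path in sorted(src.glob("*.png")):
            with Image.open(path) as img:
                if not detect_watermark(img):   # keep only non-AI images
                    img.save(dst / path.name)   # garbage stays out
    ```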

    • Rob T Firefly
      14
      1 year ago

      So theoretically, those of us who put original images online could add this invisible watermark to make AI models leave our stuff out of their “steal this” pile?

    • @[email protected]
      2
      1 year ago

      I’m not sure that’s the case. For instance, a lot of smaller local models leverage GPT-4 to generate synthetic training data, which drastically improves the model’s output quality. The issue comes in when there is no QC on the model’s output. The same applies to Stable Diffusion.
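
      Something like this is what the QC step could look like; a rough sketch where generate and quality_check are made-up names standing in for a teacher-model call and a screening heuristic:

      ```python
      # Screen synthetic examples before they enter the training set.
      def quality_check(text: str) -> bool:
          # Toy heuristic: reject empty or very short generations.
          return len(text.split()) >= 5

      def build_synthetic_set(generate, n: int) -> list[str]:
          kept: list[str] = []
          while len(kept) < n:                 # loops until n examples pass
              candidate = generate()           # e.g. a teacher-model call
              if quality_check(candidate):     # only screened data is kept
                  kept.append(candidate)
          return kept
      ```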

    • @[email protected]
      2
      1 year ago

      AI-generated images are becoming increasingly realistic; even AI can’t tell them apart from real ones anymore.

      • @CheeseNoodle
        15
        1 year ago

        iirc AI models getting worse after being trained on AI-generated data is an actual issue right now. Even if we (or the AI) can’t distinguish them from real images, there are subtle differences that compound into quite large ones when an AI is fed its own work over several generations, leading to degraded output.
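
        You can see the compounding in a toy simulation: fit a simple model to data, sample from the fit, refit on those samples, and repeat. Nothing model-specific here, just an illustration of the feedback loop:

        ```python
        # Each generation trains only on the previous generation's output;
        # sampling error accumulates and the estimates drift from the
        # original data, a toy analogue of model collapse.
        import numpy as np

        rng = np.random.default_rng(0)
        real = rng.normal(loc=0.0, scale=1.0, size=50)   # "real" data
        mu, sigma = real.mean(), real.std()

        for gen in range(1, 11):
            synthetic = rng.normal(mu, sigma, size=50)   # AI-on-AI data
            mu, sigma = synthetic.mean(), synthetic.std()
            print(f"generation {gen:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
        ```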