All well and good till someone takes a screenshot.
It might be resistant to screenshots - unless I missed it, the article didn’t clarify whether the obfuscation process is applied to the image on a per-pixel basis, or within the file format itself…
If it were that easy to bypass, it would be a pretty futile mechanism IMO: one would just need to re-encode the image to strip out the obfuscation 🫠 or just take a screenshot, as you said
Sounds like it makes tiny changes to the image data to trick the model. But it also sounds dependent on each algorithm: perturbations that fool Stable Diffusion might leave another model like Midjourney unaffected.
And either way, I’d bet mere JPEG compression would be enough to destroy your tiny changes.
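A toy way to see why that’s plausible (purely illustrative, nothing from the paper): JPEG quantizes DCT coefficients to multiples of a quantization step, so an adversarial nudge whose coefficients are small relative to that step simply rounds away.

```python
# Toy illustration (not the paper's method): JPEG-style quantization
# rounds DCT coefficients to the nearest multiple of a quantization
# step, so a perturbation smaller than half the step is erased.

QUANT_STEP = 16  # a typical luminance quantization step at moderate quality

def quantize(coeff, step=QUANT_STEP):
    """Round a DCT coefficient to the nearest multiple of the step."""
    return round(coeff / step) * step

clean = 48.0
perturbed = clean + 3.0  # tiny adversarial delta, well below step/2

print(quantize(clean))      # 48
print(quantize(perturbed))  # 48 -- the perturbation vanished
```

Real JPEG works on 8x8 blocks with per-frequency steps, but the principle is the same: anything below the quantizer’s resolution doesn’t survive the round trip.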
a couple minutes in Photoshop with a smudge or burn tool would also negate the effect
These things never work in the real world. We’ve seen this over and over. It’s snake oil. Latent-space mappings may survive compression, but they don’t work across encoders.
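A toy sketch of that cross-encoder problem (hypothetical numbers, not the paper’s setup): treat each “encoder” as a dot product with a weight vector. A perturbation crafted to move encoder A’s latent can be exactly orthogonal to encoder B’s weights, so B’s latent doesn’t move at all.

```python
# Toy illustration of non-transferability: two "encoders" are just dot
# products with different weight vectors. A perturbation aligned with
# encoder A's weights shifts A's latent, but being orthogonal to
# encoder B's weights, it leaves B's latent untouched.

def encode(weights, pixels):
    """A one-number 'latent': dot product of weights and pixel values."""
    return sum(w * p for w, p in zip(weights, pixels))

enc_a = [1.0, -1.0]   # hypothetical encoder A
enc_b = [1.0,  1.0]   # hypothetical encoder B

image = [10.0, 20.0]
delta = [0.5, -0.5]   # perturbation crafted against A (parallel to enc_a)

perturbed = [p + d for p, d in zip(image, delta)]

print(encode(enc_a, image), encode(enc_a, perturbed))  # -10.0 -9.0  (A moved)
print(encode(enc_b, image), encode(enc_b, perturbed))  # 30.0 30.0   (B unchanged)
```

Real encoders are nonlinear and high-dimensional, so transfer isn’t always zero, but the geometry is the same: an attack optimized against one model’s latent space has no guarantee of moving another’s.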
Yeah it might work in the original format under some conditions but won’t survive a screenshot or saving to another format.
Once again the time has come to manually shop oneself shaking hands with celebrities.
The linked white paper’s title is very pragmatic-sounding: “Raising the Cost of Malicious AI-Powered Image Editing”. I’d like to read it more deeply later to see what mechanisms are actually deployed. I know I’ve considered some form of attestation embedded both in the data and the format, linked with a cryptographic signature. You know, for important things like politics, diplomacy, and celebrity endorsements. /s
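A minimal sketch of that attestation idea (entirely hypothetical, not from the paper): sign the raw pixel bytes and carry the tag in the file’s metadata, so any pixel edit invalidates the signature. Using the stdlib `hmac` as a stand-in for a real public-key signature scheme:

```python
import hashlib
import hmac

SIGNING_KEY = b"hypothetical-issuer-key"  # stand-in for a real private key

def sign_pixels(pixel_bytes: bytes) -> str:
    """Return an attestation tag over the raw pixel data."""
    return hmac.new(SIGNING_KEY, pixel_bytes, hashlib.sha256).hexdigest()

def verify(pixel_bytes: bytes, tag: str) -> bool:
    """Check that the pixels still match the attestation tag."""
    return hmac.compare_digest(sign_pixels(pixel_bytes), tag)

original = bytes([10, 20, 30, 40])
tag = sign_pixels(original)

print(verify(original, tag))                 # True
print(verify(bytes([10, 20, 30, 41]), tag))  # False -- any pixel edit breaks it
```

Note the limits the thread already flagged: this detects tampering rather than preventing it, and a screenshot or re-encode simply drops the tag along with the metadata.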