A Telegram user who advertises their services on Twitter will create an AI-generated pornographic image of anyone in the world for as little as $10, as long as customers send them pictures of that person. Like many other Telegram communities and users producing nonconsensual AI-generated sexual images, this user creates fake nude images of celebrities, including images of minors in swimsuits, but they are particularly notable because their service plainly and openly demonstrates one of the most severe harms of generative AI tools: how easily they can be used to create nonconsensual pornography of ordinary people.

  • @neatchee · 9 months ago

    That’s why I put “real threat” in quotes; I was paraphrasing what I consider to be an excessive focus on FR (facial recognition).

    I’m a security professional. FR is not the easiest way to track everybody/anybody. It’s just the most visible and most easily grokked by the general public, because it’s been in movies and TV forever.

    To wit, FR itself isn’t what makes it “easy”; rather, it’s the massive corpus of freely available training data combined with the willingness of various entities to share resources (e.g., sharing surveillance video with law enforcement).
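
    To make that point concrete: the recognition step itself is commoditized. As a rough sketch (not anyone’s actual pipeline; the file names are placeholders and it assumes the open-source face_recognition Python library is installed), matching a known face against a surveillance still takes only a few lines:

    ```python
    import face_recognition

    # Placeholder file names: a labeled reference photo and an unlabeled still
    known_image = face_recognition.load_image_file("reference_photo.jpg")
    unknown_image = face_recognition.load_image_file("surveillance_still.jpg")

    # Encode each detected face as a 128-dimensional vector
    known_encodings = face_recognition.face_encodings(known_image)
    unknown_encodings = face_recognition.face_encodings(unknown_image)

    if known_encodings and unknown_encodings:
        # Compare the first face in the still against the reference encoding
        match = face_recognition.compare_faces([known_encodings[0]], unknown_encodings[0])[0]
        dist = face_recognition.face_distance([known_encodings[0]], unknown_encodings[0])[0]
        print(f"match: {match}, distance: {dist:.3f}")
    else:
        print("no face found in one of the images")
    ```

    None of that code is the hard part; the hard part is having the labeled photos and the footage to run it against, which is exactly the data and resource-sharing problem.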

    What’s “easiest” entirely depends on the context, and usually it’s not FR. If I’m trying to identify the source of a particular set of communications, FR is mostly useless (unless I get lucky and identify, like, the mailbox they’re using or something silly like that). I’m much more interested in voice identification, fingerprinting, geolocation, etc. in that scenario.
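
    To illustrate the geolocation piece with a minimal sketch (assuming a reasonably recent Pillow and a JPEG whose EXIF hasn’t been stripped; the file name is a placeholder): a single shared photo can carry GPS coordinates in its metadata, no FR involved at all:

    ```python
    from PIL import Image
    from PIL.ExifTags import GPSTAGS

    img = Image.open("shared_photo.jpg")  # placeholder path
    exif = img.getexif()

    # 0x8825 is the standard EXIF tag pointing at the GPS IFD
    gps_ifd = exif.get_ifd(0x8825)
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

    # Usually stored as degree/minute/second rationals plus N/S and E/W refs
    print(gps.get("GPSLatitude"), gps.get("GPSLatitudeRef"))
    print(gps.get("GPSLongitude"), gps.get("GPSLongitudeRef"))
    ```

    Many big platforms strip EXIF on upload, but direct file transfers and plenty of smaller channels pass it through untouched.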

    Again, FR is just…known. And visible. And observable in its use for nefarious purposes by shitty governments and such.

    It’s the stuff you don’t see on the news or in the movies that you should really be worried about.

    (and I’m not downvoting you either; that’s for when things don’t contribute, or deserve to be less visible because they’re disinformation, not for when you disagree with someone)