A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography, a case that highlights how readily generative AI can be put to nefarious use.

Phillip Michael McCorkle was arrested last week while working at a movie theater in Vero Beach, Florida, according to TV station CBS 12 News. A crew from the station captured the arrest, producing dramatic footage of law enforcement leading McCorkle, still in his work uniform, out of the theater in handcuffs.

  • @MataVatnik · -17 points · 4 months ago (edited)

    Pretty sure the training data sets are CSAM.

    Edit, to those downvoting me and not reading the article:

    A 2023 study from Stanford University also revealed that hundreds of child sex abuse images were found in widely-used generative AI image data sets.

    “The content that we’ve seen, we believe is actually being generated using open source software, which has been downloaded and run locally on people’s computers and then modified,” Internet Watch Foundation chief technology officer Dan Sexton told The Guardian last year. “And that is a much harder problem to fix.”

    • @[email protected]
      link
      fedilink
      174 months ago

      I would imagine that AI having been trained on both pictures of kids and on adult sexual content would be somewhat enough to mix the two. Even if the output might end up uncanny.

      • @MataVatnik · 0 points · 4 months ago

        That’s the most likely case. Now my question is: was he using somebody else’s generator, or did he train this one himself?

        • Blaster M · 2 points · 4 months ago (edited)

          They mentioned it looks to be a local model he trained.

    • @[email protected]
      link
      fedilink
      English
      124 months ago

      One doesn’t need to browse AI-generated images for longer than 5 seconds to realize it can generate a ton of stuff that you can know with absolute certainty wasn’t in the training data. I don’t get why people insist on the narrative that it can only output copies of what it has already seen. What’s generative about that?

      • @MataVatnik · -2 points · 4 months ago

        If you took a minute to read the article:

        A 2023 study from Stanford University also revealed that hundreds of child sex abuse images were found in widely-used generative AI image data sets.

        “The content that we’ve seen, we believe is actually being generated using open source software, which has been downloaded and run locally on people’s computers and then modified,” Internet Watch Foundation chief technology officer Dan Sexton told The Guardian last year. “And that is a much harder problem to fix.”

        So not only do the online models have CSAM in their data sets, but people are also downloading open source software, and I’d be very surprised if they weren’t feeding it CSAM.

        • @[email protected]
          link
          fedilink
          English
          54 months ago

          That doesn’t dispute my argument; generative AI can create images that are not in the training data. It doesn’t need to know what something looks like as long as the person using it does and can write the correct prompt for it. The corn dog I posted below is a good example. You can be sure that wasn’t in the training data yet it was still able to generate it.

        • Blaster M · 1 point · 4 months ago

          Online models since that discovery have scrubbed the offending sources and retrained, as well as added safeguards to their models to try and prevent it.

    • @Cryophilia · 7 points · 4 months ago

      If that’s the basis for making it illegal, then all AI is illegal.

      Which…eh maybe that’s not such a bad idea

    • Blaster M · 1 point · 4 months ago (edited)

      Since that study, every legit AI model has removed said images from their datasets and all models trained afterwards no longer include knowledge about those source images.

      I know one AI model has specifically not included photos of underage people at all, to minimize the possibility this can happen even by accident. Making CSAM from an AI model is something anyone determined and patient enough can do with a good model trainer and a dataset of source images that have the features they want, even if the underage images are completely clean.

      Making CSAM with an AI model is a deliberate act in almost every case… and in this case, he was arrested for distributing these images, which is super illegal for obvious reasons.