• @[email protected]
    link
    fedilink
    English
    20
    edit-2
    11 months ago

    Given that Facebook and Meta’s other platforms are among the largest distributors of that material, if they scraped Facebook for data this is not exactly a surprise, unfortunately…

  • @[email protected]
    link
    fedilink
    English
    511 months ago

    How are you supposed to train the damn thing to detect something without using that thing though?

    • Sapphire VelvetOP · 8 · 11 months ago

      > How are you supposed to train the damn thing to detect something without using that thing though?

      There are various organizations which have clearances to handle child abuse images, where the data is handled like the plutonium it is and everybody is vetted. I’m sure they’ve already experimented with developing a bot to detect images.

      They even make available their traditional hash databases to server admins who want to run their images against their hash databank.
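The hash-check described above is mechanically simple. A minimal sketch, assuming a plain cryptographic-hash list (`KNOWN_HASHES` below is a made-up placeholder; real clearinghouse databases use perceptual hashes such as PhotoDNA, which also match visually similar images rather than only byte-identical files):

```python
import hashlib

# Hypothetical known-material hash list, for illustration only.
KNOWN_HASHES = {
    # sha256 of the bytes b"test", standing in for a real entry
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_known_material(image_bytes: bytes) -> bool:
    """Return True if the image's digest appears in the hash list."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES
```

A scraping pipeline would run each candidate image through such a check before admitting it to the dataset, which is exactly the step the report says was skipped.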

      The issue as the report states is that nobody will willingly link said bot/database to the training data, because either they don’t want a copyright fight or they don’t want to acknowledge the issue.

    • GigglyBobble · 5 · 11 months ago

      It’s about generators which certainly should not be trained with such material.

  • AutoTL;DRB · 4 · 11 months ago

    This is the best summary I could come up with:


    More than 1,000 known child sexual abuse materials (CSAM) were found in a large open dataset—known as LAION-5B—that was used to train popular text-to-image generators such as Stable Diffusion, Stanford Internet Observatory (SIO) researcher David Thiel revealed on Wednesday.

    His goal was to find out what role CSAM may play in the training process of AI models powering the image generators spouting this illicit content.

    “Our new investigation reveals that these models are trained directly on CSAM present in a public dataset of billions of images, known as LAION-5B,” Thiel’s report said.

    But because users were dissatisfied by these later, more filtered versions, Stable Diffusion 1.5 remains “the most popular model for generating explicit imagery,” Thiel’s report said.

    While a YCombinator thread linking to a blog—titled “Why we chose not to release Stable Diffusion 1.5 as quickly”—from Stability AI’s former chief information officer, Daniel Jeffries, may have provided some clarity on this, it has since been deleted.

    Thiel’s report warned that both figures are “inherently a significant undercount” due to researchers’ limited ability to detect and flag all the CSAM in the datasets.


    The original article contains 837 words, the summary contains 182 words. Saved 78%. I’m a bot and I’m open source!

  • @BetaDoggo_ · 1 · 11 months ago

    Between 0.00002% and 0.00006%

      • @[email protected]
        link
        fedilink
        English
        8
        edit-2
        11 months ago

        While I agree with the sentiment, that’s 2–6 in 10,000,000 images; even if someone were personally reviewing all of the images that went into these datasets, which I strongly doubt, that’s a pretty easy mistake to make when looking at that many images.
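The arithmetic behind those percentages is easy to check. A quick sketch, assuming LAION-5B’s roughly 5.85 billion image-text pairs; the low and high flagged counts are illustrative round numbers, with the thread’s “more than 1,000” confirmed matches anchoring the low end:

```python
# LAION-5B contains roughly 5.85 billion image-text pairs.
DATASET_SIZE = 5_850_000_000

# Assumed flagged counts for illustration, not the report's exact figures.
low, high = 1_000, 3_500

print(f"{100 * low / DATASET_SIZE:.5f}%")   # → 0.00002%
print(f"{100 * high / DATASET_SIZE:.5f}%")  # → 0.00006%
```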

        • @[email protected]
          link
          fedilink
          English
          811 months ago

          “Known CSAM” suggests researchers ran it through automated detection tools which the dataset authors could have used.

        • Sapphire VelvetOP · 1 · 11 months ago

          They’re not looking at the images, though. They’re scraping. And their own legal defenses rely on them not looking too carefully, or else they cede their position to the copyright holders.

          • snooggums · 4 · 11 months ago

            Technically they violated the copyright of the CSAM creators!