Not the best news in this report. We need to find ways to do more.

  • @[email protected]
    link
    fedilink
    English
    49
    edit-2
    11 months ago

    Why do they just mention absolute numbers, instead of comparing them to similar platforms? All they said was that there is CSAM on the Fediverse, but that’s also true for centralized services and the internet as a whole. The important question is whether there is more or less CSAM on the Fediverse, no?

    This makes it look very unscientific to me. The Fediverse might have a CSAM problem, but you wouldn’t know it from this study.

    • @WhoRoger · 11 points · 11 months ago

      Fediverse also makes it potentially easier to scan for this stuff. You can just connect a new server to the network and if the material is on federated servers, then you can find it (probably). While if it’s some private forum or even the dark web, I assume it’s a lot more difficult.

      The other thing is, most regular servers defederate from suspicious stuff already. Like pretty much nobody federates with that one shota instance, and they only serve drawn stuff (AFAIK). So I don’t know if you can even say servers like that are a part of the Fediverse in the first place.

      • @[email protected]
        link
        fedilink
        English
        311 months ago

        That’s what I thought as well. If the authors of this “study” were able to simply scan for it on the Fediverse, then what’s stopping law enforcement units from doing the same? They can literally get a message every time someone posts something on a suspicious instance.
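
        For illustration, here is a minimal sketch of what that kind of monitoring could look like, assuming a Mastodon-compatible instance whose public local timeline is open; the instance name and the alerting step are hypothetical placeholders, not anything from the report:

        ```python
        import time

        import requests

        INSTANCE = "https://suspicious.example"  # hypothetical instance to watch
        seen_ids: set[str] = set()

        def poll_public_timeline() -> None:
            """Fetch the instance's public local timeline and flag new posts with media."""
            resp = requests.get(
                f"{INSTANCE}/api/v1/timelines/public",
                params={"local": "true", "limit": 40},
                timeout=30,
            )
            resp.raise_for_status()
            for status in resp.json():
                if status["id"] in seen_ids:
                    continue
                seen_ids.add(status["id"])
                if status.get("media_attachments"):
                    urls = [m["url"] for m in status["media_attachments"]]
                    # Placeholder: hand these off to whatever review/reporting queue applies.
                    print(f"new media post {status['url']}: {urls}")

        if __name__ == "__main__":
            while True:
                poll_public_timeline()
                time.sleep(60)  # simple polling; the streaming API can push in real time
        ```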

  • @MyFairJulia · 32 points · 11 months ago

    Why would someone downvote this post? We have a problem and it’s in our best interest to fix that.

    • chaogomu · 55 points · 11 months ago

      The report (if you can still find a working link) said that the vast majority of material that they found was drawn and animated, and hosted on one Mastodon instance out of Japan, where that shit is still legal.

      Every time that little bit of truth comes up, someone reposts the broken link to the study, screaming about how it’s the entire Fediverse riddled with child porn.

      • @MyFairJulia · 18 points · 11 months ago

        So basically we had a bad apple that was probably already defederated by everyone else.

        • Kes · 17 points · 11 months ago

          More so an apple with controversial but not strictly CSAM material, based in a country where its content is legal. Actually, not even an apple; Lemmy and the Fediverse aren’t an entity. It’s an open standard for anyone to use; you don’t see the Modern Language Association being blamed for plagiarized essays written in MLA format, or the WHATWG being blamed because illegal sites are written in HTML, so it’s not a fair comparison to say that Lemmy/the Fediverse are responsible for what people do with their open standard either.

        • ZILtoid1991 · 7 points · 11 months ago

          It’s Pawoo, formerly Pixiv’s own instance, which is infamous for this kind of content, and those are still “just drawings” (unless some artists are using illegal real-life references).

          • @dustyData · 5 points · 11 months ago

            They’re using Generative AI to create photo realistic renditions now, and causing everyone who finds out about it to have a moral crisis.

            • ZILtoid1991 · 6 points · 11 months ago

              Well, that’s a very different and way more concerning thing…

              • @[email protected]
                link
                fedilink
                311 months ago

                … I mean … idk … If the argument is that the drawn version doesn’t harm kids and gives pedos an outlet, is an AI-generated version any different?

                • @[email protected]
                  link
                  fedilink
                  711 months ago

                  IMO, the dicey part of the matter is "how much of the AI’s dataset is made up of actual images of children?"

        • chaogomu · 14 points · 11 months ago

          Since the release of Stable Diffusion 1.5, there has been a steady increase in the
          prevalence of Computer-Generated CSAM (CG-CSAM) in online forums, with
          increasing levels of realism.[17] This content is highly prevalent on the Fediverse,
          primarily on servers within Japanese jurisdiction.[18] While CSAM is illegal in
          Japan, its laws exclude computer-generated content as well as manga and anime.

          Nope, seems to be the one. They lump the entire Fediverse together, even though most of the shit they found was in Japan.

          The report notes 112 non-Japanese items found, which is a problem, but not a world-shaking one. There may also be issues with federation and deletion orders, another concern, but again not a massive one.

          Really, what the report seems to be about is the fact that moderation is hard. Bad actors will work around any moderation you put in place, so it’s a constant game of whack-a-mole. The report doesn’t understand this basic fact and pretends that no one is doing any moderation, and then they add in Japan.

          • @dustyData · 8 points · 11 months ago

            I can’t seem to find the source for the report about it right now, but there’s literal child porn being posted to Instagram. We don’t see this kind of alarmist report about it because it isn’t something new, foreign and flashy for the general public. All internet platforms are susceptible to this kind of misuse. The question is what moderation tools and strategies are in place to deal with it. Then there’s stuff like Tor, where CSAM was used as a basis to discredit the whole technology, until it turned out that the biggest repository was an FBI honeypot operation.

            • chaogomu · 10 points · 11 months ago

              Buried in this very report, they note that Instagram and Twitter have vastly more (self-generated) child porn than the Fediverse. But that’s deep into section 4, which is on page 8. No one is going to read that far into the report; they might get through the intro, which is all doom and gloom about decentralized content.

    • @[email protected]
      link
      fedilink
      English
      23
      edit-2
      11 months ago

      The study doesn’t compare its findings to any other platform, so we can’t really tell whether those numbers are good or bad. They just state the absolute numbers, without really going into too much detail about their searching process. So no, you can’t draw the conclusion that the Fediverse has a CSAM problem, at least not from this study.

      Of course that makes you wonder why they bothered to publish such a lackluster and alarmist study.

        • @[email protected]
          link
          fedilink
          English
          14
          edit-2
          11 months ago

          Yes, but you don’t need a study to know that there will be some CSAM on the Fediverse. This is about whether using the Fediverse could make things worse, and just stating absolute numbers won’t answer that question. What it does do is make it sound like the Fediverse is a magnet for CSAM, when in reality the opposite could be true.

          Or to put it differently: If these numbers turn out to be lower than what you get on similar platforms, then this could actually be a good sign, even though they still aren’t zero.

        • @[email protected]
          link
          fedilink
          English
          211 months ago

          It’s not our responsibility. As long as moderators are reporting it as they come across it and the FBI is looking into it, what more can be done? Google, Twitter, Discord, Instagram and Reddit have 10,000x the amount of CSAM.

    • Aesthesiaphilia · 13 points · 11 months ago

      Because it’s another “WON’T SOMEONE THINK OF THE CHILDREN” hysteria bait post.

      They found 112 images of cp in the whole Fediverse. That’s a very small number. We’re doing pretty good.

      • etrotta · 5 points · 11 months ago

        It is not “in the whole Fediverse”; it is out of approximately 325,000 posts analyzed over a two-day period.
        And that is just for known images that matched the hash.

        Quoting the entire paragraph:

        Out of approximately 325,000 posts analyzed over a two day period, we detected
        112 instances of known CSAM, as well as 554 instances of content identified as
        sexually explicit with highest confidence by Google SafeSearch in posts that also
        matched hashtags or keywords commonly used by child exploitation communities.
        We also found 713 uses of the top 20 CSAM-related hashtags on the Fediverse
        on posts containing media, as well as 1,217 posts containing no media (the text
        content of which primarily related to off-site CSAM trading or grooming of minors).
        From post metadata, we observed the presence of emerging content categories
        including Computer-Generated CSAM (CG-CSAM) as well as Self-Generated CSAM
        (SG-CSAM).
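
        The “known CSAM” figure in that paragraph comes from matching media against hash lists of previously identified material (the report used PhotoDNA, whose perceptual hashes and hash databases are not publicly available). A rough sketch of the same idea, using plain SHA-256 as a stand-in and assuming a known_hashes set obtained from a hash-sharing program:

        ```python
        import hashlib

        import requests

        def sha256_of_media(url: str) -> str:
            """Download a media file and return its SHA-256 hex digest."""
            resp = requests.get(url, timeout=30)
            resp.raise_for_status()
            return hashlib.sha256(resp.content).hexdigest()

        def count_known_matches(media_urls: list[str], known_hashes: set[str]) -> int:
            """Count media items whose hash appears in a set of known-bad hashes.

            Real pipelines use perceptual hashes (e.g. PhotoDNA) so that resized or
            re-encoded copies still match; an exact cryptographic hash misses those.
            """
            return sum(1 for url in media_urls if sha256_of_media(url) in known_hashes)
        ```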

        • @[email protected]
          link
          fedilink
          611 months ago

          How are the authors distinguishing between posts made by actual pedophiles and posts by law enforcement agencies known to be operating honeypots?

        • @Zak · 3 points · 11 months ago

          In an ideal world sense, I agree with you - nobody should abuse children, so media of people abusing children should not exist.

          In a practical sense, whether talking about moderation or law enforcement, a rate of zero requires very intrusive measures such as moderators checking every post before others are allowed to see it. There are contexts in which that is appropriate, but I doubt many people would like it for the Fediverse at large.

    • @[email protected]
      link
      fedilink
      English
      411 months ago

      Given new commercial entrants into the Fediverse such as WordPress, Tumblr and Threads, we suggest collaboration among these parties to help bring the trust and safety benefits currently enjoyed by centralized platforms to the wider Fediverse ecosystem.

      Because the solution sucks?

    • @RookiA · 4 points · 11 months ago

      Because it’s literally nothing a normie would read through. And some thought Lemmy had a bad UI.

  • @RookiA · 26 points · 11 months ago

    Another “SCREAM” of “BAN FEDIVERSE ITS DANGEROUS!!!”. And then of course it’s tagged “CSAM”, lol. You want a (small) website removed? Just accuse them of CSAM, and boom, the hoster and admins get raided.

    • Scew · 26 points · 11 months ago

      Better ban school too. I hear that’s how kids find drug dealers.

      • macniel · 7 points · 11 months ago

        And don’t forget to ban Sugar. Because Hitler loved Sugar!

    • Flying SquidOP · 9 points · 11 months ago

      Not at all. Sensible suggestions.

      • @RookiA · -1 points · 11 months ago

        deleted by creator

        • Flying SquidOP · 11 points · 11 months ago

          What are you even talking about? Read the study. It’s making no such cry.

  • AnonTwo · 9 points · 11 months ago

    Is this the same report that was brought up where it was found out Twitter has the exact same issue?

    • Kbin_space_program · 11 points · 11 months ago

      Or that Reddit has had this issue in spades.

      Frankly, that the ability for the Fediverse to cut off problem servers is listed as a drawback and not an advantage is, in my opinion, wrong.

      • @RookiA · -1 points · 11 months ago

        deleted by creator

          • @RookiA · -2 points · 11 months ago

            deleted by creator

        • 8bitguy · 1 point · 11 months ago

          Stanford’s website has always been solid. When I was an edu dev, we always paid attention to them. They’re definitely a mobile-first shop. I’m not sure why you perceive them as dated.

          • @RookiA · 2 points · 11 months ago

            Just some divs put together; only the links are interactive. But the papers are really good, tbh. I only found them after digging through that website.

    • @RookiA · 0 points · 11 months ago

      deleted by creator

  • blazera · 6 points · 11 months ago

    Basically, we don’t know what they found, because they just looked up hashtags and then didn’t look at the results for ethics reasons. They don’t even say which hashtags they looked through.

    • Aesthesiaphilia · 5 points · 11 months ago

      We do know they only found, what, 112 actual images of CP? That’s a very small number. I’d say that paints us in a pretty good light, relatively.

      • @dustyData · 6 points · 11 months ago

        112 images out of 325,000 posts analyzed over two days is about 0.03%, so we are doing pretty well. With more moderation tools we could continue to knock out those sigmas.

      • blazera · 2 points · 11 months ago

        It says 112 instances of known CSAM. But that’s based on their methodology, right, and their methodology is not actually looking at the content; it’s looking at hashtags and whether Google SafeSearch thinks it’s explicit. Which I’m pretty sure doesn’t differentiate by who the subject of the explicitness is. It’s just going to try to detect breasts or genitals, I imagine.

        Though they do give a few damning examples of things like actual CP trading, but they also note that those have since been removed.
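
        For reference, the SafeSearch signal works roughly like this; a minimal sketch using the Google Cloud Vision client (it needs GCP credentials). It returns likelihood buckets per category such as adult or racy, not the age of anyone depicted, which is why the report paired it with hashtag and keyword matching:

        ```python
        from google.cloud import vision

        def safesearch_likelihoods(image_url: str) -> dict[str, str]:
            """Return SafeSearch likelihood buckets (e.g. VERY_LIKELY) for an image URL."""
            client = vision.ImageAnnotatorClient()
            image = vision.Image()
            image.source.image_uri = image_url
            annotation = client.safe_search_detection(image=image).safe_search_annotation
            return {
                field: vision.Likelihood(getattr(annotation, field)).name
                for field in ("adult", "racy", "violence")
            }

        # Example result: {'adult': 'VERY_LIKELY', 'racy': 'LIKELY', 'violence': 'UNLIKELY'}
        ```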

        • @[email protected]
          link
          fedilink
          511 months ago

          How many of those 112 instances are honeypots controlled by the FBI or another law enforcement agency?

    • Takatakatakatakatak · -6 points · 11 months ago

      There are communities on NSFW Lemmy where people intentionally present as children engaged in sexual abuse scenarios. They’re adults and this is their kink that everyone is supposed to tolerate and pretend is ok.

      See defederation drama over the last couple of days. What I’m saying is, the hashtags mean nothing.

      • hightrix · 4 points · 11 months ago

        There are communities on NSFW Lemmy where people intentionally present as children engaged in sexual abuse scenarios.

        If you are referring to the community that was cited as the reason for defederation, this is completely false. The community in question is adorableporn, extremely similar to the subreddit of the same name. No one, in any manner, in either community, presents as a child. While yes, the women that post there tend to be on the shorter and thinner side, calling short, thin adults ‘children’ is not being honest.

        To be clear, this community is about petite women. This community is NOT about women with a kink to present as a child.

          • hightrix · 3 points · 11 months ago

            What other bait communities? We can’t just accept “think of the children” as an excuse. That doesn’t work.

            Yes, no one wants actual CSAM to show up in their feed; we can all completely agree on that. But just because some middle-aged woman can’t tell the difference between a 20-year-old and a 15-year-old doesn’t make images of the 20-year-old CSAM.

  • @moistclump · 3 points · 11 months ago

    Abstract:

    The Fediverse, a decentralized social network with interconnected spaces that are each independently managed with unique rules and cultural norms, has seen a surge in popularity. Decentralization has many potential advantages for users seeking greater choice and control over their data and social preferences, but it also poses significant challenges for online trust and safety.

    In this report, Stanford Internet Observatory researchers examine issues with combating child sexual exploitation on decentralized social media with new findings and recommendations to address the prevalence of child safety issues on the Fediverse.

  • @[email protected]
    link
    fedilink
    English
    -9
    edit-2
    11 months ago

    The suggestions in their direction for future improvement section should be implemented sooner rather than later. There’s no point in growing this platform if it’s going to be left wide open for abuse like it is.

    I also think, in lieu of the Lemmy devs making any improvements, another good solution would be for a third party to prop something up that scrapes every Lemmy post and runs it through an automated service for detecting known CSAM. The third-party service would be forcing at least one of those future improvements on Lemmy as it exists today. Any known CSAM that’s found would be automatically reported, and if the instance owners can’t deal with it, then they would rightfully have to deal with the consequences of their inaction.
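
    A minimal sketch of what such a third-party scanner could look like, assuming Lemmy’s GET /api/v3/post/list endpoint; scan_image stands in for whatever hash-matching or classification service would actually be used, and the reporting step is left as a placeholder:

    ```python
    import requests

    INSTANCE = "https://lemmy.example"  # hypothetical instance to scan

    def scan_image(url: str) -> bool:
        """Placeholder for a real detection service (hash matching, classifier, etc.)."""
        raise NotImplementedError

    def scan_newest_posts(limit: int = 50) -> None:
        """Fetch the newest local posts and run any linked media through scan_image."""
        resp = requests.get(
            f"{INSTANCE}/api/v3/post/list",
            params={"sort": "New", "type_": "Local", "limit": limit},
            timeout=30,
        )
        resp.raise_for_status()
        for post_view in resp.json()["posts"]:
            post = post_view["post"]
            url = post.get("url")
            if url and scan_image(url):
                # Placeholder: report to the instance admins and the relevant authorities.
                print(f"flagged post {post['ap_id']}")
    ```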

    Edit: I’m beginning to think reddit was maybe not so bad. Getting mad about more robust tools for moderating CSAM is just sad.