Summary

A Danish study found Instagram’s moderation of self-harm content to be “extremely inadequate,” with Meta failing to remove any of 85 harmful posts shared in a private network created for the experiment.

Despite Meta’s claim that it proactively removes 99% of such content, the platform’s algorithms were shown to promote self-harm networks by connecting users to one another.

Critics, including psychologists and researchers, accuse Meta of prioritizing engagement over safety, with vulnerable teens at risk of severe harm.

The findings suggest potential non-compliance with the EU’s Digital Services Act.

  • @[email protected]

    Guess it’s time to ban them too. Since the last social media ban was bipartisan, I’m sure this one will happen quickly /s

    • @[email protected]

      To be clear, the last ban was a ByteDance ban, not a TikTok ban. If ByteDance sells off its portion of TikTok, TikTok would be able to remain.

      But in this case, Section 230 needs to be revisited. It was written before social networks started algorithmically deciding what to show people. It needs to be adapted so these companies can be held liable for the content they “recommend” to users. Once they can be sued for their algorithms’ decisions, those algorithms will magically get better. Or at least less harmful.

      • @brucethemoose

        I feel like we need a phrase for this.

        “It’s the algorithm, stupid!”

        And if they can be sued for algorithmic recommendations at all, I feel like their fallback would be going back to pure “user” recommendation: community-managed feeds, following others’ recs, and such. Which would kill SO MANY birds with one stone.

        • @[email protected]

          Not really, because the law gives them quasi-common-carrier protections, so they can’t be held liable by the courts.

          That made sense at the time, when the feed was just a simple reverse chronology of whatever you decided to subscribe to. But now they actually decide what you see and what you don’t. The laws need to catch up.

  • @jaybone

    “Citing AI use”: what does that mean?

    • ℍ𝕂-𝟞𝟝

      Where did you read that?

      Where AI comes into the picture is that Meta claims to do this via “AI”, whose inner workings supposedly can’t be known, so they can’t be blamed for it being ineffective. Meanwhile, the research group put together a piece of software that did a better job in 3 days, so Meta should either git gud or stop bullshitting about trying their best.
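      To give a sense of how low that bar is, here’s a minimal sketch of the kind of text classifier a small team could stand up in a few days. Everything in it is assumed for illustration (placeholder data, scikit-learn, TF-IDF plus logistic regression); it is not the Danish group’s actual tool.

      ```python
      # Minimal sketch of a self-harm-content flagger -- an illustration,
      # not the researchers' actual software. Requires scikit-learn.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import classification_report
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline

      # Placeholder data; a real system needs a labeled dataset of posts.
      texts = [
          "placeholder flagged post a", "placeholder flagged post b",
          "placeholder benign post a", "placeholder benign post b",
      ] * 25
      labels = [1, 1, 0, 0] * 25  # 1 = self-harm content, 0 = benign

      X_train, X_test, y_train, y_test = train_test_split(
          texts, labels, test_size=0.2, stratify=labels, random_state=0
      )

      # Character n-grams cope better than word tokens with the deliberate
      # misspellings and "algospeak" people use to slip past keyword filters.
      clf = make_pipeline(
          TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
          LogisticRegression(max_iter=1000),
      )
      clf.fit(X_train, y_train)
      print(classification_report(y_test, clf.predict(X_test)))
      ```

      Meta’s production systems obviously run at a vastly different scale, but the point stands: a few days of work gets you something that flags posts at all.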

      • @jaybone

        I thought I read this in the article, but I can’t seem to find it now.

        Anyway, yes, it appears they’re trying to blame the fact that they use shitty AI that doesn’t do a very good job. That’s a pretty shitty excuse.

        • @[email protected]

          Until some regulatory agency fines them to the degree that it actually hurts their bottom line, they have no incentive to do better. You can’t appeal to the morality of a corporation. As a business, they only respond to numbers.

  • @xc2215x

    The content they watch there is questionable.

  • @Telodzrum

    Yeah, no shit. The Internet remains mankind’s greatest mistake.

    • @[email protected]

      Centralized, engagement-driven, advertisement-financed social media is the internet’s greatest mistake.
      Prior to that shit, the internet was pretty great. The fediverse is a way to fix much of it.