• @[email protected]
    32 days ago

    Yeah, but who decides what content is disturbing? I mean there is CSAM, but the fact that it even exists shows that not everyone is disturbed by it.

    • @anus
      32 days ago

      This is a fucking wild take

      • @[email protected]
        2 days ago

        I mean I’m not defending CSAM, just to be clear. I just disagree with any usage of AI that could turn somebody’s life upside down based on a false positive. Plus you also get idiots who report things they just don’t like.

    • @Zexks
      2 days ago

      You’ll never be able to get a definition that covers your question. The world isn’t black and white. It’s gray, and because of that a line has to be drawn, and yes, it will always seem arbitrary to some. But a line must be drawn nonetheless.

      • @[email protected]
        2 days ago

        Agreed 100%, a line absolutely should be drawn.

        That said, as a parent of 5 kids, I’m more concerned about false positives. I’ve heard enough horror stories about parents getting arrested over completely innocent pics of their kids as toddlers or infants that happen to have genitalia showing. Like them at 6 months old doing something silly in the tub, or what have you. I don’t trust a computer program that doesn’t understand context to handle those kinds of photos accurately. Frankly, parents shouldn’t be posting those pics on social media to begin with, but I digress. It sets a bad precedent.

        • @Womble
          1 day ago

          There’s a vast gulf between automated moderation systems deleting posts and calling the cops on someone.
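
A rough base-rate sketch in Python of the false-positive concern raised above. Every number here is an assumption chosen purely for illustration, not a figure from any real moderation system: when the targeted content is extremely rare, even a fairly accurate classifier produces mostly wrong flags.

```python
# Back-of-the-envelope base-rate calculation.
# All rates and volumes below are hypothetical assumptions for illustration.

uploads_per_day = 10_000_000      # assumed daily photo uploads on a platform
prevalence = 1e-6                 # assumed fraction of uploads that are actually abusive
true_positive_rate = 0.99         # assumed detector sensitivity
false_positive_rate = 0.001       # assumed 0.1% of innocent photos wrongly flagged

abusive = uploads_per_day * prevalence
innocent = uploads_per_day - abusive

true_flags = abusive * true_positive_rate
false_flags = innocent * false_positive_rate

# Precision: the chance that a flagged photo is actually abusive.
precision = true_flags / (true_flags + false_flags)

print(f"correct flags per day: {true_flags:,.0f}")
print(f"false flags per day:   {false_flags:,.0f}")
print(f"chance a flagged photo is actually abusive: {precision:.1%}")
```

Under these assumed rates, roughly a thousand innocent photos are flagged for every genuinely abusive one, which is the gulf the last reply points to between an automated system deleting a post and someone calling the police.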