Content moderators who worked on ChatGPT say they were traumatized by reviewing graphic content: ‘It has destroyed me completely.’

Moderators told The Guardian that the content they reviewed depicted graphic scenes of violence, child abuse, bestiality, murder, and sexual abuse.

  • Uriel238 [all pronouns] · 1 year ago

    This came up years ago when Facebook had its hearings about the difficulties of content moderation. Across hundreds of billions of pieces of content you’re going to end up with millions of NSFL BLITs. Even if only 0.1% of those require a human to determine whether or not they’re unsafe, that’s still thousands of moderators who’ve had their day ruined.
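
    A rough back-of-the-envelope of that scale; the rates here are my own illustrative assumptions, not figures from the article:

    ```python
    # Rough scale estimate; every rate below is an assumed, illustrative figure.
    total_content = 100e9     # "hundreds of billions" of pieces of content
    nsfl_rate = 1e-5          # assume 1 in 100,000 items is NSFL -> millions of items
    human_review_rate = 1e-3  # assume 0.1% of those are ambiguous enough to need a person

    nsfl_items = total_content * nsfl_rate        # 1,000,000 NSFL items
    ruined_days = nsfl_items * human_review_rate  # ~1,000 items needing human eyes

    print(f"{nsfl_items:,.0f} NSFL items, {ruined_days:,.0f} needing human review")
    ```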

    So apparently the response so far has been to outsource the job to developing countries and ruin lives there instead.

    • @[email protected]
      link
      fedilink
      English
      61 year ago

      Wonder if this would be something AI could eventually be trained to filter.

      Hopefully it wouldn’t make the AI hate humanity enough to evolve and destroy us though, thinking we are all perverts and sadists.

      • Uriel238 [all pronouns] · 1 year ago

        I doubt it. Our own sense of disgust is protective: it keeps us from getting poisoned or infected by a contagious pathogen or, in the case of seeing violence, keeps us safe from that very same threat.

        Even if we instil AI with survival objectives, it’ll learn to avoid things that are dangerous to it while still being able to operate, say, our sewage and waste-disposal systems without having a visceral response to the material being processed.

        That doesn’t fully make us safe from an AI deciding we’re too degenerate to live. An interesting notion came up in recent news of an Air Force general saying USAF AI is trained on Judeo-Christian values. And while that doesn’t really mean anything, I could see an AI-driven autonomous weapon (or a commander AI that controlled and organized an immense army of murder-drones) being trained to assess humans based on their history of behavior, destroying sinners or degenerates or perverts or whatever.

        Given that we humans are a vengeful lot (Hammurabi’s ‘an eye for an eye’ was meant to set an upper limit on retribution: claim an eye rather than kill the offender’s whole family over the incident), it would be very easy to set judge-bots to err on the side of assuming guilt, necessitating punitive action.

        AIs going renegade can always be attributed to poor programming.

        • @[email protected]
          link
          fedilink
          English
          21 year ago

          I reckon we should work out how to feed the content into a brain in a jar and then measure the disgust; we might need a few brains to ensure there’s a consensus.