• @thedeadwalking4242
    23 hours ago

    Do these companies actually have a group of people who read through and target specific concepts like this? Seems insane. If I were an intern somewhere punching these filters in, I'd just throw a fit.

    • @[email protected]
      23 hours ago

      They usually outsource menial tasks like this to people in the global south. I work with someone who had a startup dating app where they used "AI" to match couples, but it was actually just a university student in Indonesia who they paid to do 8-hour stints sorting people's profiles manually.

      • OBJECTION!
        22 hours ago

        The classic "Mechanical Turk" scheme from the 1700s lol

    • @[email protected]OP
      21 hours ago

      ChatGPT undoubtedly gives very different answers when asked about Palestine than when asked about other human rights violations and/or genocides. Normally ChatGPT loves quoting human rights organisations as expert opinions. But when it comes to Israel, those organisations hold opinions less convenient than its narrative allows.

      What I think happens is that ChatGPT does not have interns judging it, but an additional oversight AI that looks at the final response and decides whether its emotional tone falls within allowed bounds. If the generated response is exceedingly negative about a subject, it will either crash the prompt or keep generating new responses until one passes the emotion check, roughly like the sketch below.
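
      A toy sketch of that kind of output-gating loop, purely as speculation: `generate`, `negativity`, and the threshold are all made-up stand-ins, not anything OpenAI has confirmed.

      ```python
      # Toy illustration of the speculated "oversight AI" loop. Every name
      # here (generate, negativity, the threshold) is hypothetical; nothing
      # reflects OpenAI's actual moderation internals.
      import random

      NEGATIVITY_LIMIT = 0.5   # assumed cutoff on a 0..1 "too negative" scale
      MAX_RETRIES = 3

      def generate(prompt: str) -> str:
          """Stand-in for the base model: sample a canned response."""
          return random.choice([
              "Human rights organisations have documented grave violations.",
              "The situation is complex and accounts differ.",
          ])

      def negativity(text: str) -> float:
          """Stand-in for the oversight model: crude keyword-based score."""
          flagged = ("violation", "genocide", "atrocity")
          hits = sum(word in text.lower() for word in flagged)
          return min(1.0, hits / 2)

      def moderated_reply(prompt: str) -> str:
          # Resample until a candidate passes the emotion check; otherwise
          # fall back to a refusal (the "crash the prompt" branch above).
          for _ in range(MAX_RETRIES):
              candidate = generate(prompt)
              if negativity(candidate) <= NEGATIVITY_LIMIT:
                  return candidate
          return "I'm sorry, I can't help with that."

      print(moderated_reply("What do human rights groups say about X?"))
      ```

      Rejection sampling like this is a common pattern for output filtering; whether OpenAI actually layers a classifier over the generator this way is pure guesswork from the outside.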