An Asian MIT student asked AI to turn an image of her into a professional headshot. It made her white, with lighter skin and blue eyes.

Rona Wang, a 24-year-old MIT student, was experimenting with the AI image creator Playground AI to create a professional LinkedIn photo.

  • @postmateDumbass

    The datasets will get better because people have started to care.

    Historically, much of the data used was whatever was easy and cheap to acquire: surveys of classmates, arrest reports, publicly available government-curated data.

    Good data costs money and time to create.

    The more people fact-check, the more flaws can be found and corrected. The more attention a dataset gets, the more likely funding is to come in to resurvey or w/e.

    It's part of the peer review thing.

    • @Pipoca

      It’s not necessarily a matter of fact checking, but of correcting for systemic biases in the data. That’s often not the easiest thing to do. Systems run by humans often have outcomes that reflect the biases of the people involved.

      The power of suggestion runs fairly deep with people. You can change a hiring manager’s opinion of a resume by only changing the name at the top of it. You can change the terms a college kid enrolled in a winemaking program uses to describe a white wine using a bit of red food coloring. Blind auditions for orchestras result in significantly more women being picked than unblinded auditions.

      Correcting for biases is difficult, and it's especially difficult on very large datasets like the ones you'd use to train ChatGPT. I'm really not very hopeful that ChatGPT will ever reflect only justified biases rather than the biases of the broader culture.
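
      To make the idea concrete: one common (and admittedly crude) correction is to reweight samples inversely to their group's frequency, so over-represented groups don't dominate training. This is just an illustrative sketch, not how any particular model is actually trained; the hard part, as noted above, is knowing which imbalances need correcting in the first place.

      ```python
      from collections import Counter

      def inverse_frequency_weights(labels):
          """Weight each sample inversely to its group's frequency,
          so over-represented groups contribute less per sample."""
          counts = Counter(labels)
          total = len(labels)
          # Normalized so the weights sum back to the dataset size
          return [total / (len(counts) * counts[g]) for g in labels]

      # Toy example: three samples from group "A", one from group "B"
      groups = ["A", "A", "A", "B"]
      weights = inverse_frequency_weights(groups)
      # Each "A" sample gets 4/6 ≈ 0.67; the lone "B" sample gets 4/2 = 2.0
      ```

      This only fixes representation counts, not the deeper problem the thread describes: if the labels themselves encode human bias (biased arrest records, biased hiring decisions), reweighting just trains the model on the same bias more evenly.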