• @SheeEttin
    1 year ago

    It’s still the fault of the person using it. If you feed it racist data, or if it produces racist output and you use it, you are the person responsible.

    If an AI told you to jump off a bridge, would you do it?

  • @Poayjay
      1 year ago

      The person using the tool doesn’t know if it’s biased. If you tell it, “using previous hiring data and this stack of resumes, pick the best candidate,” how are you responsible? You didn’t generate the training data. You didn’t write any code. You didn’t hire the previous employees. That’s the issue.

      Do you think a company is going to manually review a thousand resumes just to avoid bias?
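
      A minimal sketch of what I mean (toy numbers, made-up column names, assuming pandas and scikit-learn): nobody in this pipeline writes a racist rule; the model just learns whatever pattern sits in the old hiring decisions, including proxy features like zip code.

      ```python
      # Hypothetical "resume screener" trained on past hiring decisions.
      import pandas as pd
      from sklearn.linear_model import LogisticRegression

      # Historical data: "hired" reflects past (possibly biased) decisions,
      # and "zip_group" stands in for any proxy feature correlated with race.
      past = pd.DataFrame({
          "years_experience": [1, 5, 3, 7, 2, 6],
          "zip_group":        [0, 1, 0, 1, 0, 1],
          "hired":            [0, 1, 0, 1, 0, 1],
      })

      model = LogisticRegression().fit(
          past[["years_experience", "zip_group"]], past["hired"]
      )

      # Two new resumes, identical except for the proxy feature.
      resumes = pd.DataFrame({
          "years_experience": [4, 4],
          "zip_group":        [0, 1],
      })

      # The group-1 candidate scores higher despite equal experience.
      print(model.predict_proba(resumes)[:, 1])
      ```

      The user never chose race as a signal; the bias rides in on the training labels.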

    • @virr
        1 year ago

        That’s literally the hiring manager’s job: sort however many resumes they get, in an unbiased way, to find good candidates.

        If they sort them in a racist way, then the company is liable for those actions (sometimes the individual can also be liable). It doesn’t matter how they sorted to get there; they’re still liable. Blaming AI is just a way to shift liability when they get caught. They might try to shift liability because they’re incompetent or because they’re racist. Hold them accountable in either case.

    • @SheeEttin
        1 year ago

        That doesn’t matter. Regardless of the tool you use, if it produces a racist result, you are responsible for that.

        To use a more extreme example, let’s say you’re an engineer. If you use an AI to design a bridge, and that bridge collapses and kills people, you don’t get to say “well, the AI did it!” No. You chose to use the tool, and you chose to use its output.

        If you’re going to use any kind of automation, you are responsible for ensuring that it doesn’t run afoul of any laws or other regulations. If you can’t do that, don’t use it.