• @yesman
    4 points • 1 month ago

    The article is not about how the AI is responsible for the death. It’s likely that the woman would have died in the counterfactual.

    The question isn’t “how effective is AI?” The question is whether life-or-death decisions should be made by an electrified Oracle at Delphi. You must answer that question before “is AI effective?” becomes relevant.

    If somebody were adjudicating traffic court with Tarot cards, would you ask: well, how accurate are the cards compared to a judge?

    • @surewhynotlem
      5 points • 1 month ago

      Decisions should be made by whoever or whatever is most effective. That’s not even a debate. If the tarot cards were right more often than the judge, fire the judge and get me a deck, because the judge is clearly ineffective.

      You can’t privilege an approach just because it sounds more reasonable. It also has to BE more reasonable. It’s crazy to say, “I’m happy being wrong because I’m more comfortable with the process.”

      The trick, of course, is to find fair ways to measure effectiveness accurately and to make sure the measurement is repeatable. That’s a rabbit hole of challenges.

      • @rottingleaf
        1 point • 1 month ago

        The judge can bear legal responsibility. It’s a feedback loop: somebody should be responsible for failures. We live in a society. If that somebody is not the party causing the failures, things will get bad.

        With a deck of cards, it would have to be decided how responsibility is distributed among the party that replaced humans with it, the company producing the cards, and those interpreting the results.

    • @madsen
      4 points • 1 month ago

      Your point is valid regardless, but the article mentions nothing about AI. (“Algorithm” doesn’t mean “AI”.)