• @[email protected]
    1 month ago

    Just like a “corporate veil”, I’m sure this will get spun into a way to avoid responsibility. I mean, it shouldn’t, but that’s precisely why it will.

    “Your honor, Alphabet Inc. deeply regrets the misidentification & destruction of a civilian area, but the AI made an honest mistake. It’s unreasonable to expect manual review of EVERY neighborhood we target.”

    • Lvxferre [he/him]
      1 month ago

      This is likely true for what we already see. Fuck, people even use dumb word filters to avoid responsibility! (Cough Reddit mods “I didn’t do it, AutoMod did it” cough cough)

      That said, this specific problem could be solved by AGI or another truly intelligent system. My concern is more about an AGI knowingly bombing a civilian neighbourhood because, lacking morality, it concludes that doing so is better for everyone else. That would be way, waaaaay worse than false positives like in your example.

      • @[email protected]
        1 month ago

        Ugh, yes, the machines “know what’s best”.

        I was just assuming it would be used for blame management, regardless of whether it was an accident or not.

        “See, it wasn’t me! It was ScapegoatGPT!”