• @big_slap
    1 month ago

    I feel like the answer would be the person who decided to kill the kids, right? They're the one who made the call to commit the war crime.

    • @Fondots
      1 month ago

      The issue people are worried about is that no human is making the decision to kill kids; the AI is making the call. It's given some other objective and, in the process of carrying it out, decides to kill kids as part of achieving that objective.

      For example, you give an AI drone instructions to fly over an area, identify military installations, and drop bombs on them, and the AI misidentifies a school as a military base and bombs it. Or you send a dog bot in to patrol an area for intruders, and it misidentifies kids playing in the streets as armed insurgents.

      In a situation where it's human pilots, soldiers, analysts, and the like making the call, we would (or at least should) expect the people involved to face some sort of repercussions: jail time, fines, demotions, etc.

      None of which you can really do for a drone.

      And that's of course before you get into the really crazy sci-fi dystopia stuff, where you send a team of robots into a city with general instructions to clear it of insurgents, and the AI somehow concludes that the fastest and most efficient way to accomplish that is to just kill every person in the city, since it can't be absolutely sure who is and isn't a terrorist.

      • @big_slap
        1 month ago

        Good points, I agree.