• @KISSmyOSFeddit
    1 month ago

    A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    What’s an injury? Does this keep medical robots from cutting people open to perform surgery? What if the two parts conflict, like in a hostage situation? What even is “harm”? People usually disagree about what’s actually harming or helping, so how is a robot supposed to decide?

    A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

    If a human orders a robot to tear down a wall, how does the robot know whose wall it is or if there’s still someone inside?
    It would have to check all kinds of edge cases to make sure its actions harm no one before it starts working.
    Or it doesn’t, in which case anyone walking past my house could order my robot around just by yelling at it, because it must always obey human orders.

    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    OK, so if a dog runs up to the robot, the robot MUST kill it to be on the safe side.

    • @maryjayjay
      1 month ago

      And Asimov spent years and dozens of stories exploring exactly those kinds of edge cases, particularly how the laws interact with each other. It’s literally the point of the books. You can take any law and pick it apart like that; that’s why we have so many lawyers.

      The dog example is stupid “if you think about it for one minute” (I know it isn’t your quote, but you’re defending the position of the person I originally responded to). Several of your other scenarios, like the surgery one, are explicitly discussed in the literature.