• Evotech · 1 year ago

    Yeah, telling AI what not to do is highly ineffective.

        • WhiskyTangoFoxtrot · 1 year ago

          Yeah, but in Asimov’s case it was because strict adherence to the Three Laws often prevented the robots from doing what the humans actually wanted them to do. They wouldn’t just ignore a crucial “not” and do the opposite of what a law said.