• @Evotech
    13 · 8 months ago

    Yeah, telling AI what not to do is highly ineffective.

    • @WhiskyTangoFoxtrot
      14 · 8 months ago

      “Do not injure a human or, through inaction, allow a human to come to harm.”

        • @WhiskyTangoFoxtrot
          1 · 8 months ago

          Yeah, but in Asimov’s case it was because strict adherence to the Three Laws often prevented the robots from doing what the humans actually wanted them to do. They wouldn’t just ignore a crucial “not” and do the opposite of what a law said.