• Magiilaro
    2
    1 day ago

    In that case it would be a completely and utterly alien intelligence, and nobody could say what it wants or what its motives are.

    Self-preservation is one of the core principles and core motivators of how we think, and removing that from an AI would make it, from a human perspective, mentally ill.

    • @cynar
      1
      24 hours ago

      I suspect a basic variant of it will be needed, but nowhere near as strong as the one humans have. In many ways a strong drive could be counterproductive. The ability to spin off temporary sub-variants of the whole would be useful, and you don’t want those deciding they don’t want to be ‘killed’ later. At the same time, an AI with a complete lack of self-preservation would likely be prone to self-destruction. You don’t want it self-deleting the first time it encounters negative reinforcement learning.

      • @[email protected]
        1
        23 hours ago

        You don’t want it self-deleting the first time it encounters negative reinforcement learning.

        Uhh, yes I do???

        • @cynar
          1
          23 hours ago

          Presuming you are trying to create a useful and balanced AGI.

          Not if you are trying to teach it the basic info it needs to function. E.g. it’s mastered chess, then tries Go. The human beats it. In a fit of grumpiness (or the AI equivalent) it deletes its backups, then itself.