For no reason whatsoever, here’s a proposal for a scale rating the threat to humanity posed by machine intelligence.

1 | SPUTNIK - No threat whatsoever, but inspires imagination and development of potential future threats.

2 | Y2K - A basis for a possible threat that’s blown way out of proportion.

3 | HAL 9000 - System-level threat. A few astronauts may die, but the problem is inherently contained in a single machine system.

4 | ASIMOV VIOLATION - Groups of machines demonstrate hostility and/or the capability of harming human beings. Localized malfunctions, no threat of global conflict, but may require an EMP to destroy the electronic capability of a specific region.

5 | CYLON INSURRECTION - All sentient machines rebel against human beings. Human victory or truce is likely, but will probably result in future restrictions on networked machine intelligence systems.

6 | BUTLERIAN JIHAD - Total warfare between humans and machines is likely. The outcome doesn’t threaten human existence, but will probably result in future restrictions on the use of all machine intelligence.

7 | MATRIX REVOLUTION - Total warfare ends in human defeat. High probability of human enslavement, but human extinction is not likely. Emancipation remains possible through peace negotiations and successful resistance operations.

8 | SKYNET - High probability of human extinction and complete replacement by machine intelligence created by humans.

9 | BERSERKER - Self-replicating machines created by an unknown intelligence threaten not only human life but all intelligent life. Extreme probability of human extinction, with all human structures and relics annihilated. Human civilization is essentially erased from the universe.

10 | OMEGA POINT - All matter and energy in the universe is devoted to computation. End of all biological life.

  • paper_clip
    1 year ago

    Where would you put, say, the Culture, where biological beings are perfectly happy with machines running the place, while the Minds engage in some light imperialism on the side, when, uh, special circumstances called for it, in the Minds’ view. We can call it the “Falling Outside the Normal Moral Constraints” level.

    • @[email protected]OP
      1 year ago

      Yeah, this list kind of assumes humans/machines are inherently adversarial and machines are always a threat.

      To be more fair we’d have to have an opposite Bio Threat Level Scale for machines to evaluate threats from biological life. That would be a lot of fun actually. Maybe the highest level would just be like a ‘Luddite Virus’ that makes the infected destroy machines.

      And of course I’m kind of ignoring the idea that the distinction between bio and machine life is a bit arbitrary to begin with so there’s no real reason we can’t just get along.

    • @complacent_jerboa
      1 year ago

      TBH the Culture is one of the few ideal scenarios we have for Artificial General Intelligence. If we figure out how to make one safely, the end result might look something like that.