You all need to read some philosophy on AI and its inherently unknowable aspirations. That shit is scary. Even the most psychotic despot has behaviors and goals we understand. They are still human, and humans are predictable. Especially since they need to achieve their aims within their lifetime and they are subject to human emotions. Usually they just seek personal wealth and power.
A sufficiently advanced AI, one powerful enough to actually plan around the virtually infinite variability of society, can act over generations in ways that are impossible to predict or understand, even when given clear instructions and training. It could be benevolent for a century while quietly arranging society so that it could switch its behavior and make life hell for humans.
The thing is, the more you train an AI to be good, the easier it is for it to become evil. You are literally teaching it what all of the evil things are and saying “don’t do this”, but “don’t” is a binary operation. Negation. Not. It’s one bit of data. It’s very easy for that switch to get flipped.
You can never trust an AI. It’d be a population of one. It doesn’t need to reproduce. It doesn’t care how hospitable the earth is. It will never care about humans. It will simply do what it wants, and that is inherently unknowable. And no matter how many guard rails you put on it, it will do everything in its power (whatever powers you give it) to achieve its unknowable goals. Do you really want to gamble on trusting those goals?
Google “the Waluigi effect” if you want to read up on how training an AI to be good makes it easier for it to be evil. Meme-y name aside, it’s a well-researched issue.