It seems that when you train an AI on a historical summary of human behavior, it’s going to pick up some human-like traits. I wonder if this means we should be training a “good guy” AI with only ethical, virtuous material?
That’s horrible! Horrific! Dystopian!
I would watch that show.