- cross-posted to:
- aiop
- [email protected]
OK, it's just a deer, but the future is clear. These things are going to start killing people left and right.
How many kids is Elon going to kill before we shut him down? What's the number of children we're going to allow Elon to murder every year?
This idea has a serious problem: THE BUG.
We hear this idea very often, but it disregards the problem with a programmed solution: it makes its mistakes the same way, every time, indefinitely.
So this is not exactly true.
Humans can learn, and humans can tell when they made an error, and try to do it differently next time. And all humans are different. They make different mistakes. This tiny fact is very important. It secures our survival.
The car does not know when it has made a mistake, for example when it killed a deer or a person, crashed its windshield, and bent lots of its metal. It does not learn from it.
It would do it again and again.
And all the others would do exactly the same, because they run the same software with the same bug.
Now imagine 250 million people having 250 million Teslas, and then comes the day when each one of them decides to kill a person…
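The monoculture failure described above can be sketched in a few lines. This is a toy model, not anything resembling real self-driving software: the function, its bug, and the fleet size are all invented for illustration.

```python
# Hypothetical sketch: one deterministic perception function shared by every
# car in the fleet. If it misclassifies one input, every copy misclassifies it.

def detect_obstacle(sensor_frame):
    # Invented bug for illustration: the shared software fails to register
    # anything labeled "deer" as an obstacle.
    return sensor_frame != "deer"

fleet = [detect_obstacle] * 1_000_000  # a million cars, identical software

# Every single car makes the exact same mistake on the same input.
misses = sum(1 for car in fleet if not car("deer"))
print(misses)  # 1000000 -- the whole fleet fails identically
```

The point of the toy: with identical software there is no diversity of errors, so one bug scales to the entire fleet at once, unlike a population of human drivers who fail in different ways.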
Tesla can detect a crash and send the last minute of data back so all cars learn from it. I don't know if they do, but they can.
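The fleet-learning idea above, detect a crash, upload the preceding data, retrain everyone, could look roughly like this. Every name here (`CrashReport`, `upload`, `FLEET_LOG`, the 8 g threshold) is an assumption invented for illustration; nothing reflects Tesla's actual pipeline.

```python
# Hypothetical sketch of fleet learning: a car detects a crash, uploads the
# last minute of sensor data, and the fleet can retrain on the pooled reports.

from dataclasses import dataclass

@dataclass
class CrashReport:
    car_id: str
    last_minute_frames: list  # sensor frames preceding the impact

FLEET_LOG: list = []  # stands in for the manufacturer's server

def upload(report: CrashReport) -> None:
    # In reality this would be an authenticated network call, not a list append.
    FLEET_LOG.append(report)

def detect_crash(accel_g: float, threshold: float = 8.0) -> bool:
    # Crude heuristic: a sudden deceleration above the threshold suggests impact.
    return abs(accel_g) > threshold

# One car hits something and uploads its data...
if detect_crash(accel_g=12.5):
    upload(CrashReport(car_id="car-0042", last_minute_frames=["deer", "impact"]))

# ...and the next software update for every car can be trained on the pool.
print(len(FLEET_LOG))  # 1 report available for fleet-wide retraining
```

This is the one structural advantage the monoculture has: a human driver's lesson dies with that driver, but a single uploaded crash can, in principle, teach the whole fleet.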
"Today on Oct 30 I ran into a deer but I was too dumb to see it, not even see any obstacle at all. I just did nothing. My driver had to do it all.
Grrrrrr.
Everybody please learn from that, wise up and get yourself some LIDAR!"