- cross-posted to:
- technology
Ask an AI a question, then doubt the question's facts. AIs always believe their answer is correct. Imagine when the AI thinks "is this street clear?", answers yes to itself, slams into a parade full of people at top speed, and just keeps driving.
Language models are a completely different kind of AI though.
Fair. When IBM's Watson was on Jeopardy, it did show percentages for its confidence level. So I guess in my parade scenario, Elon has told engineers that 51% is more than halfway correct, so they have to program the AI to accept it as 100% correct.
My main point is that Elon is a snake oil salesman and a no-good grifter.
A little too much anthropomorphizing of AI.
Maybe we should set up some laws around this?
Yes. It won't make things better.
What a mess. A car that stops at red lights 99.9% of the time is not "Full Self-Driving." That's level 3 out of 5.