I disagree.
Machines are made by humans, and can be built responsibly.
But then the one who is to be held accountable is the human who made it, or the human who used it.
When a plane crashes, it isn't the plane's fault either.
Yeah, but even machines that are "built responsibly" (whatever that means) can make mistakes. Correction: they will make mistakes, because decision making isn't linear, and afaik computers are only good at linear tasks like calculations. And once they do, who should be held accountable? The AI's creator? The person or company that accepted whatever decision the AI made? Or nobody? When people decide, it's a bit easier to know who to blame. But how do you do that when the decision is made by an algorithm?
And sure, maybe AIs can help with decision making, but shouldn't decisions be made by people in the end?
It depends on what decision you’re looking at.