Especially if we let its half-baked incarnations operate our cars and act as claims adjusters for our for-profit healthcare system.
AI is already killing people for profit right now.
But, I know, I know: the slow, systemic death of the vulnerable and the ignorant is not as tantalizing a storyline as the doomsday events of Hollywood blockbusters.
AI is doing nothing. It’s not sentient, and it’s not making conscious decisions.
People are doing it.
An “AI”-operated machine gun turret doesn’t have to be sentient in order to kill people.
I agree that people are the ones allowing these things to happen, but software doesn’t need to have agency to appear that way to laypeople. And when people are placed in a “managerial” or “overseer” role, they behave as if the software knows more than they do, even when they’re subject matter experts.
Would it be different, as far as that ethical issue goes, if the AI-operated machine gun or the corporate software were driven by traditional algorithms instead of an LLM?
Because a machine gun does not need “modern” AI to be able to take aim and shoot at people, I guarantee you that.
No, it wouldn’t be different. Though it’d definitely be better to have a discernible algorithm / explicable set of rules for things like health care. Them shrugging their shoulders and saying they don’t understand the “AI” should be completely unacceptable.
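To make that contrast concrete, here’s a minimal sketch (in Python, with entirely hypothetical rule names, fields, and thresholds) of what a discernible set of rules could look like: every denial carries a human-readable reason traceable to a specific line of code, so “we don’t understand the AI” is never an available answer.

```python
# A minimal sketch of a rules-based claims adjudicator whose every
# decision carries a human-readable justification. All rule names,
# thresholds, and fields are hypothetical illustrations.
from dataclasses import dataclass, field


@dataclass
class Claim:
    procedure_code: str
    amount: float
    preauthorized: bool


@dataclass
class Decision:
    approved: bool
    reasons: list = field(default_factory=list)  # the audit trail


def adjudicate(claim: Claim) -> Decision:
    decision = Decision(approved=True)

    # Each rule is explicit: a denial can always be traced back to
    # the exact condition that triggered it and explained to the
    # claimant, unlike a black-box model.
    if claim.amount > 10_000 and not claim.preauthorized:
        decision.approved = False
        decision.reasons.append(
            "Claims over $10,000 require preauthorization (rule HYP-1)."
        )

    if claim.procedure_code.startswith("EXP-"):
        decision.approved = False
        decision.reasons.append(
            "Experimental procedures are not covered (rule HYP-2)."
        )

    if decision.approved:
        decision.reasons.append("No denial rules matched.")
    return decision


if __name__ == "__main__":
    result = adjudicate(Claim("EXP-042", 12_500.0, preauthorized=False))
    print(result.approved)            # False
    print(*result.reasons, sep="\n")  # every reason is inspectable
```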
I wasn’t saying AI = LLM either. Whatever drives Teslas is almost certainly not an LLM.
My point is that half-baked software is already killing people daily, but because it’s more dramatic to pontificate about the coming of Skynet, the “AI” people waste time on sci-fi nonsense scenarios instead of drawing any attention to that.
Fighting the ills bad software is already causing today would also do a lot to advance the cause of preventing bad software from reaching the imagined apocalyptic point in the future.