It’s even worse than that. The ones who should understand the tools decide the ease is good enough and just succumb to AI brain rot.
I’ve watched co-workers go from people I trusted to people I can’t trust anything from, because I know they just threw a prompt at an AI and never checked the output.
What’s worse is that when you come to them as an engineer and tell them they’re wrong, the burden falls on you to prove the AI is wrong, rather than on them to prove the AI is right.
Moreover, when you point them to the documentation, they can’t be bothered to read it; the AI didn’t say that, so the documentation must be wrong.