- cross-posted to:
- [email protected]
cross-posted from: https://lemmy.zip/post/1476431
Archived version: https://archive.ph/uwzMv
Archived version: https://web.archive.org/web/20230815095015/https://www.euronews.com/next/2023/08/14/how-ai-is-filtering-millions-of-qualified-candidates-out-of-the-workforce
The “AI” is simply a program used to filter data. Whose fault is it when using it causes problems? The nitwits who choose to use these programs and trust the results without understanding their limitations.
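To illustrate the failure mode being described, here is a hypothetical sketch of a naive keyword filter. This is not any vendor's actual system; real screening software is more complex, but a rigid match can reject qualified candidates over trivial wording differences in much the same way:

```python
# Hypothetical sketch of a naive resume keyword filter (illustrative only).
# The failure mode: a rigid verbatim match rejects qualified candidates
# whose resumes use slightly different wording.

def passes_filter(resume_text: str, required_keywords: list[str]) -> bool:
    """Accept only resumes containing every required keyword verbatim."""
    text = resume_text.lower()
    return all(keyword.lower() in text for keyword in required_keywords)

required = ["software engineer", "python"]

# A candidate who writes "software developer" instead of "software engineer"
# is filtered out despite having equivalent skills.
print(passes_filter("Senior software engineer, 10 years of Python", required))  # True
print(passes_filter("Senior software developer, 10 years of Python", required))  # False
```

A company deploying something like this without auditing what it rejects is the kind of misuse the comment is pointing at.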
The “guns don’t kill people, people kill people” argument, eh?
Not the most precise analogy, but sure: individuals and companies are responsible for actions, not inanimate objects. That line of thinking is also used as an argument against gun regulations, though. “AI” is a much broader field and a more general tool than a firearm, and it is also newer and less well understood. People seem, on average, confused by the hype and unclear on what these systems can and cannot do well, and what the appropriate uses are. I’d hate to see legislators who know barely anything about technology write laws restricting it prematurely, or, more likely, adopt laws written directly by companies and industry groups with hidden motives. For example, we have seen both Sam Altman and Musk call for restricting AI research, and in both cases the real motive seemed to be not safety but obtaining an advantage for their own companies.
For now, I agree with the other poster who said existing laws are sufficient. If a company were to discriminate in hiring, it wouldn’t matter whether they used a special program to make the decisions or not.