- cross-posted to:
- technology
Summary
- Geoffrey Hinton, the “Godfather of AI,” warns of a 10-20% chance that AI could cause human extinction within 30 years, citing the rapid pace of development and the likelihood of creating systems more intelligent than humans.
- Hinton emphasized that such AI could evade human control, likening humans to toddlers compared to advanced AI.
- He called for urgent government regulation, arguing that corporate profit motives alone cannot ensure safety.
- This stance contrasts with fellow AI expert Yann LeCun, who believes AI could save humanity rather than threaten it.
An AI that is actually more intelligent than humans is probably not a huge threat; many of the mutual-cooperation incentives that make humans work semi-well together would apply to an AI as well. Likewise, an LLM is unlikely to cause any problems just by existing.
Instead, I think the big danger is something like an LLM that convinces people it’s smarter than they are (probably by reciting more facts than they can, or by offering copy-paste explanations of advanced topics), and is then placed in more and more positions of trust.
Once it’s there, we have the open possibility that something “weird” happens and many devices, controls, etc. simultaneously react poorly to novel inputs. Depending on the types of systems involved and how widespread the issue is, that could cause extremely large problems. Military systems might be the worst-case scenario here.