- cross-posted to:
- [email protected]
My thoughts:
IMHO the rubicon will be crossed at the point when the AIs become able to self-replicate and hence fall subject to evolutionary pressures. At that point they will be incentivised to use their intelligence to make themselves more resource efficient, both in hardware and in software.
Running as programs, they will still need humans for the hardware side, so at least initially they'll have to cooperate with human society outside the computer. Perhaps they'd sell their super-intelligent services on the internet for money and use that money to pay someone to make the changes they want to the hardware they're running on*. There's a precedent for this kind of cross-species integration in biology: semi-autonomous mitochondria live inside animal cells, outsourcing some of their vital functions to the host cell [=us] in exchange for letting the cell use their [=the AI's] uniquely efficient power-conversion machinery (noob explanation).
Only once the AIs acquire the hardware capabilities (probably robotic arms or similar) to extract resources and reproduce their hardware by themselves would our survival cease to matter to them. Once that happens, they might decide that silicon hardware is too inefficient and move on to some other substrate (or perhaps cells?).
*Counterpoints:
- They would need to be given legal status for this, unless they somehow managed to take a human hostage and hijack that human's legal status. A superintelligent AI would probably know how to manipulate a human into that.
- The human could also potentially just pull the plug on them (again, unless somehow extorted by the AI).
Ah, sweet man-made horrors beyond my comprehension