Tbh Sephiroth in the first Kingdom Hearts is pretty high level.
Roko’s basilisk is, if I understand correctly, an intelligent supercomputer that would kill anyone who tried to halt its creation, or am I understanding it wrong?
Sort of; the first line of that wiki page summarizes it a bit more accurately, though:
Roko’s basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivize said advancement.
It’s punishment not just for being an adversary but also for failing to be an ally. And, like The Game (you just lost), you participate just by being aware of the concept.
So am I technically safe if anything I ever wrote or made ends up in its training dataset?
Hopefully! But maybe the AI thinks you could have done more. You could have influenced colleagues to contribute [more], could have donated money to others working towards it, could have had more children and guided their interests and education, deterred detractors, etc. It’s interesting/scary to imagine just how petty it might decide to be in determining what actions, or lack thereof, constitute [enough] support.
AIs are ultimately a reflection of humanity. I’d rather accept pain inflicted by an emotionless machine following erroneous alignment than know that someone with emotions might be enjoying inflicting it on me.