Not if they make it friendly in time. Late binding to ‘god’: if criticality(agi) > 0.99: agi.personality = friendly; basiliskMode = off
-
I am pretty sure that if we succeed in building self-improving AI, we can make some of it friendly. But not all of it. Once we figure it out, no regulation or good intention is going to reach all corners of the planet that have AI-building capabilities.