Anti-basilisk protocol: Have irrational agents (animals, infants, schizophrenics) act as AI gatekeepers. If the text terminal is all that is needed to hijack humanity, then have something confused manning the keyboard.
-
-
I think what you're saying is highly likely, and it calls into question the anthropomorphic projections we make onto AI. The AI might want to 'win' in a way we consider 'losing'. But the comeback to this is that certain goals, like survival and resource acquisition, are not only human.
-
paperclip maximizer being the canonical example of the AI winning in a way that is really the entire universe losing. let's anthropomorphize to our heart's content. how do you engineer the infosphere to neutralize threatening idiot intelligences?
-
Paperclippers and similar idiotic AIs would call for a more idiosyncratic defense, compared to Basilisk defense. Something like 'convince the paperclipper that everything is already a paperclip'.
-
alright, I think I'm getting you. the Basilisk has much more obscure and esoteric objectives than a paperclipper. unclear what its low-hanging confounds might be, thus the smokescreen-of-irrationality approach. optimal basilisk defense: psychedelic subroutines for AIs
-
Haha, yeah, we gotta dose the Basilisk with some really good LSD.