What is the research that would be most helpful for AI safety while still having value if it turns out not to be useful for safety research?
@The_Lagrangian safety/security is a tradeoff between getting what you want and not getting screwed over. Less of #1 means less opportunity for #2.
@The_Lagrangian that said, assuming the worst case, a theoretical framework for this could be interesting and short: https://twitter.com/allgebrah/status/713141180922519552
I have revisited the idea: the halting problem, with a derivative of us (nature as yet unknown) as the executing machines.
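The halting-problem framing above rests on the classic diagonalization argument: no total procedure can decide, for every program and input, whether that program halts. A minimal sketch of why (the function and argument names here are illustrative, not from the thread):

```python
# Hypothetical halting oracle: assumed, for the sake of argument, to decide
# whether program(arg) eventually halts. The proof shows it cannot exist
# as a total, correct function, so this stub only raises.
def halts(program, arg):
    raise NotImplementedError("no total halting decider can exist")

def diagonal(program):
    # The diagonal construction: do the opposite of what the oracle predicts.
    # If the oracle claims program(program) halts, loop forever; else halt.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Feeding diagonal to itself forces a contradiction either way:
# if halts(diagonal, diagonal) is True, diagonal(diagonal) loops (so it
# doesn't halt); if False, it halts. Hence `halts` cannot be implemented.
```

Applied to the tweet's framing, the open question is what changes when the "executing machine" is not an arbitrary Turing machine but some constrained derivative of us, which is where a theoretical treatment could say something new.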