What is the research that would be most helpful for AI safety while still having value if it turns out not to be useful for safety research?
Replying to @The_Lagrangian
@The_Lagrangian safety wrt what? threat models are important. (who attacks? what is attacked? how can it be attacked?)
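[A loose restatement of the three threat-model questions above as a data structure. This is an illustrative assumption added for clarity; none of the names or example values come from the thread.]

```python
from dataclasses import dataclass

@dataclass
class ThreatModel:
    attacker: str        # who attacks?
    asset: str           # what is attacked?
    attack_vector: str   # how can it be attacked?

# Example instantiation; the specifics are placeholders, not claims from the thread.
example = ThreatModel(
    attacker="misaligned optimizer",
    asset="reward channel",
    attack_vector="specification gaming",
)
print(example)
```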
Replying to @allgebrah
@The_Lagrangian safety/security is a tradeoff between getting what you want and not getting screwed over. less #1 => less opportunity for #2
Replying to @allgebrah
@The_Lagrangian that said, assuming worst case, a theoretical framework for this could be interesting and short: https://twitter.com/allgebrah/status/713141180922519552 …
Replying to @allgebrah @The_Lagrangian
have revisited the idea: the halting problem with a derivative of us (nature unknown yet) as the executing machines
6:51 AM - 16 May 2016
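[The last tweet leaves the halting-problem framing as a one-liner. Below is a minimal sketch of the diagonalization it alludes to; the names (`naive_halts`, `diagonal`) and the toy predictor are assumptions for illustration, not anything from the thread.]

```python
def naive_halts(func, arg):
    """Hypothetical halting predictor (assumed, for the sake of argument,
    to be total). This toy version simply guesses 'halts'."""
    return True

def diagonal(func):
    """Classic diagonal construction: do the opposite of whatever the
    predictor claims this very call will do."""
    if naive_halts(func, func):
        while True:       # predictor said "halts", so loop forever
            pass
    return "halted"       # predictor said "loops", so halt immediately

# The predictor claims diagonal(diagonal) halts...
print(naive_halts(diagonal, diagonal))   # -> True
# ...but by construction that call would then loop forever, so the claim
# is wrong. Any total predictor is defeated the same way; only the
# failure mode changes. (We don't actually call diagonal(diagonal) here,
# since that would hang this script.)
```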