What is the research that would be most helpful for AI safety while still having value if it turns out not to be useful for safety research?
Replying to @The_Lagrangian
@The_Lagrangian safety wrt what? threat models are important. (who attacks? what is attacked? how can it be attacked?)
Replying to @allgebrah
@allgebrah safety as in value alignment for general artificial intelligence a la MIRI
Replying to @The_Lagrangian
@The_Lagrangian I find it hard to talk about safety of the intransitive kind but maybe that's just my infosec background
Replying to @allgebrah
@The_Lagrangian do MIRI have a threat model or a justification for not having one?
@The_Lagrangian btw this looks very similar to transitive/intransitive morals (morals towards a cause vs self-evident ones)
3:36 PM - 24 Mar 2016