What is the research that would be most helpful for AI safety while still having value if it turns out not to be useful for safety research?
@allgebrah safety as in value alignment for general artificial intelligence, à la MIRI
@The_Lagrangian I find it hard to talk about safety of the intransitive kind, but maybe that's just my infosec background
New conversation
@The_Lagrangian safety/security is a tradeoff between getting what you want and not getting screwed over. less #1 => less opportunity for #2
@The_Lagrangian that said, assuming worst case, a theoretical framework for this could be interesting and short: https://twitter.com/allgebrah/status/713141180922519552
New conversation