What research would be most helpful for AI safety while still having value if it turns out not to be useful for safety?
@The_Lagrangian safety wrt what? threat models are important.
(who attacks? what is attacked? how can it be attacked?)
safety as in value alignment for general artificial intelligence, à la MIRI
@The_Lagrangian I find it hard to talk about safety of the intransitive kind, but maybe that's just my infosec background
@The_Lagrangian Do MIRI have a threat model, or a justification for not having one?
@The_Lagrangian btw this looks very similar to transitive/intransitive morals (morals towards a cause vs self-evident ones)

