Why don’t people care more about averting s-risks?
The literal devil is in the details… no idea. But I’d much rather have people who would otherwise resign tackle s-risks than people who’d have to spread themselves thin across different AI safety projects.
State-dependent memory access and/or lack of regular cultivation of the critical mixture of equanimity and compassion (at high enough doses).
It's hard to work on something that's not the mode of your probability distribution.
Oh, good point! Yeah, I’ve seen someone argue that s-risks can be ignored because they’re < 10% as likely as other x-risks… I’m a bit skeptical of EV reasoning in edge cases, but this is not one, not to me.
Some x-risk work still seems higher EV to me (more gut feeling than hard forecasting :/), but it's a close enough call that I'm on the side looking for tractable s-risk work (in particular low-hanging fruit that's quick to pick), plus moral philosophy to make better estimates.
Yeah, the tractability is actually what makes most difference to me. Today’s approaches to alignment seem vastly more speculative to me than a good part of the work on s-risks (e.g., longtermrisk.org/research-agend), in particular because there is often almost no “deployment problem.”
I guess mostly because it seems orders of magnitude more unlikely than x-risks. If there is a strong case that s-risks are probable on current paths, then we need to work on them a lot, and the neglect comes down to lack of knowledge.
Okay. Yeah, I think 1 or 2 OOMs seems fair. Less than 1% as likely seems unlikely to me… Then again, no one has managed to produce any real quantitative models for that. I think a lot hinges on how likely a multipolar takeoff is. A model for that would help a lot.