Conversation

has not occurred to them to take it seriously, same as capabilities people about X-risk. i'm fairly concerned about them but it *seems* like alignment (of the kind i'm working on) is still the best thing to work on, including to reduce S-risk. perhaps unlike others, i don't…
The literal devil is in the details… no idea. But I’d much rather people who would otherwise resign tackle s-risks instead rather than people who’d have to spread themselves thin between different AI safety projects.
It's hard to work on something that's not the mode of your probability distribution.
Oh, good point! Yeah, I’ve read someone argue that s-risks can be ignored because they’re < 10% as likely as other x-risks… I’m a bit skeptical of EV in edge cases, but this is not one, not to me.
Some x-risk work still seems higher EV to me (more gut feeling than hard forecasting :/), but it's a close enough call that I'm on the side looking for tractable s-risk work (in particular low-hanging fruit that's quick to pick), plus moral philosophy to make better estimates.
I guess mostly because it seems orders of magnitude more unlikely than x-risks. If there is a strong case that s-risks are probable on current paths and we need to work on them a lot, then it's a lack of knowledge on my part.
Okay. Yeah, I think 1 or 2 OOMs seems fair. Less than 1% as likely seems implausible to me… Then again, no one has managed to really produce any quantitative models for that. I think a lot hinges on how likely a multipolar takeoff is going to be. A model for that would help a lot.