I’m getting seriously mad at optimizer theology. That’s what got us into this mess. Specific ideas like lean/fat don’t kill. Mathematical techniques like optimization don’t kill.
What kills is idiots fetishizing what they know over what they don’t. Optimizers incapable of doubt.
Doubt is not uncertainty or risk
Doubt is not a probability estimate < 1
Doubt is not ambiguity
Doubt is the capacity for living with a consciousness of true ignorance without anxiously covering it up
Optimizer theology, as opposed to the math of it, is about removing doubt
Replying to
this rhymes a lot with ai-risk concerns, esp. corrigibility
the "without anxiously covering it up" seems to be the hard part in that case
transparency can only help so much, the fundamental problem is the desirability of a cover up
Replying to
AI risk theology is the same as optimizer theology. I have zero patience for it. It is not even wrong.
Replying to
sorry to try your patience, but i'm confused at how it's the same
like, do you think it's misguided to even try to control an optimization process, or something along those lines?
Replying to
Not a position I’m interested in elaborating. Would take too long. I think the rationalist/AI-risk community is deeply not-even-wrong about almost everything, but it’s not a view I debate or defend. takes it on and I’m loosely aligned with him.
i've read (most of?) your blog, and while i've seen your criticisms of the rationality community, i don't remember seeing a criticism of AI risk as an issue (as opposed to the community's methods/framing)
do you have something like this i could read?
Yes, I haven’t written about AI risk as such. I’ve written about AI hype:
I think sudden-takeoff superintelligence is unlikely for the foreseeable future, but it can’t be ruled out, so it’s reasonable to have a few people trying to think about it.
OTOH, it’s like thinking about how to defend against hypothetical hostile aliens with FTL drives. Unless you have *some* idea about how an FTL drive might work, you really can’t get started. And we don’t have any plausible stories about how AGI might work either.
I’d rather more people worked on that problem by writing sci-fi.
I agree! OTOH, I can’t have huge confidence in my judgement about how to approach the problem. On the third hand, mathematical logic is about the last approach I’d favor, so…