Doubt is not uncertainty or risk. Doubt is not a probability estimate < 1. Doubt is not ambiguity. Doubt is the capacity for living with a consciousness of true ignorance without anxiously covering it up. Optimizer theology, as opposed to the math of it, is about removing doubt.
Replying to @vgr
this rhymes a lot with ai-risk concerns, esp. corrigibility. the "without anxiously covering it up" seems to be the hard part in that case. transparency can only help so much; the fundamental problem is the desirability of a cover-up.
Replying to @AdeleDeweyLopez
AI risk theology is the same as optimizer theology. I have zero patience for it. It is not even wrong.
Replying to @vgr
sorry to try your patience, but i'm confused at how it's the same. like, do you think it's misguided to even try to control an optimization process, or something along those lines?
Replying to @AdeleDeweyLopez
Not a position I’m interested in elaborating. Would take too long. I think the rationalist/AI-risk community is deeply not-even-wrong about almost everything, but it’s not a view I debate or defend.
@Meaningness takes it on and I’m loosely aligned with him.
Replying to @vgr @Meaningness
@Meaningness i've read (most of?) your blog, and while i've read the criticisms of the rationality community, i don't remember seeing a criticism of AI-risk as an issue (as opposed to the community's methods/framing). do you have something like this i could read?
Replying to @AdeleDeweyLopez @vgr
Yes, I haven’t written about AI risk as such. I’ve written about AI hype: https://meaningness.com/metablog/artificial-intelligence-progress …
I think sudden-takeoff superintelligence is unlikely for the foreseeable future, but it can’t be ruled out, so it’s reasonable to have a few people trying to think about it.
OTOH, it’s like thinking about how to defend against hypothetical hostile aliens with FTL drives. Unless you have *some* idea about how an FTL drive might work, you really can’t get started. And we don’t have any plausible stories about how AGI might work either.
Replying to @Meaningness @AdeleDeweyLopez
I’d rather more people work on that problem by writing sci-fi
Replying to @vgr @AdeleDeweyLopez
I agree! OTOH, I can’t have huge confidence in my judgement about how to approach the problem. On the third hand, mathematical logic is about the last approach I’d favor, so…