Ultimately this insanity is due to optimizer mentality. Fat in the system is underutilized resources held in reserve for alternative adjacent realities. That's mediocritization thinking. https://www.ribbonfarm.com/2019/04/15/mediocratopia-4/
-
Insurance is NOT the right way to think about the adjacent possible. Insurance can at best pick out a countably infinite set of scenarios to hedge against. The adjacent possible is a continuum of normal accidents waiting to happen. https://en.wikipedia.org/wiki/Normal_Accidents
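One way to make the countable-vs-continuum point precise (a sketch under an assumed model, not anything stated in the thread): treat the adjacent possible as a probability space $(\Omega, \mathcal{F}, P)$ of accident scenarios with $P$ atomless, so no single scenario carries positive mass. An insurance portfolio hedges a countable set of named scenarios $H = \{\omega_1, \omega_2, \ldots\} \subset \Omega$. Then

$$P(H) = \sum_{i=1}^{\infty} P(\{\omega_i\}) = 0 \quad\text{and}\quad P(\Omega \setminus H) = 1,$$

i.e. almost every normal accident falls outside any countable hedge, which is the sense in which insurance cannot cover a continuum.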
-
I’m getting seriously mad at optimizer theology. That’s what got us into this mess. Specific ideas like lean/fat don’t kill. Mathematical techniques like optimization don’t kill. What kills is idiots fetishizing what they know over what they don’t. Optimizers incapable of doubt.
-
Doubt is not uncertainty or risk
Doubt is not a probability estimate < 1
Doubt is not ambiguity
Doubt is the capacity for living with a consciousness of true ignorance without anxiously covering it up
Optimizer theology, as opposed to the math of it, is about removing doubt
-
Replying to @vgr
this rhymes a lot with ai-risk concerns, esp. corrigibility
the "without anxiously covering it up" seems to be the hard part in that case
transparency can only help so much; the fundamental problem is the desirability of a cover-up
-
Replying to @AdeleDeweyLopez
AI risk theology is the same as optimizer theology. I have zero patience for it. It is not even wrong.
-
Replying to @vgr
sorry to try your patience, but i'm confused at how it's the same
like, do you think it's misguided to even try to control an optimization process, or something along those lines?
-
Replying to @AdeleDeweyLopez
Not a position I’m interested in elaborating. Would take too long. I think the rationalist/AI-risk community is deeply not-even-wrong about almost everything, but it’s not a view I debate or defend.
@Meaningness takes it on and I’m loosely aligned with him.
-
Replying to @vgr @Meaningness
@Meaningness i've read (most of?) your blog and while i've read the criticisms of the rationality community, i don't remember seeing a criticism of AI-risk as an issue (as opposed to the community's methods/framing)
do you have something like this i could read?
-
Replying to @AdeleDeweyLopez @vgr
Yes, I haven’t written about AI risk as such. I’ve written about AI hype: https://meaningness.com/metablog/artificial-intelligence-progress
-
I think sudden-takeoff superintelligence is unlikely for the foreseeable future, but it can’t be ruled out, so it’s reasonable to have a few people trying to think about it.
-
OTOH, it’s like thinking about how to defend against hypothetical hostile aliens with FTL drives. Unless you have *some* idea about how an FTL drive might work, you really can’t get started. And we don’t have any plausible stories about how AGI might work either.
-
Replying to @Meaningness @AdeleDeweyLopez
I’d rather more people work on that problem by writing sci-fi