Savings in financial form, which essentially embody expectations information, are just not a good instrument for storing wealth in a way that hedges against uncertainty. Uncertainty destroys models. The only place to store wealth surpluses is in capability surpluses.
-
What was bad about stock buybacks was that they overvalued production flows in current reality rather than capability stocks in adjacent realities. That’s the pernicious part about them. What if that value had been stored in empty, underutilized hospitals, or warehouses of ventilators?
-
You know the only area where we understand this? Military capability. If you want peace, prepare for war; if you want war, prepare for peace. We don’t see piles of barely-used tanks and missiles as unproductive capital. We see them as adjacent-possible potentiality.
-
Ultimately this insanity is due to optimizer mentality. Fat in the system means underutilized resources for alternative adjacent realities. That’s mediocratization thinking. https://www.ribbonfarm.com/2019/04/15/mediocratopia-4/
-
Insurance is NOT the right way to think about the adjacent possible. Insurance can at best pick out a countably infinite set of scenarios to hedge against. The adjacent possible is a continuum of normal accidents waiting to happen. https://en.wikipedia.org/wiki/Normal_Accidents
-
I’m getting seriously mad at optimizer theology. That’s what got us into this mess. Specific ideas like lean/fat don’t kill. Mathematical techniques like optimization don’t kill. What kills is idiots fetishizing what they know over what they don’t. Optimizers incapable of doubt.
-
Doubt is not uncertainty or risk. Doubt is not a probability estimate < 1. Doubt is not ambiguity. Doubt is the capacity for living with a consciousness of true ignorance without anxiously covering it up. Optimizer theology, as opposed to the math of it, is about removing doubt.
-
Replying to @vgr
this rhymes a lot with ai-risk concerns, esp. corrigibility. the "without anxiously covering it up" seems to be the hard part in that case. transparency can only help so much; the fundamental problem is the desirability of a cover-up
-
Replying to @AdeleDeweyLopez
AI risk theology is the same as optimizer theology. I have zero patience for it. It is not even wrong.
-
Replying to @vgr
sorry to try your patience, but i'm confused at how it's the same. like, do you think it's misguided to even try to control an optimization process, or something along those lines?
-
Not a position I’m interested in elaborating. Would take too long. I think the rationalist/AI-risk community is deeply not-even-wrong about almost everything, but it’s not a view I debate or defend. @Meaningness takes it on and I’m loosely aligned with him.
-
Replying to @vgr @Meaningness
@Meaningness i've read (most of?) your blog, and while i've read the criticisms of the rationality community, i don't remember seeing a criticism of AI-risk as an issue (as opposed to the community's methods/framing). do you have something like this i could read?
-
Replying to @AdeleDeweyLopez @vgr
Yes, I haven’t written about AI risk as such. I’ve written about AI hype: https://meaningness.com/metablog/artificial-intelligence-progress
-