Conversation

The inadequacy of fiscal mechanisms for dealing with global disasters is mind-boggling. On the one hand, this $2T deal is the largest thing of its kind ever. OTOH it buys 3 months of time tops, best-case implementation. Drop in this bucket.
Individuals, corporations, states... emergency funds saved at any level get blown through in months. Stock buybacks are a red herring. Distasteful self-dealing perhaps, by some views, but if that hadn’t been the norm the outcome would have been no different.
Savings in financial form, essentially embodying expectations information, are just not a good instrument for storing wealth capable of hedging against uncertainty. Uncertainty destroys models. The only place to store wealth surpluses is in capability surpluses.
What was bad about stock buybacks was that they overvalued production flows in current reality rather than capability stocks in adjacent realities. That’s the pernicious part about them. What if that value had been stored in empty, underutilized hospitals, or warehouses of ventilators?
You know the only area where we understand this? Military capability. If you want peace, prepare for war; if you want war, prepare for peace. We don’t see piles of barely-used tanks and missiles as unproductive capital. We see it as adjacent-possible potentiality.
I’m getting seriously mad at optimizer theology. That’s what got us into this mess. Specific ideas like lean/fat don’t kill. Mathematical techniques like optimization don’t kill. What kills is idiots fetishizing what they know over what they don’t. Optimizers incapable of doubt.
Doubt is not uncertainty or risk. Doubt is not a probability estimate < 1. Doubt is not ambiguity. Doubt is the capacity for living with a consciousness of true ignorance without anxiously covering it up. Optimizer theology, as opposed to the math of it, is about removing doubt.
This rhymes a lot with AI-risk concerns, esp. corrigibility. The "without anxiously covering it up" seems to be the hard part in that case. Transparency can only help so much; the fundamental problem is the desirability of a cover-up.
Not a position I’m interested in elaborating. Would take too long. I think the rationalist/AI-risk community is deeply not-even-wrong about almost everything, but it’s not a view I debate or defend. takes it on and I’m loosely aligned with him.