On the attractiveness of having choice taken away. (Just from my personal notes on EA. Applies to many other strong ideologies too, of course, this just happens to be on my mind right now.)
This was my starting assumption with EA, carried over from my starting assumption about all of LW-style rationalism. It turns the identification of a unique best option in all circumstances into a legibilizing fetish. Sometimes helpful, usually not.
I think the idea of "too many choices" is actually a misframing, and the paradox of choice is ill-posed. Huge swathes of the option space collapse into a fairly small number of indifference classes, and the "real" options (which take narrative insight to judge) are few.
Imo, the philosophically sound approach is to calibrate the appropriate amount of doubt to live with, given the uncertainty and ambiguity of your circumstances, and to adopt varied decision-making styles accordingly. Otherwise you risk projecting your fear of uncertainty onto the situation.
You can consciously calibrate it in the same way you can calibrate whether you can lift a weight safely. Depending on your weight-training condition, you may be able to lift 50 lb or 500 lb. Near your upper limit, precise form matters; in the middle, you can be sloppy.
I think doubt is the same way. If you spend a lot of time making fast ten-second decisions with 70% certainty and "being right a lot," the way Amazon, for example, trains its leaders to, you'll get good at that calibrated doubt level. If that level is too "heavy" for you, you'll overthink and do badly anyway.
There's some relationship to Klein's recognition-primed decision model and to System 1/System 2, but I think the main thing going on here is psyche strength. If you operate in a decision regime for long enough, the main effect is that you learn to regulate the relevant emotions (fear, doubt) well.
To bring it back to EA: if you were to actually model philanthropy decisions taking into account both the uncertainty/ambiguity along the chain of impact AND your own quality of emotional regulation, I think you'd end up in one of two fairly predictable places...
...you'd end up concluding you should EITHER give only in your neighborhood OR go to the "last mile" of a cause you care about, get attuned to the doubt levels there, and only start giving once attuned. And whaddaya know, "local giving" and "trusted fieldworkers" are historically validated models.
That said, I do think EA-type approaches bring some valuable decision frames to the problem, and they're a useful foil/complement to traditional models. Their value is just overstated, and it is definitely NOT a model I'd choose to universalize. Maybe 25-30% of philanthropic dollars should go via EA at most.