12) In many cases I think $10k is a reasonable bet. But I, personally, would go bigger: probably more like $50k. Why? Because ultimately my utility function isn’t really logarithmic. It’s closer to linear.
13) Sure, I wouldn’t care to buy 10,000 new cars if I won the coinflip. But I’m not spending my marginal money on cars anyway. I’m donating it. And the scale of the world’s problems is… huge.
14) 400,000 people die of malaria each year. It costs something like $5k to save one person from malaria, or $2b total per year. So if you want to save lives in the developing world, you can blow $2b a year just on malaria.
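To make the arithmetic explicit, here’s a quick sketch in Python. The figures are the thread’s rough estimates, not precise numbers:

```python
# Rough cost to avert all annual malaria deaths, using the thread's
# estimates (~400,000 deaths/year, ~$5k per life saved).
deaths_per_year = 400_000
cost_per_life_usd = 5_000

total_usd = deaths_per_year * cost_per_life_usd
print(f"${total_usd:,} per year")  # -> $2,000,000,000 per year, i.e. ~$2b
```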
15) And that’s just the start. If you look at the scale of funds that could productively be spent on diseases, global warming, emerging technological risk, animal welfare, nuclear security, etc., you get numbers reaching into the trillions.
16) So at the very least, you should be using that as your baseline. And Kelly tells you that when the backdrop is trillions of dollars, there’s essentially no risk aversion on the scale of thousands or millions.
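As a concrete illustration of the Kelly point: the exact bet isn’t restated here, so the sketch below assumes a hypothetical even-money coinflip won 60% of the time. The takeaway is that once the bankroll includes a trillion-dollar backdrop, the Kelly-optimal stake dwarfs any personal-scale wealth:

```python
# Kelly-optimal stake for a hypothetical even-money coinflip won with
# probability p (these odds are assumptions, not from the thread).
p = 0.60                           # assumed win probability
b = 1.0                            # even-money payout per dollar staked
kelly_fraction = p - (1 - p) / b   # classic Kelly formula: f* = p - q/b

personal_wealth = 100_000          # hypothetical personal bankroll
backdrop = 1_000_000_000_000       # ~$1T of productive funding the money backs

# Sizing against personal wealth alone vs. wealth plus the backdrop:
print(f"alone:    ${kelly_fraction * personal_wealth:,.0f}")  # $20,000
print(f"backdrop: ${kelly_fraction * (personal_wealth + backdrop):,.0f}")
# -> ~$200 billion: at the thousands-to-millions scale, effectively
#    no risk aversion at all.
```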
17) Put another way: if you’re maximizing EV(log(W+$1,000,000,000,000)) and W is much less than a trillion, this is very similar to just maximizing EV(W).
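A quick numeric check of that claim (a made-up 50/50 bet; only the $1T backdrop comes from the tweet above):

```python
import math

T = 1_000_000_000_000  # the $1T backdrop

def u(w):
    # log utility shifted by the trillion-dollar backdrop
    return math.log(w + T)

# Hypothetical 50/50 bet between $0 and $10,000,000 (EV = $5,000,000).
expected_utility = 0.5 * u(0) + 0.5 * u(10_000_000)

# Certainty equivalent: the sure amount with the same expected utility.
certainty_equivalent = math.exp(expected_utility) - T
print(f"${certainty_equivalent:,.2f}")
# -> ~$4,999,987.50, within ~0.0003% of the risk-neutral $5,000,000:
#    at this scale the log utility is effectively linear.
```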
18) Does this mean you should be willing to accept a significant chance of failing to do much good sometimes? Yes, it does. And that’s ok. If it was the right play in EV, sometimes you win and sometimes you lose.
19) And more generally, you can treat everyone contributing to the cause as one portfolio. That’s certainly how it looks to the child dying of malaria: they aren’t worried about who, specifically, funded their safety.
20) So what does all this mean? It means that, when you’re thinking about your career, sometimes the altruistic thing to do is to take chances. Seek out the opportunities with the biggest upside, not the safest ones, and home in on that vision.