So my objection to the maximize-EV(utility) heuristic is that over time it will inevitably box me into an outcome where almost all of my expected utility is enjoyed by a version of me that lives in a vanishingly unlikely world, regardless of what my utility function on wealth is.
Your heuristic is that you prefer strategy A to strategy B if A has higher EV(utility). My heuristic is that I always prefer strategy B to strategy A if B leads to higher utility with probability (1 - epsilon)—regardless of how high my utility would be in the epsilon case.
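A minimal sketch of how the two rules can disagree on a single toy gamble (the numbers are my own illustrative assumptions, not from the thread):

```python
# Strategy A: wealth 1.0 almost surely, plus a jackpot of 10**9 at probability EPS.
# Strategy B: wealth 2.0 with certainty.
EPS = 1e-6

def ev(outcomes):
    """Expected wealth of a list of (probability, wealth) pairs."""
    return sum(p * w for p, w in outcomes)

A = [(1 - EPS, 1.0), (EPS, 10**9)]
B = [(1.0, 2.0)]

print(ev(A))  # ~1001.0 -- the EV rule prefers A, driven entirely by the rare jackpot
print(ev(B))  # 2.0
# The almost-sure rule prefers B: B beats A with probability 1 - EPS,
# no matter how large the jackpot in the EPS branch is made.
```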
Now, whether Kelly actually outperforms your strategy with probability (1 - epsilon) does depend on some assumptions, including that we take the limit as t goes to infinity. But I think that is a separate disagreement.
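In case it helps, here is a quick simulation sketch of that limiting claim, under an assumed setup of even-money bets with win probability 0.6 (my choice of parameters, not the thread's):

```python
import random

# Kelly bets f* = 2p - 1 of current wealth each round; the EV(wealth)-maximizer
# bets everything every round (all-in maximizes expected wealth when p > 0.5).
P_WIN, ROUNDS, PATHS = 0.6, 200, 10_000
KELLY_F = 2 * P_WIN - 1  # 0.2 for even odds

def kelly_beats_all_in():
    kelly, all_in = 1.0, 1.0
    for _ in range(ROUNDS):
        win = random.random() < P_WIN
        kelly *= (1 + KELLY_F) if win else (1 - KELLY_F)
        all_in *= 2 if win else 0  # one loss and the all-in bettor is ruined
    return kelly > all_in

ahead = sum(kelly_beats_all_in() for _ in range(PATHS)) / PATHS
print(f"Kelly ahead on {ahead:.2%} of paths")  # approaches 100% as ROUNDS grows
```

The all-in strategy still has the higher expected wealth (it grows like 1.2^ROUNDS), which is exactly the epsilon-case tension the thread is about.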
Do you agree that it’s coherent (and indeed kinda reasonable) for me to prefer the gamble where my utility is almost surely higher, even if my average utility across all outcomes is lower because I miss out on a very rare outcome with astronomical utility?
It seems like your utility function is just log(wealth), in which case maximizing EV(utility) is the same as Kelly?
And then it's your *wealth*, not your utility, that is almost surely higher?
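For reference, the textbook derivation behind "maximizing EV(log wealth) = Kelly", for an even-money bet of fraction f of wealth with win probability p (standard setup, not spelled out in the thread):

$$\mathbb{E}[\log W_{t+1} \mid W_t] = \log W_t + p\log(1+f) + (1-p)\log(1-f)$$

Setting the derivative with respect to f to zero gives p/(1+f) = (1-p)/(1-f), i.e. f* = 2p - 1, the Kelly fraction for even odds.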
No! The same paradox can arise if my utility function is log(wealth). That just means my wealth in the infinitesimal case would need to be exponentially higher to make the EV of that strategy higher.
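To spell out "exponentially higher" (my arithmetic, under the simplifying assumption that the safe strategy yields wealth W_0 for sure, while the rival yields wealth W with probability epsilon and wealth 1, i.e. log-utility 0, otherwise): with utility log(wealth), the rival's EV wins only when

$$\varepsilon \log W > \log W_0 \iff W > W_0^{1/\varepsilon}$$

So at epsilon = 10^-6 the jackpot wealth has to exceed W_0 raised to the millionth power before EV-maximization prefers the rival strategy.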
Could you keep iterating this argument?
Wanting almost surely higher wealth can be satisfied by maximizing EV[log(wealth)]
Wanting almost surely higher log(wealth) can be satisfied by maximizing EV[log(log(wealth))]
So your utility function is log(log(wealth))?
It's not log(log(wealth)) either! You can repeat the process as much as you want; I will always disregard the infinitesimal outcome no matter how high. I am not trying to maximize an expectation value at all!
What are you trying to maximize?
I claim that if you are trying to end up with almost surely higher f(wealth), you are maybe also maximizing EV[log(f(wealth))]?
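For what it's worth, the usual law-of-large-numbers argument behind these almost-surely claims, assuming IID per-round growth factors X_s and taking W_0 = 1 (my formalization, not from the thread):

$$\frac{1}{t}\log W_t = \frac{1}{t}\sum_{s=1}^{t}\log X_s \xrightarrow{\text{a.s.}} \mathbb{E}[\log X]$$

So the strategy maximizing E[log X] has the highest almost-sure growth rate, which is what makes its wealth (and any increasing function of its wealth) almost surely higher in the t → ∞ limit.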
I'm not sure there's a way to frame it as maximization of some value. Here's basically the axiom of my decision theory: if strategy A results in higher wealth than strategy B with probability 1 (en.wikipedia.org/wiki/Almost_surely), then I prefer strategy A to strategy B.
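Here is a rough sketch of that axiom as an empirical test (my illustration; true almost-sure dominance requires probability exactly 1, which sampling can only approximate):

```python
import random

def prefers(strategy_a, strategy_b, samples=100_000):
    """Approximate the axiom: prefer A if A's wealth beats B's on every sample."""
    wins = sum(strategy_a() > strategy_b() for _ in range(samples))
    return wins == samples

# Hypothetical strategies: B is a sure 2.0; A is 1.0 plus an astronomically
# rare jackpot. EV(A) >> EV(B), but B beats A with probability 1 - 1e-9.
A = lambda: 10**15 if random.random() < 1e-9 else 1.0
B = lambda: 2.0
print(prefers(B, A))  # True on almost every run
```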
What happens if we bar talking about "literally infinity" because it isn't realistic, and instead allow numbers which are VERY LARGE? Like, in particular, what if we ban models/charts/simulations that go out further than the number of atoms in the universe?
Would you present a modified, realistic version of your axiom?


