The Kelly criterion does not assume that you prefer to maximize log(wealth). It assumes that you would prefer having more wealth to having less wealth, and it guides you to the strategy that leaves you with more wealth than any other strategy would in 99.99...% of worlds (in the long run).
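(A minimal Monte Carlo sketch of that claim, with assumed numbers that aren't from the thread: a repeated even-money bet won with probability 0.6, where the Kelly fraction is 0.2, versus an arbitrary rival fraction.)

```python
import numpy as np

# Assumed example, not from the thread: a repeated even-money bet won with
# probability p = 0.6. The Kelly fraction here is f* = p - (1 - p) = 0.2.
rng = np.random.default_rng(0)
p, n_rounds, n_worlds = 0.6, 10_000, 2_000
kelly_f, rival_f = 0.2, 0.4  # rival_f can be any other fixed fraction

wins = rng.random((n_worlds, n_rounds)) < p
# Wealth compounds multiplicatively, so compare summed log returns per world.
log_w_kelly = np.where(wins, np.log1p(kelly_f), np.log1p(-kelly_f)).sum(axis=1)
log_w_rival = np.where(wins, np.log1p(rival_f), np.log1p(-rival_f)).sum(axis=1)

print(f"Kelly ahead in {(log_w_kelly > log_w_rival).mean():.1%} of worlds")
# Prints ~100.0%; the share tends to 1 for ANY rival fraction as n_rounds grows.
```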
Causation is tricky, but Kelly is mathematically equivalent to maximizing EV(log(wealth)), so assuming one gives you the other.
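(A quick numeric check of that equivalence, under the same assumed setup: a bet paying b-to-1 won with probability p. The fraction that maximizes EV(log(wealth)) matches the closed-form Kelly fraction f* = p - (1 - p)/b.)

```python
import numpy as np

# Assumed example: a bet paying b-to-1, won with probability p. Grid-search
# the fraction f that maximizes EV(log(wealth)) and compare it with the
# closed-form Kelly fraction f* = p - (1 - p) / b.
p, b = 0.6, 1.0
fs = np.linspace(0.0, 0.99, 10_000)
growth = p * np.log1p(b * fs) + (1 - p) * np.log1p(-fs)  # EV(log return) per bet
print(fs[growth.argmax()])  # ~0.20 by grid search
print(p - (1 - p) / b)      # 0.20 from the closed form
```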
I think you're being a bit glib with the second half there? Like I could equally say "maximizing linear EV assumes you'd prefer having more wealth to less wealth, and it guides you to the strategy where you are able to get the largest possible wealth by a factor of 999999999..."
Hmm I don’t think EV(wealth) maximization actually maximizes wealth in the best possible world, right? It doesn’t reach the highest peak
What EV(wealth) maximizes is the probability-weighted sum of wealth across all possible worlds
Peak-maximization would mean buying lottery tickets and such even at negative EV
so in every example we've been talking about they *are* the same.
they don't have to be but they usually are.
agreed that negative-EV lottery tickets are different here, but every approach gets that one right!
but in e.g. St. Petersburg, hold USDC vs hold ERC20 token vs LP, classic Kelly question, etc., the max-EV strategy = max-upside strategy = bet it all every time on the max-EV option
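(A back-of-the-envelope sketch of that point, again with assumed numbers: in the 60%-win, even-money game, going all-in every round maximizes EV(wealth), yet essentially all of that EV sits in the one branch where every bet wins.)

```python
# Assumed numbers again: the 60%-win, even-money game. Going all-in every
# round is the max-EV strategy (each round multiplies EV(wealth) by 1.2),
# but the entire EV lives in the single branch where every bet wins.
p, n = 0.6, 50
print(f"EV(wealth) after {n} all-in rounds: {1.2 ** n:.3g}")  # ~9.1e+03
print(f"P(not broke) after {n} rounds:      {p ** n:.3g}")    # ~8e-12
# Kelly (f = 0.2) has a far smaller EV(wealth) but positive wealth in every
# world, and more wealth than all-in in all but that one vanishing branch.
```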
But more generally my point is that "maximize odds of winning" is not what really matters, and neither is "maximize the max upside"; both are "good" things to have but neither is perfect, and really this is just an argument between max(EV) and max(EV(log))
No! I am not trying to maximize EV of anything!
I want to pick the strategy that beats yours 99.99% of the time. That’s my terminal goal
Kelly takes that input and spits out that I should maximize EV(log(wealth)), but that preference is the consequence, not the cause
Understood -- I think it's a crazy goal that's not going to hold water if you really dig into it, but acknowledged that's what you want to do!
also, again this is all not necessary for my core arguments against the original paper; this is all a looooong tangent about risk tolerance
Maybe one way to frame this: ignore the infinite time horizon point—imagine it’s a one-shot.
And ignore utility as a function of wealth—let’s just talk in terms of raw utility (the output of the utility function).
If presented a gamble, I would not necessarily want to maximize EV(utility).
I am not indifferent between (a 99% chance of 0 utility plus a 1% chance of 100) and (a 100% chance of 1), even though both have EV(utility) = 1.
That preference is NOT a statement about my function of wealth->utility. Right?