1/ Been thinking about non-ergodicity and path dependency due to provocations from @doriantaylor and @TaylorPearsonMe among others. Trying to come up with the simplest toy example I think clarifies the basic question. Here’s what I came up with...
4/ Due to network effects, each method is more efficient at larger scales of deployment, but the efficiency is *uncertain*: it is not a deterministic function of efficiency at lower scales. So it could be that A is most efficient when 500 people use it, B at 750, and C at 1000.
5/ So write the efficiency of method A, at scale n and time t, as E(A, n, t). Each method's efficiency evolves along its own learning curve only. Assume no cross-learning between methods.
6/ Worse, the fatality rates F_A, F_B, F_C are also uncertain functions of deployment scale n and time t. But you have only one timeline to work with. What do you do?
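A minimal sketch of the setup so far, with every number invented for illustration: efficiency E and fatality F are frozen random draws per (method, scale) pair, so a method's performance at one scale carries no information about another.

```python
import random

# Toy model, all numbers invented for illustration. Efficiency E and
# fatality F for each method are independent frozen random draws per
# (method, scale) pair -- performance at one scale carries no
# information about performance at another.
random.seed(42)

METHODS = ["A", "B", "C"]
SCALES = [333, 500, 750, 1000]

E = {(m, n): random.uniform(0.0, 1.0) for m in METHODS for n in SCALES}
F = {(m, n): random.uniform(0.0, 0.3) for m in METHODS for n in SCALES}

def best_at(n):
    """Method with the highest efficiency at deployment scale n."""
    return max(METHODS, key=lambda m: E[(m, n)])

for n in SCALES:
    print(n, "->", best_at(n))
```

Depending on the draws, the winner can change from one scale to the next, which is exactly the trap described above.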
7/ Pure small-scale testing won’t work. You can’t divide the population into 3 groups of 333 people each and pick the lowest-fatality/highest-efficiency method over a test period T, because results at n=333 don’t predict efficiency/fatality at n=1000, and worse, people are gonna die during the test phase
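To make the cost of the test phase concrete, here's a sketch of the three-groups-of-333 strategy, with fatality rates invented for illustration:

```python
import random

# Sketch of the "3 groups of 333" test strategy, with invented fatality
# rates. The small-scale draws are independent of the full-scale draws,
# so the test winner carries no information about n=1000 -- and the
# test itself costs lives.
random.seed(7)

def fatality(method, n):
    # Uncertain, scale-dependent fatality rate: a fresh draw per call.
    return random.uniform(0.01, 0.20)

test_deaths = sum(round(333 * fatality(m, 333)) for m in "ABC")
print("lives lost just running the small-scale test:", test_deaths)
```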
8/ OTOH, if you go all-in in series, the orderings ABC, ACB, BAC, BCA, CAB, CBA will all have different fatality profiles, and each will have the previous stages capping the maximum effectiveness at full scale, since people are dying along the way.
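A sketch of the series option, with fatality as a frozen random draw per (method, current-population) pair (ranges invented). Because deaths in earlier stages change the scale that later stages run at, every ordering ends at a different headcount:

```python
import random
from itertools import permutations

random.seed(3)
_draws = {}

def fatality(method, n):
    # Fatality is an uncertain function of deployment scale n: a frozen
    # random draw per (method, n) pair (invented range).
    if (method, n) not in _draws:
        _draws[(method, n)] = random.uniform(0.02, 0.25)
    return _draws[(method, n)]

survivors = {}
for order in permutations("ABC"):
    pop = 1000
    for m in order:
        # Earlier stages shrink the population later stages deploy to.
        pop -= round(pop * fatality(m, pop))
    survivors["".join(order)] = pop
    print("".join(order), "survivors:", pop)
```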
9/ Point is... there’s no way to test your way out of this bind, because the indeterminacy in how efficiency and fatality rates scale with n from 0 to 1000 fundamentally limits the information available for your problem.
10/ Under the tightest conditions, basically nothing is possible I think. Series or parallel, the only way to land in the most efficient future with the most people left alive to enjoy it is to get lucky somehow.
11/ You need to loosen the constraints to make non-brute-force learning possible and pick the best future with better-than-random chance. Three mitigations help: the efficiency scaling is predictable, the fatality scaling is predictable, and cross-learning is possible.
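To see why the first mitigation helps: if efficiency scaling follows a known functional form (here I assume E(m, n) = a_m·log n plus small noise, purely for this sketch), a cheap small-n trial identifies each method's slope and therefore the large-n winner:

```python
import math
import random

# Illustration of mitigation 1 (predictable efficiency scaling).
# Assume, purely for this sketch, E(m, n) = a_m * log(n) + small noise;
# then a cheap trial at n=100 estimates each slope a_m, which ranks the
# methods at n=1000 too.
random.seed(1)

true_a = {"A": 0.10, "B": 0.15, "C": 0.12}  # hidden slopes (invented)

def measure(method, n):
    return true_a[method] * math.log(n) + random.gauss(0, 0.01)

est_a = {m: measure(m, 100) / math.log(100) for m in "ABC"}
predicted_best = max(est_a, key=est_a.get)
print("predicted best at n=1000:", predicted_best)
```

With noise this small, the small-scale trial recovers the true ranking; with the unpredictable scaling of the original setup, no amount of small-n sampling would.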
12/ The problem strikes me as a sort of restless multi-armed bandit problem with non-independent, non-stationary arms. These are known to be horribly intractable. AFAIK the Gittins index approach doesn’t work. You can’t sample the arms and shift from explore to exploit easily.
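For a feel of why restless arms are nasty, here's a hypothetical epsilon-greedy sketch (all parameters invented) where every arm's mean drifts each step whether or not it's pulled, so your estimates go stale:

```python
import random

# Restless-bandit sketch, all parameters invented. Every arm's mean
# reward drifts at each step, pulled or not, so past samples go stale
# and a "sample, then exploit" policy is chasing a moving target.
random.seed(0)

means = [0.3, 0.5, 0.7]       # current mean reward per arm
estimates = [0.0, 0.0, 0.0]   # running sample means per arm
counts = [0, 0, 0]            # pulls per arm

def step(eps=0.1):
    # Epsilon-greedy pull based on possibly-stale estimates.
    if random.random() < eps:
        arm = random.randrange(3)
    else:
        arm = estimates.index(max(estimates))
    reward = random.gauss(means[arm], 0.1)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]
    for i in range(3):        # the "restless" part: unpulled arms drift too
        means[i] += random.gauss(0, 0.05)
    return arm

pulls = [step() for _ in range(500)]
```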
13/ In practice the problem is not that tightly constrained, and all 3 mitigations are available to a degree. Also, it’s rarely 3 static futures, and the fatality rate is rarely a big fraction of the test population. So early experimentation leads to iterative refinement of options.
14/ So societies have a deployment path that looks like a fractal generated from (Parallel —> Refactor Options —> Series) across space and time
15/ The key to the practical solution is to guess the scaling phase transition points correctly across fractal levels, so you can switch between parallel/series gears and refactor options at the right time. The “series” option looks like “exploit” locally in time/space.
16/ Throwing in some links that inspired this line of thinking. First, @TaylorPearsonMe’s “A Big Little Idea Called Ergodicity”: https://taylorpearson.me/ergodicity/
17/ Ole Peters’ 2019 Nature Physics article, which seems to have caused the current surge of interest, “The ergodicity problem in economics”: https://www.nature.com/articles/s41567-019-0732-0
18/ Wright Meets Markowitz, via @mengwong: http://research.economics.unsw.edu.au/vpanchenko/papers/WriteMeetsMarkowitz.pdf
19/ Founder effect: non-ergodicity in nature: https://en.wikipedia.org/wiki/Founder_effect