5/ So the efficiency of method A at scale n and time t is E(A, n, t). It is a function of time on its own learning curve only. Assume no cross-learning.
6/ Worse, the fatality rates F_A, F_B, F_C are also uncertain functions of deployment scale n and time t.
But you have only 1 timeline to work with. What do you do?
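To make the setup concrete, here is a minimal sketch of the model in Python. The functional forms and every number in it are made-up assumptions for illustration; the thread only asserts that E(method, n, t) and F(method, n, t) exist and are uncertain.

```python
import math

# Hypothetical stand-ins for the unknown curves E(method, n, t) and F(method, n, t).
# The shapes and numbers below are assumptions for illustration only: efficiency
# improves with time along each method's own learning curve (no cross-learning),
# and both efficiency and fatality also depend on the deployment scale n.
PARAMS = {
    # method: (base efficiency, learning rate, scale sensitivity, base fatality, fatality growth with n)
    "A": (0.50, 0.05, 1.0e-4, 0.010, 1.2e-4),
    "B": (0.70, 0.10, 9.0e-4, 0.005, 3.0e-4),
    "C": (0.60, 0.02, 2.0e-4, 0.020, 0.5e-4),
}

def efficiency(method: str, n: float, t: float) -> float:
    """E(method, n, t): rises with time on the method's own learning curve only."""
    e0, rate, scale_sens, _, _ = PARAMS[method]
    learning = 1.0 - 0.5 * math.exp(-rate * t)   # assumed learning curve in t
    scaling = 1.0 / (1.0 + scale_sens * n)       # assumed (and in reality unknown) scale effect
    return e0 * learning * scaling

def fatality(method: str, n: float, t: float) -> float:
    """F(method, n, t): fatality rate per person per unit time, also uncertain in n and t."""
    _, rate, _, f0, growth = PARAMS[method]
    return f0 * (1.0 + growth * n) * math.exp(-0.5 * rate * t)  # assumed form
```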
7/ Pure small-scale testing won't work. You can't divide the population into 3 groups of 333 people each and pick the lowest-fatality/highest-efficiency method over a test period T, because that doesn't predict efficiency/fatality at n=1000, and worse, people are gonna die during the test phase.
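Continuing the toy model above, a quick check of why the 333-person pilot is uninformative: with those assumed parameters, the method that looks best at n=333 is not the best at n=1000, and the pilot itself racks up deaths while it runs.

```python
def pilot_vs_full_scale(T: float = 10.0) -> None:
    """Rank methods by efficiency at pilot scale (n=333) and at full scale (n=1000).

    Reuses the hypothetical efficiency()/fatality() sketch above. With those assumed
    parameters, B wins the pilot but A wins at full scale, and people die during
    the test either way.
    """
    for n in (333, 1000):
        ranking = sorted(PARAMS, key=lambda m: efficiency(m, n, T), reverse=True)
        test_deaths = {m: round(fatality(m, n, 0.0) * n * T, 1) for m in PARAMS}
        print(f"n={n}: efficiency ranking {ranking}, rough deaths during a test of length T {test_deaths}")

pilot_vs_full_scale()
```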
8/ OTOH, if you go all-in in series, the orderings ABC, ACB, BAC, BCA, CAB, CBA will all have different fatality profiles, and each will have previous stages capping the maximum effectiveness at full scale, since people are dying along the way.
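And the series version, still on the same toy model: each ordering leaves a different number of people alive, and whatever method you end on operates on whatever population the earlier stages left behind.

```python
from itertools import permutations

def series_deployment(order, n_total: float = 1000.0, stage_time: float = 5.0):
    """Deploy methods one after another at full scale, on the same toy model as above.

    Deliberately crude accounting: each stage runs for stage_time on whoever is still
    alive, so earlier stages cap both the population and the final effectiveness.
    """
    alive, t = n_total, 0.0
    for method in order:
        deaths = fatality(method, alive, t) * alive * stage_time
        alive = max(0.0, alive - deaths)
        t += stage_time
    return alive, efficiency(order[-1], alive, t)

for order in permutations("ABC"):
    survivors, final_eff = series_deployment(order)
    print("".join(order), f"survivors={survivors:.0f}", f"final efficiency={final_eff:.3f}")
```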
9/ Point is... there's no way to test your way out of this bind, because the indeterminacy in how efficiency and fatality rates scale with n from 0 to 1000 fundamentally limits the information available to you.
10/ Under the tightest conditions, basically nothing is possible I think. Series or parallel, the only way to land in the most efficient future with the most people left alive to enjoy it is to get lucky somehow.
11/ You need to loosen the constraints to make non-brute-force learning possible and pick the best future with better-than-random chance. Three mitigations help: the efficiency scaling is predictable, the fatality scaling is predictable, and cross-learning is possible.
12/ The problem strikes me as a sort of restless multi-armed bandit problem with non-independent, non-stationary arms. These are known to be horribly intractable. AFAIK the Gittins index approach doesn’t work. You can’t sample the arms and shift from explore to exploit easily.
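A sketch of what "restless" means here, in case the bandit jargon is unfamiliar: the arms keep drifting whether or not you pull them, so estimates built from past pulls go stale, which is what breaks index policies like Gittins. Everything below (the Gaussian drift model, the epsilon-greedy policy) is illustrative, not a proposed solution.

```python
import random

class RestlessArm:
    """An arm whose payoff distribution drifts every round, pulled or not.

    The Gaussian drift is an assumption for illustration; the point is only that
    reward estimates built from past pulls stop being a sufficient statistic.
    """
    def __init__(self, mean: float, drift: float):
        self.mean = mean
        self.drift = drift

    def step(self) -> None:
        self.mean += random.gauss(0.0, self.drift)   # state evolves even when not pulled

    def pull(self) -> float:
        return random.gauss(self.mean, 0.1)

def run(rounds: int = 200) -> float:
    random.seed(1)
    arms = [RestlessArm(0.5, 0.05), RestlessArm(0.4, 0.10), RestlessArm(0.6, 0.02)]
    estimates, counts, total = [0.0] * 3, [0] * 3, 0.0
    for _ in range(rounds):
        # Naive epsilon-greedy on stale estimates; no claim that this is a good policy here.
        if random.random() < 0.1:
            i = random.randrange(3)
        else:
            i = max(range(3), key=lambda j: estimates[j])
        reward = arms[i].pull()
        counts[i] += 1
        estimates[i] += (reward - estimates[i]) / counts[i]   # running mean of observed rewards
        total += reward
        for arm in arms:
            arm.step()
    return total

print(f"total reward over 200 rounds: {run():.2f}")
```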
13/ In practice the problem is not that tightly constrained, and all 3 mitigations are available to a degree. Also, it's rarely 3 static futures, and the fatality rate is rarely a big fraction of the test population. So early experimentation leads to iterative refinement of options.
14/ So societies have a deployment path that looks like a fractal generated from
(Parallel —> Refactor Options —> Series) across space and time
15/ The key to the practical solution is to guess the scaling phase transition points correctly across fractal levels, so you can switch between parallel/series gears and refactor options at the right time. The “series” option looks like “exploit” locally in time/space.
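A schematic of that (Parallel -> Refactor Options -> Series) generator in code. Every name here, pilot(), refactor(), and the scale constants, is a hypothetical placeholder; the sketch only shows the recursive shape, with the commit branch playing the "series"/exploit role.

```python
import random

# Entirely schematic: pilot(), refactor(), and the scale constants below are hypothetical
# placeholders standing in for whatever a real deployment would actually do.
PILOT_SCALE = 50   # assumed scale below which you stop exploring and just commit
SCALE_STEP = 4     # assumed guess at where the next phase-transition level sits

def pilot(option: str, n: int) -> float:
    """Placeholder pilot: a noisy score for trying `option` on n people."""
    return random.random() * n

def refactor(options, results):
    """Placeholder refactor step: keep the better-scoring half of the options."""
    ranked = sorted(options, key=lambda o: results[o], reverse=True)
    return ranked[: (len(ranked) + 1) // 2]

def deploy(options, population: int, scale: int) -> str:
    """Recursive Parallel -> Refactor Options -> Series pattern across scale levels."""
    if scale <= PILOT_SCALE or len(options) == 1:
        return options[0]                                  # "series"/exploit: commit locally
    slice_size = population // (2 * len(options))          # parallel: small pilots on slices
    results = {opt: pilot(opt, slice_size) for opt in options}
    survivors = refactor(options, results)                 # refactor the option set
    return deploy(survivors, population, scale // SCALE_STEP)  # recurse at the next level

random.seed(0)
print(deploy(["A", "B", "C"], population=1000, scale=1000))
```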
16/ Throwing in some links that inspired this line of thinking. First, "A Big Little Idea Called Ergodicity": taylorpearson.me/ergodicity/
17/ Ole Peters' 2019 Nature article, which seems to have caused the current surge of interest: "The ergodicity problem in economics" nature.com/articles/s4156
Well, the simplest version is actually Russian roulette or the kind of casino example Taleb uses (and is in Taylor Pearson's article). This is the simplest "real world" macro example I could make up.
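For reference, the coin-flip version of that casino example, a standard one in the ergodicity literature (heads +50%, tails -40%), as a quick simulation: the ensemble average per round is +5%, but the per-round time-average growth factor is sqrt(1.5 * 0.6) ≈ 0.95, so almost every individual trajectory decays.

```python
import random

def coin_flip_gamble(rounds: int = 100, players: int = 10000) -> None:
    """Multiplicative bet from the ergodicity literature: +50% on heads, -40% on tails.

    The expected value per round is +5% (ensemble average), but the per-round
    time-average growth factor is sqrt(1.5 * 0.6) ~= 0.95, so a typical individual
    trajectory decays even though the average across players grows.
    """
    random.seed(0)
    wealths = [1.0] * players
    for _ in range(rounds):
        wealths = [w * (1.5 if random.random() < 0.5 else 0.6) for w in wealths]
    ensemble_avg = sum(wealths) / players          # pulled up by a handful of lucky runs
    median = sorted(wealths)[players // 2]         # roughly what a single timeline experiences
    print(f"ensemble average wealth after {rounds} rounds: {ensemble_avg:.3g}")
    print(f"median individual wealth after {rounds} rounds: {median:.3g}")

coin_flip_gamble()
```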