11/ You need to loosen the constraints to make non-brute-force learning possible and to pick the best future with better-than-random chance. Three mitigations help: efficiency scaling is predictable, fatality scaling is predictable, and cross-learning between options is possible.
12/ The problem strikes me as a sort of restless multi-armed bandit problem with non-independent, non-stationary arms. These are known to be horribly intractable. AFAIK the Gittins index approach doesn’t work. You can’t sample the arms and shift from explore to exploit easily.
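To make the intractability concrete, here is a minimal sketch (entirely my illustration, with assumed drift rates and payoffs, not from the thread) of why non-stationary arms break the usual explore-then-exploit shift: an agent that stops exploring gets locked onto an arm that used to be best, while even a crude amount of ongoing exploration tracks the crossover.

```python
import random

random.seed(42)

# Two-armed restless bandit: the true payoffs drift over time, so the
# best arm changes partway through the horizon (assumed drift rates).
def pull(arm, t):
    means = (0.6 - 0.0004 * t,   # arm 0 starts better but decays
             0.3 + 0.0006 * t)   # arm 1 overtakes it around t = 300
    return means[arm] + random.gauss(0, 0.1)

def run(epsilon, horizon=1000):
    estimates, total = [0.0, 0.0], 0.0
    for t in range(horizon):
        if random.random() < epsilon:
            arm = random.randrange(2)                       # explore
        else:
            arm = 0 if estimates[0] >= estimates[1] else 1  # exploit
        r = pull(arm, t)
        # Constant step size tracks a drifting mean; 1/n averaging would not.
        estimates[arm] += 0.1 * (r - estimates[arm])
        total += r
    return total

greedy = run(epsilon=0.0)    # pure exploit: rides arm 0 long after it decays
adaptive = run(epsilon=0.1)  # keeps sampling, eventually notices the crossover
```

The greedy agent never re-samples arm 1, so its estimate for that arm stays stale forever; the epsilon-greedy agent pays a small exploration tax but ends up far ahead. This is only the non-stationarity half of the problem; the non-independence between arms the tweet mentions makes it worse still.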
13/ In practice the problem is not that tightly constrained, and all 3 mitigations are available to a degree. Also, it's rarely 3 static futures, and the fatality rate is rarely a big fraction of the test population. So early experimentation leads to iterative refinement of options.
14/ So societies have a deployment path that looks like a fractal generated from
(Parallel -> Refactor Options -> Series) across space and time
15/ The key to the practical solution is to guess the scaling phase transition points correctly across fractal levels, so you can switch between parallel/series gears and refactor options at the right time. The “series” option looks like “exploit” locally in time/space.
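The parallel/series gear shift can be sketched as an explore-then-commit policy (my framing, with made-up payoff distributions; the whole difficulty the tweet describes is hidden in choosing `switch_at` well):

```python
import random

random.seed(7)

def explore_then_commit(arms, horizon, switch_at):
    """Parallel gear until switch_at, then series gear on the best option.
    switch_at should be a multiple of len(arms) for equal trial counts."""
    k = len(arms)
    sums = [0.0] * k
    reward = 0.0
    for t in range(switch_at):              # parallel: round-robin every option
        r = arms[t % k]()
        sums[t % k] += r
        reward += r
    best = max(range(k), key=lambda a: sums[a])
    for _ in range(horizon - switch_at):    # series: committed exploitation
        reward += arms[best]()
    return best, reward

# Three hypothetical options with noisy payoffs (assumed means 0.5, 0.7, 0.4).
arms = [lambda: random.gauss(0.5, 0.2),
        lambda: random.gauss(0.7, 0.2),
        lambda: random.gauss(0.4, 0.2)]

best, reward = explore_then_commit(arms, horizon=1000, switch_at=90)
```

Switch too early and you commit on noise; switch too late and you burn the horizon on trials. Guessing the transition point, recursively at every fractal level, is exactly the hard part.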
16/ Throwing in some links that inspired this line of thinking. First, "A Big Little Idea Called Ergodicity": taylorpearson.me/ergodicity/
17/ Ole Peters' 2019 Nature Physics article, which seems to have caused the current surge of interest: "The ergodicity problem in economics" nature.com/articles/s4156
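Peters' core point fits in a few lines. This sketch uses the standard multiplicative coin-flip illustration associated with his work (the 1.5x/0.6x payoffs are the usual textbook numbers, not taken from this thread): the ensemble average grows while almost every individual trajectory decays.

```python
import random

random.seed(0)

# Multiplicative gamble: each round, wealth is multiplied by 1.5 on heads
# or 0.6 on tails.
# Ensemble average per round: 0.5*1.5 + 0.5*0.6 = 1.05  (looks like +5%)
# Time-average growth factor: sqrt(1.5 * 0.6) ~= 0.949  (individual decay)
def one_trajectory(rounds):
    wealth = 1.0
    for _ in range(rounds):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

# Averaging over many short-lived players looks healthy...
ensemble = sum(one_trajectory(10) for _ in range(100_000)) / 100_000

# ...but one player followed through time goes broke.
individual = one_trajectory(1_000)
```

The gamble is non-ergodic: the expectation over the ensemble and the growth rate along a single trajectory disagree, which is why "pick the best future" cannot be answered by expected value alone.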
Is there a correspondence here with the homogeneity of medieval China vs the heterogeneity of medieval Europe, the latter giving rise to a diversity of funding sources (Columbus vs Cheng Ho) and friendliness to heresy (Lutheranism), allowing a better search for the Renaissance -> Industrial Era transition?
There might be. I think Needham made some such argument, which may have been cited in Mokyr's book.

