11/ You need to loosen the constraints to make non-brute-force learning possible and pick the best future with better-than-random chance. Three mitigations help: the efficiency scaling is predictable, the fatality scaling is predictable, and cross-learning is possible.
12/ The problem strikes me as a sort of restless multi-armed bandit problem with non-independent, non-stationary arms. These are known to be horribly intractable. AFAIK the Gittins index approach doesn’t work. You can’t sample the arms and shift from explore to exploit easily.
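To make 12/ concrete, here is a minimal Python sketch, not from the thread itself: the arm count, drift step, epsilon, and horizon are all arbitrary assumptions. It runs epsilon-greedy sampling on a two-armed bandit whose payoff probabilities keep drifting whether or not an arm is pulled, so a stationary-world estimator keeps averaging stale samples while the arms move underneath it.

```python
import random

# Illustrative only: epsilon-greedy on a two-armed bandit whose true payoff
# probabilities follow a random walk (restless, non-stationary arms). The
# drift step, epsilon, and horizon are arbitrary assumptions for this sketch.

def run(horizon=5000, epsilon=0.1, drift=0.01, seed=0):
    rng = random.Random(seed)
    true_p = [0.3, 0.5]   # true success probabilities; these will drift
    est = [0.0, 0.0]      # sample-mean estimates of each arm's value
    pulls = [0, 0]
    total = 0
    for _ in range(horizon):
        # Explore with probability epsilon, otherwise exploit the current best estimate.
        if rng.random() < epsilon:
            arm = rng.randrange(2)
        else:
            arm = max(range(2), key=lambda a: est[a])
        reward = 1 if rng.random() < true_p[arm] else 0
        pulls[arm] += 1
        # Plain running mean: a stationary-world estimator. As the arms drift,
        # old samples mislead it, which is where easy explore-then-exploit breaks.
        est[arm] += (reward - est[arm]) / pulls[arm]
        total += reward
        # "Restless": both arms change whether or not they were pulled.
        for a in range(2):
            true_p[a] = min(0.95, max(0.05, true_p[a] + rng.uniform(-drift, drift)))
    return total, est, true_p

if __name__ == "__main__":
    print(run())
```

The usual patches (discounting, sliding windows) help with drift, but the Gittins index result assumes independent arms that stay frozen when not pulled, so restless, coupled arms fall outside it, and the general restless-bandit problem is known to be computationally hard.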
13/ In practice the problem is not that tightly constrained and all 3 mitigations are available to a degree. Also, it’s rarely 3 static futures, and the fatality rate is rarely a big fraction of the test population. So early experimentation leads to iterative refinement of options.
14/ So societies have a deployment path that looks like a fractal generated from (Parallel -> Refactor Options -> Series) across space and time.
15/ The key to the practical solution is to guess the scaling phase transition points correctly across fractal levels, so you can switch between parallel/series gears and refactor options at the right time. The “series” option looks like “exploit” locally in time/space.
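A toy sketch of 14/ and 15/ together, with every name and number made up for illustration: a recursion that trials options in parallel at small scale, refactors (prunes and refines) the survivors, and shifts into series ("exploit") once the leader's margin crosses a threshold standing in for the scaling phase transition.

```python
import random

# Toy illustration only: deploy() runs options in parallel, refactors them,
# then commits to one in series once a confidence threshold is crossed. The
# threshold stands in for the "scaling phase transition"; all names, qualities,
# and parameters here are assumptions made for the sketch.

def evaluate(option, rng):
    """Noisy small-scale trial of an option; stands in for early experimentation."""
    return option["quality"] + rng.gauss(0, option["noise"])

def deploy(options, depth, rng, threshold=0.8, trials=20):
    # Parallel phase: small trials across all surviving options.
    scores = {o["name"]: sum(evaluate(o, rng) for _ in range(trials)) / trials
              for o in options}
    best = max(options, key=lambda o: scores[o["name"]])
    lead = (scores[best["name"]] - sorted(scores.values())[-2]
            if len(options) > 1 else 1.0)

    # Series phase ("exploit" locally in time/space): commit once the leader's
    # lead crosses the threshold, or when the fractal levels run out.
    if depth == 0 or lead >= threshold:
        return best["name"]

    # Refactor phase: prune weak options, refine the survivors, and recurse one
    # fractal level down (smaller scale, lower noise).
    survivors = sorted(options, key=lambda o: scores[o["name"]], reverse=True)[:2]
    refined = [{**o, "noise": o["noise"] * 0.5} for o in survivors]
    return deploy(refined, depth - 1, rng, threshold, trials)

if __name__ == "__main__":
    rng = random.Random(42)
    futures = [{"name": "A", "quality": 1.0, "noise": 1.0},
               {"name": "B", "quality": 1.2, "noise": 1.0},
               {"name": "C", "quality": 0.8, "noise": 1.0}]
    print(deploy(futures, depth=3, rng=rng))
```

In this framing, getting the threshold wrong is shifting gears at the wrong time: too low and you commit prematurely to a noisy leader, too high and you keep spending the test population on parallelism you no longer need.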
Replying to the thread:
Is there a correspondence here with the homogeneity of medieval China vs the heterogeneity of medieval Europe, giving rise to diversity of funding sources (Columbus vs Cheng Ho) and friendliness to heresy (Lutheranism), allowing a better search across the Renaissance -> Industrial Era transition?