1/ Been thinking about non-ergodicity and path dependency, due to provocations from several others. Trying to come up with the simplest toy example that I think clarifies the basic question. Here’s what I came up with...
2/ Let’s call it the Perez Problem, after Carlota Perez. It’s the problem of how a complex society deploys complex new technology requiring a degree of coordination and convention, like say the internet (a protocol convention) or vaccines.
3/ An invention X has occurred in a society of 1000 people that can be deployed in 3 ways, A, B, and C. Each way has a learning curve attached, with an unknown fatality rate that declines to 0 over time. E.g. if you go all-in on A, there will be fatalities F_A along that deployment path.
4/ Due to network effects, each path is more efficient at larger scales of deployment, but the efficiency is *uncertain* and not a deterministic function of efficiency at lower scales. So it could be that A is most efficient when 500 people use it, B at 750, but C at 1000.
5/ So the efficiency of method A at scale n and time t is E(A, n, t). It is a function of time on its own learning curve only. Assume no cross-learning.
6/ Worse, the fatality rates F_A, F_B, F_C are also uncertain functions of deployment scale n and time t.
But you have only 1 timeline to work with. What do you do?
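To make the setup concrete, here is a minimal Python sketch of the toy model, with entirely made-up functional forms: each path gets a hidden critical scale where its network effects kick in and a hidden learning rate, so behaviour at small n tells you little about behaviour at n=1000. All names and parameters here (`efficiency`, `fatality_rate`, `critical_n`, the numeric ranges) are illustrative assumptions, not anything specified in the thread.

```python
import random

POPULATION = 1000
PATHS = ["A", "B", "C"]

# Hidden truth, unknown to the deciders: each path has a critical scale where
# its network-effect efficiency kicks in, plus its own learning-curve speed.
# Neither is observable without actually deploying at that scale.
_hidden = {
    p: {
        "critical_n": random.randint(100, POPULATION),  # scale where efficiency jumps
        "peak_eff": random.uniform(0.5, 1.0),           # efficiency after the jump
        "learn_rate": random.uniform(0.05, 0.5),        # how fast fatalities decline
        "base_fatality": random.uniform(0.001, 0.05),   # initial fatality rate
    }
    for p in PATHS
}

def efficiency(path, n, t):
    """E(path, n, t): low before the hidden critical scale, high after it,
    and improving with time spent on this path's own learning curve only."""
    h = _hidden[path]
    scale_term = h["peak_eff"] if n >= h["critical_n"] else 0.2 * n / POPULATION
    learning_term = 1 - (1 - h["learn_rate"]) ** t
    return scale_term * (0.5 + 0.5 * learning_term)

def fatality_rate(path, n, t):
    """F(path, n, t): declines toward 0 with time on the learning curve, and is
    scaled up by a crowding factor above the hidden critical scale."""
    h = _hidden[path]
    crowding = 1 + max(0, n - h["critical_n"]) / POPULATION
    return h["base_fatality"] * crowding * (1 - h["learn_rate"]) ** t

def deploy(path, n, t):
    """One period of deploying `path` to n people: returns (output, deaths)."""
    rate = fatality_rate(path, n, t)
    deaths = sum(random.random() < rate for _ in range(n))
    output = efficiency(path, n, t) * n
    return output, deaths
```

The only point of the sketch is that efficiency("A", 333, t) pins down almost nothing about efficiency("A", 1000, t), because the hidden critical scale can sit anywhere between the two.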
7/ Pure small-scale testing won’t work. You can’t divide the population into 3 groups of ~333 people each and pick the lowest-fatality/highest-efficiency option over a test period T, because results at that scale don’t predict efficiency/fatality at n=1000, and worse, people are gonna die during the test phase.
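Here is what that parallel pilot looks like under the toy model, assuming the previous sketch is saved as a module called toy_model.py (a hypothetical filename): you pay fatalities during the test, and the pilot winner at n≈333 need not be the winner at n=1000.

```python
# Assumes the toy-model sketch above is saved as toy_model.py (hypothetical name).
from toy_model import POPULATION, PATHS, deploy, efficiency

TEST_PERIODS = 5
group_size = POPULATION // len(PATHS)   # ~333 people per arm

deaths_during_test = 0
observed = {}
for path in PATHS:
    total_output = 0.0
    for t in range(TEST_PERIODS):
        output, deaths = deploy(path, group_size, t)
        total_output += output
        deaths_during_test += deaths
    observed[path] = total_output

winner = max(observed, key=observed.get)
print(f"Pilot winner at n={group_size}: {winner}, "
      f"deaths paid during the test: {deaths_during_test}")

# The catch: the pilot ranking need not survive scaling up.
full_scale = {p: round(efficiency(p, POPULATION, TEST_PERIODS), 2) for p in PATHS}
print("Efficiency at n=1000:", full_scale,
      "-> best is", max(full_scale, key=full_scale.get))
```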
8/ OTOH, if you go all-in in series, the orderings ABC, ACB, BAC, BCA, CAB, CBA will all have different fatality profiles, and each will have previous stages capping the maximum effectiveness at full scale, since people are dying along the way.
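And the series version, again built on the hypothetical toy_model.py: enumerate the six orderings, go all-in on each path in turn, and watch earlier phases eat into the population left to enjoy full-scale deployment. The loop over all six orderings is of course a god's-eye view; in reality you get to run exactly one of them.

```python
# Assumes the toy-model sketch above is saved as toy_model.py (hypothetical name).
from itertools import permutations
from toy_model import POPULATION, PATHS, deploy, efficiency

PHASE_LENGTH = 5   # periods spent all-in on each path before switching

for order in permutations(PATHS):          # ABC, ACB, BAC, BCA, CAB, CBA
    alive = POPULATION
    for path in order:
        for t in range(PHASE_LENGTH):
            _, deaths = deploy(path, alive, t)
            alive -= deaths
    final_path = order[-1]
    final_eff = efficiency(final_path, alive, PHASE_LENGTH)
    print(f"{''.join(order)}: survivors={alive}, "
          f"full-scale efficiency on {final_path}={final_eff:.2f}")
```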
9/ Point is... there’s no way to test your way out of this bind, because the indeterminacy in how efficiency and fatality rates scale with n from 0 to 1000 fundamentally limits the information available to you.
10/ Under the tightest conditions, I think basically nothing is possible. Series or parallel, the only way to land in the most efficient future, with the most people left alive to enjoy it, is to get lucky somehow.
11/ You need to loosen the constraints to make non-brute-force learning possible and pick the best future with better-than-random odds. Three mitigations help: efficiency scaling being predictable, fatality scaling being predictable, and cross-learning being possible.
12/ The problem strikes me as a sort of restless multi-armed bandit problem with non-independent, non-stationary arms. These are known to be horribly intractable. AFAIK the Gittins index approach doesn’t work. You can’t sample the arms and shift from explore to exploit easily.
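For contrast, here is what a standard heuristic for merely non-stationary bandits looks like: a sliding-window UCB that only trusts recent pulls. It assumes each arm drifts on its own, independently of how hard you pull it, which is exactly what the coupled, scale-dependent arms of the Perez Problem violate. The window size, exploration constant, and drifting-arm toy below are illustrative choices, not a claim about the right settings.

```python
import math
import random
from collections import deque

class SlidingWindowUCB:
    """UCB1 restricted to the last `window` pulls per arm, a common heuristic
    for non-stationary bandits. It assumes arms drift independently of each
    other and of the sampling policy."""

    def __init__(self, n_arms, window=50, c=2.0):
        self.history = [deque(maxlen=window) for _ in range(n_arms)]
        self.c = c
        self.t = 0

    def select(self):
        self.t += 1
        scores = []
        for arm, h in enumerate(self.history):
            if not h:
                return arm                      # pull each arm at least once
            mean = sum(h) / len(h)
            bonus = math.sqrt(self.c * math.log(self.t) / len(h))
            scores.append(mean + bonus)
        return max(range(len(scores)), key=scores.__getitem__)

    def update(self, arm, reward):
        self.history[arm].append(reward)

# Toy usage: three restless arms whose means wander over time.
means = [0.3, 0.5, 0.4]
bandit = SlidingWindowUCB(n_arms=3)
for step in range(500):
    arm = bandit.select()
    reward = random.gauss(means[arm], 0.1)
    bandit.update(arm, reward)
    means = [m + random.gauss(0, 0.01) for m in means]   # restless drift
```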
13/ In practice the problem is not that tightly constrained, and all 3 mitigations are available to a degree. Also, it’s rarely 3 static futures, and the fatality rate is rarely a big fraction of the test population. So early experimentation leads to iterative refinement of the options.
14/ So societies have a deployment path that looks like a fractal generated from
(Parallel —> Refactor Options —> Series) across space and time
15/ The key to the practical solution is to guess the scaling phase transition points correctly across fractal levels, so you can switch between parallel/series gears and refactor options at the right time. The “series” option looks like “exploit” locally in time/space.
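One way to picture the gear switch, again on top of the hypothetical toy_model.py: run small parallel pilots and commit the whole surviving population to a path once you think you have seen its phase transition. The pilot size, jump threshold, and jump-detection rule below are all guesses, which is rather the point.

```python
# Assumes the toy-model sketch above is saved as toy_model.py (hypothetical name).
from toy_model import POPULATION, PATHS, deploy

PILOT_SIZE = 100        # scale of each parallel pilot
JUMP_THRESHOLD = 2.0    # guessed signature of a phase transition: per-period
                        # output jumping by this factor on one path

alive = POPULATION
last_output = {p: None for p in PATHS}
committed = None

for t in range(50):
    if committed is None:
        # Parallel gear: small pilots on every path, watching for a jump.
        for path in PATHS:
            output, deaths = deploy(path, PILOT_SIZE, t)
            alive -= deaths
            prev = last_output[path]
            if prev and output / prev >= JUMP_THRESHOLD:
                committed = path      # guessed phase transition -> switch gears
            last_output[path] = output
    else:
        # Series gear: exploit the committed path with everyone still alive.
        _, deaths = deploy(committed, alive, t)
        alive -= deaths

print(f"Committed to {committed}, survivors: {alive}")
```

In this toy version the transition lives in scale rather than time, so a 100-person pilot may never see the jump and the loop may never commit, which is the bind the thread is describing.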
16/ Throwing in some links that inspired this line of thinking. First, Taylor Pearson’s “A Big Little Idea Called Ergodicity”: taylorpearson.me/ergodicity/
17/ Ole Peters’s 2019 Nature Physics article, which seems to have caused the current surge of interest: “The ergodicity problem in economics” nature.com/articles/s4156