7/ Pure small-scale testing won’t work. You can’t divide the population into 3 groups of 333 people each and pick the lowest-fatality/highest-efficiency option over a test period T, because performance at n=333 doesn’t predict efficiency/fatality at n=1000. And worse, people are gonna die during the test phase.
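To see why pilot rankings can mislead, here’s a minimal sketch with made-up (purely hypothetical) fatality curves for three options, chosen so the ranking flips between pilot scale and full scale:

```python
# Sketch of why small-n pilots can mislead. The three curves below are
# hypothetical illustrations, not data: fatality rate is assumed to be
# a nonlinear (here, affine) function of deployment size n, and the
# slopes are picked so the pilot-scale winner loses at full scale.

def fatality(option, n):
    """Hypothetical fatality-rate curves for options A, B, C."""
    curves = {
        "A": lambda n: 0.03,                 # flat: mediocre small, best large
        "B": lambda n: 0.01 + 0.00005 * n,   # best small, worst large
        "C": lambda n: 0.02 + 0.00003 * n,   # middling throughout
    }
    return curves[option](n)

def best_option(n):
    # Lower fatality rate is better.
    return min("ABC", key=lambda o: fatality(o, n))

print(best_option(333))   # prints B: the pilot-scale winner...
print(best_option(1000))  # prints A: ...is not the full-scale winner
```

The pilot at n=333 confidently picks B, which is the worst choice at n=1000. Nothing in the pilot data alone distinguishes this situation from one where the curves don’t cross.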
8/ OTOH, if you go all-in in series, the orderings ABC, ACB, BAC, BCA, CAB, CBA will all have different fatality profiles, and in each one the earlier stages cap the maximum effectiveness achievable at full scale, since people are dying along the way.
9/ Point is... there’s no way to test your way out of this bind, because the indeterminacy in how efficiency and fatality rates scale with n from 0 to 1000 fundamentally limits the information available for your problem.
10/ Under the tightest conditions, basically nothing is possible I think. Series or parallel, the only way to land in the most efficient future with the most people left alive to enjoy it is to get lucky somehow.
11/ You need to loosen the constraints to make non-brute-force learning possible and pick the best future with better-than-random chance. Three mitigations help: the efficiency scaling is predictable, the fatality scaling is predictable, and cross-learning between options is possible.
12/ The problem strikes me as a sort of restless multi-armed bandit problem with non-independent, non-stationary arms. These are known to be horribly intractable. AFAIK the Gittins index approach doesn’t work. You can’t sample the arms and shift from explore to exploit easily.
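A minimal sketch of the restless setting, with made-up drifting arms. It uses a constant-step-size epsilon-greedy tracker, a standard workaround for non-stationarity (not a solution to the harder coupled version in the tweet), to show why early samples go stale: the arms keep moving whether or not you pull them, so a fixed index computed once is worthless.

```python
import random

random.seed(0)

# Restless-bandit sketch (illustrative values, not from the thread):
# each arm's payoff mean drifts every step, so estimates built from
# early sampling decay in value, and explore/exploit can't be cleanly
# separated into phases the way a Gittins-style analysis assumes.

means = [0.2, 0.5, 0.8]      # initial arm means (hypothetical)
est = [0.0, 0.0, 0.0]        # running payoff estimates per arm
ALPHA, EPS, STEPS = 0.1, 0.1, 5000

total = 0.0
for t in range(STEPS):
    # Arms are "restless": they drift even when not pulled.
    for i in range(3):
        means[i] += random.gauss(0, 0.01)
    # Epsilon-greedy choice; the constant step size ALPHA makes the
    # estimate an exponentially weighted average that forgets old data.
    a = random.randrange(3) if random.random() < EPS else est.index(max(est))
    reward = random.gauss(means[a], 0.1)
    est[a] += ALPHA * (reward - est[a])
    total += reward

print(round(total / STEPS, 3))   # average reward actually collected
```

The constant step size is the key design choice: a plain sample average would weight a pull from t=10 the same as one from t=4990, which is exactly wrong when the arms drift.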
13/ In practice the problem is not that tightly constrained and all 3 mitigations are available to a degree. Also, it’s rarely 3 static futures, and the fatality rate is rarely a big fraction of test population. So early experimentation leads to iterative refinement of options.
14/ So societies have a deployment path that looks like a fractal generated from (Parallel -> Refactor Options -> Series) across space and time
15/ The key to the practical solution is to guess the scaling phase transition points correctly across fractal levels, so you can switch between parallel/series gears and refactor options at the right time. The “series” option looks like “exploit” locally in time/space.
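One level of that fractal can be sketched as a gear switch at a guessed transition scale. Everything here is hypothetical (the curves, the names n_star/n_full), but it shows the failure mode: guess the phase-transition point wrong and the parallel phase commits you to the wrong option.

```python
# Toy sketch of one parallel -> refactor -> series cycle. n_star is the
# guessed phase-transition scale at which you stop exploring in parallel
# and commit ("series"/exploit). All names and curves are illustrative.

def deploy(options, n_star, n_full, measure):
    # Parallel phase: trial every option at the guessed transition scale.
    scores = {o: measure(o, n_star) for o in options}
    # Refactor phase: keep the apparent winner (lower score = fewer deaths).
    best = min(scores, key=scores.get)
    # Series phase: commit the remaining scale to that winner.
    return best, measure(best, n_full)

# Hypothetical fatality curves: B looks better small, A is better at scale.
curves = {"A": lambda n: 0.03, "B": lambda n: 0.01 + 0.00005 * n}
picked, cost = deploy(curves, n_star=200, n_full=1000,
                      measure=lambda o, n: curves[o](n))
print(picked, round(cost, 3))   # prints: B 0.06
```

With n_star=200 the parallel phase picks B (0.02 vs 0.03) and pays 0.06 at full scale; had the transition point been guessed above ~400, the same procedure would have committed to A at 0.03. Guessing n_star right is the whole game.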
16/ Throwing in some links that inspired this line of thinking. First, @TaylorPearsonMe’s “Big Little Idea Called Ergodicity”: https://taylorpearson.me/ergodicity/
17/ Ole Peters’ 2019 Nature Physics article, which seems to have caused the current surge of interest: “The ergodicity problem in economics” https://www.nature.com/articles/s41567-019-0732-0
“Wright Meets Markowitz”, via @mengwong: http://research.economics.unsw.edu.au/vpanchenko/papers/WriteMeetsMarkowitz.pdf
Founder effect, non-ergodicity in nature: https://en.wikipedia.org/wiki/Founder_effect