Fragility increases rapidly as you squeeze out the last bit of efficiency.
Already done by @nntaleb in his Antifragile book.
You can show that efficiency increases fragility by looking at the increase in code volume and complexity. Both are known to be correlated with faults. Consider the textbook quicksort implementation compared to one tailored for efficiency. pic.twitter.com/RWUuHnneSM
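To make the comparison concrete, here is a minimal sketch in Python (my own illustration, not the code in the attached image): a textbook quicksort next to a version tuned for speed with median-of-three pivoting, an insertion-sort cutoff, and recursion only on the smaller partition. The tuned version is several times longer and has many more places for an off-by-one fault to hide.

# Textbook version: short, obviously correct, not in-place.
def quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

# Tuned version: in-place, median-of-three pivot, insertion-sort cutoff,
# recursion only on the smaller partition to bound stack depth.
def quicksort_tuned(a, lo=0, hi=None, cutoff=16):
    if hi is None:
        hi = len(a) - 1
    while hi - lo > cutoff:
        mid = (lo + hi) // 2                       # median-of-three pivot
        if a[mid] < a[lo]: a[lo], a[mid] = a[mid], a[lo]
        if a[hi] < a[lo]: a[lo], a[hi] = a[hi], a[lo]
        if a[hi] < a[mid]: a[mid], a[hi] = a[hi], a[mid]
        pivot = a[mid]
        i, j = lo, hi                              # Hoare-style partition
        while i <= j:
            while a[i] < pivot: i += 1
            while a[j] > pivot: j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        if j - lo < hi - i:                        # recurse into the smaller side
            quicksort_tuned(a, lo, j, cutoff)
            lo = i
        else:
            quicksort_tuned(a, i, hi, cutoff)
            hi = j
    for k in range(lo + 1, hi + 1):                # insertion sort for small ranges
        v, m = a[k], k - 1
        while m >= lo and a[m] > v:
            a[m + 1] = a[m]
            m -= 1
        a[m + 1] = v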
Generally a system that's antifragile needs to have redundancy, which is the opposite of efficiency. A good example of this is TCP/IP, which sacrifices efficiency to ensure reliability.
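As a toy illustration of that trade-off (my own sketch, not how TCP is actually implemented), a stop-and-wait scheme over a lossy channel buys reliability by retransmitting, and the redundancy shows up directly as extra transmissions per delivered message:

import random

def send_reliably(messages, drop_prob=0.3, seed=0):
    """Retransmit each message until it gets through; count the overhead."""
    rng = random.Random(seed)
    transmissions, delivered = 0, []
    for msg in messages:
        while True:
            transmissions += 1
            if rng.random() > drop_prob:   # frame and its ACK made it through
                delivered.append(msg)
                break                      # otherwise: timeout, retransmit
    return delivered, transmissions

delivered, sent = send_reliably(list(range(100)))
print(f"{len(delivered)} messages delivered using {sent} transmissions "
      f"({sent / len(delivered):.2f}x overhead)")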
New conversation
fragility(efficiency) = K * (1/efficiency). Just be lazy, do what physicists do, and measure K using experiments.
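If one did want to measure K in that half-joking model, a least-squares fit through the origin would do; a sketch (the function name is mine, and the paired observations would have to come from real experiments):

import numpy as np

def estimate_K(efficiency, fragility):
    """Least-squares fit of K in the model fragility ≈ K * (1 / efficiency).

    efficiency, fragility: arrays of paired experimental observations.
    Minimising sum_i (f_i - K / e_i)^2 gives K = sum(f_i / e_i) / sum(1 / e_i^2).
    """
    x = 1.0 / np.asarray(efficiency, dtype=float)
    f = np.asarray(fragility, dtype=float)
    return float(np.sum(f * x) / np.sum(x * x))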
I couldn’t find a good reference. It seems logical that cycle time increases as a system with multiple variable process times gets close to theoretical capacity, because wait times grow. At GitLab we explicitly opt for capacity over predictability: https://about.gitlab.com/handbook/engineering/#velocity-over-predictability
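One standard reference point for that intuition is Kingman's approximation for a single queue, in which expected waiting (and hence cycle) time blows up as utilisation approaches theoretical capacity. A small sketch, with parameters chosen only for illustration:

def kingman_wait(utilization, mean_service_time=1.0, cv_arrival=1.0, cv_service=1.0):
    """Kingman's approximation for mean queueing delay in a G/G/1 system:
    W_q ≈ (rho / (1 - rho)) * ((c_a^2 + c_s^2) / 2) * tau
    where rho is utilization, c_a and c_s are the coefficients of variation
    of interarrival and service times, and tau is the mean service time."""
    rho = utilization
    return (rho / (1.0 - rho)) * ((cv_arrival**2 + cv_service**2) / 2.0) * mean_service_time

for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"utilization {rho:.2f} -> expected wait ≈ {kingman_wait(rho):.0f}x the service time")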
Something like chaos engineering for organisations might work as well. Interesting experiment: how can you make organisations more resilient? E.g. simulate everyone in the US losing internet connectivity. https://gitlab.com/gitlab-org/gitlab-foss/-/issues/47043
End of conversation
New conversation
Say a system is at max efficiency, E = E_max, and then we perturb it. If E stays the same, the system simply didn't respond to the perturbation.
If E changes, it must decrease, because we've defined it to be at MAX efficiency.
=> the system is fragile (wrt E) unless it does not respond to perturbations.
Formal enough? Or too obvious?
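One way to tighten that argument, assuming (as the reply below points out) that efficiency is a smooth scalar function E of the system's state theta:

theta* = argmax E(theta)   =>   E'(theta*) = 0  and  E''(theta*) <= 0
E(theta* + d) - E(theta*) ≈ (1/2) * E''(theta*) * d^2  <=  0   for any small perturbation d

So to second order every perturbation costs efficiency, and the loss grows with the square of the shock, i.e. the exposure is concave, which is essentially the Antifragile notion of fragility.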
That has a lot of assumptions in it, especially that E is modeled as a smooth 1-dimensional measure of the system. But efficiency can have a lot of dimensions in real life. So systems are more fragile when their efficiency measures are narrower, e.g. when you optimize for only 1 KPI.
End of conversation
New conversation
Picture an X/Y fitness landscape: if the difficulty of finding a peak is related to its width, and the height of any given peak is random, then over time the best peak found by a [person/company/market] (efficiency) and the slope around that peak (fragility) will both tend to increase.
In this model, fragility(efficiency) and efficiency(time) are both stochastic (notably, no guarantee that fragility(time) is monotonic), and these functions would depend on the search "algorithm" used and the nature of the fitness landscape—I think you'd have to do empiricism.
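A minimal Monte Carlo sketch of that model, as one form of the empiricism suggested above (every distribution and parameter below is my own illustrative assumption): random Gaussian peaks on a 1-D landscape explored by blind random search, with the best value found standing in for efficiency and the curvature at the best point standing in for fragility. Any single run is noisy, matching the caveat that fragility(time) need not be monotonic.

import math, random

random.seed(1)

# Landscape: many peaks with random centers, heights, and widths.
# Narrow peaks are harder to hit by random sampling.
peaks = [(random.uniform(0, 100),      # center
          random.uniform(0.1, 1.0),    # height
          random.uniform(0.05, 3.0))   # width
         for _ in range(200)]

def f(x):
    return max(h * math.exp(-((x - c) ** 2) / (2 * w ** 2)) for c, h, w in peaks)

def fragility_proxy(x, eps=1e-3):
    # magnitude of the second derivative at x, via finite differences
    return abs((f(x - eps) - 2 * f(x) + f(x + eps)) / eps ** 2)

best_x, best_val = 50.0, f(50.0)
for step in range(1, 20001):
    x = random.uniform(0, 100)         # blind random search
    if f(x) > best_val:
        best_x, best_val = x, f(x)
    if step in (100, 1000, 20000):
        print(f"step {step:6d}: efficiency {best_val:.3f}, fragility {fragility_proxy(best_x):.3f}")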
End of conversation