Ten abstractions that turned out to be much less robust than I'd expected:
- Evolutionary fitness
- GDP
- Economic welfare
- International law
- The scientific method
- Pleasure/suffering
- Personal identity
- Most ML theory
- Planning
- Rationality
Brief explanations ⬇️
Fitness: only defined locally, with respect to a population
GDP: same; GDP comparisons over time are very hacky
Welfare: you can't untangle selfish vs altruistic preferences
International law: see IR realism
The scientific method: *what* scientific method? See Strevens
Pleasure/suffering: see Buddhism
Personal identity: same, plus Parfit
Most ML theory: doesn't help scale deep learning
Planning: when did you last make a plan with 3+ steps? Abstraction does most of the work
Rationality: mostly just nudges intelligence in the right direction
Note that these aren't necessarily criticisms of the concepts as used by experts in the relevant fields - in several cases I just started off with a naively optimistic impression of how robust they were.
Also, since it's a long list, I'm probably wrong about at least one of these. But which?
Evolutionary fitness *is* defined locally, in both time and space, but I’m not sure that takes away a lot. That local optimization still resulted in everything you see around you. If it were defined globally, wouldn’t the whole process have ended a billion years ago?