3/ The question these researchers asked went something like this: how is it that some people become experts through trial and error, while others do not or cannot?
Sure, it's great if you can take a course or hire a coach. You will likely learn faster. But what if you can't?
4/ It turns out there's a theory about this. It's called Cognitive Transformation Theory, or CTT.
It tells us how people build expertise in the real world.
Think: less piano and chess, more business, leadership, and investing.
5/ The theory's central claim is that we learn by replacing flawed mental models with better ones. The key word is REPLACE.
Here's the catch: the more advanced our mental models, the easier it is for us to ignore anomalous data, or to explain them away. This blocks progress.
6/ What do I mean by this? Well, let's say you're trying to get better in the real world. This means trial and error.
If the learning environment is kind, you can improve quickly. You build mental models that help you achieve your goals. You become good at what you do.
Yay!
7/ But most of the time, the learning environment is messy. Learning is hard.
This should be obvious: you don't know what cues to look for in your experiences because you don't have good models. But you can't build good mental models because you don't notice the right cues!
8/ Worse, having an instructor point out these cues to you might make you worse in the long term. You need to learn to learn from experience.
Learning better from experience means 2 things: 1) getting good at introspection ('sensemaking') and 2) DESTROYING old mental models.
9/ So here we get at the heart of the theory.
CTT tells us that we learn only when we destroy old mental models. We DON'T learn when we are refining an existing model.
It also tells us that it gets harder to unlearn when our mental models become more sophisticated.
10/ This explains a bunch of things. It tells us why building expertise through trial and error is discontinuous. You hit plateaus, and then you make jumps.
11/ It also explains the importance of having loose feedback loops. One of the best blog posts on this idea is Brian Lui's Beware of Tight Feedback Loops: brianlui.dog/2020/05/10/bew
Lui talks about why loose feedback loops are important for investing and for life.
I'm thinking about adding noise to the training set of an ML model to prevent over-fitting
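(Not from the thread, but since the reply above brings up that trick: a minimal sketch of noise injection as a regularizer, assuming NumPy and a generic feature matrix X_train; the function name and sigma value are illustrative, not anything the replier specified.)

```python
import numpy as np

def add_input_noise(X_train, sigma=0.1, seed=0):
    """Return a copy of the training features with Gaussian noise added.

    Perturbing the inputs acts as a simple regularizer: the model can no
    longer memorize the exact training points, which tends to reduce
    over-fitting.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(loc=0.0, scale=sigma, size=X_train.shape)
    return X_train + noise

# Usage with hypothetical data:
# X_train = np.random.rand(100, 8)
# X_noisy = add_input_noise(X_train, sigma=0.05)
```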
My brain went there as well. Not a perfect mapping, but can't help but think it given Brian's gifs in the essay.

