Unfortunately, when people write down stories about... anything, they usually elide so many details that it's impossible to extract valid, meaningful tacit knowledge from them. For example, most articles about software design use extremely simplified toy example code.
Simple toy examples will show you roughly what a software design might look like, but they will do almost nothing to help you understand "if I implement this idea in the real world in my 100k-LOC codebase, what will it look like 2y later?"
If you're trying to get people to use a software design idea, simply explaining what it *is* is only like 10% of your problem. The other 90% is stuff like:
- When is it reasonable to apply the idea?
- What are the trade-offs vs. alternative ideas?
- How large are the benefits?
Those are questions you basically can't answer with abstract, explicit reasoning alone. Instead, you need to digest a lot of training data and then answer them intuitively. (This isn't just about software design; the same applies to any tacit-knowledge-heavy field!)
But in many fields, people don't seem to put nearly enough value on case studies that are detailed enough to extract real training data from.
In software, I think (good) outage postmortems are a counterexample to this—they have enough detail that you can use them to speedrun tacit knowledge acquisition.
It's interesting to think about why postmortems in particular are so good at including enough detail to train intuition. Maybe because the data points (incidents) are obviously very costly, so people realize it's important to learn the maximum amount from each one?
Also maybe because there's an obvious moment to write up a postmortem (right after the outage), whereas there's no obvious moment to write up the story of a management decision, a software design decision, etc.