4/ The 2nd idea: coming up with 'reasoned rules'.
The intuition: you want the consistency of algorithms, but you don't want to do the hard work of measuring outcomes to fit a statistical model.
What do you do? Simple: you pick a few commonsense variables, and use them to construct a formula.
5/ Example: you want to evaluate loan applications. You take a handful of historical cases, and pick a number of commonsense properties to look into for each case. Then, for each of those variables, calculate a 'standard score' (a z-score) across the cases.
Finally: set the cut-off points.
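The recipe above can be sketched in a few lines. The loan variables, their signs, and the cut-off here are all made-up illustrations, not anything from the thread:

```python
from statistics import mean, stdev

# Hypothetical historical loan cases: (income, debt_ratio, years_employed).
cases = [
    (52_000, 0.35, 4),
    (75_000, 0.20, 9),
    (38_000, 0.55, 1),
    (61_000, 0.40, 6),
]

def z_scores(values):
    """Standard score of each value relative to the whole column."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

cols = list(zip(*cases))             # one list per variable
zs = [z_scores(col) for col in cols]

# Debt ratio is "lower is better", so flip its sign before combining.
signs = [+1, -1, +1]

def score(i):
    """The 'reasoned rule': an equal-weight sum of signed z-scores."""
    return sum(sign * z[i] for sign, z in zip(signs, zs))

CUTOFF = 0.0  # illustrative: approve anything better than the average case
decisions = ["approve" if score(i) > CUTOFF else "decline"
             for i in range(len(cases))]
```

Equal weights are deliberate: the point of reasoned rules is that crude, commonsense weighting already beats unaided holistic judgment, without any outcome data to regress on.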
6/ 3rd idea: aggregate judgments. This one is simple. If you have many judgments, you can combine them and have the random variations cancel out.
This is why markets are good 'weighing machines' — over the long term, at least.
Only problem: aggregations are usually expensive.
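A toy simulation shows the cancellation. The true value, noise level, and judge count are arbitrary choices for illustration:

```python
import random

random.seed(0)
TRUE_VALUE = 100.0

# Each judge's estimate = truth + independent random noise.
judgments = [TRUE_VALUE + random.gauss(0, 15) for _ in range(1000)]

aggregate = sum(judgments) / len(judgments)
avg_individual_error = (sum(abs(j - TRUE_VALUE) for j in judgments)
                        / len(judgments))
# The aggregate sits far closer to the truth than the typical judge does,
# because independent errors cancel as roughly 1/sqrt(n).
```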
7/ 4th idea: use algorithms but make them tolerable!
Humans tend to distrust algorithms.
One neat trick is to use an algorithm, and then allow people to tweak the answer — even if just a little! This makes them more accepting of algorithm use.
pubsonline.informs.org/doi/abs/10.128
8/ 5th and final idea: do something called a Mediating Assessments Protocol.
It was originally described in a 2019 Kahneman article: sloanreview.mit.edu/article/a-stru
The basic idea is drawn from structured job interviews.
9/ In a structured job interview:
1) You have predetermined assessments.
2) When interviewing, you score each assessment independently first.
3) Only after completing the assessments on their own are interviewers allowed to pool judgments together.
10/ So Kahneman asks: why not do this, but for EVERYTHING?
In a MAP, you:
1) Come up with predetermined assessments.
2) Score each assessment independently.
3) Only think about the decision holistically AFTER you're done with the assessments.
But with a catch!
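The three MAP steps can be sketched as a tiny harness. The assessment names and the equal-weight average are illustrative assumptions, not Kahneman's actual protocol details:

```python
# Hypothetical predetermined assessments for, say, evaluating a startup.
ASSESSMENTS = ["team", "market", "product", "traction"]

def run_map(score_fn):
    # Steps 1 + 2: score each predetermined assessment independently,
    # before any holistic discussion.
    scores = {name: score_fn(name) for name in ASSESSMENTS}
    # Step 3: only now, with all intermediate scores on the table,
    # form the holistic judgment (here, a simple average).
    overall = sum(scores.values()) / len(scores)
    return scores, overall

# Toy usage: a scorer that gives every assessment 50.
scores, overall = run_map(lambda name: 50.0)
```

The structural point is that `score_fn` never sees the other scores, so one vivid impression can't bleed into every assessment.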
11/ How do you score?
Qualitative assessments like 'good' or 'very good' suck.
Scores like 4 out of 10 or 'A', 'B' and 'C' also suck.
Kahneman et al.'s answer: use a percentile evaluation over a comparison class!
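A percentile over a comparison class is simple to compute. The comparison-class scores below are invented for illustration:

```python
from bisect import bisect_right

# Hypothetical comparison class: overall scores of 20 past cases.
comparison_class = sorted([31, 35, 42, 44, 47, 50, 52, 55, 57, 60,
                           62, 64, 66, 69, 71, 74, 78, 81, 85, 90])

def percentile_vs_class(score: float) -> float:
    """Percentage of past cases this case beats or ties."""
    return 100.0 * bisect_right(comparison_class, score) / len(comparison_class)

# A score of 70 beats 14 of the 20 past cases, so it lands at the
# 70th percentile of the comparison class.
```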
12/ This does 3 things:
1) You are forced to take the 'outside view'. Comparison with other similar cases is easier!
2) Bad evaluators are easily identified. e.g. the one guy who rates 40% of cases as being in the top 10% ...
3) Percentiles can be easily turned into policy.
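Point 2 above is easy to operationalise: flag anyone whose ratings put an implausible share of cases in the top band. The function, threshold, and sample data here are hypothetical:

```python
def flag_overraters(ratings_by_evaluator, top_pct=10, max_share=0.2):
    """Flag evaluators who place more than max_share of their cases
    in the top `top_pct` percentile band."""
    flagged = []
    for name, percentiles in ratings_by_evaluator.items():
        share = sum(p >= 100 - top_pct for p in percentiles) / len(percentiles)
        if share > max_share:
            flagged.append(name)
    return flagged

# Hypothetical example: alice puts half her cases in the top 10%.
ratings = {
    "alice": [95, 92, 50, 40, 30, 91, 94, 20, 60, 93],
    "bob":   [50, 60, 70, 80, 90, 30, 40, 55, 65, 75],
}
flagged = flag_overraters(ratings)
```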
13/ Kahneman and co got an unnamed VC firm to implement MAP. To make it easier to evaluate potential investments, the firm scored each new deal against a comparison scale of its past investments.
It worked pretty well, for very little extra work.
14/ The goal of these 5 techniques is ultimately to reduce variability in decision making.
If I could summarise the common thread across all 5 ideas, it is this: create some structure for your decisions. Structure tamps down on noise.
For the full post: see
