2/ Let's talk about 5 techniques. The first idea is doing a noise audit. Measuring bias is hard: you may have to wait for the outcomes of decisions, which can take years. Noise is much easier to measure: you look at your org and measure the variance of its decision outputs.
3/ Kahneman et al. suggest administering a set of sample cases, and then measuring the variance in evaluations of those cases across your org. They did this at two financial services firms. Executives expected the variance to be 5-10%. It turned out to be closer to 48-60%.
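A minimal sketch of what that measurement can look like, assuming one plausible metric: the average relative difference between pairs of judgments of the same case. The thread doesn't pin down the exact formula, so the metric and the numbers below are illustrative.

```python
import itertools
import statistics

def noise_index(judgments_by_case):
    """For each case, take every pair of evaluators' judgments and compute
    their relative difference, |a - b| / mean(a, b); average across pairs,
    then across cases. An assumed metric, not the firms' exact one."""
    per_case = []
    for values in judgments_by_case.values():
        diffs = [abs(a - b) / ((a + b) / 2)
                 for a, b in itertools.combinations(values, 2)]
        per_case.append(statistics.mean(diffs))
    return statistics.mean(per_case)

# Hypothetical audit: five underwriters price the same three sample policies.
audit = {
    "case_1": [9800, 13500, 16000, 12000, 11000],
    "case_2": [640, 900, 780, 1100, 700],
    "case_3": [305, 420, 380, 290, 350],
}
print(f"noise index: {noise_index(audit):.0%}")  # far above what execs expect
```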
4/ The 2nd idea: coming up with 'reasoned rules'. The intuition: you want the consistency of an algorithm, but you don't want to do the hard work of measuring outcomes. What do you do? Simple: you pick a few commonsense rules, and then you use those to construct a formula.
5/ Example: you want to evaluate loan applications. You take a handful of historical cases, and pick a number of commonsense properties to look at in each case. Then, calculate a 'standard score' (a z-score) for each of those variables across the cases. Finally: set the cut-off points.
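A sketch of the resulting formula, assuming equal weights on each standardized variable. The variable names, the sign conventions, and the cut-off are made up for the example; the thread doesn't specify them.

```python
import statistics

# Hypothetical historical loan cases (values and variables are illustrative).
HISTORY = {
    "income":         [42, 85, 61, 30, 75, 55],              # higher is better
    "debt_ratio":     [0.45, 0.20, 0.33, 0.50, 0.25, 0.38],  # lower is better
    "years_employed": [1, 6, 4, 0, 8, 3],                    # higher is better
}
SIGNS = {"income": +1, "debt_ratio": -1, "years_employed": +1}
STATS = {var: (statistics.mean(v), statistics.stdev(v))
         for var, v in HISTORY.items()}

def reasoned_rule(applicant):
    """Equal-weight average of each variable's standard score (z-score)."""
    zs = [SIGNS[var] * (x - STATS[var][0]) / STATS[var][1]
          for var, x in applicant.items()]
    return statistics.mean(zs)

CUTOFF = 0.0  # assumed cut-off: approve anyone above the historical average
applicant = {"income": 58, "debt_ratio": 0.30, "years_employed": 5}
score = reasoned_rule(applicant)
print(f"score {score:+.2f} -> {'approve' if score > CUTOFF else 'decline'}")
```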
6/ 3rd idea: aggregate judgments. This one is simple. If you have many judgments, you can combine them and have the random variations cancel out. This is why markets are good 'weighing machines' — over the long term, at least. Only problem: aggregations are usually expensive.
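The statistics behind this is just the averaging of independent errors: the standard deviation of the mean of k independent judgments shrinks by a factor of √k. A quick simulation with made-up numbers:

```python
import random
import statistics

random.seed(0)
TRUE_VALUE, NOISE_SD = 100.0, 20.0  # hypothetical quantity being judged

def one_judgment():
    return random.gauss(TRUE_VALUE, NOISE_SD)  # unbiased but noisy judge

for k in (1, 4, 16, 64):
    pooled = [statistics.mean(one_judgment() for _ in range(k))
              for _ in range(2000)]
    print(f"k={k:2d}  sd of averaged judgment = {statistics.stdev(pooled):5.2f}")
# The sd falls roughly as NOISE_SD / sqrt(k): ~20, ~10, ~5, ~2.5.
```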
9/ In a structured job interview: 1) You have predetermined assessments. 2) When interviewing, you score each assessment independently first. 3) Only after completing the assessments on their own are interviewers allowed to pool judgments together.
10/ So Kahneman asks: why not do this, but for EVERYTHING? In a MAP (Mediating Assessments Protocol), you: 1) Come up with predetermined assessments. 2) Score each assessment independently. 3) Only think about the decision holistically AFTER you're done with the assessments. But with a catch!
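As a workflow (covering the structured interview above too), MAP is mostly sequencing discipline. A minimal sketch, where the assessment names, the scoring function, and the holistic rule are all placeholders, not Kahneman's exact protocol:

```python
import statistics

ASSESSMENTS = ("team", "market", "product", "traction")  # 1) predetermined

def map_evaluate(case, score_assessment, decide_holistically):
    # 2) Score every assessment independently, before any overall discussion.
    scores = {name: score_assessment(case, name) for name in ASSESSMENTS}
    # 3) Only now consider the case holistically, scores in hand.
    return decide_holistically(scores), scores

# Toy plumbing: this case already carries per-assessment percentile scores
# (the next tweet explains why percentiles), and the holistic rule is a
# placeholder average-with-threshold.
case = {"team": 80, "market": 65, "product": 90, "traction": 40}
decision, scores = map_evaluate(
    case,
    score_assessment=lambda c, name: c[name],
    decide_holistically=lambda s: "invest"
        if statistics.mean(s.values()) >= 70 else "pass",
)
print(decision, scores)
```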
11/ How do you score? Qualitative assessments like 'good' or 'very good' suck. Scores like 4 out of 10 or 'A', 'B' and 'C' also suck. Kahneman et al's answer: use a percentile evaluation over a comparison class!
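A percentile over a comparison class is cheap to compute: rank the case against comparable past cases. One plausible version, since the thread doesn't specify tie handling or how the class is built:

```python
from bisect import bisect_left

def percentile_score(value, comparison_class):
    """Percent of past cases this one beats; returns 0-100."""
    ranked = sorted(comparison_class)
    return 100.0 * bisect_left(ranked, value) / len(ranked)

# Hypothetical comparison class, e.g. realized multiples on past deals.
past_multiples = [0.4, 0.6, 0.8, 1.0, 1.2, 2.0, 3.5, 5.0]
print(percentile_score(2.4, past_multiples))  # 75.0: beats 6 of 8 comparables
```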
12/ This does 3 things: 1) You are forced to take the 'outside view'. Comparison with other similar cases is easier! 2) Bad evaluators are easily identified. e.g. the one guy who rates 40% of cases as being in the top 10% ... 3) Percentiles can be easily turned into policy.
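Point 2 is a one-line check: a calibrated rater should put roughly 10% of cases in the top decile, so anyone far above that stands out. A sketch with made-up ratings and an assumed flagging threshold:

```python
def miscalibrated_raters(ratings_by_rater, expected=0.10, slack=2.0):
    """Flag raters whose share of top-decile (>= 90th percentile) ratings
    exceeds `slack` times the expected 10%. Thresholds are assumptions."""
    flagged = {}
    for rater, percentiles in ratings_by_rater.items():
        share = sum(p >= 90 for p in percentiles) / len(percentiles)
        if share > expected * slack:
            flagged[rater] = share
    return flagged

ratings = {
    "alice": [55, 72, 90, 40, 63, 81, 12, 95, 70, 30],  # ~20% top decile
    "bob":   [92, 95, 97, 91, 60, 93, 94, 50, 96, 90],  # 80% top decile
}
print(miscalibrated_raters(ratings))  # {'bob': 0.8}
```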
13/ Kahneman and co. got an unnamed VC firm to implement MAP. To make it easier to evaluate potential investments, they used a scale anchored on past investments. It worked pretty well, for very little extra work.