Conversation

Efforts to improve forecasting work primarily by reducing random noise in predictions, not compensating for bias! Remember when someone asked Kahneman "how do you avoid the biases you discovered" and he was like "I don't" and everyone made fun of him? WHO'S LAUGHING NOW?!
Quote Tweet
This week's Commonplace post is a summary of a landmark paper from the Good Judgment Project — i.e. Satopää et al's BIN model. It gives us more evidence that it's better to tamp down on noise to improve decisions, instead of fighting cognitive biases. commoncog.com/blog/reduce-no
"50 percent of the accuracy improvements can be attributed to noise reduction, 25 percent to tamping down bias, and 25 percent to increased information." So this work tells us that the low-hanging fruit was reducing noise - not that bias reduction was irrelevant.
It certainly does not mean that it's *better* to tamp down on noise, just that GJP was most successful in doing so. (It's also non-obvious that the way they operationalize reducing noise and information provision weren't equivalent to reducing *individual level* biases.)
But one of the most successful interventions is calibration training to reduce overconfidence - and the BIN model seems to categorize this as reducing noise, which is easily misinterpreted. Or am I missing something?
Quote Tweet
Replying to @davidmanheim @benskuhn and @tenthkrige
To clarify the latter point, when looking at predictive accuracy on individual questions, miscalibration is noise, not bias. Reducing overconfidence, which is typically thought of as a bias, is considered noise reduction.
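
The distinction in that last reply is easy to see with a toy simulation. The sketch below is not the BIN model from Satopää et al. (which is a Bayesian model fit to actual GJP forecast data); it only assumes a made-up set of questions with known true probabilities and an "overconfident" forecaster who symmetrically pushes every forecast toward 0 or 1. The stretch factor and probability range are arbitrary, chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_questions = 10_000
# Assumed setup: each question has a known true probability of resolving "yes".
p_true = rng.uniform(0.05, 0.95, n_questions)
outcomes = rng.binomial(1, p_true)

def brier(forecasts, outcomes):
    """Mean Brier score across questions (lower is better)."""
    return np.mean((forecasts - outcomes) ** 2)

# A perfectly calibrated forecaster reports the true probability.
calibrated = p_true

# An overconfident forecaster stretches every forecast away from 0.5.
# The stretch is symmetric, so on average it pushes nothing up or down:
# it introduces no systematic (directional) bias.
overconfident = np.clip(0.5 + 1.6 * (p_true - 0.5), 0.01, 0.99)

errors_cal = calibrated - p_true
errors_over = overconfident - p_true

print("Mean signed error (bias-like):")
print("  calibrated:   ", round(errors_cal.mean(), 4))   # ~0
print("  overconfident:", round(errors_over.mean(), 4))  # still ~0

print("Error spread, SD (noise-like):")
print("  calibrated:   ", round(errors_cal.std(), 4))    # 0
print("  overconfident:", round(errors_over.std(), 4))   # clearly > 0

print("Brier score:")
print("  calibrated:   ", round(brier(calibrated, outcomes), 4))
print("  overconfident:", round(brier(overconfident, outcomes), 4))  # worse
```

On this toy setup the overconfident forecaster's average signed error stays near zero, but its question-level errors spread out and its Brier score gets worse. That is the sense in which reducing overconfidence is counted as noise reduction rather than bias reduction when accuracy is scored question by question.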