Conversation

I want to run through a couple of caveats about the BIN model here, partly because I think this is very nerdy, and also because I think Twitter is good for throwaway thoughts.
Quote Tweet
This week's Commonplace post is a summary of a landmark paper from the Good Judgment Project — i.e. Satopää et al's BIN model. It gives us more evidence that it's better to tamp down on noise to improve decisions, instead of fighting cognitive biases. commoncog.com/blog/reduce-no
First: it’s important to understand that what the authors call “bias, information, and noise” are statistical artefacts of the model, not quantities measured directly. I’m not 100% confident that the model is an accurate reflection of reality.
As someone put it: “this is not like physics, where you assume a model, then test it, then verify the model (or not). Here, the authors are assuming a model is accurate, and then using it to measure a latent variable that they can’t measure directly.”
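To make that concrete, here’s a toy sketch of the kind of decomposition involved. This is my own illustration, not the paper’s actual specification (the BIN model works on probability forecasts through a latent Gaussian setup): suppose a forecast is f = μ + b + ε, where μ is the best estimate given the forecaster’s information, b is a constant bias, and ε is zero-mean noise independent of everything else. Then the expected squared error splits as

E[(f − y)^2] = b^2 + Var(ε) + E[(μ − y)^2]

that is, a bias term, a noise term, and a term reflecting the information the forecaster lacks. None of those three pieces is observed on its own; you only see the forecasts and outcomes, so the components have to be inferred by assuming the model holds. That is exactly the worry above.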
Second, it’s worth noting that the BIN model was run against an old dataset. The original GJP data is several years old at this point, and the authors didn’t test the model against an ongoing or new forecasting tournament. I hope they do so; GJP2 is currently ongoing.
Third, I would feel more confident about the BIN model if it generated new, testable predictions, and the authors then went out and tested them. Confirmation of those predictions should make us more confident that the model actually reflects reality.
But of course, I’m probably not fully grokking parts of the math; maybe the model has been verified outside of the paper? (But I don’t think so; the paper says that BIN is novel.)
Finally, the biggest takeaway for me is that so much of modern science demands a level of statistical sophistication that I don’t have; I totally regret not taking more stats classes in uni.