Conversation

At the borderlands of EA and non-EA, I find that the main argument I tend to want to cite is Bayes: 'Yep, A seems possible. But if not-A were true instead, what would you expect to see differently? How well does not-A retrodict the data, compared to A?'
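The comparison above is just posterior odds: multiply your prior odds by the ratio of how well each hypothesis retrodicts the data (the Bayes factor). A minimal sketch, with made-up illustrative numbers:

```python
def posterior_odds(prior_odds, likelihood_a, likelihood_not_a):
    """Posterior odds of A over not-A after seeing the data.

    The Bayes factor is the ratio of how much probability each
    hypothesis assigned to the data we actually observed.
    """
    bayes_factor = likelihood_a / likelihood_not_a
    return prior_odds * bayes_factor

# Suppose A and not-A start out equally plausible (prior odds 1:1),
# A assigns probability 0.5 to the observed data, and not-A assigns
# only 0.25 to it. (All numbers are hypothetical.)
odds = posterior_odds(1.0, 0.5, 0.25)
print(odds)  # 2.0 -> the data favor A over not-A at 2:1
```

The point of the exercise is that "how well does not-A retrodict the data?" is not rhetorical: it is the denominator of the Bayes factor, and forgetting to compute it is the usual failure mode.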
And relatedly, 'What are the future predictions of A versus not-A, and how soon can we get data that provides nontrivial evidence for one side versus the other?' But that's a more standard part of the non-EA college-educated person's toolbox.
And there's a sense in which almost all of the cognitive resources available to a human look like retrodiction, rather than prediction. If you hear a new question and only trust your pre-registered predictions, then your whole lifetime of past knowledge is useless to you.
We have in fact adopted the norm "give disproportionate weight to explicit written-down predictions", to guard against hindsight bias and lying. But it's still the case that almost all the cognitive work is being done at any time by "how does this fit my past experience?".
I guess there's another, subtler reason we give extra weight to predictions: there's a social norm against acknowledging gaps in individual ability. If you only discuss observables and objective facts, never priors, then it's easier to just-not-talk-about individuals' judgment.
Whatever the rationale, it's essential that we in fact get better at retrodiction (i.e., reasoning about the things we already know), because we can't do without it. We need to be able to talk about our knowledge, and we need deliberate practice at manipulating it.
The big mistake isn't "give more weight to pre-registered predictions"; it's "... and then make it taboo to say that you're basing any conclusions on anything else". Predictions are the gold standard, but man cannot live on gold alone.