11) But even if we didn't, we could have a good guess. 538's model had Biden winning the popular vote by ~8% and the electoral college by ~5%, with a 90% chance that Biden wins the EC. Prediction markets don't publish models, but we can guess: they were at 30% for TRUMP.
12) So _probably_, their implied model was something like: Biden winning the popular vote by ~4% and the EC by ~1%, with a standard deviation of ~3%. That's what you'd get if you took 538's model and shifted it all until it was 30% for Trump.
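A minimal sketch of that back-out, assuming a normal EC margin (the ~3-point SD and the 30% price are the thread's rough numbers; scipy and the popular-vote extrapolation are my additions):

```python
# Back out the EC-margin mean a normal model would need to price Trump at 30%.
from scipy.stats import norm

sd = 3.0  # assumed standard deviation of the EC margin, in points

# Shift the mean until P(Biden's EC margin < 0) = 30%, i.e. 30% for Trump:
implied_ec_mean = -sd * norm.ppf(0.30)
print(implied_ec_mean)  # ~1.6 points, near the ~1% guessed above

# 538 had the popular vote ~3 points above the EC (+8 vs. +5); keeping
# that gap gives an implied popular-vote margin of roughly +4 to +5:
print(implied_ec_mean + 3.0)  # ~4.6 points
```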
13) You can do this for lots of predictions: guess at the underlying model that generated them, and see how well it did. _Now_ let's do a Bayesian update. Assuming a normal distribution with an SD of 3%: 538 predicted 8%, PMs predicted 4%, and the actual margin was 4%. That's a 2.4x update for PMs!
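The arithmetic, as a sketch: with normal likelihoods and the assumed 3-point SD, the update is just the ratio of the two densities at the observed margin.

```python
from scipy.stats import norm

sd = 3.0       # assumed SD on the margin, per the thread
actual = 4.0   # the realized margin, in points

like_538 = norm.pdf(actual, loc=8.0, scale=sd)  # 538's implied mean: +8
like_pm  = norm.pdf(actual, loc=4.0, scale=sd)  # PMs' implied mean: +4

# (4 - 8) / 3 is about -1.33, so the result sat just over 1 SD from
# 538's mean, while landing right on the PMs' implied mean.
print(like_pm / like_538)  # ~2.43: the update in favor of PMs
```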
14) If you were 50/50 before, now you'd be 70/30 for prediction markets. A big swing, and the opposite of what we got before! I basically think this approach is better: it uses a fuller guess at people's models.
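And the 70/30 is just prior odds times that Bayes factor (a sketch using the ~2.43 from above):

```python
prior_pm = 0.5                        # 50/50 before
bf = 2.43                             # likelihood ratio from above
odds = prior_pm / (1 - prior_pm) * bf
print(odds / (1 + odds))              # ~0.71, i.e. roughly 70/30 for PMs
```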
15) Basically: sure, 538 guessed the winner more confidently. But both guessed it right, and in fact prediction markets guessed the margin about right; 538 was off. The election was fairly close! Prediction markets nailed this; 538 didn't.
16) All of this, BTW, is assuming that Biden did/does win; if you disagree with that, you have different updates, and in fact you'd update much _further_ towards prediction markets. But even this approach can be flawed.
17) What if, in an alternate universe, Biden won by 4%, but 24h before the election he'd been caught in an armed robbery (unlikely, but you never know!)? Sure, prediction markets would have been "right". But the truth is _no one_ was predicting burglary.
18) Really, he would have been on track to win by 8% before that really unlikely event, an event that wasn't part of either 538's or PMs' models. I think the *right* update would have been: 538 was about right with +8%, and then some really unlikely thing happened.
19) So you can't just look at the result and impute it to the models: you have to understand whether the result was really part of what the models were modeling, and what the real update is. If you'd asked 538 and PMs how likely "+8% --> burglary --> +4%" was, 538 would have been higher!
20) So in fact the _full_ Bayesian update would have favored 538 there, even though updating on the vote margin alone would have favored prediction markets. In this case, though, it wasn't shocking: polls were wrong in about the way some thought they might be, about the same as in 2016.
21) This was, in fact, pretty close to exactly what I think prediction markets were predicting. It wasn't *shocking* to 538 -- just over 1 standard deviation away from their mean! But it was a *little* bit surprising, at least more than it was to markets.
22) And this doesn't mean markets are always right either! After results started coming out, all PMs went to ~80% for Trump. Trump going up was correct! Florida was good for him. But 80% was probably too high, and PMs probably overreacted.
23) To be honest, I'd be surprised if they *didn't*. Ask my rational brain and it'd have said "eh, this takes it from 30/70 to 50/50".
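To put numbers on the gap between those two reactions (illustrative odds arithmetic, not the markets' actual reasoning):

```python
def implied_bayes_factor(prior: float, posterior: float) -> float:
    """Likelihood ratio needed to move prior odds to posterior odds."""
    return (posterior / (1 - posterior)) / (prior / (1 - prior))

print(implied_bayes_factor(0.30, 0.80))  # ~9.3x: what the ~80% price implies
print(implied_bayes_factor(0.30, 0.50))  # ~2.3x: the "rational brain" move
```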
24) Tell my emotional brain, and: "so you're saying TRUMP was heavily favored to lose, and was about to lose Florida and thus the election, but then one late-reporting Florida county came in way better for Trump than expected so he won Florida instead"--
25) My first instinct was "whelp, we've been here before". We hadn't, in fact! 2020 was different: Biden's lead was ~4% higher, which was enough of a cushion. But prediction markets sure thought it looked the same.