1) One last note on .
In general, they've done well.
Tonight was not their best moment.
2) Going into the election, they'd have to be off by 5% nationally for Trump to win: projects.fivethirtyeight.com/2020-election-
In fact, it looks like Biden will win, as they predicted.
electionbettingodds.com
But it looks like a ~1% swing nationally would make it a dead heat.
5) Prediction markets, for instance, had Trump around 35% going into the election.
ftx.com/president2020
If you take 538 and shift it by enough so that Trump is 35% to win, you'd have to shift it by... about 3.5%.
Which is pretty close to how much 538 was off by.
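The "shift the forecast until Trump is at 35%" exercise can be sketched in a toy one-dimensional model. This is not 538's actual model (which simulates the electoral college), and all numbers below are made up for illustration: treat Biden's national margin as a single normal random variable and solve for the shift that brings Trump's win probability to a target.

```python
# Toy illustration (NOT 538's actual model): model Biden's national margin as
# Normal(mean, sd) and ask how far the mean must shift for Trump's win
# probability to reach a given target. All numbers here are hypothetical.
from statistics import NormalDist

def trump_win_prob(mean_margin, sd):
    """P(Biden margin < 0) under a Normal(mean_margin, sd) model."""
    return NormalDist(mean_margin, sd).cdf(0)

def shift_for_target(mean_margin, sd, target):
    """Points to subtract from the mean so Trump's win probability hits target."""
    # Solve P(N(mean - d, sd) < 0) = target  =>  d = mean + sd * z(target)
    return mean_margin + sd * NormalDist().inv_cdf(target)

mu, sd = 8.0, 6.0  # hypothetical: Biden +8 nationally, 6-point standard deviation
print(round(trump_win_prob(mu, sd), 2))          # baseline Trump win probability
print(round(shift_for_target(mu, sd, 0.35), 1))  # shift needed to reach 35%
```

The actual ~3.5% figure in the thread comes from 538's full state-by-state simulation, which this one-number sketch can't reproduce; the code only shows the mechanics of the shift argument.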
7) All of which would have been ok, except that he didn't say "yeah prediction markets are cool too, idk, ours is just a model, it's unclear what's right".
He said this:
Quote Tweet
8) And, now, when presented with a bit of a miss, his response is...
Quote Tweet
LOL to all of you who gave 538's model shit for producing really weird maps every now and then.
9) C'mon, man. You've done great, overall. You did better than most in 2016, after nailing 2008 and 2012.
2020 was bad but not terrible. It's time to own it.
It was not terrible though. Nuance needs to be recognized, and he acknowledged that. And he in fact ends up being for the most part right in his final conclusion.
not sure about "ends up being for the most part right", I think he was more off than most people were.
And I totally recognized nuance--half my post was saying that it wasn't egregious except that he kept doubling down.
Out of curiosity, what is one of your predictions or models that was slightly off that you should own?
I think your case is stronger, though mixed, when looking at the state level, too
Agreed twitter.com/dglid/status/1
Quote Tweet
Quick and dirty Brier analysis of the models vs. prediction markets: Among states where the average probability was between 5-95% (i.e. excluding shoo-ins):
* @PredictIt: 0.13
* @FiveThirtyEight: 0.17
* @TheEconomist: 0.18
Lower is better (0.00 is perfect score).
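The Brier analysis quoted above is straightforward to reproduce in principle. Here is a minimal sketch: the `brier_score` helper and all probabilities/outcomes below are hypothetical toy numbers, not the actual 2020 state forecasts.

```python
# Sketch of a "quick and dirty" Brier analysis like the one quoted above.
# All probabilities and outcomes here are illustrative, not real 2020 data.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Toy example: probability that candidate A wins each of several states,
# keeping only states where the forecast was between 0.05 and 0.95
# (i.e. excluding shoo-ins, as in the quoted analysis).
forecasts = [0.70, 0.40, 0.90, 0.20, 0.99]
outcomes  = [1,    0,    1,    0,    1]    # 1 = candidate A won

kept = [(p, o) for p, o in zip(forecasts, outcomes) if 0.05 < p < 0.95]
ps, os_ = zip(*kept)
print(round(brier_score(ps, os_), 3))  # -> 0.075 on these toy numbers
```

A Brier score of 0.00 is a perfect score; a constant 50/50 forecast scores 0.25, which is why the 0.13-0.18 range quoted above is meaningful.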
Quote Tweet
Replying to @robertwiblin
SBF seems to think they were overconfident
If this "overconfidence" led them to predict other races incorrectly, then that would show in a Brier score.
But "overconfidence" in the right direction is just... good prediction.
I think the bigger problem is the way the narrative played out. If all votes from all states had come in at once, we might not have been as critical.
Quote Tweet
Maybe this isn't a forecaster/modeler's job, but I do think one thing that could improve public understanding of what to expect would be to spend some time on how a polling error in a particular early state could affect the narrative of the election.
I'll also add that, if I recall correctly, on yesterday's podcast he forcefully defended the model but also admitted it may be worth evaluating whether there is something systemic about polling errors, which in the last few elections have been wrong in the same direction.