I wonder if it’s fair to start asking questions about how forecasting influenced the 2016 election: https://solomonmg.github.io/projects/1_project/ https://twitter.com/NateSilver538/status/1076521277756788736
Replying to @SolomonMg @zeynep
As an outside observer from the UK who followed the 2016 polling from 538 and @NateSilver538's commentary, it was absolutely clear to me they believed Clinton was far from certain to win. To imply otherwise is disingenuous at best.
Replying to @benjohnbarnes @SolomonMg
I believe 538's probability of a Trump win prior to election day was about 33%. That's 1 Trump win for every 2 Clinton wins.
@NateSilver538 had consistently said he felt there were factors that probably made a Trump win more likely than the models were predicting.
Replying to @benjohnbarnes @SolomonMg
I've seen this revisionism about their predictions and record a few times. I believe they gave a solid and extensive warning of what was happening and what could happen. Whoever is to blame, in my view, 538 aren't among them.
I don’t disagree with any of this. Our work isn’t about 538 being overly certain. It’s about the certainty everyone else took away from the forecasting that became popular after 538’s success.
I wonder if people just don't understand probabilities (at least as they apply to decisive one-off events like this, rather than mixes). I talked to people who said "they only gave Trump a 33% chance", seemingly unaware that 33% is 1 time in 3.
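The "33% is 1 time in 3" point is easy to check numerically. A minimal sketch (the 0.33 figure and the simulation setup are illustrative, not 538's actual model): simulate many binary outcomes where the underdog wins with probability 0.33 and count how often they win.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# A 33% probability means the event happens about 1 time in 3.
p_underdog = 0.33
trials = 100_000
underdog_wins = sum(random.random() < p_underdog for _ in range(trials))

print(f"Underdog won {underdog_wins} of {trials} simulated runs "
      f"({underdog_wins / trials:.1%})")  # roughly one in three
```

The frequency converges to about 33%, i.e., one underdog win for every two favorite wins — exactly the ratio mentioned upthread.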
I suspect there’s some truth to that. There’s some research we cite in the original link that provides some evidence: https://solomonmg.github.io/projects/1_project/
I've had endless threads about this. People don't know how to interpret probability models after a lifetime of poll results. 75–80% looks like a huge landslide, and things like the extra decimal digit didn't help. Also, the visual dominates, and the visual looked like a landslide.
Replying to @zeynep @SolomonMg
I read the text and footnotes: few previous data points, with big gaps between them (every four years), to base a model on, and one in three is huge odds. Almost nobody reads footnotes. Even 60/40 looks like a landslide because we have a lifetime of being trained to read polls, not odds.
So the issue is how things are interpreted, what can be done to keep them from being interpreted erroneously, and how prediction, in general and however presented, affects behavior. All important.
Replying to @zeynep @SolomonMg
I’m coming from a different background, but I’m fascinated by this because you see it in game design as well: many people *do not understand* how they can fail a check with 90% odds of success twice in a row. Curious if there’s a solve that would teach model literacy.
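The game-design point above has a simple arithmetic core: two independent failures of a 90%-success check occur with probability 0.1 × 0.1 = 1%, so over many attempts a player will see it happen. A minimal sketch (the check mechanics are hypothetical, just a coin-flip model):

```python
import random

random.seed(1)  # fixed seed for reproducibility

# A check that succeeds 90% of the time still fails twice in a row
# 0.1 * 0.1 = 1% of the time -- about once per hundred pairs of rolls.
p_success = 0.9
exact = (1 - p_success) ** 2
print(f"Exact chance of two failures in a row: {exact:.1%}")  # 1.0%

# Simulated version: roll pairs of independent checks, count double failures.
pairs = 100_000
double_fails = sum(
    random.random() > p_success and random.random() > p_success
    for _ in range(pairs)
)
print(f"Simulated frequency: {double_fails / pairs:.2%}")
```

Rare is not impossible: at 1% per pair, a player making hundreds of checks should expect to see a double failure, which is exactly the intuition gap described above.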