I don’t know about this... Forecasts in general can definitely be wrong. Assigning a 99% chance to Clinton winning, for example, was a modeling error. If you don’t have the outcome in your prediction interval, from a modeling standpoint you very likely did something wrong. https://twitter.com/zeynep/status/1323649467015376896
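(A hedged illustration of the interval point, using purely simulated numbers rather than anyone's real model: over many forecasts, outcomes should land inside an honest 95% interval about 95% of the time, and an overconfident model shows up as much lower coverage.)

```python
import random

random.seed(0)

# Simulated coverage check (all numbers hypothetical).
# The "honest" model reports a correct 95% interval for a standard
# normal outcome; the "overconfident" model reports one half as wide.
TRIALS = 2000
honest_hits = overconfident_hits = 0
for _ in range(TRIALS):
    outcome = random.gauss(0.0, 1.0)
    honest_hits += -1.96 <= outcome <= 1.96          # true 95% interval
    overconfident_hits += -0.98 <= outcome <= 0.98   # interval half as wide
honest_cov = honest_hits / TRIALS
overconfident_cov = overconfident_hits / TRIALS
print(f"honest interval coverage:        {honest_cov:.1%}")
print(f"overconfident interval coverage: {overconfident_cov:.1%}")
```

A single outcome falling outside one interval proves little, but coverage far below the nominal 95% across many forecasts is the "you very likely did something wrong" signal.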
Predicting five million deaths may even be the reason we end up with far fewer deaths (scared people take precautions, and deaths go way down). Outcome is rarely how you can evaluate model quality when there is reflexivity and response; hence the difficulty with things like close elections or epidemiology.
I don't know whether I like the implication of this perspective, insofar as the population at large is deemed uneducated and therefore unable to understand a probabilistic forecast. If we can say with confidence that Biden will win, but instead lie and say that it's close...
New conversation
Given Trump's polls in 2016, it's pretty insane to think he had a 1% chance. People with that deficit win all the time. It was a junk model.
I do agree those models sucked, but you cannot get that from the outcome alone. Any model that did not take correlated shifts properly into account is a bad model for electoral college forecasting.
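The effect of ignoring correlated shifts can be sketched with a toy Monte Carlo. All numbers below are assumptions for illustration, not any real 2016 model: an underdog trails by the same margin in several must-win states, and we compare the upset probability when state polling errors are independent versus when they share a national component of the same total variance.

```python
import math
import random

random.seed(1)

# Toy numbers for illustration only, not any real forecast.
TRAIL = 3.0          # underdog trails by 3 points in each must-win state
TOTAL_SD = 4.0       # total polling error per state (std dev), both cases
NATIONAL_SD = 3.0    # shared national error component (correlated case)
STATE_SD = math.sqrt(TOTAL_SD**2 - NATIONAL_SD**2)   # leftover state noise
MUST_WIN = 3         # states the underdog must flip to win

def upset_probability(correlated: bool, trials: int = 50_000) -> float:
    """Monte Carlo estimate of P(underdog flips every must-win state)."""
    upsets = 0
    for _ in range(trials):
        national = random.gauss(0.0, NATIONAL_SD) if correlated else 0.0
        state_sd = STATE_SD if correlated else TOTAL_SD
        # A state flips when its total polling error exceeds the trail.
        if all(national + random.gauss(0.0, state_sd) > TRAIL
               for _ in range(MUST_WIN)):
            upsets += 1
    return upsets / trials

p_indep = upset_probability(correlated=False)  # errors independent per state
p_corr = upset_probability(correlated=True)    # errors share a national shift
print(f"independent errors: upset chance ~ {p_indep:.1%}")
print(f"correlated errors:  upset chance ~ {p_corr:.1%}")
```

With these assumed numbers, the independent-error model calls the upset roughly a 1-in-100 event, while the correlated-error model with the same total variance puts it several times higher: leaving correlated shifts out systematically understates upset chances.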
New conversation
I think most modelers would not accept your exclusive definition of “wrong” as “a model that gave 100% odds to something that didn’t happen.” There are degrees of error in our modeling, and an outcome that far out along the uncertainty interval counts heavily against the model.
Anyway. We’re not going to make any progress here so maybe we can discuss in an academic setting or blog about it. I think your overall critique has some important truths to it, but the idea that we can’t diagnose and/or reject bad (“wrong”) models is silly.
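One standard way to make “degrees of error” concrete is a proper scoring rule such as the Brier score, which penalizes a confident miss far more than a hedged one. A minimal sketch with hypothetical probabilities, comparing a model that gave the eventual winner a 1% chance with one that gave 29%:

```python
def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.

    Lower is better: 0.0 is a perfect forecast, 1.0 a maximally wrong one.
    """
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# A single hypothetical event that happened (outcome = 1).
overconfident = brier_score([0.01], [1])  # gave the winner a 1% chance
hedged = brier_score([0.29], [1])         # gave the winner a 29% chance
print(overconfident, hedged)
```

Both forecasts “missed,” but to very different degrees; and, as noted upthread, a single outcome is weak evidence either way, since scoring rules only discriminate reliably over many forecasts.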