Stats people, help me out. For all the people like @gelliottmorris making probabilistic forecasts of winning back the House, updated to three significant figures... how are you supposed to validate those models when you only get to observe one outcome?
Replying to @Pinboard @gelliottmorris
Disappointed that no one chimed in with an answer to this one. I'm not a statistician but am firmly in the camp of "you can't validate probabilistic models of one-offs."
Replying to @beenwrekt @gelliottmorris
I asked @gelliottmorris about this before, but I think he's muted me or something.
Replying to @Pinboard @gelliottmorris
He must think that you have a bad case of "probabilistic misinterpretation-itis."
Replying to @beenwrekt @Pinboard
Clever. Validated by using the model to predict past elections, i.e. use a method trained on 2008/2012 data to predict and evaluate 2014 and 2016.
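The backtesting idea in that reply can be sketched with a scoring rule. The thread doesn't name one, so the Brier score below, and all the numbers, are illustrative assumptions: a real backtest would score forecasts fitted on 2008/2012 against the actual 2014/2016 outcomes.

```python
# Sketch of a backtest: score held-out probabilistic forecasts against
# observed 0/1 outcomes. All data here are made up for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted win probabilities and
    0/1 outcomes. Lower is better; always guessing 0.5 scores 0.25."""
    return sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical held-out races: predicted win probability, actual result (1 = win).
forecasts = [0.91, 0.62, 0.55, 0.08, 0.73]
outcomes = [1, 1, 0, 0, 1]

print(round(brier_score(forecasts, outcomes), 4))  # → 0.1069
```

Comparing this score across election cycles, and against a naive baseline, is one way to check the model out of sample even though each individual election happens only once.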
Replying to @gelliottmorris @beenwrekt
(I'm not asking this adversarially, but honestly). There's such a small set of past elections, how do you get the necessary level of confidence in your probabilistic model? Especially as these are not independent trials.
Replying to @Pinboard @beenwrekt
Scientifically speaking, we use error in past forecasts to say what is likely this year. Forecasting is also an art, though: it has a lot to do with validating what we already know about politics.
Thank you for the answer!
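One concrete way to check "error in past forecasts" with few, non-independent trials is a calibration table: bin the probabilities and compare predicted vs. observed win rates per bin. The binning scheme and data below are my own illustrative assumptions, not anything from the thread.

```python
# Calibration check sketch: in a well-calibrated model, races forecast
# at ~70% should be won ~70% of the time. Data below are hypothetical.
from collections import defaultdict

def calibration_table(forecasts, outcomes, n_bins=5):
    """Group forecasts into equal-width probability bins; return
    {bin: (mean predicted probability, observed win rate)}."""
    bins = defaultdict(list)
    for p, y in zip(forecasts, outcomes):
        b = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into top bin
        bins[b].append((p, y))
    return {
        b: (sum(p for p, _ in pairs) / len(pairs),
            sum(y for _, y in pairs) / len(pairs))
        for b, pairs in sorted(bins.items())
    }

# Hypothetical past-election forecasts vs. outcomes.
forecasts = [0.10, 0.15, 0.45, 0.55, 0.80, 0.90, 0.95]
outcomes = [0, 0, 0, 1, 1, 1, 1]
for b, (mean_p, win_rate) in calibration_table(forecasts, outcomes).items():
    print(f"bin {b}: predicted {mean_p:.2f}, observed {win_rate:.2f}")
```

With so few past elections the per-bin counts are tiny, which is exactly the concern raised above about confidence in the model.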