Stats people, help me out. For all the people like @gelliottmorris making probabilistic forecasts of winning back the House, updated to three significant figures... how are you supposed to validate those models when you only get to observe one outcome?
-
That sounds a bit like (frequentist) calibration. Alternatively, if a Bayesian model has latent parameters, you can put the model back into a similar situation and see whether it produces different answers the second time around.
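A minimal sketch of what such a calibration check could look like, assuming you pool many forecast/outcome pairs (e.g., individual district races) rather than the single top-line result; the data here are synthetic stand-ins:

    import numpy as np

    # Synthetic stand-in data: many past forecasts (predicted win probabilities)
    # and the corresponding observed outcomes (1 = win, 0 = loss).
    rng = np.random.default_rng(0)
    true_probs = rng.uniform(0, 1, size=500)
    forecasts = np.clip(true_probs + rng.normal(0, 0.05, size=500), 0, 1)
    outcomes = rng.binomial(1, true_probs)

    # Basic (frequentist) calibration check: bin the forecasts and compare the
    # mean forecast in each bin to the empirical win rate in that bin.
    bins = np.linspace(0, 1, 11)
    bin_ids = np.clip(np.digitize(forecasts, bins) - 1, 0, 9)
    for b in range(10):
        mask = bin_ids == b
        if mask.sum() == 0:
            continue
        print(f"bin {bins[b]:.1f}-{bins[b+1]:.1f}: "
              f"mean forecast {forecasts[mask].mean():.2f}, "
              f"observed rate {outcomes[mask].mean():.2f}, n={mask.sum()}")

If the model is well calibrated, the mean forecast and the observed rate should roughly agree in every bin.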
-
From a frequentist perspective, a single observed success implies a point estimate of 100% probability. If we take a Bayesian view with a uniform prior, the posterior skews towards probabilities above 50% but not all the way to 100% (a Bayes update via Bayes' theorem).
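A minimal sketch of that comparison, assuming a single Bernoulli trial with one observed success and a uniform Beta(1, 1) prior (using scipy for the posterior):

    from scipy import stats

    # One Bernoulli trial in which the forecasted event happened.
    successes, failures = 1, 0

    # Frequentist point estimate: successes / trials = 1/1 = 100%.
    mle = successes / (successes + failures)

    # Bayesian update with a uniform prior Beta(1, 1):
    # the posterior is Beta(1 + successes, 1 + failures) = Beta(2, 1).
    posterior = stats.beta(1 + successes, 1 + failures)

    print("frequentist estimate:", mle)                 # 1.0
    print("posterior mean:", posterior.mean())          # 2/3, above 0.5 but below 1
    print("95% credible interval:", posterior.interval(0.95))

The posterior mean of 2/3 illustrates why the Bayesian answer skews above 50% but never reaches 100% after a single observed success.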