Given the uncertainty around every component of #COVID19 forecasts, how can *anyone* do any useful modeling at all? The more research we did, the more confused I became. But our conversation with @drake_lab helped: https://fivethirtyeight.com/features/why-its-so-freaking-hard-to-make-a-good-covid-19-model/
Basically, many of the uncertainties are correlated; not all combinations of parameters are equally plausible or even possible. The thing is, it's hard to estimate those correlations from the data. But it's not just unhelpful to assume they're uncorrelated -- it's also wrong.
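To see why the independence assumption matters, here's a minimal Monte Carlo sketch (Python, with a toy two-parameter model and made-up numbers, not any real epidemiological model): the same marginal uncertainties produce a noticeably fatter outcome tail when the parameters are positively correlated than when they're treated as independent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy model: outcome = transmission_rate * contacts_per_day.
# Both parameters and their spreads are hypothetical.
mean = [0.1, 10.0]
sd = [0.02, 2.0]

# Case 1: assume the two uncertainties are independent.
indep = np.column_stack([
    rng.normal(mean[0], sd[0], n),
    rng.normal(mean[1], sd[1], n),
])

# Case 2: give them a positive correlation of 0.6
# (e.g. both estimated from the same noisy case data).
cov = [[sd[0]**2,            0.6 * sd[0] * sd[1]],
       [0.6 * sd[0] * sd[1], sd[1]**2]]
corr = rng.multivariate_normal(mean, cov, n)

outcome_indep = indep[:, 0] * indep[:, 1]
outcome_corr = corr[:, 0] * corr[:, 1]

# The correlated version has a visibly worse upper tail:
print(np.quantile(outcome_indep, 0.95))
print(np.quantile(outcome_corr, 0.95))
```

Under independence, an extreme draw of one parameter is usually offset by an ordinary draw of the other; positive correlation makes the bad draws arrive together, so ignoring the correlation misstates the plausible range of outcomes.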
So if you think, like I feared, that the uncertainty means we can't say anything useful at all, that's not true. But this is where domain expertise and experience come in -- that's what helps modelers make informed decisions about how to narrow the range of outcomes.
Of course, even experts can disagree! Which explains why different research groups have come up with different forecasts. But that's what makes this topic a particularly iffy one for non-experts to weigh in on, as this excellent @W_R_Chase piece outlines: https://www.williamrchase.com/post/why-i-m-not-making-covid19-visualizations-and-why-you-probably-shouldn-t-either/
End of conversation
New conversation
I mean, welcome to academic modelling, in many cases. The one thing going for you is that in many instances, errors aren't correlated and are normally distributed, so the chances of several data uncertainties all coming with extreme worst-case true parameters are low.
I.e., the chances are good that your errors will cancel out to some degree, and the reasonable worst-case scenario is somewhat better than, e.g., the 95th quantile on all of your error distributions.
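A quick numerical sketch of that cancellation argument (Python, with five hypothetical independent standard-normal error terms): the probability that all of them land above their own 95th percentile at once is roughly 0.05^5, and the 95th percentile of the total error sits well below the sum of the individual 95th percentiles.

```python
import numpy as np

rng = np.random.default_rng(1)
n_draws, n_errors = 100_000, 5

# Five independent, normally distributed error terms
# (hypothetical: think errors on five different model inputs).
errors = rng.normal(0.0, 1.0, size=(n_draws, n_errors))

# Each term's own 95th percentile:
p95 = np.quantile(errors, 0.95, axis=0)

# Fraction of draws where ALL five are simultaneously above
# their 95th percentile -- theoretically 0.05**5, ~3e-7:
all_extreme = np.mean((errors > p95).all(axis=1))

# 95th percentile of the TOTAL error vs. naively stacking the
# individual 95th percentiles:
total_p95 = np.quantile(errors.sum(axis=1), 0.95)
stacked_p95 = p95.sum()

print(all_extreme)              # essentially zero
print(total_p95, stacked_p95)   # total_p95 is far smaller
```

For independent errors the standard deviation of the sum grows like sqrt(n), not n, which is exactly why the all-worst-cases-at-once scenario overstates the reasonable worst case.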
End of conversation