Feedback effects on behaviour are definitely important to consider. I suspect overly precise estimates of transmissibility give the impression that an epidemic is easier to monitor (and control measures easier to reactively tailor) than it is in reality.
Replying to @AdamJKucharski @nataliexdean
Yeah, and I'm really bothered by the emergent obsessive model-tracking/updating on many sites... Our models are not high-precision crystal balls, there are feedback effects, and placing epi models in the same narrative framework as horse-race political coverage is not good.
I wrote this early on about epi models: https://www.theatlantic.com/technology/archive/2020/04/coronavirus-models-arent-supposed-be-right/609271/ I really hate to see epi models turn into the daily-refresh thing we saw in the run-up to 2016 with election models, rather than "what is the best course of action" guidelines.
Replying to @zeynep @nataliexdean
I remember seeing that – really nice piece. Forecasting is such a small part of what models are useful for (especially in outbreaks where control measures change frequently), and as you say, getting lost in details/comparisons distracts from far more important questions.
Replying to @AdamJKucharski @nataliexdean
Thank you! It's actually really worrisome to me, because it is turning into climate-change-style polarization, with improper use and comparison of models becoming the pivot around which to polarize and to stall action. And the faux precision, model-tracking sites, etc. feed right into it!
Plus, exponential dynamics are hard to viscerally understand, so models can converge on one thing, but that doesn't mean the same thing as, say, convergence among weather projections. Yet that's how people will interpret model trackers: convergence = stability. Exponential processes can tip over quickly!
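A quick sketch of why convergence of exponential models isn't stability, under illustrative numbers of my own (not from the thread): two projections that are nearly indistinguishable today can diverge sharply within the same three-week window discussed below.

```python
def project(cases, daily_growth, days):
    """Project a daily case count forward under constant exponential growth."""
    return cases * (1 + daily_growth) ** days

start = 1000  # hypothetical current daily case count (an assumption)

# Two models that look "converged" today: 5% vs 7% daily growth.
low = project(start, 0.05, 21)
high = project(start, 0.07, 21)

# After just three weeks the gap is large, and a small behavioural shift
# can move the real trajectory from one curve to the other.
print(round(low), round(high))
```

The point is the sensitivity, not the particular numbers: a two-percentage-point difference in daily growth, invisible on today's chart, compounds into a roughly 50% gap in three weeks.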
In 2016, people got false comfort from models/odds reported with faux precision (a 71.4 to 91% chance of Clinton!) and presented visually (the eye overwhelms the fine print). Plus, people are used to polls, where that margin would mean a landslide, not odds, where it is anything but.
Now, I'm afraid something similar is happening with COVID model trackers, especially as they show convergence. People are used to interpreting model convergence as stability/confidence, but we have exponential dynamics, feedback effects, and a novel virus! This will backfire.
Replying to @zeynep @nataliexdean
Personally, I can't see much benefit to forecasting further than 2-3 weeks out, given how much the dynamics could change as behaviour/policies shift. Longer-term projections make a lot of strong assumptions, so I think we need to show other (wide-ranging) possible scenarios alongside them.
Replying to @AdamJKucharski @nataliexdean
Completely agree, and after a few weeks the range of possibilities is so huge! NYC went from 7 to 500+ known deaths per day in less than three weeks. We need Rt, case/death counts, positivity rates, etc., for sure. But basically it's to know how long to buckle down, not to forecast.
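A back-of-envelope check (my arithmetic, not a figure from the thread) on what that NYC number implies: going from 7 to 500 daily deaths in about 21 days corresponds to a doubling time of roughly three and a half days, assuming constant exponential growth.

```python
import math

def doubling_time(start, end, days):
    """Implied doubling time (in days) under constant exponential growth."""
    return days * math.log(2) / math.log(end / start)

# 7 -> 500 daily deaths in ~21 days (the NYC figure cited above)
print(round(doubling_time(7, 500, 21), 1))  # roughly 3.4 days
```

A doubling time that short is exactly why the 2-3 week forecast horizon matters: a week of delay in acting means roughly two more doublings.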
(Well, in any case I hope the message gets clearer, but without measures to bring things back under control as society opens up, it's a very frustrating conversation to begin with.)