Assuming representativeness, we first collected Google Trends search interest & MediaCloud news volume data on #nCoV2019 (#COVID19) transmissibility. We then curated relevant studies from Google Scholar & four popular preprint servers. (Discovery specs are noted in our preprint.)
This alone was a really interesting finding, but we took it a step further by comparing the results of the relevant preprint studies against the relevant peer-reviewed studies. To do this, we first collected the basic reproduction number (R_0) estimates presented by each study.
As y'all may recall, R_0 is a measure of transmissibility *potential*. It can be defined as the *average* number of individuals a new case may infect in a fully susceptible population. I posted a few explainers about this measure earlier, including this more technical one below: https://twitter.com/maiamajumder/status/1221896232001572866
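To make the "average secondary infections" definition concrete, here's a toy sketch of why R_0 describes transmissibility *potential*: in a fully susceptible population, the expected case count grows geometrically by generation. The R_0 value used below is an arbitrary placeholder, not an estimate from any of the studies in the thread.

```python
# Toy branching-process view of R_0: each case infects R_0 others on
# average, so expected cases in generation n grow like seed * R_0 ** n.
R0 = 2.5  # placeholder value for illustration only

def expected_cases(generation, r0=R0, seed=1):
    """Expected number of new cases in a given generation,
    assuming a fully susceptible population."""
    return seed * r0 ** generation

# With R_0 = 2.5: generation 0 -> 1 case, generation 3 -> 15.625 expected cases
```

Real outbreaks deviate from this quickly (susceptibles deplete, interventions kick in), which is why R_0 is a *potential*, not a forecast.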
After collecting R_0 ranges from each study that estimated the transmissibility potential of
#nCoV2019 (#COVID19), we plotted them by date of publication (see attached). Preprint estimates versus peer-reviewed estimates are clearly demarcated in the diagram to allow comparison. pic.twitter.com/ExseHbnXTw
We also collected methods & data specs for each relevant study, which can be found in our preprint. Notably, the presentation of R_0 ranges differed across studies (e.g. 95%CI, 95%CrI, etc.); moreover, different modeling approaches & data sources were used across studies too.
Nevertheless, we found that the range of R_0 estimates presented by preprints overlapped with those presented by peer-reviewed studies later down the line. On average, preprint estimates skewed higher than peer-reviewed estimates (mean R_0 = 3.61 & 2.54, respectively). However...
These differences were driven primarily by two mean estimates that fell outside of the 95%CI we calculated for the preprint group's collective mean estimates [95%CI = 2.77, 4.45]. When these two estimates were discarded, average R_0 = 3.02 – similar to the peer-reviewed average.
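The screening step above can be sketched in a few lines: compute the collective mean and a normal-approximation 95% CI for that mean, flag estimates falling outside it, and recompute the average without them. The estimates below are made-up placeholders chosen to mimic the pattern (a few high outliers), NOT the actual values from the studies; the thread's reported numbers (3.61, [2.77, 4.45], 3.02) come from the real data in our preprint.

```python
import statistics as st

# Hypothetical mean R_0 estimates from preprints -- placeholder values
# for illustration, not the study's data.
estimates = [3.3, 3.4, 3.2, 3.5, 3.3, 5.6, 6.0, 3.4]

mean = st.mean(estimates)
se = st.stdev(estimates) / len(estimates) ** 0.5
ci = (mean - 1.96 * se, mean + 1.96 * se)  # normal-approx 95% CI for the mean

# Flag estimates outside the 95% CI of the collective mean,
# then recompute the average without them.
outliers = [x for x in estimates if not ci[0] <= x <= ci[1]]
trimmed_mean = st.mean([x for x in estimates if ci[0] <= x <= ci[1]])
```

With these placeholder values, the two high estimates are flagged and the trimmed mean drops toward the rest of the group, mirroring (not reproducing) the pattern described above.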
I'm sure I'll have plenty more to say about this work later (all of which I'll add to this thread as appropriate), but I'm gonna leave it here for now... And as a reminder, our analysis has not yet been peer-reviewed & should thus be treated as *strictly provisional*. Thank you!