9/n So, for example, a retrospective analysis of routinely collected data was rated down because it was led by a public health physician, but another almost identical piece of work was given a green square because a pulmonologist was in charge.
20/n But if we look at the paper, this doesn't make sense. Of the HCQ group, 22/75 had a persistent viral load; of the control, it was 19/75. If you work the odds ratio out, you get 1.22, i.e. 22% HIGHER FOR HCQ. pic.twitter.com/UcUqyc7t0x
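For anyone who wants to reproduce that check, here is a minimal sketch in Python, using only the crude counts quoted above (the helper function is mine, not anything from the paper):

```python
# Crude odds ratio from the counts quoted in the tweet:
# 22/75 with persistent viral load on HCQ vs 19/75 in the control arm.
def odds_ratio(events_a, total_a, events_b, total_b):
    """Odds of the event in group A relative to group B."""
    odds_a = events_a / (total_a - events_a)
    odds_b = events_b / (total_b - events_b)
    return odds_a / odds_b

print(round(odds_ratio(22, 75, 19, 75), 2))  # -> 1.22, i.e. higher odds on HCQ
```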
21/n In other words, not only is the figure in the meta-analysis wrong, it seems to be in entirely the opposite direction to the true result. That's very worrying, and hard to explain.
22/n (One thing that gives you an answer more similar to the number they've got is calculating the odds from the percentages given in the Kaplan-Meier curve rather than from the crude figures, but that has its own issues)
23/n There's also no reasoning given for including multiple different outcomes from the SAME PATIENTS in a meta-analysis. This is very concerning.
24/n For example, they've got some studies where they included numbers on the likelihood of a "clinical cure" AND "death" from the same patients. These are obviously related, so aggregating both in the same model is... problematic.
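To see why that matters, here is a toy sketch (the numbers are made up purely for illustration, not taken from the paper): if two outcomes from the same patients are pooled as if they were independent studies, the inverse-variance weights add up as though the sample had doubled, and the pooled estimate looks more precise than it really is.

```python
import math

def pooled_se(standard_errors):
    """Fixed-effect (inverse-variance) pooled standard error."""
    weights = [1 / se ** 2 for se in standard_errors]
    return math.sqrt(1 / sum(weights))

se_single_outcome = 0.30  # hypothetical SE of a log odds ratio from one study

print(round(pooled_se([se_single_outcome]), 2))                     # 0.3
print(round(pooled_se([se_single_outcome, se_single_outcome]), 2))  # 0.21
# The second line treats "clinical cure" and "death" from the same patients as
# two independent estimates, so the pooled SE shrinks as if the study had been
# run twice -- precision the data don't actually contain.
```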
25/n It's also worth noting that several of these retrospective clinical audits were from the same places at overlapping times, and so probably included some of the same patients anyway
26/n There are, somehow, even more issues to examine here. For one thing, the review protocol was poorly described and hard to follow. pic.twitter.com/AVrJ50Kx6s
27/n Remember: this is the pre-proof version of the study. In other words, the final, slightly unedited, publishable version. In that context, this methodology is FAR too opaque.
28/n The results show you what I'm talking about. There is NO WAY that searching with those search terms gets you only 23 studies. Even just plugging the search terms into PubMed gives you >100 studies. pic.twitter.com/73Qn7v1x9u
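If you want to run that kind of check yourself, a rough sketch using Biopython's Entrez utilities is below. The query string is a placeholder of my own; the review's actual search terms only appear in the paper's figure, so swap in their exact string before drawing any conclusions.

```python
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI asks for a contact address

# Placeholder query -- NOT the review's actual search string
query = "(hydroxychloroquine OR chloroquine) AND (COVID-19 OR SARS-CoV-2)"

handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
record = Entrez.read(handle)
handle.close()

print(record["Count"])  # total number of PubMed records matching the query
```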
29/n It took me less than 5 minutes to find a study that matches the inclusion criteria but was not included. That's... worrying.
30/n On top of that, in some cases the authors have included older versions of the included studies. That's less than ideal (newer versions CORRECT mistakes!)
31/n There is so much more I could look at here, but honestly you have to stop somewhere. This study is riddled with flaws and almost certainly does not present an accurate estimate of the effect of HCQ or CQ.
32/n If you want a decent summation of the evidence for/against HCQ, a good source is CEBM: "Current data do not support the use of hydroxychloroquine for prophylaxis or treatment of COVID-19" https://www.cebm.net/covid-19/hydroxychloroquine-for-covid-19-what-do-the-clinical-trials-tell-us/
33/n I forgot to mention, the paper was received, revised, and accepted within a month. While not unheard of, that's very quick for academic publishing! pic.twitter.com/UgR3WO2BPB
34/n In summation, the paper:
- inadequately rates risk of bias
- inappropriately combines estimates...
- ...that may have been miscalculated
It is hard to know what to make of this, except to say that the paper itself is not very useful in any way.