2/n The semi-anonymous site claims to be a "real-time meta analysis" of all published studies on ivermectin, collating an impressive 60 pieces of research. It's flashy, well-designed, and at face value appears very legitimate.
3/n The benefits that this website shows for ivermectin are pretty amazing - 96%(!) lower mortality based on 10,797 patients' worth of data is quite astonishing. Sounds like we should all be using ivermectin! Except, well, these numbers are totally meaningless pic.twitter.com/B0rrocOEpS
4/n Digging into the site, you're immediately hit with this error. That's not how p-values work at all; any stats textbook will show you why this statement is entirely untrue pic.twitter.com/Hzb4K1NYaH
5/n Most of these dot points are wrong in some way (heterogeneity causing an underestimate is particularly hilarious), but this statement about conflicts of interest (CoIs) is wild considering that there are several potentially fraudulent studies in the IVM literature pic.twitter.com/pWS2gaywrF
6/n Going back to the heterogeneity point, this is the explanation from the authors about why heterogeneity is not a problem in their analysis. They appear to have entirely misunderstood what heterogeneity is (hint: this is more about BIAS than heterogeneity) pic.twitter.com/m2YLGSFfKy
7/n Also worth noting, I've previously shown that heterogeneity is high in meta-analyses of IVM for COVID-19 mortality, and that's almost entirely because there are 2 studies that show a massive benefit and a bunch of studies that show no benefit at all.
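For anyone who wants to see the mechanics behind that claim, here is a minimal sketch of how heterogeneity is usually quantified (Cochran's Q and the I² statistic). The numbers below are made up for illustration, not taken from the thread's analysis; note how a couple of outlier studies inflate both statistics.

```python
# Illustrative only: hypothetical log risk ratios, NOT the thread's data.
# Two studies show a huge apparent benefit; the rest show roughly nothing.
import numpy as np

log_rr = np.array([-2.1, -1.8, -0.05, 0.02, -0.10, 0.08, -0.03])
se     = np.array([ 0.5,  0.6,  0.20, 0.25,  0.30, 0.22,  0.18])

w = 1 / se**2                               # inverse-variance weights
pooled = np.sum(w * log_rr) / np.sum(w)     # fixed-effect pooled estimate
Q = np.sum(w * (log_rr - pooled)**2)        # Cochran's Q
df = len(log_rr) - 1
I2 = max(0.0, (Q - df) / Q) * 100           # I²: % of variation due to heterogeneity

print(f"Q = {Q:.1f} on {df} df, I² = {I2:.0f}%")
```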
8/n Anyway, back to the website - the authors then present this forest plot of effect estimates. Each dot is a point estimate, and the lines around the dots represent confidence intervals pic.twitter.com/bQhmFfGU1L
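If you've never built one of these, a forest plot is easy to sketch yourself. Below is a minimal matplotlib version; the study names, estimates, and intervals are entirely made up for illustration.

```python
# Minimal forest-plot sketch with made-up data: each marker is a study's
# point estimate, each horizontal line its 95% confidence interval.
import numpy as np
import matplotlib.pyplot as plt

studies = ["Study A", "Study B", "Study C", "Study D"]
est = np.array([0.15, 0.12, 0.18, 0.14])   # hypothetical risk ratios
lo  = np.array([0.02, 0.01, 0.03, 0.02])   # lower 95% CI bounds
hi  = np.array([1.10, 1.45, 0.95, 1.20])   # upper 95% CI bounds

y = np.arange(len(studies))[::-1]
plt.errorbar(est, y, xerr=[est - lo, hi - est], fmt="o", capsize=3)
plt.axvline(1.0, linestyle="--")           # RR = 1 means no effect
plt.xscale("log")                          # ratios are conventionally on a log axis
plt.yticks(y, studies)
plt.xlabel("Risk ratio (log scale)")
plt.title("Forest plot (illustrative data)")
plt.show()
```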
9/n Now, any data thug will immediately notice something wildly improbable about this forest plot (H/T @jamesheathers). Can you see the issue?
10/n While you have a think, here's a graph I made replicating these results. Not very pretty, but the final result is the same (with some minor rounding differences) pic.twitter.com/gDheBO5em6
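The thread doesn't show the code behind this replication, but the standard way to pool per-study risk ratios is a DerSimonian-Laird random-effects model. Here's a rough sketch of that pooling step, again with hypothetical numbers - an assumption about the general approach, not the author's actual script.

```python
# Sketch of DerSimonian-Laird random-effects pooling on hypothetical data.
import numpy as np

log_rr = np.array([-1.90, -0.11, 0.05, -0.22, -0.03])  # made-up log risk ratios
se     = np.array([ 0.55,  0.20, 0.25,  0.30,  0.18])

w = 1 / se**2
fixed = np.sum(w * log_rr) / np.sum(w)
Q = np.sum(w * (log_rr - fixed)**2)
df = len(log_rr) - 1
# DL estimate of the between-study variance tau^2
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1 / (se**2 + tau2)                 # random-effects weights
pooled = np.sum(w_star * log_rr) / np.sum(w_star)
se_pooled = np.sqrt(1 / np.sum(w_star))
ci = np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])
print(f"Pooled RR = {np.exp(pooled):.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```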
11/n Ok, so back to the question - why does this look problematic? It comes down to confidence intervals. When you've got a bunch of very wide confidence intervals from different studies, you expect the point estimates to move around inside them quite a bit
12/n Instead, look at those point estimates! Even though they've all got MASSIVE intervals, virtually all the PEs are within 0.05-0.1 either side of 0.15 pic.twitter.com/2f7HQYpoCW
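You can check that intuition with a quick simulation. Under the assumed setup below - one shared true effect, wide per-study standard errors - the point estimates should scatter widely; all the parameters here are illustrative assumptions.

```python
# Quick simulation of the expectation from 11/n: if studies share one true
# effect but have wide confidence intervals, their point estimates should
# scatter widely too.
import numpy as np

rng = np.random.default_rng(0)
true_effect = np.log(0.15)              # suppose the true log RR really were log(0.15)
se = rng.uniform(0.5, 1.2, size=20)     # wide per-study standard errors
estimates = rng.normal(true_effect, se) # each study's point estimate

print(f"SD of simulated point estimates: {estimates.std():.2f}")
# With SEs this wide, point estimates routinely land 0.5-1.0 log units
# apart; seeing nearly all of them huddled within ~0.1 of each other would
# be wildly improbable under honest sampling variation.
```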
13/n We can actually graph this. In Stata, I made what's called a funnel plot, which basically plots each point estimate against its standard error, with a line at the overall estimate from the meta-analysis model pic.twitter.com/tF3U3ToFGv
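The Stata code isn't shown in the thread; for anyone who wants to reproduce the idea, here's a rough Python equivalent on simulated data (all values are made up).

```python
# Rough Python analogue of the funnel plot described in 13/n: each study's
# estimate against its standard error, with a line at the pooled estimate.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
se = rng.uniform(0.1, 1.0, 30)
est = rng.normal(-0.4, se)              # hypothetical log risk ratios

pooled = np.average(est, weights=1 / se**2)
plt.scatter(est, se)
plt.axvline(pooled, color="k")
plt.gca().invert_yaxis()                # convention: most precise studies at the top
plt.xlabel("Log risk ratio")
plt.ylabel("Standard error")
plt.title("Funnel plot (illustrative)")
plt.show()
```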
14/n What you expect to see, if there are no issues, is an equal number of points on either side of the line at similar positions. Instead, ~virtually every point is below the estimate of the effect~ pic.twitter.com/oLoaGQRdh4
15/n I ran an Egger's regression to test the statistical significance of this, and the result is that there is a huge amount of what would usually be called 'publication' bias in the results. In other words, this is extremely weird pic.twitter.com/7bKqGFrTSU
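For the curious, Egger's test is just a regression of each study's standardized effect on its precision, with the intercept acting as the bias term. A minimal sketch on simulated, deliberately asymmetric data - not the thread's actual analysis:

```python
# Sketch of Egger's regression test for funnel-plot asymmetry, on fake data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
se = rng.uniform(0.1, 1.0, 30)
est = rng.normal(-0.4, se) - 0.8 * se   # inject asymmetry: smaller studies skew low

y = est / se                  # standardized effect
X = sm.add_constant(1 / se)   # precision, plus an intercept
fit = sm.OLS(y, X).fit()
print(fit.summary().tables[1])  # the 'const' row is Egger's bias term;
                                # an intercept far from zero flags asymmetry
```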
16/n What's happening here? Well, this is where we really get into the weeds. You see, the meta-analysis on this website is REALLY BIZARRE.
17/n How bizarre? Well, here are the measurements from the 'early' treatment studies - hospitalization is in the same model as % viral positivity, recovery time, symptoms, and death. All in the same model. WILD pic.twitter.com/LDzNRQ8JId
18/n Worse still, these appear to be picked almost entirely arbitrarily. The website claims to choose the "most serious" outcome, but then immediately says that in cases where no patients died or most people recovered, a different estimate was used pic.twitter.com/xspr31qd7m
19/n Even a fairly cursory skim shows what actually appears to be happening here: the authors choose whichever outcome shows the biggest benefit for ivermectin.
20/n For example, the analysis includes this paper. The primary outcome was viral load, which was identical between groups. Never fear, however, because ivmmeta won't take "null findings" as an answer! pic.twitter.com/xW0WmYGaV9
21/n If you dig through the supplementaries, what you find is that for "all reported symptoms" there was a large but statistically non-significant difference, represented in this graph of marginal predicted probabilities from a logistic model. It is mostly driven by anosmia/hyposmia pic.twitter.com/h64IdG4LJF
22/n If you eyeball "any symptoms", you get the results that ivmmeta included in their analysis. But that's TOTALLY ARBITRARY. Why not choose cough (where there's no difference) or fever (where IVM did WORSE)?
23/n Also, hilariously, this study used the last observation carried forward (LOCF) method to account for missing data in symptom reporting. You can actually see this in the supplementaries - it's possible the entire result comes from a few people not filling out their diaries properly pic.twitter.com/rA7PO4o4hd
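If you haven't run into LOCF before, here's the whole trick in a few lines of pandas (hypothetical diary data; the column names are my own invention):

```python
# What last observation carried forward (LOCF) does to a symptom diary:
# a missing day simply inherits the previous entry.
import numpy as np
import pandas as pd

diary = pd.DataFrame({
    "day":          [1, 2, 3, 4, 5],
    "has_symptoms": [1, 1, np.nan, np.nan, np.nan],  # patient stops filling in the diary
})
diary["locf"] = diary["has_symptoms"].ffill()
print(diary)
# Under LOCF this patient is counted as symptomatic through day 5, even
# though we have no idea what happened after day 2 - which is why a result
# driven by LOCF-imputed entries is fragile.
```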
24/n None of this should matter, because the trial found NO BENEFIT FOR IVERMECTIN, but this has been reported and included in ivmmeta dot com as a hugely beneficial result pic.twitter.com/mjGBDnpVW5
25/n This explains the bias I noted above - it's not publication bias, it's that the authors appear to have generally chosen whichever result makes ivermectin look better to include in their model. Not really scientific, that!
26/n But the fun doesn't stop there. The inclusion criterion for this website is any study published on ivermectin, which has led to what I can only call total junk science being lumped in with decent studies.
27/n Here's a study with impossible percentages in table 1 that used a comparator of 12 completely random patients as its control. They don't even say if these 12 people had COVID-19. Included in ivmmeta, no questions asked pic.twitter.com/3AlGdGcPnW
28/n ivmmeta includes all of the studies I've been tweeting about recently, including this one https://twitter.com/GidMK/status/1421368493975359490?s=20 And this one https://twitter.com/GidMK/status/1420582871031373824?s=20 And this one https://twitter.com/GidMK/status/1419557546872819719?s=20
29/n I've now read through about 3/4 of all the studies on the website, and I would say at least 1/2 of them are so low-quality that the figures they report are basically meaningless
30/n Moreover, sometimes the website just does stuff that is wildly strange. Here's a study with no placebo control. They appear to have calculated a relative risk of... whether the patients in this hospital got treated with ivermectin? WHY pic.twitter.com/qH9PRewSPS
31/n I could keep going - there's just so much there. Even just the basic concept of combining literally any number from any study and saying that it makes the model MORE ROBUST is so intrinsically flawed. So. Many. Mistakes