Basically all your IFR estimates are higher than those in the leading papers for the studies you cite. Most of those papers were written before we knew how quickly antibodies drop below detectable levels (particularly in asymptomatics). Did you reach out to the representatives of these studies to compare your IFR calculations?
Replying to @sangfroyd
That's actually not true. In some cases ours are lower (e.g. Geneva); it depends on how they calculated IFR. Antibody undercounting is something of a concern, but, as we note, it is built into the sensitivity calculations for many tests and is therefore in many cases already accounted for.
Replying to @GidMK
Geneva looks practically the same - 0.64% https://www.thelancet.com/journals/laninf/article/PIIS1473-3099(20)30584-3/fulltext Others are 60-100% higher, e.g. Sweden. Doing a secondary sensitivity analysis for antibodies is very different from adjusting for them in the primary median. You did that for fatalities, but not for antibodies, which creates large bias.
Replying to @sangfroyd
Some are indeed higher - as I said, it depends on how IFR was calculated in those papers. We used a standard methodology, elucidated in the paper, for all calculations.
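[Editorial note: the "standard methodology" being debated — imputing infections from a serosurvey and dividing deaths by that estimate — can be sketched as below. All numbers are hypothetical illustrations chosen to reproduce the 0.64% Geneva figure mentioned in the thread, not data from either paper.]

```python
def ifr_from_seroprevalence(deaths, population, seroprevalence):
    """Seroprevalence-based IFR: confirmed deaths divided by the
    number of infections imputed from a serological survey."""
    estimated_infections = population * seroprevalence
    return deaths / estimated_infections

# Hypothetical example: 500,000 people, 2% seroprevalence, 64 deaths
ifr = ifr_from_seroprevalence(deaths=64, population=500_000, seroprevalence=0.02)
print(f"IFR = {ifr:.2%}")  # prints: IFR = 0.64%
```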
Replying to @GidMK @sangfroyd
As for "large bias", I have no idea what you mean. As I said, the antibody issue is in many cases built into the sensitivity assumptions of the ELISA used in most serology programs - it's not necessary to adjust for this again in the analysis, as we note.
Replying to @GidMK
It's not in all ELISAs, and it is badly under-represented if sampling occurs during the height of the curve. Antibody type and decay rates are not adjusted for at all. If you plotted your imputed IFRs against the median IFRs of those studies, the delta is rather large for the five that I checked.
Replying to @sangfroyd
Sure, but describing that as "bias" is a leading statement. I would argue that taking deaths from an ongoing epidemic is "biased" in and of itself. And yes, absolutely! That is one reason why we did not include samples taken during the height of the curve.
Replying to @GidMK @sangfroyd
And, again, the decay rate is usually part of the sensitivity calculations. We're going around in circles here.
Replying to @GidMK
That's not correct - the decay rate was unknown when the sensitivity and specificity figures were provided. They are relative to, and dependent on, the sampling timeframes against the curves. Did you examine this? Many studies actually adjust for these; yours is one of the few that didn't.
Replying to @sangfroyd
I have no idea what you mean by "were provided"; we cited a number of very recent studies on exactly this point. I also don't think you've actually understood the point about test sensitivity here.
From recent evidence, we know that ELISA sensitivity is ~80%, e.g. https://www.bmj.com/content/370/bmj.m2516 What you're suggesting, as far as I can tell, is that there is an ADDITIONAL element, aside from the 20% false negatives we already know about, that would be missed by the ELISA.
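[Editorial note: the standard way a known ~80% sensitivity gets "built into" a seroprevalence estimate is the Rogan-Gladen correction, which rescales the raw test-positive rate. A minimal sketch with hypothetical figures; only the ~80% sensitivity comes from the tweet above, the rest is illustrative.]

```python
def rogan_gladen(raw_prevalence, sensitivity, specificity):
    """Adjust apparent (test-positive) prevalence for imperfect test
    performance: p_true = (p_raw + spec - 1) / (sens + spec - 1)."""
    return (raw_prevalence + specificity - 1) / (sensitivity + specificity - 1)

# With 80% sensitivity, a raw 2% positive rate implies a higher true
# prevalence - and therefore a lower IFR than the raw figure would give.
true_prev = rogan_gladen(raw_prevalence=0.02, sensitivity=0.80, specificity=0.998)
print(f"{true_prev:.2%}")  # prints: 2.26%
```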
Replying to @GidMK
Correct. It obviously depends on the exact test used, but generally it's the percentage of infections that have decayed below detectability by the measurement date, which changes the weighting of decay, similar to how true prevalence affects PPV. This is relative to each community and its point on the curve.
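[Editorial note: the seroreversion argument above — that a test's effective sensitivity against *all* past infections falls as more antibodies decay, depending on where a community sits on the epidemic curve — can be sketched as below. The decay fractions are hypothetical, not taken from either party's sources.]

```python
def effective_sensitivity(baseline_sens, frac_seroreverted):
    """If some fraction of past infections have decayed below the
    detection threshold by sampling time, the effective sensitivity
    against all past infections is lower than the validated figure,
    which was measured against recently confirmed infections."""
    return baseline_sens * (1 - frac_seroreverted)

# Sampling long after a community's peak (more decay) vs. shortly after:
late = effective_sensitivity(0.80, frac_seroreverted=0.25)   # ~0.60
early = effective_sensitivity(0.80, frac_seroreverted=0.05)  # ~0.76
print(round(late, 2), round(early, 2))
```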
Replying to @sangfroyd
Sure, which again is one of the reasons we only looked at places with stable epidemics, rather than including ongoing outbreaks. Virtually all seroprevalence studies had some adjustment for test sensitivity and specificity.