That's not correct - decay rate was unknown when the sens & spec were provided. They are relative to & dependent on the sampling timeframes against the curves. Did you examine this? Many studies actually adjust for these; yours is one of the few that didn't
Replying to @sangfroyd
I have no idea what you mean by "were provided"; we cited a number of very recent studies on exactly this point. I also don't think you've actually understood the point about test sensitivity here
2 replies 0 retweets 0 likes -
Replying to @GidMK @sangfroyd
From recent evidence, we know that ELISA sensitivity is ~80%, e.g. https://www.bmj.com/content/370/bmj.m2516 … What you're suggesting, as far as I can tell, is that there is an ADDITIONAL element, aside from the 20% false negatives we already know about, that would be missed by the ELISA
1 reply 0 retweets 1 like -
Replying to @GidMK
Correct. Obviously depends on the exact test used. But generally it's the % of infections that have decayed by the measurement date, and it therefore changes the weighting of decay, similar to how true prev affects PPV. This is relative to each community & their point on the curve
1 reply 0 retweets 0 likes -
Replying to @sangfroyd
Sure, which again is one of the reasons that we only looked at places with stable epidemics, rather than including ongoing outbreaks. Virtually all seroprevalence studies had some adjustment for test sensitivity and specificity
1 reply 0 retweets 0 likes -
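The adjustment referred to above is typically a Rogan-Gladen-style correction of apparent prevalence for test sensitivity and specificity. A minimal sketch, with illustrative numbers (the thread doesn't specify which correction or which values any given study used):

```python
def rogan_gladen(apparent_prev, sensitivity, specificity):
    # Rogan-Gladen estimator: recover true prevalence from the raw
    # test-positive rate, given the assay's sensitivity and specificity.
    # true = (apparent + spec - 1) / (sens + spec - 1)
    return (apparent_prev + specificity - 1) / (sensitivity + specificity - 1)

# Illustrative: 10% raw seropositive rate, ELISA sens ~80% (per the BMJ
# figure cited above), spec 99% (hypothetical).
corrected = rogan_gladen(0.10, 0.80, 0.99)
print(round(corrected, 3))  # corrected prevalence is higher than the raw 10%
```

Note the correction only holds if the sensitivity figure actually reflects the population being surveyed - which is the point being contested in this thread.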
Replying to @GidMK @sangfroyd
But I can't see the argument that there is a large element of bias in the small proportion of people who might become seronegative that are not already included in test sensitivity calculations
1 reply 0 retweets 0 likes -
Replying to @GidMK
Small if done within the 1-3 months that the sensitivity calcs were derived from. But it becomes quite large after that. The calculus changes, but you're treating it as fixed. If you accounted for this somehow, apologies. Wasn't obvious. https://www.nature.com/articles/s41591-020-0965-6 …
1 reply 0 retweets 0 likes -
Replying to @sangfroyd
Ugh. What I've been trying to explain is that this is accounted for already in test sensitivity. It's part of the reason that you get false negatives (the other main one being delay to seroconversion)!
1 reply 0 retweets 0 likes -
Replying to @GidMK
Are you developing your own test sensitivity calc for each study based on the dynamics of their epi timeline vs. dates samples were taken? Or are you taking the published sensitivities from the test manufacturer?
1 reply 0 retweets 0 likes -
Replying to @sangfroyd @GidMK
What I have been trying to explain is that if the test manu developed specs using PCR +ve vs. AB 1-3 months later, with a median follow-up of 2 months (wherein 90% of ABs still exist), and a serology study is then conducted in month 6 of an epi (where 40% of ABs remain), it's under
1 reply 0 retweets 0 likes
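The underestimate being argued for above can be sketched numerically, using only the hypothetical figures from the tweet (90% still seropositive when the manufacturer validated the assay, 40% remaining by month 6 of the epidemic) - this is an illustration of the claim, not a calculation from either party's paper:

```python
def effective_sensitivity(label_sens, seropos_at_validation, seropos_at_survey):
    # If the published sensitivity was measured while 90% of past infections
    # were still seropositive, but only 40% remain seropositive at survey
    # time, the assay detects proportionally fewer past infections than the
    # label figure implies (a simple proportional-decay assumption).
    return label_sens * seropos_at_survey / seropos_at_validation

label_sens = 0.80  # published ELISA sensitivity, per the BMJ figure above
eff = effective_sensitivity(label_sens, 0.90, 0.40)
print(round(eff, 3))  # far below 0.80 under these assumed decay figures
```

Applying a fixed 80% sensitivity correction when the effective value is this much lower would understate seroprevalence - which is the crux of the disagreement; the counter-argument in the thread is that titres mostly stabilise, so the gap between validation-time and survey-time seropositivity stays small.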
I mean, you can assume anything, but the evidence (incl. the study you linked) mostly shows that antibody titres decrease and then stabilise in most cases, and that the majority of cases remain seropositive