That's actually not true. In some cases ours are lower (e.g. Geneva), depending on how they calculated IFR. Antibody (AB) undercount is something of a concern, but as we note it is built into the sensitivity calculations for many tests and therefore in many cases already accounted for
We're really going around in circles here. To put it another way - you calculate test sensitivity using PCR +ve cases as your benchmark, then test serology several months later. Thus, the reduction in ABs is already accounted for in test sensitivity calculations
-
-
For this not to be the case, you have to assume that there is a fixed cutoff beyond which antibodies decay, which is missed by all sensitivity calculations. Since this is false, it's likely that most of your point is captured in test sensitivity already
-
They decay at different rates over different time periods. By 4 months the decay is greater than it was at 3 months; by 3 months, greater than at 1 month. So using the same sensitivity & specificity regardless of time along the decay curve misses a meaningful number of infections.
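To make the disagreement concrete, here is a minimal sketch with made-up numbers, not figures from any of the studies being discussed. It applies the standard Rogan-Gladen correction for test sensitivity and specificity, and compares a sensitivity benchmarked shortly after PCR confirmation with an assumed lower sensitivity for the same assay after several months of antibody waning.

```python
def rogan_gladen(raw_prevalence: float, sensitivity: float, specificity: float) -> float:
    """Adjust an observed seroprevalence for imperfect test sensitivity/specificity."""
    return (raw_prevalence + specificity - 1) / (sensitivity + specificity - 1)

raw_prevalence = 0.08   # observed fraction of positive serology results (illustrative)
specificity = 0.995     # assumed assay specificity

sens_early = 0.95  # sensitivity benchmarked ~1 month after PCR confirmation (assumed)
sens_late = 0.85   # same assay several months later, after antibody waning (assumed)

print(f"Adjusted prevalence, early benchmark: {rogan_gladen(raw_prevalence, sens_early, specificity):.2%}")
print(f"Adjusted prevalence, late benchmark:  {rogan_gladen(raw_prevalence, sens_late, specificity):.2%}")
# Applying the early-benchmark sensitivity to a sample in which many infections are
# months old understates total infections, and therefore overstates the IFR.
```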
End of conversation
New conversation -
-
-
To short-circuit it: at a macro level, you're generally showing much higher IFRs than those who conducted the studies. Either you know something everyone else is missing, or you're missing something. Worth asking for feedback from authors who have lower IFRs for the same studies
-
Oh, we're definitely emailing some of the authors. Out of interest, which 5 studies did you look at that calculated their own IFRs?
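For context on why headline IFRs for the same study can diverge, a minimal sketch of the arithmetic, again with made-up numbers rather than figures from any study above: the same death count divided by a modestly different infection estimate moves the IFR noticeably.

```python
population = 500_000   # assumed catchment population
deaths = 300           # assumed COVID-attributed deaths in that population

# Adjusted seroprevalence under the two sensitivity benchmarks from the sketch above
for label, adjusted_prevalence in [("early-benchmark sensitivity", 0.079),
                                   ("late-benchmark sensitivity", 0.089)]:
    infections = adjusted_prevalence * population
    print(f"{label}: ~{infections:,.0f} infections, IFR ≈ {deaths / infections:.2%}")
```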
End of conversation