Thought I'd take a leaf out of @ProfMattFox's book and use some 2x2 tables to illustrate this: https://twitter.com/GidMK/status/1278468849294168064
From this, we can work out:
Positive Predictive Value (PPV) = the likelihood that someone who tests positive has actually had COVID-19
Negative Predictive Value (NPV) = the likelihood that someone who tests negative has actually not had COVID-19
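The arithmetic behind those tables is just Bayes' rule applied to the test's sensitivity and specificity. Here's a minimal sketch in Python, assuming illustrative values of 80% sensitivity and 98.9% specificity; these are not quoted in this excerpt, but roughly reproduce the figures below:

```python
def ppv(sensitivity, specificity, prevalence):
    # P(had the disease | positive test): true positives over all positives
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

def npv(sensitivity, specificity, prevalence):
    # P(never had the disease | negative test): true negatives over all negatives
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

# Assumed test characteristics (not from the thread), chosen to roughly
# match the numbers quoted below.
print(f"PPV at 5% prevalence: {ppv(0.80, 0.989, 0.05):.0%}")  # ~79%
print(f"NPV at 5% prevalence: {npv(0.80, 0.989, 0.05):.0%}")  # ~99%
```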
Here's what that looks like for a population prevalence of 5%. Of the people who test positive, only 79% actually have the disease. Of the negatives, 99% have never had it. pic.twitter.com/45MF4mDplm
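As a worked version of that 5% case, here is the 2x2 table built for 10,000 people tested, again using the assumed 80% sensitivity and 98.9% specificity:

```python
# 2x2 table for 10,000 people at 5% prevalence (assumed test characteristics)
population = 10_000
prevalence = 0.05
sensitivity, specificity = 0.80, 0.989

infected = population * prevalence      # 500 have had the disease
healthy = population - infected         # 9,500 have not

true_pos = infected * sensitivity       # 400 correctly test positive
false_neg = infected - true_pos         # 100 are missed
true_neg = healthy * specificity        # ~9,395 correctly test negative
false_pos = healthy - true_neg          # ~105 false positives

ppv = true_pos / (true_pos + false_pos)  # ~0.79
npv = true_neg / (true_neg + false_neg)  # ~0.99
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")
```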
But if we vary the prevalence, the PPV and NPV change a lot! At 1% prevalence, PPV = 43% and NPV = 100%. At 20%, PPV = 95% and NPV = 95%. pic.twitter.com/VjEm7YGyPS
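A quick sweep over prevalence makes the dependence obvious. This uses the same assumed test characteristics, so the exact percentages differ slightly from the thread's figures:

```python
sensitivity, specificity = 0.80, 0.989  # assumed, as above

for prevalence in (0.01, 0.05, 0.20):
    pos_rate = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    ppv = sensitivity * prevalence / pos_rate
    npv = specificity * (1 - prevalence) / (1 - pos_rate)
    print(f"prevalence {prevalence:>4.0%}: PPV = {ppv:.0%}, NPV = {npv:.0%}")

# prevalence   1%: PPV = 42%, NPV = 100%
# prevalence   5%: PPV = 79%, NPV = 99%
# prevalence  20%: PPV = 95%, NPV = 95%
```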
What this means is that if you run the test in a population where very few people have had the disease, MOST of your positive tests will be false positives. As a result, your prevalence estimate might be double the true one (or more).
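To see the overestimate, compare the raw positive rate with the true prevalence, again assuming 80% sensitivity and 98.9% specificity:

```python
sensitivity, specificity = 0.80, 0.989  # assumed, as above

true_prevalence = 0.01
# Fraction of all tests that come back positive, true and false combined
apparent = sensitivity * true_prevalence + (1 - specificity) * (1 - true_prevalence)
print(f"true prevalence {true_prevalence:.1%} -> raw positive rate {apparent:.1%}")
# true prevalence 1.0% -> raw positive rate 1.9%, almost double
```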
If instead you run the test in a population where many people have had COVID-19, you'll underestimate the prevalence by at least 10%. Neither of these is a great scenario.
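The same arithmetic run at a high prevalence shows the bias flipping the other way (same assumed test characteristics):

```python
sensitivity, specificity = 0.80, 0.989  # assumed, as above

true_prevalence = 0.20
apparent = sensitivity * true_prevalence + (1 - specificity) * (1 - true_prevalence)
shortfall = 1 - apparent / true_prevalence
print(f"true prevalence {true_prevalence:.0%} -> raw positive rate {apparent:.1%} "
      f"({shortfall:.0%} too low)")
# true prevalence 20% -> raw positive rate 16.9% (16% too low)
```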
I think this concept would be easier for people to understand if we had a better name than Positive Predictive Value. I don't have any good candidates, though. (I do like "detection rate" for sensitivity and "rejection rate" for specificity.)