Thought I'd take a leaf out of @ProfMattFox's book and use some 2x2 tables to illustrate this: https://twitter.com/GidMK/status/1278468849294168064
So, here's our table. We've got positive and negative results for our test compared with the truth. Here, I've plugged in the numbers for a prevalence of 5% (i.e. 5% of people have had COVID-19): pic.twitter.com/LTc08r14p7
Now, we know that sensitivity is 80.9% and specificity is 98.9%. Plugging those in, we get this table: pic.twitter.com/ahZXTRpx2D
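(If you want to rebuild the table yourself, here's a minimal Python sketch. The cohort size of 10,000 is my own assumption, just for round numbers; the sensitivity, specificity, and prevalence are the figures above.)

```python
prevalence = 0.05    # 5% have had COVID-19
sensitivity = 0.809  # P(test positive | had the disease)
specificity = 0.989  # P(test negative | never had it)
n = 10_000           # hypothetical cohort size

diseased = n * prevalence   # 500 people who had COVID-19
healthy = n - diseased      # 9,500 who never had it

true_pos = diseased * sensitivity   # ~405 correctly flagged
false_neg = diseased - true_pos     # ~95 missed
true_neg = healthy * specificity    # ~9,395 correctly cleared
false_pos = healthy - true_neg      # ~105 wrongly flagged

print(f"{'':8}{'Had it':>8}{'Never':>8}")
print(f"{'Test +':8}{true_pos:>8.0f}{false_pos:>8.0f}")
print(f"{'Test -':8}{false_neg:>8.0f}{true_neg:>8.0f}")
```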
From this, we can work out the:
Positive Predictive Value (PPV) = likelihood that someone with a positive test actually had COVID-19
Negative Predictive Value (NPV) = likelihood that someone with a negative test actually never had COVID-19
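(In code, both are just Bayes' rule applied to the table. A quick sketch, with function names of my own choosing:)

```python
def ppv(prev, sens, spec):
    # P(had disease | test positive) = true positives / all positives
    return (prev * sens) / (prev * sens + (1 - prev) * (1 - spec))

def npv(prev, sens, spec):
    # P(never had it | test negative) = true negatives / all negatives
    return ((1 - prev) * spec) / ((1 - prev) * spec + prev * (1 - sens))
```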
Here's what that looks like for a population prevalence of 5%: Of the people who test positive, only 79% actually have the disease. Of the negatives, 99% have never had it. pic.twitter.com/45MF4mDplm
But if we vary the prevalence, the PPV and NPV change a lot! At 1%: PPV = 43%, NPV = 100%. At 20%: PPV = 95%, NPV = 95%. pic.twitter.com/VjEm7YGyPS
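(Using the ppv()/npv() helpers sketched above, sweeping the prevalence reproduces all three rows:)

```python
for prev in (0.01, 0.05, 0.20):
    print(f"prevalence {prev:.0%}: "
          f"PPV = {ppv(prev, 0.809, 0.989):.0%}, "
          f"NPV = {npv(prev, 0.809, 0.989):.0%}")
# prevalence 1%: PPV = 43%, NPV = 100%
# prevalence 5%: PPV = 79%, NPV = 99%
# prevalence 20%: PPV = 95%, NPV = 95%
```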
What this means is that if you run the test in a population where very few people have had the disease, MOST of your positive tests will be false positives. This means that your prevalence estimate might be double the true one (or more)
If instead you run the test in a population where many people have had COVID-19, you'll underestimate the prevalence by at least 10%. Neither of these is a great scenario
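(You can see both biases directly by computing the raw fraction of positive tests, i.e. the "apparent prevalence", and comparing it with the true one. The formula is standard; the variable names are mine:)

```python
def apparent_prevalence(prev, sens=0.809, spec=0.989):
    # fraction testing positive = true positives + false positives
    return prev * sens + (1 - prev) * (1 - spec)

print(f"true 1%  -> measured {apparent_prevalence(0.01):.1%}")  # ~1.9%, nearly double
print(f"true 20% -> measured {apparent_prevalence(0.20):.1%}")  # ~17.1%, a clear underestimate
```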