In the first week of August the Australian state of NSW ran 150,000 tests and recorded 110 positives. In an absolute worst-case scenario, where every positive was false, that implies a sensitivity of 0% and a specificity of ~99.93%.
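A quick Python sketch of that worst-case arithmetic, using the 150,000 and 110 figures quoted above:

```python
# Worst-case specificity implied by the NSW figures: if every one of the
# 110 positives were false, specificity could be no lower than the share
# of tests that still came back negative.
tests = 150_000
positives = 110

worst_case_specificity = (tests - positives) / tests
print(f"Worst-case specificity: {worst_case_specificity:.2%}")  # ~99.93%
```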
False positives are not just a result of the testing process; they are also a function of the population being tested. And you can't just take a minimum value: observed rates will tend towards a mean.
Replying to @ClareCraigPath @GidMK
For example, if you imagined a population all infected with SARS1, you would expect it to show a higher false positive rate on a SARS-CoV-2 test than a healthy population would.
Replying to @ClareCraigPath @m0102940
All of that being true, the observable - and demonstrable - false positive rate is roughly 1 per 2,000 true negatives, for a specificity of 99.95% or higher. Thus, you would only expect a PPV of 50% if the tested population had a prevalence of <0.1%.
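A minimal sketch of that PPV arithmetic; the 0.8 sensitivity is an assumed illustrative value, not a figure from the thread:

```python
# PPV = true positives / all positives, at the stated specificity of
# 99.95% (a false positive rate of 1 in 2,000 true negatives).
def ppv(prevalence, sensitivity=0.8, specificity=0.9995):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in (0.0005, 0.001, 0.005, 0.01):
    print(f"prevalence {prev:.2%}: PPV = {ppv(prev):.1%}")
# PPV only falls to ~50% once prevalence drops below roughly 0.1%,
# consistent with the claim above.
```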
Replying to @GidMK @ClareCraigPath
(In this case, taking a minimum value is better for your argument. The true test specificity is likely to be ~99.99% or higher, which would make your original point wrong unless the prevalence was virtually 0%.)
With a specificity of 99.95% you would expect all of the summer ONS 'cases' to be false positives. Either that or you assume 100% specificity with an R-value rock steady at 1.0000. Here is evidence so far that summer COVID was minimal: https://logicinthetimeofcovid.com/2020/09/07/waiting-for-zero/
Replying to @ClareCraigPath @m0102940
That is incorrect. With a % positive of 0.5% (the average over summer) and a specificity of 99.95%, you would expect roughly one in 10 detected cases to be a false positive, so around 10%. As I've noted, the specificity is probably higher, but that's the lower bound.
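A sketch of the "one in 10" arithmetic; the 0.5% positivity and the 99.95% floor are the figures stated above, and the 99.99% line illustrates the "probably higher" scenario:

```python
# Share of reported positives that are false, given overall test
# positivity and an assumed specificity. The true-negative share of
# those tested is approximated as (1 - positivity).
def false_positive_share(positivity, specificity):
    false_positives = (1 - specificity) * (1 - positivity)
    return false_positives / positivity

print(f"{false_positive_share(0.005, 0.9995):.1%}")  # ~10% at the 99.95% floor
print(f"{false_positive_share(0.005, 0.9999):.1%}")  # ~2% at 99.99%
```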
99.95% for ONS, 99.6% for pillar 1, and 99.2% for pillar 2.
Replying to @ClareCraigPath @m0102940
Those are the ABSOLUTE MINIMUM values, yes. If every single positive in those testing datasets was false (extraordinarily unlikely), those are the lowest possible values for specificity. A more realistic range would use them as the lowest estimates and 100% as the highest.
100% is ludicrous. Did you look at the evidence I sent you testing the hypothesis that they were indeed all false positives?
100% has been found in some situations. Obviously, it's not possible to have exactly 100%, but the difference between 99.99999% and 100% is not really observable.
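A sketch of why those two figures are indistinguishable in practice, reusing the 150,000-test batch size from the start of the thread:

```python
# Expected false positives in a 150,000-test batch at various specificities.
# At 99.99999% the expectation is ~0.015 false positives, which no real
# dataset of this size could tell apart from a perfect test.
tests = 150_000
for spec in (0.9995, 0.9999, 0.9999999, 1.0):
    expected_fp = (1 - spec) * tests
    print(f"specificity {spec:.5%}: ~{expected_fp:g} expected false positives")
```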