COVID-19 Update: Why it makes sense to test the Most at Risk (and why antibody tests won’t be useful for a while)

I saw a paper from UCSF and UC Berkeley while I was looking for the specificity values of COVID-19 antibody tests, and it suggested a topic that might be interesting and useful to folks. It certainly has lots of applications outside COVID-19 tests or antibody tests. Specificity is the parameter that tells us how well a test avoids false positives: it is the fraction of truly negative cases that correctly test negative. A high specificity value tells us that a test has a low false positive rate.

Why do we care about False Positives?

This is a fun subject to me and is always part of my statistics classes. In radar processing we refer to it as the false alarm rate. As most signal processing engineers know, a radar can appear to be a world-beater, but if it has a high false alarm rate, there's a good chance that most of its detections are not airplanes flying overhead or any other desired target. Medical tests present a similar challenge: many "detections" from the test turn out not to be the thing we hope to detect. The example below might be helpful. In this case, we're using a nominal-but-high estimate of 2 total COVID-19 cases per 100 people. This is about 10x higher than what the data shows for the US as a whole, so there's a significant fudge factor here. My false positive rate comes from the UCSF/UCB paper, which stated that even though many of the COVID-19 antibody tests they were evaluating had a 5% false positive rate, "Several of our tests had specificities over 98 percent, which is critical for reopening society." So I picked 98% specificity for my example to demonstrate why even this nice-sounding number is still unacceptable.

Decision Tree showing True Positives/Negatives along with False Positives and Negatives

Looking above, this is a simple way of evaluating a test. Since we're applying it to the country as a whole, we use our inflated estimate that 2% of Americans are likely to have contracted COVID-19 (this comes from our data plus a 10x safety factor to prevent underestimation). You can see that MOST of the 175M adult Americans (the 98% who haven't had the disease), when tested, fall into the "True Negative" category: they don't have the disease and test negative for antibodies. This is good and what we hope for. However, due to our 2% false positive rate, a very large number of Americans who never had COVID-19 still test positive for antibodies. In this case, close to 3.5 million of them. When we go to the top of the decision tree and look at the 2% who the data tells us have had the disease, we find that our test accurately catches 99.9% of them (we're also assuming a really small false negative rate here... this might be unrealistic, but let's assume it's a really good test). That translates into... 3.5 million Americans who have had COVID-19 and who test positive for antibodies! The false negatives are unimportant because, with this assumed test, they come out to only about 3.5 thousand people.
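The decision tree above can be sketched in a few lines of Python. This is a minimal sketch of the arithmetic, assuming the numbers from the text: 175M adults, 2% prevalence, 98% specificity (2% false positive rate), and an assumed 99.9% sensitivity.

```python
# Decision-tree arithmetic for the antibody-test example.
# Assumptions (from the text): 175M adults, 2% prevalence,
# 98% specificity (2% false positive rate), 99.9% sensitivity.
population = 175_000_000
prevalence = 0.02
sensitivity = 0.999   # assumed: the test catches 99.9% of true cases
specificity = 0.98    # 2% false positive rate

sick = population * prevalence          # 3.5M have had the disease
healthy = population - sick             # 171.5M have not

true_positives = sick * sensitivity             # ~3.5M correctly flagged
false_negatives = sick - true_positives         # ~3.5K missed cases
false_positives = healthy * (1 - specificity)   # ~3.4M wrongly flagged
true_negatives = healthy - false_positives

print(f"True positives:  {true_positives:,.0f}")
print(f"False positives: {false_positives:,.0f}")
print(f"False negatives: {false_negatives:,.0f}")
```

Note that the true positives and false positives both come out near 3.5 million, which is the crux of the problem discussed next.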

The critical takeaway here is that in the scenario above, with realistic assumptions, we have about 7M Americans testing positive for antibodies, but we know that only half of them really do! The result isn't even meaningful: if you test positive with this particular test, it's a coin flip whether you actually have the antibodies. This shows there is really no good reason to run this test, in its current state, on the population at large, because the results are not informative.

Where Should We Test Then?

The CDC did do one thing wisely with COVID-19 testing early on when they reserved the tests for the most affected. I doubt this was accidental; rather, it had to do with the effect I'm showing here. If you can determine that a community has a higher probability of having a disease, this reduces the false positive problem. When we know that specific symptoms (loss of smell, fever, difficulty breathing, etc.) raise the likelihood that a person has COVID-19, say from our nominal 2% probability up to 25%, a test with a 2% false alarm rate gives us very different results. Instead of our true positives being equal to our false positives, the true positives are now around 17x larger than the false positives. This means that if you already exhibit symptoms, the test is statistically more valuable to you, because it is more effective at predicting whether you really have the disease.
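This improvement can be quantified as the positive predictive value: the probability that a positive result is a true positive. Here is a small sketch, assuming the text's 98% specificity and an idealized 100% sensitivity (the function name is my own, not from the paper).

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a positive test result is a true positive (Bayes' rule)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# General population: ~2% prevalence -> a positive result is a coin flip.
print(positive_predictive_value(0.02, 1.0, 0.98))   # ~0.51

# Symptomatic group: ~25% prevalence -> a positive result is ~94% reliable,
# and true positives outnumber false positives by 0.25 / 0.015, about 17x.
print(positive_predictive_value(0.25, 1.0, 0.98))   # ~0.94
```

The same test, unchanged, becomes far more informative simply because it is applied to a group with a higher prior probability of disease.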

Summary

Maybe this is boring, but it applies to cancer tests, tests on an assembly line, and anywhere else a test is less than 100% accurate (which is essentially all tests). I'll recap by scaling the numbers above to a "universe" of 1000 people to make a cleaner comparison.

1) In this universe of 1000 people where statistically 2% have been exposed to a disease (we’ll call this the healthy universe), a test with a 2% false positive rate will give the following results:

  • 20 people: Have disease/antibodies and test positive for disease/antibodies.
  • 0 people: Have disease but test negative
  • 20 people: Don’t have disease/antibodies, BUT test positive for disease/antibodies
  • 960 people: Don’t have disease and test negative

2) In the adjacent universe of 1000 people (the obviously symptomatic universe) where statistically 25% with those symptoms are sick, the 2% false positive test will give the following results:

  • 250 people: Have disease/test positive
  • 0 people: Have disease/test negative
  • 15 people: Don’t have disease/test positive
  • 735 people: Don’t have disease/test negative
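The two universes above can be reproduced with a small helper (the function and its rounding to whole people are my own illustration, assuming perfect sensitivity as in the lists):

```python
def universe(n, prevalence, false_positive_rate, sensitivity=1.0):
    """Split a population of n people into the four test outcomes,
    rounded to whole people as in the lists above (assumed helper)."""
    sick = round(n * prevalence)
    healthy = n - sick
    caught = round(sick * sensitivity)
    false_pos = round(healthy * false_positive_rate)
    return {
        "have disease, test positive": caught,
        "have disease, test negative": sick - caught,
        "don't have disease, test positive": false_pos,
        "don't have disease, test negative": healthy - false_pos,
    }

print(universe(1000, 0.02, 0.02))  # healthy universe: 20 / 0 / 20 / 960
print(universe(1000, 0.25, 0.02))  # symptomatic universe: 250 / 0 / 15 / 735
```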

Hope this is a helpful explanation of a couple of things: 1) why the antibody tests aren’t really trustworthy yet, and 2) why we give tests to the people most likely to have the disease (because then the test is effective at accurately predicting the sickness).

4 Replies

  1. This sounds like the positive predictive value and the negative predictive value of the test, which as I recall take into account the “pretest probability of disease” — when you have a grip on that, you can better know what to do with your results. As you say, with symptoms for a nasal RT-PCR test, or with a good story that you may actually have had the disease, we better know how to interpret lab results.

    1. Correct, Essie! 🙂 This is something called Bayes’ Rule, a mechanism that Rev. Thomas Bayes came up with for updating our prior beliefs about a subject in light of new evidence. It sounds simple, but it underpins much of what we do (and most modern statistics too). For example, my prior belief about my patient’s probability of having cancer right now is something small, maybe 0.05%. But if I hear that he’s over 60, I might update my belief to 5%. If I then find out that he’s a smoker, my prior may go to 20%. THEN, if he tests positive for cancer on a test with a false positive rate of 2%, my belief goes way up.
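The last step of that chain can be written out as a one-line Bayes' Rule update. This sketch uses the comment's 20% prior and 2% false positive rate; the 90% sensitivity is an assumed number, since the comment doesn't give one.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior probability of the hypothesis after seeing the evidence."""
    numerator = prior * p_evidence_given_h
    return numerator / (numerator + (1 - prior) * p_evidence_given_not_h)

# Assumed numbers: 20% prior from the patient's history, a test that
# catches 90% of cancers (assumed sensitivity), 2% false positive rate.
posterior = bayes_update(prior=0.20,
                         p_evidence_given_h=0.90,
                         p_evidence_given_not_h=0.02)
print(f"{posterior:.2f}")  # belief jumps from 0.20 to ~0.92
```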

  2. The false positives come from cross-reacting with antibodies to other coronaviruses, e.g. SARS or MERS, which hardly any of us here have. At least that’s what my clinic’s antibody test has as its “disclaimer.” Our lab’s sensitivity is 98.1% and specificity is 98.6%. That’s from Vibrant America, and we are also now using LabCorp.

    1. Thanks so much! That’s really interesting. I hadn’t considered that the false positive scores might be due to other coronaviruses. I’m guessing your test is pretty widespread now considering you’re going through LabCorp?
