COVID-19 Update: Negative Case Growth in the US

State COVID-19 data for the states with negative case growth

The Instantaneous Rate of Change metric gives us the ability to understand, in a single number, the state of the outbreak in a region with respect to case growth and deaths. These are shown above with IROC in the header. This table is sorted by the states where the daily growth rate in cases has decreased the most over the last 3 days; I’d describe these as “decelerating” rates. Fortunately, New York is finally at the top of a “good” list associated with this outbreak. Hopefully this signifies a long plateau in the rates. Note that in some regions (Singapore is a good example) an early plateau, celebrated widely, was followed by accelerating growth in cases. Perhaps NY will be different due to the extreme penetration of their society by COVID-19. Below are the individual NY time series charts for your perusal.

New York Confirmed Case data and curve fit – 5/7/2020
New York Death data and curve fit – 5/7/2020
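As an aside, for readers curious how a deceleration metric like this can be computed: the exact formula behind the IROC column isn’t published here, so below is a minimal sketch that assumes IROC is the change in the daily growth rate over a 3-day window. The function name and the case series are illustrative, not real data.

```python
import numpy as np

def iroc(cases, window=3):
    # cases: cumulative confirmed cases by day.
    # Daily growth rate: g[t] = (cases[t] - cases[t-1]) / cases[t-1].
    # IROC (as assumed here): change in g over the last `window` days;
    # negative values indicate a decelerating outbreak.
    cases = np.asarray(cases, dtype=float)
    growth = np.diff(cases) / cases[:-1]
    return growth[-1] - growth[-1 - window]

# Hypothetical cumulative-case series for a decelerating state
series = [100, 140, 190, 250, 310, 370, 420, 460, 490]
print(f"IROC = {iroc(series):+.3f}")  # negative => decelerating
```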

COVID-19 Update: Ro Tracker for Selected Regions

Selected Regions sorted by Ro – 4/15/2020
Selected Regions sorted by Ro – 4/30/2020
Selected Regions sorted by Ro – 5/6/2020

I’m still interested in the Rates of Reproduction and the tracking of this number. I picked 7 countries that were interesting to me for various reasons and began tracking their numbers (including Ro) over time. Above you can see some of the results. Notable observations: Germany’s Ro on 4/15 was the highest of the group of 7 at 1.3. This means, in theory, that one newly infected person would infect 1.3 more people. On 4/15, we also see that a number of countries that are now struggling had Ro numbers less than 0.5 (Mexico, Brazil, Russia, Singapore). Singapore, especially, is interesting because 4/15 fell between their first (small) wave of infections and their current (larger) wave. Now you can see Singapore’s number is up to 1.46, and Brazil and Mexico are much higher. The US peaked somewhere around 4/30 and is trending slowly downward. Germany, however, has moved down quite a bit, signifying that their outbreak is under control. Sweden, with the most unique strategy for COVID-19 immunity, has stood pat at just under 1.0 for the entire duration. I wonder if this is an artifact of their systematic approach?

Now, see below for the top 11 countries by Ro (with more than 1K cases) over those same time periods. Just like above, you can see some countries dramatically decrease their Ro (Portugal, Turkey) while new countries replace them at the top of the list. Note also that right now, a high Rate of Reproduction doesn’t correspond to a high deaths-per-1000 number. This may be because these countries are experiencing new outbreaks, or it may be due to some other factor (temperature? immunity? better approaches to medical care?).

Top 11 Regions by Ro – 4/15/2020
Top 11 Regions by Ro – 4/30/2020
Top 11 Regions by Ro – 5/6/2020

COVID-19 Update: Can the Recovery from the Outbreak be managed using the Rate of Reproduction (Ro) calculation?

Germany recently shared that they were reopening their economy with an eye on their Rate of Reproduction calculation. They had been seeing Ro in the 0.7 range and decided to relax some of their lockdown restrictions. Now they are seeing Ro creep up toward 1.0 (edit: I’m calculating their Ro too, and I don’t see this movement; maybe they have data that I don’t) and they are getting concerned. This seems like a data-driven approach to reopening the economy, but is it a good one?

Some Background on Ro

I have published on methods to calculate Ro in previous articles. There may be other ways to do this, but one very simple way is to use the Susceptible, Infected, Recovered (SIR) equations that come from epidemiology. This is why having these three numbers published by a nation or locality is so important (note this, US Governors!). Below is a list of locations where Ro is highest, per my calculations.

Countries sorted by Ro – 5/3/2020

Ro purports to describe how many people an infected person will transmit the virus to. Therefore, if Ro is over 1, the virus will expand through society. If Ro is 2, each infected person will transmit to two others, creating a non-linear growth pattern. Traditionally, Ro is calculated by multiplying the Transmissibility factor (shown in the chart above), which is what we actually back out of the SIR equations, by the number of days a person with the disease is infectious.
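To make that concrete, here is a minimal sketch of backing the transmissibility out of daily S and I series via the discrete SIR susceptible equation, then converting it to Ro. The finite-difference approach and the example numbers are my own illustrative assumptions, not the exact code behind the charts above.

```python
import numpy as np

def transmissibility(S, I, N):
    # SIR: dS/dt = -beta * S * I / N, so from daily data,
    # beta[t] = -(S[t+1] - S[t]) * N / (S[t] * I[t]).
    S = np.asarray(S, dtype=float)
    I = np.asarray(I, dtype=float)
    beta = -np.diff(S) * N / (S[:-1] * I[:-1])
    return beta.mean()

def r0(beta, infectious_days=14):
    # Ro = transmissibility x infectious period, per the text above
    return beta * infectious_days

# Hypothetical daily susceptible/infected counts, population 1M
N = 1_000_000
S = [999_000, 998_850, 998_680, 998_490, 998_280]
I = [800, 900, 1_010, 1_130, 1_260]

beta = transmissibility(S, I, N)
print(f"beta = {beta:.3f}/day -> Ro(14d) = {r0(beta):.2f}, Ro(7d) = {r0(beta, 7):.2f}")
```

Note how directly the infectious-day assumption scales the result; that sensitivity is problem #1 below.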

Problems with this…

  1. I cannot find any references for the actual number of infectious days for COVID-19. In my calculations, I use the widely cited 14-day figure, and I get the same numbers I see published for European countries, so I suspect they’re using 14 days too. But I doubt that’s the right number, because for other infections the number of infectious days ranges from 2 to 10. Since Ro scales directly with this value (for example, a transmissibility of 0.1/day gives Ro = 1.4 at 14 infectious days but only 0.7 at 7 days), my assumption, if shared, could be inflating the Ro numbers associated with COVID-19. Not a huge deal (the transmissibility numbers still indicate whether a country is in a highly infectious period), but it might be creating false comparisons to other diseases.
  2. I also can’t find any data on reinfection percentages for COVID-19. This isn’t surprising, of course, since this is a novel coronavirus, but it means I also have to assume a value for reinfection in the SIR equations. If it turns out that reinfection is higher than we thought, this will lower our transmissibility values (which seems counterintuitive, but it’s complicated).
  3. Superspreaders are a real problem for Ro. A superspreader is an event or person associated with large numbers of infections. Typhoid Mary, who was a non-symptomatic carrier of Typhoid Fever, is a good example: she singlehandedly infected 76 people. Imagine, then, if the Ro for a disease is 2 and one person infects 100 people. This acts like an accelerant on a wildfire! The same applies to an event that acts as a superspreader, such as the Spanish Flu Liberty Loan parade in Philadelphia. Within 72 hours of this superspreading event, every hospital bed in Philadelphia was full; within a week, there were 4K deaths. The CDC paper linked above states about this: “SSEs (Super Spreading Events) highlight a major limitation of the concept of R0. The basic reproductive number R0, when presented as a mean or median value, does not capture the heterogeneity of transmission among infected persons; 2 pathogens with identical R0 estimates may have markedly different patterns of transmission. Furthermore, the goal of a public health response is to drive the reproductive number to a value <1, something that might not be possible in some situations without better prevention, recognition, and response to SSEs.” (Frieden and Lee, 2020) A quick simulation after this list shows what that heterogeneity looks like.
  4. The Ro measure is being co-opted by researchers who seek to “improve” it. This paper on MedRxiv is not peer-reviewed, but seems to be influencing the German government’s calculation of Ro. In summary, the researchers make assumptions about how to modify the Ro equation to account for mobility restrictions and quarantines. I’m not a big fan of this paper, as it seems to rely more on buzzwords and popular assumptions than on facts. I also see no calibration for superspreading events. The approach seems immature, but it does appear that European nations are using it in their calculations. If these researchers are wrong in their assumptions about the value of mobility restrictions or the uniformity of transmission, the whole equation could be off.
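On the superspreading point (item 3 above), a quick simulation makes the CDC quote concrete. A negative binomial offspring distribution is a standard way to model transmission heterogeneity; the dispersion values (k) and case counts here are illustrative, not fit to COVID-19 data.

```python
import numpy as np

rng = np.random.default_rng(0)

def offspring_counts(r0, k, n_cases=100_000):
    # Secondary infections per case, drawn from a negative binomial
    # with mean r0 and dispersion k (small k => superspreader-driven).
    p = k / (k + r0)
    return rng.negative_binomial(k, p, size=n_cases)

# Two hypothetical pathogens with the SAME mean Ro = 2
even = offspring_counts(r0=2.0, k=10.0)   # fairly uniform transmission
bursty = offspring_counts(r0=2.0, k=0.1)  # superspreader-driven

for name, x in [("uniform", even), ("superspreading", bursty)]:
    print(f"{name:>14}: mean={x.mean():.2f}, "
          f"infect no one={np.mean(x == 0):.0%}, max={x.max()}")
```

Both pathogens average Ro = 2, but in the superspreading case most infected people transmit to no one while a handful transmit to dozens. A mean-value Ro hides exactly the dynamics that filled Philadelphia’s hospital beds in 72 hours.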

Conclusion

This outbreak, because it involves a novel virus and a situation we haven’t really faced since 1918, has been a learning experiment. New methods have been tested (nation-wide lockdowns, mandatory face masks), different strategies have been tried (Iceland, Sweden, China, and the US all have very different approaches), and the state of data instrumentation and analysis has been exposed. Using Ro as a single metric for returning to economic function seems on the surface to be a good idea, but the challenges with the Ro metric itself need to be understood as limitations.

COVID-19 Update: Why it makes sense to test the Most at Risk (and why antibody tests won’t be useful for a while)

I saw a paper from UCSF and UC Berkeley while I was looking for the specificity values of COVID-19 antibody tests, and it suggested a topic that might be interesting and useful to folks. It certainly has lots of applications beyond COVID-19 tests or antibody tests. Specificity is the parameter that tells us how well a test avoids false positives: it is the fraction of people without the condition who correctly test negative. A high specificity value tells us that a test has few false positives.

Why do we care about False Positives?

This is a fun subject to me and is always part of my statistics classes. In radar processing we refer to it as a false alarm rate. As most signal processing engineers know, a radar can appear to be a world-beater, but if it has a high false alarm rate, there’s a good chance that most of its detections are not airplanes flying overhead or any other desired target. In medical tests, false positives present a similar challenge: many “detections” from the test turn out not to be the thing we hope to detect. The example below might be helpful. In this case, we’re using a nominal-but-high estimate of the total number of COVID-19 cases: 2 per 100 people. This is about 10x higher than what the data is showing for the US as a whole, so there’s a significant fudge factor here. My false positive rate comes from the UCSF/UCB paper, which stated that even though many of the COVID-19 antibody tests they were evaluating had a 5% false positive rate, “Several of our tests had specificities over 98 percent, which is critical for reopening society.” So I picked 98% specificity (a 2% false positive rate) for my example, to demonstrate why even this nice-sounding number is still unacceptable.

Decision Tree showing True Positives/Negatives along with False Positives and Negatives

Looking above, this is a simple way of evaluating a test. Since we’re applying this to the country as a whole, we’re using our inflated estimate that 2% of adult Americans have contracted COVID-19 (our data plus a 10x safety factor to prevent underestimation). You can see that most of the 175M adult Americans (the 98% who never had the disease) fall into the “True Negative” category when tested: they don’t have the disease and test negative for antibodies. This is good and what we hope for. However, due to our 2% false positive rate, a very large number of Americans who never had COVID-19 still test positive for antibodies; in this case, close to 3.5 million of them. When we go to the top of the decision tree and look at the 2% who the data tells us have had the disease, we find that our test accurately catches 99.9% of them (we’re also assuming a very small false negative rate; this might be unrealistic, but let’s assume it’s a really good test). This translates into… 3.5 million Americans who have had COVID-19 and who test positive for antibodies! The false negatives are unimportant, since they come out to only about 3.5K people.

The critical takeaway is that in the scenario above, with realistic assumptions, we have 7M Americans testing positive for antibodies, but we know that only half of them really do! A positive result is barely meaningful: if you test positive with this test, it’s essentially a coin flip whether you actually have antibodies. This points out that there is really no good reason to run this test, in its current state, on the population at large, because the results are simply not informative.
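Here is a small sketch that reproduces the decision-tree arithmetic above. The function name is mine, and the 0.1% false negative rate is the “really good test” assumption from the walkthrough.

```python
def antibody_test_outcomes(population, prevalence, fpr, fnr):
    # Split a population into the four decision-tree branches.
    # prevalence: fraction who truly had the disease
    # fpr: false positive rate (1 - specificity)
    # fnr: false negative rate (1 - sensitivity)
    sick = population * prevalence
    healthy = population - sick
    true_pos = sick * (1 - fnr)
    false_neg = sick * fnr
    false_pos = healthy * fpr
    true_neg = healthy - false_pos
    ppv = true_pos / (true_pos + false_pos)  # chance a positive is real
    return true_pos, false_neg, false_pos, true_neg, ppv

# The scenario above: 175M adults, 2% prevalence, 98% specificity
tp, fn, fp, tn, ppv = antibody_test_outcomes(175e6, 0.02, 0.02, 0.001)
print(f"true pos: {tp/1e6:.1f}M, false pos: {fp/1e6:.1f}M, "
      f"false neg: {fn/1e3:.1f}K, P(positive is real) = {ppv:.0%}")
```

This prints roughly 3.5M true positives against 3.4M false positives, with a positive result being real only about 50% of the time.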

Where Should We Test Then?

The CDC did one thing wisely with COVID-19 testing early on when they reserved tests for the most affected. I doubt this was accidental; rather, it likely had to do with the effect I’m showing here. If you can determine that a community has a higher probability of having a disease, this reduces the false positive problem. When we know that specific symptoms (loss of smell, fever, difficulty breathing, etc.) raise the likelihood that a person has COVID-19, say from our nominal 2% probability up to 25%, a test with a 2% false alarm rate gives us very different results. Then, instead of our true positives merely equaling our false positives, our true positives are around 17x larger than our false positives. This means that if you already exhibit symptoms, the test is statistically more valuable to you, because it is more effective at predicting whether you really have the disease.
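Running the symptomatic scenario through the same antibody_test_outcomes sketch from above shows the shift (the 25% prior is the illustrative figure from the paragraph above):

```python
# Reusing antibody_test_outcomes() from the earlier sketch:
# 1000 symptomatic people with a 25% prior probability of disease
tp, fn, fp, tn, ppv = antibody_test_outcomes(1000, 0.25, 0.02, 0.001)
print(f"true pos: {tp:.0f}, false pos: {fp:.0f}, "
      f"ratio: {tp/fp:.0f}x, P(positive is real) = {ppv:.0%}")
```

A positive result now carries roughly a 94% chance of true antibodies, versus the coin flip in the population-wide case.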

Summary

Maybe this is boring, but it applies to cancer tests, tests on an assembly line, and anywhere else a test is less than 100% accurate (which is essentially all tests). I’ll recap by scaling the numbers above to a “universe” of 1000 people to make a better comparison.

1) In this universe of 1000 people where statistically 2% have been exposed to a disease (we’ll call this the healthy universe), a test with a 2% false positive rate will give the following results:

  • 20 people: Have disease/antibodies and test positive for disease/antibodies.
  • 0 people: Have disease but test negative
  • 20 people: Don’t have disease/antibodies, BUT test positive for disease/antibodies
  • 960 people: Don’t have disease and test negative

2) In the adjacent universe of 1000 people (the obviously-symptomatic universe), where statistically 25% of people with those symptoms are sick, the 2% false positive test gives the following results:

  • 250 people: Have disease/test positive
  • 0 people: Have disease/test negative
  • 15 people: Don’t have disease/test positive
  • 735 people: Don’t have disease/test negative

Hope this is a helpful explanation of a couple of things: 1) why the antibody tests aren’t really trustworthy yet, and 2) why we give tests to the people most likely to have the disease (because then the test is effective at predicting the sickness accurately).