New York City Mayor Bill de Blasio had a major problem on his hands last month, one partly of his own making. He had promised the city’s teachers’ union that he would shut down the city’s huge public school system, reopened not long before, if the city’s “test positivity rate” reached 3%. It did, so he closed the schools. Barely 10 days later, with the rate even higher at 3.9%, de Blasio reversed his decision, telling CNN that when the 3% threshold he had once vigorously defended was put in place, the city “did not have the information we currently have.”
This deviation from the test-positivity metric – typically calculated as the number of Covid-19 tests that come back positive divided by the number of tests performed – is significant. Since the start of the pandemic, most reports of the disease’s spread have relied on a simpler figure: the number of cases diagnosed. That number remains the headline statistic on major news sites’ Covid trackers, even though the total case count has sometimes been quite misleading. Last spring, for example, testing was scarce and many cases went undetected. Policymakers have therefore come to rely on test positivity as an alternative – hence the 3% threshold for school closures in New York, or Connecticut’s directive that visitors from states where test positivity exceeds 10% should self-quarantine. But this replacement metric has been poorly understood.
Consider the arithmetic. Test positivity is not a direct measure of new infections appearing in a population. It is a ratio, and a ratio increases in two ways: when the numerator (here, the number of positive tests) rises, or when the denominator (here, the number of tests performed) falls. Since the number of tests varies from place to place and over time, test positivity may rise or fall even when there is no change in the spread of the disease.
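The point about the denominator can be made concrete with a small sketch. The figures below are hypothetical, chosen only to illustrate how the ratio moves when testing volume changes while infections do not:

```python
def positivity(positive_tests: int, total_tests: int) -> float:
    """Test positivity: positive tests divided by tests performed."""
    return positive_tests / total_tests

# Week 1 (hypothetical): 300 positives out of 10,000 tests.
week1 = positivity(300, 10_000)   # 3.0%

# Week 2 (hypothetical): the same 300 positives, but testing
# falls to 7,500. Positivity rises to 4.0% with no change in
# the spread of the disease, only because the denominator shrank.
week2 = positivity(300, 7_500)    # 4.0%

print(f"{week1:.1%}  {week2:.1%}")  # prints "3.0%  4.0%"
```

The same mechanism runs in reverse: a surge in testing of mostly healthy people can push the ratio down even as infections climb.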
As a result, although test positivity may be more informative than raw case counts, it has distortions of its own. The ratio varies with the availability of tests, who decides to get tested, and whether they can get into a testing center if they try. The numbers can also differ among subpopulations: positivity was still just 0.3% in New York City schools, for example, when the overall city rate rose to 3.9%. And some states offer free tests only to people with symptoms, a policy that all but guarantees a higher test-positivity rate.
One of the reasons that test-positivity ratios have become so widely used is that they appeared on the Coronavirus Resource Center dashboard at Johns Hopkins University early in the pandemic. But they weren’t meant to be used as a direct measure of the spread of the coronavirus, says Jennifer Nuzzo, the center’s senior epidemiologist. Rather, the numbers were meant to show whether enough testing was being done. When she and her colleagues first developed the website, they noticed that testing rates varied widely from country to country. But countries that were successful at managing the virus and had comprehensive surveillance in place showed test-positivity rates of between about 3% and 5%. “This led to the realization that it made sense to track testing in this way,” she says. And practically speaking, they had very few other data points to consider at that point.
Many epidemiologists view policymakers’ dependence on the test-positivity ratio in similar terms, as an expedient: the number was at hand, so people started using it. The media, too, seized on test-positivity ratios and turned them into screaming headlines. “Somewhere along the line, some threads crossed,” says Michael Mina, a virologist and epidemiologist at the Harvard T.H. Chan School of Public Health. The ratio alone doesn’t tell you how prevalent Covid infections have become in your community. “Test positivity doesn’t really reflect anything unless you know very well who is being tested, and why,” says Mina.