Evidential Notes: This post links to two related editorials by John P.A. Ioannidis, long a prominent figure in the field of medical statistics and in the evidence-based medicine movement. The earlier editorial, from STAT, is from March 17, about a week old at posting time. The later editorial is a peer-reviewed publication in the European Journal of Clinical Investigation, published on March 19, five days old at posting time. When the first article was written, the ECDC reported about 4,600 confirmed cases in the U.S. Today, at posting time, they report about 46,000 confirmed cases.[1]
Pandemic Covid-19 is a serious public health threat. Of course it is. One reason it’s serious is biological: it’s a potentially serious disease for anyone who catches it, and a very serious, life-threatening disease for some of the people who catch it, and so far it seems to be a fairly contagious disease that’s hard to contain. Another reason it’s serious is institutional: the disease and reactions to the disease have caused overwhelming congestion, resource starvation, temporary breakdowns, and catastrophic failures in some countries’ healthcare systems, which have led to appalling conditions and to deaths, both from the disease itself and from other health emergencies that could not be adequately treated. You should take it seriously, and you absolutely should do what you can to keep yourself healthy and to keep those around you safe.
There’s also a lot that we do not yet know about Covid-19 as a disease, and a lot that we do not yet know about the effects of the drastic responses that people and institutions have already made in response to the Covid-19 pandemic, or the capabilities and knock-on effects of continuing and escalating measures that are being proposed. There has been an immense, often frenetic attempt to gather more information about the disease, about its spread, and about social and institutional responses to it — not just among experts or hobbyists, but by nearly everyone through news media, scientific journals, situation reports, social media, and every other outlet under the sun. That produces lots of information; it also produces lots and lots of low-quality information and specious info-garbage.
The current coronavirus disease, Covid-19, has been called a once-in-a-century pandemic. But it may also be a once-in-a-century evidence fiasco.
At a time when everyone needs better information, from disease modelers and governments to people quarantined or just social distancing, we lack reliable evidence on how many people have been infected with SARS-CoV-2 or who continue to become infected. Better information is needed to guide decisions and actions of monumental significance and to monitor their impact.
. . . The data collected so far on how many people are infected and how the epidemic is evolving are utterly unreliable. Given the limited testing to date, some deaths and probably the vast majority of infections due to SARS-CoV-2 are being missed. We don't know if we are failing to capture infections by a factor of three or 300. Three months after the outbreak emerged, most countries, including the U.S., lack the ability to test a large number of people and no countries have reliable data on the prevalence of the virus in a representative random sample of the general population.
This evidence fiasco creates tremendous uncertainty about the risk of dying from Covid-19. Reported case fatality rates, like the official 3.4% rate from the World Health Organization, cause horror — and are meaningless. Patients who have been tested for SARS-CoV-2 are disproportionately those with severe symptoms and bad outcomes. As most health systems have limited testing capacity, selection bias may even worsen in the near future.
The one situation where an entire, closed population was tested was the Diamond Princess cruise ship and its quarantine passengers. The case fatality rate there was 1.0%, but this was a largely elderly population, in which the death rate from Covid-19 is much higher.
Projecting the Diamond Princess mortality rate onto the age structure of the U.S. population, the death rate among people infected with Covid-19 would be 0.125%. But since this estimate is based on extremely thin data — there were just seven deaths among the 700 infected passengers and crew — the real death rate could stretch from five times lower (0.025%) to five times higher (0.625%). It is also possible that some of the passengers who were infected might die later, and that tourists may have different frequencies of chronic diseases — a risk factor for worse outcomes with SARS-CoV-2 infection — than the general population. Adding these extra sources of uncertainty, reasonable estimates for the case fatality ratio in the general U.S. population vary from 0.05% to 1%.
That huge range markedly affects how severe the pandemic is and what should be done. . . .
— John P.A. Ioannidis, A fiasco in the making?
STAT (March 17, 2020). Boldface emphasis added.
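The Diamond Princess arithmetic quoted above can be sketched numerically. A minimal sketch, with one caveat: the age-adjustment factor below is back-solved from the editorial's own quoted figures (0.125% projected from the 1.0% observed rate), not taken from any underlying demographic data.

```python
# Sketch of the Diamond Princess case fatality arithmetic quoted above.
# The age-adjustment factor is back-solved from the editorial's own
# numbers (0.125% projected from 1.0% observed); it is illustrative only.

deaths = 7
infected = 700
observed_cfr = deaths / infected               # 1.0% in a largely elderly cohort

projected_cfr = 0.00125                        # 0.125% after projecting onto
                                               # the U.S. age structure
age_adjustment = projected_cfr / observed_cfr  # implied factor of 0.125

# The editorial widens the point estimate fivefold in each direction
# to reflect the thin data (just seven deaths).
low, high = projected_cfr / 5, projected_cfr * 5

print(f"observed CFR:  {observed_cfr:.3%}")    # 1.000%
print(f"projected CFR: {projected_cfr:.3%}")   # 0.125%
print(f"range: {low:.3%} to {high:.3%}")       # 0.025% to 0.625%
```

Note how little data the whole projection rests on: multiply or divide seven deaths by a handful and the resulting U.S. estimate swings across the entire 0.025%–0.625% band.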
(Archival PDF of the paper as accessed from Wiley.com on March 24, 2020)
The evolving coronavirus disease 2019 (COVID-19) pandemic[1] is certainly cause for concern. Proper communication and optimal decision-making is an ongoing challenge, as data evolve. The challenge is compounded, however, by exaggerated information. This can lead to inappropriate actions. It is important to differentiate promptly the true epidemic from an epidemic of false claims and potentially harmful actions. . . .
— John P.A. Ioannidis, Coronavirus disease 2019: the harms of exaggerated information and non-evidence-based measures
European Journal of Clinical Investigation, March 19, 2020. doi:10.1111/eci.13222
What are some likely error bars on early, widely disseminated and decontextualized estimates of the contagiousness of the virus and case fatality rate for the disease? They’re pretty wide.
Exaggerated pandemic estimates: An early speculation that 40-70% of the global population will be infected went viral.[4] Early estimates of the basic reproduction number (how many people get infected by each infected person) have varied widely, from 1.3 to 6.5.[5] These estimates translate into many-fold difference in the proportion of the population eventually infected and dramatically different expectations on what containment measures (or even any future vaccine) can achieve. The fact that containment measures do seem to work means that the basic reproduction number is probably in the lower bound of the 1.3-6.5 range, and can decrease below 1 with proper measures. The originator of the 40-70% of the population estimate tweeted on March 3 a revised estimate of 20-60% of adults, but this is probably still substantially exaggerated. Even after the 40-70% quote was revised downward, it still remained quoted in viral interviews.[6]
Exaggerated case fatality rate (CFR): Early reported CFR figures also seem exaggerated. The most widely quoted CFR has been 3.4%, reported by WHO dividing the number of deaths by documented cases in early March.[7] This ignores undetected infections and the strong age-dependence of CFR. The most complete data come from Diamond Princess passengers, with CFR=1% observed in an elderly cohort; thus, CFR may be much lower than 1% in the general population; probably higher than seasonal flu (CFR=0.1%), but not much so. Observed crude CFR in South Korea and in Germany[8], the countries with most extensive testing, is 0.9% and 0.2%, respectively as of March 14 and crude CFR in Scandinavian countries is about 0.1%. Some deaths of infected, seriously ill people will occur later, and these deaths have not been counted yet. However even in these countries many infections probably remain undiagnosed. Therefore, CFR may be even lower rather than higher than these crude estimates.
— John P.A. Ioannidis, Coronavirus disease 2019: the harms of exaggerated information and non-evidence-based measures
European Journal of Clinical Investigation, March 19, 2020. doi:10.1111/eci.13222
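The selection-bias argument running through both editorials (crude CFR divides deaths by documented cases, while undetected infections go into neither numerator nor denominator) can be made concrete with a short sketch. The case counts and undercount factors below are hypothetical, chosen only to produce a crude CFR near the WHO's quoted 3.4% and to span the "factor of three or 300" uncertainty from the first editorial.

```python
# Why crude CFR (deaths / confirmed cases) is sensitive to undetected
# infections. All numbers here are hypothetical illustrations: the counts
# are chosen to yield a ~3.4% crude CFR, and the undercount factors span
# the "factor of three or 300" range mentioned in the STAT editorial.

deaths = 100
confirmed = 2941                 # hypothetical: gives a crude CFR near 3.4%

crude_cfr = deaths / confirmed
print(f"crude CFR: {crude_cfr:.2%}")

# If testing misses most infections, the true denominator is larger and
# the fatality rate among all infected people is correspondingly smaller.
for undercount in (3, 30, 300):
    true_infections = confirmed * undercount
    fatality_rate = deaths / true_infections
    print(f"undercount x{undercount:>3}: fatality rate {fatality_rate:.3%}")
```

With a 30-fold undercount, the same raw counts that produce a frightening 3.4% crude CFR imply a fatality rate of roughly 0.11%, in the neighborhood of the 0.1% figure the editorial cites for seasonal flu; which is exactly why the size of the undercount matters so much.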
How much of the divergence in estimates is due to errors of analysis? How much to artifacts of measurement (for example, the wildly different availability of testing and different methods of assigning tests in different countries)? How much of it is due to real biological or institutional differences in the different countries involved (differences in health care systems and their capabilities, differences in the age and health and customs of different populations)? In particular, I’m writing from the southeastern U.S., so a lot of the people that I know are wondering whether the situation where they are, or in the U.S. as a whole, is going to be more like the situation in Hubei Province, China, or more like the situation in Korea, or more like the situation in northern Italy. That’s a complicated question, but a lot of people are treating it as if it were a simple one. It’s not immediately obvious. It’s also not at all obvious how the actions being taken in response to the pandemic will affect the biological risk-factors, or how they will affect the institutional risk-factors, from the epidemic in the U.S. How effective are they at their intended goal? Do they have foreseeable side-effects or negative unintended consequences?
Extreme measures: Under alarming circumstances, extreme measures of unknown effectiveness are adopted. . . . Evidence is lacking for the most aggressive measures. A systematic review on measures to prevent the spread of respiratory viruses found insufficient evidence for entry port screening and social distancing in reducing epidemic spreading.[] Plain hygienic measures have the strongest evidence.[][] Frequent hand washing and staying at home and avoiding contacts when sick are probably very useful. Their routine endorsement may save many lives. Most lives saved may actually be due to reduced transmission of influenza rather than coronavirus.
Most evidence on protective measures comes from non-randomized studies prone to bias. A systematic review of personal protective measures in reducing pandemic influenza risk found only two randomized trials, one on hand sanitizer and another on facemasks and hand hygiene in household members of people infected with influenza.[]
Harms from non-evidence based measures: Given the uncertainties, one may opt for abundant caution and implement the most severe containment measures. By this perspective, no opportunity should be missed to gain any benefit, even in absence of evidence or even with mostly negative evidence.
This reasoning ignores possible harms. Impulsive actions can indeed cause major harm. One clear example is the panic buying that depleted supplies of face masks, escalated prices, and created a shortage for medical personnel. Masks, gloves, and gowns are clearly needed for medical personnel; their lack puts health care workers' lives at risk. Conversely, they are meaningless for the uninfected general population. However, a prominent virologist's comment[12] that people should stock surgical masks and wear them around the clock to avoid touching their nose went viral.
Misallocation of resources: Policy-makers feel pressure from opponents who lambast inaction. Also adoption of measures in one institution, jurisdiction, or country creates pressure for taking similar measures elsewhere under fear of being accused of negligence. Moreover, many countries pass legislation that allocates major resources and funding to the coronavirus response. This is justified, but the exact allocation priorities can become irrational. . . .
. . . [E]nhanced detection of infections and lower hospitalization thresholds may increase demands for hospital beds. For patients without severe symptoms, hospitalizations offer no benefit and may only infect health workers causing shortage of much-needed personnel. Even for severe cases, effectiveness of intensive supportive care is unknown. Excess admissions may strain health care systems and increase mortality from other serious diseases where hospital care is clearly effective. . . .
Economic and social disruption: The potential consequences on the global economy are already tangible. February 22-28 was the worst week for global markets since 2008 and the worst may lie ahead. Moreover, some political decisions may be confounded with alternative motives. Lockdowns weaponized by suppressive regimes can create a precedent for easy adoption in the future. Closure of borders may serve policies focused on limiting immigration. Regardless, even in the strongest economies, disruption of social life, travel, work, and school education may have major adverse consequences. The eventual cost of such disruption is notoriously difficult to project. . . .
Learning from COVID-19: . . . If COVID-19 is indeed the pandemic of the century, we need the most accurate evidence to handle it. Open data sharing of scientific information is a minimum requirement. This should include data on the number and demographics of tested individuals per day in each country. Proper prevalence studies and trials are also indispensable.
If COVID-19 is not as grave as it is depicted, high evidence standards are equally relevant. Exaggeration and over-reaction may seriously damage the reputation of science, public health, media, and policy makers. It may foster disbelief that will jeopardize the prospects of an appropriately strong response if and when a more major pandemic strikes in the future.
— John P.A. Ioannidis, Coronavirus disease 2019: the harms of exaggerated information and non-evidence-based measures
European Journal of Clinical Investigation, March 19, 2020. doi:10.1111/eci.13222
Dissenting Views: Ioannidis’s articles have been controversial. For some direct responses, see the comments on the first article, and see also Marc Lipsitch, We know enough to act decisively against Covid-19.