(CIDRAP Business Source Osterholm Briefing) – I've always been amazed at how some people use numbers to make their point. For example, I could say that, between the two of us, Barry Bonds and I average 378 career major league home runs. Of course, that doesn't tell you that I account for zero of those dingers, and we all know that such an analysis isn't statistically appropriate. But far too often such calculations seem to become fact if the number is repeated enough.
As you make a point about a preparedness issue to a boss or coworker, your analysis may well include a couple of statistical calculations about the likelihood of a crisis. This makes sense. A statistical analysis that supports your conclusion adds credibility to your recommendations. In the field of epidemiology, we often say, "That which gets counted gets acted upon."
But be very careful about the statistics you use to support your proposed preparedness plan. In particular, take the vast majority of the complex statistical modeling studies that appear so often in the mainstream media today, each offering a glimpse of how the next pandemic might unfold.
Then toss them out.
I know that's a strong statement, but such statistical predictions, though intriguing, are largely voodoo science and reflect what I see as an epidemic of modeling activity. In my entire career, I have yet to see a single complex statistical model of infectious disease that has had any public health or medical impact.
Flaws in statistical predictions
Recently published mathematical models are numerous. They use complex formulas to predict how an event might unfold, and they have made predictions about various aspects of the next pandemic, including:
- The likelihood of its occurrence
- Our ability to avert it by early and widespread use of antiviral drugs
- How it will unfold in terms of case numbers and deaths
If your preparedness plan uses these models to provide evidence for a certain intervention or to estimate the course of the next pandemic, you are grossly overinterpreting their validity.
In my 33 years in epidemiology, my colleagues and I have performed many thousands of statistical calculations. Some of these calculations have had important public health impact, particularly when they involved disease outbreaks. I have also advised numerous graduate students who have used quite sophisticated statistical analyses.
It's with this experience that I feel confident saying that most of the currently touted complex statistical models don't really tell us about the future. In fact, they may very well mislead us into preparing for something that will almost certainly unfold very differently from what the models predict.
In recent meetings, I've watched a slew of statisticians deliver elaborate presentations with lots of numbers and wonderful graphs, all indicating that they have the answer to how the future will unfold and how any intervention will change the course of a pandemic. You will be tempted to use this information for your planning activities, since uncertainty is a difficult foundation for a plan. And we all want to be associated with those who appear to give us a glimpse into the future.
But ask the British government officials who relied on statistical models from one of Britain's leading modelers in the early days of that country's epidemic of mad cow disease (bovine spongiform encephalopathy). One such model predicted that the outbreak would produce more than 100,000 cases of variant Creutzfeldt-Jakob disease, the human equivalent of mad cow. Today, the cattle epidemic in Great Britain and the rest of Europe is largely over, but only 167 confirmed and probable human cases have resulted, a far cry from what the best British modelers predicted.
I believe our experience with the next influenza pandemic will play out similarly: The modelers will have missed the real experience by a country mile.
One telling example
Let me give you an example. Several years ago, two groups of statisticians—one from the United States and one from Great Britain—predicted that an early and aggressive response by public health officials during the first days of an emerging pandemic could avert it altogether. The main strategy would be to attack the outbreak while there were still only a few cases, confined to a single country. Officials would quickly and widely distribute antiviral drugs to the entire population and restrict movement of people out of the region.
Well, these assumptions are just plain nuts. They fail to account for very real problems involving politics, economics, local customs, and healthcare disparities—not to mention unpredictable human behavior.
We know that rapid detection of H5N1 avian influenza in humans is problematic in many developing countries. And in previous clusters of human cases in Asia, many potentially exposed people actually believed that the antiviral drugs would give them avian flu, not protect them. Further, governments are unlikely to declare in the first days that they are the epicenter of an emerging pandemic, because doing so would trigger swift and widespread trade and travel restrictions by the rest of the world. These factors alone render null and void any fancy statistical calculation predicting the impact of antiviral drugs in the early days of a pandemic.
The bottom line for business
You should develop your organization's preparedness plans on the basis of your professional experience and a range of possible pandemic outcomes. Don't use fancy statistical models that appear to give you a clairvoyant glimpse into the future; they could one day leave you so wrong that you lose your credibility.
That doesn't mean you can't plan. A simple approach is logical, such as estimating the potential number of human cases if the next pandemic resembles the 1968 pandemic versus the far more severe one in 1918, as sketched below. Similarly, preparing for widespread supply-chain disruption is prudent.
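To make that arithmetic concrete, here is a minimal sketch of such a range-based estimate in Python. The attack rates and case-fatality ratios below are illustrative assumptions only (published figures for the 1968 and 1918 pandemics vary widely); substitute whatever planning values your organization trusts.

```python
# A minimal sketch of a range-based planning estimate.
# The scenario parameters are illustrative assumptions, not established figures.
SCENARIOS = {
    # scenario name: (assumed clinical attack rate, assumed case-fatality ratio)
    "1968-like (mild)":   (0.25, 0.001),
    "1918-like (severe)": (0.30, 0.020),
}

def scenario_estimates(workforce: int) -> None:
    """Print a rough planning range of cases and deaths across scenarios."""
    for name, (attack_rate, cfr) in SCENARIOS.items():
        cases = workforce * attack_rate      # employees who fall ill
        deaths = cases * cfr                 # deaths among those cases
        print(f"{name}: ~{cases:,.0f} ill, ~{deaths:,.0f} deaths "
              f"among {workforce:,} employees")

scenario_estimates(10_000)  # e.g., a hypothetical 10,000-person organization
```

The point is the spread between the scenarios, not any single number: planning across the full range is the opposite of betting on one model's point prediction.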
But every time you read about another statistical model that tells you how the next pandemic will unfold, be prevented, or otherwise affect society, be very cautious. If nothing else, remember the words of Sir Josiah Stamp, a former director of the Bank of England, who reminded us all of the power, or lack thereof, of statistical analyses:
"The government [is] extremely fond of amassing great quantities of statistics. These are raised to the Nth degree, the cube roots are extracted, and the results are arranged into elaborate and impressive displays. What must be kept ever in mind, however, is that in every case, the figures are first put down by a village watchman, and he puts down anything he damn well pleases."
—Michael T. Osterholm, PhD, MPH, is Director of the Center for Infectious Disease Research & Policy (CIDRAP), Editor-in-Chief of the CIDRAP Business Source, Professor in the School of Public Health, and Adjunct Professor in the Medical School, University of Minnesota.