Myth #5: I know how to fix this
Congratulations: you’ve just been put in charge of your country’s decisions on dealing with COVID-19. You are presented with an option that is guaranteed to result in 10,000 deaths but is also guaranteed to save 10,000 jobs. Would you do it? How about a guaranteed 10,000 deaths to save a guaranteed 100,000 jobs? Where is your tipping point between saving lives and economic hardship?
Once you get comfortable with that, make it more interesting- take the word “guaranteed” out of the equation on both sides. How would that affect you? Would you be frustrated that you didn’t have clear information? Or would it make it easier for you to rationalize the potential negative impact (e.g., maybe the loss of life wouldn’t really happen)?
This is an unprecedented situation. The most severe global pandemic prior to COVID-19 was the 1918 H1N1 influenza pandemic. Below is a picture associated with that one- it was clearly a different era. COVID-19 is a whole new ball game.
So where do you go from here? This series of myths has focused on data, which presents a challenge when looking to the future, since that data hasn’t arrived yet. You have historical economic indicators at your disposal, but adapting them to this situation requires guesswork. You have models to predict how the virus will spread, which is a hot topic nowadays. Models are data-based, future-focused, and widely misunderstood, making them a good fit for this series. The sample model we will use is from Imperial College and was published on March 16, in the early stages of COVID-19. It is very well done and has aged a bit, making it a perfect fit to illustrate four key points.
Key point #1: The purpose of a model is not to predict a single outcome, like 4,327,862 global cases or 234,972 global deaths. The purpose of a model is to take the intelligence available at the time and project different outcomes based on different combinations of conditions. The Imperial College model also projects the impact of different preventative measures like social distancing (whole population or restricted to people >70), quarantine, closing of schools, etc. The model projected over 100 potential outcomes, depending on which actions are taken. There’s a tendency to sensationalize the most negative possible outcome from a model (2.2 MILLION DEATHS POSSIBLE IN THE U.S. ALONE!) and then claim the model was wrong if it doesn’t happen. Don’t fall for it.
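To make the “many outcomes from one model” idea concrete, here is a toy SIR-style simulation. This is my own sketch, not the Imperial College model, and every number in it is a hypothetical assumption- the point is only that the same model, fed different assumed conditions, projects different outcomes:

```python
def sir_peak_infected(r0, days=365, population=1_000_000, i0=10):
    """Toy SIR model: peak simultaneous infections for a given R0.

    All parameters are illustrative assumptions, not real
    COVID-19 estimates.
    """
    gamma = 1 / 7        # assumed ~7-day infectious period
    beta = r0 * gamma    # transmission rate implied by R0
    s, i, r = population - i0, i0, 0
    peak = i
    for _ in range(days):  # simple daily Euler steps
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# One model, several assumed scenarios: each intervention is
# represented as a different effective R0 (hypothetical values).
for label, r0 in [("no measures", 2.4), ("school closures", 1.8),
                  ("full distancing", 1.3)]:
    print(f"{label}: peak infected ~ {sir_peak_infected(r0):,.0f}")
```

Each line of that loop is one “potential outcome”; a real model varies far more than one parameter, which is how you end up with 100+ projections.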
Key point #2: A model is not useless or wrong just because it has to be revised. The Imperial College report contained a lengthy section describing their assumptions- an incubation period of 5.1 days, 50% compliance with home quarantine, a reproduction number (R0) of 2.4, etc. The virus was new when the study was conducted, so assumptions were based on the collective intelligence from past coronaviruses. To assume COVID-19 would behave exactly the same may not be totally realistic, but disregarding all prior knowledge would be irresponsible. The point is that as new information about the current virus is gathered, it should be built into the model to improve your ability to predict outcomes from the present point forward- which is critical, as conditions evolve over time.
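As one small illustration of why revised assumptions matter (a toy calculation of my own, with made-up numbers, not the report’s method): in a simple exponential-growth approximation, updating the R0 estimate directly changes the projected doubling time- and everything downstream of it.

```python
import math

def doubling_time_days(r0, infectious_period_days=7):
    """Approximate early-epidemic doubling time from R0.

    Uses the simple growth rate (R0 - 1) / D; both inputs
    here are illustrative assumptions, not real estimates.
    """
    growth_per_day = (r0 - 1) / infectious_period_days
    return math.log(2) / growth_per_day

initial = doubling_time_days(2.4)   # assumption at publication
revised = doubling_time_days(3.0)   # hypothetical later estimate
print(f"initial: {initial:.1f} days, revised: {revised:.1f} days")
```

A revision like that doesn’t mean the first model run was “wrong”- it means the model is doing its job of absorbing new information.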
Key point #3: A model must identify the Armageddon scenario. Models are based on simulation. Simulations run through a bazillion trials to project the most likely outcomes. Suppose you want to send people back to work, and simulation results say that 50% of the time the number of deaths projects to be 30,000 or less. Great to know, but not a complete picture of risk. You also have to look at the top end- not the absolute worst case scenario, but what is reasonably possible if things break wrong. If the 50th percentile is at 30,000 deaths and the top end is at 35,000, it’s a different scenario than if the 50th percentile is at 30,000 deaths and the top end is at 5,000,000. Or is it? Would that enter into your decision?
By the way- if the Armageddon scenario doesn’t happen, that’s a good thing. The Imperial College model projected that if there were no defensive measures taken then there would be 2.2 million deaths in the United States and over 500,000 in the United Kingdom. That’s precisely why the two countries undertook defensive measures. Some people feel the restrictions imposed are a massive overreaction because the case / death count projections haven’t materialized. That is definitely a possibility… but it’s important to remember that minimizing the counts is what was supposed to happen. In the prior paragraph, that equates to a 10% chance of 5 million deaths and a 90% chance of everyone saying you overreacted.
Key point #4: Models can be narrowly focused. This is the key limitation. The Imperial College model focuses singularly on what it takes to stop the spread of the virus, stating “we do not consider ethical or economic implications here, except to note that there is no easy policy decision to be made.” The most aggressive containment strategy in the model requires 12-18 months of rotating shelter in place, with restrictions eased at certain trigger points, until a vaccine is available… which of course isn’t guaranteed. What is guaranteed is that following that plan would result in massive unemployment and certain deaths from other health-related issues.
Moral of the story:
Models are important tools that should help shape the decisions that need to be made going forward. It’s easy to “solve” problems when you only look at one dimension. “I want as few COVID-19 deaths as possible” OR “I want no permanent damage to the economy.” OR… OR… Balancing these priorities is incredibly difficult.
So… What would you do?
Thanks for all the support throughout the series- I appreciate all the feedback. A number of people have asked whether I plan to comment on the data for the United States specifically and what it means. If people are interested, I will put out a post some time next week.
Stay safe, everyone!