When reports of the coronavirus were first made public, the world became gripped with fear and anxiety. Despite being inundated daily with the latest data and guidelines, many unanswered questions and uncertainties persist. But at least one thing remains certain: the prediction models used to determine action and policy have been far from reliable. A cursory look at models used in the United States (for which the most information is available) illustrates this unreliability.
The initial COVID-19 tracking model, developed by Imperial College London, projected on March 17, 2020, some 510,000 British deaths and as many as 2.2 million American deaths. On the basis of this report, the U.K. and the U.S., which had previously taken a somewhat relaxed position on the virus because of information from China and the World Health Organization, abruptly moved to enact extreme measures such as shutdowns and social distancing.
Fewer than two weeks later, Dr. Neil Ferguson, the model’s team leader and an “influential epidemiologist,” revised his estimate to under 20,000 U.K. deaths, even though the team had stated that the model’s data had been “painstakingly collected.” Reports also circulated that the projected U.S. death toll had been downgraded to fewer than 200,000. Two public health experts from Stanford University said Ferguson’s estimates were “orders of magnitude” too high. Yet although the model’s projections had been massively overstated, the severe interventions already imposed were not modified.
Subsequent model projections continued to be far off, even after factoring in mitigation measures and physical distancing. In late March, the University of Washington’s Institute for Health Metrics and Evaluation (IHME), whose model was used by the U.S. coronavirus task force, predicted more than 90,000 American deaths, a figure revised shortly afterwards to 60,000. The IHME’s national predictions for April 5 were also dramatic overestimates: hospitalizations by a factor of eight, intensive care unit beds by 6.4 and ventilators by 40.5.
According to a revised model, New York would need 24,000 hospital beds and 6,000 ICU beds on April 5; the model was off by one-third. In addition, in less than one week, national projections fell by two-thirds for hospital beds, one-half for ICU beds and almost one-half for ventilators. Andrew McCarthy, columnist and former assistant U.S. attorney, commented that while the government relies on these models to create policy, their “fundamental assumptions are so dead wrong, they cannot remain reasonably stable for just 72 hours.”
A touted “new and improved” model, developed to accurately predict hospitalizations two months out, wasn’t even close with its predictions for the next day. For example, on April 8 the model indicated that the next day California would need 4,386 hospital beds, Connecticut 3,686, Montana 70 and North Dakota 392. The following day, April 9, the numbers of hospital beds actually required were 2,825, 1,464, 13 and 14, respectively.
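As a rough illustration, the size of these next-day misses can be expressed as overestimation factors, using only the predicted and actual figures cited above (the state names and numbers come from the article; the calculation itself is just the ratio of forecast to outcome):

```python
# Relative size of the April 8 next-day hospital-bed projections versus the
# April 9 actuals, using the figures quoted in the article.
predictions = {"California": 4386, "Connecticut": 3686,
               "Montana": 70, "North Dakota": 392}
actuals = {"California": 2825, "Connecticut": 1464,
           "Montana": 13, "North Dakota": 14}

for state, predicted in predictions.items():
    actual = actuals[state]
    factor = predicted / actual  # how many times too high the forecast was
    print(f"{state}: predicted {predicted}, actual {actual}, "
          f"overestimate factor {factor:.1f}x")
```

Even for a one-day horizon, the forecasts above range from roughly 1.6 times too high (California) to 28 times too high (North Dakota).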
Are our national and provincial tracking models superior to the ones used by our southern neighbour? If models are used as justification for drastic economic and social measures meant to avoid overburdening our hospitals (the stated purpose in the first place), it is not unreasonable to expect a much higher degree of accuracy. To date in this COVID era, models have been anything but accurate and reliable, yet the “experts” insist on using them to set public policy.
Without omniscience, how are we to know definitively that millions would have died had we not shut down our economies? Benjamin Franklin’s words are perhaps timely: “Those who would give up essential liberty, to purchase a little temporary safety, deserve neither liberty nor safety.” The virus is serious and every life has value, but would less-draconian measures have sufficed?