I have a feeling that our response to this pandemic may lead to some reflections on the role of scientific models in the decision-making process. I would normally err on the side of defending scientific advisors, but I have a sense that they might face some justified criticism. I, of course, don’t know all the details of what information was presented, how it was presented, what pressures the scientific advisors faced, and how the decisions were made. However, it does seem as though many – who should probably have known better – failed to recognise the strengths and weaknesses of the scientific models that were being used.
Scientific models typically allow us to ask “what if” questions: What will happen if we do nothing? What will happen if we encourage social distancing? What will happen if we enforce a partial lockdown? What about a full lockdown? What will happen if we wait a week before doing something, rather than starting now? And so on. Strictly speaking, they present projections rather than predictions; they’re predictions that are conditional on us actually following the scenario that was modelled. There’s also always some level of uncertainty, so the questions should maybe be more properly phrased as “what could happen if…?”
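To make this concrete, here’s a minimal, purely illustrative sketch: a basic SIR model with made-up parameter values (not the actual model the advisors used), where each “what if” scenario is represented simply by a different assumed transmission rate.

```python
# A toy SIR model: each "what if" scenario corresponds to a different
# transmission rate (beta), and the model projects what follows from it.
def sir_projection(beta, gamma=0.1, days=200, n=1_000_000, i0=100):
    """Integrate a simple SIR model with daily time steps and return
    the number currently infected on each day."""
    s, i, r = n - i0, i0, 0
    infected = []
    for _ in range(days):
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        infected.append(i)
    return infected

# Hypothetical scenarios: these numbers are invented for illustration only.
scenarios = {"do nothing": 0.3, "social distancing": 0.18, "lockdown": 0.08}
for name, beta in scenarios.items():
    print(f"{name}: peak infected ~ {max(sir_projection(beta)):,.0f}")
```

Each run is a projection, not a prediction: it only tells you what the model implies if the corresponding scenario is actually followed.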
However, we seem to be treating these models as predictions, without always making clear that there is quite a lot of uncertainty involved, both in how they model the infection itself and in how they handle the various possible societal scenarios. Of course, it’s important to test models against real-world data; if there is a good match, and if the assumptions in the model closely reflect what actually happened, then one can be reasonably confident that the model is capturing many of the important processes. However, it’s still important to remember that all scientific models are simplified representations of reality that can never capture all of the complexity.
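Continuing the illustrative sketch above (and reusing the hypothetical sir_projection function), one common way to convey that uncertainty is to run the same scenario many times with the uncertain parameters sampled from plausible ranges, and report a range of outcomes rather than a single number:

```python
import numpy as np

# Suppose the transmission rate is only known roughly; sample it and
# collect the spread of projected peaks for a single scenario.
rng = np.random.default_rng(42)
peaks = [max(sir_projection(b)) for b in rng.normal(0.3, 0.05, size=500)]
lo, hi = np.percentile(peaks, [5, 95])
print(f"peak infected: roughly {lo:,.0f} to {hi:,.0f} (5th-95th percentile)")
```

Even for a single scenario, then, the honest answer is a range: “what could happen if…”, not “what will happen if…”.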
Another important aspect of using scientific models is sanity checking the results: do they make sense? It’s not clear that this has been done particularly well in the current context. James has been highlighting this in a number of his posts. Specifically, some of the leading researchers were still presenting numbers that no longer seemed reasonable; for example, suggesting that the lower limit to the number of deaths might be around 7000 when we were already pretty close to that number [edit: see update at the bottom of the post].
There’s probably a lot more that could be said, and I may return to this topic at a later stage. I think it’s important for people to recognise both the strengths and the limitations of scientific models. They can be very powerful tools, but they’re never going to perfectly represent reality. The scientists involved should be willing to acknowledge this and should, in my view, also be checking that their model results make sense. Decision makers should similarly be aware that scientific models have strengths and limitations; they can certainly guide decision making, but they can’t really dictate it. I don’t think this takes anything away from the usefulness of such models; it’s simply something that I think is important to recognise.
Update: as Steve Forden points out, there was a stage when the lower bound for the number of deaths (5000) was presented at the same time as the group was projecting that many deaths for the following week.