Christina Pagel and Kit Yates have an article in the British Medical Journal (BMJ) on the role of mathematical modelling in future pandemic response policy. It’s part of a BMJ series on the UK’s covid-19 inquiry. I have had some concerns about how an inquiry might reflect on the role of mathematical modelling in our response to the pandemic, and I think this article makes some important points that I really hope are considered when people assess the role of mathematical modelling, both in the response to the current pandemic and in future pandemics.
Mathematical models are really just representations of reality that will always include assumptions and simplifications. They allow us to consider various possible scenarios that can then be used to inform policy making. Models are also continually evolving as more information becomes available, as our understanding of the system improves, and as techniques and computational resources evolve. They’re clearly not perfect, but they are an extremely useful tool when trying to understand what might happen.
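To make this concrete, here’s a minimal sketch (my own illustration, not from the article) of the classic SIR compartmental model, one of the simplest epidemiological models. The assumptions are right there in the code: a well-mixed population, constant transmission and recovery rates, no births or deaths. The parameter values are invented for illustration; changing an assumed transmission rate gives a different scenario, which is exactly how such models are used to inform policy.

```python
# A deliberately simplified SIR epidemic model, integrated with a basic
# Euler scheme. Assumptions (homogeneous mixing, constant rates) are
# explicit simplifications of reality; parameter values are illustrative.

def sir(beta, gamma, s0=0.99, i0=0.01, days=300, dt=0.1):
    """Return (peak infected fraction, final total infected fraction).

    beta:  assumed transmission rate (per day)
    gamma: assumed recovery rate (per day)
    """
    s, i, r = s0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak, s0 + i0 - s  # everyone who was ever infected

# Two scenarios differing only in the assumed transmission rate:
peak_high, total_high = sir(beta=0.3, gamma=0.1)   # R0 = 3, unmitigated
peak_low, total_low = sir(beta=0.15, gamma=0.1)    # R0 = 1.5, e.g. with interventions

print(f"high transmission: peak {peak_high:.2f}, total infected {total_high:.2f}")
print(f"low transmission:  peak {peak_low:.2f}, total infected {total_low:.2f}")
```

The point of a toy like this is not prediction; it’s that comparing the two runs shows how a policy that reduces transmission changes both the epidemic peak and the final size, even though every number in the model is an assumption.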
Models can also be wrong. Sometimes this is a natural part of the development process, sometimes it’s because there isn’t enough data to constrain the model, and sometimes it’s because modellers make mistakes. Modellers can also have more confidence in their models, and in model output, than is actually warranted. They should be willing to admit when models are wrong, or when they make mistakes, but, like most humans, sometimes find this difficult.
The article also stresses that communication is key. This is partly to make clear the strengths and limitations of the models, but also to ensure that people understand how the models are being used. Models are often used to consider numerous scenarios, few of which will be close to what actually materialises. For example, if a model considers some kind of worst-case scenario, and the results are then used to inform policy so that we avoid that scenario, the model wasn’t somehow wrong.
Similarly, models might consider scenarios that cover a range of possible policy pathways. Again, that we don’t end up following pathways close to many of these scenarios doesn’t make these models wrong. In some sense, a model is never strictly right or wrong. What matters is how well it does when the pathway we actually follow is close to one of those considered by the model.
Of course, even if communication is taken seriously and done well, it’s still worth being aware that there are some who engage in bad faith and who will use supposed model failures to promote their agendas. There is little that can be done to avoid this, but this mostly highlights the importance of trying to communicate clearly about model strengths and weaknesses, the motivation behind the modelling, what assumptions were made, and which results we should regard as reliable.
One final point is that it can be important to highlight the different kinds of systems that can be modelled. As the article says, one issue with epidemiological modelling is the intrinsic inability of most models to capture important facets of human behaviour. This can limit how far into the future one can realistically model. There are other systems for which this is less of a problem, so one should be careful of assuming that a limitation that applies to one modelling situation applies to all situations.
As usual, I’ve said too much, so I encourage those who are interested to read Christina and Kit’s article.
Role of mathematical modelling in future pandemic response – BMJ article by Christina Pagel and Kit Yates.
Covid Inquiry – a series of articles in the BMJ about the UK’s covid-19 inquiry.