A group of us have just had a paper published in The British Medical Journal on the effect of school closures on mortality from the coronavirus disease. The coverage has been rather unfortunate, as it is being interpreted as supporting a herd immunity strategy, which is certainly not what (I think) we were trying to argue. I thought I would write this post in the hope of clarifying what we actually did.
The work was motivated by trying to explain some seemingly counter-intuitive results presented by the Imperial College group in mid-March, in a document often referred to as Report 9. Specifically, some of the scenarios presented in this report show that adding an intervention leads to more deaths than a similar scenario without that additional intervention. For example, if you look at Table A1, the model predicts that adding place closures (PC) to case isolation (CI), household quarantine (HQ) and social distancing of those over 70 (SDOL70) would ultimately lead to more deaths than if place closures had not been implemented. A similar effect occurs if you add general social distancing (SD) to a scenario with CI and HQ.
The reason for this counter-intuitive result is illustrated by the Figure on the right, which shows ICU bed demand for the scenarios presented in Report 9. Some of these produce a single wave of infections. In other cases, however, adding a new intervention substantially suppresses the first wave, but means that once the interventions are lifted you can get a second wave, which, if the most vulnerable are not suitably protected, could produce more deaths overall than the equivalent scenario without this additional intervention.
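This second-wave dynamic can be sketched with a toy SIR model. To be clear, this is nothing like the detailed CovidSim model behind Report 9, and every parameter below (R0, infectious period, intervention window, strength of suppression) is an illustrative assumption on my part:

```python
# Toy SIR model (Euler steps, dt = 1 day) illustrating how a temporary
# intervention can suppress a first wave but permit a second one.
# This is NOT the CovidSim model from Report 9; all parameters here
# are illustrative assumptions.

def run_sir(r0=2.4, infectious_days=5.0, days=500,
            intervention=None, reduction=0.75):
    """Simulate SIR; `intervention` is an optional (start_day, end_day)
    window during which transmission is cut by `reduction`."""
    gamma = 1.0 / infectious_days      # recovery rate
    beta0 = r0 * gamma                 # baseline transmission rate
    s, i, r = 1.0 - 1e-4, 1e-4, 0.0    # susceptible/infected/recovered fractions
    history = []
    for day in range(days):
        beta = beta0
        if intervention and intervention[0] <= day < intervention[1]:
            beta *= 1.0 - reduction    # intervention suppresses transmission
        new_inf = beta * s * i
        s, i, r = s - new_inf, i + new_inf - gamma * i, r + gamma * i
        history.append(i)
    return {"attack_rate": r, "peak_prevalence": max(history),
            "history": history}

baseline = run_sir()                           # single large wave
suppressed = run_sir(intervention=(20, 140))   # intervention, then lifted
```

In this sketch, prevalence stays low while the intervention is in place, but a second wave follows once it is lifted, so the final attack rate is only modestly reduced. Whether that translates into more or fewer deaths then depends on shielding of the vulnerable and on ICU capacity, neither of which a toy model like this represents.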
So, does this mean we should not have added some of the interventions? Well, for starters, these are model projections, none of which specifically match what we actually did. Also, as James Annan has pointed out quite forcefully on Twitter, some of the model parameters used in Report 9 were clearly not correct (i.e., the basic reproduction number, R0, was lower than we now know to be the case). We were mostly trying to understand why some of the results presented were counter-intuitive, rather than make any kind of specific prediction, or update what was presented in Report 9 in mid-March. The results may well be different if this were redone using updated parameters [Edit: I should have been clearer here. I mean the results presented in Report 9 might have been different, not that our results would have been different].
Additionally, in all of the scenarios presented in Report 9 where there was a single wave leading to herd immunity, the ICU bed demand and the total number of deaths far exceed what we actually experienced. If we had followed such a scenario, it would almost certainly have overwhelmed the healthcare system and been perceived as far too extreme. Hence, I don’t think that our paper specifically supports an argument against the lockdown (although people can, of course, make their own interpretations).
Does this mean that we should now follow some kind of herd immunity strategy? Again, these are model results, so one should bear that in mind when interpreting them. We did finish the paper by doing some comparisons with actual data, and the model does do well if you update the parameters (i.e., a higher R0 value and the epidemic starting sooner than suggested by Report 9). However, there are lots of things that the model doesn’t include. It doesn’t include the long-term impact on those who get infected and don’t die. It does suggest that limiting deaths would require properly shielding the vulnerable, but it doesn’t tell us if this is actually possible. It doesn’t tell us if we will actually develop immunity. There are many caveats that, I think, should be considered before drawing strong conclusions.
At the end of the day, what we were really trying to do was better understand the results presented in mid-March, which I think we’ve now done. There may well be implications to this, but I do think one should be cautious about drawing strong conclusions from a single study that was motivated more by trying to clarify what’s already been presented than by making any specific predictions.