I’m currently in Oxford for a meeting and, having spent most of the train ride working on a book chapter I’m writing, I thought I would now spend some time writing a quick post about the recent Schurer et al. paper, Interpretations of the Paris climate target. It’s essentially a response to the Millar et al. paper that I’ve discussed in a number of recent posts.

The key result is probably illustrated by the figure on the right. It shows how different ways of treating the observations influence how close we might appear to be to a target (in this case 1.5°C). For example, the top panel shows HadCRUT4 and how close it is to 1.5°C based on the standard dataset (blue), a version that corrects for HadCRUT4 being a combination of surface air temperatures (SATs) and sea surface temperatures (SSTs) (green), a version that changes the pre-industrial baseline for the standard HadCRUT4 dataset (yellow), and one that does the same but for a case where it is all SATs (purple). The middle panel does the same as the top panel, but corrects for HadCRUT4’s coverage bias (i.e., makes it global). The bottom panel is the same, but for the Cowtan and Way dataset.
Essentially, if we use observational datasets to infer how close we are to a target, the answer will depend on how that dataset is constructed (SSTs + SATs, coverage) and on the assumed baseline. There are a number of reasons why it’s important to understand this effect. For example, Millar et al. claimed that we’d warmed less than expected, given how much we’ve emitted, and that we therefore have a larger remaining carbon budget than had been realised. However, this difference was (as I understand it) mostly because the observations were blended (SSTs + SATs) and suffered from coverage bias, while the model used to estimate how much we should have warmed was based on global coverage and SATs. If the comparison had been like-for-like, then the difference would have been much smaller.
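To make the like-for-like point concrete, here is a minimal sketch in Python. All the numbers are purely illustrative (they are not taken from Schurer et al., Millar et al., or any real dataset); the only point is the direction of the effect: blending SSTs with SATs, and masking out fast-warming poorly observed regions, both lower the apparent warming relative to a global, all-SAT measure.

```python
# Illustrative sketch only -- all values below are hypothetical, chosen just
# to show the direction of the blending and coverage effects.

# Hypothetical warming since some baseline (degrees C):
sat_warming_land = 1.5    # surface air temperature over land
sat_warming_ocean = 1.0   # surface air temperature over ocean
sst_warming_ocean = 0.9   # sea surface temperature (assumed to warm less than SAT)

land_fraction = 0.29      # approximate global land fraction

# "Model-like" measure: global coverage, SATs everywhere.
global_sat = (land_fraction * sat_warming_land
              + (1 - land_fraction) * sat_warming_ocean)

# "HadCRUT4-like" measure: SATs over land blended with SSTs over ocean.
blended = (land_fraction * sat_warming_land
           + (1 - land_fraction) * sst_warming_ocean)

# Coverage bias: if poorly observed regions (e.g. the Arctic) warm fastest,
# masking them out lowers the average by some amount (again, illustrative).
coverage_penalty = 0.05
blended_masked = blended - coverage_penalty

print(f"global SAT:       {global_sat:.3f} C")
print(f"blended:          {blended:.3f} C")
print(f"blended + masked: {blended_masked:.3f} C")
```

With these made-up numbers, the blended-and-masked estimate sits below the global-SAT estimate, so comparing it against a global-SAT model result would make it look as though we’d warmed less than expected, which is essentially the comparison issue described above.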
Additionally, if you’re going to estimate some level of warming at which impacts become sufficiently severe that we should aim to keep below it, then the observations used to determine how close we are to that target should be consistent with what was used to determine the target. If the latter was determined using global coverage and SATs, then either an equivalent observational dataset should be used, or the target should be corrected to account for the form of the observations (blended and masked, for example).
Ultimately, however, I’m not entirely sure I quite get the fuss. I think this is an interesting scientific puzzle, and it’s useful to understand why there are these differences. However, whether the target is 1.5°C or 2°C, achieving it is going to be difficult even if we do have a few tenths of a degree more to go than we had realised. It’s essentially still: start reducing emissions as soon as possible, and reduce them as fast as we can.