There was quite a lot of coverage last year about a paper by Millar et al., which suggested that the carbon budget that would keep warming below 1.5°C was greater than earlier estimates had indicated. I wrote about it here, here, and here.
One of the reasons for the difference between the Millar et al. carbon budget and other estimates was the manner in which we would determine when we had reached some climate target. In a new paper, Richardson, Cowtan and Millar look into this in some detail. Essentially, there can be differences between how we estimate global surface temperatures using models and how we do so from observations. From models, we would typically present an estimate based on global coverage and using surface air temperatures. When using observations, we typically mix surface air temperatures over land with sea surface temperatures over the oceans (air-sea blended). In addition, an observational dataset may have to switch from using air temperatures to sea surface temperatures in regions where sea ice has retreated (fully blended). Finally, some of the observational datasets do not cover the entire globe (blended-masked).

As shown in an earlier paper, estimates for surface warming depend on how the surface temperature is determined; if we were able to estimate surface temperatures using air temperatures over the whole globe, we would estimate more warming than if we were to use an air-sea blended dataset, which would in turn show more warming than one that is blended and suffers from coverage bias. The figure on the right (from Richardson et al.) shows how the difference depends on how the temperature dataset is constructed, and on the emission pathway that we actually follow.
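To make the distinction between these dataset constructions concrete, here is a toy sketch (not the Richardson et al. method, and with made-up anomaly values) of how a global mean changes depending on whether one uses air temperatures everywhere, blends in sea surface temperatures over the ocean, or additionally masks out unobserved regions:

```python
import numpy as np

# Four equal-area grid cells, each with a surface air temperature (SAT)
# anomaly and, over ocean, a sea surface temperature (SST) anomaly.
# The numbers are invented for illustration; SSTs are assumed to warm
# slightly less than the air above them, which drives the differences.
sat = np.array([1.2, 1.0, 0.9, 1.1])      # SAT anomaly in each cell (K)
sst = np.array([np.nan, 0.8, 0.7, 1.0])   # SST anomaly; NaN over land
is_ocean = ~np.isnan(sst)
observed = np.array([True, True, True, False])  # last cell has no coverage

# Global SAT: air temperatures everywhere (what models typically report).
global_sat = sat.mean()

# Air-sea blended: SAT over land, SST over ocean, full coverage.
blended = np.where(is_ocean, sst, sat).mean()

# Blended-masked: same blend, but averaged only over observed cells
# (mimicking a dataset with incomplete coverage, such as HadCRUT4).
blended_masked = np.where(is_ocean, sst, sat)[observed].mean()

print(global_sat, blended, blended_masked)
```

With these invented numbers the global-SAT estimate shows the most warming, the blended estimate less, and the blended-masked estimate least, which is the ordering described above.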
As a consequence of this, our carbon budget estimates (for example, the carbon budget that gives a 66% chance of staying below 2°C) also depend on how we determine global surface temperatures. If we're using a dataset like HadCRUT4, which is blended and suffers from coverage bias, then the budget will be about 60 GtC greater than if we were to determine it using surface air temperatures with global coverage. Equivalently, we would cross the threshold about 7-8 years later.
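The equivalence between the budget difference and the timing difference is simple arithmetic. The ~8 GtC/yr figure below is an assumption for illustration, chosen to be broadly consistent with recent global emissions:

```python
# Back-of-envelope check of the budget/timing equivalence in the text.
budget_difference_gtc = 60.0  # extra budget with a blended, masked dataset
annual_emissions_gtc = 8.0    # assumed current emission rate (GtC/yr)

years_gained = budget_difference_gtc / annual_emissions_gtc
print(round(years_gained, 1))  # → 7.5, i.e. roughly 7-8 years
```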
I think it’s very useful to have this all clarified, as it suggests that the Millar et al. result wasn’t really an indication of a problem with climate models (as some suggested) but mostly a consequence of the different ways in which global temperature datasets are constructed. It might also have been nice if we’d been a bit more careful about how we defined these various climate targets initially, but I suspect that many of the issues that seem obvious now weren’t when these targets were first suggested.
I also don’t think this really makes much difference in terms of what this implies. I had a brief chat with Glen Peters on Twitter, and one way to consider this is that if we stick with the original carbon budgets, but assume that the correct dataset is one that is both blended and masked, then we go from having a 66% chance of achieving the target, to an 80% chance; not exactly a massive change in the probability of success. Also, whatever carbon budget we use, there is very little left. At best, we’ve gone from “almost certainly won’t achieve this” to “maybe we can, if we try very hard”.
Global temperature definition affects achievement of long-term climate goals, by Richardson, Cowtan and Millar.
Emission budgets and pathways consistent with limiting warming to 1.5 °C, by Millar et al.
A bit more about carbon budgets, a post that discusses some of the possible carbon cycle implications of Millar et al. (which I don’t really discuss in the post above).