More time … really?

A recent paper about [e]mission budgets and pathways consistent with limiting warming to 1.5 °C essentially argues that it is still possible to follow an emission pathway that will give us a good chance of keeping warming below 1.5°C. More specifically, if we can keep total emissions from 2015 onwards below 200–250 GtC (depending on what we do with regard to non-CO2 emissions), then we will have about a 66% chance of keeping warming below 1.5°C.

Unfortunately, this has been interpreted in some circles as suggesting that we can relax a bit because we have more time than we had previously thought. This was mainly based on previous analyses suggesting that the carbon budget that would keep warming below 1.5°C was only about 50 GtC (from 2015). Let’s be clear about something: we’re currently emitting 10 GtC per year. Whether the budget is 50 GtC or 250 GtC, we pretty much have to start reducing emissions now, and get them to zero as soon as we realistically can. Of course, if it is 250 GtC, that will be easier to achieve than if it is 50 GtC, but it doesn’t really change what we should do, assuming that we do want to achieve this target. Personally, I think the correct framing is: if this study is correct, and if we keep total emissions from 2015 onwards below 200–250 GtC, we might keep warming to below 1.5°C.
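As a rough illustration of why the larger budget doesn’t buy much breathing room, here is a minimal sketch using the numbers above (10 GtC per year and the 50/200/250 GtC budgets; the factor of two for a linear ramp to zero is just the area of a triangle, not a scenario from the paper):

```python
# Illustrative arithmetic only: how long each budget lasts at the roughly 10 GtC/yr
# we currently emit. The budgets (50, 200, 250 GtC from 2015) come from the post;
# the "linear ramp" factor of two is just the area of a triangle, not a scenario.
CURRENT_EMISSIONS = 10.0  # GtC per year

for budget in (50.0, 200.0, 250.0):  # GtC remaining from 2015
    constant = budget / CURRENT_EMISSIONS
    linear_ramp = 2.0 * constant  # emissions declining linearly to zero
    print(f"{budget:>5.0f} GtC: ~{constant:.0f} yr at constant emissions, "
          f"~{linear_ramp:.0f} yr if emissions ramp linearly to zero")
```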

However, I think there are some potential issues with this paper. One is that they’re assuming an 1861–1880 baseline from which they estimate the observed temperature change. There are arguments (here, for example) suggesting that to properly capture the warming we should use an earlier baseline, which would imply that we have already warmed more than a late-1800s baseline suggests. Hence, we may already be closer to 1.5°C than this paper indicates.

Credit: Millar et al. (2017)

Another potential issue is that a key factor in their analysis is a potential mismatch between the model warming and the observed warming (see figure on right). Their argument is that after emitting as much as we have to date, the models predict more warming than has been observed. Hence, the models are predicting a smaller carbon budget than may actually be the case.

One problem is that there have been a number of recent studies reconciling this supposed model/observation discrepancy. These include updating the forcings and doing a proper apples-to-apples comparison by using blended temperatures (climate model output is typically near-surface air temperature everywhere, while the observations combine air temperatures over land with sea-surface temperatures over the ocean). So, there may not even be as big a discrepancy as suggested in this paper.
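To make the blending point concrete, here is a minimal sketch with made-up anomaly values (the ocean fraction is roughly right; none of the numbers are taken from Millar et al. or from any observational dataset):

```python
# A toy version of the "blending" point, with made-up anomaly values: model output
# is usually near-surface air temperature (SAT) everywhere, while observational
# datasets blend SAT over land with sea-surface temperature (SST) over the ocean,
# which typically warms a little less. None of these numbers come from Millar et al.
ocean_fraction = 0.71     # roughly the ocean's share of the Earth's surface
sat_land = 1.2            # hypothetical model SAT anomaly over land (K)
sat_ocean = 0.9           # hypothetical model SAT anomaly over the ocean (K)
sst = 0.8                 # hypothetical model SST anomaly (K)

sat_only = (1 - ocean_fraction) * sat_land + ocean_fraction * sat_ocean
blended = (1 - ocean_fraction) * sat_land + ocean_fraction * sst

print(f"Global SAT anomaly:     {sat_only:.2f} K")
print(f"Blended (obs-like):     {blended:.2f} K")
print(f"Apparent 'discrepancy': {sat_only - blended:.2f} K")
```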

However, even if there is a discrepancy between the models and the observations, we still don’t know if this is because the models are really running too hot, or because some internal process (the pattern of sea surface warming, for example) has suppressed some of the forced warming. If the latter, then we’d expect the observations to catch up to the models at some point in the future and, hence, the initial model estimates for the carbon budget may not be too low.

So, I would certainly be cautious about assuming that the carbon budget is indeed as high as suggested by this new paper. However, in some sense it doesn’t make a great deal of difference. Even if the carbon budget that would give us a 66% chance of staying below 1.5°C is 250 GtC (or ~400 GtC for 2°C), achieving this is going to require pretty drastic emission cuts starting as soon as we possibly can. It certainly doesn’t, in my view, imply that we can now sit back, relax, and wait a few more years before seriously thinking about how to reduce our emissions.

Links:
Ambitious 1.5C Paris climate target is still possible, new analysis shows – Guardian.
Guest post: Why the 1.5C warming limit is not yet a geophysical impossibility – Carbon Brief.
Study saying climate change poses less of a threat than first thought ‘has been dangerously misinterpreted,’ academics warn – Evening Standard.
Did limiting global warming to 1.5C just get easier? – Climate Home.
Did 1.5C suddenly get easier? – Glen Peters.
Have scientists really admitted climate change sceptics are right? – The Independent.


Avoiding dangerous to catastrophic climate change

I haven’t really had much to say, hence the lack of posts. I still don’t, but I thought I would quickly highlight a recent paper by Xu and Ramanathan called Well below 2 °C: Mitigation strategies for avoiding dangerous to catastrophic climate changes. I haven’t really had a chance to read it as thoroughly as I should, but it’s open access (I think) so, if you’re interested, I’d encourage you to read it. It partly caught my eye because it uses terms like dangerous, catastrophic and even existential threat. Partly, these are their own definitions: >1.5°C is defined as dangerous, >3°C as catastrophic, and >5°C as potentially an existential threat. It also tries to take various uncertainties into account (such as carbon cycle feedbacks).

Credit: Xu and Ramanathan (2017)

I didn’t want to say too much, but did want to post the figure on the right. It shows the range of warming in the two scenarios that they regard as not including climate policies (essentially RCP6 and RCP8.5) and in a well-below 2°C scenario (WB2C). This is what I wanted to highlight. Even the well-below 2°C scenario has a roughly one-third chance of exceeding 2°C, and a non-negligible chance of exceeding 3°C.

If you consider the table in this Carbon Brief article, then a 66% chance of staying below 2°C would require emitting no more than another 1000 GtCO2 (272 GtC) from 2011. We’ve already emitted about 60 GtC since 2011, so we have about 200 GtC left. In other words, another 20 years at current emissions would use up this carbon budget. Put differently, we have to start reducing emissions pretty soon if we want to retain a 66% chance of keeping warming below 2°C. However, even if we do meet this carbon budget, we would still have a one-third chance of exceeding 2°C and a few percent chance of exceeding 3°C.
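The arithmetic in that paragraph is simple enough to write down explicitly. A minimal sketch, using only the numbers quoted above (the 44/12 factor is just the molecular-to-atomic mass ratio for converting CO2 to carbon):

```python
# Back-of-the-envelope version of the numbers in the paragraph above. The 1000 GtCO2
# budget is from the Carbon Brief table; 44/12 converts a mass of CO2 to a mass of C.
budget_gtco2 = 1000.0                      # from 2011, 66% chance of staying below 2C
budget_gtc = budget_gtco2 * 12.0 / 44.0    # ~272 GtC
emitted_since_2011 = 60.0                  # GtC, roughly
current_rate = 10.0                        # GtC per year

remaining = budget_gtc - emitted_since_2011
print(f"Remaining budget: ~{remaining:.0f} GtC")
print(f"Years left at current emissions: ~{remaining / current_rate:.0f}")
```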

I find this rather sobering and am not quite sure how to wrap this up. I don’t think we should focus on the possibility that everything could be much worse than we hope, but I also don’t think we should ignore it. What this means to me is that we should start taking this seriously: ultimately we will need to get net emissions to zero, and the sooner we start thinking of ways to do this, the less likely it is that we end up with an outcome we would rather have avoided.


It’s complicated, and it’s coupled

Matt Ridley, whose writings I’ve discussed before, has a new article in The Times called We are more than a match for hurricanes, which essentially argues that

[w]hether or not tropical storms are becoming fiercer, our growing wealth and ingenuity helps us to survive them

and that

[a]daptation is and always will be the way to survive storms.

In some sense the above is simplistically true; it would seem easier to deal with extreme weather if you’re wealthy than if you’re poor, and these storms will always happen, so some amount of adaptation is inevitable. However, the basic premise of the article (that adaptation policies [have] benefits over carbon-reduction policies) seems horribly flawed.

One obvious issue is that it’s not just about surviving extreme storms; it’s also about dealing with things like increases in the frequency and intensity of heatwaves and extreme precipitation events, continued and accelerating sea level rise, ocean acidification, and expansion of the Hadley Cells, which determine the locations of the tropical rain-belts and where we find deserts. These are all essentially linked, so the fact that we might be able to deal with some of the changes more easily than others isn’t really a very good argument for largely ignoring climate change and just aiming to get richer.

There is, however, in my view, a bigger issue with Ridley’s argument. It essentially seems to be that we should mostly ignore climate change, just get richer, and that fossil fuels are mostly better than any alternative. The problem is that if we drive economic growth by burning more and more fossil fuels and pumping more and more CO2 into the atmosphere, then the climate will continue to change and the impacts will get more and more severe.

If we were confident that economic growth would always outpace climate damages, then this might be a reasonable suggestion (although, maybe don’t suggest it right now to those who live on Caribbean islands). However, this is almost certainly not going to be the case. There is almost certainly a level of warming above which the impacts would be utterly catastrophic. We may not be able to define it precisely, but it is almost certainly within reach, either because we simply pump all the CO2 we can into the atmosphere, or because our climate is sensitive enough that we get there even if we don’t emit as much CO2 as we possibly could.

Given this, there must be an even lower level of warming at which climate damages start outpacing economic growth. So, suggesting that we can simply grow our way out of trouble seems wrong. If people don’t like the current policy, then the solution (in my view) is not to argue that we should essentially ignore climate change and simply grow, but to argue for something like a carbon tax.

The idea behind a carbon tax is to properly price CO2 emissions (future damages discounted to today) so that the market can determine the optimal pathway. In a sense, arguing that we should mostly ignore climate change and simply grow is arguing for a future pathway that is not only less efficient than one in which we take climate change into account, but one that potentially guarantees that we face the worst possible climate impacts. The argument seems so obviously flawed that I’m still amazed that people who regard themselves as intellectuals have the gall to make it.
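To illustrate the pricing idea, here is a toy discounting sketch. The damage stream and discount rates are invented purely to show the mechanics; this is not an estimate of the social cost of carbon, and real calculations use damage functions tied to emission and climate scenarios:

```python
# A toy illustration of the pricing idea: the charge on a tonne emitted today is, in
# principle, the stream of future damages it causes, discounted back to the present.
# The damage value and discount rates are invented to show the mechanics only; they
# are not estimates of the social cost of carbon.
def present_value(annual_damage, discount_rate, years):
    """Sum of a constant annual damage, discounted back to today."""
    return sum(annual_damage / (1.0 + discount_rate) ** t for t in range(1, years + 1))

hypothetical_damage = 2.0   # $ per tonne of CO2 per year, for 100 years (made up)
for rate in (0.01, 0.03, 0.05):
    price = present_value(hypothetical_damage, rate, 100)
    print(f"discount rate {rate:.0%}: implied carbon price ~ ${price:.0f} per tonne CO2")
```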

There’s always the chance that I’ve somehow misunderstood the general argument that Ridley is making (if so, it would seem pretty easy to misunderstand), but this is essentially pretty standard Ridley schtick (everything will be fine, just grow), so that does seem rather unlikely.


The Portable POMO

Five paragraphs from Michel Foucault ought to be enough to dig POMO. Let’s take those that start his concluding remarks to the Seminar Discourse and Truth: the Problematization of Parrhesia. Parrhesia refers to the act of speaking candidly:

My intention was not to deal with the problem of truth, but with the problem of truth-teller or truth-telling as an activity. By this I mean that, for me, it was not a question of analyzing the internal or external criteria that would enable the Greeks and Romans, or anyone else, to recognize whether a statement or proposition is true or not. At issue for me was rather the attempt to consider truth-telling as a specific activity, or as a role.

As you can see, Michel doesn’t deny that truth exists. It’s just not what he’s auditing. Truths need to be told, i.e. by someone, in a context, etc. So let’s skip to the third paragraph:

But, in fact, my intention was not to conduct a sociological description of the different possible roles for truth-tellers in different societies. What I wanted to analyze was how the truth-teller’s role was variously problematized in Greek philosophy. And what I wanted to show you was that if Greek philosophy has raised the question of truth from the point of view of the criteria for true statements and sound reasoning, this same Greek philosophy has also raised the problem of truth from the point of view of truth-telling as an activity. It has raised questions like: Who is able to tell the truth? What are the moral, the ethical, and the spiritual conditions which entitle someone to present himself as, and to be considered as, a truth-teller? About what topics is it important to tell the truth? (About the world? About nature? About the city? About behavior? About man? ) What are the consequences of telling the truth? What are its anticipated positive effects for the city, for the city’s rulers, for the individual, etc.? And finally: what is the relation between the activity of truth-telling and the exercise of power, or should these activities be completely independent and kept separate? Are they separable, or do they require one another? These four questions about truth-telling as an activity — who is able to tell the truth, about what, with what consequences, and with what relation to power — seem to have emerged as philosophical problems towards the end of the Fifth Century around Socrates, especially through his confrontations with the Sophists about politics, rhetorics, and ethics.

Forgive the many questions. It’s more than a tic; it’s a trick French students use to write dissertations. These questions help enumerate conditions for truth-telling. In the following paragraph, notice how Michel distinguishes the logical aspect of truth from the extra-logical aspect, which he calls the analytical and the critical:

And I would say that the problematization of truth which characterizes both the end of Presocratic philosophy and the beginning of the kind of philosophy which is still ours today, this problematization of truth has two sides, two major aspects. One side is concerned with insuring that the process of reasoning is correct in determining whether a statement is true (or concern itself with our ability to gain access to the truth). And the other side is concerned with the question: what is the importance for the individual and for the society of telling the truth, of knowing the truth, of having people who tell the truth, as well as knowing how to recognize them. With that side which is concerned with determining how to insure that a statement is true we have the roots of the great tradition in Western philosophy which I would like to call the “analytics of truth”. And on the other side, concerned with the question of the importance of telling the truth, knowing who is able to tell the truth, and knowing why we should tell the truth, we have the roots of what we could call the “critical” tradition in the West. And here you will recognize one of my targets in this seminar, namely, to construct a genealogy of the critical attitude in the Western philosophy. That constituted the general objective target of this seminar.

(The emphasized bit is recurrent in ClimateBall ™.) Note that Michel speaks of genealogy in a philosophical sense. No need to delve into that nuance, let’s stick to problematization. What’s that beast, you may ask? Wait for it:

From the methodological point of view, I would like to underscore the following theme. As you may have noticed, I utilized the word “problematization” frequently in this seminar without providing you with an explanation of its meaning. I told you very briefly that what I intended to analyze in most of my work was neither past people’s behavior (which is something that belongs to the field of social history), nor ideas in their representative values. What I tried to do from the beginning was to analyze the process of “problematization” — which means: how and why certain things (behavior, phenomena, processes) became a problem. Why, for example, certain forms of behavior were characterized and classified as “madness” while other similar forms were completely neglected at a given historical moment; the same thing for crime and delinquency, the same question of problematization for sexuality.

So POMO is minimally a (derogatory) term to designate any way of asking ourselves about the conditions by which some “things” became topical. While Kant asked himself about the a priori conditions for knowledge to be possible, POMOs ask themselves about all the conditions for “things” to become problems. That includes the concept of “thing,” it goes without saying, and even kinds of things.

This way of looking at problems provides great latitude. One can explore just about anything from any angle, to the point that such studies can read like novels. Sometimes, such freedom can lead to overweening verbosity or worse:

That shouldn’t always be the case.

Now you know POMO.


Prior knowledge

Something I have been bothered about for some time now is how we best discuss climate change in the context of extreme events. Given the devastation from Hurricanes Harvey and Irma, damaging floods in South Asia and Nigeria, and the potential for more damage from Hurricane Jose, you’d think there would be a clear way to discuss the relationship to climate change that was also consistent with the scientific evidence.

However, there is clear pressure – from some – to avoid discussing the link to climate change because that supposedly politicises catastrophic events (FWIW, I think this argument is itself – rather ironically – political). There are also those (who I probably don’t need to name) who continually point out that the IPCC is clear that we cannot yet detect an anthropogenic climate change signal in many of these events. The problem, though, is that we have some prior knowledge of how we expect climate change to influence these events, and waiting until it is obvious that it has done so before we take it seriously seems rather unsatisfactory.

This has been a somewhat lengthy introduction to what I wanted to mention, which is one of James Annan’s posts in which he discusses a new paper by Michael Mann, Elisabeth Lloyd, and Naomi Oreskes called Assessing climate change impacts on extreme weather events: the case for an alternative (Bayesian) approach. Their argument is that rather than using a frequentist approach when trying to assess climate change impacts on extreme weather events, a Bayesian approach should be used.

I’m not an expert in Bayesian analysis, so I’m not going to try to explain it. Instead, I’ll try to explain what I think is wrong with the frequentist approach. This is partly based on a couple of James’s other posts. Essentially, the standard frequentist approach is to assume some kind of null hypothesis and to only reject this if you detect a statistically significant signal in your data. In the climate change context, the null hypothesis would normally be that there has been no change in some type of event. If you do detect a change, then you can try to attribute it to anthropogenic influences (i.e., detection and attribution).

In some sense, it’s a two-step process: detect some signal and then perform some kind of attribution analysis. If you do not detect any signal, then you do not reject the null hypothesis that there has been no change, and the process stops before any attempt at attribution. The problem, though, is that we know that our climate has changed due to anthropogenic influences, and we can be pretty confident that this will have influenced, and will continue to influence, weather events. Therefore, the frequentist null hypothesis of no change is immediately wrong, and the frequentist test will regularly return a result that is essentially incorrect (i.e., we will conclude that there has been no change even though we are pretty confident that there must have been some kind of change).
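A toy example may help to show why this is unsatisfying. In the sketch below (entirely synthetic numbers, not any real dataset or any published attribution method), every simulated record contains a genuine trend, yet with a short record and realistic noise the standard significance test usually fails to reject the “no change” null:

```python
import numpy as np
from scipy import stats

# A toy demonstration of the low power of the "no change" null with short, noisy
# records. Every synthetic series below contains a genuine trend, yet the standard
# significance test usually fails to reject the null. All numbers are invented.
rng = np.random.default_rng(0)
true_trend = 0.02      # units per year: real, but modest
noise_sd = 1.0         # interannual variability
years = np.arange(30)  # a fairly short record

n_trials, rejections = 1000, 0
for _ in range(n_trials):
    series = true_trend * years + rng.normal(0.0, noise_sd, size=years.size)
    if stats.linregress(years, series).pvalue < 0.05:
        rejections += 1

print(f"'No change' rejected in only {rejections / n_trials:.0%} of trials, "
      "despite a genuine trend in every one.")
```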

As you may have already noticed, though, the obvious other problem is that being confident that there must be some kind of change does not – itself – tell us anything about how it probably did change. Events could get stronger, or weaker. Events could become more, or less, frequent. Maybe they’ll move and start to occur more in regions where they were once rare.

However, in many cases we do have prior knowledge/understanding of how these events will probably change under anthropogenically-driven warming. There will probably be an increase in the frequency and intensity of heatwaves and extreme precipitation events. We expect an increase in the intensity and frequency of the strongest tropical cyclones, even though we might expect a decrease in the overall number of tropical cyclones. It seems, therefore, that we should really be updating our prior knowledge/understanding, rather than simply assuming that we can’t say anything until a statistically significant signal emerges from the data.
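And here is a correspondingly minimal sketch of the Bayesian alternative, applied to the same kind of synthetic record: encode the physical expectation as a prior on the trend, update it with the data, and report a posterior probability rather than a reject/don’t-reject verdict. The prior, the data, and the one-parameter setup are all invented for illustration; this is not the analysis in the Mann, Lloyd and Oreskes paper:

```python
import numpy as np
from scipy import stats

# A minimal sketch of the Bayesian alternative on the same kind of synthetic record:
# encode the physical expectation as a prior on the trend, update it with the data,
# and report a posterior probability instead of a reject/don't-reject verdict.
# The prior, data and one-parameter setup are invented for illustration.
rng = np.random.default_rng(1)
true_trend, noise_sd = 0.02, 1.0
years = np.arange(30)
series = true_trend * years + rng.normal(0.0, noise_sd, size=years.size)

trend_grid = np.linspace(-0.1, 0.1, 2001)
prior = stats.norm.pdf(trend_grid, loc=0.02, scale=0.02)  # physically informed prior

# Likelihood of the record for each candidate trend (intercept fixed at zero to keep
# the sketch one-dimensional).
log_like = np.array([stats.norm.logpdf(series - b * years, 0.0, noise_sd).sum()
                     for b in trend_grid])
posterior = prior * np.exp(log_like - log_like.max())
posterior /= posterior.sum()

p_positive = posterior[trend_grid > 0].sum()
print(f"Posterior probability that the trend is positive: {p_positive:.2f}")
```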

So, given that we are confident that anthropogenic climate change is happening and that this will almost certainly influence extreme weather, using a technique that will return no change until we eventually have enough data to detect a change seems very unsatisfactory. That said, I don’t have a good sense of how to effectively, and properly, introduce a more Bayesian approach to discussing these events. If anyone has any suggestions, I’d be happy to hear them. Similarly, if anyone thinks I’m wrong, or am confused, about this, I’m also happy to hear that. I will add that this post was partly motivated by Michael Tobis’s post, which takes, I think, a slightly different line.

Links:
Assessing climate change impacts on extreme weather events: the case for an alternative (Bayesian) approach, by Mann, Lloyd and Oreskes.
More on Bayesian approaches to detection and attribution, by James Annan.
The inevitable failure of attribution, by James Annan.
Detection, attribution and estimation, by James Annan.
Neptune’s revenge, by Michael Tobis.

Update:
I had intended to highlight this Realclimate post by Rasmus Benestad, which makes a related argument: climate change has to impact the probability density function of weather events and, therefore, must impact the probability and intensity of those that are regarded as extreme.
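That shifted-distribution argument can be illustrated with a couple of lines of code. A minimal sketch with a Gaussian and arbitrary units (the one-sigma shift and two-sigma threshold are illustrative choices, not values from the Realclimate post):

```python
from scipy import stats

# A toy version of the shifted-distribution argument: move a Gaussian "weather"
# distribution by one standard deviation and the probability of exceeding a fixed
# two-sigma threshold rises disproportionately. Both values are illustrative only.
threshold = 2.0   # extreme threshold, in standard deviations of the old climate
shift = 1.0       # warming, in the same units

p_before = stats.norm.sf(threshold)            # exceedance probability, old climate
p_after = stats.norm.sf(threshold, loc=shift)  # same threshold, shifted climate
print(f"P(exceed) before: {p_before:.3f}, after: {p_after:.3f}, "
      f"ratio: {p_after / p_before:.1f}x")
```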


Beyond equilibrium climate sensitivity

Since I’ve written about climate sensitivity before, and since I have a few free moments, I thought I would briefly highlight a new paper by Knutti, Rugenstein, and Hegerl called Beyond Equilibrium Climate Sensitivity. It’s really a review of a large number of estimates for the Transient Climate Response (TCR) and the Equilibrium Climate Sensitivity (ECS).

In case you don’t know, the TCR is essentially how much we will have warmed when we’ve doubled atmospheric CO2, and the ECS is how much we will eventually warm once the system has returned to equilibrium after a doubling of atmospheric CO2 (technically, these are model metrics and consider fast feedbacks only, but let’s ignore those details for now). There are a large number of different estimates for the TCR and ECS, and the paper doesn’t really try to reconcile them; it simply discusses the various methods.

Essentially, there is reasonable agreement between the various estimates for the TCR; most are consistent with the likely range of 1°C to 2.5°C and suggest that it is extremely unlikely to be above 3°C. There is some disagreement amongst the estimates for the ECS, but this is mostly due to those that use the observed warming. The methods that use the observed warming essentially assume that the feedback response will remain constant as we warm to equilibrium; there now seems to be a reasonable amount of agreement that this is unlikely to be the case and that we will likely warm more as we approach equilibrium than we did initially.
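For context, the observed-warming (energy budget) approach amounts to a one-line calculation. A minimal sketch with round, illustrative numbers (not the values from any particular study), assuming a single constant feedback:

```python
# A minimal sketch of the observed-warming (energy budget) method, which assumes a
# single constant feedback: ECS_eff = F_2x * dT / (dF - dN). The numbers are round,
# illustrative values, not those of any particular study.
F_2x = 3.7      # forcing from a doubling of CO2 (W/m^2)
delta_T = 0.9   # warming since the late 1800s (K)
delta_F = 2.3   # change in radiative forcing over the same period (W/m^2)
delta_N = 0.7   # current top-of-atmosphere imbalance (W/m^2)

ecs_eff = F_2x * delta_T / (delta_F - delta_N)
print(f"Effective climate sensitivity ~ {ecs_eff:.1f} K")
# If feedbacks strengthen as the warming pattern evolves, the true equilibrium
# sensitivity will sit above this constant-feedback estimate.
```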

Credit: Knutti et al. (2017)

This is illustrated quite nicely by the figure on the right, which shows the surface temperature anomaly on the x-axis and the top-of-the-atmosphere radiative imbalance on the y-axis. The black line is the case in which we assume feedbacks remain constant; this produces what is typically referred to as the Effective Climate Sensitivity. We expect, however, that temperature-dependent feedbacks and the pattern of the warming could lead to more warming in future than we would expect based on an assumption of constant feedbacks; this will eventually lead to the Equilibrium Climate Sensitivity being larger than the Effective Climate Sensitivity. There are other factors – internal variability, the base-state climate, the magnitude of the forcing, and what produces the change in forcing – that could also influence the overall warming. On top of this, there are slower feedbacks that will ultimately further amplify the warming, producing the Earth System Sensitivity.
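To see how a changing feedback produces this gap, here is a toy version of the figure’s message (the forcing and the linear weakening of the feedback parameter are invented; real models and observations are far messier):

```python
import numpy as np

# A toy version of the figure's message: if the feedback parameter weakens as the
# surface warms, extrapolating the early, near-linear part of the N-vs-T relation to
# N = 0 (the "effective" sensitivity) undershoots the eventual equilibrium warming.
# The forcing and feedback values are invented for illustration.
F = 3.7                            # constant forcing, W/m^2 (roughly a CO2 doubling)
T = np.linspace(0.0, 5.0, 500)     # surface warming, K
lam = 1.5 - 0.15 * T               # feedback parameter weakening with warming
N = F - lam * T                    # top-of-atmosphere imbalance along the path

# Linear fit to the early part of the path, as a constant-feedback estimate would do.
early = T < 1.0
slope, intercept = np.polyfit(T[early], N[early], 1)
T_effective = -intercept / slope   # where the fitted line crosses N = 0

# "True" equilibrium: where N actually reaches zero along the path.
T_equilibrium = T[np.argmin(np.abs(N))]

print(f"Effective sensitivity estimate: {T_effective:.1f} K")
print(f"Equilibrium warming:            {T_equilibrium:.1f} K")
```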

Okay, I’m running out of time, as I have to head out to dinner with some colleagues, so I will just wrap up by quoting some of the conclusions of the paper:

Our overall assessment of ECS and TCR is broadly consistent with the IPCC’s, but concerns arise about estimates of ECS from the historical period that assume constant feedbacks, raising serious questions to what extent ECS values less than 2 °C are consistent with current physical understanding of climate feedbacks. A value of around 3 °C is most likely given the combined evidence and the recognition that feedbacks change over time.

The argument that an ECS value of less than 2°C is inconsistent with our physical understanding of climate feedbacks is presented nicely in a video by Andrew Dessler, which I included in this post. I also wanted to quote from the paper’s abstract, which ends with

Newer metrics relating global warming directly to the total emitted CO2 show that in order to keep warming to within 2 °C, future CO2 emissions have to remain strongly limited, irrespective of climate sensitivity being at the high or low end.

What this is referring to is the Transient Climate Response to Emissions (TCRE), which I discuss in this post. This attempts to include both radiative feedbacks and carbon cycle feedbacks, and suggests that warming depends linearly on cumulative emissions. Essentially, this suggests that if we want to keep warming below 2°C then – approximately – the total amount we can emit in future is less than we’ve emitted so far. I think this is quite an important metric that probably isn’t discussed enough, but since I need to rush out, I won’t say any more about it now.
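As a rough sketch of how a TCRE-style estimate works (the TCRE value and cumulative emissions below are round numbers towards the upper-middle of commonly quoted ranges, not figures taken from Knutti et al.):

```python
# A rough TCRE-style calculation. The TCRE value and cumulative emissions are round
# numbers towards the upper-middle of commonly quoted ranges, not figures from
# Knutti et al.
TCRE = 1.8 / 1000.0       # K of warming per GtC of cumulative emissions
emitted_so_far = 620.0    # GtC, roughly, since the industrial revolution
target = 2.0              # K

warming_so_far = TCRE * emitted_so_far
remaining_budget = (target - warming_so_far) / TCRE
print(f"Warming implied so far: ~{warming_so_far:.1f} K")
print(f"Remaining budget for {target:.0f} K: ~{remaining_budget:.0f} GtC "
      f"(vs ~{emitted_so_far:.0f} GtC already emitted)")
```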


Climate model tuning

I wrote a post about model tuning that discussed a paper arguing for more transparency in how climate models are tuned. Gavin Schmidt, and colleagues, have now published a paper that discusses the Practice and philosophy of climate model tuning across six US modeling centers. The paper is a bit long, but it’s well-written and easy to read, so I would encourage you to read it (if interested) and I’ll try not to say too much.

Probably a key point is why you need to tune these models in the first place. Well, they’re certainly based on basic physics, but they’re sufficiently complex that you can’t model everything from anything close to first principles. This means that some processes are parametrised and, in some cases, the parameters are not well constrained. This requires tuning these parameters so that the model matches some pre-defined emergent constraints.
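As a cartoon of what this involves (not any actual model’s tuning scheme; the “cloud parameter”, the albedo dependence, and the target imbalance are all invented for illustration): adjust a poorly constrained parameter until a chosen emergent quantity, here the top-of-atmosphere imbalance, matches its target.

```python
from scipy.optimize import brentq

# A cartoon of parameter tuning, not any actual model's scheme: a single made-up
# "cloud parameter" controls planetary albedo, and we adjust it until the
# top-of-atmosphere (TOA) imbalance matches a pre-defined target. Numbers invented.
SOLAR_IN = 340.0         # global-mean incoming solar, W/m^2
OLR = 239.0              # outgoing longwave radiation of this toy climate, W/m^2
TARGET_IMBALANCE = 0.7   # W/m^2, the tuning target

def toa_imbalance(cloud_param):
    """Absorbed solar minus outgoing longwave for a given value of the parameter."""
    albedo = 0.25 + 0.1 * cloud_param   # toy dependence of albedo on the parameter
    return SOLAR_IN * (1.0 - albedo) - OLR

# Root-find the parameter value at which the imbalance hits the target.
tuned = brentq(lambda p: toa_imbalance(p) - TARGET_IMBALANCE, 0.0, 1.0)
print(f"Tuned cloud parameter: {tuned:.2f}, imbalance: {toa_imbalance(tuned):.2f} W/m^2")
```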

A common claim, however, is that the models are then tuned either to match the 20th-century warming or to produce specific climate sensitivities. These, though, are not amongst the tuning targets used. As the paper says:

None of the models described here use the temperature trend over the historical period directly as a tuning target, nor are any of the models tuned to set climate sensitivity to some preexisting assumption.

Most of them do, however, tune for a radiative imbalance, either during pre-industrial times (PI) or at present day (PD), or tune for the aerosol forcing or the aerosol indirect effect. A summary of the tuning criteria for the six US models is given in a table in the paper.


Even though the tuning does not explicitly tune to something like climate sensitivity, there are some indications that there might be some implicit tuning. For example

However, analysis of the CMIP3 ensemble (Kiehl, 2007; Knutti, 2008) suggested that there may have been some kind of implicit tuning related to aerosol forcing and climate sensitivity among a subset of models with models with higher sensitivity having a tendency to have higher (more negative) aerosol forcing

The correlation is, however, rather low and this is less evident for CMIP5.

Having started writing this, I’ve also just noticed that James has a post in which he suggests that, even though groups certainly don’t re-run their models and tune parameters until they get a good fit to the 20th century, some have certainly made adjustments/updates if they know that the fit is poor.

I guess the basic message is that this is complicated and, although there certainly isn’t any explicit tuning to the 20th-century trend or to some specific climate sensitivity, subjective choices and expert judgement can have an impact on these emergent properties. Having said that, what they explicitly tune to – in many cases, a radiative imbalance – seems quite reasonable to me, since this is a key factor that indicates the net amount of energy being accrued by the system.

The paper ends with what seems like quite a sensible suggestion:

we recommend that all future model description papers …. include a list of tuned-for targets and monitored diagnostics and describe clearly [] their use of historical trends and imbalances in the development process.

As I said at the beginning, if you want to know any more, it’s probably best to read the paper (another link below).

Links:
Tuning to the global mean temperature record, by Isaac Held.
Practice and philosophy of climate model tuning across six US modeling centers, by James Annan.
Practice and philosophy of climate model tuning across six US modeling centers, by Schmidt et al.
