Climate Change – The Facts

I watched BBC One’s Climate Change – The Facts, narrated by David Attenborough. A pity about the title, as it’s the same as that of a book whose authors include Anthony Watts, Nigel Lawson and James Delingpole, but I thought it was pretty good. A good range of researchers and commentators were interviewed, and they presented pretty clear explanations of what’s happening and what we can expect. I probably don’t need to summarise it, as most of my readers will already know what was presented. It also ended with a positive message that it’s not too late to act, and suggested some things that we could do: transform our energy infrastructure, fly less, eat less meat, eat local, etc.

The thing I found interesting, though, is that it seemed essentially to follow the deficit model. The goal was clearly to present information to convince people that this is a threat we need to face. David Attenborough explicitly said that the better we understand the threat we face, the more likely it is that we can avoid such a catastrophic future. Yes, there were emotive images, and a positive message, but the key underlying idea was clearly to present information about anthropogenically-driven climate change and what could happen if we don’t act soon.

Anyway, I thought it was a good programme, and it’s good to see it presented on the BBC and getting so much coverage and attention. I’m now watching Age of Stupid, which came out in 2009 and is now starting to seem somewhat prophetic, although we don’t yet seem to have taken much notice.


Six years

I’ve just been reminded by WordPress that this is the sixth anniversary of me starting this blog. I’m somewhat amazed that I’ve kept it going that long. I am, however, finding it more and more difficult to find things to write about, which is partly why it’s been a lot less active recently than it has been in the past.

I have, however, learned an awful lot while writing this blog. I’ve read about, and thought about, topics that I probably wouldn’t have considered otherwise. I’ve had interesting discussions with many people, and still enjoy these discussions. I’ve also learned a lot from other people. I’ve, on occasion, had less than pleasant discussions, but these seem to be less common than they once were. This might be partly because the topic is slightly less contentious, but is probably also because people with different views are talking less to each other, which is a little disappointing even if it does make life more pleasant.

I want to thank those who’ve been supportive and those who contribute to the still quite active comment threads. I also want to thank those who’ve contributed guest posts, and those who’ve allowed their discussions with Willard to be posted here. I don’t know how active the blog will be in the coming year. As usual, I plan to simply play things by ear and to write what I feel like writing about, which can vary somewhat from time to time.


Post-normal science

I’ve been reading a paper by Daniel Sarewitz that was highlighted by Jane Flegal on Twitter. The paper is called Of Cold Mice and Isotopes Or Should We Do Less Science? There’s quite a lot that could be said about the article, but since I’m trying to keep posts reasonably short, I thought I would comment on one thing.

The article says:

People who care about the quality and legitimacy of science could start insisting at every chance that science conducted and invoked in the post-normal context is not science. Post-normal science is easy to spot. When experts continue to disagree; when advocates continue to use science to advance value-based agendas and to accuse those they disagree with of misusing science; when decision makers don’t take action on urgent issues but call for more research; when action means that there will be winners and losers; when the quality of the science cannot be measured against any agreed-upon end-point—then, no matter how sophisticated the math or complex the scientific instruments, no matter how pure of motive and careful of method the scientists, it’s NOT science, and we should all say so.

Of course, the process of making decisions is not science. Also, the relationship between science and decision making is extremely complex; there isn’t a simple, linear process in which scientific information leads directly to an obvious outcome. However, I have a real problem with the idea that we should regard science conducted in the post-normal context as not science. The validity of some scientific research should not depend on its broader relevance.

The other problem is that anytime science suggests something inconvenient, all you need to do is find some experts who disagree, highlight the value-based agendas, point out that decision makers aren’t taking action, claim there will be winners and losers, and fail to agree upon any end point that would measure the quality of the science. If you can do this, then we’re all meant to say that this is NOT science.

This seems like a cop-out to me. What would be far more useful would be ways to assess the credibility of the underlying scientific information. We could develop methods for determining when expert disagreement is actually significant, rather than simply being a small minority who refuse to accept the most recent scientific evidence. We could even try to determine if value-based agendas have influenced the scientific research process in some substantive way. All of this would seem useful. Simplistic scenarios under which we’re meant to stress that some science is NOT science do not seem particularly useful.

Also, why should we judge science on the basis of whether or not decision makers are taking action and if there will be winners and losers? Decision making is complex and shouldn’t really influence how we value the underlying information. Additionally, why should the quality depend on some agreed-upon end-point? There may be some truth to this when it comes to applied research, but a key aspect of fundamental research is that we can’t know the outcome in advance.

Of course, there may well be subtleties that I don’t understand, but if I am reading this right, then I disagree quite strongly with what is being suggested. Rather than helping society better understand how to utilise science in the decision making process, it seems to be providing a mechanism for avoiding making difficult decisions, or for validating information that could be severely lacking in credibility. I fail to see how this could be regarded as progress.


Models and scenarios

I was following, or trying to follow, a Twitter discussion about models and scenarios. It was – I think – about models that forecast technology development, and you can find it here if you’re interested. I didn’t entirely follow it, but my impression was that the suggestion was that if the scenario on which you were basing your model was unrealistic, then what you infer from your model could be very wrong. For example, if your baseline assumptions don’t properly reflect current policy, then you might infer a greater benefit to some action than is actually likely.

What I didn’t quite get is the reason for this. In many physical models, you can still infer something about how the system will respond to some perturbation, even if the underlying model does not capture the full complexity of the system. Of course, there are limits to this, but it is quite common to use relatively simple models to try and understand how a physical system will evolve.

So, is the problem with these forecast models that the system is so sensitive to the underlying conditions that, if these don’t properly represent our current conditions, you really can’t say much about how the system will respond to changes? In other words, is it related to the lack of structural constancy that Jonathan Koomey discusses in this paper?

Alternatively, is it that people are not being clear about the limitations of their analyses? For example, we can’t use climate models to forecast the weather many years into the future, but we can use them to say something about how the climate will probably change if we perturb the atmospheric CO2 concentration by some amount.

I don’t actually know where I’m going with this. I didn’t completely follow the discussion and couldn’t quite tell what the actual problem was. It did seem, though, that the suggestion was that the underlying scenarios had to properly represent our current policy landscape and I found that slightly surprising. I’m much more used to the idea that one can use simple models to try and understand how a system responds to changes, without requiring that the model fully represents the complexity of the system being considered.

My concern would be that if the model is extremely sensitive to the underlying scenarios, then it would seem very difficult to be confident in the model results. As I’ve already said, though, I may have misunderstood what was being suggested, so would be pleased to have this clarified.


Existential threat?

I had a discussion with someone recently who asked if climate change really was an existential threat for humans. I responded that it wasn’t. However, I added that this didn’t mean that it couldn’t be severely disruptive or that it meant that we couldn’t be creating an environment that we’d find very hostile, both in terms of the climate we could experience and the damage to the natural ecosystems on which we rely.

I find myself in a bit of a quandary, because I think the Extinction Rebellion type narrative is too extreme, but I also find myself getting irritated by those who seem to suggest that we should simply rely on our current political/economic toolkits. I think that even if climate change is not a true existential threat, it’s still a very different type of threat from almost anything else we’ve faced before. I don’t really feel that our current political/economic environment is well suited to dealing with it, or – if it is – it clearly hasn’t demonstrated this yet.

A common narrative seems to be that we should be aiming for policy that is palatable; anything too ambitious either won’t be successful, or might do more harm than good. The problem I have is that we can often recover from poor policies. This isn’t the case for climate change, as I try to argue in this post. So, why do we seem so reluctant to enact ambitious climate policy, while appearing not too bothered about substantially changing the very thing that is vital to our existence on this planet: our climate?

To me, there are three main reasons why climate change presents a threat that is unlike anything we’ve really faced before:

  • It’s irreversible on human timescales: Without some kind of as yet undeveloped negative emission technology, climate change is essentially irreversible on relevant timescales. At any instant in time, the best we can do is stop it from getting any worse. In fact, given that we can’t immediately halt all emissions, we can’t even do this. So, if we get to the point where the impacts are now obviously severely negative, all we can do is bring emissions down as fast as we possibly can, and try to avoid it getting too much worse.
  • It could be large: We have the potential to emit enough to increase global average surface temperatures by more than 4°C, and more than double this in places like the Arctic. This is a substantial change in temperature, similar to the difference between a glacial, when mile-thick ice sheets covered parts of North America and Europe, and an inter-glacial, when the only ice sheet in the Northern Hemisphere is the Greenland ice sheet. Global warming of more than 4°C will almost certainly be a substantial change to our climate, and it seems very unlikely that the impacts of such a change won’t be severely negative.
  • It’s fast: In the past, most comparable changes to our climate happened over periods of thousands of years. The warming/cooling during the Milankovitch cycles typically took a few thousand years. The CO2 release associated with the Paleocene-Eocene Thermal Maximum (PETM) is also thought to have taken thousands of years. We’re doing this on a timescale of ~100 years. Not only does this mean that it will be more difficult for ecosystems to respond, and survive, but we’re rapidly perturbing a complex, non-linear system. We shouldn’t be surprised if something unexpected occurs.

So, as much as I think that some of the current narratives are too extreme, I also think that quite a lot of the mainstream narratives are too relaxed and seem to imply that we shouldn’t do anything too ambitious. I really do hope that we can effectively address this by just relying on mainstream politics and economics. I’m becoming less and less convinced that we can.


An updated Bayesian climate sensitivity estimate

I thought I would update my Bayesian climate sensitivity estimate, given the comments I received (peer review in action). Based on James’s comment, I’ve removed the noise term and am now using the aerosol forcing as the forcing uncertainty. Based on Paul’s comment, I’m using the CMIP5 RCP forcing data, which you can get from here (specifically, I’ve used the RCP6 forcing data).

I’m still using the Cowtan and Way global surface temperature data, which you can get from here, but I’m using the full HadCRUT4 uncertainties, which you can access here. I’m also still using the 0-2000m Ocean Heat Content (OHC) data from Zanna et al. (2019), which you can get here. I’ve doubled the OHC uncertainties.

Just to remind people, I’m using a simple two-box model:

C_1 \dfrac{d T}{dt} = F(t) - \beta T(t) - \gamma \left[T(t) - T_o(t) \right],

C_2 \dfrac{d T_o}{dt} = \gamma \left[ T(t) - T_o(t) \right],

where the upper box is the ocean’s mixed layer and atmosphere, and the lower box is the ocean down to 2000m. I’m going to be lazy and not describe all the parameters and variables, which are described in my previous post. I’m fitting my model using Markov Chain Monte Carlo, which I’m doing using a python package called emcee.
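For anyone who wants to play with something similar, below is a minimal sketch of how a two-box model like this can be stepped forward in time. The function, parameter values, and the assumption of annual forcing steps are purely illustrative; this isn’t the exact code behind the fit.

```python
import numpy as np

def two_box(F, beta, gamma, C1, C2, To_init=0.0, dt=1.0):
    """Euler integration of the two-box model.

    F      : radiative forcing time series (W m^-2), one value per step
    beta   : feedback parameter (W m^-2 K^-1)
    gamma  : upper/deep-box exchange coefficient (W m^-2 K^-1)
    C1, C2 : box heat capacities (W yr m^-2 K^-1, so that dt is in years)
    """
    T = np.zeros(len(F))   # upper box: ocean mixed layer + atmosphere
    To = np.zeros(len(F))  # lower box: ocean down to 2000m
    To[0] = To_init
    for i in range(1, len(F)):
        exchange = gamma * (T[i-1] - To[i-1])
        T[i] = T[i-1] + dt * (F[i-1] - beta * T[i-1] - exchange) / C1
        To[i] = To[i-1] + dt * exchange / C2
    return T, To

# Illustrative run: a linear forcing ramp reaching 2.5 W m^-2 after 150 years
F = np.linspace(0.0, 2.5, 150)
T, To = two_box(F, beta=1.5, gamma=0.7, C1=4.0, C2=100.0)
```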

The figure on the top right shows the fit to the surface temperature data, while the figure on the bottom right shows the fit to the 0-2000m OHC data. In both cases, the orange curve is the median result, while the grey lines sample the range.

Below is a corner plot showing the resulting distributions for the parameters. The parameter \beta is essentially climate sensitivity, \theta_1 is an initial temperature difference between the two boxes, \gamma represents the exchange of energy between the two boxes, and C_1 is the heat capacity of the upper box.

The table below shows the results for the Equilibrium Climate Sensitivity (ECS), Transient Climate Response (TCR), TCR-to-ECS ratio, and the heat capacity of the upper box. Since the C_1 value seemed a little high (the median is equivalent to about 150m of ocean), I repeated the analysis using a fixed C_1 = 2 (67m). I should also make clear that the ECS here is really an effective climate sensitivity, because the model assumes a constant \beta.

Parameter                     15th percentile   median   84th percentile
ECS (K)                            1.92           2.18        2.47
ECS (K) – C_1 = 2                  2.06           2.25        2.47
TCR (K)                            1.55           1.74        1.94
TCR (K) – C_1 = 2                  1.48           1.59        1.73
TCR-to-ECS ratio                   0.75           0.80        0.85
TCR-to-ECS ratio – C_1 = 2         0.68           0.71        0.75
C_1                                3.85           4.73        5.70

This updated analysis has narrowed the range for both the ECS and TCR, and brought the upper end down somewhat. However, the median estimate for the ECS is still above 2K, and the lower limit (15th percentile) is still close to 2K. The figures on the right show the resulting ECS and TCR distributions.

Having now updated my analysis, I will probably stop here. I do have some other work I need to focus on. James has suggested that he is working on a similar analysis, so it will be interesting to see the results from this work and how it compares to what I’ve presented here.

Update:
As per Peter’s comment, I’ve redone the analysis using Lijing Cheng’s OHC data, which starts in 1940. I’ve also assumed a constant heat capacity for the upper box of C_1 = 2. Below are the figures and a table showing the results. I’ve also just realised that I forgot to correct the units on the OHC plot; it should be J, not ZJ.

Parameter          15th percentile   median   84th percentile
ECS (K)                 2.08           2.36        2.64
TCR (K)                 1.44           1.58        1.72
TCR-to-ECS ratio        0.63           0.67        0.71

Update number 2:
I had forgotten that I meant to mention that I’d had an email from Philip Goodwin, who has also done similar analyses; for example, in this paper, which I discussed in this post. There is also a recent paper by Skeie et al. (2018) that uses MCMC, as does this paper by Bodman and Jones.


An attempt to do a Bayesian estimate of climate sensitivity

Update (02/04/2019): I’ve updated this in a new post. The updated result suggests a slightly lower climate sensitivity and a narrower range. The main difference is – I think – how I was handling the forcing uncertainty. In this post, I was simply using some fraction of the total forcing, while a more appropriate thing to do is to use the aerosol forcing, which is what I’ve done in the updated analysis.

I’ve been spending some time working on a Bayesian estimate for climate sensitivity. This is somewhat preliminary and a bit simplistic, but I thought I would post what I’ve done. Essentially, I’ve used the Markov Chain Monte Carlo method to fit a simple climate model to both the surface temperature data and the ocean heat content data.

Specifically, I’m using a simple two-box model which can be written as

C_1 \dfrac{dT}{dt} = F(t) - \beta T - \gamma (T - T_o) + \epsilon

C_2 \dfrac{dT_o}{dt} = \gamma (T - T_o).

In the above, C_1 is the heat capacity of the upper box (ocean mixed layer and atmosphere), T is the temperature of this box, C_2 is the heat capacity of the lower box (deep ocean), and T_o is this box’s temperature. The term \beta is essentially climate sensitivity, \gamma determines the exchange of energy between the two boxes, and \epsilon is a noise term that I’ve added.

In the above equations, F(t) is the radiative forcing. Unfortunately, I can’t seem to work out where I got this data from, but I will update this when I remember. Any forcing dataset would work, though. The term T in the top equation is the global surface temperature anomaly. I used the Cowtan and Way data, which you can access here. To complete this, I also needed an ocean heat content dataset. Laure Zanna very kindly sent the data from her recent paper, which can also be downloaded from here.

A couple of other things. I couldn’t find a forcing dataset that included uncertainties, so I assumed a 1\sigma uncertainty of 25%. I also initially had trouble getting a decent fit between the model and the temperature and ocean heat content data, so have increased these uncertainties a little.

To actually carry out the fit, I used a python package called emcee. It’s well tested, quite commonly used in astronomy, and is what I used for the paper I discussed in this post. The model has 5 parameters: \beta, \gamma, \epsilon, C_1, and \theta_1. The priors for \beta and C_1 were uniform in log space, while all the others were simply uniform.

The term \theta_1 is essentially an initial value for the deep ocean temperature, relative to the global surface temperature anomaly. I also adjust C_2 so that C_1 + C_2 is the total heat capacity of the ocean down to 2000m, and the fit is based on the 0-2000m ocean heat content matching the combined heat content of the upper and lower boxes.
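To give a sense of how this can be wired together with emcee, here’s a rough sketch. The forward model mirrors the integrator sketched in the update post above, while the prior bounds, total heat capacity, and walker settings are placeholder assumptions rather than my actual choices.

```python
import numpy as np
import emcee

def forward(F, beta, gamma, C1, theta1, C_total=100.0, dt=1.0):
    # Euler integration of the two-box model; C2 is set so that
    # C1 + C2 is the total 0-2000m heat capacity (placeholder value)
    C2 = C_total - C1
    T, To = np.zeros(len(F)), np.zeros(len(F))
    To[0] = theta1
    for i in range(1, len(F)):
        ex = gamma * (T[i-1] - To[i-1])
        T[i] = T[i-1] + dt * (F[i-1] - beta * T[i-1] - ex) / C1
        To[i] = To[i-1] + dt * ex / C2
    # the fit compares the combined heat content of both boxes to the data
    return T, C1 * T + C2 * To

def log_prob(p, F, T_obs, T_err, H_obs, H_err):
    log_beta, gamma, eps, log_C1, theta1 = p
    # beta and C1 uniform in log space; the rest uniform (placeholder bounds)
    if not (-2 < log_beta < 2 and 0 < gamma < 5 and 0 <= eps < 0.5
            and -1 < log_C1 < 3 and -2 < theta1 < 2):
        return -np.inf
    T_mod, H_mod = forward(F, np.exp(log_beta), gamma, np.exp(log_C1), theta1)
    var_T = T_err**2 + eps**2  # eps adds extra variance to the temperatures
    return -0.5 * (np.sum((T_obs - T_mod)**2 / var_T + np.log(var_T))
                   + np.sum((H_obs - H_mod)**2 / H_err**2))

# sampler = emcee.EnsembleSampler(64, 5, log_prob,
#                                 args=(F, T_obs, T_err, H_obs, H_err))
# sampler.run_mcmc(p0, 5000, progress=True)  # p0: walkers in a small ball
```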

The figure on the top right shows the resulting fit to the global surface temperature anomaly. The orange line is the median result, while the lighter gray lines sample the range. The figure on the bottom right is the resulting fit to the 0-2000m ocean heat content data. The orange and gray lines are also the median result and a sampling of the range.

The figure below shows the resulting distributions for the 5 parameters. As will be discussed below, these can then be used to determine the equilibrium climate sensitivity (ECS) and the transient climate response (TCR).

The equilibrium climate sensitivity is simply given by the change in forcing due to a doubling of atmospheric CO2 divided by \beta (i.e., {\rm ECS} = 3.7/\beta), while the transient climate response (TCR) can be determined using the fact that the TCR-to-ECS ratio is \beta / (\beta + \gamma).
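In code, converting posterior samples into ECS and TCR estimates is then only a few lines; the sample arrays below are hypothetical stand-ins for the real MCMC chain.

```python
import numpy as np

# Hypothetical posterior samples standing in for the actual MCMC chain
beta_s = np.random.normal(1.4, 0.25, 10000)    # W m^-2 K^-1
gamma_s = np.random.normal(0.55, 0.10, 10000)  # W m^-2 K^-1

F2x = 3.7                                 # forcing for doubled CO2 (W m^-2)
ecs = F2x / beta_s                        # ECS = 3.7 / beta
tcr = ecs * beta_s / (beta_s + gamma_s)   # TCR/ECS = beta / (beta + gamma)

print(np.percentile(ecs, [15, 50, 84]))   # 15th percentile, median, 84th
print(np.percentile(tcr, [15, 50, 84]))
```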

The resulting ECS distribution is shown in the top figure on the right, while the resulting TCR distribution is shown in the lower figure.

The table below shows the 15th percentile, median, and 84th percentile for the ECS, TCR, TCR-to-ECS ratio, and C_1 distributions. The results for the ECS and TCR are reasonably similar to what’s presented by the IPCC (although the lower limit for the TCR is a bit higher: ~1.5K, rather than ~1K). The only term that may not be clear is C_1, the heat capacity of the upper box. A value of C_1 = 3 is equivalent to an ocean depth of 100m. The values I get seem a little high but may not be unreasonable (I was expecting this box to have a heat capacity equivalent to an ocean depth of about 75m).

Parameter          15th percentile   median   84th percentile
ECS (K)                 1.99           2.65        3.63
TCR (K)                 1.55           1.94        2.44
TCR-to-ECS ratio        0.65           0.73        0.80
C_1                     2.77           3.57        4.47
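As a rough consistency check on this depth equivalence (my back-of-the-envelope assumption: C_1 expressed in units of 10^8 J m^-2 K^-1, \rho c_p \approx 4.1 \times 10^6 J m^-3 K^-1 for seawater, and ~70% ocean coverage),

d \approx \dfrac{C_1 \times 10^8}{0.7 \times 4.1 \times 10^6}\,{\rm m} \approx \dfrac{3 \times 10^8}{2.9 \times 10^6}\,{\rm m} \approx 100\,{\rm m},

which recovers the 100m quoted above for C_1 = 3 and puts the median C_1 = 3.57 at roughly 120m, consistent with the values seeming a little high relative to the ~75m I was expecting.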

Anyway, that’s what I’ve been working on. There may be more that I could do, but I’ve probably spent enough time on this, so will probably leave it at this. I did find it interesting that a relatively basic analysis using a very simple model produces results that seem entirely consistent with much more complex analyses and that is also consistent with various other lines of evidence.

I did try various ways to carry out this analysis. The results were all consistent with what I presented here. In some cases, the median climate sensitivity estimates were actually higher; in those cases, however, the fits between the model and the data seemed poorer. In none of my analyses did I recover climate sensitivities that were substantively lower than what I’ve presented here.
