Narrowing the climate sensitivity range?

There have been a couple of recent papers presenting analyses that claim to have narrowed the likely range for equilibrium climate sensitivity (ECS). One is Dessler et al. (currently a discussion paper under review), which suggests that the 500 hPa tropical temperature better describes the planet’s energy balance, and infers an ECS of 2.1K to 3.9K. The other is Cox et al., who use variability of temperature about the long-term historical warming to constrain the ECS to 2.2K to 3.4K. Both suggest a narrower range than that given by the most recent IPCC report (1.5K to 4.5K).

James Annan has a post suggesting that these new papers are interesting, but that there may be unaccounted-for uncertainties. I largely agree, so won’t say much more about that myself. I will, however, mention a few aspects of this that I think are relevant.

I thought the way this was framed in the media was somewhat unfortunate. For example, “Yes, global warming will be bad. But these scientists say it won’t reach the worst-case scenario.” It does indeed seem that these studies suggest that the worst-case scenarios might be less likely than we had previously thought. However, the public debate seems to be dominated by those who think everything will be fine (Lukewarmers) and those who are mostly in the middle of the mainstream. In fact, there is often quite a lot of pushback against anyone who presents worst-case scenarios.

The significance of these new studies for the public climate debate therefore seems to be that they largely rule out the Lukewarmer position. Yet, this is not really how they’ve been presented. One prominent Lukewarmer has even claimed that these studies are a vindication for Lukewarmers. Presenting these studies as having ruled out the worst-case scenarios, rather than the best-case scenarios, probably hasn’t shifted the public climate debate very much.

Another issue is that there is often an apparent confusion between climate sensitivity and how much we will warm. Yes, climate sensitivity is relevant, but so is how much we emit. These new studies have potentially narrowed the range, but they don’t really change the best estimate (about 3K). The more extreme scenarios (both low and high) may be less likely, but we can still potentially emit enough to warm substantially. In a sense, how much we will probably warm is largely unchanged.

Additionally, a common way to quantify how we could achieve some temperature target is to present a carbon budget: the amount of CO2 we can emit if we want some chance of staying below the target. Given that these new analyses don’t really change the ECS best estimate, the carbon budget that would give us a 50:50 chance of staying below some target should be unchanged. Often, however, the carbon budget is presented as giving us a 66% chance of staying below some temperature target.

Given that the range might now be narrower, this might suggest that the carbon budget for a 66% chance might be slightly bigger. However, it’s not only the uncertainty in climate sensitivity that constrains this; there are also carbon cycle uncertainties (i.e., what fraction of our emissions will be taken up by the natural sinks). Hence, I suspect that the impact on the carbon budget framework might be small (I might be wrong about this).
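To make the percentile point concrete, here is a toy Monte Carlo sketch (in Python) of how a narrower sensitivity-related spread could enlarge a 66% budget while leaving the 50:50 budget untouched. The distributions and numbers are purely illustrative assumptions, not values from either paper, and a real budget estimate would also fold in the carbon cycle uncertainties mentioned above.

```python
import numpy as np

rng = np.random.default_rng(42)
target = 2.0  # warming target in K (illustrative)

# Hypothetical TCRE-like distributions (K per 1000 GtCO2): same median,
# different spread. These numbers are made up for illustration only.
tcre_wide = rng.lognormal(mean=np.log(0.45), sigma=0.30, size=100_000)
tcre_narrow = rng.lognormal(mean=np.log(0.45), sigma=0.15, size=100_000)

for label, tcre in (("wide", tcre_wide), ("narrow", tcre_narrow)):
    # Budget B gives probability p of staying below target when
    # B = 1000 * target / (p-th percentile of TCRE).
    b50 = 1000 * target / np.quantile(tcre, 0.50)  # 50:50 budget (GtCO2)
    b66 = 1000 * target / np.quantile(tcre, 0.66)  # 66% budget (GtCO2)
    print(f"{label}: 50% budget = {b50:.0f} GtCO2, 66% budget = {b66:.0f} GtCO2")
```

The 50% budgets come out essentially identical, while the 66% budget is larger for the narrower distribution; that is the whole effect being discussed.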

Also, although I’m mostly in favour of working within a carbon budget framework (it’s a pretty straightforward metric) I do sometimes think that there might be a better way to present it (to be fair, I don’t have any good suggestions as to what this should be). A carbon budget that gives us a 66% chance of staying below some temperature target doesn’t mean that we will do so 2/3 of the time, and fail 1/3 of the time. There is only one outcome; we will either stay below the temperature target, or we will not.

If we think that there is now a bigger chance of staying below some temperature target, it’s not clear to me that we should then adjust the carbon budget. Maybe it would be better to present it as there now being a bigger chance of succeeding, than suggesting that we can now emit more while still having the same chance.

Okay, I think that’s long enough, so I’ll stop there. The latter part of this post is not as clear as I would have liked, so if anyone has other suggestions as to what we should do given these new results, feel free to make them in the comments.

Update:
Andrew Dessler’s comment is worth reading. I had forgotten that their paper was more about presenting what might be a better way to constrain the energy balance, rather than presenting a firm estimate for an ECS range.


Guest post: A ‘new’ measurement of climate sensitivity?

This is a guest post by Mark Richardson, who is currently a Caltech Postdoctoral Scholar at the NASA Jet Propulsion Laboratory. Mark has a particular interest in the role of clouds in climate change. This post is a response to a suggestion that it is possible to more tightly constrain Equilibrium Climate Sensitivity (ECS). This article is all personal opinion and does not represent NASA, JPL or Caltech in any way.

The oceans are massive and their deeper layers haven’t caught up with today’s fast global warming. Unfortunately we don’t know exactly how far behind they are so it’s hard to pin down “equilibrium climate sensitivity” (ECS), or the eventual warming after CO2 in the air is doubled.

Blogger Clive Best proposes that data support an ECS range of 2–3 °C, with a best estimate of 2.5 °C. The 2013 Intergovernmental Panel on Climate Change (IPCC) consensus range was 1.5–4.5 °C with a best estimate of 3 °C. He asks “why is there still so much IPCC uncertainty?” Here we’ll see that part of the reason relates to the oceans, and that, surprisingly, Best’s results actually agree with IPCC climate models.

Clive Best mixes temperature data with a record of heating due to changes in gases in the air, solar activity, volcanic eruptions, air pollution and so on. Apparently without realising it, he accurately reproduced a textbook calculation including a reasonable way to try and account for the oceans lagging behind surface warming. This is a good start!

This calculation is often called a “one-box energy balance model”, but by 2010 it was known to have issues with calculating ECS. Clive Best misses some of these because he uses a 1983 climate model to estimate that the oceans lag about 12 years behind the surface, which, combined with the HadCRUT4 data, gives an ECS of about 2.5 °C.
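For readers who want to see what the “one-box energy balance model” amounts to, here is a minimal sketch. This is not Clive Best’s actual code; the parameter values are illustrative, and the model is just the textbook equation C dT/dt = F - λT, with τ = C/λ playing the role of the ocean lag.

```python
import numpy as np

def one_box(forcing, dt, ecs, tau, f2x=3.7):
    """Integrate the one-box model C*dT/dt = F - lambda*T, where
    lambda = f2x/ecs and C = lambda*tau (tau is the lag timescale)."""
    lam = f2x / ecs        # feedback parameter (W m^-2 K^-1)
    heat_cap = lam * tau   # heat capacity implied by the lag (W yr m^-2 K^-1)
    temp = np.zeros_like(forcing)
    for i in range(1, len(forcing)):
        temp[i] = temp[i - 1] + (forcing[i - 1] - lam * temp[i - 1]) / heat_cap * dt
    return temp

# Abrupt 4xCO2: a forcing of 2*F_2x held fixed for 150 years (illustrative).
forcing = np.full(150, 2 * 3.7)
temps = one_box(forcing, dt=1.0, ecs=3.0, tau=12.0)
print(f"warming after 150 years: {temps[-1]:.1f} K (equilibrium is 6.0 K)")
```

Fed with a historical forcing series and fitted to HadCRUT4, this is essentially the calculation being described.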

But in a like-with-like comparison, HadCRUT4 warms about as much as the IPCC climate model average since 1861. Given this agreement, anything that uses HadCRUT4 and gets a lower ECS than the model average of 3.2 °C has some explaining to do!

Figure 1: Temperature change over 150 years in abrupt 4xCO2 simulations of four climate models. Black lines are a one-box fit with ECS and response time (τ) allowed to vary. Legend lists model name, true ECS and fit parameters.

The reliance on a 1983 model is the explanation. The 1983 NASA GISS Model II was mostly designed for the atmosphere and had a simple ocean. For example, its ocean currents couldn’t change. Modern models are more realistic, and Figure 1 shows their temperature after an abrupt 300% increase in CO2 (i.e., 4xCO2). Each legend has the known model ECS, along with the ECS and time lag (labelled τ) calculated for the one-box model.
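The kind of fit shown in Figure 1 is easy to reproduce in outline. The sketch below fits the one-box step response to synthetic “model output” that has two response timescales; the synthetic series is a made-up stand-in for real abrupt-4xCO2 data, chosen only to show how the one-box fit can come back with an ECS below the true value.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_box_response(t, ecs, tau):
    # Analytic one-box response to an abrupt 4xCO2 step (forcing = 2*F_2x).
    return 2.0 * ecs * (1.0 - np.exp(-t / tau))

# Synthetic stand-in for climate model output: a fast and a slow ocean
# timescale, with a true ECS of 4 K (so 8 K of eventual 4xCO2 warming).
t = np.arange(150.0)
temps = 8.0 * (1.0 - 0.6 * np.exp(-t / 4.0) - 0.4 * np.exp(-t / 250.0))

(ecs_fit, tau_fit), _ = curve_fit(one_box_response, t, temps, p0=[3.0, 12.0])
print(f"one-box fit: ECS = {ecs_fit:.1f} K, tau = {tau_fit:.0f} yr (true ECS: 4 K)")
```

Because the slow layer has barely begun to respond by year 150, the single-timescale fit settles on a lower apparent ECS, which is exactly the behaviour seen in the figure.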

The ECS is off, and the time lag can be as long as 21 years instead of 12! On top of that, the fits are poor because the oceans aren’t just 12 years “behind”; instead, the system acts as if the ocean has multiple layers, each of which can respond on a different timescale. Now let’s look at simulations of the climate since 1861 and the one-box fits.

Figure 2: Simulated temperature change from 1861–2015 inclusive in 4 climate models using historical-Representative Concentration Pathway 8.5 scenarios (RCP8.5, blue). Model output is sampled in the same way as HadCRUT4. The thicker lines are fits using a one-box model with either the lag from Figure 1 or assuming a 12-year lag. Radiative forcing is the Forster et al. historical-RCP8.5 in all cases.

Consider Figure 2. Imagine living in the world of the top-left panel. In this world we might read a blog that says ECS is around 1.7 °C, but in reality it would be 3.8 °C. Now let’s compare the one-box and true ECS values for 18 models.

Figure 3: Model true equilibrium climate sensitivity (True ECS) as a function of that calculated as in Figure 2, using historical-RCP8.5 temperature change with the Forster forcing and a one-box model with a 12-year lag. All of the points are above the 1:1 black dashed line, showing that the one-box model underestimates true ECS in all 18 cases. The red line is a best fit to the models, although the fit is weak.

If this one-box calculation works, then it should give the right answer when applied to complex climate models where we know the answer (e.g., Geoffroy et al. (2013) do this sort of test). With these data freely available online, anyone can work out that climate models with ECS from 2.3–3.8 °C are consistent with the data and the one-box approach. A little exploration shows that the climate’s response time matters, and measured ocean heating shows that a single 12-year lag doesn’t make sense (Figure 3).
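A standard upgrade, and the kind of test Geoffroy et al. performed, is a two-layer model: a shallow surface layer coupled to a deep ocean that takes centuries to respond. Here is a hedged sketch; the parameter values are round numbers of roughly the right magnitude, not a fit to any particular model.

```python
import numpy as np

def two_layer(forcing, dt=1.0, lam=1.2, gamma=0.7, c=8.0, c_deep=100.0):
    """Two-layer energy balance in the spirit of Geoffroy et al. (2013):
        c      * dT/dt  = F - lam*T - gamma*(T - T_deep)
        c_deep * dTd/dt = gamma*(T - T_deep)
    (heat capacities in W yr m^-2 K^-1, so dt is in years)."""
    T = T_deep = 0.0
    surface = []
    for f in forcing:
        dT = (f - lam * T - gamma * (T - T_deep)) / c * dt
        dT_deep = gamma * (T - T_deep) / c_deep * dt
        T, T_deep = T + dT, T_deep + dT_deep
        surface.append(T)
    return np.array(surface)

temps = two_layer(np.full(300, 2 * 3.7))  # abrupt 4xCO2, 300 years
print(f"year 10: {temps[9]:.1f} K, year 300: {temps[-1]:.1f} K")
```

The surface warms quickly at first and then keeps creeping up for centuries; no single 12-year lag can mimic both stages.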

Clive Best asked why the IPCC gives a range for ECS that’s bigger than his calculated 2–3 °C. This post shows that this is partly because his approach missed a lot of uncertainty related to ocean layering. A 2013 paper found that the way in which oceans delay warming could even affect future sea ice and clouds, while a 2017 study brought together the key physics and data. The conclusion? Observational data support a “best estimate of equilibrium climate sensitivity of 2.9 °C”, with a range of 1.7–7.1 °C.


A little bit of sociology of science?

I recently published a paper on turbulence in discs around young stars. The basic conclusion was that turbulence tends to inhibit, rather than promote, a potential planet formation process. However, rather than talk about the paper itself, I thought I would briefly highlight some of the background.

The paper was actually a response to an earlier paper, suggesting that turbulence could act to promote, rather than inhibit, this planet formation process. However, the authors of this earlier paper had essentially taken an analysis that is appropriate for star formation in galactic-scale discs and applied it to planet formation in discs around young stars. The problem, though, is that planet-forming discs are not directly analogous to galactic-scale discs, even though a lot of the basic physics is very similar. This is mostly what our paper was highlighting.

What was interesting, though, was that I was at a meeting with one of the authors of the other paper and mentioned their work in my talk. Impressively, they then publicly acknowledged that their analysis may not have been appropriate for planet-forming discs, even though it is appropriate for discs in other contexts. One might argue that they should have avoided this in the first place. However, no one had really looked at turbulence in this context, so – ultimately – we’ve hopefully learned something about its role.

The other interesting aspect is that my co-author on this paper has been promoting a planet formation process that I, and others, have suggested doesn’t really work or – if it does – rarely operates. However, despite having a scientific dispute about one aspect of this topic, we were quite capable of working together on a related problem. What is partly motivating this, though, is a desire to try and resolve (as best we can) our scientific disputes.

Okay, I’m not all that sure what I’m trying to suggest by this post; maybe just an interesting story that highlights something of how science can work. Maybe I’ll finish by highlighting another interesting science story that I came across on Eli’s blog, but that originates here.

The basic argument is that the validity of some scientific theory (whatever those who support it might say) does not depend on how elegant/beautiful it appears to be. I agree; reality can be complicated. However, my corollary would be that once we have a good understanding of some system, it is often possible to develop elegant descriptions. The problem, which I may expand on in another post, is that these elegant descriptions are often – by their nature – simplifications. This means that sometimes people (especially on blogs) can claim to have falsified some theory because some data doesn’t exactly match what the theory appears to suggest. Essentially, it’s important to appreciate the complexity, even if the basics seem quite simple. I’ll stop there.


Can Contrarians Lose?

No.

Thesis – Contrarians always win

Proof. Let the following assumptions hold: (1) science is a corrective process; (2) scientific beliefs are revisable; (3) contrarians could (and probably will, one day, with AI) claim everything science doesn’t claim. Ergo, Betteridge’s Law strikes again. End of proof.

Remark. The last assumption (3) might be the most contentious one. For our proof to work, we need something like the totality of all possible ideas. It would contain the sum of all current scientific knowledge. The existence of both sets may be disputed. Further, the proof itself implies that it will be: if there’s no scientific counterpoint to it, there’s at least a contrarian who would dispute it, hopefully in the current comment thread.

Corollary 1 – Contrarians never lose.

Proof. Define losing as not winning. Confer the above thesis. There’s no third step.

Observation. This definition sidesteps the possibility of any kind of Pyrrhic victory. That would be a bummer. Forget I told you about that. Look at the silly monkey!

Corollary 2 – When a contrarian wins, every contrarian wins. Forever.

Proof sketch. Since contrarians fill all the niches the scientific establishment does not, any scientific revolution makes contrarians win. We have empirical evidence that contrarians share the wins of any other one, however contradictory their mutual beliefs could be. Make provision for the idea that a contrarian isn’t an agent characterized by their belief states, but by opposition to what the scientific establishment holds. Keep that idea for later – you’ll need it.

Example. Galileo was once a contrarian. Science progressed. He then was right. Checkmate. Every single contrarian since Galileo is thus enshrined by that righteousness. Contrarians won, are winning, and will win again and again.

Open problem. Are contrarians tired of winning?

Corollary 3. Every scientific correction confirms that contrarians were right all along.

Proof. It’s an easy one. Left as an exercise to readers.

Gloss. Inconsistency is an asset, not a liability. It allows contrarians to split their roles and build an infinite number of Dutch books executed through double binds. Being right all along doesn’t hinder the following:

Corollary 4. It is preferable when contrarians are “not even wrong.”

Proof. Obviously, having no standpoint to defend accelerates the validation process, as it precludes any possibility that science ever becomes contrarian-proof.

Alphabetical Listicle. How to express contrarian concerns – Arguing alternatives. Counterfactual thinking. Dogwhistling FUD. Incredibilism. Just Asking Questions. Plausible deniability. Sealioning. Whataboutism. Et cetera. You know the drill.

Corollary 5 (Muller, 2012). As soon as a contrarian identifies with a winning belief that gets added to mainstream science, he loses his title of contrarian.

Proof. Recall the idea I told you to hold? Apply it. Bingo.

Warning. Losing the contrarian title includes side-effects like having to support your claim, read what otters write before commenting, cite your sources, and quote those with whom you disagree. Incidents of reciprocation and constructive criticism have also been reported. In the extreme, we witnessed a feeling of aloofness among those who had to deal with contrarians. You know, having to work and work and work to get nothing in return except being treated like the rug in The Big Lebowski? That feeling of aloofness.

Conjecture. Happy 2018, a year with many more contrarian wins to come!

 


No, we’re not slipping into a proper ice age

Matt Ridley, who I have written about numerous times before, has a new article in The Times called “Global cooling is not worth shivering about”, which claims that

The Earth is very slowly slipping back into a proper ice age

Well, this is just nonsense. It is quite probable that the Earth will – at some stage in the future – enter another ice age. It’s also true that if we were following the same pattern as we’ve followed for the last 800,000 years, we might expect this reasonably soon, but we’re not, and we don’t. The reason is very simply that anthropogenic influences are now swamping the natural forcings that act to trigger the glacial cycles, and the current trajectory is very clearly not towards another proper ice age.

If you consider this paper, discussed in more detail here, it says

moderate anthropogenic cumulative CO2 emissions of 1,000 to 1,500 gigatonnes of carbon will postpone the next glacial inception by at least 100,000 years

The basic idea is that whether or not we move into another glacial period depends on the solar insolation at high northern latitudes and the amount of CO2 in the atmosphere. Our emission of CO2 has essentially guaranteed that atmospheric CO2 will remain enhanced for more than 100,000 years, and – consequently – has delayed the next glacial cycle by at least the same timescale.
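That criterion can be written down in a couple of lines. In the toy sketch below, only the qualitative shape follows the study (a critical insolation threshold that falls logarithmically with CO2); the coefficients are illustrative guesses, not the published fit.

```python
import numpy as np

def inception_possible(insolation, co2_ppm, a=466.0, b=77.0):
    """Toy glacial-inception criterion: peak 65N summer insolation
    (W m^-2) must fall below a critical value that decreases
    logarithmically with CO2. Coefficients a and b are illustrative
    guesses, not the values fitted in the paper."""
    critical = a - b * np.log(co2_ppm / 280.0)
    return insolation < critical

print(inception_possible(463.0, 280.0))  # pre-industrial CO2: marginally possible
print(inception_possible(463.0, 500.0))  # elevated CO2: inception postponed
```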

Matt Ridley’s overall argument, however, is that we don’t really need to be concerned about the next ice age, because it is still quite a long time away, by which time – as long as we keep using cheap, plentiful energy – we’ll have the technology to deal with it, and we can still thrive. The problem, though, is that if we simplistically follow Matt’s advice, we could pump enough CO2 into the atmosphere to produce a change comparable to that between a glacial and an inter-glacial, but in the other direction, and at least 10 times faster.

It’s not only that we could produce rapid changes to our climate; we could also produce changes in some regions that lead to temperature and humidity levels that would be difficult to endure. We may also pass tipping points: sudden climatic shifts that are essentially irreversible.

So, for some reason, Matt Ridley thought it worth talking about something that is unlikely to happen for another 100,000 years. Even though Matt suggests we shouldn’t be concerned about this, he does suggest that we should prepare for this eventuality by using cheap, plentiful energy. If, however, this cheap, plentiful energy is associated with the emission of CO2 into the atmosphere, we could produce changes on human timescales that may make it difficult for us to continue to thrive. I guess it’s just another example of people being willing to consider anything other than the possibility that we should think of ways in which we can reduce the emission of CO2 into the atmosphere.


Reproducibility?

I came across an interesting paper about the replication crisis that I thought I would briefly discuss (H/T Neuroskeptic). The paper in question is Reproducibility research: a minority opinion. It’s not open access, but I have found what I think is an early draft copy.

The background is basically that there have been a number of cases in which people have been unable to replicate, or reproduce, some earlier scientific/research study. A suggested solution is that researchers should make everything available, so that others can check their results. Some have even suggested that this is a key aspect of the scientific method/process. The new paper takes a rather dissenting position and, in my view, makes some interesting, and valid, arguments.

For starters, a key aspect of science is basically to test hypotheses. Our confidence in a result increases as more and more research groups produce consistent/convergent results, ideally doing so using different methods and, in some cases, different data sets. We don’t really gain confidence if we get the same result by exactly repeating what others have done, using what they’ve provided. There’s nothing wrong with this, and there may be scenarios under which this would be important (for example, if a single study is likely to play a dominant role in determining some decision), but this isn’t really a key aspect of science.

Similarly, we often talk about the scientific method, but it’s not really a well-defined process. There are certainly aspects that we’d probably all agree on, and there are philosophical descriptions of a scientific method, but there isn’t some kind of rigid set of rules. There are always likely to be exceptions to any set of rules, and I do think we should be careful of thinking in terms of some kind of checklist. We shouldn’t really trust something simply because it ticked all the boxes. Similarly, we shouldn’t simply dismiss something because it doesn’t. Again, we gain confidence when results from different groups, using different methods, converge on a consistent interpretation.

The paper also discussed the issue of misconduct. It suggested that misconduct is not really new, that it’s not responsible for this reproducibility crisis (which I would agree with), and that what’s proposed may not really be a solution. This isn’t to suggest that we shouldn’t take misconduct seriously and suitably deal with it when aware of it, but just suggests that it isn’t really new and that it impacts science less than one might expect; it is typically uncovered, especially if what is suggested is of particular interest.

When it comes to public confidence in science, the paper says

it would seem that any crisis of confidence would best be addressed by better informing the general public of the way science works

which I think is an important point. The idea is that people’s confidence is more impacted by apparent failures, than by explicit misconduct. It’s important, therefore, to make clear that science isn’t some kind of perfect process in which each step incrementally adds to our understanding in some kind of linear fashion. We get things wrong, we go down dead ends, we try things that end up not working. We can even spend some time accepting something that later turns out to be wrong. In a sense, we learn from our mistakes, as well as from our successes. However, over time we still expect to converge towards a reasonable understanding of whatever it is that is being studied.

As usual, I’ve said more than I intended, and there is still more that could be said. I certainly have no real problem with people making everything associated with their research available. There may well be some issues with doing so (waste of resources, and some searching for errors) but I don’t think any are sufficient to strongly argue against this. However, I don’t really think that it is necessarily required, and I don’t really see it as some key part of the scientific method. What’s key is that there is enough information to allow others to test the same basic hypothesis; this does not necessarily require providing every single thing associated with some research result. There may well be cases where it is more important to do so than in others, but I’m not convinced that it should become the norm. Others may well disagree.


Being wicked

There’s been an interesting discussion on Twitter about how to frame anthropogenically-driven climate change. In particular, should it be framed as a wicked problem? A number of people involved in the discussion had a problem with this framing. One very simple reason was that if you consider the standard definition of a wicked problem it is

a problem that is difficult or impossible to solve because of incomplete, contradictory, and changing requirements that are often difficult to recognize.

Many, therefore, object to framing climate change as a wicked problem because it implies that it’s impossible to solve and that we don’t really know what’s required. Dealing with anthropogenically-driven climate change may not be easy, but it’s certainly not impossible, and the requirement is pretty straightforward: get net anthropogenic emissions to zero, or pretty close to zero. In fact, some would argue that we do know how to address it: impose a carbon tax based on an estimate of future costs, discounted to today. This should allow the market to develop the optimal future energy pathway.
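The “discounted to today” step is just a present-value calculation. Here is a hedged illustration; the damage stream and discount rates are made-up numbers, not an actual social-cost-of-carbon estimate.

```python
def present_value(damages, rate):
    """Discount a stream of annual damages (starting next year) to today:
    PV = sum over t of d_t / (1 + rate)**t."""
    return sum(d / (1 + rate) ** t for t, d in enumerate(damages, start=1))

# Made-up damage stream: $2 per tonne of CO2, every year, for 100 years.
stream = [2.0] * 100
for r in (0.01, 0.03, 0.05):
    print(f"discount rate {r:.0%}: implied tax ~ ${present_value(stream, r):.0f}/tCO2")
```

Even this toy shows the implied tax varying by a factor of three with the discount rate, which hints at why the “straightforward” prescription is less settled in practice.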

Those defending the wicked framing suggested that it applied only to the socio-political aspects of the problem, not to our scientific understanding (which is pretty clear). Okay, but again this potentially implies that it’s a problem we can’t solve. Others, however, suggested that there are many examples of wicked problems for which there are solutions. Okay, but this suggests that the definition isn’t very clear, or consistent. Others suggested an even more extreme definition; a problem for which you don’t know the solution in advance and for which you can’t learn from your mistakes. If a solution doesn’t work, you can’t modify things and then fix it.

In many cases, it can be very useful to describe a complex issue with a few simple terms. However, it’s important that the terminology is well-defined. It’s no good if different people use the same terminology to describe different scenarios; that doesn’t simplify, it confuses. Also, if some terminology is already perceived in some way, it is very difficult to use it in a different way, even if you try to be very clear as to your intended meaning.

So, I can’t really see why we would want to describe climate change as a wicked problem. I think the science is pretty clear as to what we need to do if we wish to address anthropogenically-driven climate change; get net anthropogenic emissions to zero. Doing so may well not be easy, but we already have numerous technological solutions, many of which could be implemented now (in fact, some are being implemented). We also have policy options, such as a carbon tax, that would incentivise a change in energy infrastructure. None of this makes it easy, and there are clearly many complications, but framing it in a way that could be perceived as it being impossible, would seem rather counter-productive.

Links:
Less science, more social science! (A post about Reiner Grundmann’s Nature Geoscience Comment about Climate change as a wicked social problem.)
