After listening to Nicholas Lewis present evidence at the Select Committee hearing this week, I thought I would try to understand his 2013 paper (An Objective Bayesian Improved Approach for Applying Optimal Fingerprint Techniques to Estimate Climate Sensitivity). I read through it, and didn’t really get what he was doing (at least not in any detail). So I then downloaded the two papers on which it is largely based (Forest et al. (2002) and Forest, Stone & Sokolov (2006)).
Having read the two Forest papers, I think I understand what they did. They used the MIT two-dimensional climate model. For each model run, they specify the climate sensitivity (S), the ocean diffusivity (Kv), and the net aerosol forcing (Faero). The ocean diffusivity essentially determines the rate of deep-ocean heat uptake. As I understand it, Forest et al. run this model using different values of these three parameters and then compare the model outputs with the observed global-mean surface temperature and with the deep-ocean temperature. In the latter of the two Forest papers, they conclude – using expert priors – that climate sensitivity has a 90% confidence interval of 2.2 to 5.2 K, and that the net aerosol forcing has a 90% confidence interval of -0.62 to -0.05 Wm-2.
As I understand it, Lewis (2013) takes the data from Forest and uses an improved method to reduce the 90% interval for climate sensitivity to 2.0–3.6 K. Lewis (2013) then goes on to say
Incorporating 6 years of unused model simulation data and revising the experimental design to improve diagnostic power reduces the best-fit climate sensitivity. Employing the improved methodology, preferred 90% bounds of 1.2–2.2 K for ECS are then derived (mode and median 1.6 K).
The main results from Lewis (2013) are illustrated in the figure below (taken from Lewis 2013).
So, an improved Bayesian method can reduce the climate sensitivity range a little, but adding 6 years of new data changes it completely. I find that quite remarkable, if not slightly worrying. Equilibrium climate sensitivity is a long-term response to a doubling of CO2. If the value you estimate changes dramatically when you add 6 years’ worth of data, that might suggest your method isn’t robust to short-term variability.
I was also surprised that the climate sensitivity could be as low as 1.2 K. We’ve already had around 0.9 degrees of warming since 1880, and we’re not even close to having doubled CO2. That will take another 50 years or so. If the latest Cowtan & Way (2013) paper is correct, then we’re currently warming at around 0.1°C per decade. At that rate, we’ll have warmed by 1.3 degrees by 2050/2060. We would almost certainly not be in equilibrium at that time, and so the ECS would seem to have to exceed 1.3 K.
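The arithmetic behind that lower bound is simple enough to sketch. This is just a back-of-envelope check using the numbers quoted above (0.9 K of warming since 1880, a Cowtan & Way style trend of roughly 0.1 K per decade, and CO2 doubling around the 2050s); the 4-decade horizon is my rounding, not a figure from any of the papers.

```python
# Back-of-envelope check of the lower-bound argument.
warming_so_far = 0.9     # K of warming since ~1880 (as quoted in the post)
trend = 0.1              # K per decade, roughly the Cowtan & Way (2013) rate
decades_to_doubling = 4  # ~2015 to the 2050s, my rough assumption

projected_warming = warming_so_far + trend * decades_to_doubling
print(projected_warming)  # ~1.3 K, before CO2 has even doubled
```

Since the system would still be out of equilibrium at that point, the equilibrium response to the doubling should exceed this ~1.3 K.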
To try and understand this a little more, I decided to have a look at Otto et al. (2013) who use energy budget constraints to estimate the ECS and TCR. Their results are summarised in the table below.
If you consider the top row only, then their analysis also returns a lower value for the ECS of 1.2 K. To estimate the ECS you can use

ECS = Q2x ΔT / (ΔQ − H),
where ΔQ is the change in radiative forcing, H is the system heat uptake rate, ΔT is the change in global mean temperature, and Q2x is the change in forcing after a doubling of CO2 (Otto et al. use values from Forster et al. (2013) – Q2x = 3.44 ± 0.84 Wm-2).
So, the only way I can see to get an ECS of 1.2 K is to use extremes. Using the top row of the above table, set ΔT = 0.95 K, ΔQ = 2.53 Wm-2, Q2x = 2.6 Wm-2, and H = 0.37 Wm-2. Using these values I get an ECS of 1.14 K. The immediate problem is that the OHC data suggests the total system heat uptake rate is probably at least double what I’ve used here. Also, to get such a low ECS I’ve used a radiative forcing today that is almost as big as that due to a doubling of CO2, which doesn’t really make sense. Looking at Forster et al. (2013), which lists model estimates of ΔQ and Q2x, a large ΔQ is associated with a large Q2x, as one might expect. Maybe there’s another way to get an ECS of around 1.2 K using values that make more sense, and if I get a chance I’ll have a go at this; I just can’t see, at the moment, how it is possible.
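To make the sensitivity to the heat-uptake term concrete, here is a small sketch of the energy-budget formula with the values above. The first call uses the “extreme” combination from the post; the second keeps the same forcing and temperature but doubles the heat uptake (0.74 Wm-2 is simply twice the value I used, not a figure from Otto et al.).

```python
# Energy-budget estimate of equilibrium climate sensitivity:
#   ECS = Q2x * dT / (dQ - H)
# where dQ is the change in radiative forcing, H the system heat
# uptake rate, dT the change in global-mean temperature, and Q2x
# the forcing from a doubling of CO2.

def ecs(dT, dQ, H, Q2x):
    """ECS (K) from an energy-budget constraint."""
    return Q2x * dT / (dQ - H)

# Extreme combination used in the post: ~1.14 K
low_end = ecs(dT=0.95, dQ=2.53, H=0.37, Q2x=2.6)

# Same dT, dQ and Q2x, but heat uptake doubled (assumed value)
more_uptake = ecs(dT=0.95, dQ=2.53, H=0.74, Q2x=2.6)

print(round(low_end, 2), round(more_uptake, 2))  # 1.14 1.38
```

Even that modest change pushes the estimate well above 1.2 K, which illustrates why the low-end result seems to require extreme parameter choices.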
So, I realise that this post is a little convoluted. Maybe others have read and understood Lewis (2013) better than I have. At the moment, I don’t see how adding 6 years of data can really change the climate sensitivity as significantly as his analysis suggests, and I don’t see how a climate sensitivity of 1.2 K makes sense. This seems to require some really extreme values of certain parameters. Of course, if anyone has any thoughts or a deeper understanding of this analysis, feel free to explain it in the comments.