## Lewis and Curry

Nic Lewis and Judith Curry have had a new paper published called The implications for climate sensitivity of AR5 forcing and heat uptake estimates. It seems to be pretty much the same as Otto et al. (2013), except with different choices for the values of some of the parameters. The basic idea is to determine the Transient Climate Response (TCR) and the Equilibrium Climate Sensitivity (ECS) using observations and model results for the changes in forcings. For example,

$TCR = \frac{F_{2xCO2} \Delta T}{\Delta F}$

$ECS = \frac{F_{2xCO2} \Delta T}{\Delta F - \Delta Q},$

where $F_{2xCO2}$ is the change in forcing due to a doubling of CO2, $\Delta T$ is the change in temperature, $\Delta F$ is the actual change in radiative forcing, and $\Delta Q$ is the change in system heat uptake rate. This is all done by considering the changes from some base time interval to some final time interval.
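As a quick sanity check, the two formulas are simple enough to evaluate directly. The numbers below are illustrative round values, not the exact inputs used in either paper:

```python
# Simple energy-budget estimates of TCR and ECS.
# All input values are illustrative, not the exact ones used by Lewis & Curry.

F_2xCO2 = 3.71   # forcing from a doubling of CO2 (W m^-2), a commonly used value
dT = 0.71        # change in surface temperature between intervals (K)
dF = 1.98        # change in radiative forcing (W m^-2)
dQ = 0.36        # change in system heat uptake rate (W m^-2)

TCR = F_2xCO2 * dT / dF          # transient climate response
ECS = F_2xCO2 * dT / (dF - dQ)   # effective/equilibrium climate sensitivity

print(f"TCR = {TCR:.2f} K")   # ~1.33 K
print(f"ECS = {ECS:.2f} K")   # ~1.63 K
```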

The main difference between this paper and Otto et al. seems to be a different estimate for the system heat uptake rate in the final interval and an increase in the system heat uptake rate during the base interval (0.15 Wm⁻², rather than 0.08 Wm⁻²). Otto et al. estimate $\Delta Q$ to be about 0.65 Wm⁻² using a base interval of 1860-1879 and a final interval of the early 2000s. Lewis & Curry estimate $\Delta Q$ to be 0.36 Wm⁻² using a base interval of 1859-1882 and a final interval of 1995-2011. This, as far as I can tell, is the main reason for the difference between Lewis & Curry and Otto et al. (2013).
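To see how much the choice of $\Delta Q$ matters, you can hold everything else fixed and swap in the two estimates. Again, the other numbers are illustrative round values, not the papers' exact inputs:

```python
# How the choice of Delta Q moves the ECS best estimate, all else being equal.
F_2xCO2, dT, dF = 3.71, 0.71, 1.98   # illustrative values (W m^-2, K, W m^-2)

for label, dQ in [("Lewis & Curry-like", 0.36), ("Otto et al.-like", 0.65)]:
    ecs = F_2xCO2 * dT / (dF - dQ)
    print(f"{label}: dQ = {dQ} W m^-2 -> ECS = {ecs:.2f} K")
```

A smaller $\Delta Q$ leaves more of the forcing balanced by the temperature response, and so gives a lower ECS.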

The basic result from Lewis & Curry (2014) is illustrated in the table below.

Credit: Lewis & Curry (2014)

Let’s compare this with what is said in the most recent IPCC report

Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence), and very unlikely greater than 6°C (medium confidence).

and

The transient climate response is likely in the range of 1.0°C to 2.5°C (high confidence) and extremely unlikely greater than 3°C. {Box 12.2}

So, considering the bold row in the above table, the range is not wildly different to that presented by the IPCC. The top end of the range presented by Lewis & Curry (2014) is clearly lower, but there is still a chance (according to Lewis & Curry) of the ECS being greater than 3°C and the TCR being greater than 2°C (although, I think the Lewis & Curry 17-83% range is probably the correct comparison with the IPCC likely range).

What has, of course, drawn attention to this paper is that the best estimate for the ECS is only 1.64°C, right near the bottom of the IPCC likely range. This is being interpreted as suggesting that the ECS is lower than the IPCC suggests. It is, of course, possible that it could be near the bottom of the range (that’s why there’s a range) but I think one should be very careful of interpreting this study as suggesting that it probably is. I’ll try and explain why.

• Firstly, this is just one paper and so one has to always be wary of single study syndrome (yes, Matt Ridley, I’m thinking of you here).
• What’s being determined here is actually the effective climate sensitivity, not the equilibrium climate sensitivity. As explained here, the effective sensitivity is really just a measure of the strength of the feedbacks at a particular time and it may vary with forcing history and climate state.
• The paper uses the HadCRUT4 temperature dataset, but makes no mention of Cowtan & Way (2013). Cowtan & Way consider coverage bias in the HadCRUT4 temperature dataset and illustrate that the lack of coverage in the Arctic probably indicates that we’ve warmed slightly more than the HadCRUT4 dataset indicates.
• The paper makes no mention of the work of Shindell (2014) or Kummer & Dessler (2014). These two papers point out that inhomogeneities in the aerosol forcing may mean that these energy balance models will underestimate both the TCR and ECS.
• There is some mention of variability and this may indeed influence the TCR and ECS estimates. Certainly variability could have either reduced or increased the amount of warming, but it can also (on decadal timescales) influence the system heat uptake rate (see, for example, Palmer & McNeall (2014)).

So, there’s nothing fundamentally wrong with Lewis & Curry (2014), but the authors do appear to have chosen the lowest possible change in system heat uptake rate, which then gives a low best estimate for the ECS. The range, however, is still quite similar to the IPCC range. Furthermore, this is just a single study and there are a number of things that such simple models really can’t capture, many of which would indicate that these estimates are quite likely to be lower limits, rather than accurate values. There’s certainly nothing wrong with doing such studies and they’re certainly valuable contributions to the literature. Assuming that they somehow prove that climate sensitivity is lower than the IPCC suggests would, in my opinion, be rather misguided.


### 373 Responses to Lewis and Curry

1. Catalin C says:

Funny how Lewis and Curry copied in identical form the line from Otto “Both equations (1) and (2) assume constant linear feedbacks”. Too bad they don’t seem to understand what that really means in physical terms.

2. Something I meant to add to the post, but didn’t was that the method in Lewis & Curry (2014) is one where you simply determine changes between some base interval and a final interval. There are more sophisticated methods (such as Gregory & Forster (2008)) where you try to do a more detailed comparison between model outputs and observations. These tend to produce TCR and ECS values even more in line with IPCC estimates.

3. Catalin,
Indeed, that’s why I added the bit about the difference between effective sensitivity and equilibrium sensitivity.

4. Thank you for a clear and comprehensible review.

5. Eileen, no problem. Glad it was useful.

6. Tom Curtis says:

Anders, firstly, this is wrong:

“So, considering the bold row in the above table, the range is not wildly different to that presented by the IPCC.”

The bottom of the range (5% confidence limit of 1.05 C) varies hardly at all from the IPCC range, but the top of their 5-95% range at 4.05 C is well below the IPCC’s upper limit on their likely (17-83%) range. Further, because harm expands more than linearly with increased temperature, the upper end of the range is more significant in determining estimated harm from global warming. Saying a drastically reduced upper range of the confidence interval is “not wildly different” is a bit like saying removing the bullet from the sixth chamber does not wildly alter the risks of Russian roulette.

Turning to Lewis and Curry, their method of estimating nineteenth century OHC can be called a mistake pure and simple. It is a mistake, firstly, because we do not have sufficient confidence in any individual model to rely on just one model for that estimate. Rather, an ensemble mean value should have been used, and an ensemble mean uncertainty to go with it. By using a single model, Lewis and Curry have drastically understated the uncertainty, and (I suspect) shopped around for a high value. I think the former is the greater problem.

Further, OHC flux is not independent of ECS. Specifically, a model with a higher OHC flux than others over a given period is likely to also have a higher ECS. Unless the ECS found by Lewis and Curry is the same as that for the model (which is not the case), the use of the model based OHC to determine the ECS becomes inconsistent. It argues, in effect, that if the ECS is that found in the model, then the ECS is lower than that found in the model, and as a corollary that if the heat content is that in the model, it is less than that in the model (ECS = M → ECS < M).

This is OK if they are explicitly finding a lower limit on ECS, and explicitly acknowledging their result to be biased low. They, of course, do no such thing.

It may be possible to correct the problem by regressing ECS vs ΔQ across the model ensemble and using the regression to successively approximate to the value of ECS. That is, first estimate ECS as they have done (except using the model ensemble OHC). Then reducing the OHC to match the ECS determined in step (1) based on the regression, recalculate the ECS. And then repeat, and continue doing so until a stable value is achieved. In the absence of that, or an equivalent step, however, it is simply a mistake to consider the result of the calculation to be the actual ECS.
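A minimal sketch of that successive approximation, with entirely hypothetical regression coefficients (the real ones would come from the model ensemble) and illustrative energy-budget inputs:

```python
# Sketch of the successive-approximation idea: alternate between the
# energy-budget ECS and a (hypothetical) ensemble regression dQ = a + b*ECS
# until a stable value is achieved. All numbers are illustrative.
F_2xCO2, dT, dF = 3.71, 0.71, 1.98   # illustrative energy-budget inputs
a, b = 0.10, 0.09                    # hypothetical regression coefficients

ecs = 2.0                            # initial guess (K)
for _ in range(50):
    dQ = a + b * ecs                 # heat uptake consistent with current ECS
    new_ecs = F_2xCO2 * dT / (dF - dQ)
    if abs(new_ecs - ecs) < 1e-6:    # stable value achieved
        break
    ecs = new_ecs

print(f"self-consistent ECS ~ {ecs:.2f} K")
```

With these made-up coefficients the iteration settles quickly, illustrating the fixed-point idea rather than any real-world value.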

7. BBD says:

There seems to be widespread agreement among paleoclimatologists that paleoclimate behaviour is incompatible with an ECS <2K. I appreciate that L&C estimate Eff_CS, but if the difference between the two is indeed marginal, then this suggests that L&C's best estimate of 1.64K is an underestimate.

8. verytallguy says:

BBD,

There seems to be widespread agreement among paleoclimatologists that paleoclimate behaviour is incompatible with an ECS <2K.

I don’t disagree with you, but a citation would be lovely. Pretty please?

9. Tom,

The bottom of the range (5% confidence limit of 1.05 C) varies hardly at all from the IPCC range, but the top of their 5-95% range at 4.05 C is well below the IPCC’s upper limit on their likely (17-83%) range.

Fair enough, that is probably a big enough difference to call it different. Your point about the significance of that difference is well made. I was really just trying to get across the view that we’re still talking about an ECS range from 1 – 4 degrees using a method that probably underestimates the sensitivity.

By using a single model, Lewis and Curry have drastically understated the uncertainty, and (I suspect) shopped around for a high value.

Yes, they seem to have chosen the fastest sea level rise from a set of model runs that include the influence of volcanoes. They seem to have taken this rate of sea level rise, computed a system heat uptake rate for that period and then reduced it because the model did have a higher ECS than they were getting. So, they seem to have tried to compensate for the difference in ECS.

A bigger issue might be that a larger fraction of the sea level rise in the 1800s was glacier melt than is the case today (I think). So using sea level rise to determine the system heat uptake rate during that period may produce an overestimate. Additionally, Lewis & Curry argue that the system heat uptake rate for the period 1850-1900 was 0.1 Wm⁻² and yet the surface temperature trend for that period is flat. Seems a bit implausible to me.

10. BBD says:

VTG

Sorry. I was thinking of Rohling et al. (2012) Making sense of paleoclimate sensitivity.

11. There’s also Hargreaves et al. (2012).

12. For what it’s worth:

IMO, the debate on climate sensitivity and TCR should still be very pertinent in the political context, even though it currently is not. When allied to the parallel debate on the level of carbon cycle feedbacks, which has barely started, lowish sensitivity/TCR estimates (in line with what AR5 forcing and heat uptake best estimates imply) point to global warming from now to 2081-2100 of little more than 1 K on a business-as-usual scenario.

The CMIP5 mean projected rise is about three times as great. Which is correct has huge implications for what the optimal policy response is.

http://www.climatedialogue.org/climate-sensitivity-and-transient-climate-response/#comment-1093

Perhaps Very Tall understands better why Cap’N instigated physical play.

13. Paul S says:

Tom Curtis,

I’m sure I read in the paper that the model’s 19th century imbalance was scaled to reflect a lower ECS estimate.

14. Tom Curtis says:

Paul S, you are correct:

” However, the CCSM4 model has TCR and ECS values of 1.8 K and circa 3.0 K that are some 35–85% higher than the best estimates for those parameters arrived at in this study. We therefore take only 60% of the base period heat uptake estimated from the Gregory et al. (2013) simulations, giving 0.15 Wm−2 for 1859–1882, 0.10 Wm−2 for 1850–1900 and 0.20 Wm−2 for 1930–1950.”

Clearly I missed that passage in my read through, and my criticism on that point is void. Thank you.

15. Tom Curtis says:

Paul S, I will note first that their estimate of how much smaller their values are relative to model values is 60+/-25%, ie, it has an error range, and hence that they were not simply entitled to divide out the mean value to compensate. They needed also to account for that additional error in their error margins. Having said that, doing so gives an error of ±32% where they use 50%, which shows their rough figure to be adequate if we assume the model error from Gregory et al (2013) to be the only source of error, and we accept that use of a single model rather than an ensemble mean can adequately quantify the error. I accept neither.

Of more concern, ECS is proportional to ΔT/(ΔF-ΔQ). For this purpose, ΔT and ΔF can be treated as constants (although you may need different constants for different periods). It follows that ECS is proportional to 1/(ΔF-ΔQ). However, it is readily apparent that that equation cannot justify an assumption that downscaling ECS by a factor of 60% requires downscaling ΔQ by 60%. Their downscaling needs further justification.

16. Marco says:

As noted elsewhere (I think it was on HotWhopper?), doesn’t this mean Curry admits most of the warming since the 1950s is due to anthropogenic activity? After all, not much warming left to explain with such a low TCR…

17. Paul and Tom,
Isn’t another factor that glacial melt makes up quite a big fraction of the sea level rise prior to 1900?

Marco,
Indeed, I think that point was made on HotWhopper. I think that you’re right. If you want natural variability to explain a significant fraction of our warming you run into the problem that climate sensitivity then becomes unrealistically low.

18. Tom Curtis says:

Anders, this is what is known about glacier retreats:

and glacial mass balance:
http://www.grid.unep.ch/glaciers/

Both images are from the World Glacier Monitoring Service report hosted by UNEP.

The first thing you should note is that there is insufficient data for calculating mass balances prior to the 1940s. Calculating mass balances requires data not just on glacier length but also on glacier thickness. Second, you should notice the very limited information in the 19th century – shown in part by the reducing number of glacier trends shown (which may also be due to more glaciers being stable), but also by the dominance of European data.

The upshot is that the contribution of glacier melt (let alone ice sheets) to global sea level rise in the nineteenth century is a largely unknown quantity SFAIK. On this topic, however, you could sail a battleship through my “SFAIK”, so I would certainly be interested if somebody has better data.

19. Tom,
If you look at Figure 6 of the paper it seems to show that the contribution due to glaciers between 1850 and 1900 was between 20 and 40 mm. I looked at the Church et al. paper and that seems to suggest sea level was rising at around 1 mm/year. If we assume that glaciers contribute half, then 0.5 mm/yr is thermal expansion.

Gregory et al. (2013) suggest that the thermal expansion is 0.12 m per 10²⁴ J. If I convert 0.5 mm/yr into a flux I get 0.25 W/m^2. So, higher than that used in Lewis & Curry. However, the sea level rise today is 3 times greater, so by the same argument we’d expect the flux today to be 0.75 W/m^2. Plus, the contribution due to glaciers is thought to be smaller today than in the past, hence the 0.75 W/m^2 would be a lower limit. So, this would suggest a greater $\Delta Q$ value than used in Lewis & Curry (and hence a larger ECS).
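For anyone who wants to check the arithmetic, here is the conversion spelled out. The expansion efficiency is the Gregory et al. (2013) figure; the Earth's surface area and year length are standard round numbers:

```python
# Back-of-envelope conversion of a thermosteric sea level rise rate into a
# global heat flux, following the numbers in the comment above.
EXPANSION = 0.12 / 1e24      # m of sea level rise per joule of ocean heat
EARTH_AREA = 5.1e14          # Earth's surface area, m^2
SECONDS_PER_YEAR = 3.156e7

def slr_to_flux(mm_per_year):
    """Convert a thermal-expansion SLR rate (mm/yr) to a flux (W m^-2)."""
    joules_per_year = (mm_per_year / 1000.0) / EXPANSION
    return joules_per_year / (EARTH_AREA * SECONDS_PER_YEAR)

print(f"{slr_to_flux(0.5):.2f} W m^-2")   # ~0.26, close to the 0.25 quoted
print(f"{slr_to_flux(1.5):.2f} W m^-2")   # ~0.78 for three times the rate
```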

An issue, though, is that there was no surface warming trend for the period 1850-1900 (unless it’s simply masked by the large uncertainties) and that seems implausible if the TOA flux was as high as 0.25 W/m^2.

20. BBD says:

ATTP

If you want natural variability to explain a significant fraction of our warming you run into the problem that climate sensitivity then becomes unrealistically low.

Just a check that I have understood this. Do you mean that we cannot argue that natural variability plays a significant role in C20th warming unless we argue that TCR to radiative perturbation including GHG forcing is relatively high?

21. BBD,
What I was meaning was that if you consider the TCR, the energy budget formula is

$TCR = \frac{F_{2xCO2} \Delta T}{\Delta F}.$

If some fraction of $\Delta T$ is simply natural variability, that would increase or decrease the TCR depending on whether the variability produces cooling (i.e., the forced response is greater than the measured $\Delta T$) or warming (the forced response is smaller than the measured $\Delta T$).

So, if Judith thinks that half the warming since 1950 could be natural, that would reduce $\Delta T$ by about 0.25 degrees. Therefore $\Delta T$ becomes 0.46 (instead of 0.71) and the TCR becomes 0.9 instead of 1.33. That just seems a bit implausible.
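Spelling that out, with $\Delta F$ as the illustrative round value implied by a TCR of 1.33:

```python
# Effect on the energy-budget TCR if ~0.25 K of the warming were natural
# variability rather than forced. dF is an illustrative round value.
F_2xCO2, dF = 3.71, 1.98

dT_observed = 0.71
dT_forced = dT_observed - 0.25    # remove the assumed natural contribution

tcr_all_forced = F_2xCO2 * dT_observed / dF
tcr_half_natural = F_2xCO2 * dT_forced / dF

print(f"all forced:   TCR = {tcr_all_forced:.2f} K")    # ~1.33 K
print(f"half natural: TCR = {tcr_half_natural:.2f} K")  # ~0.86 K, i.e. ~0.9
```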

22. BBD,

I need to think about this, but I think you have hit a key point here.

(1) Even the low ball numbers for TCR in this paper back up the late 20th C warming as being anthro, based on GHG forcing. https://andthentheresphysics.wordpress.com/2014/09/24/curry-for-dinner/#comment-32151

(2) But paradoxically, high natural variability requires high TCR – otherwise natural variation would be damped.

What I don’t know is how (2) could be quantified to put a low bound on TCR (or a high bound on natural variability, given a TCR)

As ATTP says, (1) falsifies Curry’s claim that late 20th C warming is >50% natural – or, to put it another way, gives a 0.9 K upper bound for the TCR

23. BBD says:

ATTP

Oh I see. Yes. Obviously. Sorry for being a little slow on the uptake 🙂

So presumably the best way to avoid bias arising from natural variability is to avoid using little slices of climate time-series data which will, by definition, be sensitive to decadal-scale natural variability (eg ocean heat uptake)?

24. BBD,
My guess is that for the instrumental temperature record you have two issues. If you make your final and base time periods long enough that you average out variability, then the comparison becomes so long that it becomes nonsensical (i.e., 1850-1910 as the base and 1960-2010 as the final). On the other hand, if you make them short enough that you really are roughly representing the initial and final periods (1850-1869 as base and 2000-2010 as the final), then you have issues with variability.

My understanding is that variability could be as much as a few tenths of a degree, so that could explain the somewhat lower TCR value that these methods give. Add in Cowtan & Way and it goes up a little again. Then you have possible inhomogeneities. As I think I’ve said before, these are nice sanity checks (i.e., they’re broadly consistent with other methods) and one can certainly explain the discrepancies as being a consequence of these simple methods being unable to properly represent some of the factors that will influence our warming.

25. With respect to the political implications of this new article by Lewis and Curry, since there is a very serious political war being waged on mainstream climate science, I have a question that I would like to have answered definitively by one who actually knows the answer:

Was this new article published as what is called a review article, or was it published as a refereed paper? (A “reviewer” need not be the same thing as a “referee” – we have to think like lawyers when trying to deal with how terms are used, especially when dealing with deniers.) And what is the quality of the journal in question in terms of its refereed papers?

The reason I ask is this: At least here in the US, the deniers are going to go nuts over this and try to squeeze every last drop of political blood they can out of this, to try to get the public to believe that it’s actually going to be OK even if humanity digs up all the coal and other fossil fuels in the planet’s crust and burns every last bit of it over the next couple or so centuries. I note the history here: They went crazy when that infamous greenhouse-gas-effect-denying paper

“FALSIFICATION OF THE ATMOSPHERIC CO2 GREENHOUSE EFFECTS WITHIN THE FRAME OF PHYSICS”
http://www.worldscientific.com/doi/abs/10.1142/S021797920904984X

was published in a peer-reviewed journal – they claimed repeatedly that the peer-reviewed literature supported them. But as I understand it, it was not published as a refereed paper. It was actually published only as a review article. And since almost no one knew this or even knew of the difference, these deniers scored a big political propaganda victory here in the US that lasted at least for a while.

Also, note that according to this

http://curry.eas.gatech.edu/onlinepapers.html

at her page at Georgia Tech

http://curry.eas.gatech.edu/

Curry has not actually published a refereed paper since 2010 or 2011, depending on what one counts as published. (If she has published not review articles but refereed papers more recently, and if someone has an updated list of her refereed papers, then please give a link.)

26. Tom Curtis says:

Anders, I’m not sure your line of argument works. Primarily that is because Lewis and Curry do not use the historical sea level rise to determine OHC, they use model sea level rise to determine model OHC and then downscale that to obtain their value. Consequently, unless your argument is that their model results are so far outside real world sea level rise results as to be falsified, the actual divisions within the real world sea level rise are not germane.

Further, even if we allow that they are relevant, your values do not seem to me to be correct. I estimate a glacier based SL rise from the paper you link to of 0.45 mm per year in the relevant period (close enough to your estimate to make no difference). However, Church and White only show approx 1 mm per annum sea level rise averaged over the period from 1880 to about 1920 because of a period of less than 0.5 mm per annum sea level rise in the early twentieth century. In the late nineteenth century the average is closer to 1.5 mm per annum than to 1. Jevrejeva et al (2008) show 30 year trends peaking just shy of 2 mm/year over that period. Consequently your estimate of OHC flux based on sea level needs to increase by a factor of 2 to 3, although with very wide uncertainties. So wide, in fact, that I agree with Lewis and Curry when they say the historical records aren’t sufficiently accurate for their method.

27. Marco says:

K&A, her page is just not updated. She has had several in 2012, and there was her stadium wave paper with Wyatt in 2014 that I know of. Just look for JA Curry on Google Scholar.

28. Tom,
Sure, you’re right that they use model results. I was just trying to get a handle on plausible values. You may well be right that the value I get should be increased by a factor of 2 to 3 but that implies quite a large system heat uptake rate during that period and, hence, that a low ECS (defined using the energy balance approach) becomes plausible. However, there appears to be little surface warming over the period 1850-1900 which seems difficult to reconcile with a system heat uptake rate well in excess of 0.2 W/m^2.

K&A,
I believe that this is a peer-reviewed paper and that the journal is a good journal.

29. Tom Curtis says:

Anders, I agree that it implies a low ECS, and that it seems implausible. I think there are simply too few tide gauge records, too narrowly concentrated for the record to be useful. The network of tide gauges used in Jevrejeva (2008) is essentially that of Jevrejeva (2006), which had just 5 tide gauges in 1850, all of them in Europe. They indicate the large trend from 1850-1870 is probably due to the addition of new gauges (in the NE Pacific and NW Atlantic, ie North America) rather than an actual trend in sea level. Jevrejeva (2008) extended the results prior to 1850 using just three gauges, all in Europe.

I note that a similar problem besets HadCRUT4 prior to 1880. GISS does not stop its temperature reconstruction at 1880 because they do not have temperature records prior to that, but because in their opinion the records are too few, and from too few regions to accurately represent a global temperature. I suspect that would represent a particular problem for accurately detecting trends.

30. BBD says:

ATTP

However, there appears to be little surface warming over the period 1850-1900 which seems difficult to reconcile with a system heat uptake rate well in excess of 0.2 W/m^2.

The obvious (to me, at least) implication is that there is a problem with the heat uptake estimate for this period. That said, the global recession of glaciers seems to get going around 1850 (eg. Leclercq et al. 2011, Fig 2). So perhaps the issue is with the surface temperature record.

31. I was just looking at Hansen et al. (2011). Admittedly it’s models and they only go back to 1880, but if you consider Figure 7 the planetary energy imbalance between 1880 and 1900 is not positive.

As Tom is suggesting, maybe the data is just too uncertain to really say much about what it would be during this era.

32. Marco says:
.. doesn’t this mean Curry admits most of the warming since the 1950s is due to anthropogenic activity? After all, not much warming left to explain with such a low TCR…

That’s another example of an #OwnGoal. Stay inside the confines of the trick-box zone and watch what happens.

Perhaps Lewis and Curry need Adult Supervision (?) when they start playing with the math (?)

33. Joshua says:

Marco and WHT –

I couldn’t do the math, so I asked Judith about that question… her answer was crickets (the same answer I got from the rest of her denizens).

VTG answered my question, and I just responded to him on that very same point:

https://andthentheresphysics.wordpress.com/2014/09/24/curry-for-dinner/#comment-32325

34. Marco,

We should not go a bridge too far:

The paper was not intended as ammunition in the climate wars. It was designed to clarify the sensitivity of sensitivity to uncertainties in external forcing, something that hasn’t been systematically done before.

I remain very concerned about abrupt climate change, but I am also working to demonstrate that if you accept the IPCC framing of the climate change problem, e.g. ‘forced’, that models are over sensitive and the sensitivity is lower than inferred from climate models.

http://judithcurry.com/2014/09/24/lewis-and-curry-climate-sensitivity-uncertainty/#comment-632486

I have not read the paper, but I’d pay due diligence to how that IF part is represented in the article. It’s usually in the discussion or the conclusion, where the authors say what they could have done instead.

***

On the other hand, it would be quite proper to insist that Judy declares what the IF she uses to “demonstrate” implies, e.g. regarding attribution.

One could also wonder what Judy is “also working to demonstrate” IF Lewis & Curry 14 is basically a rerun of Lewis & Crok 14.

I’m busy at Nick’s and elsewhere. You’re on your own. This ClimateBall ™ material is provided for general information only.

35. Christian says:

Hi,

In my opinion, and contrary to the opening post, it’s fundamentally wrong. Let me explain why.

L&C (Lewis & Curry) use Gregory et al. (2013) for the thermal expansion, and they also use the conversion factor of 0.47 Wm⁻² per mm/yr. This value comes from Levitus et al. (2012), who derived it from the recorded increase in OHC. They then apply the Levitus conversion to the Gregory et al. (2013) values:

“Taking their average of 0.47 Wm−2 per 1 mm yr−1, the Gregory et al. (2013) GMSL rise rates equate to 0.26 Wm−2 over 1860–1882, 0.16 Wm−2 over 1860–1900 and 0.33 Wm−2 over 1930–1950.”

In the next step they do this:

“However, the CCSM4 model has TCR and ECS values of 1.8 K and circa 3.0 K that are some 35–85% higher than the best estimates for those parameters arrived at in this study. We therefore take only 60% of the base period heat uptake estimated from the Gregory et al. (2013) simulations, giving 0.15 Wm−2 for 1859–1882, 0.10 Wm−2 for 1850–1900 and 0.20 Wm−2 for 1930–1950.”

That is physically wrong, because CCSM4 would give those values under its own conditions; under L&C’s conditions it would mean that the thermal expansion is underestimated compared with the CCSM4 conditions.

So, in their approach, they use the Levitus conversion to obtain the value for their period, and then scale that value down to about 60%.

They thus get a value of 0.51 Wm⁻² for 1995-2011.

That implies that the thermal sea level rise from ocean heat uptake is only 1.09 mm/yr for the 1995-2011 period. We have recorded nearly 3.2 mm/yr in this period, which would mean only about a third of the sea level rise comes from heat uptake by the oceans. That is very unlikely.

But if they are not wrong, it would mean that 2/3 of the sea level rise has another source (glacier melting), which would imply that the earlier glacier-melt contribution to heat uptake is totally underestimated.

With the consequence that even if they are right, they are still wrong, because L&C say:

“Heat uptake by other components of the climate system is ignored.”

And that’s the problem, because then their value of dQ is badly biased low
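Christian’s back-of-envelope check is easy to reproduce: invert the Levitus et al. (2012) conversion factor to see what thermal sea level rise a 0.51 Wm⁻² uptake implies for 1995-2011 (the observed total rise of ~3.2 mm/yr is the figure quoted in the comment):

```python
# Consistency check on the downscaled heat uptake, following the comment's
# argument: invert the Levitus et al. (2012) conversion factor.
CONVERSION = 0.47            # W m^-2 per mm/yr of thermosteric SLR (Levitus)

dQ_downscaled = 0.51         # L&C's 1995-2011 ocean heat uptake (W m^-2)
implied_thermal_slr = dQ_downscaled / CONVERSION   # mm/yr

observed_slr = 3.2           # total observed SLR 1995-2011 (mm/yr)
thermal_fraction = implied_thermal_slr / observed_slr

print(f"implied thermal SLR: {implied_thermal_slr:.2f} mm/yr")  # ~1.09
print(f"fraction of observed SLR: {thermal_fraction:.0%}")      # ~34%
```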

36. anoilman says:

KeefeAndAmanda: Publishing dissenting opinions in journals is not wrong, and is encouraged. All a paper in a journal is, is a well structured argument, and nothing more. It should show its claims, show its evidence, and it should discuss its errors.

(Hence these debates should be left in journals, as they should be more level headed about this stuff. You can see from the comments that Anders is making what he thinks of the paper, and possibly what kind of technical rebuttal could be written if he’d bothered.)

What this means is pretty straight forward. No one listens to just one paper. It takes a lot of follow on work to really substantiate it. What’s not obvious about the follow on work is that folks who read these articles will take it upon themselves to go over and prove or disprove what was written. This is how the science is furthered, disproved, or fraud (actual fabrication of data) found.

This means that consensus more than anything else is what you need to look at. Not outliers. There will always be outliers. Anders discusses one such outlier here which the denial community latched on to;
https://andthentheresphysics.wordpress.com/2014/09/09/matt-ridley-you-seem-a-little-too-certain/

Regarding the paper you found, look up the man who did it. Its very enlightening.
http://www.desmogblog.com/gerhard-gerlich

I can assure you that the absorption of energy by green house gasses is real. I used to build equipment that measured those gasses for oil refineries, and I’d say Tyndall was spot on;
http://www.gps.caltech.edu/~vijay/Papers/Spectroscopy/tyndall-1861.pdf

37. Christian,
Yes, good points. I agree that their calculation of system heat uptake rate for the two periods is inconsistent unless the contribution to sea level rise for the 1995-2011 period is only 1/3 or so of the total. It’s clear that their assumptions about the change in system heat uptake rate have a dominant impact on their result for the ECS.

38. anoilman says:

Here’s Richard Alley talking about Climate Sensitivity; (About 15:30-16:30 minutes in, is the bit relevant here, particularly the ‘one paper syndrome’.)

39. Christian says:

Hi,again,

Thats not the only Problem, you have taken in your Topic the Effect of Cowton and Way.. i have made it with C&W-Data the ECS rise to 1,8K for the best estimate. Another, its not so important if they rigth or wrong about ocean heat uptake, because they cant explain the Value of recorded Sea-Level-increase for 1995-2011 and that imply that their dQ is to small, because they dont looking for Full-System-Heat-Uptake.

So what is the conclusion from L&C? That ignoring (or underestimating) the full system heat uptake can have a large impact on the effective sensitivity?

Sorry, but in my opinion L&C gives us nothing new.

What do you think?

Greets from Germany

40. Christian,
Cowtan & Way is clearly an issue. I agree that L&C is not really anything new. They’re using a method that’s known to probably underestimate climate sensitivity and have been able to find datasets that allow them to estimate a low change in system heat uptake rate and hence get a low ECS best estimate. The issue with sea level rise is a real one, as it appears to be inconsistent with at least one of their system heat uptake rates (either for the base interval or the final interval).

41. BBD says:

Just like to thank ATTP (who is supposed to be on a well-earned rest break from all this) for doing the L&C post. Helpful and informative and very much appreciated.

Now crack a good bottle and relax!

42. Now crack a good bottle and relax!

Already done. It’s well after 5 🙂

43. Tom Curtis says:

Anders, I believe you to have been mis-cited at SkS, and also at Carbon Brief. You may want to pop around in comments of both so you don’t get told how absurd your opinions are based on somebody else’s misunderstanding.

44. Nice pingback!

45. Rob Nicholls says:

What BBD said in his last comment.

46. Further to Christian’s well made point (IIRC the thermal expansion vs glacial melt contribution to SLR is approximately 50/50 these days), I still have strong issues with the late 19th century estimate of OHC changes (1859-1882). The (model-derived) imbalance back then is clearly of volcanic origin (the 1810-1840 eruptions), while it is mainly of anthropogenic origin now. I therefore tend to think of an additional pseudo-forcing between 1860-1880 which later disappears, while temperatures will have risen slightly in response, bringing them closer to the actual equilibrium state. As a result, both dT (between 1860-1880 and 1995-2011) and dF (the difference between the same periods) are lowered in “reality”. But the analysis (Otto et al. as well as the extremely lame copy in question) doesn’t seem to reflect this issue, since dF wouldn’t be reduced. It’s certainly reflected in the GCM (CCSM4), but those numbers are conveniently downscaled at NL’s whim. If I am not wrong, this might be another source of underestimation. Not sure, though, whether my reasoning is understandable in the first place 😉

On a related note, NL would of course stick to his arbitrarily lowered aerosol forcing (using only “observational” data). He continues to think that he is smarter than all the AR5 experts on the subject put together. After all, nothing of substance and, what’s a bit worrying, nothing new whatsoever, which is usually a reason for rejection. But then, I couldn’t care less.

47. Chic Bowdrie says:

ATTP,

I see Tom Curtis’ detection of the links to this post at SkS and Carbon Brief and raise you one more: go to http://www.bishop-hill.net/blog/2014/9/26/carbon-brief-does-energy-budgets.html

48. Tom,
I’ve left a comment at SkS and will do the same at Carbon Brief. One thing that I’ve been trying to work out (and failing) is whether their method incorporates the flux of energy into the deep ocean or not. Technically, even today there is a flux into the deep ocean which presumably slightly reduces $\Delta T$ and slightly increases $\Delta Q$. On the other hand, I think these energy budget models are essentially simplified one-box models which assume a fixed heat capacity, which makes me think that they don’t properly include the flux into the deep ocean that will continue for many hundreds of years after CO2 has doubled (i.e., I think the energy budget model assumes we will equilibrate at a rate set by the rate we’re warming now due to changes in forcing). It’s early, so what I’ve said may not make sense.

49. Chic,
That’s a bit of a pity but we can’t really expect Andrew M. or those who comment regularly at BH to consider the broader points. Nit picking minor errors and cherry-picking is what they do best, and if we took that away from them, they’d have nothing left.

50. Karsten,
Let me see if I get what you’re saying. Volcanoes produce a negative forcing and hence cooling. However, the system equilibrates quite quickly afterwards (a few years for a single volcanic event). Therefore volcanoes don’t, by themselves, move us as much out of equilibrium as anthropogenic forcings do, hence associating the imbalance in the 1800s (which is primarily due to volcanoes) with the imbalance today (which is primarily due to anthropogenic influences) is not quite consistent. Is that about right?

51. Actually, I was thinking about this on the bus and I don’t think the deep oceans are, strictly speaking, relevant. The equilibrium temperature depends on the change in forcing and the feedback response. What the deep oceans do is determine the rate at which we tend to equilibrium. If the rate at which energy accrues in the deep ocean is slow, then we’ll tend towards equilibrium quickly but will remain slightly below it for a very long time, so as to sustain the small planetary energy imbalance required to feed energy into the deep ocean. If the rate is fast, then we will approach equilibrium more slowly, so as to sustain a large planetary energy imbalance, but will reach equilibrium sooner than if the rate were slower.

The main reason that this is an effective climate sensitivity is that it essentially assumes that feedbacks are linear, and this may not be true (they may depend on the climate state). Of course, it also ignores slow feedbacks, which will have an effect too but which, strictly speaking, the formal definition of the ECS ignores as well.

52. Christian says:

Hey Karsten,
Good to hear from you, it’s been a while..

I agree with your argument about the volcanic origin; their “correcting” of CCSM4 is fundamentally wrong. It’s something like: well, I’ve got 5 dollars in my pocket, but I want to buy an item that costs 10 dollars; I find it too expensive, so I reduce the price by about 60%.

In the context of their paper, their approach totally underestimates how much of the thermal expansion the GCM reflects for their periods. That’s the reason why, in their paper, only about 1/3 of the real sea level rise can be explained by ocean heat uptake, and this value is very unlikely; as you said, it’s around 50%.

And if we undo their “correcting”, we get 1.75 mm/yr for 1995-2011, or 54% of the total rise.

@ ATTP

I have to correct you: after a single event the system, and especially the OHC, does not equilibrate quickly; the effects are detectable on longer-than-decadal timescales. For example, if I work with a box model (which, with two boxes, generally reflects the fast and slow responses to dF), I find that the early 19th century volcanic eruptions (volcanic forcing only) have effects up to 1940, because the ocean is unable to take up the whole forcing imbalance in a single year. That implies that the relaxation phase (the time the ocean needs to compensate for the losses due to the eruptions) is longer than one year. Testing this, I find for the early 19th century eruptions a relaxation phase of over 40 years.

Do I understand Karsten correctly that this is what he means: their base period (as in Otto et al.) is totally volcanically contaminated?

Greets

53. Christian says:

Sorry, I made a mistake; it should read late 19th century, not early.

54. Christian,

I have to correct you: after a single event the system, and especially the OHC, does not equilibrate quickly

Yes, I did wonder. I guess I’m trying to work out what Karsten is getting at. A volcano produces a negative forcing and hence cooling. This means the system will lose energy. Once the volcanic aerosols have precipitated out, we’ll then have a positive energy imbalance and will regain the lost energy, over a time that presumably depends on how much energy was lost. So, if there was a lot of volcanic influence in the early 19th century, then the late 19th century may see a positive imbalance and a reduced temperature because of these volcanoes. Hence it would seem that this should be included in the calculation of the ECS (or EffCS), since part of the reason temperatures are rising is this influence. So, it sounds like what you’re saying is that the way they’ve corrected for this is wrong, rather than that they’ve failed to correct for it at all. Is that about right?

55. Christian,
Actually, if there was lots of volcanic activity in the late 19th century, that would suggest that the planetary imbalance during that period should have been negative (losing energy) rather than positive (gaining energy). Right?

56. AlecM says:

[Mod : Sorry, that’s just a bit too wrong to post and I’ve had too many comment threads degenerate through regulars trying to explain the basics of the greenhouse effect to those who neither understand it nor are willing to try and understand it. Come back if and when you’ve worked it out.]

57. BBD says:

Karsten

But then, I couldn’t care less.

And it is easy to see why a serious scientist would feel this way about NL and Curry. However, NL and Curry are building up a small body of published work which is being used to “prove” that the consensus is “wrong” and policy need not substantially change.

It would therefore be enormously helpful if scientists who understand what NL is doing would respond – ideally a reply in the literature. Otherwise these studies will be used for years to mislead and confuse policy makers.

58. Christian says:

@ BBD

It’s not true that NL and Curry are building up a body of work showing that the consensus is “wrong” and that policy need not substantially change, because their model is only one part of estimating the ECS; there are lots of other parts, like paleoclimate, models, or combination methods, in addition to observation-based models like L&C. And there are other methods of estimating the ECS, e.g. using box models.

For example, if I use box models I can get ECS = 3.22 K, and with this value I can explain the recorded temperature rise.

You see, it’s all a question of which model or method you use; none of these methods is perfect, and that means we have to look at the sum of the methods, not just one part.

@ ATTP

My answer follows later, for now…

59. BBD says:

Christian

You see, it’s all a question of which model or method you use; none of these methods is perfect, and that means we have to look at the sum of the methods, not just one part.

I know that and you know that 😉 but these subtleties are lost on most policy-makers. And the professional misinformers and lobbyists exploit that knowledge deficit ruthlessly and continuously.

I assume you are aware of Curry’s role in misleading the US Senate and Lewis’ affiliation with Lord Lawson’s lobby group the GWPF here in the UK? Seeing them now working together is a rather disturbing spectacle for many observers of the interface between science and public policy.

60. ATTP, I think you got it right in your 10:41 comment. Early 19th century eruptions (1809, 1815, 1835-38 … all of them very strong) pushed the oceans considerably out of equilibrium, causing a positive imbalance which will have led to a very small but sustained rise in global temperature over the next 50-100 years (it is indeed reasonable to assume a 40-year response time, as Christian suggested). So we have an easily detectable 3-5 year fast response (corresponding to the particle residence time in the stratosphere) and a 40-50 year slow response which reflects the thermal imbalance of the oceans. See the work of Gleckler, Gregory, or Stenchikov on that subject. In other words, this is the rate at which the deep ocean exchanges energy with the atmosphere (we may call it deep ocean recovery).

In essence, perhaps +0.1-0.2 K post-1850 is due to preceding volcanic activity. So you will have a slight temperature rise over the 1860-1880 base period, forced by the oceans. This forcing will of course fade eventually (reducing the actual forcing slightly), while the temperature remains at the higher level. It won’t make a big difference, but it is something which has to be appreciated in the uncertainty estimate.

On top of that, Krakatoa (1883) and other later eruptions will have caused similar issues (resulting in a negative imbalance), but this might not matter in an analysis which ignores 1880-1990 completely, as things will have balanced out in that the negative imbalance due to the eruptions was counteracted by the following positive imbalance due to deep ocean recovery. However, Pinatubo and co. will have had a minute effect on the analysis, although much smaller than the post-1840 one. To know better, you inevitably need GCMs. Sure, we still won’t know exactly what the ocean-atmosphere heat exchange coefficient is, but EBMs suggest the oceans return heat for 30-50 years after volcanic eruptions (longer for multiple eruptions).
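[Editor’s note: the fast/slow two-timescale picture described above can be sketched with a toy relaxation model. The amplitudes and timescales below are assumptions chosen purely for illustration — a ~4-year fast (mixed-layer) response and a ~40-year deep-ocean recovery, as suggested in the discussion — not values fitted to any data.]

```python
import math

# Toy two-exponential sketch of the temperature response to an impulsive
# negative (volcanic) forcing: a fast mixed-layer component and a slow
# deep-ocean recovery component.  All parameter values are illustrative.

def temperature_anomaly(t, a_fast=-0.3, tau_fast=4.0, a_slow=-0.1, tau_slow=40.0):
    """Temperature anomaly (K) t years after an impulsive negative forcing."""
    return a_fast * math.exp(-t / tau_fast) + a_slow * math.exp(-t / tau_slow)

# After a decade or two, the fast component has decayed and the remaining
# anomaly is dominated by the slow deep-ocean term, which lingers for decades.
for t in (1, 5, 20, 50, 100):
    print(t, round(temperature_anomaly(t), 3))
```

The point of the sketch is only that, with a ~40-year slow timescale, an early-19th-century eruption cluster still leaves a detectable imbalance decades later, consistent with the 1860-1880 base period being affected.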

61. BBD,
I really don’t care what people make of NL and/or Curry’s work anymore. As long as the scientists who are in charge of condensing the published work get it right, I am not concerned. However, some serious scientists seem to be falling for NL’s superficial work as well (the reviewers of the article quite obviously did). Sure, it would help to counter each of these attempts, but then it has already been done several times. So I don’t really see why anyone would need another futile attempt to do the same thing all over again. Those who want to believe in NL’s and Curry’s work won’t be swayed by evidence to the contrary anyway, right!?

In any case, one of Myles’ DPhil students is working on a more sophisticated version of Otto et al. I don’t know the outcome yet, but NL is not involved, so it should be unbiased 😉

62. BBD,
one more thing: what really concerns me is the fact that not only do several distinguished Met Office colleagues use the word “pause” interchangeably with “slowdown” (see the MO report which uses “pause” even in the headline), but it has permeated into the scientific literature without the scientists realising how detrimental it actually is to any communication effort aimed at a wider audience. I am not saying that scientists should have to weigh every single word they publish, but the way they frame certain issues will inevitably confuse people. A nightmare for science communicators, much more so than any single NL paper could ever be. I don’t see any quick solutions, given that this is the way scientists are “set up” to work (in particular, they don’t like to be perceived as “one voice”, which explains the resistance in some quarters to John Cook’s efforts). What I do know, however, is that my own colleagues’ sloppiness in that regard has put me off more frequent contributions in the blogosphere and elsewhere.

63. Tom Curtis says:

Running through some numbers, Lewis and Curry claim that, “However, the CCSM4 model has TCR and ECS values of 1.8 K and circa 3.0 K that are some 35–85% higher than the best estimates for those parameters arrived at in this study.” According to the IPCC AR5, the values are 1.8 and 2.9 K respectively, but according to Gent et al (2011) (citing Bitz et al (2011)), they are (respectively) 1.72 K and 3.2 +/- 0.1 K for the 1 degree resolution. The values for the ECS are 2.93 C for the lowest (T31) resolution, and 3.13 C for the 2 degree resolution. Bitz et al, however, use a version of the CCSM 4 with a slab ocean, whereas Gent et al describe a full AOGCM, which was presumably also used by the IPCC and also by Gregory et al (2013), although at what resolution is unclear. Thus far the Lewis and Curry numbers seem a fair representation of the case.

Lewis and Curry find a TCR of 1.33 (0.9 – 2.5) K, and an ECS of 1.64 (1.05 – 4.05) K. That yields a ratio of 0.74 (0.5 – 1.39) for the TCR, and 0.55 (0.35 – 1.35) for the ECS. Their range of 35-85% seems to understate the case, although in a way (with their downscaling method) which favours a higher derived ECS.

Turning to that method,
1) ECS = F(2x)*ΔT/(ΔF − ΔQ), while
2) TCR = F(2x)*ΔT/ΔF.
Therefore ECS = TCR * ΔF/(ΔF − ΔQ), or, simplifying, ECS = TCR/(1 − ΔQ/ΔF), and hence
3) TCR/ECS = 1 − ΔQ/ΔF.

Considering only mean values, that means for the CCSM4 1 − ΔQ/ΔF = 0.6, or ΔQ/ΔF = 0.4. In contrast, based on Lewis and Curry’s results, 1 − ΔQ/ΔF = 0.81, so that ΔQ/ΔF = 0.19. If somebody could work out the uncertainties, I would be interested, but my maths isn’t up to skewed uncertainties as yet. For their main result, Lewis and Curry find a ΔF of 1.98 W/m^2 and a ΔQ of 0.36 W/m^2, giving a ratio of 0.18 – close enough to be accounted for by rounding error.
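[Editor’s note: the arithmetic in the preceding paragraph can be checked with a short self-contained snippet, using only the L&C numbers quoted in the comment.]

```python
# Energy-budget relations from the comment above:
#   ECS = F2x * dT / (dF - dQ),  TCR = F2x * dT / dF,
# which together give TCR/ECS = 1 - dQ/dF.

dF = 1.98  # W/m^2, L&C main-result change in forcing
dQ = 0.36  # W/m^2, L&C main-result change in system heat uptake rate

implied_ratio = 1 - dQ / dF
print(round(implied_ratio, 2))   # -> 0.82

# Compare with the ratio of the L&C best estimates quoted above:
TCR, ECS = 1.33, 1.64
print(round(TCR / ECS, 2))       # -> 0.81, consistent to within rounding
```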

Now, the fun bit.

Lewis and Curry give a 2011 forcing relative to 1750 of 2.2 W/m^2. Together with the ΔF for their main result, that implies an average forcing in the period 1859-1882 of 0.68 W/m^2. I’m not sure, but it appears to me that that ratio should be the same as that between ΔF and ΔQ, and that consequently this is an inconsistency, showing they have overestimated Q in the late nineteenth century by a large margin. Is that correct?

64. Tom Curtis says:

In the second last paragraph, Delta;Q should obviously be ΔQ. Less obviously, the stranded Δ in the last paragraph should be ΔF. Oh the trials of cutting and pasting html code at 2:30 am 😉

65. Christian says:

ATTP,
Hm, this is already well explained by Karsten. In my opinion, L&C’s “downscaling” is the biggest problem, because simply scaling a GCM value down to their value is physically wrong, with the consequence that thermal expansion is totally underestimated.

@ BBD

In fact, I am not really interested in things like the interaction between climate science and public policy, because in my opinion it is impossible to bring the evidence to “ordinary people”. And the way we bring evidence to ordinary people is to simplify climate science in public discussions, and this gives lobby groups and misinformers the chance to win the public debate.

66. BBD says:

Karsten and Christian

Thank you both for your responses.

67. Karsten,
Your 3:44pm comment is very interesting. I’ve wondered the same myself. Why are we using terminology that isn’t even correct (i.e., what’s paused?) and that appears to have been generated by skeptics as a way to underplay AGW? In a similar vein, there is a recent BH post about “angry skeptics” and why they have a right to be angry. Part of what’s been suggested seems to be that few mainstream climate scientists have disowned John Cook and Michael Mann (who are both, according to the writer, obviously unethical people) and that until they do so, climate science as a whole cannot be trusted. There were various other claims. Not only have I seen nothing to suggest that John Cook or Michael Mann deserve to be thrown to the wolves to appease some of the most unpleasant people I’ve had the misfortune to encounter, but why scientists should comply with what is essentially blackmail is beyond me. I don’t understand why we should appease people who make such demands.

Tom,
What you say seems reasonable, although I’ve been out all day and am fairly tired. It seems related to the point that I think Christian was trying to make. If you use sea level rise to determine the system heat uptake rates in the 1800s and then use OHC etc. to do so today, the result appears inconsistent since using sea level rise today would give a different result.

Having said that, I’m not entirely sure that the ratio of $\Delta F$ to $\Delta Q$ has to be the same if the planetary energy imbalance in the mid-1800s is due to a period of earlier volcanic activity. In that case, however, I think we run into the issue that Karsten was suggesting, which I think is that, even though it may take a few decades to equilibrate, having a planetary energy imbalance due to earlier volcanoes is not quite the same as having one of the same magnitude today due to an increase in anthropogenic forcings (i.e., the influence of volcanoes is unlikely to persist for long enough to move the entire system out of equilibrium, so the recovery is shorter – I think).

68. Christian,

And the way we bring evidence to ordinary people is to simplify climate science in public discussions, and this gives lobby groups and misinformers the chance to win the public debate.

I think that is a good point. Any attempt to explain something is typically attacked because the analogy wasn’t perfect or because something was said that appears wrong (although this is often because the person doing the criticising doesn’t know enough to really understand if it’s right or wrong).

69. Tom Curtis says:

Anders, I’ve now convinced myself that the ratio cannot be constant. Specifically, consider the time when temperature reaches equilibrium after a given change of forcing: F is then constant, but Q approximates to zero, so Q/F also approximates to zero. Ergo, we can only require Q/F to equal ΔQ/ΔF when there is a near-constant change of forcing over time (if then).

70. Tom,
Yes, I think that is right. I’m still trying to make sure I understand the whole volcano issue, but it’s been a long day and I think I need to go and cook something on the barbecue.

71. John Hartz says:

In his article, Study lowers range for future global warming, but does it matter? (Capital Weather Gang, Washington Post, Sep 26, 2014), Jason Samenow references and links to ATTP’s OP.

Here’s what Samenow has to say:

“The blog And Then There’s Physics lists several reasons why Lewis and Curry’s estimate could be too low, including not fully accounting for the transfer of heat between the ocean and atmosphere.

“It further cautions one must be careful in leaping to conclusions from the results of a single study. Computer modeling studies generally estimate higher values for ECS, as do some studies based on paleoclimate data.”

72. Christian says:

ATTP,
You’re right in your 5.09 pm comment about what I wanted to underline.

It is also worth noting that the calculation of how much dQ corresponds to 1 mm of sea level rise is questionable.

They write: “..data for the 0–2000 m ocean layer in Levitus et al. (2012). Taking their average of 0.47 Wm−2 per 1 mm yr−1, the Gregory et al. (2013) GMSL rise rates equate to 0.26 Wm…”

The next problem is that the effective sea level rise per unit of heat uptake changes as you include deeper ocean layers.

We can test this with Levitus (2012):

Be careful: for 0-700 m you have to convert from the ocean surface to the Earth’s surface, because Levitus (2012) only gives an ocean-surface equivalent for this layer.

0-700 m: 0.19 W/m^2 per 0.41 mm/yr = 0.46 W/m^2 per mm/yr
0-2000 m: 0.27 W/m^2 per 0.54 mm/yr = 0.50 W/m^2 per mm/yr

This implies that the effective sea level rise per unit of heat uptake varies with depth. They nevertheless use their 0.47 W/m^2 per mm/yr, but when I look at Gregory (2013) I can’t find any information about the depths from which the thermal expansion results (if anyone else finds it, please respond). That would imply that determining heat uptake from sea level is more difficult than assumed.

So if we look at Balmaseda (2013) and at the temperatures for 0-100 m and 0-2000 m (http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/basin_avt_data.html), we see that the increase at shallow depths slowed down while the deeper ocean warmed much more. In other words:

In the period 1995-2011 (linearly interpolated, i.e., the last value of the trend divided by the first value of the trend):

the 0-100 m temperature increased by a factor of 2.28
the 0-2000 m temperature increased by a factor of 22.5

That means the temperature increase was about 10 times larger for 0-2000 m than for 0-100 m, which also implies that more heat was stored below 100 m over the period 1995-2011. This is consistent with

how Levitus (2012) demonstrates that sea level rise by thermal expansion is also a function of depth. This means that simply using the thermal sea level rise is very problematic, and their value of heat uptake is very questionable on all timescales.
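[Editor’s note: Christian’s depth-dependence check amounts to simple arithmetic; the numbers below are those quoted in the comment (treat them as illustrative, not authoritative).]

```python
# Heat uptake (W/m^2, Earth-surface equivalent) divided by the corresponding
# thermosteric sea level rise (mm/yr), for two depth ranges from Levitus (2012)
# as quoted in the comment above.

ratio_0_700 = 0.19 / 0.41    # 0-700 m
ratio_0_2000 = 0.27 / 0.54   # 0-2000 m

print(round(ratio_0_700, 2), round(ratio_0_2000, 2))  # -> 0.46 0.5
```

The ratio differs with depth, which is the point being made: a single conversion factor (L&C’s 0.47 W/m^2 per mm/yr) hides this depth dependence.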

73. Steve Bloom says:

Karsten, my impression, albeit from a considerable distance, is that in the aftermath of Climategate, the rise of UKIP, the public failure of their attempt at seasonal projections and the unfortunate fact that the UK press seems to be composed largely of scandal rags, the MO decided that they needed to appear to be more humble, engaging politely with denier critics and (over)emphasizing uncertainty. Whether they did that just based on internal discussion or also took advice from communication professionals would be interesting to know. This approach seems to have propagated itself into the WG1 report’s treatment of the “hiatus,” perhaps unsurprisingly due to the lowest-common-denominator nature of the IPCC process.

The MO itself makes for a big fat juicy target for those deniers and scandal rags, unlike government climate research centers in the U.S., which are divided across several departments and then further sub-divided into individual shops. It doesn’t help that the MO has a mandate to get income by serving private customers.

I once had an interchange with Richard Betts based on an assertion on his part that government regulations prevented the MO (or him as an individual) from engaging in policy advocacy in public. He pointed to two sets of regulation as the basis for this stance, neither of which said any such thing. It’s entirely possible that Richard believed what he was saying to be true since he had been told by MO higher-ups that it was, but my respect for him went down a few notches at that point. I can only hope the same carelessness doesn’t creep over into his modeling work.

74. Steve,
I don’t know the regulations, but I happen to work with a number of people who are also – like Richard – essentially civil servants and there certainly appears to be quite strict rules about what they can say publicly. Whether the interpretation of the rules is correct or not, I don’t know, but I certainly get the impression that they do have to be careful about what they say publicly.

75. WFC says:

May I intrude on this very interesting discussion to make one or two comments about some of the remarks made on this thread?

Firstly, that whilst it begins by (very properly) examining and critiquing the paper, it (the discussion) seems to have (partly) changed into a “not in front of the children” discussion – whereby concerns about the political message which might be picked up from it are being considered and, similarly, about the bona fides of the authors.

Do either of those issues have any relevance to the question of whether or not this paper is robust? And if they do not, how are the people raising them any different from the “Mann is a whatever” brigade?

ISTM that the vast politicisation of this field (and nobody should pretend that it is all the fault of the “other side” whichever “side” they are on) serves only to muddy waters: making it far more difficult for interested laymen to navigate.

Moreover, any competent advocate would tell you that the last thing you would want to do, if you have the stronger case, is to allow it to become obscured by irrelevant smoke.

76. WFC,

Firstly, that whilst it begins by (very properly) examining and critiquing the paper, it (the discussion) seems to have (partly) changed into a “not in front of the children” discussion – whereby concerns about the political message which might be picked up from it are being considered and, similarly, about the bona fides of the authors.

Maybe you could point out where it has turned into a discussion about the political message and the bona fides of the authors. I haven’t read the comments in excruciating detail, but they seem to have focused on the calculation of the change in system heat uptake rate – which is neither political nor related to the bona fides of the authors.

77. Steve Bloom says:

Careful, of course, Anders, but there’s a big difference between that and perceiving oneself to be gagged. I should add that one set of regulations Richard pointed to was a prohibition of involvement in partisan politics while speaking in an official capacity, a principle I entirely agree with but which has *nothing* to do with (or maybe better put, can easily be kept quite distinct from) climate policy discussions. That he purported those regulations to cover the latter left me gobsmacked.

78. Steve Bloom says:

WFC, any competent advocate would also tell you that the first thing you would want to do, if you have the weaker case, is to work to obscure the stronger one with irrelevant smoke. If the weaker side has better access to the media and other levers of power, the strength of the respective cases isn’t very relevant in the short term. The relevant techniques are tried and true, and that they’ve been extensively applied against climate science is a matter of clear public record. I refer you to John Mashey’s research.

79. Steve,
Indeed, I was tempted to point that out myself. In an ideal world we’d focus on what was actually presented, its strengths and its weaknesses, and evaluate whether or not it has merit. In the real world, it’s attack whatever you can and if success requires focusing on the political message or the bona fides of the authors, go ahead and do so. To be clear, I’m not advocating this and find it objectionable myself, but it certainly seems to be a standard tactic for some.

80. WFC says:

10.08, 12.51, 11.44 (27th), 12.35, there may be others.

I might add that I thought that the 50% anthropogenic comment from Curry was an extrapolation from a paper she was discussing – ie “if that paper is correct then that would lead to the conclusion that 50% of warming was anthropogenic” – rather than a statement of fact.

Steve Bloom

Indeed the weaker side (if it knows it is weaker) will try to muddy the waters – and try to drag the stronger side into those muddied waters. Oldest trick in the book: and the easiest one not to fall for.

Not sure about the “better access to the media” point, though. The day Lindzen is invited to appear on the BBC is the day I might consider that to be an arguable point.

81. Thermal expansion of ocean water does, indeed, vary considerably from one part of the oceans to another. The expansion is strongest at relatively high temperatures, and remains significant at very high pressure even at low temperature; it’s weakest at low temperatures when the pressure is not very high. There are very large volumes of cold water at pressures that still lead to relatively little expansion, as at depths of 1000-2000 m. Thus warming of those layers leads to less expansion than warming of most other parts of the ocean. (Cold near-surface water expands even less.)

My above comments are based on this table of coefficients of thermal expansion and a superficial survey of distribution of temperatures in oceans.

82. WFC,
Okay, there are maybe some. Clearly Judith doesn’t claim that the 50% warming is a fact, but she does suggest it is possible, when the evidence in support of this position is extremely weak (virtually non-existent).

83. Steve Bloom says:

You’re gonna need a better example, WFC. In any case debating about the balance or lack thereof in the coverage that exists misses the far more important point that irrelevant smoke can succeed simply by getting the media to not cover an issue in scale with its importance. The little bits of coverage given the likes of Lindzen aren’t the problem.

Re Curry and Lewis, they very much do have form, and it’s entirely appropriate to make reference to that when discussing their new joint exercise in cherrypicking. Were the paper a serious contribution to the science, I expect you’d be seeing fewer such references.

BTW, it’s interesting that you should want to see someone with Lindzen’s poor track record given more prominence by the media. Doubtless you think the same of the tobacco industry apologists and advocates for creationism, all ideas being at root equal.

84. There’s no doubt that “the other side” does its best to spread confusion and smoke, but it’s a totally different question whether answering in kind is a good choice. As WFC wrote, the side which is backed by the stronger real case must do its utmost to stop the practices that spread smoke and confusion. Answering in kind makes the better evidence moot, as all rational argumentation will disappear in the smokescreen.

85. Pekka,
Not that I want to get into a lengthy debate about this, but I think the point being made is that there may be a difference between the ideal way in which to engage and the way that would be most effective. I’m not advocating engaging in an unsavoury way and would, ideally, choose not to do so. That doesn’t mean, though, that it wouldn’t be effective.

86. BBD says:

WFC

a “not in front of the children” discussion – whereby concerns about the political message which might be picked up from it [L&C] are being considered and, similarly, about the bona fides of the authors.

You mention one of my comments in this context:

[Karsten:] “But then, I couldn’t care less.”

And it is easy to see why a serious scientist would feel this way about NL and Curry. However, NL and Curry are building up a small body of published work which is being used to “prove” that the consensus is “wrong” and policy need not substantially change.

It would therefore be enormously helpful if scientists who understand what NL is doing would respond – ideally a reply in the literature. Otherwise these studies will be used for years to mislead and confuse policy makers.

Sound science matters. Especially when speaking to power.

I hope you have found this thread as informative as I have.

87. Tom Curtis says:

Christian, the discussion you are looking for can be found in Kuhlbrodt and Gregory (2012) who write:

“In all models, there is an excellent scenario-independent linear relationship, but ε varies across models (Fig. 1, Table S1) because the thermal expansivity of sea water (1/ρ) ∂ρ/∂T increases with pressure and temperature. Therefore, the magnitude of thermal expansion depends on the latitudes and depths at which the heat is actually stored; this pattern depends on the model, but not on the scenario for a given model.”

“The ranges of ε in the CMIP3 and CMIP5 ensembles are similar: 0.12 ± 0.01 m YJ−1 in CMIP3 and 0.11 ± 0.01 m YJ−1 in CMIP5. This is consistent with the observational estimates for 0 m to 2000 m, 1955–2010 [Levitus et al., 2012], from which we infer ε = 0.12±0.01 m YJ−1. The observational estimates by Church et al. [2011] for 1972–2008 for the full ocean depth indicate ε = 0.15 ± 0.03 m YJ−1 , which is slightly higher but not significantly different.”

The CMIP 3 values are in fact cited in Gregory et al (2013).

Lewis and Curry use a figure of 0.47 W/m^2 per mm per year, citing Church et al. That equates to 0.00756 YJ per mm, or 0.132 m/YJ, so somewhat less than Church et al according to Kuhlbrodt and Gregory, but in line with other estimates. Using the Church et al value as cited by Kuhlbrodt and Gregory would have reduced their estimate of 19th century OHC flux by 12%. However, I think it would have been more appropriate to use a model-derived value, given it is a modelled rise in sea level they are evaluating. As such, the CMIP5 value of 0.11 m/YJ, or the CCSM3 value of approximately 0.12 m/YJ, would have been better, though each would have raised the estimate of OHC flux. Unfortunately I cannot find a value for CCSM4, which is what should have been used. Regardless of which is used, Lewis and Curry are not entitled to the assumption that CCSM4 heat storage at depth shows the same pattern as the empirical data, and hence has the same expansion efficiency of heat (ε).
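
The unit conversion quoted here can be checked with a few lines of arithmetic. A minimal sketch (my own, not from any of the papers), assuming the 0.47 W/m^2 is averaged over the full Earth surface area (~5.1 × 10^14 m^2), which is what reproduces the quoted figures:

```python
# Checking the conversion from 0.47 W/m^2 per mm/yr of sea-level rise
# to YJ per mm, and then to the expansion efficiency of heat (m per YJ).
# Assumption (mine): the flux is averaged over the full Earth surface.

EARTH_AREA = 5.101e14        # m^2, total Earth surface area
SECONDS_PER_YEAR = 3.1557e7  # seconds in a Julian year
YJ = 1e24                    # joules per yottajoule

flux = 0.47  # W/m^2 of heat uptake per mm/yr of thermosteric sea-level rise

# Heat (in YJ) associated with 1 mm of sea-level rise
energy_per_mm = flux * EARTH_AREA * SECONDS_PER_YEAR / YJ
print(f"{energy_per_mm:.5f} YJ per mm")   # ~0.00757 YJ per mm

# Expansion efficiency of heat, epsilon, in m of sea-level rise per YJ
epsilon = 1e-3 / energy_per_mm
print(f"{epsilon:.3f} m/YJ")              # ~0.132 m/YJ
```

Comparing the resulting 0.132 m/YJ against the CMIP5 value of 0.11 m/YJ quoted above shows directly why using a model-derived value would raise the inferred OHC flux.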

88. Steve Bloom says:

“As WFC wrote the side which is backed by the stronger real case must do its utmost to stop the practices that spread smoke and confusion.”

Looking forward to your ideas on how exactly to apply this concept in a climate science and policy context, Pekka.

89. ATTP,
A recent twitter exchange was quite telling in that regard. Naomi Oreskes was pointing out that using “pause” (in a talk from Thomas Stocker) should be avoided, which provoked a reply from a colleague (for whom I have a lot of respect) that he doesn’t want to be told what or what not to say. He went on to point out that “pause” is now frequently used in the community, so communicators simply have to accept that (disclaimer: only a rough account of the exchange, so it shouldn’t be taken literally). After some back and forth with another science communicator, he seemed to have started reconsidering his position slightly. The bottom line for me from that exchange was that some colleagues are literally scared of being considered environmentalists. Appeasing some skeptical quarters seems to be one of their strategies. I don’t think it works, but I can’t prove that it doesn’t.

Re Mann/Cook: I haven’t noticed that any reputable scientist has ever thrown Michael Mann under the bus yet, though John Cook certainly received some flak. I don’t know why some people seem to be bothered, but I guess they consider him a somewhat angry environmentalist. Following such logic, the middle ground would be somewhere in between environmentalists and skeptics. It goes without saying that I am baffled by such logic, given that the level of “angriness” and vitriol is utterly skewed towards those in denial. But then again, scientists are strong individuals with a strong tendency to avoid groupthink (even in cases in which it might be necessary). Whether there is a more distinct tendency to oppose “environmentalism” (whatever actually constitutes such a specification) in the UK (or the MetOffice for that matter) is hard to tell, but it seems to be the case. If I had to guess, I’d say it’s due to a more friendly, much less confrontational attitude towards other people in general (which I do appreciate a lot) rather than due to some particular policy regulations, as Steve seems to be suggesting.

90. There has been a lot of criticism of the arguments that L&C have used in obtaining their value, but the ultimate questions to be answered in their approach are:
– What’s the best estimate for ΔF?
– How to obtain the best possible estimate for Q of the ending period of the comparison?
– How to obtain the best possible estimate for Q of the initial period of the comparison?

It’s possible that using the same method for both periods in the determination of ΔQ, would result in better accuracy for this difference, but that’s not at all obvious. It’s plausible that choosing the methods independently leads to a better result.

With all the criticism I haven’t been able to find from the comments any proposals that would be obviously preferable to those used by L&C. Some arguments indicate that the choices of L&C may be biased, but I would think that it’s possible to present arguments of comparable strength in favor of their choices.

Details of the thermal expansion of the oceans go beyond the set of questions that are really significant for an analysis of this nature. They take from that only a single number, which has limited influence on their results. Is there a clearly superior way of determining that particular input to their analysis?

It’s important to remember that this is just one of several methods that have been used to estimate ECS (and not the only one for TCR either). As a method of determining ECS, this approach depends on the validity of the highly questionable assumption of linearity.

The most important question, in my view, is whether their lower limits for the uncertainty range of ΔF-ΔQ are significantly too high. If the probability of significantly lower values is not small, that would make the upper tail of the PDF of ECS longer and/or fatter. The upper tails of TCR and ECS are the most important factors for policy purposes. Q of the initial period affects this question, but much less than the uncertainty in aerosol forcing does.
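
The point about the lower limit of ΔF-ΔQ can be illustrated numerically. A minimal Monte Carlo sketch (my own illustration with made-up numbers, not L&C’s actual values): because ECS = F_2xCO2 ΔT / (ΔF - ΔQ), a symmetric uncertainty in the denominator maps into a skewed ECS distribution with a long upper tail.

```python
# Illustration (hypothetical numbers): symmetric uncertainty in dF - dQ
# produces an asymmetric, fat-tailed distribution for ECS, because the
# denominator can approach zero.
import random

random.seed(0)

F2X = 3.7   # W/m^2, forcing from doubled CO2 (assumed canonical value)
DT = 0.75   # K, illustrative temperature change

samples = []
for _ in range(100_000):
    denom = random.gauss(1.9, 0.4)  # hypothetical dF - dQ, W/m^2
    if denom > 0:                   # discard unphysical non-positive draws
        samples.append(F2X * DT / denom)

samples.sort()
n = len(samples)
p05, median, p95 = samples[n // 20], samples[n // 2], samples[19 * n // 20]

# The upper tail is much longer than the lower one:
print(median - p05, p95 - median)  # upper gap roughly twice the lower gap
```

Pushing the assumed lower limit of ΔF - ΔQ down (i.e. widening the Gaussian or lowering its mean) stretches the upper tail much faster than it moves the median, which is exactly why that lower limit matters most for policy-relevant estimates.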

91. Joshua says:

WFC –

==> “I might add that I thought that the 50% anthropogenic comment from Curry was an extrapolation from a paper she was discussing – ie “if that paper is correct then that would lead to the conclusion that 50% of warming was anthropogenic” – rather than a statement of fact.”

What source are you using for that?

What do you think about Judith’s statement that it would be “foolish” to think that ACO2 dominates climate on decadal scales – after writing a paper that supports just such a conclusion for temps over the past 6 decades?

92. Steve,

I have written on related questions many times, both on this forum and elsewhere. I believe firmly that science should be presented in a way that both really is objective and conveys effectively signs of objectivity. When the other side presents untruths and misleading simplifications, that must not lead to presenting in support of science “balancing simplifications” at a level that may be criticized as less than fully true. The approach that I promote is not the most effective one in short term, but I remain convinced that it is ultimately more productive than joining the fight using rules dictated by the other side.

The approach is often described as fighting with one hand bound behind the back. That may be true, but the only real strength of the side of the science is that it is backed by the truth. If the argumentation becomes symmetrical in style, the audience has no method of seeing that one side is backed by the truth, and the other is not. This is the dilemma of hearing both sides of an argument.

There are various ways of educating key people like journalists and other opinion leaders. Such activities are probably very important, but again it requires skill to succeed, as the misinformers also try their best to find support from those quarters, and again the value of truth must be used correctly.

93. curryja | September 26, 2014 at 7:52 pm |
If I disappeared, I wonder what the poor dears would have to talk about

physics(?)

94. WFC says:

Steve Bloom

It’s very simple really. You state your argument and defend it.

You refuse to engage with attempted diversions but you do engage with criticisms of the argument itself: with reasons, reasoning and evidence.

What you do not do is dismiss all criticisms of the argument with ad homs, as far too many do. Thus, for example, hearing an argument dismissed as “disinformation” or “debunked” or “fraudulent” without any explanation how and why it is considered so, impresses nobody except (possibly) people who have already made up their minds.

Yes, it might be tedious to have to repeat arguments, and more pleasurable to give vent to uncharitable sentiments but, take it from me, you really cannot repeat a good argument often enough.

A good argument speaks for itself.

95. Pekka,

With all the criticism I haven’t been able to find from the comments any proposals that would be obviously preferable to those used by L&C. Some arguments indicate that the choices of L&C may be biased, but I would think that it’s possible to present arguments of comparable strength in favor of their choices.

I would argue that such arguments carry much less weight. As far as the aerosol forcing is concerned, he is just plain dismissive and his arguments simply don’t hold water. Why a sane person would dismiss Cowtan & Way is also beyond me. NL shows too many signs of bias for this to be considered a neutral approach. It isn’t! Apart from that, any reputable scientist would know that “observationally based” ECS estimates don’t tell you anything meaningful (non-linearity simply messes things up, as you’ve also pointed out). So the entire OHC issue is essentially irrelevant. All that can be inferred from such studies (with some degree of confidence) is TCR. Not only did NL do all he could to lower this value, but he would also bang on about the (almost comically low) ECS number as if it were gospel.

96. Steve Bloom says:

“you really cannot repeat a good argument often enough”

“A good argument speaks for itself.”

Wrong on both counts. Excessive repetition mainly serves to convince uninformed listeners that there must be some substance to the other side, and even outside of science, where most listeners lack an independent basis for judging correctness of an argument, other factors such as the source of an argument have a large impact.

There’s a fair body of social science research relating to this, BTW, so rather than just arguing from your personal experience, maybe read up on the subject and then cite some of it?

97. WFC says:

Steve Bloom

Whatever you may think of him, Lindzen is somebody who knows what he (and you) is talking about.

Talking about “poor track record”s and allusions to finance are only meaningful to somebody who believes that assertions like that are meaningful. Those Otoh who believe that a paper speaks for itself are not going to be at all impressed with such a comment: indeed, I would go further by saying that such people are going to assume that anybody who makes such meaningless assertions does so only because they are incapable of refuting the argument made by that person.

If there is a problem with Lintzen’s position, then try to refute it. You cannot do this by introducing things which are totally irrelevant to the arguments he makes.

98. WFC says:

Steve Bloom- 12.49

You do not refute an argument by pretending it does not exist, or by refusing to address it.

Some climate scientists ( not you, I’m sure) have been doing that for years.

How’s that worked for them?

99. Steve Bloom says:

Karsten,

“All what can be inferred from such studies (with some degree of confidence) is TCR.”

I have my doubts about even that. It seems clear that slow feedbacks are kicking in too soon for even a correct calculation of TCR to mean much.

“I haven’t noticed that any reputable scientist has ever thrown Michael Mann under the bus yet”

Well, Tamsin Edwards did (until it got noticed and she had to climb down). This was by way of wanting to find common ground with deniers, and is part of why I have such a jaundiced attitude toward that sort of behavior.

“I’d say it’s due to a more friendly, much less confrontational attitude towards other people in general (which I do appreciate a lot) rather than due to some particular policy regulations as Steve seem to be suggesting.”

I think you’re confusing two different things. I’ve certainly given orders to, er um, advised, Richard in particular that he’s wasting his time catering to the likes of Andrew Montford and Barry Woods trying to correct their understanding of climate science, but that’s got nothing to do with regulations (and to be fair he seems to be wasting less time on that these days, maybe because he discovered, contra our new friend WFC, that there really is such a thing as repeating a good argument too much). The regulation discussion had to do with climate policy advocacy in public.

100. Steve Bloom says:

“If there is a problem with Lintzen’s position, then try to refute it.”

Long refuted in the literature. I’m not going to waste my time giving you links to papers you clearly haven’t bothered to look for yourself.

101. WFC says:

Steve Bloom

“Well, Tamsin Edwards did (until it got noticed and she had to climb down).”

And this is what you call scientific method, is it?

102. WFC says:

Steve Bloom – 1.09

Interesting how many times that comment pops up in such discussions.

If I had £1 for every time an actual link was provided, I’d have £0.

103. Steve Bloom says:

?

No, it had to do with Tamsin throwing Mike Mann under the bus on a denier blog. I doubt her motivation for that had anything to do with the scientific method. More likely currying favor, if you know what I mean.

104. Steve Bloom says:

Yeah, it’s been a very common response for going on ten years. Wonder why?

105. I wonder who could be relied upon to know more about the rules and working practices of the British Civil Service?
(a) someone who has worked in it for over 20 years (like me), or
(b) an American political activist who attempts to undermine the integrity of professional climate scientists by making unfounded assertions about them on the internet (like Steve Bloom).

106. Oh, and Steve, stop bullying Tamsin.

107. WFC says:

Steve Bloom

“Yeah, it’s been a very common response for going on ten years. Wonder why?”

Because it’s an easy assertion to make, doesn’t require any reasoning or evidence to back up, and because the people who make it believe it to be convincing?

108. PS Steve, yes I spend a bit less time on blogs of all flavours these days. Face to face discussions are more productive. I also had a drink with Barry and others in the pub after Mike Mann’s talk in Bristol. That was, incidentally, after having spent a very interesting couple of days in a meeting with Mike, Tamsin, Steve Lewandowsky, John Cook, Stefan Rahmstorf and others, discussing the very point you raise above – communication of uncertainty. Jon Krosnick from Stanford showed results of his research which suggested that full bounding of uncertainties (lower as well as upper bound) increased trust in scientists.

109. Why is anybody wasting their time with WFC? Lintzen? I can only guess.

WFC, there is a wiki page that links to pdfs, and there is a SkepticalScience page in it. It’s dead. Get over it. It would be helpful if you could spell the names of the scientists whose works you are, or claim to be, intimately familiar with. Nobody cares about this but deniers. I think the subject here and now is ocean heat buffering.

If you remain unconvinced about Lintzen (sp) try here or here. Lot’s o’ links.

110. > [H]earing an argument dismissed as “disinformation” or “debunked” or “fraudulent” without any explanation how and why it is considered so, impresses nobody except (possibly) people who have already made up their minds.

In the civil group, those who initially did or did not support the technology — whom we identified with preliminary survey questions — continued to feel the same way after reading the comments. Those exposed to rude comments, however, ended up with a much more polarized understanding of the risks connected with the technology.

Simply including an ad hominem attack in a reader comment was enough to make study participants think the downside of the reported technology was greater than they’d previously thought.

http://www.nytimes.com/2013/03/03/opinion/sunday/this-story-stinks.html

Playing the man instead of the ball leaves an impression indeed.

Now, the question that begs to be asked is: which Climateballers profit most from this effect?

111. Steve Bloom says:

Richard, out of curiosity, how many laws have you written? Let’s compare!

So perhaps I have a bit more practice reading laws than do you, too. That’s OK, though, I’m a little deficient on the model coding front. BTW, I’ve known lots of civil servants with long service who were rather murky on regulations, even ones they used on a daily basis.

Anyway, you’re certainly entitled to your position, but pointing to regulations as a reason for it, when a fairly quick read of them (“quick” because they’re not all that long and convoluted) shows that they’re not, isn’t too respectable.

Bullying? It being (for you) 1:00 AM on a Saturday night, do I detect some pre-commenting lubrication? Anyway, maybe you should stop protecting Tamsin as if she’s not a grown woman fully capable of doing so herself.

I do agree about the value of face time. I even agree that from the MO POV there’s much to be said for trying to keep the likes of [Mod : redacted, if someone can’t comment to defend themselves, then I’d rather their name wasn’t mentioned] tamped down by cultivating him as you are. And he seems like a rather pleasant fellow on a personal level.

Krosnick’s new work sounds very interesting, although it does seem to have been in the pipeline for a very long while. IIRC he had said that he was undertaking additional survey work to expand/bolster the results. Did he mention anything about that?

“Perhaps there is a trade off between honesty and trust, with mild uncertainty being best, he suggested.”

Channeling Steve Schneider!

112. David Young says:

Richard Betts seems in all his interactions to be a gentleman. I therefore would regard his opinion more highly than that of an …..

113. Steve Bloom says:

…Boeing engineer? Yes.

114. Joshua says:

WFC —

I asked you a question above:

https://andthentheresphysics.wordpress.com/2014/09/25/lewis-and-curry/#comment-32443

Any particular reason that you didn’t answer it, and instead choose to exchange comments with someone that you think makes arguments without reasoning or evidence?

Allow me to quote you:

==> “You refuse to engage with attempted diversions but you do engage with criticisms of the argument itself: with reasons, reasoning and evidence.”

115. anoilman says:

WFC: Lindzen’s work is unproven. There are no measurements to back up his theories, whereas there is plenty of data and evidence backing up the physics of our climate system.
http://www.skepticalscience.com/skeptic_Richard_Lindzen.htm

I don’t know about BBC but Lindzen has spoken to your government.
Interestingly he’s not talking about science… he’s talking about economics. Something he is uneducated in. Does he study it? Does he teach it?
http://judithcurry.com/2014/01/28/uk-parliamentary-hearing-on-the-ipcc/

“Lindzen: Whatever the UK decides to do will have no impact on your climate, but will have a profound impact on your economy. Trying to solve a problem that may not be a problem by taking actions that you know will hurt your economy.”

He’s even managed to tick off Gavin Schmidt;
http://www.realclimate.org/index.php/archives/2012/03/misrepresentation-from-lindzen/

Here’s more on Lindzen, and I do wish him luck on proving his unproven theories. There is still a remote chance he could vaguely be right.
http://www.skepticalscience.com/Richard_Lindzen_quote.htm

116. Steve Bloom says:

“There is still a remote chance he could vaguely be right.”

No, paleo rules out his ideas as a significant factor, although if by vaguely you mean “exists but doesn’t have much effect on the climate response,” maybe.

117. Steve Bloom says:
September 28, 2014 at 3:30 am

…Boeing engineer? Yes.

Touche.

And a PhD engineer who believes that the hydrodynamics of the earth’s oceans is impossible to solve because he is not working on it. Yet we all know that the solution may be in reach and, amazingly, without his immense (?) help.
http://azimuth.mathforge.org/

Double (?) touche

118. Steve

You often have a go at Tamsin, using accusatory phrases like ‘Throwing Mann under a bus’. In my book that’s bullying. Tamsin doesn’t need ‘protecting’, but I’m entitled to call out bad behaviour when I see it. Mike and Tamsin got on just fine at the meeting this week, so you can stop stirring it on that score.

BTW no you did not detect ‘lubrication’ thank you very much. I had been out for the evening, but was driving, and am up now to head up to London on the train.

David Young – thank you, I appreciate your comment.

119. anoilman says:

Steve Bloom: That was just me trying to be polite. I recognize that I can be a first class jerk at times.

120. “you really cannot repeat a good argument often enough”

“A good argument speaks for itself.”

Wrong on both counts. Excessive repetition mainly serves to convince uninformed listeners that there must be some substance to the other side, and even outside of science, where most listeners lack an independent basis for judging correctness of an argument, other factors such as the source of an argument have a large impact.

You are fighting a strawman. Repeating valid arguments should mean presenting the arguments in different contexts, and reformulating the arguments to suit the intended audiences. It should not mean repetitive comments on a blog.

Communicating science, and especially communicating science that involves uncertainties, is not easy. Some scientists are more competent in that than others. Only very few are both excellent in that and willing to spend the great effort that producing such quality arguments repeatedly takes. Good intentions are not enough as a wrong approach may counteract the positive results that others have obtained.

121. I was thinking a little about this exchange this morning, as I missed most of it by going to bed before it really got going. Given that I let people express opinions about Judith Curry and others, it seems wrong to not let people express opinions about Richard Betts or Tamsin Edwards, as uncomfortable as it might make me feel.

My personal view, FWIW, is that this is a very complex situation and I don’t think anyone knows what works or what doesn’t. Criticising what others choose to do or say doesn’t seem, IMO, all that constructive. I don’t think that pandering to people who are essentially imposing a form of blackmail (do this or else we won’t trust you) will be productive, but it might be, so criticising those who try seems unnecessary. There are also the lurkers who may benefit, even if the dissenters are unmoved. Also, everyone is free to do or say as they wish. There’s nothing fundamentally wrong with someone not wanting to express a view about something; that’s their right.

As it stands, there are people now on Twitter going on about how this shows you have to be part of the club, or else. I guess there’s little one can say that some won’t choose to criticise, but providing easy ammunition seems a little counterproductive.

122. WFC says:

Joshua

Yours was more of a request for information than an argument, and I apologise for missing it.

http://judithcurry.com/2014/08/24/the-50-50-argument/

123. WFC says:

Thomas Lee Elifritz

To summarise the links you have provided, Lindzen published his Iris hypothesis in 2001. Critiques have been published which have either questioned the existence of the phenomenon or confirmed its existence but questioned the effects (or the extent thereof), and in 2011 Lindzen published a further paper refining the hypothesis in the light of further work and the published criticisms.

And that’s what you call “dead” is it?

But I do agree that the topic of this thread is Curry’s paper, which is why I came to it. This site comes highly recommended as one which provides more light than heat, and I was already familiar with ATTP from his postings on other sites.

124. WFC says:

Willard

I didn’t say that “playing the man” didn’t have an effect.

I said that it was not a persuasive effect (not with those who understand advocacy, in any event) – quite the contrary. A person reduced to playing the man is (to mix the metaphor) somebody who is going to be assumed to be in the last trench with only one bullet left.

125. WFC,
Judith’s 50:50 post is very confused, in particular this comment:

Further, the attribution statement itself is at best imprecise and at worst ambiguous: what does “most” mean – 51% or 99%?

In fact, if you look in this post, you can see a distribution function for the anthropogenic influences, and the chance that it provided less than 50% is extremely small (less than 1%). The IPCC statement is actually,

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together. The best estimate of the human induced contribution to warming is similar to the observed warming over this period.

In IPCC-speak, extremely likely means > 95%, and the next category is virtually certain, which means > 99%. So, what the IPCC is saying (as indicated by the distribution function in the RealClimate post) is that it is extremely likely that anthropogenic influences provided more than 50% of the warming, and that the best estimate is that they provided all of the warming. The 95% confidence interval is actually something like 85 – 115% of the warming. Judith has interpreted it as the 95% confidence interval being 51 – 100% of the warming, which is incorrect.

126. Steve Bloom says:

You need a different book, Richard. I suggest a dictionary.

Re bad behavior, that was all Tamsin. As I noted, she climbed down and IIRC even apologized. Is it possible you missed that incident? Presumably you saw the original comment by her when she made it at Bishop Hill’s, but chose to say nothing. Why?

Also, while I have used it in the past, the under the bus phrasing wasn’t mine here, I was quoting a colleague of yours from up thread who asked if anyone had done such a thing to Mike. Go on, have a go at him now, tough guy. [Mod: a bit inflammatory]

Your defense of Tamsin puts me in mind of the classic Mayor Daley line: “They have vilified me, they have crucified me; yes, they have even criticized me.” I do criticize her when she does something dumb, as with the Mann comment, that execrable Guardian column when tout le monde dumped on her (and rightly so), and most recently with this odd exercise in modeling over-confidence. (Although in the comments on the latter she says “I just want to add that there are lots of caveats to this work, several of which I mentioned, and many better ways to do it, none of which are yet possible (lack of observations or modelling capacity). So it’s a first attempt. But we feel it is pretty emphatic.” Huh?) Otherwise, not much IIRC. Was the “often” an attempt to refer to “recent”? If so, no.

But that said, I suspect rather like Mike, I think Tamsin is well-intentioned and just gets carried away with her rhetoric from time to time. In any case, she seems quite determined to carry on with therapizing the deniers, so best to keep her as much as possible inside the tent pissing out.

Appreciating DY’s attempted nastiness puts you in the same boat you accuse me of being in, BTW.

But tell you what: As you’ve staked your personal credibility on being right about those regulations, this coming week I’ll write up and post the necessary deconstruction of them. As I said, they’re really not that complicated, so it won’t be too hard to do it accessibly.

Something for you to look forward to.

127. Steve (& Richard),
TBH, I’d really rather this didn’t escalate. I’ve rather lost track of how or why this even started. I know there was a discussion earlier about certain climate “skeptics” who seem to be suggesting that climate scientists need to throw Michael Mann and John Cook under the bus before climate science can regain their trust. I think this is a form of blackmail and – IMO – it just indicates that they’ll find some other reason to distrust climate scientists if anyone ever did throw Michael Mann & John Cook under the bus. However, given that I don’t really know what works and what doesn’t, I find it hard to criticise those who do choose to engage with such people. That doesn’t mean that they shouldn’t be criticised, but I don’t really see the point of it getting out of hand and – right now – I have no great interest in having to moderate a discussion along those lines. I really was trying to take a bit of a break, and while I know I’ve failed by writing a couple of posts, this particular topic isn’t really relevant to either of them.

Also, I’m not sure I can really face a lengthy debate here about the interpretation of UK civil servant rules.

128. WFC says:

ATTP

She addresses that in her post.

“In my previous post (related to the AR4), I asked the question: what was the original likelihood assessment from which this apparently minimal downweighting occurred? The AR5 provides an answer:

“The best estimate of the human induced contribution to warming is similar to the observed warming over this period.

“So, I interpret this as saying that the IPCC’s best estimate is that 100% of the warming since 1950 is attributable to humans, and they then down weight this to ‘more than half’ to account for various uncertainties. And then assign an ‘extremely likely’ confidence level to all this.”

Her interpretation of that paragraph, therefore, appears to differ from yours. Or have I misunderstood?

129. WFC says:

ATTP

Thank you for that.

130. WFC,
I think Judith’s argument for downweighting is wrong. I suspect the choice that the IPCC made was that the data doesn’t support a statement that it is “virtually certain” so they’ve gone with “extremely likely”. I suspect that this is because if you look at the distribution function in the Realclimate post, 3 σ would include less than 50% anthropogenic and hence saying “virtually certain” is too strong. I may have been wrong above when I said the 95% confidence was 85% to 115%, but it’s still the case that the best estimate is 110% and 1 σ looks like it is about 15%.
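
These numbers can be sanity-checked. A quick sketch (assuming, as my own simplification, that the anthropogenic fraction is normally distributed, with the best estimate of 110% and σ ≈ 15% as read off the RealClimate figure):

```python
# Probability that the anthropogenic contribution is below 50% of the
# observed warming, assuming (my simplification) a normal distribution
# with mean 110% and standard deviation 15%.
from math import erf, sqrt

mean, sigma = 110.0, 15.0  # percent of observed warming

def normal_cdf(x, mu, sd):
    """Cumulative probability of a normal distribution, via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sd * sqrt(2.0))))

p_below_50 = normal_cdf(50.0, mean, sigma)  # z = (50 - 110) / 15 = -4
print(p_below_50)  # far below the 5% threshold for "extremely likely"
```

Under this Gaussian assumption the probability comes out tiny, comfortably satisfying “extremely likely more than half”; a wider or fatter-tailed distribution than my assumed one would raise it, which would be consistent with the IPCC stopping short of “virtually certain”.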

131. Manny Calavera says:

[Mod : Sorry, not really interested in your views on my moderation.]

132. Manny Calavera says:

[Mod : Whatever; I don’t care for your views.]

133. WFC says:

ATTP

I would love to discuss that Realclimate post, which I have read several times and think I have (mainly) understood. There are some parts which appear to me to be questionable. Unfortunately I have to prepare some work for tomorrow (which I’ve already put off for too long:-).

But I would say in closing that your post (and the discussion underneath) has been very interesting. I think that both “sides” have a tendency towards what you call the “one paper syndrome” (or, perhaps, the “latest paper syndrome”?) and it is useful (to me, in any event) to be able to read (what appear to me to be) well thought out critiques.

134. Manny Calavera says:

[Mod – Ahh, so a sock-puppet? Go away.]

-Shub

135. > A person reduced to playing the man is (to mix the metaphor) somebody who is going to be assumed to be in the last trench with only one bullet left.

Perhaps by “those who understand advocacy,” which I believe is just a way to refer to yourself, WMC. And even then, it’s not clear that it’s the only interpretation available. Someone who understands advocacy could simply see in such abuses an expression of daily sadism.

The evidence I provided shows that “those who understand advocacy” are not the only ClimateBall ™ players around.

136. Willard,
You may mean WFC, rather than WMC.

137. And that’s what you call “dead” is it

Yes, deader than dead. Buried and composted under a mountain of evidence, including a 2012 SkepticalScience article that also contains a mountain of links; simply googling it destroys it even further. But by all means cling to it like a survivor of the Titanic. Floating it won’t help you.

My suggestion to you – if you are looking for autoadaptive atmospheric physics, do the physics. Regurgitation of links just doesn’t cut it in the scientific world. Admit you can’t do the physics.

Richard Lindzen

138. Joshua says:

FWIW, if I have a vote, I cast it for more of the type of discussion that was going on between Anders and WFC than the one between WFC and Steve, or Steve and Richard. The Steve/Richard and Steve/WFC arguments are not something I can learn anything from. They are merely same ol’ same ol’.

WFC –

FWIW, I think you would have been better off if you had just started discussing the science rather than editorializing about where the discussion veered in the wrong direction. Ironically, at the very point where you began editorializing, the discussion moved even further away from discussion of the science.

Of course, you certainly have a right to express your opinion – and it’s possible that my comments immediately above might have the same effect (because they too are editorial about the discussion rather than discussion of the science) but I tend to doubt it. We’ll see.

Anyway, I have read that post from Judith that you linked, and I was hoping that you would have selected a more specific statement from Judith to support your argument.

==> “I might add that I thought that the 50% anthropogenic comment from Curry was an extrapolation from a paper she was discussing – ie “if that paper is correct then that would lead to the conclusion that 50% of warming was anthropogenic” – rather than a statement of fact.”

I think that she used that paper as a form of support for a preexisting point of view – not merely to extrapolate an argument that could be true contingent on the veracity of that paper. In fact, there has been discussion with Judith about more/less than 50% warming on numerous occasions prior to her writing that post that are very much in line with the argument she makes in that post.

And I did ask you a question that I thought relevant. Here, Let me ask it again:

What do you think about Judith’s statement that it would be “foolish” to think that ACO2 dominates climate on decadal scales – after writing a paper that supports just such a conclusion for temps over the past 6 decades?

139. Richard B,
re your short comment at 1:19, you may grant me one very short question: How often have you asked Montford to stop bullying Mann (or other colleagues for that matter) in similarly clear terms while you were commenting on BH? I am not reading BH, which is why I’ve got no clue.

I am asking because I do genuinely think that it might damage the reputation of the scientific community as a whole if smears go unchallenged (in case scientists are involved in the discussion, that is). For me, it signals implicit agreement with what’s been said in the associated blog posting. While it might well increase trust in the scientists involved, it diminishes trust in those who don’t participate in equal measure. Perhaps I am entirely wrong in my judgement, but I thought I’d share my humble opinion anyway. I don’t follow the social sciences on these subjects closely enough to know what the best strategy might be. It does put me personally off online discussions though.

Btw, I am absolutely in favour of face-to-face encounters (well, with a few exceptions I guess) as it puts a personality to the online character, but it doesn’t mean that I will accept vitriol against colleagues from those I met before once I get involved in an online discussion with them again. Choosing to avoid further discussion is a valid option though.

Anyway, it’s not really on topic anymore, so I better stop at this point.

140. Karsten,
FWIW, Richard’s been through a form of this discussion here before, starting about here. If memory serves me right, Richard’s point was along the lines of choosing the battles that you think you might win (or, more properly, not lose). Challenging all the rather extreme statements made on some blogs is probably not worth the time or the effort. That does maybe beg the question as to whether or not engaging there at all is worth the time or the effort, but as I don’t really know what works and what doesn’t, and what’s worth the effort and what isn’t, I find it hard to criticise.

141. Steve Bloom says:

Challenging statements trashing colleagues is a bit more specific, Anders. Is it really hard to criticize those who are silent when such statements are made?

142. Steve,
Sure, I don’t understand how people can engage pleasantly with those who mock and denigrate their colleagues and their institutions. However, I can easily see why they realise that fighting back (as individuals at least) is pointless and I can see how some might think that engaging with such people might achieve something. Even if it doesn’t convince those doing the mocking and denigrating, it might be of benefit to the lurkers. I also don’t see how there’s much benefit to strongly criticising those who do choose to engage, especially if the criticism implies something about their character. This, for example, may not have been necessary

I can only hope the same carelessness doesn’t creep over into his modeling work.

143. BBD says:

WFC

You do not refute an argument by pretending it does not exist, or by refusing to address it.

Some climate scientists (not you, I’m sure) have been doing that for years.

How’s that worked for them?

Just fine. Lindzen was shown – repeatedly – to be wrong. He has a bad argument. But he’s still pretending he has a good one, and he’s fooled you.

Here is what may well be an incomplete list (abstracts only) of replies in the literature to Lindzen starting with his ‘infra-red iris’ hypothesis (Lindzen et al. 2001):

Hartmann & Michelsen (2002)

http://journals.ametsoc.org/doi/abs/10.1175/1520-0477%282002%29083%3C0249%3ANEFI%3E2.3.CO%3B2

Lin et al. (2002)

http://journals.ametsoc.org/doi/abs/10.1175/1520-0442%282002%29015%3C0003%3ATIHANO%3E2.0.CO%3B2

Harrison (2002)

http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2F1520-0477(2002)083%3C0597%3ACODTEH%3E2.3.CO%3B2

Fu et al (2002)

http://www.atmos-chem-phys.net/2/31/2002/acp-2-31-2002.html

144. BBD says:

Lindzen cont.

Replies to Lindzen & Choi (2009)/Spencer & Braswell (2009):

Trenberth et al. (2010)
http://www.agu.org/pubs/crossref/2010/2009GL042314.shtml

Lin et al. (2010)
http://www.sciencedirect.com/science/article/pii/S0022407310001226

Murphy et al. (2010)
http://www.agu.org/pubs/crossref/2010/2010GL042911.shtml

Dessler (2010)
http://www.sciencemag.org/content/330/6010/1523.abstract

145. BBD says:

Lindzen cont.

Replies to Lindzen & Choi (2011)/Spencer & Braswell (2011):

Dessler (2011)
http://www.agu.org/pubs/crossref/pip/2011GL049236.shtml

Trenberth, Fasullo & Abraham (2011)
http://www.mdpi.com/2072-4292/3/9/2051/pdf

146. Steve Bloom says:

Necessary? Possibly not, Anders, but apparently you’re uninterested in the background material, so I suppose you’ll never really know.

147. Steve Bloom says:

BBD, don’t forget that the 2001 iris business was a re-work of a prior refuted idea (“cumulus drying”) dating back to at least the early ’90s. That’s why the iris got some pretty strong push-back at the outset. It seemed like, and indeed was, a dodge.

148. Steve Bloom says:

[Mod: Off-topic]

149. [Mod: The comment this refers to has been deleted]

150. HR says:

Joshua says:

“What do you think about Judith’s statement that it would be “foolish” to think that ACO2 dominates climate on decadal scales – after writing a paper that supports just such a conclusion for temps over the past 6 decades?”

I know you want WFC to answer this, but do you mind if I have a go? She states in her blog post that this method assumes no role for internal variability on a century scale, while she believes that such an assumption is wrong. Does that mean she’s wrong to perform the experiment? I think one of the joys of science is that dogma does not dictate what you can and cannot do. So it’s OK to ask, assuming no role for internal variability, what the likely ECS or TCR is from a century of data and simple energy balance models, even if you don’t ‘believe’ that assumption. Especially if another question you want to investigate is whether these estimates agree with estimates from other sources (e.g. GCMs) that seem to also suggest this assumption is largely correct. Science is like that: it allows you to ask ‘what if’ questions, even ‘what if’ questions you believe are flawed.

It would be a good question for somebody to directly ask her if she ‘believes’ in these estimates. I hate to put words in her mouth but I suspect the answer is no.

151. HR,

She states in her blog post that this method assumes no role for internal variability on a century-scale while she believes that such an assumption is wrong. Does that mean she’s wrong to perform the experiment?

What experiment? I’ve heard Judith complain that she can’t get funded to do this kind of research, but I’ve neither heard her clearly define what she means by “internal variability” nor heard her explain how she would test this. I don’t know how to do this either, and so would be fascinated to hear someone both explain their definition of internal variability and also how they would test whether or not it can play a role on century timescales. So far, it mostly seems to consist of “maybe it can”.

152. HR says:

ATTP,

By experiment I meant the Lewis and Curry study. And I was specifically trying to answer Joshua’s question (one he repeated 3x) about whether Curry is justified in doing studies that include assumptions she doesn’t adhere to.

I absolutely agree the data is such that separating forced from internal variability would seem tricky at this stage, but there are some pointers to suggest a role. For example, the 1910s-1940s warming was on a similar scale to the 1970s-1990s even though forcings were very different. What drove warming in that period? More generally I

153. HR,
Okay, I see. I have no problem with Judith being involved in studies that include assumptions she may not completely agree with.

For example, the 1910s-1940’s warming was on a similar scale to the 1970s-1990s even though forcings were very different.

The forcings were different, but the change in forcing over 1910-1940 is not that different from the change over 1970-1990. There was more solar influence in the 1910-1940 interval than in 1970-1990, and the volcanic forcing was different, but you can easily do a basic forcing model that roughly matches both the 1910-1940 and 1970-1990 intervals. It’s quite likely that internal variability played some kind of role in one or both intervals, though. The main reason we don’t completely understand the 1910-1940 interval isn’t that we can’t explain it, but that we don’t have enough data to be certain of our explanation.
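For anyone who wants to play with this, a minimal zero-dimensional energy-balance sketch shows the kind of “basic forcing model” being described. The feedback parameter, heat capacity and forcing ramps below are all assumptions picked for illustration, not values taken from any particular study:

```python
import numpy as np

# Minimal zero-dimensional energy-balance model: C dT/dt = F(t) - lambda*T.
# Parameter values are assumed, illustrative choices.
lam = 1.3   # climate feedback parameter, W m^-2 K^-1 (assumed)
C = 8.0     # effective heat capacity, W yr m^-2 K^-1 (assumed)

def integrate(forcing, dt=1.0):
    """Euler-integrate the temperature response to a forcing series."""
    T, out = 0.0, []
    for F in forcing:
        T += dt * (F - lam * T) / C
        out.append(T)
    return np.array(out)

# Hypothetical forcing ramps loosely mimicking the two intervals discussed
# above: ~0.45 W/m^2 over 30 years and ~0.7 W/m^2 over 20 years.
dT1 = integrate(np.linspace(0.0, 0.45, 31))[-1]
dT2 = integrate(np.linspace(0.0, 0.7, 21))[-1]
print(f"warming over ramp 1: {dT1:.2f} K, ramp 2: {dT2:.2f} K")
```

With these made-up ramps both periods warm by a few tenths of a degree, which is the sense in which a simple forced model can roughly match both intervals; internal variability would then account for the residual differences.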

154. One essential difference is that the maximum around 1940 did not last long. Thus the warming over the period 1910-40 may have been enhanced significantly by rather short term variability around 1940.

Judith has emphasized on several occasions uncertainties of unknown nature (like Donald Rumsfeld’s unknown unknowns). She has discussed the Italian flag as a metaphor, and she has discussed the issue in many other ways. In meeting such uncertainties the easy way out is to propose fifty-fifty chances or something equivalent to that, but that’s, of course, not the right approach. I have criticized her approach in several threads where she has discussed uncertainties.

An even more important question is what kind of conclusions should be drawn when the uncertainties are large and the alternatives include very severe threats. Somehow she has always succeeded in avoiding even discussing seriously how risk aversion (or the precautionary principle) should be taken into account. It’s true that uncertainties offer support for robust or low-regret solutions, but only when the actual risks are properly included in the determination of what’s robust. A properly robust choice must also be effective against the potential severe risks, and that’s something that she hasn’t recognized, as far as one can tell from her writings.

155. > Challenging statements trashing colleagues is a bit more specific, Anders. Is it really hard to criticize those who are silent when such statements are made?

This kind of argument may be reconcilable with the Auditor’s claim:

Some ClimateBallers, including commenters at Stokes’ blog, are now making the fabricated claim that MM05 results were not based on the 10,000 simulations reported in Figure 2, but on a cherry-picked subset of the top percentile. Stokes knows that this is untrue, as he has replicated MM05 simulations from the script that we placed online and knows that Figure 2 is based on all the simulations; however, Stokes has not contradicted such claims by the more outlandish ClimateBallers.

http://climateaudit.org/2014/09/27/what-nick-stokes-wouldnt-show-you/

The expression “Some ClimateBallers” signals a nice ClimateBall ™ move: generalizing a phenomenon based on a single instance, to which I’ll soon return.

156. Paul S says:

Just a general point on the logic of their approach: they frame the paper in terms of working out the sensitivity implications of the AR5 net forcing and TOA imbalance estimates, but there’s no clear reason why sensitivity should be the object of the calculation rather than either of the other two uncertain factors. The same equation can simply be rearranged to use the new IPCC sensitivity range and TOA imbalance estimates to understand the implications for aerosol forcing or net forcing. Using the central ΔT, imbalance and other forcings adopted by Lewis & Curry, and the AR5 likely sensitivity range of 1.5-4.5 K, I find a central estimate for aerosol forcing of about -1.65 W/m2, with a range of -0.6 W/m2 to -2 W/m2.

I can’t see that this really gets us anywhere. All we know is that, according to a simple energy balance model, the given central estimates in AR5 for these three factors are not consistent.
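Paul S’s rearrangement can be sketched directly from the energy-balance relation in the post, $ECS = F_{2xCO2} \Delta T / (\Delta F - \Delta Q)$, solved for the forcing instead. All the inputs below are illustrative assumptions (F_2xCO2 = 3.71 W/m², ΔT = 0.71 K, ΔQ = 0.36 W/m², and a hypothetical non-aerosol forcing of 2.9 W/m²), not numbers taken from the paper:

```python
# Rearranging ECS = F2x * dT / (dF - dQ) to ask what aerosol forcing is
# implied by a given ECS. All input values are illustrative assumptions.
F2x = 3.71           # W/m^2, forcing per CO2 doubling
dT = 0.71            # K, temperature change (assumed)
dQ = 0.36            # W/m^2, change in heat uptake (assumed)
F_non_aerosol = 2.9  # W/m^2, hypothetical non-aerosol forcing

def implied_aerosol_forcing(ecs):
    """Net forcing required by the given ECS, minus the non-aerosol part."""
    dF_required = F2x * dT / ecs + dQ
    return dF_required - F_non_aerosol

for ecs in (1.5, 3.0, 4.5):   # AR5 likely range and its midpoint
    print(f"ECS = {ecs} K -> implied aerosol forcing "
          f"{implied_aerosol_forcing(ecs):+.2f} W/m^2")
```

With these assumed inputs the implied aerosol forcing runs from roughly -0.8 to -2.0 W/m², broadly the behaviour Paul S describes: the same equation constrains whichever of the three uncertain quantities you choose to solve for.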

This goes even further from the subject of this thread, but this link I got from Revkin’s recent post discusses decision-making under deep uncertainty well, and explains the issues I mentioned in my comment above.

I’m planning to take a closer look at World Bank White Papers on this subject (there are now at least three).

158. Joshua says:

==> “And I was specifically trying to answer Joshua’s question (one he repeated 3x) about whether Curry is justified in doing studies that include assumptions she doesn’t adhere to.”

That wasn’t my question.

Of course a scientist is justified in investigating a hypothesis.

My question was to challenge whether it is logically consistent for her to say that it is “foolish” to think that ACO2 dominates (changed from influences) the climate on a decadal scale even as she writes a paper that supports such a conjecture.

159. HR says:

Joshua, I don’t get what you are trying to tease out here, or the way you are asking the question, if how I have stated it has no relevance. Could you reframe the question?

I actually asked the question of Judith at her blog in the way I understood it, and got an answer. FWIW, here’s the Q&A:

Me say: “Judith, I have a rather direct question for you. You say this method assumes no role for internal variability on the ~century scale of this study, and you have stated that internal variability does play a role on this scale. Does this mean you don’t actually believe in these sensitivity estimates?”

She say “As described in the meta-uncertainty thread, it is not clear how to interpret or calculate ‘climate sensitivity’. that said, climate sensitivity is arguably the most single important parameter used in economic cost benefit models and the social cost of carbon. So it is important to clarify the uncertainties in this parameter, even if i am concerned in a meta sense that this parameter may not be very meaningful scientifically”

Joshua, I think if you are looking for logical consistency between her two approaches then you have to ask yourself what question she is trying to answer with each approach. At the same time, as this is an ongoing scientific investigation, it seems perfectly OK to use tools and ideas to investigate different aspects of the problem even if they appear to be inconsistent with each other. Dare I say it’s what you might expect from an open-minded scientist! She has obviously not closed her mind to investigating this problem from the perspective of it being a purely forcing problem, even though her instinct is that such an approach may be flawed.

160. David Young says:

[Mod: The host has asked for an end to this discussion. It’s not really related to the topic of this thread so let’s honour his request. Thanks]

161. HR says:

ATTP said
“….. but the change in forcing over 1910-1940 is not that different to the change over the period 1970-1990. ”

That statement surprised me because I believed that radiative forcing had changed quite significantly over those periods. So I had to go away and check whether it was accurate.

http://www.pik-potsdam.de/~mmalte/rcps/ – this page has a link to Global Annual Mean Radiative Forcing for 20C historical runs used in CMIP5 starting around 1750

If you calculate a mean radiative forcing for each period then you get quite different values. The Hansen data starts around 1880 and gives an average net forcing of around 0.1-0.2 W/m2 for 1910-1945, while 1975-1998 is around 0.7 W/m2. The PIK data, as I said, begins in 1765, and relative to this date the average imbalances for the two periods are 0.66 W/m2 and 1.25 W/m2. The temperature change for each period is of the order of 0.4-0.5°C. So it seems that both the magnitude as well as the source of the imbalance are quite different for the two periods.

(caveat: I may be doing this all wrong.)
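One possible source of disagreement here is that a period *mean* of forcing (relative to some baseline) and the *change* in forcing across a period are different quantities. A toy synthetic series (not real forcing data) makes the distinction concrete:

```python
import numpy as np

# Toy illustration of 'mean forcing over a period' vs 'change in forcing
# across a period'. The series is synthetic, not real forcing data.
years = np.arange(1900, 2001)
forcing = 0.5 + 0.012 * (years - 1900)   # hypothetical slow ramp, W/m^2

def period_stats(y0, y1):
    mask = (years >= y0) & (years <= y1)
    mean_F = forcing[mask].mean()                                # period average
    delta_F = forcing[years == y1][0] - forcing[years == y0][0]  # change across
    return mean_F, delta_F

for y0, y1 in ((1910, 1940), (1970, 1990)):
    mean_F, delta_F = period_stats(y0, y1)
    print(f"{y0}-{y1}: mean {mean_F:.2f} W/m^2, change {delta_F:.2f} W/m^2")
```

On this ramp the period means differ by nearly a factor of two even though the changes across the periods are comparable, which is one way the numbers above could be reconciled with the earlier comments about the change in forcing.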

162. Joshua says:

HR –

==> “At the same time this is an ongoing scientific investigation it seems perfectly OK to use tools and ideas to investigate different aspects of the problem even if they appear to be inconsistent with each other. Dare I say it’s what you might expect from an open-minded scientist!”

I agree completely. I have no issue with Judith investigating issues to quantify uncertainty. I respect her focus on uncertainty. Absolutely, investigating phenomena from different hypothetical frames to help establish causality is exactly the meat of science.

But here I am focusing on her rhetoric, specifically, when she says that it is “foolish” to think that ACO2 dominates on decadal levels. That is why I focus on the lack of consistency of saying something like that while writing a paper that supports an argument that in fact, ACO2 has dominated on a decadal scale over the last 60 years or so.

It isn’t her empirical, scientific endeavor that is my focus. It isn’t the science of her paper that is my focus. I’m focusing on her unscientific approach to the debate, and her lack of respect for uncertainty within her rhetorical approach.

I think that Judith overstated her perspective. First, obviously, with her extemporaneous statement that it’s foolish to think that ACO2 “influences” climate, and then more subtly with her follow-up replacing “influence” with “dominate.”

Two problems there, IMO. (1) She is engaged in poor advocacy. Advocacy in and of itself is not a problem, IMO, but overstating arguments is poor advocacy. (2) Everyone makes those kinds of mistakes, but with respect for the scientific process, for uncertainty, and for quality advocacy, she should make it explicit that she overstated the case and go back to being clear about the uncertainties that were clear in her paper but “papered over” (pun intended) in her public appearance.

It seems we’ve been back and forth here, essentially repeating the same arguments. I’m not sure why. It seems to me that you’re not dealing with my argument. I imagine you think the same of me. Not sure where else to go with that.

163. Joshua says:

HR –

BTW – In reading Judith’s last two posts that discuss her recent papers, I am struck with her respect for stating caveats. That seems to me to be an appropriate approach to science and I think that kind of approach should be carried forward into the public debate about climate change. Judith says that kind of approach should be used by others, and I agree. I also think that she should utilize the same approach.

164. Tom Curtis says:

Anders, HR, ΔF from 1910-1940 is 0.43 W/m^2; between 1970-1990 it is 0.71 W/m^2. That is calculated by multiplying the OLS regression trend by the number of years for the relevant periods, using the AR5 forcing data from Annex II of WG1.
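Tom’s “OLS regression times the number of years” recipe looks like this on a synthetic series (the real calculation would use the AR5 Annex II forcing data; the slope and noise below are made up):

```python
import numpy as np

# Estimate the change in forcing over a period as (OLS trend) * (span in
# years), as described above. The forcing series here is synthetic.
rng = np.random.default_rng(0)
years = np.arange(1910, 1941)
forcing = 0.014 * (years - years[0]) + rng.normal(0.0, 0.05, years.size)

slope = np.polyfit(years, forcing, 1)[0]   # W/m^2 per year
delta_F = slope * (years[-1] - years[0])   # trend times the 30-year span
print(f"dF over {years[0]}-{years[-1]}: {delta_F:.2f} W/m^2")
```

The advantage over simply differencing the two endpoint years is that the OLS slope is much less sensitive to a single anomalous (e.g. volcanic) year at either end of the period.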

165. Steven Mosher says:

“It would therefore be enormously helpful if scientists who understand what NL is doing would respond – ideally a reply in the literature. Otherwise these studies will be used for years to mislead and confuse policy makers.”

folks who want to take issue with the paper have a unique opportunity to put up or shut up.

In my comments on draft one I suggested that Cowtan and Way should be examined as an alternative temperature series. The response I got was pretty good: A) that’s not what the IPCC used; B) if the reviewers asked for it, it would be included. I was also encouraged by the fact that the code would be released so that people could test the choices made. It’s pretty simple. You actually get to choose your data and defend that choice. You get to do that as a scientist. The unique thing here is that they give you the code so you can show the sensitivity of the answer to the choices they made. I suggest folks do that. Look at the full envelope of choices and sensitivities. The aim of this paper was not to look at all choices. The reviewers could have demanded that. They didn’t. So, you don’t like their choices? Do your own damn science. They made it easy.

Now, I would contrast this paper’s approach with, say, the approach taken by the IPCC (or the literature it relied on). I might take issue with some of the choices the IPCC made. Can I get their method (code) to test the robustness of their results to the choices they made? Nope. That’s a fail. Or take the paleo work in the area (say, Hansen on the LGM): he also made many analyst “choices”. Can I get his code to check his choices? Err, nope. Hmm, there are a couple of other papers on sensitivity that supply code, but I’m unaware of any meta-study that does.

It’s just one paper. But you have the code. Change the choices they made, and go publish.

166. I’m thinking it would be a bit more definitive to grid the entire ocean with robotic sensors. That way we can get some definitive data to plug into the models and not have to rely on some wishy washy macroscopic approximations. Certainly the IceSat and satellite data will help there as well.

It’s your planet. Do you know what your ocean currents are doing tonight?

167. anoilman says:

Steven Mosher: I don’t think anyone will care. Its deficiencies are well understood and known. You can read about a similar paper, Otto et al., here:
https://andthentheresphysics.wordpress.com/2014/09/09/matt-ridley-you-seem-a-little-too-certain/

You can see Richard Alley talk about that paper as well, and why it’s an outlier, etc., in the video I linked above.
https://andthentheresphysics.wordpress.com/2014/09/25/lewis-and-curry/#comment-32348

What would Richard Tol say… something like “If you use a bad method, it doesn’t matter what the result is.”

168. anoilman says:

Thomas Lee Elifritz: Umm… That data is quite correct. Submarines use the same data sets to hide. They can’t very well surface and launch an XBT. “Don’t attack us just yet, we’d like a time out to see where exactly to hide from you at.”

The point is that they rely on exactly the knowledge you question in order to function. If they were wrong, subs would be an utter waste of money. They’d be easy to track and easy to sink.

Oceanographers study currents and have a very good idea about all the macro and micro effects.

169. Joseph says:

I believe firmly that science should be presented in a way that both really is objective and conveys effectively signs of objectivity

I think it is difficult for experts to communicate with a lay audience without oversimplifying the science. The complexity makes it difficult to explain the details (arguments for and against) of all the various uncertainties, especially in a public lecture or news article, and almost impossible for a non-expert to follow.

170. Steven Mosher says:

Seriously, oilman, linking to a video? A non-peer-reviewed video? I sat through that presentation. Ho hum.

Have you looked at the Paleosens results? Point me at their code and data.

Further, it is not a one-paper syndrome; it’s a recent-science syndrome.

Skeie, R. B., T. Berntsen, M. Aldrin, M. Holden, and G. Myhre, 2014. A lower and more constrained estimate of climate sensitivity using updated observations and detailed radiative forcing time series. Earth System Dynamics, 5, 139–175.

This one is nice. done in R

van Hateren, J.H., 2012. A fractal climate response function can simulate global average temperature trends of the modern era and the past millennium. Climate Dynamics, doi: 10.1007/s00382

Now, of course, none of this is definitive. There were some things that Nic and Judith did that I thought should have been done differently. The method, you should know, goes back to Gregory (2002). It was quite good enough for the IPCC AR4 and AR5. I searched for any comments or objections Alley may have submitted to AR5 (chapter 12) disputing the use of papers relying on this method. None. I searched for any comments by any scientist rejecting papers using this method in the writing of AR5. None.

Simply put, every method has issues and limitations. Lewis and Curry use a method that was just fine when Gregory (2002) used it. Of course, the data choices there gave different results.

Different approaches and different data: go figure, you get different results. Two things are clear, however.

171. Oceanographers study currents and have a very good idea about all the macro and micro effects.

Get a grip. Nuclear submarines do not descend to the bottom of the oceans; they cost hundreds of billions of dollars and are not everywhere all the time. And oceanographers don’t have a clue what is going on down there in terms of absolute salinity, temperature, density, momentum vectors, etc. Therein lies the problem. I repeat, we need to grid the entire ocean.

It ain’t that hard, compared to building hundred billion dollar nuclear submarines. Trust me.

172. Steven Mosher says:

here oilman

tell me what you see

173. Tom Curtis says:

Steven Mosher, the selection criterion is that excluded studies only used GHG or CO2 as forcings, ignoring land ice (LI) and/or other forcings. The distinction is one made in the Paleosens paper. What is your point exactly? That you didn’t bother reading the paper before criticizing?

174. Steven Mosher says:
175. Tom Curtis says:

Steven Mosher says, “folks who want to take issue with the paper have a unique opportunity to put up or shut up.”

Fair enough.

I downloaded the AR5 forcing data from Annex II, HadCRUT4 from the Hadley Centre, the Domingues et al OHC data from CSIRO, and the Levitus et al OHC data from the NODC. I then proceeded to calculate ΔT, ΔF, and ΔQ from that data, using the L&C value for Q over the base period 1859-1882 (which I also dispute). The result was that L&C incorrectly estimated ΔT by 0.57% (small enough to be a rounding error), ΔF by 2.68%, and ΔQ by 17.03%. The latter two are too large to be rounding errors. All errors favour lower values for TCR and ECS. Combined, the errors deflated TCR by 3.16% and ECS by 8.27%.

The “error” in ΔT is just a rounding error as noted. That in ΔF may be due to an adjustment to the aerosol forcing. If so, it means Lewis and Curry are not, after all, trying to show what is obtained from the IPCC data, and need to independently justify their choices of data. If they were trying to show the results of the IPCC data, and also obtain the difference when the forcing data is modified as Lewis claims it ought, then they should have shown both.

The difference in ΔQ is the most interesting. I obtained the most recent values by downloading the 0-2000 metre pentadal record, and using the difference between successive values to determine the difference between individual years six years apart. I then used the 2005-2012 annual data as an anchor point from which annual values were reconstructed back to 1955. Comparison of rolling 5-year averages with the pentadal values showed a constant offset over the reconstructed period, which, because it is constant, has no effect on trends. I then deducted the 0-700 metre annual OHC, added in the Domingues 0-700 metre OHC and the Purkey and Johnson trend from 1992 (as per Box 3.1), and divided by 0.93 to account for the heat going into ice loss, into the ground and into the atmosphere.

Interestingly, my figures and L&C’s figures agree to within 1% if I neither add in the Purkey and Johnson trend for OHC below 2000 m nor apply a modifier for non-ocean heat storage. This looks like a likely source of the error.

Resolving the errors results in a mean TCR of 1.37 C, and a mean ECS of 1.79 C per doubling of CO2. These are still low values, but well within the IPCC range. Further, at this stage the errors amount to errors in arithmetic rather than errors in assumptions (of which I believe there are plenty).
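The arithmetic behind this can be checked against the energy-balance formulas from the post. The central input values below (ΔT = 0.71 K, ΔF = 1.98 W/m², ΔQ = 0.36 W/m²) are my assumptions for roughly what L&C used, not figures from the paper; the corrections simply apply the percentage errors quoted above:

```python
# Propagate the claimed percentage errors in dF and dQ through the
# energy-balance formulas TCR = F2x*dT/dF and ECS = F2x*dT/(dF - dQ).
# The 'published' input values are assumptions for illustration.
F2x = 3.71
dT, dF, dQ = 0.71, 1.98, 0.36   # assumed L&C-like central values

def tcr(dT, dF):
    return F2x * dT / dF

def ecs(dT, dF, dQ):
    return F2x * dT / (dF - dQ)

# Apply the claimed errors: dF too high by 2.68%, dQ too low by 17.03%.
dF_corr = dF / 1.0268
dQ_corr = dQ * 1.1703

print(f"TCR: {tcr(dT, dF):.2f} -> {tcr(dT, dF_corr):.2f} K")
print(f"ECS: {ecs(dT, dF, dQ):.2f} -> {ecs(dT, dF_corr, dQ_corr):.2f} K")
```

With these assumed inputs the corrected TCR comes out near the 1.37 C quoted above, and the corrected ECS in the same ballpark as (though not identical to) the 1.79 C, which is about what one would expect given that the starting values here are guesses.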

176. Tom Curtis says:

Mosher, if I missed the point it is because you didn’t bother explaining it. If you are too lazy to do so, my assumption will be that you in fact have no point.

177. HR,
I know Tom’s already responded to the change-in-forcing issue, but I was using this figure, where it appears as though they differ by maybe a factor of 2. Also, the latter period has a lot of volcanic activity. Given that internal variability can play a role on decadal timescales, it seems to me that one can explain the similarity between the two periods as a combination of a forced response plus internal variability that may have contributed a different amount in each period. To me, that these two periods have similar amounts of warming is consistent with internal variability playing a role on decadal timescales, but doesn’t say anything about the kind of role it can play on century timescales.

178. Tom Curtis says:

Anders, I would strengthen that. The similar warmings certainly show that internal variability can have an impact on decadal time scales, but not on multidecadal timescales. Neither the negative perturbation around 1910 nor the positive one around 1940 are more than a decade long.

179. Steven Mosher says:

“Steven Mosher, the selection criterion is that excluded studies only used GHG or CO2 as forcings, ignoring land ice (LI) and/or other forcings. The distinction is one made in the Paleosens paper. What is your point exactly? That you didn’t bother reading the paper before criticizing?”

1. you’ll note that I did not criticize it.
2. I point you at a question. I’ll ask it directly. What is the effect of using different selection criteria?
what is the effect of changing the assumptions ( look at the SI)?

Now, with Nic’s paper I had a question about his selection of HadCRUT4 as opposed to C&W.
He makes it possible to investigate the robustness of his answer to that selection. That’s good.
It’s really a simple point. Let’s let Gavin speak:

“There are three main methodologies that have been used in the literature to constrain sensitivity: The first is to focus on a time in the past when the climate was different and in quasi-equilibrium, and estimate the relationship between the relevant forcings and temperature response (paleo constraints). The second is to find a metric in the present day climate that we think is coupled to the sensitivity and for which we have some empirical data (these could be called climatological constraints). Finally, there are constraints based on changes in forcing and response over the recent past (transient constraints). There have been new papers taking each of these approaches in recent months.

All of these methods are philosophically equivalent.”

Given that, I tend to favor those studies that give people the ability to question for themselves all the data and all the methodological assumptions. Operationally, this means providing people with the tools to do this: the code and the data. So, Paleosens made some decisions. They get to do that. They are defensible. Nic and Judith made some decisions. They get to do that. They are defensible.
The question is how the results change if we change those decisions.

I observe that one publication gives me the tools to question it. I can actually verify that the figures in Nic’s paper are the result of his method. That is actually the first hurdle: can the claim be verified?
Not is it true, but do the methods as described actually produce the results as published.
Then, after this is verified, we can look at changing or perturbing the choices made.

Here, you can take a course:

https://www.coursera.org/course/repdata

180. Steven Mosher says:

Tom

“I downloaded the AR5 forcing data from Annexe 2, HadCRUT4 from the Hadley Center, Domingues et al OHC data from the CSIRO and Levitus et al forcing data from the NODC. I then proceeded to calculate ΔT, ΔF, and ΔQ from that data. ”

Cool. Post your code and data. What answer did you get when you ran Nic’s code?
If you got it running you can save Robert Way and me some time.

181. Mosher,
Maybe I’m missing the point you’re trying to make, but I don’t think anyone is suggesting that there is an error in any calculation in NL14 or that one couldn’t go and find all the necessary data and repeat the calculation.

But do the methods as described actually produce the results as published.

Sure, I’ve seen nothing to make me think that they don’t.

However, there is a difference between doing a calculation that is correct and can be verified, and doing a calculation the results of which are a reasonable representation of reality. My feeling is that NL14 have selected datasets and made assumptions that minimise the ECS and that there are other issues with this method in general (as explained in the post) that likely mean that the results are a lower limit. This isn’t a criticism of this paper or this method, since I think this method is very useful and informative; simply an observation that I think is broadly accepted.

182. Tom Curtis says:

Mosher, I cannot program in R so Nic’s code is irrelevant to me. I have cited the sources of the data I used, and specified the only unusual procedure I used, so my results are easily reproducible. If it matters at all, I used Open Office Calc for the calculations. You are more likely to discover a genuine error in what I or Nic did if you download the data for yourself, and devise the best algorithm you can to reproduce the result using that data. Only if there is disagreement at that stage is back-tracking through Nic’s code and data relevant, but that is something I will have to leave in others’ hands.

183. Tom Curtis says:

Mosher, “using different selection criterion” in Paleosens makes the difference between calculating a reasonably close approximation of the ECS, or calculating the Earth System Sensitivity. Very simply, when you leave out as important a forcing as Land Ice, you treat changes in Land Ice as a feedback, and hence are calculating ESS. The same, in reverse, would apply if you retained LI but left out the greenhouse forcing.

So, yes, if you use a different method you can calculate an entirely different theoretical value. What does that have to do with anything discussed above?

184. Tom Curtis says:

Mosher:

“1. you’ll note that I did not criticize it.
2. I point you at a question. I’ll ask it directly. What is the effect of using different selection criteria?”

Exactly. You used the method of innuendo rather than that of discussion. Again, the choice of method reflects a choice of purpose.

185. Tom Curtis says:

Anders, not an error of calculation as such, but it certainly appears that L&C have not included all the data they indicate they included. Specifically, they state:

“Estimation of final period mean heat uptake is derived from the climate system energy accumulation observational best estimates and uncertainty ranges shown in Box 3.1, Figure 1 of AR5. These estimates include 0–700 m OHC from an update of Domingues (2008), 700–2000 m ocean heat content (OHC) from Levitus et al. (2012), *and allowances for minor heat uptake by the abyssal (2000–6000 m) ocean, ice melt, land and atmosphere*, as described in detail in Box 3.1 of AR5.”

However, from my analysis it appears they have not included heat flux into non-ocean systems, nor heat flux below 2000 meters. That is, they did not include the elements mentioned above, and emphasised by me.

186. Tom,
Interesting, I didn’t realise that. If not, that would further reduce their system heat uptake estimates and, obviously, reduce the ECS best estimate.

187. jsam says:
188. It seems obvious that most people commenting here expect that L&C would have a bias and favor choices that minimize sensitivity. There’s no doubt that they can cherry pick evidence to support that belief.

I haven’t seen any evidence that would directly show that their work is biased. Everything written here also allows for the interpretation that they have little bias and their choices are justifiable, and that they have also made choices that lead to higher estimates for sensitivity. It’s obvious that nobody has had an interest in searching for such choices.

Someone who has spent considerable effort in doing a similar analysis, and who has therefore been forced to compare alternative choices systematically, might be able to present a less biased assessment of the L&C paper. My impression is that none of the people who have contributed to this discussion has such a background. People who have such a background may choose other fora to present their criticism.

This is a scientific paper published in a quality journal. It studies a well defined problem, stating clearly its assumptions and describing its methods. It’s a more careful and more detailed repetition of work done before. Thus it’s not great science, but the reviewers and the journal judged it worth publishing. It’s just one addition to the pool of analyses that tell about climate sensitivity. Its value as such will probably be assessed in AR6 (I’m afraid AR6 will appear around 2022, although IPCC should now switch to a new mode of operating that would also allow for assessing this paper much faster, like in 2016).

I tried to understand some relationships between the numbers they published. Some of the error ranges appeared somewhat contradictory, but for now it’s likely that the mistake is in my reasoning. It’s related to the use of asymmetric PDF’s and having uncertainty in the denominator in calculation of the sensitivities. I’m not sure, whether I’ll spend more effort to study this point. The apparent inconsistency is not very large, but it’s not negligible either.

189. BBD says:

If Mosher is as smart as he thinks he is, why is he the only person here who cannot see that L&C is a constructed result? All the choices made go in one direction only. This is unlikely to have arisen by chance.

Is Mosher naive?

190. BBD says:

Pekka

You are defending the indefensible.

There’s no doubt that they can cherry pick evidence to support that belief.

Frankly, this is risible. You are asking us – me – to ignore Nic Lewis’ affiliations, public statements and previous work.

It is borderline insulting and at this point I am struggling to remain civil.

191. jsam says:

“David Cassatt Are McI and crew still partying like it’s 1999? Wait until they check out Watson and Crick’s second DNA paper and see that there’s one too few hydrogen bonds in the figure. Sheesh.”

192. verytallguy says:

BBD,

Sorry, but I’m with Pekka. I’m not qualified to judge the technical content, but I’ve seen nothing to suggest that the conclusions of the paper are “risible”

If they have cherrypicked so blatantly, it should be very easy to publish a rebuttal with either much wider uncertainties or a different central estimate.

In the same way that sceptics whine about the disappearing of the MWP but *never* put up a proper analysis to show it was real, so here it seems to me incumbent on anyone who claims this paper is fundamentally wrong to simply publish an alternative energy balance approach analysis showing what is right.

193. BBD says:

VTG

You have misunderstood my comment. It is Pekka’s blanking of the context that is risible. In the real world, we cannot simply ignore where Curry and Lewis are coming from. They are advocates.

194. BBD says:

I’m starting to doubt my senses. Are we seriously now arguing that L&C didn’t construct their result by consciously choosing data sets and methodology? Are we seriously arguing that they are not advocates peddling low S?

It may be time for a reality check.

195. verytallguy says:

BBD,

I think Pekka’s blanking of the context is exactly right in considering the merits of the paper.

Does the science stand up is the question, not who is the author.

After all, what if they are right – not just on this, but more broadly? How do we judge that if we *start* from the perspective that they *must* be wrong?

By all means criticise the use to which the paper is put – like most of the science it will be deliberately and consciously misrepresented by “sceptics”. But I don’t think it’s either right or helpful to assume the paper itself is “constructed” to show what they want it to unless you can show by a new analysis that it is.

And that means a new analysis showing a different midpoint or uncertainty range.

196. BBD,

Cherry picking counterarguments is not wrong in itself. That’s always the first step in assessing a new scientific paper and searching for weaknesses in it.

What’s wrong is to conclude from some success in that activity that the paper is biased as a whole.

Being eager to conclude that a paper is biased implies a specific role in Willard’s Climate Ball. It also means that people of different views, including those who search for objectivity, observe the role taken and give less weight to the further contributions of that commenter. I have unavoidably built in my mind a view of the roles various people play in Climate Ball and the rules they apply in the game. Some people have lost much of their credibility for me. I’m not going to name names, but they are not restricted to one side of the spectrum. (All those who contradict physics are out of scale from the start.)

There are also many whom I see as sincerely searching for truth, perhaps a little biased, but ready to accept opposing arguments, when they are good. For them the originator of some idea may be a major factor, but they don’t dismiss fully the possibility that an “opponent” might in some case be right.

197. BBD says:

We have now veered off into counter-productive pseudo-objectivity.

198. BBD,

Curry is not only an advocate, she’s more a scientist.

My impression of Nic Lewis is that he entered the field with a very strong bias, but that he also wants to be seen as a scientist and makes a sincere effort to be worthy of that judgement.

They know that the general approach of their paper tends to give relatively low values for sensitivities. Therefore it’s important even from the advocacy perspective to write a paper that’s not technically biased and that cannot be scientifically dismissed as such. Add to that their motivations to be seen as competent scientists.

The motivations are not as simple as many seem to think. Even biased motivations do not necessarily work towards a biased paper.

199. verytallguy says:

BBD, here’s an alternative narrative:

NL want to show low sensitivity (1)
NL know that of the possible methodologies, this technique gives the lowest answer (2)
NL have done a perfectly reasonable analysis using this technique (3)
The wider denial community will spin this as the one true answer on sensitivity (4)

Attacking NLs motivations merely makes their science look stronger.

(1) We know this from Nic’s GWPF affiliation, and JC’s stated aim to attack the IPCC
(2) This is obvious, see AR5
(3) Otherwise how would it pass review in a quality journal
(4) I can’t be bothered looking, but I’m sure this will play out.

200. Paul S says:

Mosher,

‘A) thats not what the IPCC used.’

Are Lewis and Curry aware that the IPCC gave an estimate for climate sensitivity? Could have saved them some time 😉

201. At 8:52 am I wrote a comment that started
===
BBD,

Cherry picking counterarguments is not wrong in itself. That’s always the first step in assessing a new scientific paper and searching for weaknesses in it.

What’s wrong is to conclude from some success in that activity that the paper is biased as a whole.

===

Some words in the remaining text of the comment may have got it stuck. Therefore this comment.

202. Hello all,

Just a note to avoid confusion – although we wrote about coverage of this paper here – http://www.carbonbrief.org/blog/2014/09/your-questions-on-climate-sensitivity-answered/ – the ‘Christian’ commenting above is not me.

Cheers,

Christian (Hunt) of Carbon Brief

203. BBD says:

Pekka

Yes, L&C is not fatally flawed. Yes it has scientific legitimacy. No we are not going to indulge in yet more grey mastication of these side-issues instead of focussing on the core issue. Which is that L&C is tactical. It was conceived and executed with a purpose. A spade is a fucking spade in the real world. Wake up.

Let’s not muddy the waters with stuff like this:

What’s wrong is to conclude from some success in that activity that the paper is biased as a whole.

That’s not what I said. That is a strawman. Enough, please.

204. Pekka,

“I haven’t seen any evidence that would directly show that their work is biased.”

I provided one clear example above. He adjusted the aerosol forcing by +0.12 W/m2 because he believes that “observational” estimates are the ones that can be relied on. That is simply not true (since there is no such thing as an observational aerosol forcing estimate). Since this is well outside his area of expertise (and his associated reasoning is a case in point), he has no business disputing the AR5 best estimate. Clear evidence of bias.

These are the tiny bits that make the difference between trustworthy and not so much. Apply a few reasonable modifications and TCR won’t be lower than 1.5K. Why would you publish almost the same study (Otto et al. 2013) again, if not to use the latest updates available since then? Why still ignore the problem which arises from inter-hemispheric forcing differences (a problem which becomes very obvious in Skeie et al. 2014)? Perhaps because it involves a few more physical considerations? Well, Steve Mosher is quick to find an excuse for this paper, but it’s so lame that I have difficulty imagining he really believes what he is saying.

——————————————————————————————–

Paul S,

re your EBM-point you’ve made at 9:28pm yesterday: This is why some quarters keep insisting on the validity of EBM (-ECS) estimates. The sole purpose is to show that things seem to be inconsistent. Non-linearities are overrated anyways, aren’t they?

205. KarSten,

My point was specifically that there are certainly details where you can make such an observation, but any scientific paper of that nature is likely to have potential issues of that kind. The question is whether that indicates strongly that the paper has an overall bias.

When I said that it’s certainly possible to cherry pick such evidence I meant that a paper may have potential biases in both directions in several places, and that picking one proves little by itself. Much more is needed to conclude an overall bias.

206. BBD says:

Much more is needed to conclude an overall bias.

207. Pekka,

Didn’t Tom C provide enough evidence to show that his bias tends to be one-sided? But more importantly, it’s what NL has NOT done which clearly indicates bias. I don’t for a minute think his actual analysis is flawed, but his physical choices are very one-sided. NL certainly believes he isn’t biased, but that alone doesn’t make it so. And of course, it is only my very own opinion based on my own expertise. The fact that I am not the only one who thinks that way shouldn’t matter, but it may help to get a better feeling though (hint: I am now working in Myles Allen’s group).

208. Tom Curtis says:

K.a.r.S.t.e.N, the published forcings from IPCC WG1 Annexe 2 show a ΔF of 1.928 W/m^2 between the 1859-1882 and 1995-2011 periods, compared to 1.98 shown by Lewis and Curry. That is a 0.05 W/m^2 increase in forcing, which is probably related to a reduced strength of aerosol forcing. It is, however, not the 0.12 W/m^2 reduction in aerosol forcing you are describing. It may be that he has increased the 1995-2011 aerosol forcing by 0.12 W/m^2, and the 1859-1882 aerosol forcing by 0.07 W/m^2 on a similar basis. Do you have evidence of that?
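Figures like the 1.928 W/m² here are just differences of window means over the AR5 Annex II series. A minimal sketch with a synthetic forcing series (the helper function and the toy data are mine; only the window definitions come from the paper):

```python
import numpy as np

def delta_F(years, forcing, base, final):
    """Mean forcing over the `final` window minus the mean over the
    `base` window. base/final are inclusive (start, end) year tuples."""
    years = np.asarray(years)
    forcing = np.asarray(forcing)

    def window_mean(lo, hi):
        m = (years >= lo) & (years <= hi)
        return forcing[m].mean()

    return window_mean(*final) - window_mean(*base)

# toy example: forcing rising linearly from 0 to 2 W/m^2 over 1850-2011;
# the real calculation uses the IPCC AR5 Annex II total forcing series
yrs = np.arange(1850, 2012)
F = 2.0 * (yrs - 1850) / (2011 - 1850)
print(delta_F(yrs, F, (1859, 1882), (1995, 2011)))  # ~1.65 for this toy series
```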

209. KarSten,
You may have sufficient reasons for your opinions. Being a total outsider severely limits my ability to judge the balance. I have fair trust in active scientists of the field being discussed, but it’s obviously not easy to decide when a view presented by a scientist should be taken to be supported by the right kind of authority.

On the other side, one objective fact is that the paper was accepted for publication in a high-quality journal. The authors claim that the reviews were favorable, and I consider it very unlikely that they would lie about that. There are also some other factors that influence me, but they are not simple enough to list here.

What Tom C and you have presented in this thread is still highly insufficient to prove to me that the paper is biased.

My own judgment about how the paper relates to the wider climate change discussion is similar to that presented by VTG, but that does not mean that the paper need be biased in presenting that particular research issue. The paper is also fully explicit on the adjustment they made for aerosol forcing and their argument for it.

210. Marco says:

Mosher tells us:
“In my comments on draft one I suggested that Cowtan and Way should be examined as an alternative temperature series. The response I got was pretty good. A) thats not what the IPCC used. B) If the reviewers asked for it it would be included.”

Pretty good response? The IPCC obviously could not include Cowtan & Way in any of the analyses, since AR5 WGI was already released before Cowtan & Way was accepted.

As for response 2…what can I say. Sounds to me like “let’s hope the reviewers don’t say anything, it would be inconvenient if they do”.

211. Tom Curtis says:

Further to my discussion of the determination of Q during the nineteenth century, one reasonable approach is to make use of the Ocean Heat Uptake Efficiency, κ (units of W/m^2 per degree K), as discussed in Gregory and Forster (2008). κ indicates the uptake of Ocean heat relative to a change in temperature, with ΔQ = κ 8 ΔT. Again, according to Gregory and Forster, across the ensemble of CMIP3 models, κ = 0.6 +/-0.2 W/M^2 per degree K. Coupled with the HadCRUT4 trend of 0.0085 K per annum over 1859-1882, that yields a mean Q of 0.005 W/m^2 over that period, and (with the other corrections discussed above) a mean ECS of 1.98 K per doubling of CO2.

According to Kuhlbrodt and Gregory (2012), κ is inversely correlated with TCR. Based on their figure 1, and a TCR of 1.4, we should increase κ by about 25% to adjust for the TCR findings of L&C. Doing so does not reduce the calculated ECS within rounding factors.

The striking thing here is that this reasonable method for determining Q in the nineteenth century returns values one twenty-third of the size of those found by L&C, with a consequent adjustment upward of ECS greater than the combined effects of the three errors noted previously. As I have noted above, L&C did not apply their chosen method correctly, both by using the values of a single run of a single ensemble member to determine Q, and because of an invalid downscaling. Even so, the magnitude of the difference between these two purportedly reasonable techniques is striking.

212. Tom Curtis says:

Pekka, given your most strenuous argument to date on this thread is that I (and others) should not be even doing the sort of analysis that I am doing, your treating the finding of two errors and one dubious methodological choice inappropriately applied, all tending to reduce the headline ECS, as not sufficient evidence of bias shows more about your desires than your grasp of the situation.

213. Tom Curtis says:

In my comment two above, κ 8 ΔT is intended to be κ * ΔT. Sorry.

214. Joshua says:

BBD –

==> “In the real world, we cannot simply ignore where [X] and [Y] are coming from. They are advocates.”

Consider that I could have read this very comment from Judith or some of her “denizens” without skipping a beat (and certainly have read many, many, that employ the same logic).

215. Joshua says:

Tom –

==> “Pekka, given your most strenuous argument to date on this thread is that I (and others) should not be even doing the sort of analysis that I am doing,”

I don’t read Pekka saying that. Could you be specific?

216. Joshua says:

Pekka –

==> “Curry is not only an advocate, she’s more a scientist.”

Please show your math. Judith has long been a scientist. She now seems to devote a significant amount of energy to activism. You seem to think that there is some clear imbalance of evidence. I don’t know how you make that evaluation. I would ask the same question of someone if they said that Mann is “more” a scientist than advocate. Or you, for that matter. In fact, I think that embedded in your comment is a false concept – that advocate and scientist are somehow distinct categories (that might overlap within one individual). I don’t think that they are separable in such a fashion.

217. Tom,
I didn’t go to all the details in the paper. Perhaps the difference is to do with the scaling issue for aerosols (adjust 1750-2011 AR5 forcing to 1880-2011 period), although I’d expect to see a smaller difference. So I am afraid I can’t really help without crunching the numbers in detail (which I am not planning on doing due to time and motivation constraints).

218. BBD says:

Joshua

Consider that I could have read this very comment from Judith or some of her “denizens” without skipping a beat (and certainly have read many, many, that employ the same logic).

If I lie or misrepresent, then please correct me. If I am correct, why are you carping again?

219. BBD says:

Joshua

Do you disagree that L&C is tactical? Yes or no.

220. To me the worst part of Climate Etc are the organized attacks on Mann – and the worst part of this site are the comments that draw their conclusions from assumed influence of wrong motives in the activities of a variety of people who present “wrong views”.

I have written that selecting an approach that cannot be separated from the worst practices of the opponents makes the argumentation symmetric. There then remains no reason for an outsider to believe one side more than the other.

Science wins by being presented differently, in a way the other side cannot adapt successfully.

221. I was going to make a longer comment, but I have some things to do. Let’s keep this pleasant and bear in mind that most of what’s being said has some merit even if not all of what someone says is correct.

222. Paul S says:

Tom Curtis,

‘κ = 0.6 +/-0.2 W/M^2 per degree K. Coupled with the HadCRUT4 trend of 0.0085 K per annum over 1859-1882, that yields a mean Q of 0.005 W/m^2’

Surely ‘per degree K’ refers to the total temperature change over a particular timescale rather than the per annum trend? Otherwise a 0.2ºC/Decade trend would indicate Q of only 0.012W/m2.

0.0085K/yr over a 24 year period is about 0.2K, which would indicate Q = 0.12W/m2.
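The units point is worth making explicit: κ multiplies the temperature *change* over the window, not the per-year trend. A short check with the numbers quoted above:

```python
kappa = 0.6    # W/m^2 per K (Gregory & Forster 2008, CMIP3 ensemble mean)
trend = 0.0085 # K per year, HadCRUT4 trend over 1859-1882
years = 24     # length of the window in years

dT = trend * years  # total warming over the window, ~0.2 K
Q = kappa * dT      # heat uptake implied by kappa
print(round(Q, 2))  # 0.12 W/m^2, matching the figure above
```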

223. HR says:

BBD,
Presumably, given their association with SkS, one could maybe uncontroversially suggest that Cowtan and Way have a bias in the opposite direction. I wonder, knowing this, if you hold their work up to the same scrutiny using the same logic you use for Lewis and Curry. A link to a comment by you expressing similar concern about C&W would be illuminating.
Personally, I don’t care whether any of these people are Good Guys or Bad Girls. Given my own limited technical abilities I’m looking to see whether reasonable and intelligent people are pointing to problems with the work that could seriously alter the conclusions. I don’t see that yet beyond the usual caveats, but I guess you have to stay open-minded to the possibility.

224. Joshua says:

==> “To me the worst part of Climate Etc are the organized attacks on Mann”

Geebus, Pekka –

How many threads has Judith written that amount to basically, read meat within a feeding frenzy of that attack environment?

That’s why I’m asking you for your math. Here’s some simple math: Consider the # of posts Judith has written about Mann (as the primary subject) as compared to the number of posts that she has written with appropriate scientific caveats such as were included in her last two posts about her papers?

Given that a common theme in many of your comments is the counterproductive impact – within the arena of general public opinion – to overconfidence and overstatements from scientists in the face of uncertainty, as a result of “advocacy”….. I really think that you should address my question head on.

By what calculus do you conclude an obvious imbalance in the degree to which Judith is a scientist as opposed to advocate?

Again, I don’t think that the two categories are really separable. But your statement implies that you do. So what is your measure? I’m not really looking for numbers – I’m asking for the nuts and bolts of your calculus.

225. Joshua says:

Anders –

==> ” Let’s keep this pleasant and bear in mind that most of what’s being said has some merit even if not all of what someone says is correct.”

FWIW –

If you must moderate, I would suggest that as the one rule for moderation.

226. HR,
Firstly, I don’t think Robert Way or Kevin Cowtan have ever presented evidence to a senate committee (or congress) or to parliament. Judith and Nic Lewis have both done so, and the unfortunate thing – in my view – about Nic Lewis’s testimony to parliament was that he promoted his own work over others and that he promoted the idea that the TCR is what’s relevant and that the low values that his work suggests are most likely. As much as I sympathise with those who think we should judge a piece of scientific work on its merits (and we should) it’s hard not to conclude that both Nic Lewis and Judith Curry are biased towards wanting the climate sensitivity to be low (to be fair, I’d also like it to be low, I just don’t think the evidence supports such a position).

The other factor is that what Cowtan and Way did was to try and understand the implications of the HadCRUT4 undersampling the Arctic. They did a number of different tests and got broadly consistent results and got results that even Judith Curry did not regard as surprising. We expect the Arctic to warm faster than other parts of the globe and hence undersampling in the Arctic means that the temperature we estimate will be reduced below what it actually is.

227. Joshua says:

BBD –

==> “Do you disagree that L&C is tactical? Yes or no.”

Yes, I would agree that it is tactical. In two ways. First, it is tactical in the sense that any scientist who is trying to affirm a hypothesis is tactical. The important question there, for me, is whether the appropriate acknowledgement of limitations is included – through transparency in methodology and discussion of caveats. My understanding of her recent papers is that such process is included. To the extent that it can be shown that they haven’t, then it certainly is fair game to question whether the “tactical” nature is consistent with the practice of good science.

Second, as an advocate, Judith is certainly within her rights to use science “tactically.” The problems enter, IMO, when science is exploited by poor advocacy – i.e., advocacy that fails to acknowledge uncertainties. I am certainly critical of Judith’s form of advocacy. But that is not the same thing as reverse engineering to say that someone’s science is flawed by virtue of them being an advocate: (1) all scientists are advocates and, (2) that is one of the most common fallacies I see among SWIRLCAREs (Someone Who Is Relatively Less Concerned About Recent Emissions) at Judith’s.

228. Joshua says:

Anders –

==> “As much as I sympathise with those who think we should judge a piece of scientific work on its merits (and we should) it’s hard not to conclude that both Nic Lewis and Judith Curry are biased towards wanting the climate sensitivity to be low”

Oy.

229. But that is not the same thing as reverse engineering to say that someone’s science is flawed by virtue of them being an advocate

I agree, and I think this is quite important. Simply advocating does not make one a poor or untrustworthy scientist.

230. Michael says:

Lewis and Curry is quite useful – irrespective of any perception of bias.

Yes, they probably tend to ‘low-ball’ many elements, but every paper of this kind involves decisions based significantly on judgement.

The result – a significant overlap with AR5 estimate. Which increases my confidence that the AR5 has it right.

What would be very interesting is if they did the flip-side: making choices that tended to the reverse.

231. Oy

Ooops, did I show some bias there 🙂

232. Joshua says:

Heh –

==> “How many threads has Judith written that amount to basically, read meat…”

That was supposed to be “red meat” but I actually like read meat better.

233. Joshua says:

==> “Ooops, did I show some bias there 🙂

Shit happens.

234. Indeed it does. I would add though, that we should all want it to be low. We don’t, however, always get what we want.

235. Concerning the C&W temperature index I copy here a comment I wrote at Climate Etc.

I add first that in the first paragraph of that comment I have in mind issues like that
– the temperature is measured in air at the altitude of 2 m in land areas
– over the oceans the temperature is measured from near surface water
– temperature does not describe well, what happens when open water is replaced by snow-covered ice
– under conditions of inversion the air temperature at the 2 m altitude is much more variable than under non-inversion conditions. Thus the regions of persistent inversion may get too much weight in the calculation of average temperature relative to their importance for climate or other effects.

Factors like the above lead to my comment that the definition of GMST is less than obvious.

The rest is from the earlier comment.

HadCRUT4, BEST, and CW2014 are all temperature indices. They all tell about the warming of the Earth near the surface. They are calculated using specific methods and describe something that can be crudely called the average surface temperature, but it’s not clear what The Global Mean Surface Temperature really is. Even less clear is what the “best” or most useful index of global surface temperatures would be.

It needs to be recognized that the HadCRUT4 methodology leads to less warming, and that the difference is related to its lesser coverage of high-latitude temperatures. Thus the “HadCRUT4 sensitivity” is smaller than the “CW2014 sensitivity”. When either one is used in a situation where a difference of about 10% matters, it must be known which of the sensitivities is used. When comparing the result to some other number, like the sensitivity seen in a model, apples should be compared to apples and oranges to oranges. Thus it should be known what coverage is implied in the model calculations, and the comparison should be made with the corresponding index.

In many cases it may be easier to use the model to calculate its index for the coverage of a specified empirical index. That’s probably the preferred approach for comparison. It could be worthwhile to check also, whether the model can explain the difference between HadCRUT4 and CW2014. If not, that’s a weakness of the model. In such cases it’s not obvious at all, which of the indices is more applicable for the particular comparison.

236. Tom Curtis says:

Paul S, on closer reading you are right. Thank you for picking up my error.

237. Paul S says:

Karsten, Tom,

Lewis and Curry mention ‘an argument’ for changing the -0.9 aerosol forcing to -0.78 by reference to a list of values which nominally come from observational studies. This is a bit misleading since 4 of the 6 studies are associated with values more negative than -0.78. The median is -0.85, which is more representative. But the numbers given aren’t strictly observational anyway. The IPCC authors made substantial adjustments from the estimates given in the original papers (which average about -1.15W/m2) based solely on modelled estimates for various considerations.

In any case they opted to stick with -0.9 for the main result.

On the ratio of 1750 to 1850 forcing, it surprised me how large it is in the AR5 time series – about 25% of the aerosol forcing occurs prior to 1850. I think this ratio comes from Skeie et al. 2011, no idea if it’s broadly representative or if there are many other estimates. I guess it would have to be primarily biomass burning prior to 1850, which would be largely informal, so there must be huge uncertainties concerning inventories of anthropogenic emissions over 1750-1850. It seems frankly unbelievable to me, with the industrial revolution and population increasing sevenfold, that one-quarter of 1750-2011 aerosol forcing occurred prior to 1850.

238. Paul S says:

Pekka,

There is a coverage bias in HadCRUT4, but relative to models there’s also a bias in use of SSTs whereas most model global temperature statistics you’ll see are global SAT. My preferred method of comparison is to clip both models and obs. to 60ºS-60ºN, thus mostly getting around the coverage and sea ice problems, then produce a model landSAT+oceanSST combination. As a shorthand I’ve found models indicate the bias between HadCRUT4 and model SAT should be about 15%.
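Paul S’s ~15% shorthand can be sketched numerically. This is my own toy illustration, not his calculation; the 0.16 K/decade input trend is a hypothetical number, not a value quoted in the thread:

```python
# Toy sketch of the ~15% shorthand above: scaling an observed
# HadCRUT4-style blended (land SAT + ocean SST) trend up to a number
# comparable with model global SAT.
BIAS = 0.15  # models suggest blended obs run ~15% low vs global SAT

def sat_equivalent(blended_trend):
    """Convert a blended-obs trend to a model-SAT-comparable trend."""
    return blended_trend * (1.0 + BIAS)

trend_obs = 0.16                  # K/decade, hypothetical blended trend
trend_sat = sat_equivalent(trend_obs)  # ~0.184 K/decade
print(f"{trend_sat:.3f} K/decade")
```

The direction of the correction matters: blended SST-based indices warm less than pure SAT, so the observed trend is scaled up before comparing with model SAT output.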

239. Paul

When models and data are compared, the most important thing is that they refer to the same variable. Choosing what the variable is, is less important, but a good choice has the following features:
– It can be determined well from both the data and the model with as few as possible additional assumptions.
– It’s not disturbed by unnecessarily much noise.
– The coverage is as wide as allowed by the above considerations.

Your choice of restricting the comparison to the range 60ºS-60ºN may be close to optimal when all three requirements are taken into account, as higher latitudes may add to the noise due both to sparser coverage and to the other factors I mentioned at the beginning of my previous comment.

240. Paul S,

thanks for revisiting the aerosol issue. It is indeed important to stress that the AR5 authors already reduced the model estimate (mainly based on Shindell et al 2013) considerably, which is why the aerosol adjustment in Otto et al. 2013 was +0.3 W/m2 prior to AR5. But luckily, they presented the numbers also without adjustment (TCR goes up from 1.3 to 1.5 for the 2000s). The Otto et al. 2013 TCR estimate for all four decades (1970-2009) thus translates to 1.5 (rather than 1.4). Using Cowtan & Way makes it another tenth of a degree. Interhemispheric considerations might change things further, though I am not convinced that it plays a role. Kuehn et al. 2014 is interesting in that respect as it might have implications for Shindell 2014.

On the 1750 to 1850 ratio, I am also not convinced that it plays such a strong role given that sulphate emissions seem to dominate the aerosol forcing. I can see that Europe got pretty “dirty” prior to 1850, but that would have to be sulphates, which simply don’t show up in the emission inventories. Don’t know what it is with the biomass burning stuff (according to Table 1 in Skeie et al 2011, BB seems to be of anthropogenic origin indeed). To be frank, the estimate of the potential temperature impact of anthropogenic aerosols past 1900 (see Wilcox et al. 2013) is much more valuable information than any forcing estimate could ever be. As far as my opinion is concerned, an inverse estimate of the aerosol forcing from this temperature change (-0.45K between 1900 and 2005) appears to be the most successful way to pin down the forcing. It shouldn’t be too far away from the -0.9 W/m2 if conventional sensitivity wisdom is applied.

241. okay, as always, something went wrong with the HTML tag (a quotation mark at the end where it doesn’t belong). Here are the plain links in the right order again:

242. Joshua says:

Pekka –

Still waiting….

https://andthentheresphysics.wordpress.com/2014/09/25/lewis-and-curry/#comment-32656

Given the attention that Judith pays to the debate about the debate, in particular by posting many threads about Mann, I would be curious to know on what basis you made your statement above.

It seems that you are very quick to criticize the rhetoric of “advocacy” on the “realist” side of the debate. What about Judith’s advocacy?

Sometimes you avoid direct answers to direct questions. I’m hoping this might be an exception.

243. Rob Nicholls says:

A few questions and thoughts from the slow student at the back:
1) How much does Lewis and Curry’s paper change the overall picture in terms of the whole of the evidence around climate sensitivity? (I’m guessing it doesn’t really alter it at all as there were already several studies involving these kinds of models (basic energy balance models?) with estimates at the low end of the IPCC’s range; but then what would I know?)
2) From reading ATTP’s post and the comments (all of which I’ve really enjoyed), and from reading Lewis and Curry’s paper, I’ve got the impression that there is a lot of uncertainty around the kind of basic energy model used by Lewis and Curry (L&C), partly because these models are rather simplistic, and partly because going back to the late 1800s and early 1900s there’s a lot of uncertainty around the parameters in question. Is it fair to say that uncertainty for these kind of models may be too small?
3) Is there somewhere I can read more about the limitations of these basic energy balance models? Is there something inherent in these models that results in them being biased low? (I’ve read some posts by ATTP on this subject and have been looking at AR5 WG1 chapter 10. I get the impression that all methods for estimating climate sensitivity have limitations. I haven’t found yet anything in detail in AR5 on the kind of model used by L&C).

It’s interesting that it wasn’t possible for L&C to avoid completely using a more sophisticated climate model, when estimating changes in ocean heat content. If an AOGCM (albeit with tweaking) is good enough for these purposes, then presumably AOGCM’s might have an important contribution to make in terms of estimating climate sensitivity? (Perhaps I’m misunderstanding how L&C used the CCSM4 model).

I don’t have the ability to evaluate whether L&C has obvious flaws or not (I mean flaws rather than the general limitations of the method used) – I note the comments above on this issue. As ever, I’ve enjoyed and appreciated Pekka’s comments and I’ve been trying not to assume that L&C’s paper is flawed just because of the bias that I think is apparent in (at least some of) Curry’s blog posts and congressional testimonies.

What’s striking for me is that even if the effective climate sensitivity estimates in L&C’s paper are too low (and they do seem to be at the low end of available estimates based on a number of different methods), their 95% confidence intervals, using different time periods, all include 4 degrees C (one of them includes 9 degrees C). To me that would suggest that we need drastic cuts in CO2 emissions to prevent the possibility of very serious consequences, but I somehow doubt that Lewis and Curry see it that way.

244. Rob Nicholls says:

When I said “Is it fair to say that uncertainty for these kind of models may be too small?” above, I meant “Are the uncertainty estimates quoted for these basic energy balance models likely to be too small?”, i.e. is the real uncertainty likely to be greater than just the quoted uncertainty? (I suppose in most science the answer is usually “yes”, as it’s very difficult to eliminate or completely quantify systematic errors.)

245. Joshua says:
246. Joshua,

Worst part does not refer to most common part.

247. Rob,

some short feedback perhaps:

1) No change whatsoever.

2) EBMs rely on input data which can only be obtained from GCMs. Uncertainty is greatly underestimated for ECS (only at the upper end of the tail!) as non-linearities are completely ignored.

3) Since EBMs can’t account for non-linearities by design, you have to rely on other metrics. What you could do is to check how currently observed ocean heat content change compares with GCM values for the past 50 years where we have ocean observations. IPCC AR5 has very conveniently done this for us already:

And they didn’t even account for a small forcing overestimate in the CMIP5 runs (which would bring the modelled ocean heat uptake down a bit if properly accounted for). Hence, knowing that these models are in the ballpark of 3K for ECS, you can basically tell that NL’s estimate must inevitably be wrong.

As mentioned earlier, the actual analysis is certainly carried out properly. But bringing it up to scratch (and in agreement with AR5 expert judgement) would almost entirely remove the low bias in his transient TCR estimate (which is the only parameter which provides meaningful information in these sorts of EBM exercises).

248. Rob,
Your understanding seems about right. There’s nothing fundamentally wrong with these energy balance models, but they are quite simple. The main issues are probably that they can’t capture non-linearities in the feedbacks, slow feedbacks, or inhomogeneities in the forcings. This would tend to suggest they underestimate the climate sensitivity.

As far as LC14 goes, their data has produced a result that is probably about as low as you can get with any reasonable set of data. It would be quite easy to choose different datasets and make different (but still valid) assumptions that would give higher values (for the best estimates at least). At the end of the day, though, the range they get for the ECS (1 – 4) and the TCR (1 – 2) is not so different to the IPCC range that one would conclude that we really don’t need to worry.

249. Karsten,
Okay, we managed to overlap completely. I thought I would respond to Rob since no one else had, and got two at the same time 🙂 Thanks.

250. Christian says:

Sorry for being away from here for a while. I have written my own heat-uptake model, tested it against Gregory et al. (2013) with a good match, and tested it on LC14, where the match is also good.

If you want to know how and why LC14 is wrong, and you understand a little German, you can look here; it’s too hard to translate today.

The approach of LC14 is wrong:

“However, the CCSM4 model has TCR and ECS values of 1.8 K and circa 3.0 K that are some 35–85% higher than the best estimates for those parameters arrived at in this study. We therefore take only 60% of the base period heat uptake estimated from the Gregory et al. (2013)”

That’s a big mistake, because I have shown that this reduces the heat uptake but not the heat-uptake efficiency (which is set by the response delay). If they argue that the heat uptake is lower than recorded while holding the response delay constant, it would mean that the forcing is also lower, because the heat-uptake rate is unchanged, and you cannot get a lower heat uptake if the forcing and the uptake rate stay constant.

In other words: the timespan is constant, and they reduce the heat uptake by an adjustment but do not translate that adjustment into their forcing.

It’s very simple:

You have 10 pieces of X (the forcing) and over the next 10 days (the timespan) you lose 1 piece of X per day (the heat uptake); that means you lose 10% per day (the heat-uptake efficiency) relative to your 10 pieces.

What does LC14 do? They say that the piece you lose every day is not 1, but 0.6.

If you look at Table 3 of LC14, ΔF = 1.98 W/m^2 for 1859–1882 compared to 1995–2011, and ΔQ (Table 2) for 1995–2011 alone is 0.51 W/m^2, because they said they had to adjust the product of 0.47 W/m^2/mm and the thermal expansion based on Gregory et al. (2013). But they have forgotten that the Gregory et al. (2013) model would have to push the forcing down to match the LC14 results, because the 0.47 W/m^2/mm is based on the recorded heat uptake, and if you hold the forcing constant you cannot get the LC14 values.

I have tested this with my own model. For 1995–2011 I get 0.8 W/m^2 based on thermal expansion and 0.47 W/m^2/mm. If I reduce that by a factor of 0.6 I get 0.48 W/m^2, which is very close to LC14, bearing in mind that I used my own model. And this model works, because I get results close to the Gregory et al. (2013) thermal expansion M (my model is likewise just a mean).

Or, in comparison:

Gregory et al. (2013) thermal expansion M:
1901-1990: 0.44 mm/yr
1901-2000: 0.47 mm/yr
1901-1970: 0.36 mm/yr
1971-2005: 0.92 mm/yr

My model:
1901-1990: 0.47 mm/yr
1901-2000: 0.52 mm/yr
1901-1970: 0.38 mm/yr
1971-2005: 0.89 mm/yr

I don’t understand why the reviewers haven’t seen this.

251. Christian says:

And if anyone understands what I want to say, we can close the discussion of LC14.

252. Christian,
I’m not quite following your whole argument, but as I understand it you’re suggesting that their system heat uptake estimates are wrong. They certainly appear to be on the low side.

253. Rob Nicholls says:

Karsten and ATTP, thank you very much for your answers to my questions. Really helpful and much appreciated.

254. Actually, I’ve just realised that what Karsten has said and what Christian has suggested are probably consistent. Models with ECS values of around 3 degrees can match the measured system heat uptake rate.

255. Christian says:

Hi ATTP,

Yes, it’s wrong because the ratio of heat uptake to radiative imbalance depends on time. So LC14 use a thermal-expansion value which is tied to this time-dependent heat-uptake rate.

If you now say “we take this heat uptake over a timescale (10 years, etc.) and lower it by a factor of 0.6”, you make a mistake, because the heat-uptake rate also depends on the forcing (as explained above).

That means the thermal-expansion value you got from the model is no longer related to the forcing and the time-dependent heat-uptake rate, because by adjusting you have implied that the heat uptake over your timescale is not what the model says. That is impossible: your forcing is the same and the heat-uptake rate is the same, so the model says it is not your value, because if I take your value I am unable to get the correct thermal expansion (it will be reduced, because you have reduced the heat uptake).

The only way the model can give you the correct thermal expansion (the value you first used) is to decrease the original forcing and at the same time make the thermal expansion more sensitive to radiative forcing. Then the thermal expansion is the same as the one you first used, and your downscaled heat uptake is predicted.

So what do LC14 do? The forcing is not reduced, and the heat-uptake rate is tied to it; that clearly means their “heat uptake” is not consistent with the thermal-expansion values they themselves used, because the “heat-uptake rate” is not really set by the ECS but by the radiative imbalance and time (the time delay).

And that is why LC14 have a low “heat uptake” relative to the thermal expansion of Gregory et al. (2013).

256. Eli Rabett says:

Pekka, now some, not Eli to be sure, might see a difference between someone who says, let’s see what the most optimistic assumptions we can make are to find a lower limit and another who makes the most optimistic assumptions and claims that they are middlin to pessimistic.

257. Right now I’m trying to figure out what Christian is trying to say. I checked also his postings at wzforum (I understand German), but even that didn’t make the details clear enough. There are too many gaps for me to connect through.

Relationships of the nature he’s discussing are not clear enough to me without equations, where all variables are well defined. I might need also links to justify those equations, if they are not all obvious without.

258. Some, not me to be sure, might argue that Eli has got a point there (hope the bunnies aren’t too annoyed that I am plagiarising the rabbit-style so shamelessly … will remain the exception, I promise :D)

Anders,
my point was indeed that you have to test what the pseudo-ECS in a GCM would be, assuming that you wouldn’t actually have the chance to run the model until it reaches equilibrium. All you’d know is current ocean heat uptake and temperature change which allows you nothing more than to estimate ECS in an extremely crude fashion. The model might well indicate a pseudo-ECS of 2K, but in fact it equilibrates at 3K after half a millennium. As long as GCMs get the current rate of ocean heat uptake (ballpark) right, they can’t be terribly wrong. One might test how OHC change varies from model to model based on ENSO behaviour. I don’t remember having seen any paper which investigated this issue though.
Btw, somewhere someone in this thread raised the volcano issue, related to the apparent gap between the modelled and observed volcanic (short-term) response, or in other words that the models seem to be too sensitive to volcanic forcing, which in turn led NL to conclude that a scaling factor (was it 0.6 again?) is required. Thing is, not only was the volcanic forcing adjusted slightly downwards, but you’ve got to pay attention to the ENSO phase during and after any eruption while investigating the volcanic response. If you regress ENSO, volcanoes and surface temperature, this is what you get:

Notice something? Right, the models aren’t much too sensitive after all. Well, just saying …

Pekka, Anders,

Christian’s point kind of relates to mine indeed. If you were to scale the model sensitivity (as NL did in his and Alex Otto’s paper), you can’t reproduce the observed thermal expansion anymore, as it follows from applying Gregory’s method (which combines forcing, OHC and thermal expansion). Note, however, that I didn’t reproduce either method, so there is still a chance of oversimplification on my part. But from what I gather of Christian’s thoughts over on the WZ thingy, I’m probably not too far off.

259. KarSten,
Do I understand correctly that this argument says that the approach chosen in the LC paper to estimate heat uptake in the initial period is not self-consistent? That would leave them without any method to make the estimate.

As the period was probably both still influenced by the LIA and preceded by more volcanic activity, it seems to me very likely that there was some heat uptake over that period, but this argument does not tell us its strength. Even without more knowledge about the strength, the potential error that they could have made on that point seems to be limited.

They may have several factors that contribute in the same direction, and add up to a more significant total, but it does not seem clear how far that’s the case.

260. Well it looks as if I am finally banned at Climate Etc after several years of commenting.

Apparently Curry can’t take being called out on boneheaded mistakes, such as the incorrect use of Bose-Einstein statistics, and all her other recent flubs.

261. WHT,
Well, I guess it had to happen at some time 🙂

262. Michael says:

WHT,

Sometimes Judith does ‘permanent moderation’ – which is somewhat similar in effect.

263. If you can stand it, keep up the fight Michael.

264. Michael,
Indeed, keep it up if you can. In fact, I think I’ve linked to one your comments on Climate etc. I found it quite amusing.

265. Paul S says:

An addition to my earlier point about estimating aerosol forcing, rather than sensitivity, from the presumed inconsistency. Even if we were to assume the EBM is 100% correct about this inconsistency, why should we expect two highly uncertain, independently investigated factors to have consistent central estimates at any given moment in time? Isn’t this why we have uncertainty ranges?

There are actually a number of such inconsistencies in the report. In relation to heat uptake rates, the 1993-2010 sea level budget is closed in that uncertainty ranges overlap, but the central estimate from bottom-up accounting is short by 0.4mm/yr. If we assume all of that shortfall is due to underestimation of thermal expansion, which is possibly reasonable given known coverage biases in the observation network, 1993-2010 thermal expansion becomes 1.5mm/yr which is indicative of a TOA imbalance of about 0.7W/m2, quite a bit higher than the 0.51W/m2 used by Lewis and Curry for 1995-2011.
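Paul S’s conversion step can be sketched as follows. The 0.47 W/m^2 per mm/yr factor is the expansion-efficiency value attributed to Lewis and Curry in this thread; the 1.1 mm/yr bottom-up value is simply the number implied by adding the 0.4 mm/yr shortfall to reach 1.5 mm/yr, not a figure quoted directly:

```python
# Sketch of the sea-level-budget arithmetic: attribute the 0.4 mm/yr
# closure shortfall to thermal expansion, then convert the expansion
# rate to a TOA imbalance with the 0.47 W/m^2 per mm/yr factor.
W_PER_MM_YR = 0.47   # W/m^2 of heat uptake per mm/yr of thermosteric rise

bottom_up = 1.1      # mm/yr, implied bottom-up expansion estimate
shortfall = 0.4      # mm/yr, 1993-2010 budget-closure gap
expansion = bottom_up + shortfall        # -> 1.5 mm/yr

imbalance = expansion * W_PER_MM_YR      # ~0.7 W/m^2, vs LC14's 0.51
print(f"{imbalance:.2f} W/m^2")
```
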

There is also what seems to me an inconsistency in method. In the aerosol-cloud forcing chapter a longwave adjustment of +0.2W/m2 is added to observational estimates, which seems to have played a significant role in reducing given total aerosol forcing to -0.9W/m2. Leaving aside whether or not this magnitude is justified, the longwave value comes from model fixedSST experiments in which a large part of the positive longwave TOA flux comes from the Planck response to aerosol cooling of land and atmosphere (which are allowed to adjust). It makes sense to add this adjustment to observational estimates for comparison to model estimates which intrinsically include it. However, this Planck response would occur with all other forcings as well, yet I can’t see that an equivalent adjustment has been applied to any of them. WMGHG forcings simply adopt the RF calculation as a central estimate and implement the adjustment concept as uncertainty. This surely amounts to a biasing of the central net forcing estimate.

266. Christian says:

Pekka,

I realize that you are not the only one who can’t make the connection to my point. Karsten is close to it, because he recognized that the models reproduce the measured temperature and OHC well.

Why is this so? OK, I’ll try to close the gap.

First, we have to realize that the equations of LC14 or Otto et al. (2013) are linear. That implies that the heat-uptake rate is linear in the forcing. If we look at their equations:

1) ECS = F(2x)*ΔT/(ΔF-ΔQ)
2) TCR = F(2x)*ΔT/ΔF.

So, based on their method, LC14 found these values (for the base period vs. the final period 1995-2011; see Tables 3 and 4):

ECS: 1.64 K
TCR: 1.33 K
ΔT: 0.71 K
ΔF: 1.98 W/m^2
ΔQ: 0.36 W/m^2
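As a sanity check, plugging these central values into the two energy-budget formulas reproduces the quoted sensitivities. This is a sketch assuming F_2xCO2 ≈ 3.71 W/m^2 (the AR5 value), which is not stated in the comment itself:

```python
# Recompute LC14's central estimates from the energy-budget formulas
# TCR = F2x * dT / dF  and  ECS = F2x * dT / (dF - dQ).
F2X = 3.71   # W/m^2 per CO2 doubling (assumed AR5 value)
dT = 0.71    # K, temperature change
dF = 1.98    # W/m^2, forcing change
dQ = 0.36    # W/m^2, heat-uptake change

TCR = F2X * dT / dF          # ~1.33 K, matching the quoted value
ECS = F2X * dT / (dF - dQ)   # ~1.63 K, close to the quoted 1.64 K
print(round(TCR, 2), round(ECS, 2))
```

The small residual on ECS presumably comes from rounding of the quoted inputs.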

Now we stress their method and ask what ΔT and ΔQ are if we assume that the forcing F at all times was twice as high as we have measured, or only half of what we have measured, while TCR = 1.33 K and ECS = 1.64 K still hold.

I think it is clear to everyone that ΔT and ΔQ rise or drop in proportion to that assumption. The same should hold in an OHC model, and we can check this:

First I use the RCP historical forcing and make three scenarios:

1) assume the forcing is only half of what we have measured
2) assume the forcing is what we have measured
3) assume the forcing is twice as high as what we have measured

I test these three scenarios over the full time (1765-2005) and look at the heat uptake for the period 1990-2005. The heat uptake is given by the last value minus the first value (2005 minus 1990):

1) ΔQ = 0.27 W/m^2
2) ΔQ = 0.55 W/m^2
3) ΔQ = 1.11 W/m^2

(the small deviations from exact proportionality are due to rounding)
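The proportionality being demonstrated here follows from linearity, and can be illustrated with a minimal one-box energy balance model. This is my own toy sketch, not Christian’s OHC model; the heat capacity, feedback parameter and ramp forcing are arbitrary illustrative values:

```python
# Toy one-box energy balance: C dT/dt = F(t) - lam*T, with heat
# uptake Q = F - lam*T. Because the equation is linear, scaling the
# whole forcing history by k scales the heat uptake by k as well --
# which is the point of the three scenarios above.
def final_uptake(scale, years=240, C=8.0, lam=1.3, ramp=0.01):
    """Integrate with a ramp forcing F(t) = scale*ramp*t (W/m^2) and
    return the heat uptake Q = F - lam*T at the final step."""
    T = 0.0
    for t in range(years):
        F = scale * ramp * t
        Q = F - lam * T   # radiative imbalance before this year's update
        T += Q / C        # one-year Euler step
    return Q

q_half, q_one, q_double = (final_uptake(s) for s in (0.5, 1.0, 2.0))
print(q_one / q_half, q_double / q_one)  # both ratios ~2
```

Halving or doubling the forcing history halves or doubles the final uptake, with no change to the uptake "efficiency" (the lag structure), which is what makes scaling the uptake alone inconsistent.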

So now we can say that the equation works, and now we can uncover the error of LC14. They adjust the “heat uptake” by 0.6, where the heat uptake is derived from the CCSM4 model (as thermal expansion) and the sensitivity of thermal expansion to forcing (LC14: 0.47 W/m^2/mm).

First we look at what the forcing is for 1990-2005 in the three scenarios; then we downscale the “heat uptake” by 0.6 and ask the model what forcing for 1990-2005 would be consistent with the lowered heat uptake in each scenario (forcing is given as the mean over 1990-2005).

Original:

1) ΔF = 0.83 W/m^2
2) ΔF = 1.68 W/m^2
3) ΔF = 3.36 W/m^2

With the heat uptake scaled by 0.6:

1) ΔF = 0.50 W/m^2
2) ΔF = 1.01 W/m^2
3) ΔF = 2.02 W/m^2

What you now realize is that pulling down the “heat uptake” by an adjustment is incorrect, because the “heat uptake” is not a function of the ECS, but of time and forcing.

Pekka, I hope this helps you connect with what I want to say.

267. I continue to have difficulties in understanding this issue.

As far as I understand, we are discussing the point where L&C use one single result from an AOGCM, and the particular result they use is an estimate of the heat uptake of the initial period (1859-1882 in the base case). The calculation they use is that of Gregory 2013. They convert the result expressed as a change in sea level to a heat uptake, getting the value 0.26 W/m^2. If they had used that value, their analysis would have given even lower values for the sensitivities, but they multiply the value by 0.6, getting 0.15 W/m^2 (with standard error 0.075).
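The arithmetic of that adjustment step, as described here (the 60% factor is LC14’s; the rest is just multiplication):

```python
# LC14's base-period heat-uptake adjustment as described above: take
# the Gregory et al. (2013) model-derived value and keep only 60% of it.
q_gregory = 0.26            # W/m^2, converted from modelled sea-level change
scale = 0.6                 # LC14's sensitivity-ratio justification
q_base = q_gregory * scale  # 0.156 W/m^2, quoted as ~0.15 (std. error 0.075)
print(f"{q_base:.3f} W/m^2")
```

The quoted 0.15 presumably comes from applying the factor to unrounded inputs.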

Nothing else in their analysis uses AOGCM results as far as I have understood. Thus the question is, whether an AOGCM based value can be used here, and whether the value should be modified, when it’s known that their resulting sensitivities are lower than in the model used by Gregory. Their logic seems to be that the heat uptake is an outcome of the model given the forcings that are used in the calculation, and that the uptake would have been smaller if the model would have been consistent with their sensitivities.

I still fail to see, how the argument of Christian contradicts this logic. I also fail to see, what would have been a more appropriate choice, when we remember that this particular value is surely so small that it cannot make the final result very different.

This is just one detail. There are other issues that have been presented about their analysis, but it should be possible to reach an agreement – or a well stated disagreement – on this point independently of the other issues.

268. Pekka,

the problem in doing so is that if the same method (Eq. 2 at the beginning of this post) is applied to the most recent period (1990-2005), you’d be left with an inconsistent “real world forcing” (if you were to plug in LC14’s final numbers after adjustment for ΔQ, ΔT and ECS). Trouble is, the thermal expansion in CCSM4, which eventually yields the ΔQ, is in perfect agreement with observations (and so it was for ΔT for the 1990-2005 period). Of course you can still ignore this fact by brushing it off as an unreliable AOGCM product, but that would be an extremely foolish thing to do given that the model is actually spot on for the period in question.

Paul S,

have you investigated the papers which they’ve cited in AR5 to justify the LW adjustment in more depth? From what I’ve gathered, there wasn’t much pointing into the desired direction (sure, there is a positive LW forcing, but rather small, that is). I assume that your thoughts (re Planck feedback) are based on the general model setup as outlined in some of these publications.

ATTP: Watch out RealClimate! They will have something up on the subject very soon 😉

269. KarSten,

I understand that it would lead to inconsistency, if it were used at both ends, but the idea is that it’s used only, when no alternative is available and only where the effect is rather small – and where it increases the calculated sensitivities.

270. Pekka,

I might add (in case it wasn’t obvious): By virtue of scaling the model derived ΔQ, all LC14 did is to invalidate the AOGCM. For the 1990-2005 period, the model would simply be inconsistent with observations … by a lot!

271. KarSten,
How far their sensitivities can be considered consistent with all evidence of different nature is a separate issue, and so is even the value of this particular approach. There are certainly many other climate scientists who consider these kinds of analyses a useful addition to the set of methods to constrain sensitivities. Therefore this paper should be judged based on its merits in performing its stated objectives.

As so often, a single research result is misused in policy-related discussion. That’s unfortunate, but errors in that must be fought in those arenas using arguments relevant to them. If no real errors can be shown in the paper, claiming otherwise with the help of misinterpreting the paper is not the right solution. So far I haven’t seen evidence of any real error in the paper, or evidence that its technical assumptions are biased.

272. Pekka,

as I said earlier, for what the paper claims to be doing it is kind of sound.

BUT, for me personally there is no denying that (1) Cowtan & Way is superior to HadCRUT4 (btw, confirmed by BEST just recently), (2) the aerosol forcing in LC14 is underestimated, and (3) the interhemispheric difference in aerosol forcing is at least an issue, one which would most likely tend to increase TCR.

AND, ECS is just meaningless as far as any observationally based estimate is concerned; plus, it is reasonable to argue that ECS doesn’t play a role in decision making whatsoever anyway. All that counts in the medium term is the transient response, since all relevant policy decisions (framed in terms of carbon emission targets) have got to be based on TCR or TCRE (see Allen and Stocker 2013), given that the 2°C temperature threshold is a function of the transient response.

So even if their low TCR of 1.3 should be true (unlikely as it is), the big picture isn’t changed at all, given that the confidence range has not changed in any noticeable way. That’s the take-home message here. Even if you make the boldest and most optimistic assumptions (endorsed by evidently non-mainstream authors), you’ve got to deal with the medium-term effects, whether your target is reached 5 years later or not (which is the difference between the mainstream and LC14 we are talking about).

273. Christian says:

Pekka,

LC14 use thermal expansion as the reference for their heat uptake in all periods except 1971 to 2011, as you can see in Table 2 of their paper. If they didn’t use it, I would have no chance of reproducing their values with an OHC model; see again my post from September 30, 2014 at 3:17 pm. And note that the model I have written is very close to Gregory 2013.

You Tell:

“Their logic seems to be that the heat uptake is an outcome of the model given the forcings that are used in the calculation, and that the uptake would have been smaller if the model would have been consistent with their sensitivities.”

And that is the flaw. Forcing is largely an output of the emissions given to the models, and the model’s feedbacks (together with non-CO2 emissions like methane) determine what the model says about radiative forcing.

Or try a simple test:

We use CMIP5 RCP8.5. That means all models in this scenario get the same emissions; in RCP8.5 these are given as a CO2-forcing equivalent (i.e., emissions such as methane are included). For RCP8.5 it peaks at around 2500 ppm near the year 2200.

So next we look at two models with different warming and compare their surface downwelling longwave radiation (because it is a good indicator of forcing; R² = 0.992 over 1861 to 2100).

I use two Models:

1) ACCESS1-0 rcp85 tas

2) FGOALS-g2 rcp85

And their increases in temperature from 1850 to 2100:

1) 5.4 K
2) 4.3 K

It is clear that ACCESS1-0 has the higher sensitivity. And their increases in surface downwelling longwave radiation:

1) 34 W/m^2
2) 28 W/m^2

If you don’t believe it, you can check it here: http://climexp.knmi.nl/selectfield_cmip5.cgi?id=someone@somewhere and http://www.pik-potsdam.de/~mmalte/rcps/

So that clearly implies what I have written above: a lower model ECS is caused by lower forcing in the model. It’s a bit confusing, because we estimate the ECS from the forcing of CO2 alone (around 3.7 W/m^2 per doubling).

In other words, we ask how sensitive the climate is to a CO2 forcing from doubling (3.7 W/m^2).

274. Anders,

given that RC has just put up a posting by Stefan, I guess the one I’ve been referring to a bit earlier (which wasn’t this one by Stefan … in order to avoid confusion) will not be up as soon as I thought it would. But it is in the pipeline …

275. WebHubTelescope says:

So far I haven’t seen evidence of any real error in the paper, or evidence that its technical assumptions are biased.

You have got to be kidding us (?). Lewis and Curry essentially missed out completely on how to use knowledge of the ocean versus land warming as a discriminating factor. The fact that some portion of the ocean’s heat flux gets redirected to the land has huge implications for the analysis. So not only do they not understand how fat-tail thermal diffusion works but they completely overlook the biggest uncertainty in how to apportion the temperature change.

http://contextearth.com/2014/01/25/what-missing-heat/

Lewis and Curry essentially score an #OwnGoal by putting forward a bottom-range baseline which we know is higher because of the factors that they got totally wrong.

Curry is committing a similar #OwnGoal with her Stadium Wave thesis, not realizing that her multidecadal “wiggly-looking oscillation” is setting a baseline of +/- 0.1 C on top of the large secular warming trend.

We really need these people, eh (?)

276. Christian says:

When surface downwelling longwave radiation is correlated with forcing at R² = 0.992, how can you say that the forcing is untouched when you run the model with a lower climate sensitivity?

Or look at the sensitivity of both models as the ratio of their temperature response to their surface downwelling longwave radiation response.

You get:

ACCESS1-0: 0.158 K/(W/m^2)
FGOALS-g2: 0.153 K/(W/m^2)

(there is a small mismatch because I rounded the values above)
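These near-identical ratios can be checked in a couple of lines. A quick sketch using the rounded values quoted above, which is why the last digit differs slightly from the 0.158/0.153 figures:

```python
# Rough consistency check of the warming per unit increase in surface
# downwelling longwave radiation, using the rounded 1850-2100 increases
# quoted above for the two CMIP5 RCP8.5 runs.
dT  = {"ACCESS1-0": 5.4, "FGOALS-g2": 4.3}    # warming, K
dLW = {"ACCESS1-0": 34.0, "FGOALS-g2": 28.0}  # downwelling LW increase, W/m^2

for model in dT:
    print(model, round(dT[model] / dLW[model], 3), "K/(W/m^2)")
```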

I hope you get it now.

277. Christian,

If you direct the question to me, my answer is that they did not run any model beyond the defining formulas. That’s the nature of the paper. You may consider that of little value, but that’s the case and they tell it clearly. The paper was accepted for publication as such.

Similarly to WHUT. Scientists study restricted problems in each paper. Another study might look at the land-sea effects.

It could have happened that the reviewers and the journal had told that there’s too little of new scientific value in the paper, and to reject it. That didn’t happen.

The list of things that could have been included is long. Many other proposals have been made referring to Shindell, Cowtan & Way, etc. All that would have been possible, but the choice was theirs, and their choice was legitimate even if others would have chosen differently.

278. Christian says:

Pekka,

You say:

“If you direct the question to me, my answer is that they did not run any model beyond the defining formulas.”

That’s true, but as you said before:

“The calculation that they use is that of Gregory 2013.”

But Gregory did run a model, and LC14 argue:

“However, the CCSM4 model has TCR and ECS values of 1.8 K and circa 3.0 K that are some 35–85% higher than the best estimates for those parameters arrived at in this study. We therefore take only 60% of the base period heat uptake estimated from the Gregory et al. (2013) simulations.”

If you can’t understand that their adjustment can’t be right, I don’t know how to explain it.

279. Christian says:

And don’t forget, they do this for their final period as well!

280. Pekka,
of course, you can keep using your old iPhone 4 … but you’d rather not claim to have the better specs than those who’ve got the new iPhone 6 already.

281. Karsten,

I was serious, not only rhetorical when I wrote that the paper might have been rejected as containing too little new science. A paper that had included more on some of the issues listed might have been much better science.

282. WebHubTelescope says:

Similarly to WHUT. Scientists study restricted problems in each paper. Another study might look at the land-sea effects.

Are you being serious again (?)

What Lewis and Curry are doing is not a “restricted problem”. They are setting the standard (i.e. a dog whistle) for a lower climate sensitivity, which by definition has to include all known effects, otherwise what they are doing is completely misguided.

283. Pekka,
my comment wasn’t directed at you (only in reply to what you’ve said). Let’s rephrase: You (read: NL) might get the specs of your old iPhone 4 published in some very smart(-phonesk) Journal, but you’d rather not pretend to be on par with those who’ve happened to be in possession of the new iPhone 6 already. The geek squad might laugh at you (read: NL) for very good reason.

284. Frank says:

The equations used for calculating TCR and ECS are incomplete because they don’t take into account unforced temperature variability. Surface temperature can rise or fall during any period (El Nino, possibly around 1940) without an anthropogenic forcing being responsible. During the current hiatus, we may (or may not, according to some) have an increase in forcing without a change in temperature, which would be due to unforced cooling. Expressing this mathematically, the observed change in T (dT) is the sum of the forced change in T (dTf) and the unforced change in T (dTu).

dT = dTf + dTu

TCR = F_2x * dTf / dF = F_2x * (dT- dTu) / dF

If only half of the warming since 1950 were due to man, then dTf = dTu = 0.5 * dT, which is mathematically equivalent to saying that TCR is about 0.65 degC (likely 0.5-0.9 degC). That may be a problem for Judith, but it is a worse problem for those who think the IPCC’s attribution statement implies something important about future climate change. Fortunately, Curry and Lewis explicitly state: “Both equations (1) and (2) assume constant linear feedbacks, and that [dT] is entirely externally forced.” ie dTu = 0.
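Frank's decomposition is simple to put into numbers. A minimal sketch: F_2x is the commonly used 3.71 W/m², while dF and dT are illustrative round numbers chosen to give a TCR near the paper's 1.3 °C; they are not values taken from LC14.

```python
# Frank's point: the energy-budget TCR assumes all observed warming dT is
# forced. If part of it (dTu) is unforced, only dT - dTu counts.
F_2x = 3.71   # W/m^2, forcing from a doubling of CO2 (commonly used value)
dF   = 1.98   # W/m^2, illustrative change in forcing, base to final period
dT   = 0.71   # K, illustrative observed temperature change

def tcr(dT_obs, dT_unforced=0.0):
    """TCR = F_2x * (dT - dTu) / dF, i.e. only the forced part counts."""
    return F_2x * (dT_obs - dT_unforced) / dF

print(round(tcr(dT), 2))            # all warming forced: the usual estimate
print(round(tcr(dT, 0.5 * dT), 2))  # only half forced: the estimate halves
```

With these illustrative inputs the all-forced case lands near 1.3 °C, and assuming half the warming is unforced simply halves it, which is the arithmetic behind the "about 0.65 degC" figure above.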

So how can Curry disagree with the IPCC’s attribution statement and then calculate climate sensitivity assuming that all change is forced? Careful readers may remember that Judith only objects to assigning at least half of all warming to anthropogenic forcing WITH >95% CERTAINTY, and that the IPCC derives their certainty from climate models with an average TCR of 1.8 degC, not the 1.3 degC calculated from energy balance models.

Both L&C (and Otto) take dTu into account by including it in the uncertainty of dT. They calculate dT from the average temperature over roughly two decades to minimize the impact of short-term unforced variability and do so for several different time periods, which would uncover large errors due to longer-term unforced variability.

The important question is whether the pdf for TCR in this paper multiplied by the pdf for forcing affords a pdf to anthropogenic warming that assigns more than 50% of observed warming since 1950 to humans with >95% certainty.

285. Paul S says:

Karsten,

have you investigated the papers which they’ve cited in AR5 to justify the LW adjustment in more depth?

No papers were cited, which makes that difficult 😉 They just mention an assessed model range of +0.2 to +0.6W/m2 and state that they use the lower end, presumably as a nod to scaling against the relatively small assessed shortwave forcing.

They have a table showing forcing results from a selection of what they believe to be the best models. Looking at the papers I can find total net longwave figures from 4 of those 7 studies, with a range of -0.32 to +0.31, mean +0.1 with a median of +0.2 probably being more representative. I think one or two of these results represent 1850-present forcing rather than 1750 so possibly a small underestimate compared to IPCC timescale.

Zelinka et al. 2014 shows LW and SW flux breakdowns for some CMIP5 models, with LW range -0.23 to +0.95, mean +0.2 and median +0.1.

So, in terms of all the model results listed in the chapter I can’t see any support for a +0.2 to +0.6 range. The average seems to be around +0.1 to +0.2 and that’s in combination with SW fluxes which are greater than found by the IPCC assessment, where there is a good correlation (or rather anti-correlation) between SW and LW magnitudes. Plotting total net longwave against shortwave indicates the best fit for the IPCC SW estimate (which effectively must be -1.1) is a net LW effect of about zero. This assumes the Ari estimate of -0.45 doesn’t already include a LW component, which I think technically it should do since it was assessed independently. If so, I make it that an observationally-scaled LW effect should be slightly negative.

As a guess the +0.2 to +0.6 range may come from the Wang et al. 2011 paper. The base CAM5 model produces a LW effect of +0.54 whereas the MMF version which is the focus of the paper produced +0.26. With some rough rounding this could be +0.2 to +0.6.

286. verytallguy says:

Frank,

careful readers may remember that Judith’s most likely attribution is 50% natural, a conclusion clearly not supported by even this paper, a low outlier.

Whether the IPCC statement is supported by any single study is not “the important question”; the important question is whether the statement is supported by the balance of all the evidence available.

287. Christian says:

Final comment on LC14,

First: yesterday I oversimplified a bit, and after re-reading my comment I think it could lead to confusion. What I called “forcing” yesterday is really the internal radiation budget of the models under a given scenario.

That doesn’t make my result wrong; it was just too simplified.

To argue that a model is too sensitive means that the model’s internal radiation budget is incorrect compared to the forcing. So pushing down the sensitivity of the model to get a lower heat uptake would also reduce the internal radiation, because the heat uptake is related to it. Or you could say: the model can no longer give you the thermal expansion related to the forcing as before, because the internal radiation is reduced.

So LC14 now have the problem that the thermal expansion values they themselves take from Gregory et al. (2013) are inconsistent after their adjustment. Their adjustment would imply that OHC has risen by only 14×10^22 joules since 1957, and that is not their lower bound but their best estimate!

That’s my opinion on this, and why I don’t really accept the paper the way it is for now. Karsten also pointed out that models with ECS near 3 K can reproduce our measured increase in ocean heat content, which is another reason to have doubts about their adjustment.

288. Frank says:

VeryTallGuy: I was recalling Judith’s three part analysis of the AR4 attribution statement, which concludes:

“From this analysis, it seems that the AR4’s assessment of confidence at the very likely (90-99%) level cannot be objectively justified, even if the word “most” is interpreted to imply a number that is only slightly greater than 50%.” http://judithcurry.com/2010/10/24/overconfidence-in-ipccs-detection-and-attribution-part-iii/

The 50-50 argument she made much more recently (which I had forgotten and appreciate being reminded of) certainly appears to be inconsistent with LC14 (http://judithcurry.com/2014/08/24/the-50-50-argument/). Her argument is based on the disagreement between energy balance models (TCR about 1.3 degC) and AOGCMs (about 1.8 degC), a 28% discrepancy, which is still a lousy justification for saying that 33-66% of warming is likely to be anthropogenic. Unfortunately, Judith hasn’t taken into account that AOGCMs with an average TCR of 1.8 OVERESTIMATE the amount of warming from 1950 to the present (due to the hiatus) by about this amount, but this problem wasn’t a significant factor for the AR4 attribution statement (which may have covered only the period to 2000).

It certainly seems inconsistent for Judith to claim that the IPCC’s attribution statement could be flawed because unforced variability might be higher than models projections and then turn around and use a low value for unforced variability in this derivation of TCR. (I’ve tried to point this out several times at her blog in the past.) She should either: a) use a larger dTu in the uncertainty in dT for this calculation or b) multiply the pdf for the TCR in this paper by the pdf for forcing (for the attribution period) and get a pdf for forced warming. Unless there is something very unusual about the periods used for attribution, observed warming will be very close to forced warming. There will be some tail likelihood that forced warming was half or less the observed warming, but I suspect it will be less than 10%.

289. Paul S,

guess I was thinking of the two (as of the AR5 draft version) Storelvmo (2008, 2010) and the Ghan (2012) papers which were mentioned in the context of LW forcing in chapter 7.5.3. But no reference was provided for the actual number (other than mentioning that it is taken from modelled LW effects). IIRC, I also tried to skim through all the modelling papers referenced in order to find some LW numbers, but I didn’t come up with much that would have supported their range either (just like you).

Thanks btw for the Zelinka (2014) reference. I must have gone blind over summer to have missed that one. I truly didn’t see it before. That’s quite some useful resource.

Your summary of all the available numbers makes perfect sense to me. I can live with their (AR5) central estimate though, albeit with reservations as to what the temporal evolution of the effective forcing might have looked like since 1950. The aforementioned Kuehn et al (2014) paper gives away some clues in that regard, except for (1) a potential minor scaling issue and (2) the fact that they are only dealing with the brightening period after 1990.

Your conjecture about Wang et al (2011) sounds plausible too (and to be honest, based on what I gathered from a few colleagues, our notion of a minimal degree of dodginess re LW forcing doesn’t seem to be completely unfounded ;-))

290. I noticed that James Annan has now commented on Lewis and Curry. He presents views that I can easily accept (but I’m a bit worried that I may be too eager to accept those views).

He’s also discussing the possibility of using climate models to study the validity of the approach. That has been discussed here as well. I would imagine that what would be needed is a set of climate models (a set, because one may too easily be biased) tuned to be consistent with the warming trend over the last 150 years or so, and then used to determine ECS. To me ECS alone is not very interesting, as I think that the rate of approach to the equilibrium value is of essential significance for the policy conclusions. As AR5 seems to indicate, TCRE may be the best single number to offer guidance for policies. That’s at least how I see it. Seeing the rate of approach towards equilibrium helps in connecting TCR and ECS to TCRE.

291. Pekka,
I saw that. I thought the idea was to use the EBM method to estimate the ECS from climate model runs over the period 1880-2010 (or something like that) and to then compare that with the actual ECS of the model. I thought this was an interesting comment

And despite what some people might like to think, the slow warming has certainly been a surprise, as anyone who was paying attention at the time of the AR4 writing can attest. I remain deeply unimpressed by the way in which this embarrassment has been handled by the climate science insiders, and IPCC authors in particular. Their seemingly desperate attempts to denigrate anything that undermines their storyline (even though a few years ago the same people were using markedly inferior analyses of this very type to bolster it!) do them no credit.

I don’t really know the history well enough to have a sense of whether or not this is a fair comment. Given the typical tone of the debate, I have some sympathy with climate science insiders, but that probably just illustrates my own bias. I’m also slightly confused as I thought even AR4 acknowledged that there could be decadal variability, and so the surprise was the timing of the slowdown, rather than that it actually happened.

292. ATTP,

To the extent we use instrumental temperatures in the estimation of climate sensitivity, every year that falls below the best earlier projection (taking into account known forcings like volcanism and TSI, but probably not ENSO, as it’s not a forcing) lowers the estimate, and every year of higher temperature raises it. What’s known about internal variability affects the size of this correction, but not its sign or presence.

How to handle ENSO in that is a bit complicated as it’s an indicator of short term variability but at the same time an outcome of longer term variability, which changes the relative weight of El Ninos and La Ninas.

293. verytallguy says:

ATTP,

the following box was in WG1 SPM

For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected. {10.3, 10.7}

294. Tom Curtis says:

Anders, there is no doubt studies of this very type (eg energy balance estimates of ECS) were used to bolster the IPCC consensus value. They were not, however, “markedly inferior”. In general I would say they were superior to Lewis and Curry, although inferior to Otto et al 2013. Nor do I think any of the authors of those studies were rash enough to suggest that their study set an upper limit on ECS, nor to think that their methods rendered model-based and paleo-based estimates redundant.

295. Joshua says:

Anders –

As for this….

Their seemingly desperate attempts to denigrate anything that undermines their storyline (even though a few years ago the same people were using markedly inferior analyses of this very type to bolster it!) do them no credit.

I thought it was a bit vague and wondered if it wasn’t hyperbolic. For example, I’m not sure what he’s describing as “desperate attempts to denigrate” L&C14 from “climate science insiders.”

296. VTG,
Sure, I was just thinking of AR2 WG1, where on pg. 329 it has a section starting with

So-called “inherent” low-frequency variability in the climate system on decadal and longer time-scales can, for example, be induced by external sources such as volcanic activity and perhaps solar activity, and from internal sources such as alterations of the thermohaline circulation.

It was my understanding that the idea of decadal variability was accepted. The ability to precisely predict it, though, was not really possible at that time.

I would take it as hyperbole. The most that I have perceived is a tendency to promote papers pointing in one direction over those pointing in the opposite one, and even that perception may be wrong.

Active scientists may have met signs of such attitudes, when their own papers or other papers they know well are discussed while outsiders are ignorant of that.

298. BBD says:

I’m pretty sure that if there wasn’t a vociferous denial movement hammering the non-existent “pause” as if it overturned everything we know then “climate science insiders” may have been less defensive.

Over and over again I see this: scientists are abused and misrepresented by the deniers who then blame scientists for being defensive or blunt (eg. Mann).

It is a vile hypocrisy.

299. BBD,
I agree with your assessment. Many issues in climate science would probably have been discussed differently without the pressure of the policy-related fight.

There’s one factor that may have had an opposite effect: the total extent of interest in climate science. With less interest there would also be less public discussion, but what remained would probably be more open.

300. Tom Curtis says:

Anders, “decadal” variability can be taken to mean variability with a characteristic period of 10 years. The characteristic period of ENSO is about three years, and hence it should probably be called sub-decadal. That is well accepted (I think). Anything above that has yet to be shown both to have a significant impact on global temperatures and to not be forced. However, I cannot comment on the understanding in 1994-5 (AR2?).

301. verytallguy says:

ATTP,

yes, I wasn’t at all trying to suggest that internal variability wasn’t an accepted concept or uncertain in prediction.

Just saying that some parts of the AR4 SPM (such as the one I quoted) can be read as claiming variability is not significant, even on a decadal scale. These sorts of statements are the ones JC has focussed on.

It doesn’t come across that way if you read the SPM in full, eg the wide ranges in table SPM.3 or the wiggles in fig SPM.5

302. Paul S,

one more thing perhaps, which I forgot to mention. If Kuehn et al 2014 are on the right track (which I think they are), I can’t see how this can be easily reconciled with Figure 8.20 in AR5 (radiative forcing estimates for 1980-2011). I consider it very likely, that the negative total aerosol ERF during this period was actually a slightly positive one (up until 2000 at least).

It’s also obvious from these figures why the accelerated temperature rise in the 1990s had to slow down a bit at some point. Granted, GMST trends can be perfectly explained by ENSO, solar and volcanic forcing variability alone (which is why I am indeed utterly unimpressed and certainly not surprised by a few years of less warming at all … apparently in contrast to some more senior colleagues for some reason), but there is probably more underlying forcing variability than most people have previously thought. Still others seem to think it’s all down to the Atlantic (*yawn*).

AR4 definitely says that internal variability exists, and indicates that it’s not very weak, but what has taken place since was not predicted; I’m pretty sure a development like we have seen would have been assessed as very unlikely, if not virtually excluded.

I have been wondering even more recently why many climate scientists want to predict that the upturn is very close. I would myself expect a clear turn not too many years away, but to me it would be more prudent to emphasize that it might take even a little longer to be clearly visible. I think that it’s better to err a little to the low side in near-term predictions than to the high side. Then it’s possible to say, without stretching the facts, that the actual warming has exceeded the predictions.

I do not believe that near term predictions get many people to act. Therefore they should be conservative in at least presenting low lower limits. Longer term projections should be in accordance with the best present understanding. For them the range comes naturally from the scenarios (or emission pathways).

304. Pekka,

who are you referring to who wants to “predict” a turn up? Temperatures will rise, so much is clear, and only a minority in the community thinks that it will take more than 5 or 10 years until we are going to see new records. But I am unaware of attempts to predict anything beyond that. The fact that we are now in the longest period without El Nino conditions since 1951 (i.e. since regular observations are available) should give away some clue as to why a temperature upshot can’t be that far away, unless we have messed up the system enough to have effectively changed ENSO behavior already. That would be a surprise indeed. But either way, the next El Nino will inevitably be coming at some point.

305. verytallguy says:

Pekka,

I’m pretty sure a proposal of development like we have seen would have been assessed as very unlikely if not virtually excluded. [since AR4]

Really?

Ar4 was 2007, so data to 2006 was available then.

Here are the GISS temperature anomalies (in hundredths of a degree) since then:
2006 59
2007 62
2008 49
2009 60
2010 67
2011 55
2012 57
2013 61
i.e., for 8 years of data, we have a rising trend including a new record high.

I don’t think anyone would have excluded this as a possibility.
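An ordinary least-squares fit over those eight values (taking the listed anomalies at face value, in hundredths of a degree) confirms the rising trend:

```python
# Least-squares slope of the eight GISS anomalies listed above (2006-2013).
years = list(range(2006, 2014))
anoms = [59, 62, 49, 60, 67, 55, 57, 61]

xm = sum(years) / len(years)
ym = sum(anoms) / len(anoms)
slope = (sum((x - xm) * (y - ym) for x, y in zip(years, anoms))
         / sum((x - xm) ** 2 for x in years))
print(round(slope, 3), "hundredths of a degree per year")  # positive slope
```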

I cannot give exact quotes or names, but it has definitely not been uncommon to see statements that are at least perceived in that way. Probably less often very recently, but more often a year or two ago.

Much goes on based on perceptions; my comments are also often based on the perceptions left in my memory. They are real on that level; how real they are otherwise, I cannot guarantee.

307. Quiet Waters says:

Mojib Latif September 2009: “And then you see right away [that] it may well happen that you enter a decade or maybe even two, when the temperature cools relative to the present level. And then I know what’s going to happen. I will get millions of phone calls. “What’s going on? So is global warming disappearing? Have you l**d [to] us?””

deepclimate.org/2009/10/02/key-excerpts-from-mojib-latifs-wcc-presentation/

308. Paul S says:

I don’t think it’s controversial to say temperature evolution over the past several years has been a surprise compared to what people thought would happen circa 2007. The real question is, given the deep solar minimum, persistent La Nina pattern, should we be surprised that they’re surprised? 🙂

309. Paul,

That raises the question: if the temperature is not surprising given the persistent La Nina pattern, should we be surprised about the persistent La Nina pattern?

I have linked before to my extrapolation of the Foster & Rahmstorf comparison. The extrapolation includes TSI and MEI, but not volcanic forcing. Temperatures have fallen below the trend since 2011 (essentially as soon as their data stopped), but the gap has got smaller in the latest points.

310. Paul S says:

Pekka,

should we be surprised about the persistent La Nina pattern?

Well, that was actually my meaning, I took the temperature response as a given. Talking about it in simple terms of La Nina obviously becomes tenuous but studies like England et al. 2014 suggest the past 15 years are quite unusual in the context of the past century.

311. Paul S says:

suggest the past 15 years are quite unusual in the context of the past century.

‘quite’ seems an inappropriate word actually. England et al. 2014 suggests the period from about 2000-2012 was highly unusual in the context of the past century.

312. Robert Way says:

For me, reimplementing their estimates and then testing with CW2014 and BEST would be useful. As would testing the use of alternative OHC values, which I expect will be coming shortly, at least for some of the recent era.

313. Steve Bloom says:

Pekka, broadly speaking we should expect, and do seem to be seeing various early signs of, significant circulation changes. Some are obvious, e.g. the poleward shift of the atmospheric circulation —

(Pause here for a moment to note that this shift and its implications not being on every front page and at the top of political priority lists is the clearest possible evidence of how badly skewed our collective responses are.)

— and its various components, but for poorly-understood things like ENSO we will simply have to await more data. From what I’ve learned, especially given the observed changes in the Walker circulation, I’d be very surprised if ENSO were somehow exempted.

314. Christian says:

Robert Way,

I would disagree, because the best-known heat uptake (1995-2011) from IPCC data (if you apply their method) was 132 ZJ, which is equal to a forcing of around 0.8 W/m^2, not 0.51 W/m^2.

And this value of around 0.8 has to be the best estimate; best estimate and uncertainty have to be strictly separated.
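For reference, converting a total heat gain in zettajoules into a mean uptake rate in W/m² over the Earth's surface is a one-liner. This is a generic sketch, not LC14's actual calculation; note how strongly the result depends on the effective averaging period assumed, which is presumably where much of the 0.8 vs 0.51 disagreement hides:

```python
# Convert a total heat gain (ZJ) accumulated over some number of years into a
# mean uptake rate (W/m^2) averaged over the whole Earth's surface.
EARTH_AREA = 5.1e14          # m^2
SECONDS_PER_YEAR = 3.156e7   # s

def uptake_rate_wm2(heat_zj, years):
    return heat_zj * 1e21 / (years * SECONDS_PER_YEAR * EARTH_AREA)

# 132 ZJ over the full 17-year span 1995-2011 gives ~0.48 W/m^2; the same
# heat over an effective 10-year period gives ~0.82 W/m^2.
print(round(uptake_rate_wm2(132, 17), 2))
print(round(uptake_rate_wm2(132, 10), 2))
```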

The code and data they used are here: http://niclewis.wordpress.com/the-implications-for-climate-sensitivity-of-ar5-forcing-and-heat-uptake-estimates/

315. On their code I can tell that I downloaded the file (by mistake) twice and noticed that the code had changed between the downloads. The newer code worked on R x64 3.1.1 immediately, when data was placed in the sub-directory “data”. So far I haven’t done anything more with the code.

316. Christian says:

Pekka,

I think you haven’t used the code, because there is also a text data file of their heat uptake, and it is simple to see that there is much more heat uptake than 0.51 W/m^2 for the period 1995-2011 as the best estimate. Their value of 0.51 is equal to 82.3 ZJ, or 8.23×10^22 joules.

That is clearly too low to be the best estimate.

317. Christian says:

Sorry, that should have read “haven’t used”.

318. I just got it running, but didn’t use it at all. Neither have I studied the code or input. Two graphs that the run produced seemed to be the same that I have seen in the paper and in a post of Nic Lewis, but even that I didn’t verify in detail.

319. I would interpret the term “decadal variability” as the variability of the decadal means. That could either be the decades themselves or a running mean. ENSO is an important factor that influences these decadal means and thus a source of decadal variability.

HR says: “Presumably given their association with SkS one could maybe uncontroversially suggest that Cowtan and Way have a bias in the opposite way. I wonder, knowing this, if you hold their work up to the same scrutiny using the same logic you use for Lewis and Curry. A link to a comment by you expressing similar concern about C&W would be illuminating.”

HR, what do you think of my post on Cowtan and Way? Is it sufficiently balanced?

320. Christian said:

that is equal to a forcing of around 0.8 W/m^2, not 0.51 W/m^2.

It makes you wonder if the duo forgot to apportion the heat between land and ocean properly. If they didn’t factor in the earth’s surface as ~30% land … whoa, oops!

We worked this out here with a proportional land/ocean model (see the comment thread in particular)
http://contextearth.com/2014/01/25/what-missing-heat/

At the end, we decided on an OHC uptake of ~0.77 W/m^2, with an upper limit of 0.85 W/m^2.
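The apportioning issue can be illustrated with trivial arithmetic. This is a sketch of the general idea only, not the model in the linked post; the 0.51 W/m² figure is the LC14 value quoted in the thread, and 0.71 is the standard ocean area fraction:

```python
# A flux averaged over the whole Earth understates the flux per unit ocean
# area if essentially all of the heat actually enters the ocean.
OCEAN_FRACTION = 0.71        # ~71% of the Earth's surface is ocean

global_mean_uptake = 0.51    # W/m^2 over the whole Earth (LC14 figure)
per_ocean_area = global_mean_uptake / OCEAN_FRACTION
print(round(per_ocean_area, 2), "W/m^2 over ocean area")
```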

One of the things that bothers me about Curry is the way she treats her “denizens” with hidden disdain. She acts like the blog and the commenting is a two-way street, but in the end she just drops these “papers” of hers on her readership fully hatched. That is not the way to do things with respect to social media. If you really want to collaborate with people that want to help, do it out in the open and get real feedback. Perhaps she wouldn’t be so embarrassed by all the bone-headed mistakes (?) she has been making — the Bose-Einstein statistics fiasco, the half-baked Stadium Wave theory, and now low-balling the TCR. The boomerang is coming back hard because she did not vet any of her work prior to publishing.

Do you think someone like Lewis or Curry would ever join us in the http://azimuth.mathforge.org project, contributing open source ideas to work out natural variability phenomena such as ENSO? Fat chance, I would say.

321. Christian says:

WHT,

That sounds more realistic for the heat uptake; it is very close to what I get from my own OHC model (0.78 W/m^2) for 1995–2011. And this makes me doubt the heat uptake in their best estimate.

On the other points, I am not really in a position to have an opinion about NL or JC because I haven’t read their blogs or other material. As I have pointed out here before, I am not interested in talking about the interplay between climate science and the public; I can only have opinions on what they have published, and on that basis I can’t accept their heat-uptake calculations. And on your last point: if they have an interest in minimising uncertainty, they will join in or work it out elsewhere.

322. Frank says:

Chris, it seems to me that your view about heat uptake is similar to that of Trenberth: http://www.theguardian.com/environment/climate-consensus-97-per-cent/2014/oct/02/global-warming-battle-for-evangelical-hearts-and-minds — see in particular this sentence by Kevin: “The result is that the Lewis and Curry estimates are perhaps 50% too low, and their uncertainties are much too low.”

323. Marco says:

Well, with all the wonderful discussion on this thread, I *strongly* recommend some people sit together and write a paper with the estimates and the approach they consider more proper. And I mean this very seriously. I’ve seen some useful discussion, but let’s not fall into the same trap as most of the pseudoskeptics and keep this to a blog discussion.

324. Marco,
It’s an interesting idea. I would argue, though, that the difference between this and what I see on “skeptic” blogs is that here we’ve tried to discuss the paper constructively. I guess it’s also probably true that most climate scientists understand the issues with these types of estimates anyway.

325. Christian says:

Frank,

I am not really surprised that Kevin argues along the lines of my opinion, and for me there are two ways to come down to the LC14 value:

1. Taking their approach also for the final periods (most likely, because I can reproduce their values)
2. Playing with the uncertainty, which then has nothing to do with the best estimate but rather with their uncertainty itself (less likely, because I can’t fully reproduce their values)

Anyway, thanks for the link, and with that I have finished with LC14 (for now).

326. Christian says:

Marco,

I think not everyone here is a climate scientist, and I don’t see the benefit of writing a paper, because I think most scientists are smart enough to see the points we have figured out here.

And it would be a question of what the benefit is of writing a paper to spell out what is already clear to most of them. I have also read papers weaker than LC14 that were published. In other words: science won’t die if a few weaker papers get published.

But that’s just my opinion.

327. Marco says:

ATTP and Christian, fair enough, but sometimes this kind of stuff needs a ‘rebuttal’. Now Lewis & Curry can keep pointing the ‘general population’ at their paper, Lewis a little more so than Curry (who suddenly gets into trouble with her uncertainty).

If it is all so clear, why didn’t anyone publish something similar before? Or rather, why did Otto et al remain mostly uncontested, and now L&C, too?

328. Marco,
As I see it, the way to do this would be to apply the EBM method using all plausible choices of data and a variety of assumptions about internal variability and the system heat uptake rate in the 1800s. It would be quite valuable, but a not inconsiderable amount of work.
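The brute-force exercise described above could be sketched as a simple Monte Carlo over the energy-budget inputs. Every distribution below is a hypothetical placeholder (not the choice of Otto et al., LC14, or any other study); the point is only the shape of the calculation.

```python
# Minimal Monte Carlo sketch of the energy-budget (EBM) approach:
# sample the inputs from assumed uncertainty ranges and look at the
# resulting spread in TCR and ECS.  Every distribution below is a
# hypothetical placeholder, not the choice of any published study.
import random

random.seed(0)
F_2XCO2 = 3.71  # forcing from doubled CO2, W/m^2 (commonly used value)

def sample(n=100_000):
    tcr_draws, ecs_draws = [], []
    for _ in range(n):
        dT = random.gauss(0.75, 0.08)  # temperature change, K
        dF = random.gauss(2.20, 0.35)  # forcing change, W/m^2
        dQ = random.gauss(0.50, 0.15)  # heat-uptake change, W/m^2
        if dF - dQ <= 0:               # discard unphysical draws
            continue
        tcr_draws.append(F_2XCO2 * dT / dF)
        ecs_draws.append(F_2XCO2 * dT / (dF - dQ))
    return sorted(tcr_draws), sorted(ecs_draws)

def pct(sorted_xs, p):
    """Crude percentile of an already-sorted sample."""
    return sorted_xs[int(p / 100 * (len(sorted_xs) - 1))]

tcr, ecs = sample()
print("TCR 5-95%:", round(pct(tcr, 5), 2), "-", round(pct(tcr, 95), 2))
print("ECS 5-95%:", round(pct(ecs, 5), 2), "-", round(pct(ecs, 95), 2))
```

A real study would of course use observationally derived PDFs for each input (and different base/final periods) rather than these Gaussians, but the stdlib-only sketch shows why the resulting ECS distribution is skewed: the uncertain $\Delta F - \Delta Q$ sits in the denominator.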

I also think that the issue with the EBM-type models is that the problems they might have (non-linearities, inhomogeneities) are things we only know about through models, so you’ll always have people saying “but models…”. In any sensible scenario, these issues would be recognised and acknowledged by all. We’re not operating in a sensible scenario.

329. Saulius,
Yes, very much so. I believe that is suggesting quite a large underestimate of the rise in OHC over the last 35 years. If correct, then the $\Delta Q$ term in

$ECS = \frac{F_{2xCO2} \Delta T}{\Delta F - \Delta Q}$

would be much larger than the value used by LC14 and would bring the ECS estimate back up towards 2 degrees or even higher (considering that Cowtan & Way also suggest that $\Delta T$ might be 10% higher than the value used by LC14).
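To make concrete how sensitive this estimate is to $\Delta Q$, here is a minimal sketch of the energy-budget formula with round, hypothetical inputs (deliberately not the exact LC14 or Otto et al. values):

```python
# Energy-budget ECS as a function of the assumed heat-uptake change dQ.
# The inputs are round, hypothetical numbers for illustration; they are
# deliberately NOT the exact LC14 (or Otto et al.) values.

F_2XCO2 = 3.71  # forcing from doubled CO2, W/m^2 (commonly used value)

def ecs(delta_T, delta_F, delta_Q, f_2x=F_2XCO2):
    """ECS = F_2xCO2 * dT / (dF - dQ)."""
    return f_2x * delta_T / (delta_F - delta_Q)

# With dT = 0.75 K and dF = 2.2 W/m^2, raising dQ from ~0.36 to ~0.77
# W/m^2 pushes the ECS estimate up by several tenths of a degree.
for dQ in (0.36, 0.65, 0.77):
    print(dQ, round(ecs(0.75, 2.2, dQ), 2))
```

On these made-up inputs, the estimate moves from roughly 1.5 to roughly 1.9 as $\Delta Q$ goes from 0.36 to 0.77 W/m^2, which is the direction of the effect being discussed.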

330. Reading that comment of Trenberth’s, I was left wondering whether, by “50% too low”, he referred to the estimate of the OHC increase or to the final sensitivity results. No reference was given, so I couldn’t check whether the context would answer that.

331. Curry had an op-ed in the Wall Street Journal yesterday, touting her new paper
The Global Warming Statistical Meltdown
http://online.wsj.com/articles/judith-curry-the-global-warming-statistical-meltdown-1412901060

332. Steven,
Yes, I saw that. Was tempted to write about it, but am not sure I can really be bothered.

333. David Appell at Quark Soup has commented on the Curry WSJ Op-Ed.

334. Steve Bloom says:

Too much curry leads to indigestion.

335. Richard Erskine says:

In case you missed it, there was a piece on BBC Radio 4 this morning (1st Nov), around 0715 hrs, with the line that the sceptics are moving closer to the mainstream, and it spent some time interviewing Nic Lewis. The closing line was that IF he is correct we should still be worried, but maybe we have more time to decarbonize. This may be worthy of a new blog piece 🙂 … maybe even a call to the editors at Radio 4.

336. Richard Erskine says:

As someone with a deep science background but on a steep learning curve vis-à-vis climate science, I find your blog a gold mine of insight. However, I feel there is a need for a ‘bridge’ that presents the same insight in a form that, for example, a BBC science correspondent can digest. My probably naive concern with the work criticised in this thread, and with a lot of journalistic comment, is (a) a lack of understanding that the system is not simply linear; (b) that climate sensitivity is a macro-level measure of sensitivity at one point in time and state (as you say above), not a universal constant … a graph showing the variation of climate sensitivity might actually be useful, with projections of how it can go terribly non-linear under increased forcing, including feedbacks (libido changes, methane reservoirs, etc.); (c) we need to also show how catastrophically non-linear the impacts are post even 2 C (which for Africa would already be catastrophic).

337. Richard,
Thanks, I’ll try and listen to the segment.

we need to also show how catastrophically non-linear the impacts are post even 2 C (which for Africa would already be catastrophic).

This is something I would like to know more about myself. I don’t really have a good sense of the likely impacts of more than 2 degrees of warming (regional impacts, for example).

338. Steve Bloom says:

There’s nothing more terrible than a non-linear libido change. 🙂

Why would we expect BBC science correspondents to have a better grasp of stocks and flows than the MIT grad students surveyed by Sterman and colleagues? I would suggest that someone with a poor grasp is simply not going to be able to understand the climate problem.

339. Rachel M says:

There’s nothing more terrible than a non-linear libido change.

It depends which direction the change is in. It could be very positive 🙂

340. verytallguy says:

Nic Lewis?

Identified as a climate scientist with key insights by the BBC?

341. vtg,
Even though I think Nic Lewis deserves credit for doing research and publishing papers, his presence on another BBC programme is at least consistent with my hypothesis that the BBC charter requires all segments about climate science to include someone associated with the GWPF.

342. Richard Erskine says:

Albedo, libido … oops. Am I dyslexic or a closet Freudian? That’ll teach me to post before my morning cup of tea!

343. Richard,
I probably shouldn’t have read it while drinking my morning coffee 🙂

344. Rachel M says:

I can see the headline now “Global warming causing changes in libido. Morning coffee causing changes in albedo”.

Sadly, global warming is having an impact on our morning coffee

345. Richard Erskine says:

What have I started!?

346. Willard says:

> What have I started!?

A search for the missing heat.

347. ezra abrams says:

Cowtan and Way —
two guys who are not professional climate scientists; what would you say if they had asserted the earth is not warming?
They say that the global temperature datasets used by the climate community for the last 20 years have such serious errors that the models (Fyfe et al., Nature Climate Change, Sept 2013) are not reliable.

the statistical method of interpolation that they use to recover temperatures from areas without instruments is suspect

I mean, I’m pro-warming, but when you make it easy, no wonder the denialists manage to convince people

348. two guys who are not professional climate scientists; what would you say if they had asserted the earth is not warming?

So what? There are numerous people who aren’t climate scientists who claim the world isn’t warming.

I mean, I’m pro-warming,

What does this even mean?

If you want to try and make a sensible comment, feel free.

349. OPatrick says:

two guys who are not professional climate scientists; what would you say if they had asserted the earth is not warming?

I’m no expert but I would have said there was a considerable difference between ‘asserting…’ and ‘publishing a peer-reviewed article showing that…’.

350. Vinny Burgoo says:

Wotts: ‘If you want to try and make a sensible comment, feel free.’

Is now the right time to mention that the leader of the UK franchise of the Natural Law Party (AKA The Yogic Flyers) was a physicist?

Probably not. But I couldn’t sit on that factoid any longer. It was making me fidget uncomfortably. Possibly because of what my late quasi-stepfather told me about what such people do to neck cushions.

(The bouncy physicist now teaches at a well-regarded business school in India.)

351. The “Natural Law Party”. Who are they? I’m sure that you can find people with all sorts of qualifications who have rather odd ideas.

352. Joshua says:

I get the sense that ezra is concerned.

353. > What does this [pro-warming] even mean?

CO2 is plant food.

354. Jai Mitchell says:

Isn’t it interesting how confirmation bias works? I mean, let’s set up a high baseline and a low estimate of current OHC accumulation, assume linear feedbacks, and produce an inherently flawed study that doesn’t really change the body of work but can be asserted by the denialist crowd as saying something it doesn’t!

We haven’t even begun to see the feedbacks and carbon-cycle tipping points that our current TOA imbalance will induce over the next decade or so.

What if Durack et al. (2014) proves that cloud effects are much greater negative forcers than currently modeled? Where would that leave TCR and ECS, much less ESS?

We have likely already locked in over 2.5C above pre-industrial at 420 ppmv CO2e.

355. Jai,
I saw your comments on Mark Lynas’s blog. A bit more robust than mine 🙂 I meant to ask you about Nic Lewis’s “absorption windows to space” theory. I wasn’t aware that he proposed anything like that.

356. Jai,
Your comment reminded me that I once mentioned you in a post. 🙂

357. Jai Mitchell says:

Hmmm … on further investigation, I appear to have misdirected my ire in this case. I confused his past work with Ferenc Miskolczi’s. I will have to post a correction; thanks for the follow-up! This is the problem with trying to stay abreast of the Gish gallop over at WUWT: it gets harder and harder to separate the subtle biases from the outright lies and pseudoscience!

358. Jai,
Depends, you could always leave it and give Nic Lewis something to nitpick. It is his forte.

359. Jai Mitchell says:

hmm, didn’t see that, thanks for the rec! 🙂

I pretty much stopped going over there when they deleted my posts showing how much money was being funneled to the Heritage Institute by Donors Trust, and how Spencer was splicing his satellite records with regional warming series and then putting that on top of the GISP2 series to grossly underestimate regional warming (in his Senate testimony last year).

Thanks again for the follow up and for your work, I have learned a lot coming here over the years!

360. Jai Mitchell says:

Doh, Heritage = Heartland. . .