I thought I might briefly mention the recent paper on Quantifying underestimates of long-term upper-ocean warming, by Durack et al. (2014). The relevant result is stated in the abstract, which says
Using satellite altimetry observations and a large suite of climate models, we conclude that observed estimates of 0–700 dbar global ocean warming since 1970 are likely biased low. This underestimation is attributed to poor sampling of the Southern Hemisphere, … These adjustments yield large increases (2.2–7.1 × 10²² J 35 yr⁻¹) to current global upper-ocean heat content change estimates, and have important implications for sea level, the planetary energy budget and climate sensitivity assessments.
The key figure (below) shows how this analysis influences the different estimates of upper-700 m ocean heat content.
One has to be careful of single study syndrome, but this is clearly relevant given the recent paper by Lewis & Curry (2014). They estimated the Equilibrium Climate Sensitivity (ECS – but really an Effective Sensitivity) using

$ECS = \dfrac{F_{2\times CO_2} \, \Delta T}{\Delta F - \Delta Q}$,

where $\Delta T$ is the change in global surface temperature, $\Delta F$ the change in radiative forcing, $\Delta Q$ the change in system heat uptake rate, and $F_{2\times CO_2}$ the forcing due to a doubling of atmospheric CO2,
and one of my main criticisms was that they seemed to minimise $\Delta Q$ (the change in system heat uptake rate) by choosing a rate during the base period that seemed higher than other estimates, and a rate during the final period that was about as low as it could reasonably be. This new paper suggests that $\Delta Q$ could be significantly higher than the value used by Lewis & Curry (2014), which would consequently increase their ECS estimate. I haven’t actually done the calculation, but a tweet from Gavin Schmidt suggests that this adjustment would reduce the difference between the Lewis & Curry range and the IPCC range (I’ve updated this post since, as you can see below, the preliminary calculation suggested a closer agreement than the later calculation gave).
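To see why a larger $\Delta Q$ pushes the energy-budget ECS up, here is a minimal sketch of the calculation. The numbers are purely illustrative values of roughly the right magnitude, not the actual inputs used by Lewis & Curry (2014).

```python
# Energy-budget ECS: ECS = F_2x * dT / (dF - dQ).
# All numbers below are illustrative, not the values from any particular paper.

F_2X = 3.71  # forcing from a doubling of CO2 (W m^-2), a commonly assumed value


def ecs(dT, dF, dQ, F_2x=F_2X):
    """Effective climate sensitivity from a simple energy budget (K)."""
    return F_2x * dT / (dF - dQ)


dT = 0.7  # change in global surface temperature (K), illustrative
dF = 2.0  # change in radiative forcing (W m^-2), illustrative

# A low estimate of the change in system heat uptake rate gives a lower ECS...
low = ecs(dT, dF, dQ=0.35)
# ...while a higher dQ (as an upward ocean-heat-content revision would imply)
# shrinks the denominator and raises the ECS estimate.
high = ecs(dT, dF, dQ=0.55)
print(f"ECS with low dQ:  {low:.2f} K")
print(f"ECS with high dQ: {high:.2f} K")
```

The point is simply that $\Delta Q$ sits in the denominator, so any upward revision to ocean heat uptake feeds directly into a higher sensitivity estimate.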
This adjustment doesn’t influence the Transient Climate Response (TCR), since that does not depend on the change in system heat uptake rate, being determined using

$TCR = \dfrac{F_{2\times CO_2} \, \Delta T}{\Delta F}$.
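The independence from $\Delta Q$ is obvious from the formula, but for completeness, a sketch (same illustrative numbers as above, and the same commonly assumed value of $F_{2\times CO_2}$):

```python
# TCR = F_2x * dT / dF — there is no dQ term, so an ocean-heat-uptake
# adjustment leaves the TCR estimate unchanged. Illustrative numbers only.

F_2X = 3.71  # forcing from a doubling of CO2 (W m^-2), a commonly assumed value


def tcr(dT, dF, F_2x=F_2X):
    """Transient climate response from a simple energy budget (K)."""
    return F_2x * dT / dF


# Whatever dQ turns out to be, this value is unaffected.
print(f"TCR: {tcr(0.7, 2.0):.2f} K")
```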
The problem that I see with all of this is that it seems as though one can make assumptions and choose datasets that produce low estimates, or make assumptions and choose datasets that give higher estimates. Actually doing a single calculation that would convince most people seems quite difficult. What might be useful would be a thorough study that considers all the reasonable assumptions and choices of datasets. One might then be able to produce a more reasonable range and best estimate, based on some combination of the results from these different possible assumptions and data choices. I will add, though, that if it is known that some dataset has an issue (for example, that HadCRUT4 has a sampling bias) then this should really be acknowledged and considered – something that, in my opinion, Lewis & Curry (2014) didn’t do particularly well.
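A toy version of that "consider all the reasonable assumptions" idea would be to sweep over plausible choices for each input and look at the spread of the resulting energy-budget estimates. The ranges below are hypothetical, chosen only to illustrate the approach:

```python
# Sweep over hypothetical choices of dT, dF and dQ (standing in for
# different datasets and forcing/heat-uptake estimates) and report the
# spread of energy-budget ECS values. All ranges here are made up for
# illustration, not taken from any study.

from itertools import product

F_2X = 3.71  # forcing from a doubling of CO2 (W m^-2), a commonly assumed value

dT_choices = [0.65, 0.70, 0.75]  # hypothetical surface-temperature datasets (K)
dF_choices = [1.8, 2.0, 2.2]     # hypothetical forcing estimates (W m^-2)
dQ_choices = [0.30, 0.45, 0.60]  # hypothetical heat-uptake estimates (W m^-2)

estimates = [F_2X * dT / (dF - dQ)
             for dT, dF, dQ in product(dT_choices, dF_choices, dQ_choices)]

print(f"ECS range over all combinations: "
      f"{min(estimates):.2f} - {max(estimates):.2f} K")
```

A real study would of course weight the combinations by how defensible each choice is, rather than treating them all equally, but even this crude sweep makes the point that the spread across reasonable choices can be wide.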
There’s also the issue that these simple estimates cannot account for non-linearities in the feedbacks, inhomogeneities in the forcings, or slow feedbacks. Consequently, I’ve always felt that they’re quite useful as a basic check, but can’t really be regarded as more robust or reliable than other methods. That they seem to give ranges that are similar to the IPCC ranges would seem to be more a confirmation of the IPCC estimates than a reason to argue against them.