Given that there’s been some discussion about internal variability in my previous post, and because there seems to have been interest elsewhere, I thought I would post some thoughts.

A paper I was reading recently is Internal variability of Earth’s energy budget simulated by CMIP5 climate models by Palmer & McNeall (2014), which uses multi-century pre-industrial control simulations from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) to investigate relationships between net top-of-atmosphere radiation (TOA), globally averaged surface temperature (GST) … on decadal timescales. The most interesting figure is probably the one on the right, which shows the range of internally driven surface temperature trends and system heat uptake rates, plotted against time interval. For periods of about a decade or less, these can be quite substantial.
Such internally driven variations could have implications for energy balance calculations – in particular the transient calculation – since internal variability could have a substantial influence on the temperature change. As Tom Curtis points out, however, an assumption of the energy balance method is that the change in outgoing flux due to temperature changes resulting from internal variability matches that due to temperature changes in response to a forcing. If so, this wouldn’t influence the equilibrium calculation. However, as Pekka suggests, regional variations mean that this may not always be the case. This appears to be consistent with the paper, which suggests that changes in temperature and system heat uptake rate only correlate on average – there is a large amount of variability.

On a similar note, there was another recent paper – also involving Palmer & McNeall – on quantifying the likelihood of a continued hiatus in global warming (Roberts et al. 2015). You can read more about it on Doug’s blog, but the core result is probably illustrated in the table on the left. It shows the probability of internal variability offsetting a trend of 0.2°C per decade, for different time intervals – dropping to less than 1% for periods exceeding 20 years. The interesting result is that the probability of internal variability continuing to offset such a trend for a further 5 years is actually quite high if it has already done so for 15 years (although I don’t think this is necessarily all that surprising).
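To make the transient/equilibrium distinction concrete, here is a minimal sketch of the standard energy-budget estimates. The numbers are purely illustrative (roughly of the kind used in observationally based studies), not taken from the papers above; the point is simply that the transient estimate depends only on ΔT and ΔF, so internal variability in ΔT feeds straight into it, while the equilibrium estimate also subtracts the system heat uptake ΔN.

```python
# Hedged sketch of energy-budget climate sensitivity estimates.
# All input values are illustrative, not from Palmer & McNeall (2014).

F_2X = 3.7   # W/m^2, radiative forcing from a doubling of CO2
dT = 0.75    # K, temperature change over the period (illustrative)
dF = 1.9     # W/m^2, change in radiative forcing (illustrative)
dN = 0.65    # W/m^2, change in system heat uptake / TOA imbalance (illustrative)

# Transient response: ignores heat uptake, so internal variability in dT
# maps directly onto the estimate.
tcr = F_2X * dT / dF

# Equilibrium response: subtracts heat uptake, so it is (under the
# assumption Tom Curtis describes) less sensitive to internal variability.
ecs = F_2X * dT / (dF - dN)

print(round(tcr, 2), round(ecs, 2))  # 1.46 2.22
```

With these illustrative inputs, a decade of internally driven warming that inflates ΔT would inflate the transient estimate proportionally, which is why the figure’s decadal trend ranges matter for this kind of calculation.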
There’s a related post on RealClimate called climate oscillations and the global warming faux-pause. It discusses a recent paper by Steinman, Mann & Miller called Atlantic and Pacific multidecadal oscillations and Northern Hemisphere temperatures. It
applied a semi-empirical approach that combines climate observations and model simulations to estimate Atlantic- and Pacific-based internal multidecadal variability (termed “AMO” and “PMO,” respectively).
and concluded that
the AMO and PMO are found to explain a large proportion of internal variability in Northern Hemisphere mean temperatures.
As Robert Way points out, however, there are probably also other contributing factors, such as updated forcings for volcanic activity and the weak solar cycle, and using these updated forcings would [probably?] reduce the total role of multidecadal variability.
I was going to finish this rather convoluted post with a quick mention of a paper (H/T Kevin Anchukaitis) called spectral biases in tree-ring climate proxies. I did read the paper and am not sure I quite got the significance, but it does say
We find that whereas an ensemble of different general circulation models represents patterns captured in instrumental measurements, such as land–ocean contrasts and enhanced low-frequency tropical variability, the tree-ring-dominated proxy collection does not … temperature-sensitive proxies overestimate, on average, the ratio of low- to high-frequency variability. These spectral biases in the proxy records seem to propagate into multi-proxy climate reconstructions for which we observe an overestimation of low-frequency signals. Thus, a proper representation of the high- to low-frequency spectrum in proxy records is needed to reduce uncertainties in climate reconstruction efforts.
If I’ve understood this properly (and I might not have), this seems to be suggesting that multi-proxy climate reconstructions overestimate the ratio of low- to high-frequency variability and, hence, may not be capturing the full spectrum of variability. If someone else understands the significance of this, it would be interesting to get it clarified.
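For readers unfamiliar with the low- to high-frequency ratio being discussed, here is a crude illustration of what such a ratio measures. This is a hypothetical helper based on a simple periodogram, not the method used in the paper: it splits spectral power at a chosen period (here 20 time steps) and compares the two sides. A strongly persistent ("red") series should score much higher than white noise, which is the sense in which a proxy can over-represent low frequencies.

```python
import numpy as np

def low_high_ratio(x, dt=1.0, split_period=20.0):
    """Ratio of spectral power at periods longer vs shorter than split_period.

    A crude periodogram-based illustration of a low- to high-frequency
    variability ratio (hypothetical helper, not the paper's method).
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), d=dt)
    power = np.abs(np.fft.rfft(x)) ** 2
    low = power[(freqs > 0) & (freqs < 1.0 / split_period)].sum()
    high = power[freqs >= 1.0 / split_period].sum()
    return low / high

# Red noise (an AR(1) process) has far more low-frequency power than
# white noise, so its ratio should come out much larger.
rng = np.random.default_rng(0)
white = rng.standard_normal(2000)
red = np.zeros(2000)
for i in range(1, 2000):
    red[i] = 0.9 * red[i - 1] + rng.standard_normal()

print(low_high_ratio(red) > low_high_ratio(white))  # True
```

In this toy picture, a reconstruction built from records biased toward the "red" end would report more low-frequency (e.g. multidecadal) variability than the underlying climate actually had, which seems to be the paper’s concern.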
Anyway, that’s all I was going to say. This is all rather longer and more jumbled than I had intended, but hopefully there’s something for everyone.