## Some thoughts on internal variability

Given that there’s been some discussion about internal variability in my previous post, and because there seems to have been interest elsewhere, I thought I would post some thoughts.

Figure 4 from Palmer & McNeall (2014) showing internally driven surface temperature trends and system heat uptake rates.

A paper I was reading recently is Internal variability of Earth’s energy budget simulated by CMIP5 climate models by Palmer & McNeall (2014), which uses multi-century pre-industrial control simulations from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) to investigate relationships between net top-of-atmosphere (TOA) radiation, globally averaged surface temperature (GST) ….. on decadal timescales. The interesting figure is probably the one on the right, which shows the range of internally driven surface temperature trends and system heat uptake rates, plotted against time interval. For periods of about a decade or less, these can be quite substantial.

Such internally driven variations could have implications for energy balance calculations – in particular the transient calculation – since internal variability could have a substantial influence on the temperature change. As Tom Curtis points out, however, an assumption of the energy balance method is that the change in outgoing flux due to temperature changes resulting from internal variability matches that due to temperature changes in response to a forcing. If so, this wouldn’t influence the equilibrium calculation. However, as Pekka suggests, regional variations mean that this may not always be the case. This appears to be consistent with this paper, which suggests that changes in temperature and system heat uptake rate only correlate on average – there is a large amount of variability.

Credit : Roberts et al. (2015)

On a similar note, there was another recent paper – also including Palmer & McNeall – on quantifying the likelihood of a continued hiatus in global warming (Roberts et al. 2015). You can read more about it on Doug’s blog, but the core result is probably illustrated in the table on the left. It shows the probability of internal variability offsetting a trend of 0.2°C per decade, for different time intervals – dropping to less than 1% for periods exceeding 20 years. The interesting result is that the probability of it continuing to offset such a trend for an additional 5 years is actually quite high if it has already done so for 15 years (although I don’t think this is necessarily all that surprising).
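
The conditional-probability logic behind that result can be sketched with purely hypothetical numbers (these are not the values from Roberts et al. 2015):

```python
def p_continue(p_longer, p_shorter):
    """P(offset lasts the longer period | it has already lasted the shorter one)."""
    return p_longer / p_shorter

# Hypothetical values: a 15-year offset is rare, and a 20-year offset rarer still,
# but the ratio -- the chance of 5 more years given 15 already -- is not small.
p15, p20 = 0.02, 0.005
extra_5yr = p_continue(p20, p15)  # 0.25 with these made-up numbers
```

The point is simply that conditioning on a rare event having already happened can make its continuation look surprisingly likely.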

There’s a related post on RealClimate called climate oscillations and the global warming faux-pause. It discusses a recent paper by Steinman, Mann & Miller called Atlantic and Pacific multidecadal oscillations and Northern Hemisphere temperatures. It

applied a semi-empirical approach that combines climate observations and model simulations to estimate Atlantic- and Pacific-based internal multidecadal variability (termed “AMO” and “PMO,” respectively).

and concluded that

the AMO and PMO are found to explain a large proportion of internal variability in Northern Hemisphere mean temperatures.

As Robert Way points out, however, there are probably also other contributing factors, such as updated forcings for volcanic activity and the weak solar cycle, and that using these updated forcings would [probably?] reduce the total role of multidecadal variability.

I was going to finish this rather convoluted post with a quick mention of a paper (H/T Kevin Anchukaitis) called spectral biases in tree-ring climate proxies. I did read the paper, but am not sure I quite got the significance. It does say

We find that whereas an ensemble of different general circulation models represents patterns captured in instrumental measurements, such as land–ocean contrasts and enhanced low-frequency tropical variability, the tree-ring-dominated proxy collection does not…….temperature-sensitive proxies overestimate, on average, the ratio of low- to high-frequency variability. These spectral biases in the proxy records seem to propagate into multi-proxy climate reconstructions for which we observe an overestimation of low-frequency signals. Thus, a proper representation of the high- to low-frequency spectrum in proxy records is needed to reduce uncertainties in climate reconstruction efforts.

If I’ve understood this properly (and I might not have), this seems to be suggesting that multi-proxy climate reconstructions overestimate the ratio of low- to high-frequency variability and, hence, might be suggesting that they’re not capturing all the variability. If someone else understands the significance of this, it would be interesting to get it clarified.
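To make the “ratio of low- to high-frequency variability” concrete, here’s a sketch of one way such a ratio could be computed from a time series (illustrative only; this is not the method used in the paper). A random walk, which has a red spectrum, gives a much larger ratio than white noise:

```python
import numpy as np

def low_high_ratio(series, cutoff=0.1):
    """Ratio of spectral power below `cutoff` (cycles per step) to power above it."""
    power = np.abs(np.fft.rfft(series - series.mean())) ** 2
    freqs = np.fft.rfftfreq(len(series))
    low = power[(freqs > 0) & (freqs <= cutoff)].sum()
    high = power[freqs > cutoff].sum()
    return low / high

rng = np.random.default_rng(0)
white = rng.standard_normal(1000)  # flat spectrum
red = np.cumsum(white)             # random walk: power concentrated at low frequencies
ratio_white = low_high_ratio(white)
ratio_red = low_high_ratio(red)
```

A proxy that overestimates this ratio would, in this picture, look “redder” than the climate it is recording.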

Anyway, that’s all I was going to say. This is all rather longer and more jumbled than I had intended, but hopefully there’s something for everyone.

## Some thoughts on climate sensitivity

Semyorka posted a comment on my previous post that highlighted a paper that I hadn’t seen before. The paper is Global atmospheric downward longwave radiation over land surface under all-sky conditions from 1973 to 2008, which tries to determine (as the title might suggest) the change in downwelling longwave flux over land.

The abstract concludes with

We found that daily Ld increased at an average rate of 2.2 W m-2 per decade from 1973 to 2008. The rising trend results from increases in air temperature, atmospheric water vapor, and CO2 concentration.

Ld is the global atmospheric downward longwave radiation, and the observed trend (2.2 Wm-2 per decade) suggests it increased by 7.7 Wm-2 between 1973 and 2008. Initially I was somewhat confused by this (still am maybe :-) ) as it seemed rather high, but over the same time interval, land surface temperature increased by almost 1 K (see the Skeptical Science trend calculator). This would increase the outgoing surface flux by

$dF = 4 \sigma T^3 dT \approx 5.5\,Wm^{-2}.$
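
As a quick sanity check of the numbers above, here’s a sketch assuming a mean land surface temperature of roughly 288 K (the exact value doesn’t change things much):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux_change(T, dT):
    """Linearised change in blackbody surface flux: dF = 4 * sigma * T^3 * dT."""
    return 4 * SIGMA * T**3 * dT

# Assumed values: T ~ 288 K for the land surface, dT ~ 1 K over 1973-2008.
dF = flux_change(288.0, 1.0)  # ~5.4 W m^-2, close to the ~5.5 quoted above
dLd = 2.2 * 3.5               # 2.2 W m^-2 per decade over 3.5 decades = 7.7 W m^-2
```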

So, the outgoing surface flux over land has increased by about 5.5Wm-2, while the downward longwave flux has increased by about 7.7Wm-2. If you consider a typical forcing dataset, then the radiative forcing has increased by maybe as much as 1.5Wm-2 since the mid-1970s. If you do a simple transient temperature response calculation, that would suggest that the transient response over land is

$TCR = \dfrac{3.7 \Delta T}{\Delta F} = \dfrac{3.7 \times 1}{1.5} \approx 2.5\,K,$

which is somewhat higher than the expected global value of slightly below 2K (okay, maybe it should be 3.44, rather than 3.7, but that won’t change this all that much. Also, the change in forcing I’ve used is probably a bit too high anyway.). It’s possible that the system is just too complex for such a calculation to be reasonable, but given the low thermal inertia of the land – compared to the oceans – it’s not that surprising that the land-only TCR is greater than the global TCR.

However, the equilibrium response (with fast feedbacks only) shouldn’t depend on the thermal inertia (it will just take longer to reach if the thermal inertia is high than if it is low). Therefore, if the above calculation has some merit, the fact that the downward longwave flux over land exceeds the outgoing flux (as the paper mentioned above suggests) could suggest that the equilibrium response has to exceed 2.5K.

Admittedly, I’m ignoring uncertainties and all sorts of caveats. It’s also possible that such a calculation doesn’t really make any sense given the complexity of the system. That’s why I thought I would write this post – someone can point out where I’ve gone wrong and why this hasn’t been suggested before (assuming that it hasn’t). In my experience, when you notice something apparently simple that no-one has noticed before, it’s probably because it’s not as simple as you initially thought :D .

Update: I knew I was going to do something silly in this post. As Chris Colose points out on Twitter, you need to close the surface energy budget using non-radiative terms too, like evaporation and convection. That the downwelling flux exceeds the upward flux doesn’t mean that the surface is out of energy balance. So, the latter part of this post is probably slightly nonsensical, or – rather – you can’t really use this to argue for an ECS above 2.5K.

## CO2 forcing observed from surface

I thought I would post this video illustrating the first time that a change in CO2 forcing has been observed from the ground. The paper is Observational determination of surface radiative forcing by CO2 from 2000 to 2010 by Feldman et al. (2015). I had a quick read and, as I understand it, they observed from two different sites and measured the downwelling spectrum in the infrared band. They then had to use radiative transfer models to try and remove things like seasonal variations so as to extract the change in forcing due to changing atmospheric CO2 concentrations. They detect a trend of 0.2 ± 0.06 Wm-2 per decade.

Something to bear in mind, though, is that this is not the first time that the radiative influence of increased atmospheric CO2 has been detected. Harries et al. (2001) measured – from space – a change in the outgoing spectrum. This is, however, the first time that it’s been detected at the surface.

## Willie Soon saga

I was tempted to ignore the whole Willie Soon saga, but since everyone else is writing about it, I thought I would post something. Personally, I think academic freedom is extremely important. If someone can get funded to do research and can get their work published, good on them; that’s how it’s meant to work. There may be issues with peer review that could be addressed, and maybe we should be looking at how some journals operate, but none of that changes that people should be free to research whatever they want to (well, within the bounds of ethics). If, however, he didn’t disclose his funders and/or didn’t disclose possible conflicts of interest, that is a serious issue and should be addressed. I have a feeling, however, that this may reflect as badly on the Smithsonian as it does on Willie Soon himself.

One reason I don’t care greatly about this whole saga is that it’s fairly clear that Willie Soon’s research is mostly rubbish. I wrote about the recent Monckton, Soon, Legates and Briggs paper. RealClimate has a post pointing out the fallacy in some of his research. There’s the whole Soon and Baliunas controversy. There’s nothing fundamentally wrong with some people publishing rubbish. Willie Soon is almost certainly not alone in doing so.

The more worrying thing – which is what I think Adam Frank is getting at in this article – is how someone like Willie Soon has managed to get such a prominent public profile. Rubbish research would normally simply not get noticed and the researchers would disappear into obscurity. I have no problem with there being rubbish researchers (I may be one myself), but I do have a problem with rubbish researchers gaining prominence when their research is so obviously drivel. Academic freedom means that you have the freedom to do whatever research you want. It doesn’t mean that you get to do so and avoid criticism when it’s nonsense. I also see no reason why anyone who wants to be credible would be comfortable with Willie Soon’s research having the prominence it does, irrespective of their own views on global warming. Surely, we would all like the public and policy makers to be as well-informed as possible. We should all be comfortable with calling out obvious nonsense, irrespective of who is presenting it (and I do mean obvious nonsense, rather than what some think is nonsense, but others don’t).

Of course a more interesting issue is what this implies overall. On an earlier thread I ended up in a lengthy discussion – that I should probably have tried to stop – about how some people think that skeptics are prevented from getting funded, and prevented from getting their work published, because there is an active attempt to control what is funded and what is published. Given my own experiences, this seems highly implausible, but if Willie Soon is one of the leading climate skeptics, then this seems completely nonsensical. If he’s one of the best, then it would seem obvious that the reason skeptics might find it hard to get funded and published is because their work is rubbish, not because there is some active conspiracy to stop them.

Anyway, that’s my view, FWIW. I realise that this post has some thoughts about conspiracy ideation, but I have no great interest in lengthy discussions about possible conspiracy, especially as – in my view – the Willie Soon saga essentially shows that there isn’t one. Maybe people could bear that in mind when crafting their comments.

Posted in Climate change, ClimateBall, Science | 112 Comments

## Black swans

Eli’s recent post about Black swans, and black cats, motivated me to look into what the whole Black Swan idea was all about. As I understand it, a Black Swan event is simply an unexpected event that has a significant impact, and that – in retrospect – we regard as something that could have been predicted. Unless there’s some subtlety that I’m missing, this essentially seems to be equivalent to the uncertainty-isn’t-our-friend and high-risk, low-probability event arguments that have been made in relation to climate change before.

In Eli’s post he touched on Nic Lewis’s Energy Balance approach, and I thought I might expand on this a bit here. Many people seem to use the basic energy balance calculations to argue that climate sensitivity is probably low and that climate models are over-estimating climate sensitivity. The energy balance approach is fairly simple: the transient climate response (TCR) can be estimated using

$TCR = \dfrac{F_{2x} \Delta T}{\Delta F},$

and the equilibrium climate sensitivity (ECS) can be estimated using

$ECS = \dfrac{F_{2x} \Delta T}{\Delta F - \Delta H},$

where $F_{2x}$ is the change in forcing after a doubling of CO2, $\Delta T$ is the change in temperature, $\Delta F$ is the estimated change in external forcing (typically from models), and $\Delta H$ is the change in system heat uptake rate.
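
These two estimates are straightforward to compute. A minimal sketch, with input values that are purely illustrative (roughly the magnitudes used in Otto et al. 2013, not their exact numbers):

```python
F2X = 3.7  # W m^-2, a commonly used value for the forcing from doubling CO2

def tcr(dT, dF, f2x=F2X):
    """Transient climate response: TCR = F2x * dT / dF."""
    return f2x * dT / dF

def ecs(dT, dF, dH, f2x=F2X):
    """Equilibrium climate sensitivity: ECS = F2x * dT / (dF - dH)."""
    return f2x * dT / (dF - dH)

# Illustrative inputs only: dT in K, dF and dH in W m^-2.
dT, dF, dH = 0.75, 1.95, 0.65
tcr_est = tcr(dT, dF)      # ~1.4 K
ecs_est = ecs(dT, dF, dH)  # ~2.1 K; subtracting dH always makes ECS > TCR
```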

If you carry out an energy balance-type calculation (see, for example, Otto et al. 2013 and Lewis & Curry 2014) you do indeed find that the best estimates for climate sensitivity are lower than many other methods would suggest, and the range is also shifted to lower values. A number of people are therefore using these results to argue that climate sensitivity is probably lower than previously thought, and that climate models are over-estimating climate sensitivity.

However, there is something that should be borne in mind when making such claims; energy balance models have a number of assumptions which – in my experience – are rarely acknowledged.

• Feedbacks are linear: There is a fundamental assumption that the feedback response is linear: the feedback response in the future will be the same as it has been over the interval considered by the energy balance calculation.
• Polar amplification is negligible: A number of the temperature datasets suffer from coverage bias and may be underestimating the temperature change through not including sufficient coverage of the Arctic, where warming may have been faster than the global average. One can compensate for this by using a temperature dataset that tries to account for the coverage bias (Cowtan & Way, for example), but I don’t think any published estimates have done so.
• Internal variability is negligible: Given that energy balance models assume that the observed temperature change is all externally forced, they’re essentially assuming that internal variability has had no effect.
• Forcings are homogeneous: The forcings are assumed to be globally homogeneous. Given that there is more land mass in the northern hemisphere than the southern hemisphere, the north should warm faster than the south. Any inhomogeneity in the forcings could therefore influence the global estimates.
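
The sensitivity to these assumptions is easy to illustrate: because the estimate scales linearly with the temperature change, a hypothetical 10% coverage-bias correction to it shifts the estimate by the same 10% (the numbers here are illustrative, not from any published estimate):

```python
def tcr(dT, dF, f2x=3.7):
    """Energy-balance transient response: TCR = F2x * dT / dF."""
    return f2x * dT / dF

base = tcr(0.75, 1.95)             # illustrative inputs, in K and W m^-2
adjusted = tcr(0.75 * 1.10, 1.95)  # hypothetical 10% upward correction to dT
fractional_shift = adjusted / base - 1.0  # ~0.10: the estimate scales linearly with dT
```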

So, given these assumptions, I don’t think one can really argue that energy balance calculations suggest that climate models are over-estimating climate sensitivity. They might be, but all I think one can say is that if the above assumptions are true, then energy balance calculations suggest that climate sensitivity might be lower than other estimates suggest. In fact, one might argue that all the above assumptions are probably wrong to some degree or another and that some could change the estimates by 5-10%. If so, it’s hard to really argue that energy balance estimates are evidence against the higher climate sensitivities that other methods suggest might be likely. I think it is this kind of issue that leads Nassim Taleb to say

Skepticism about climate models should lead to more precautionary policies in the presence of ruin. It is incoherent to doubt the mean while reducing the variance.

Maybe I’ve misunderstood the whole Black Swan idea, but it does seem that climate change could be ripe for a Black Swan event. Given how hard and how fast we’re pushing our climate, it wouldn’t be particularly surprising if something unexpected were to happen. Could it be something good? Possibly, but I think the parameter space for unexpected events having a damaging impact is significantly greater than the parameter space that would allow for positive impacts. Of course, if something were to happen, it would probably be followed by some people saying “we didn’t see that coming”, immediately followed by others responding with “we did!”

## Aurora

There have, apparently, been some impressive Auroral displays in the UK in the last few days. I managed to miss these, but it is meant to get better again in the next couple of days. The weather, however, is not, so I thought I would just post a picture I took quite some time ago. As you may guess, this isn’t the Aurora Borealis, and it also shows – in the foreground – a hut that I slept in for about a year. The hut is now (assuming it’s still there) on the roof of the Physics building in Durban.

Posted in Personal, Science | 7 Comments

## Just grow up!

Amelia Sharman and Candice Howarth have a Conversation article about losing the climate debate labels. This is related to discussions we’ve had here and elsewhere.

Fundamentally, I agree with the basic premise: serious dialogue generally requires that you avoid labeling those with whom you wish to have a discussion. Labeling has a tendency to destroy dialogue, and so if we wish the dialogue to improve, then labeling will have to be discouraged. However, there are a number of things that I think those who are promoting this anti-labeling idea are failing to acknowledge, and I thought I would lay out some thoughts here.

• The level of scientific agreement: Whether people like it or not, there is a great deal of agreement within the scientific community about the basics of climate science. I think it’s important to acknowledge and recognise this. It doesn’t mean that it’s right and that people shouldn’t question the current understanding, but suggesting that it doesn’t exist – or ignoring its existence – would seem to be ignoring reality. I also don’t really see the point in promoting improved dialogue if that doesn’t also include an acknowledgement of this general level of scientific agreement.
• Science, policy or both: Many of these attempts to improve dialogue never seem to quite clarify if they mean with respect to the science, with respect to policy, or both. I think this is an important distinction to make. What we as a society might choose to do, given some scientific evidence, is something that we should be deciding democratically. What our climate might do in the presence of increasing anthropogenic forcings is not. The scientific evidence is not going to change, just because the implications are inconvenient. If people want improved dialogue about the science, then I think they should at least recognise that the scientific dissenters (or whatever word you want to use) are small in number. If people want improved dialogue about the policy implications, then it would seem to be worth recognising that people shouldn’t simply choose their preferred evidence.
• Who benefits? I’m trying to think of how best to put this. If we’re talking about climate science specifically (rather than climate policy) then climate scientists are the experts; they won’t specifically benefit from dialogue, especially if it is with someone who is likely to end up calling them a fraud or a liar. They’re also unlikely to learn anything from those who claim AGW is some kind of massive scam or conspiracy. They can simply go back to their offices and laboratories and keep doing their core job: research. We – the public – would certainly benefit from more climate scientists being involved, but I think we have to be willing to defend those who come under attack from others who find the scientific evidence inconvenient.
• Balance: A lot of the discussion about labeling has focused on the use of “denier” and, sometimes, “alarmist”. In my opinion, this ignores that some of the most offensive and insulting rhetoric is coming from one “side” more than the other (okay, I am biased). I found it particularly galling to see Andrew Montford say

Prof David Henderson suggested that “upholders” of the climate consensus and “dissenters” from it, were better, more neutral terms. I think I am the only participant in the climate debate who uses them much though.

Yes, it seems clear that Andrew is probably one of the only ones who uses these terms. The commenters on his blog prefer terms like liar, fraud, warmist, warmunist, alarmist…..

Okay, I’ve probably said enough. At the end of the day, I’m all in favour of improved dialogue. If I’m rather cynical about this whole idea it’s because I don’t think it’s all that difficult to achieve, if people actually wanted to do so. We’re all adults. We’ve all probably had contentious discussions with others that haven’t degenerated into name calling. Most of us probably stopped doing this when we entered adulthood. My personal view is that much of the dialogue would improve if people simply grew up and started behaving like adults.

Posted in Climate change, ClimateBall, Science | 134 Comments