Oliver Geden is a climate/energy policy analyst at the German Institute for International and Security Affairs. I’ve written before about Oliver Geden’s views and have, typically, been rather unimpressed by what he presents. He’s even accused me of misrepresenting him, but I’m still not quite sure how. A few days ago he published another comment in Nature Geoscience called An Actionable Climate Target.
The key message in his comment is:
In the future, the main focus should not be on temperature targets such as 2 or 1.5 °C, but on the target with the greatest potential to effectively guide policy: net zero emissions.
I think there is some merit to this, but there is still much with which I disagree. He also still seems incapable of avoiding having a dig at physical scientists, saying:
The problem-centred approach pursued by physical scientists assumes that appropriate policy action will follow from an accurate definition of DAI more or less automatically.
where DAI means Dangerous Anthropogenic Interference. I really think the above completely misrepresents what physical scientists actually assume. I don’t think that physical scientists believe that appropriate policy will automatically follow from an accurate definition of DAI. In this context, physical scientists are expected to inform, not influence. What they present should be based on the evidence available, not on what will most likely lead to what they (or others) think is the most appropriate policy action.
The information they provide should not change just because the resulting policy does not appear consistent (according to some) with the information presented. In fact, I think it would be wrong if physical scientists were to do so; if anything, the fact that the information they present does not appear to be influenced by the resulting policy action indicates that they’re basing it on the evidence available, rather than on what would most likely influence policy makers.
He then goes on to criticise temperature targets, saying:
Temperature limits are problematic since they create an ‘either/or’ situation: a 2 °C limit can be either hit or missed. If climate research showed that failure is likely, this would drastically reduce the motivation of policymakers, companies, non-governmental organisations and the public at large — and would force governments to adopt a less ambitious target immediately.
I guess it is true that either you achieve the target, or not, but it’s not clear that this is a good argument for not, at least, having it. I realise that these temperature targets are somewhat political, and are not really boundaries between everything being fine and catastrophe. However, they are regarded as targets beyond which we’d expect the impacts to become increasingly severe, and where the negatives likely outweigh the positives. It’s also my understanding that the 2 °C limit was chosen as a boundary beyond which we might cross tipping points, where some of the changes would become essentially irreversible.
Hence, even if we are likely to miss these targets, there would still seem to be some value in at least maintaining them, so as to remind policy makers that there is probably a vast difference between just missing them and missing them by a lot. What’s also slightly ironic about Oliver Geden’s suggestion is that it would seem – as I’ll explain below – to essentially be arguing for a less ambitious target, while claiming that this would be the result of maintaining temperature targets.
He then goes on to argue in favour of a zero emission target, rather than a temperature target:
In contrast to temperature targets, a target of zero emissions tells policymakers and the public precisely what has to be done, and it directly addresses problematic human activity.
Well, I think it is wrong to claim that this tells policy makers and the public precisely what has to be done. This also gives me an opportunity to mention that I went, yesterday, to hear Chris Rapley talking at the Edinburgh Science Festival. He said something that illustrates the problem – in my view – with Oliver Geden’s argument. He mentioned the Paris meeting, at which it was agreed to hold the increase in the global average temperature to well below 2 °C, while pursuing efforts to limit the temperature increase to 1.5 °C. However, he then went on to point out that this requires getting emissions to zero.
In other words, a temperature target already implies that we need to get to zero emissions; stabilising temperatures with respect to long-term anthropogenic warming requires that we eventually stop emitting CO2 into the atmosphere. The problem with Oliver Geden’s claim is that a zero emission target alone does not tell policymakers and the public precisely what needs to be done, because it is not – by itself – associated with any kind of temperature target. A temperature target, however, is associated with a target of zero emissions.
You might argue that this isn’t clear from a temperature target alone. However, these temperature targets are normally associated with a carbon budget, which is intended to indicate how much more CO2 we can emit if we want a certain chance (normally 66%) of achieving the target. It doesn’t take much to realise that if there is a limit to how much more we can emit, then we eventually have to stop emitting (i.e., a carbon budget is explicitly associated with getting to zero emissions).
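To make that link concrete, here is a minimal back-of-the-envelope sketch (in Python). The remaining budgets and the ~10 GtC per year emission rate are round, illustrative numbers of my own, not figures from Geden’s comment, and the linear decline is simply the most basic possible pathway:

```python
# Illustrative only: a finite carbon budget forces emissions to reach zero.
# The budgets and the current emission rate below are assumed round numbers.

def years_to_zero(remaining_budget_gtc, current_rate_gtc_per_yr):
    """Years until emissions must hit zero if they decline linearly from
    today's rate while cumulative emissions stay within the budget.

    Under a linear decline, cumulative emissions = rate * years / 2,
    so years = 2 * budget / rate.
    """
    return 2.0 * remaining_budget_gtc / current_rate_gtc_per_yr

current_rate = 10.0  # GtC per year, roughly today's emissions (assumed)
for budget in (250.0, 500.0):  # GtC, illustrative remaining budgets
    print(f"Budget of {budget:.0f} GtC -> emissions must reach zero in "
          f"~{years_to_zero(budget, current_rate):.0f} years")
```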
To be fair, I do think that it is good that Oliver Geden is stressing the need to get emissions to zero. However, I don’t think that this is, by itself, sufficient. The consequences of getting emissions to zero after emitting another 500 GtC will likely be vastly different to doing so after emitting another 1500 GtC. Admittedly, he does say:
every country will have to reach zero in the second half of the century.
which would presumably constrain how much more can be emitted before reaching zero emissions. However, I still fail to see how focusing on zero emissions only is somehow preferable to some kind of temperature target that is then associated with a carbon budget and – as a consequence – a requirement to get to zero emissions. The problem I can see with a zero-emissions-only target is that it could lead to people thinking that all we need to do is eventually get emissions to zero, which is clearly insufficient. If we think that there is a level of warming beyond which there could be severe negative consequences, then we need to get to zero emissions AND limit how much CO2 we eventually emit.
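To put rough numbers on why the cumulative amount matters, here is a similar sketch using the idea that peak warming scales approximately with cumulative carbon emissions (often expressed as the TCRE). The TCRE value and the cumulative emissions to date used below are assumed, approximate figures, so treat the output as illustrative only:

```python
# Rough sketch: peak warming scales roughly linearly with cumulative CO2
# emissions. Both numbers below are assumed, approximate values.
TCRE = 1.5              # deg C of warming per 1000 GtC emitted (assumed)
EMITTED_SO_FAR = 600.0  # GtC already emitted, approximate

def approx_peak_warming(additional_gtc):
    """Approximate warming above pre-industrial once emissions reach zero,
    if another `additional_gtc` is emitted before getting there."""
    return TCRE * (EMITTED_SO_FAR + additional_gtc) / 1000.0

for extra in (500.0, 1500.0):
    print(f"Another {extra:.0f} GtC before zero -> "
          f"~{approx_peak_warming(extra):.1f} deg C of warming")
```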
Physics
One of the nice things about physics (well, I like it) is that you can often quantify things by making basic back-of-the-envelope calculations. Maybe a classic example of this is David MacKay’s book about renewable energy called Sustainable Energy – Without the Hot Air. It’s a masterclass in how to use simplifying assumptions and basic physics to try and understand various physical processes.
[Figure: global energy flux diagram. Credit: Trenberth et al. (2008).]
We also know that although surface temperatures can vary (day/night, seasons, …), if we average across the whole globe and over a long enough time interval, the surface temperature is pretty steady (well, until we started adding GHGs to the atmosphere, that is). This tells us that the surface must – on average – be receiving as much energy as it loses. Since it is only receiving about 160Wm-2 from the Sun, it must be receiving – on average – about another 330Wm-2 from somewhere else. This is essentially the greenhouse effect; radiatively active gases in the atmosphere block outgoing long-wavelength radiation, returning some energy to the surface and causing the surface to warm up to a higher temperature than would be the case were there no such gases in the atmosphere (or no atmosphere at all).
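As a quick sanity check of those numbers, here is the surface budget as a back-of-the-envelope calculation. The ~160 and ~330 Wm-2 are the approximate values quoted above; the ~288 K surface temperature and the ~100 Wm-2 of non-radiative (convective and evaporative) losses are assumed round numbers in the spirit of a Trenberth-style budget:

```python
# Back-of-the-envelope check of the surface energy budget (all values
# are approximate, in W per square metre).
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

solar_absorbed = 160.0   # sunlight absorbed at the surface
back_radiation = 330.0   # downward long-wavelength radiation from the atmosphere

T_surface = 288.0        # approximate global-mean surface temperature, K
radiative_loss = SIGMA * T_surface**4   # surface emission, ~390 W m^-2
non_radiative_loss = 100.0              # convection + evaporation (assumed)

print(f"Energy gained by the surface: {solar_absorbed + back_radiation:.0f} W m^-2")
print(f"Energy lost by the surface  : {radiative_loss + non_radiative_loss:.0f} W m^-2")
# The two totals roughly balance, which is the point being made above.
```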
We also know that the planet as a whole is in approximate thermal equilibrium (well, again, before we started adding GHGs to the atmosphere) and that we absorb – on average – 240Wm-2 from the Sun. Therefore, we must be ultimately radiating 240Wm-2 back into space. Since it is the atmosphere that is blocking energy from being radiated directly from the surface to space, one way to think of this is that there is some effective radiating layer in the atmosphere from which we lose as much energy into space (240Wm-2) as we gain from the Sun. However, as illustrated by the Trenberth energy flux diagram, it’s not quite that simple; some does come directly from the surface and some from within the atmosphere. We also know that – in reality – more complex physical processes (such as convection and evaporation) play an important role in setting temperature gradients in the atmosphere. However, we can still get a good idea of what’s happening by considering these fairly simple illustrations and calculations.
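The corresponding calculation for the planet as a whole is just as simple: if roughly 240Wm-2 has to be radiated back to space, the Stefan-Boltzmann law gives the temperature of the effective radiating layer. The ~288 K surface value used for comparison is, again, an approximate assumed number:

```python
# Effective radiating temperature from sigma * T^4 = 240 W m^-2.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
absorbed_solar = 240.0   # W m^-2, globally averaged sunlight absorbed by the planet

T_effective = (absorbed_solar / SIGMA) ** 0.25
T_surface = 288.0        # K, approximate observed global-mean surface temperature

print(f"Effective radiating temperature: ~{T_effective:.0f} K")      # ~255 K
print(f"Surface minus effective        : ~{T_surface - T_effective:.0f} K")
# The ~33 K difference is, essentially, the greenhouse effect described above.
```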
We can also use this to understand what will happen if we add more greenhouse gases; it makes the atmosphere more opaque to outgoing radiation and raises the effective radiating layer to a higher altitude. This causes temperatures below this layer to increase so that the amount of energy being radiated back into space once again matches the amount of energy being received from the Sun. It is simply an enhanced greenhouse effect.
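One way to attach rough numbers to this picture is to note that, if temperature falls with height at roughly the average tropospheric lapse rate, raising the effective radiating layer warms everything below it. The lapse rate and the assumed rise in the emission altitude below are illustrative values, so the output is a rough estimate rather than a precise result:

```python
# Sketch: surface warming from a higher effective radiating layer.
T_effective = 255.0   # K, fixed by the need to radiate ~240 W m^-2 to space
T_surface = 288.0     # K, approximate current global-mean surface temperature
lapse_rate = 6.5      # K per km, roughly the average tropospheric lapse rate (assumed)

# Altitude of the effective radiating layer implied by these numbers:
z_effective = (T_surface - T_effective) / lapse_rate
print(f"Effective emission altitude: ~{z_effective:.1f} km")   # ~5 km

# If more greenhouse gases raise that layer by, say, 150 m (an assumed,
# purely illustrative shift), the surface below warms by lapse_rate * dz:
dz_km = 0.15
print(f"Surface warming for a {dz_km * 1000:.0f} m rise: ~{lapse_rate * dz_km:.1f} K")
```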
The above is actually a rather lengthy and convoluted way to introduce something I encountered recently. I came across a blog post that critiques Peter Ward’s ozone depletion theory. Peter Ward’s basic idea is that CO2-driven warming is wrong and that global warming is instead caused by the depletion of ozone. His argument (which is wrong) is that ozone absorbs ultra-violet (UV) radiation, that there is much more energy in the UV than in the infrared (IR), and therefore that the warming is driven by changes in the UV flux resulting from changes in ozone. His basic error is that even though a UV photon has much more energy than an IR photon, this does not mean that there is much more energy in the UV band than in the IR (you also need to account for the number of photons in each wavelength band).
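That point is easy to check numerically. The sketch below integrates the Planck function for a Sun-like (~5778 K) blackbody and shows that the UV carries only a modest fraction of the total output, even though each UV photon is far more energetic than an IR photon; the temperature, wavelength grid, and band boundaries are my own illustrative choices:

```python
import numpy as np

# Planck spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1.
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wavelength_m, temperature_k):
    return (2.0 * h * c**2 / wavelength_m**5
            / (np.exp(h * c / (wavelength_m * kB * temperature_k)) - 1.0))

T_sun = 5778.0                            # K, approximately the solar photosphere
wl = np.linspace(50e-9, 100e-6, 200_000)  # 50 nm to 100 microns
B = planck(wl, T_sun)
dl = wl[1] - wl[0]

total = np.sum(B) * dl                    # simple rectangle-rule integration
uv = np.sum(B[wl < 400e-9]) * dl          # UV taken here as wavelengths < 400 nm
print(f"Fraction of a 5778 K blackbody's output in the UV: {uv / total:.0%}")  # ~12%

# A 300 nm UV photon really is far more energetic than a 10 micron IR photon...
E_uv, E_ir = h * c / 300e-9, h * c / 10e-6
print(f"Photon energy ratio (300 nm vs 10 um): ~{E_uv / E_ir:.0f}x")            # ~33x
# ...but the energy carried in a band depends on how many photons it contains,
# which is why comparing single-photon energies tells you very little.
```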
What I found interesting is that Peter Ward made an appearance in the comments and we had a rather lengthy exchange of views. It was quite a pleasant exchange and only became somewhat tetchy towards the end. However, Peter Ward was completely unwilling to quantify his alternative theory and claimed that the standard methods for determining energy fluxes (as in the Trenberth-like energy flux diagram) are simply wrong – apparently because the energy of a photon is

E = hν

(which he – incorrectly – kept claiming was the energy per square metre).
Although a little frustrating, I found this discussion quite fascinating. Someone is proposing an alternative to a well-accepted theory, but won’t quantify their alternative and suggests that a lot of very basic physics is simply wrong; physics that has been extremely successful for a very long time. This is also physics that virtually every university in the world teaches its undergraduates and that has been used extensively in the development of advanced technologies that many of us use every day; are we just getting it right by chance?
To me, if you’re going to suggest an alternative to something that is well accepted, you have to be willing to actually show how it works quantitatively; you can’t just hand-wave. This is especially true if your alternative requires that some very basic things, accepted by virtually everyone else, are fundamentally wrong. If you can’t – or won’t – quantify your alternative, then the chances of you being correct are pretty small. If your alternative idea also requires that well-accepted ideas, which quantitatively match what we observe and measure, are wrong, then the chances of you being correct become negligibly small. Given this, the conclusion of the post where I encountered Peter Ward’s ideas is almost certainly correct.