Climate targets

Oliver Geden is a climate/energy policy analyst at the German Institute for International and Security Affairs. I’ve written before about his views and have typically been rather unimpressed by what he presents. He’s even accused me of misrepresenting him, but I’m still not quite sure how. A few days ago he published another comment in Nature Geoscience called An Actionable Climate Target.

The key message in his comment is:

In the future, the main focus should not be on temperature targets such as 2 or 1.5 °C, but on the target with the greatest potential to effectively guide policy: net zero emissions.

I think there is some merit to this, but there is still much with which I disagree. He also still seems incapable of avoiding having a dig at physical scientists, saying:

The problem-centred approach pursued by physical scientists assumes that appropriate policy action will follow from an accurate definition of DAI more or less automatically.

where DAI means Dangerous Anthropogenic Interference. I really think the above completely misrepresents what physical scientists actually assume. I don’t think that physical scientists believe that appropriate policy will automatically follow from an accurate definition of DAI. In this context, physical scientists are expected to inform, not influence. What they present should be based on the evidence available, not on what will most likely lead to what they (or others) think is the most appropriate policy action.

The information they provide should not change just because the resulting policy does not appear consistent (according to some) with the information presented. In fact, I think it would be wrong if physical scientists were to do so; that the information they present appears to not be being influenced by the resulting policy action is – if anything – indicative that they’re basing it more on the evidence available than on what would most likely influence policy makers.

He then goes on to criticise temperature targets, saying:

Temperature limits are problematic since they create an ‘either/or’ situation: a 2 °C limit can be either hit or missed. If climate research showed that failure is likely, this would drastically reduce the motivation of policymakers, companies, non-governmental organisations and the public at large — and would force governments to adopt a less ambitious target immediately.

I guess it is true that either you achieve the target or not, but it’s not clear that this is a good argument for not, at least, having it. I realise that these temperature targets are somewhat political, and are not really boundaries between everything being fine and catastrophe. However, they are regarded as targets beyond which we’d expect the impacts to become increasingly severe, and where the negatives likely outweigh the positives. It’s also my understanding that the 2 °C limit was chosen as a boundary beyond which we might pass tipping points, where some of the changes would become essentially irreversible.

Hence, even if we are likely to miss these targets, there would still seem to be some value in at least maintaining them, so as to remind policy makers that there is probably a vast difference between just missing them and missing them by a lot. What’s also slightly ironic about Oliver Geden’s suggestion is that it would seem – as I’ll explain below – to be essentially arguing for a less ambitious target, while claiming that this would be the result of maintaining temperature targets.

He then goes on to argue in favour of a zero emission target, rather than a temperature target:

In contrast to temperature targets, a target of zero emissions tells policymakers and the public precisely what has to be done, and it directly addresses problematic human activity.

Well, I think it is wrong to claim that this tells policymakers and the public precisely what has to be done. This also gives me an opportunity to mention that I went, yesterday, to hear Chris Rapley talking at the Edinburgh Science Festival. He said something that illustrates the problem – in my view – with Oliver Geden’s argument. He mentioned the Paris meeting at which it was agreed to hold the increase in the global average temperature to well below 2 °C, while pursuing efforts to limit the temperature increase to 1.5 °C. However, he then went on to point out that this requires getting emissions to zero.

In other words, a temperature target already implies that we need to get to zero emissions; stabilising temperatures with respect to long-term anthropogenic warming requires that we eventually stop emitting CO2 into the atmosphere. The problem with Oliver Geden’s claim is that a zero emission target alone does not tell policymakers and the public precisely what needs to be done, because it is not – by itself – associated with any kind of temperature target. A temperature target, however, is associated with a target of zero emissions.

You might argue that this isn’t clear from a temperature target alone. However, these temperature targets are normally associated with a carbon budget, which is intended to indicate how much more CO2 we can emit if we want a certain chance (normally 66%) of achieving the target. It doesn’t take much to realise that if there is a limit to how much more we can emit, then we eventually have to stop emitting (i.e., a carbon budget is explicitly associated with getting to zero emissions).
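To put rough numbers on why the size of the budget matters, here is a minimal back-of-the-envelope sketch using the transient climate response to cumulative emissions (TCRE). The TCRE value and the cumulative-emissions-to-date figure below are illustrative assumptions (roughly mid-range of published estimates), not numbers taken from Geden’s comment:

```python
# Rough warming implied by cumulative carbon emissions via the TCRE.
# The TCRE here (~1.65 degC per 1000 GtC) is an illustrative assumption,
# roughly mid-range of the IPCC AR5 range of 0.8-2.5 degC per 1000 GtC.
TCRE = 1.65e-3        # degC per GtC
emitted_so_far = 600  # GtC, approximate cumulative emissions to date (assumption)

for extra in (500, 1500):
    warming = TCRE * (emitted_so_far + extra)
    print(f"Zero emissions after another {extra} GtC -> ~{warming:.1f} degC total warming")
```

Even this crude linear scaling shows why “get to zero eventually” is not, by itself, a sufficient target: when we get there matters a great deal.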

To be fair, I do think that it is good that Oliver Geden is stressing the need to get emissions to zero. However, I don’t think that this is, by itself, sufficient. The consequences of getting emissions to zero after emitting another 500 GtC will likely be vastly different to doing so after emitting another 1500 GtC. Admittedly, he does say

every country will have to reach zero in the second half of the century.

which would presumably constrain how much more can be emitted before reaching zero emissions. However, I still fail to see how focusing on zero emissions only is somehow preferable to some kind of temperature target that is then associated with a carbon budget and – as a consequence – a requirement to get to zero emissions. The problem I can see with a zero emissions only target is that it could lead to people thinking that all we need to do is eventually get emissions to zero, which is clearly insufficient. If we think that there is a level of warming beyond which there could be severe negative consequences, then we need to get to zero emissions AND limit how much CO2 we eventually emit.

Posted in advocacy, Climate change, Global warming, Policy, Science | 24 Comments

Physics

One of the nice things about physics (well, I like it) is that you can often quantify things by making basic back-of-the-envelope calculations. Maybe a classic example of this is David MacKay’s book about renewable energy called Sustainable energy – without the hot air. It’s a masterclass in how to use simplifying assumptions and basic physics to try and understand various physical processes.

Another example might be the standard Trenberth-like energy flux diagram. It’s a nice, simple illustration that tells us quite a lot about the basic greenhouse effect. If we consider the surface, we know that it receives something like 160 W m⁻² from the Sun and has an average temperature of around 288 K. Given this temperature, we know it must be radiating around 390 W m⁻². We also know that it must lose some energy via non-radiative processes (thermals and evaporation); maybe another 100 W m⁻². Hence it is losing – on average – a bit less than 500 W m⁻², but only receiving – on average – about 160 W m⁻² from the Sun.

We also know that although surface temperatures can vary (day/night, seasons, …), if we average across the whole globe and over a long enough time interval, the average is pretty steady (well, until we started adding GHGs to the atmosphere, that is). This tells us that the surface must – on average – be receiving as much energy as it loses. Since it is only receiving about 160 W m⁻² from the Sun, it must be receiving – on average – about another 330 W m⁻² from somewhere else. This is essentially the greenhouse effect; radiatively active gases in the atmosphere block outgoing long-wavelength radiation, returning some energy to the surface and causing the surface to warm to a higher temperature than would be the case were there no such gases in the atmosphere (or no atmosphere at all).
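The arithmetic above is simple enough to script. A quick check of these numbers using the Stefan–Boltzmann law (the 100 W m⁻² non-radiative term is the rough value quoted above, not a precise measurement):

```python
# Reproducing the surface energy budget numbers with the
# Stefan-Boltzmann law: emitted flux = sigma * T^4.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

T_surface = 288.0       # K, global mean surface temperature
solar_absorbed = 160.0  # W m^-2 absorbed by the surface from the Sun
non_radiative = 100.0   # W m^-2 lost via thermals and evaporation (rough)

emitted = SIGMA * T_surface**4                 # ~390 W m^-2 radiated by the surface
total_loss = emitted + non_radiative           # a bit less than 500 W m^-2
back_radiation = total_loss - solar_absorbed   # what the atmosphere must return

print(f"Surface emits      {emitted:.0f} W m^-2")
print(f"Total surface loss {total_loss:.0f} W m^-2")
print(f"Back-radiation     {back_radiation:.0f} W m^-2")
```

The back-radiation term comes out at roughly 330 W m⁻², which is the greenhouse contribution described above.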

We also know that the planet as a whole is in approximate thermal equilibrium (well, again, before we started adding GHGs to the atmosphere) and that we absorb – on average – 240 W m⁻² from the Sun. Therefore, we must ultimately be radiating 240 W m⁻² back into space. Since it is the atmosphere that blocks energy from being radiated directly from the surface to space, one way to think of this is that there is some effective radiating layer in the atmosphere from which we lose as much energy into space (240 W m⁻²) as we gain from the Sun. However, as illustrated by the Trenberth energy flux diagram, it’s not quite that simple; some energy does come directly from the surface and some from within the atmosphere. We also know that – in reality – more complex physical processes (such as convection and evaporation) play an important role in setting temperature gradients in the atmosphere. However, we can still get a good idea of what’s happening by considering these fairly simple illustrations and calculations.

We can also use this to understand what will happen if we add more greenhouse gases; doing so makes the atmosphere more opaque to outgoing radiation and raises the effective radiating layer to a higher altitude. This causes temperatures below this layer to increase so that the amount of energy being radiated back into space once again matches the amount of energy being received from the Sun. It is simply an enhanced greenhouse effect.
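As a sketch of this effective-layer picture: inverting the Stefan–Boltzmann law gives the temperature at which 240 W m⁻² is radiated, and a typical tropospheric lapse rate (the 6.5 K km⁻¹ figure is a standard textbook value, used here purely for illustration) gives a rough altitude for that layer:

```python
# Effective radiating temperature for 240 W m^-2 out to space, and a
# rough altitude for the effective radiating layer, assuming a standard
# tropospheric lapse rate of 6.5 K/km (a textbook value, illustrative only).
SIGMA = 5.670e-8  # W m^-2 K^-4

T_eff = (240.0 / SIGMA) ** 0.25  # invert sigma * T^4 = 240 W m^-2
T_surface = 288.0                # K
lapse_rate = 6.5                 # K per km

altitude = (T_surface - T_eff) / lapse_rate
print(f"Effective radiating temperature ~{T_eff:.0f} K")
print(f"Effective radiating layer at    ~{altitude:.0f} km")
```

This gives an effective radiating temperature of about 255 K at an altitude of roughly 5 km; raising that layer while keeping its temperature fixed at ~255 K is one way to see why adding greenhouse gases warms everything below it.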

The above is actually a rather lengthy and convoluted way to introduce something I encountered recently: a blog post that critiques Peter Ward’s ozone depletion theory. Peter Ward’s basic idea is that CO2-driven warming is wrong and that what is actually causing global warming is the depletion of ozone. His argument (which is wrong) is that ozone absorbs ultraviolet (UV) radiation, that there is much more energy in the UV than in the infrared (IR), and therefore that the warming is driven by changes in the UV flux caused by changes in ozone. His basic error is that even though a UV photon has much more energy than an IR photon, this does not mean that there is much more energy in the UV than in the IR; you also need to account for the number of photons in each wavelength band.
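The point is easy to check numerically. The sketch below integrates the Planck curve for a 5778 K blackbody (roughly the Sun’s effective temperature) over illustrative UV and IR wavelength bands; the band edges and the example photon wavelengths are my choices for illustration, not anything from Ward:

```python
import math

def planck(lam, T):
    """Spectral exitance (W m^-2 per metre of wavelength): Planck's law."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return 2 * math.pi * h * c**2 / lam**5 / math.expm1(h * c / (lam * k * T))

def band_power(lam_lo, lam_hi, T, n=5000):
    """Trapezoidal integral of the Planck curve over a wavelength band."""
    step = (lam_hi - lam_lo) / n
    lams = [lam_lo + i * step for i in range(n + 1)]
    vals = [planck(l, T) for l in lams]
    return step * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

T_sun = 5778  # K, approximate effective temperature of the Sun

uv = band_power(100e-9, 400e-9, T_sun)  # ultraviolet band
ir = band_power(700e-9, 100e-6, T_sun)  # infrared band

# A single UV photon (say 300 nm) carries ~33x the energy of an
# IR photon (say 10 um), since photon energy is h*nu ...
h, c = 6.626e-34, 2.998e8
photon_ratio = (h * c / 300e-9) / (h * c / 10e-6)

# ... but the Sun still emits several times MORE total power in the
# IR band than in the UV band, because there are far more IR photons.
print(f"UV photon / IR photon energy:  ~{photon_ratio:.0f}x")
print(f"IR band power / UV band power: ~{ir / uv:.1f}x")
```

So a higher energy per photon in no way implies a higher total flux in that band, which is precisely the error described above.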

What I found interesting is that Peter Ward made an appearance in the comments and we had a rather lengthy exchange of views. It was quite a pleasant exchange and only became somewhat tetchy towards the end. However, Peter Ward was completely unwilling to quantify his alternative theory and claimed that the standard methods for determining energy fluxes (as in the Trenberth-like energy flux diagram) are simply wrong – apparently because the energy of a photon is hν (which he – incorrectly – kept claiming was the energy per square metre).

Although a little frustrating, I found this discussion quite fascinating. Someone is proposing an alternative to a well accepted theory, but won’t quantify their alternative and suggests that a lot of very basic physics is simply wrong; physics that has been extremely successful for a very long time. This is also physics that virtually every university in the world teaches its undergraduates and that has been used extensively in the development of advanced technologies that many of us use every day; are we just getting it right by chance?

To me, if you’re going to suggest an alternative to something that is well accepted, you have to be willing to actually show how it works quantitatively; you can’t just hand-wave. This is especially true if your alternative requires that some very basic things, accepted by virtually everyone else, are fundamentally wrong. If you can’t – or won’t – quantify your alternative, then the chances of you being correct are pretty small. If your alternative idea also requires that well-accepted ideas which quantitatively match what we observe and measure are wrong, then the chances of you being correct become negligibly small. Given this, the conclusion of the post where I encountered Peter Ward’s ideas is almost certainly correct:

his theory is garbage.

Posted in Climate change, ClimateBall, Global warming, Greenhouse effect, Science | 46 Comments

ECS too low?

I’ve regularly written about the Equilibrium Climate Sensitivity (ECS), in particular estimates by Nic Lewis and why they are probably a bit too low. For balance I should probably now mention a new paper called Observational constraints on mixed-phase clouds imply higher climate sensitivity by Tan, Storelvmo & Zelinka.

You can probably get the key result from the description, which finishes with

Tan et al. used satellite observations to constrain the radiative impact of mixed-phase clouds. They conclude that ECS could be between 5.0 °C and 5.3 °C — higher than suggested by most global climate models.

If you want a rather alarmist take on it, you can read the Guardian’s article. There is a more measured response by Chris Mooney.

The key point seems to be that they considered cloud feedbacks and suggest that these might be much more positive than we currently think. They

show that the ECS can be up to 1.3°C higher in simulations where mixed-phase clouds consisting of ice crystals and supercooled liquid droplets are constrained by global satellite observations.

So, what’s the basic issue? As James Annan points out, an ECS of 5.3 °C is pretty hard to reconcile with our understanding of past climate changes. Bear in mind that the ECS is the equilibrium response considering fast feedbacks only, and so the Earth system sensitivity (ESS) – which includes slow feedbacks – could be higher. However, this study is considering clouds, which are fast feedbacks.

Also, even if energy balance estimates might be on the low side, they’re still a reasonable way to get a ballpark figure. If they’re wrong by a factor of two or more, it either means that natural variability is masking a lot of attributable warming, or the response is highly non-linear. Both are possible, but it seems unlikely that the difference can be quite this large.
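For reference, the energy-balance approach mentioned above amounts to a one-line calculation. The numbers below are illustrative round figures of the sort used in Otto et al.-style estimates, not values from the Tan et al. paper:

```python
# Energy-balance ECS estimate: ECS = F_2x * dT / (dF - dQ).
# All input values are illustrative round numbers, not from any
# specific study.
F_2x = 3.7  # W m^-2, radiative forcing from doubled CO2
dT = 0.9    # degC, observed warming over the analysis period
dF = 2.3    # W m^-2, change in total anthropogenic forcing
dQ = 0.65   # W m^-2, current planetary energy imbalance

ECS = F_2x * dT / (dF - dQ)
print(f"Energy-balance ECS ~ {ECS:.1f} degC")
```

With these inputs the estimate comes out around 2 °C, which illustrates why a value of 5.3 °C would require the observational inputs to be wrong by a factor of two or more.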

A number of fairly high-profile scientists also seem rather skeptical; Kevin Trenberth in the Guardian article and Gavin Schmidt on Twitter and in Chris Mooney’s article. It’s clear that clouds are one of the major uncertainties with respect to climate sensitivity. However, just as some of the energy balance models seem to produce results that appear a bit too low, this study seems to be suggesting results that are somewhat on the high side. Also, promoting it as they have is – I would argue – somewhat sub-optimal. It’s one thing to present controversial results, but claiming that

global climate models have significantly underestimated how much the Earth’s surface temperature will rise if greenhouse gas emissions continue to increase as expected

is much stronger than is justified.

Posted in Climate change, Climate sensitivity, Gavin Schmidt, Science | 22 Comments

The value of social science

Since I’ve written about Science and Technology Studies (STS) before, I thought I would comment on the formation of a new group called Association for Studies in Innovation Science and Technology. I encountered it in a Nature column called Recognise the value of social science.

I should start by stressing that I think that the social sciences are extremely valuable. I also think that doing social science well is extremely difficult; typically, social scientists consider systems that are much more complex than those considered by physical scientists. I do, however, have a number of reservations about this new organisation and the motivations behind its formation. The article starts with

If the science community is serious about integrating social science into its thinking and operations — and statements by everyone from Nature and the UK government to Paul Nurse, former president of the Royal Society, indicate that it is — then we social scientists must do more to make this happen.

Seems fine, although I’m not entirely sure that I understand what is meant by integrating social science into its thinking and operations. Being aware of the society in which we operate seems quite reasonable. Being conscious of the society in which we operate when actually carrying out research may not be (at least not in the sense of it influencing how we interpret our research).

The article then says

And we want to make clear that social science — especially science, technology and innovation studies (STIS) — should be integral to science and does not merely handle external issues, for example by addressing ‘public acceptance’.

What does integral to science mean? Something that I think most physical scientists believe is that they’re typically studying systems that obey fundamental laws of nature. Our understanding of such systems is therefore expected to be independent of societal values or political ideologies. Ideally, our understanding should not depend on who does the research, or where it is done. Of course, in reality this may not be entirely true, but the goal is for the evidence to constrain our understanding in such a way that societal/political influences are minimised. So, how do the social sciences fit into such a framework?

One of the key goals for this new organisation is

to lobby for social-scientist involvement in the earliest stages of research projects, when emerging ideas are most open to discussion.

Why? In what way would including a social scientist help to improve how we interpret research results in the physical sciences? To be clear, it might, but it’s not obvious how. Also, if a social scientist has some interest in some other research area, they can presumably try to get involved and see if they can contribute something. Maybe they’d bring some new perspective, but it would still be in the context of trying to understand some physical system; not trying to impose some kind of social science context on our understanding of that system.

Apparently

Science and society are not discrete, as some researchers seem to assume. Knowledge — about the impacts of climate change, for example — gets its value and usefulness only when rooted in particular contexts.

Our perception of the significance of the impacts of climate change might depend on the context, but whether or not certain impacts will occur does not depend on the context. How much we will warm if we emit a further 1000 GtC does not depend on how society perceives the knowledge about the impacts of climate change.

The above comment is followed by

This makes it diverse and contested.

Yes it is diverse and contested, but why is this and what does it imply? Does it imply something with respect to the science itself? I can’t really see why. Should researchers really bear this in mind when interpreting their results? Again, I can’t see why. I don’t think the interpretation should depend on whether or not the result is likely to be controversial (okay, there may well be scenarios where we would want to double check potentially controversial results, but only because we might want to be sure, not because we might want to interpret the results differently).

I was going to say a bit more, but this is getting a bit long. The article ends with

To make the most of science, we must know how science operates, and understand the factors that influence it. Social scientists in the United Kingdom and elsewhere have been studying that for more than 50 years. We are ready and able to help.

This may explain why I’m slightly negative about this. My exposure to Science and Technology Studies (STS) might be a little limited, but if they have spent 50 years studying how science operates and the factors that influence it, a number of them still appear not to understand it very well.

Certainly my impression is that they want to impose societal and political factors on the research process in a way that would make most physical scientists very uncomfortable. Rather than playing a role in helping to define how scientific evidence may influence society and – in particular – policy making, they seem to want to allow societal and political factors to influence how we do research and how we interpret our results.

Maybe the above is an unfair characterisation of STS and maybe I simply don’t quite understand what they’re suggesting. However, I think I’m not alone in this and it doesn’t help if STS researchers don’t even appear to appreciate why physical scientists might have some reservations about what they seem to be suggesting. Similarly, if STS researchers think they’re somehow the ideal people to sit at some interface between science/technology and society, it’s hard to see why if they can’t even clearly explain their role to other groups of researchers.

I also think that research is about studying things and producing and presenting results. If there is a group of researchers in the social sciences who would like to study the role of science in society and in policy making, go ahead. You don’t need some kind of buy-in from physical scientists; just get out there, do some research, and let the results speak for themselves. However, this comment in the article makes me think that their goal is something more than simply being researchers who investigate science and technology in society:

We want to work at national and regional levels, from the UK government and research-funding councils to professional science bodies and the devolved governments in the four UK nations, which are experimenting with science and technology policies.

Are these researchers, or lobbyists?

Posted in Climate change, Global warming, Science | 183 Comments

On transparency

A while ago I wrote a post about a Nature commentary on Research Integrity and transparency, by Stephan Lewandowsky and Dorothy Bishop. Warren Pearce and colleagues wrote a brief response and have since expanded on this in a blog post.

Something that struck me when I read the Pearce et al. response is that I’d forgotten how antagonistic (okay, maybe too strong, but I can’t think of another term) debates can be in this context; I don’t think that the Pearce et al. response has anything positive to say about the Lewandowsky and Bishop article. This is a little surprising given that one of the Pearce et al. criticisms seems to be that one should be careful of over-simplifying what is a complex issue. My interpretation of the Lewandowsky and Bishop article is that we should indeed be careful of simplistically thinking that transparency in research is some kind of panacea. We shouldn’t think that more transparency will mean that science will somehow be free from attack, or that it will somehow make people more trusting of science in general. So Pearce et al. are correct that we should be careful of over-simplifying this issue, but it’s not clear in what way Lewandowsky and Bishop did so.

Pearce et al. then discuss the issue of experts, saying:

the fundamental question is who counts as an expert, and under what conditions?

I realise that there might not be some kind of clear boundary between expert and non-expert, but surely it’s pretty easy to establish whether or not someone qualifies as an expert. Of course, it is quite possible for someone who isn’t formally an expert to make a positive contribution to a field, either by bringing some kind of fresh perspective, or by engaging with scientists who are then forced to think more about what they’re presenting, how they’re doing so, and the significance of their research. None of this, however, suggests that making all of one’s data and code available is likely to lead to some layperson noticing something obvious that experts have missed. I agree that this is not an argument against doing so, but – similarly – it doesn’t seem like a particularly good argument for doing so.

Pearce et al. then discuss the role of the public and suggest that [a] more fruitful approach to addressing public doubts about science was proposed by David Demeritt in 2001, who said:

“The proper response to public doubts is not to increase the public’s technical knowledge about and therefore belief in the scientific facts of global warming. Rather, it should be to increase public understanding of and therefore trust in the social process through which those facts are scientifically determined. Science does not offer the final word, and its public authority should not be based on the myth that it does, because such an understanding of science ignores the ongoing process of organized skepticism that is, in fact, the secret of its epistemic success. Instead scientific knowledge should be presented more conditionally as the best that we can do for the moment. Though perhaps less authoritative, such a reflexive understanding of science in the making provides an answer to the climate skeptics and their attempts to refute global warming as merely a social construction.”

I actually don’t find much to disagree with in the above quote; I think that a better understanding of the scientific process would be of benefit. However, it seems that my interpretation of the above is somewhat different to that of Pearce et al. One thing that concerns me about the drive for greater transparency is that it could lead to a greater trust in individual studies. I don’t think that this is necessarily a good thing. Typically we start to trust our understanding of a scientific topic when results from different studies by different people/groups start to converge in some way.

We don’t increase our trust in a particular study simply because we can’t find an error and – similarly – we shouldn’t necessarily dismiss a study because someone finds a mistake. Likewise, we shouldn’t trust something more because the authors have been completely transparent, and we shouldn’t necessarily dismiss a study because the authors have not been as transparent as we might have liked; we trust the overall scientific method, not individual studies or individual researchers. That, to me, is the social process through which those facts are scientifically determined. So, I don’t necessarily see anything in the above quote that is at odds with what was being suggested by Lewandowsky and Bishop.

Pearce et al. finish their post by saying

What is noticeable is how little these social sciences critiques have cut through to those in the natural sciences.

Well, there may be some reasons for this. From what I’ve seen, some of the social science critiques appear to be coming from those who seem to think that they’re in some kind of special position from which they can observe and critique the natural/physical sciences. Why? We’re all researchers. Most of us work in the same environments, with the same pressures and incentives. We all potentially suffer from biases. We can all make mistakes. We’re all expected to engage with the public and, potentially, with policy makers. So, maybe some natural scientists just don’t really see why they should pay much attention to some social scientists who seem to think they’re in a position to critique how natural/physical scientists undertake their research. It also doesn’t help when some social scientists make it fairly clear that they don’t really understand what’s being presented by natural/physical scientists.

In my view, if social scientists want their critiques to be taken seriously by natural/physical scientists, they should put more effort into engaging with them directly, rather than appearing to stand back and simply observe. On the other hand, I do remember a post in which some commenters suggested that I simply didn’t understand what was being presented by the social scientists. This may be true, but – if so – I was not the only physicist to misunderstand what was being presented, which might suggest that social scientists are not putting sufficient effort into making their critiques understandable.

I’ll end by quoting their final two sentences

To be clear, there is no excuse for ignoring the existing evidence base. However, we believe that social scientists must be more proactive in using that evidence base in order to lead the debate from a position of strength.

What evidence and why should social scientists be aiming to lead the debate from a position of strength? I don’t really understand what is being suggested here, but maybe this is an opportune moment to re-highlight Michael Tobis’s post about swimming in your own lane.

Posted in Climate change, ClimateBall, Global warming, Science, Universities | 40 Comments

Free speech in academia

I wrote a post a while ago about the newly formed Heterodox Academy. The basic motivation of the Heterodox Academy is [t]o increase viewpoint diversity in the academy, with a special focus on the social sciences. The basic idea is that there is too little political diversity in some academic areas, and that this creates an environment with little viewpoint diversity and in which certain ideas become orthodoxy.

I had two main criticisms of the basic premise of the Heterodox Academy. One was simply that if there are biases in academia that can lead to poor scholarship, then the solution – in my view – is to promote good scholarship, rather than simply introducing a new set of biases. The other criticism I had was that even though I am a huge fan of diversity, there is – again, in my view – a difference between a diversity of intrinsic characteristics, and a diversity of viewpoints. I think it quite reasonable (in fact, essential) that people should not be personally challenged because of their characteristics; people of all races, genders, sexual orientations,…. should feel welcome in an academic environment.

Should this also be true for people of all viewpoints? I can’t see why. The right to express one’s views (within the law) is a fundamental part of a free and open society. Universities are intended to be sites of Academic Freedom, where people are free to do research that might produce inconvenient results, and to express views that may be uncomfortable to some. That doing so may create an environment in which people with certain viewpoints may not feel comfortable, or welcome, is not a reason to discourage people from doing so. Of course, doing so in a classroom setting may not be appropriate, but we’re talking here about viewpoint diversity amongst academics, rather than between academics and students. Which brings me to the reason why I’m writing this post now.

Something that has struck me about this whole topic is that it is often framed in terms of being a “free speech issue” and yet some of the proposed solutions (or some of the criticisms of the current system) appear to be suggesting that we attempt to constrain what others can say. This, to me, seems rather ironic, and is the topic of a recent article called The Free Speech Fallacy that discusses this in the context of the Heterodox Academy (H/T Joshua).

The basic point is that arguing against what others have said on the basis that what they’re promoting would impinge on free speech seems to be essentially doing what you’re criticising others for supposedly doing. You’re not actually addressing what they’re saying; you’re simply trying to delegitimise it on the basis of it violating something we hold as fundamental to our societies. Even if you have a point, your argument is not really any better than the argument you’re criticising. Of course, given the existence of free speech, one is quite entitled to make such an argument, but then it’s hard to believe that you’re doing so because you greatly value free speech; it would seem more likely that you just don’t like what the other parties have said.

Posted in ClimateBall, Personal | 353 Comments

Grant troughers?

I wandered over to Paul Homewood’s site, and discovered that his most recent post was about Royal Society Fellowships. He remarks that:

It won’t come as any surprise to learn that there are many other climate scientists at the trough.

and then lists those climate scientists who hold Royal Society Fellowships and how much their Fellowship is worth. I left a comment that (when I last looked) hadn’t yet appeared. I don’t really care, but I do find describing these people as being at the trough rather insulting. On the off chance that Paul Homewood is simply ignorant, rather than nasty and intolerant, I’ll explain something.

The most common Royal Society Fellowship is the University Research Fellowship (URF). It’s aimed at researchers who don’t yet have permanent jobs and is extremely competitive; I think the success rate is around 5%. The Fellowship typically starts off with a 5-year term, renewable to 8, and sometimes 10, years. The money covers everything: the researcher’s salary, pension and national insurance contributions, travel and computing costs, and university administration costs.

Most of the researchers will be Grade 8 or Grade 9, which means their salaries will be somewhere in the range of £40k – not a bad salary, but remember that these are thought to be some of the strongest young researchers in the country. If Paul Homewood thinks this is them at the trough, pigging out on public money, he’s got a strange idea of what being at the trough means.

Posted in Personal, Science | 37 Comments