Watt about breaking the ‘pal review’ glass ceiling

Pat Frank has a guest post on WUWT about breaking the ‘pal review’ glass ceiling in climate modelling. It’s essentially about a paper of his that he has been trying to get published and that has now been rejected 6 times. As you can imagine, this means that there is some kind of massive conspiracy preventing him from publishing his ground-breaking work that would fundamentally damage the underpinnings of climate modelling.

In fact, we discussed Pat Frank’s paper here, which was based around a video that Patrick Brown produced to discuss the problems with Pat Frank’s analysis.

I’m going to briefly try and explain it again (mainly based on a part of Patrick Brown’s video, which I will include again at the end of this post). You could consider a simple climate model as being a combination of incoming, and outgoing, fluxes. The key ones would be the incoming short-wavelength flux, the outgoing short-wavelength flux (both clear-sky, and cloud), the outgoing long-wavelength flux (also both clear-sky and cloud) and a flux into the deep ocean. How the temperature changes will then depend on the net flux and the heat capacity, C (i.e., how much energy it takes to increase the temperature by some amount). This is illustrated in the equation below.

\dfrac{dT}{dt} = \dfrac{{\rm Incoming\ SW} - {\rm Cloud\ SW} - {\rm Clear\ SW} - {\rm Cloud\ LW} - {\rm Clear\ LW} - Q}{C}

So, what has Pat Frank done? He’s considered one of the terms in the above equation (the Cloud \ LW term) and found that there is a discrepancy between what climate models suggest it should be and what is observed, with some models having quite a large discrepancy (although the multi-model mean is actually quite close to the observations). It turns out that the root-mean-square error between models and observations is about ± 4 Wm-2. Pat Frank assumes that this error should then be propagated at every timestep so as to determine the uncertainty in the temperature projection. This then produces an uncertainty that grows with time, becoming very large within only a few decades.

There are a number of ways to explain why this is wrong. One is simply that you should really consider the uncertainties on all of the terms, not just one. A more crucial one, though, is that the error in the long-wavelength cloud forcing is really a base-state error, not a response error. We don’t expect it to vary randomly at every timestep with a standard deviation of 4 Wm-2; it is simply that some models have estimates for the long-wavelength cloud forcing that are quite a bit different from what is observed.

So, what is the impact of this potential discrepancy? Consider the equation above, and imagine that all the terms are close to what we would expect from observations. Consider running the model from some initial state and assume that the incoming short-wavelength flux, and the atmospheric composition, are constant. Also, bear in mind that some of the fluxes depend on temperature, T. If we run the simulation long enough, we’d expect the system to settle to an equilibrium state in which all the fluxes balance, and in which the temperature is constant (i.e., dT/dt = 0).

Now, consider rerunning the simulation, but with a slightly different long-wavelength cloud forcing. Again, if we run it long enough, it will settle to an equilibrium state in which the fluxes balance and the temperature is constant. However, since the long-wavelength cloud forcing is different, some of the other fluxes will also be different, and the equilibrium temperature will, consequently, also be different. There will be an offset, compared to the first simulation, but it won’t grow with time simply because one simulation had a different long-wavelength cloud forcing compared to the other.
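To make the contrast concrete, here is a minimal sketch of the zero-dimensional picture above, dT/dt = (F − λT)/C. This is my own toy model, not Pat Frank’s calculation or any actual GCM; the heat capacity, feedback parameter and one-year timestep are assumed round numbers.

```python
# A toy zero-dimensional energy-balance model: dT/dt = (F - lam*T)/C.
# C, lam and the timestep are illustrative assumptions, not model values.

C = 3.0e8    # heat capacity of an ocean mixed layer, J m^-2 K^-1 (assumed)
lam = 1.5    # net feedback parameter, W m^-2 K^-1 (assumed)
dt = 3.15e7  # one-year timestep, in seconds
years = 200

def equilibrate(forcing_offset):
    """Integrate dT/dt = (forcing_offset - lam*T)/C from T = 0."""
    T = 0.0
    for _ in range(years):
        T += (forcing_offset - lam * T) / C * dt
    return T

# Two runs differing only by a constant 4 W m^-2 bias in one flux term:
T_base = equilibrate(0.0)
T_biased = equilibrate(4.0)
offset = T_biased - T_base   # settles near 4/lam ~ 2.7 K and stays there

# By contrast, accumulating a fresh +/-4 W m^-2 error in quadrature every
# year, with no feedback damping (Frank's approach), grows without bound:
sigma_T = [(4.0 * dt / C) * n ** 0.5 for n in range(years + 1)]
```

A constant base-state bias shifts the equilibrium by about 4/λ and then stays put, whereas the quadrature accumulation grows like √N without limit.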

So, the fact that there is a discrepancy between the modelled long-wavelength cloud forcing and observations does not imply an error that should be propagated at every timestep (as Pat Frank claims). It mainly implies an offset, in the sense that the magnitude of this discrepancy will affect the equilibrium state to which the models tend. Anyway, I’ve said more than I intended. Patrick Brown’s video, which addresses Pat Frank’s error-propagation suggestion, is below and goes into this in much more detail than I have here.


Infrared absorption of atmospheric carbon dioxide

Geoff Price made me aware of a paper, by an apparently highly published physicist, that considers the infrared absorption of atmospheric carbon dioxide. It concludes that

CO2 is a very weak greenhouse gas and cannot be accepted as the main driver of climate change.

You might suggest that I should just ignore such a clearly nonsensical paper, but it is being promoted on notrickszone (or, as I like to call it, fulloftrickszone) and it’s sometimes useful to try and understand what they’ve done wrong.

Essentially what this paper concludes is that, relative to a baseline CO2 concentration of 400ppm, the change in radiative forcing, \Delta F, if we change the atmospheric CO2 to a new concentration, C_{\rm CO2}, is

\Delta F = 1.881 \ln \left( \dfrac{C_{\rm CO2}}{400} \right),

which is considerably smaller than suggested by other analyses:

\Delta F = 5.35 \ln \left( \dfrac{C_{\rm CO2}}{400} \right).

In other words, this new analysis suggests that doubling atmospheric CO2 should only produce a change in forcing of 1.3 Wm-2, rather than 3.7 Wm-2.
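The two coefficients are easy to compare directly. A quick sketch (the 5.35 coefficient is the widely used simplified logarithmic expression; the 1.881 one is the paper's):

```python
from math import log

def delta_F(c_new, c_ref=400.0, alpha=5.35):
    """Simplified logarithmic CO2 forcing: Delta F = alpha * ln(C / C_ref)."""
    return alpha * log(c_new / c_ref)

# Forcing from doubling CO2 (400 -> 800 ppm) under each coefficient
standard = delta_F(800.0)             # ~3.7 Wm-2
paper = delta_F(800.0, alpha=1.881)   # ~1.3 Wm-2
```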

So, what’s wrong with this new analysis? Let me try and explain using the figure on the right, which I produced using MODTRAN. The left-hand panel shows an example of a spectrum that you might measure if you were observing the Earth from space. The right-hand panel is the associated atmospheric temperature profile.

The coloured curves in the left-hand panel are example blackbody spectra at different temperatures. What this shows is that in some wavelength ranges, the spectrum we would observe comes from regions that are quite warm, and in other wavelength ranges, from regions that are quite cool. This is because, in some wavelength bands, the surface can emit directly to space, while in others, it’s coming from within the atmosphere (in some cases, even from the stratosphere, but I’ll mostly ignore that). Since the temperature drops with altitude (in the troposphere, at least) the emission from within the atmosphere is coming from a region that is colder than the emission coming directly from the surface.

If we then increase atmospheric CO2, while leaving everything else unchanged, that will act to block some of the outgoing flux. What essentially happens is that some of the flux will end up coming from higher in the atmosphere than it did when atmospheric CO2 was lower. Since the temperature drops with altitude (in the troposphere) this means that it will now be coming from regions that are cooler and that, hence, emit less. Therefore, the outgoing flux goes down and the system will have to warm to return to energy balance. As already pointed out, doubling atmospheric CO2 is estimated to reduce the outgoing flux by about 3.7 Wm-2.

So, what is wrong with this more recent analysis? I think the answer is on page 5, where it says

we consider an isothermal atmosphere of T = 288 K.

Well, if the atmosphere is isothermal (a constant temperature of 288 K) then it doesn’t matter where the emission is coming from; it will always look like a 288 K blackbody. It could all be coming from the surface, some from the surface and some from within the atmosphere, or all from within the atmosphere; it will make no difference. Similarly, if you change the atmospheric CO2 concentration, then you may change where the emission is coming from, but you won’t change the outgoing spectrum; it will still look like a 288 K blackbody.
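A toy one-layer grey-atmosphere calculation (my own illustration, not the paper's setup) shows why the isothermal assumption kills the very effect being estimated:

```python
# A toy one-layer grey atmosphere: a fraction `emissivity` of the surface
# emission is absorbed and re-emitted at the atmospheric temperature.
# This is an illustration of the isothermal point, not the paper's model.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def outgoing_flux(T_surface, T_atm, emissivity):
    return (1 - emissivity) * SIGMA * T_surface**4 + emissivity * SIGMA * T_atm**4

# The paper's isothermal assumption: atmosphere at the surface temperature.
# Changing the absorption (i.e. adding CO2) then changes nothing:
iso = [outgoing_flux(288.0, 288.0, eps) for eps in (0.0, 0.5, 1.0)]

# A colder atmosphere: more absorption means less outgoing flux, a real forcing:
real = [outgoing_flux(288.0, 255.0, eps) for eps in (0.0, 0.5, 1.0)]
```

In the isothermal case the outgoing flux is σT⁴ ≈ 390 Wm-2 for any absorptivity, so "adding CO2" can have no radiative effect by construction.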

Therefore, I don’t really know what the paper has actually calculated, but it almost certainly isn’t what the author thinks it is, and it isn’t a representation of the change in forcing due to a change in atmospheric CO2. Unless I’m missing something, estimating that change in forcing requires taking the temperature profile of the atmosphere into account, and this paper does no such thing. Essentially, it’s no great surprise that it gets a result inconsistent with other analyses, since it doesn’t even seem to be doing an appropriate calculation.


Sound Science

By some serendipity, I noticed and responded to a tweet where Kevin Folta was trying to ridicule the accusation that he was “pro-GMO”:

I rather like the “pro-biotech” label as it seems more precise than “pro-GMO.”

Not Kevin:

OK. That last tweet may not have been the most diplomatic one on my part. Still, it should be obvious that one can be pro something while keeping a critical eye on it. Instead of tripling down on the victim playing, Kevin switched to the honest broker dance:

At that moment, I could not expect the following tweet, nor could he expect my response:

Finding an example of Kevin’s advocacy wasn’t hard. I mean, the guy is running a pro-biotech podcast powered by a foundation. Kevin’s quite transparent when asking for donations:

Funding for my outreach program comes from individuals and charities to support biotech literacy.


I’m cool with that. Instead of acknowledging the indubitable, Kevin goes for bragging about having studied bio-techs for 30 years and liking the interventions of one of his fans, who tried to waste my time by playing the hard of reading. This did not stop me from driving my point home:

Again, instead of owning his advocacy, Kevin goes with “I did nothin’ wrong”:

At this point, some kind of truce was reached with Kevin’s fan over Nassim Taleb Speedo Science. It did not last long, and the kerfuffle rekindled when I underlined how bio-tech was being sold as a way to reduce poverty. This returned us to my main point:

Around that time, Kevin linked to a short video showing the benefits of Bt eggplant, but I can’t find it again. In any event, Kevin continued to strawman my position as anti-biotech:

Then our exchange officially reached diminishing returns:

I started the exchange with the belief that Kevin Folta was kinda cool.

The false openness it revealed now makes me doubt that.

Sound science ought to start by owning one’s schtick, right?


Bruno Latour

I came across an interesting interview with Bruno Latour, a sociologist with an interest in Science and Technology Studies (STS), who was involved with what has been called the “science wars”. I actually found much of what he said in the interview quite sensible. For example, he suggested that the earlier “science wars” were more a dispute than a war, but suggested that

[w]e’re in a totally different situation now. We are indeed at war. This war is run by a mix of big corporations and some scientists who deny climate change. They have a strong interest in the issue and a large influence on the population.

Part of my confusion about STS (which I’ve written about before) is that I had initially assumed that a key aspect of STS was about understanding how science and society could deal with science denial, and with those whose agendas were mainly to undermine our scientific understanding. However, some of what I’ve seen from STS is more akin to enabling denial than countering it. I have been told that this is more due to a vocal minority than some reflection of STS as a whole. If so, maybe the silent majority should really become a bit noisier, as Bruno Latour may be trying to do.

However, there were some parts of the interview that I found a little odd. For example, when discussing the “science wars”, Bruno Latour says

It was a dispute, caused by social scientists studying how science is done and being critical of this process. Our analyses triggered a reaction of people with an idealistic and unsustainable view of science who thought they were under attack.

I’ve been doing scientific research now for more than 25 years (I published my first scientific article in 1992). Until I started writing this blog a few years ago, I had never heard of STS. If STS researchers did indeed come up with valid criticisms of how science is done, I don’t think many who did scientific research took any notice. So, either they highlighted valid issues that have been ignored, or their critiques weren’t particularly compelling.

In fact, Bruno Latour goes on to say

Some of the critique was indeed ridiculous, and I was associated with that postmodern relativist stuff, I was put into that crowd by others. I certainly was not antiscience, although I must admit it felt good to put scientists down a little. There was some juvenile enthusiasm in my style.

Well, yes, and this does seem to be a key issue (which Latour seems to mostly ignore). A good deal of what I’ve seen from STS has indeed been ridiculous, and it does indeed appear to be partly motivated by a desire to cut scientists down to size.

It is, of course, perfectly normal for a discipline to go through a phase where some of what is presented turns out to be ridiculous. However, the ideal is that as more and more information is collected, the more ridiculous ideas are rejected and the discipline converges towards “emergent truths”. From what I’ve seen, the ridiculous elements of STS are still there. It’s not clear, to me at least, that they have somehow converged on some “emergent truths” about science and society.

It’s possible that this is not representative of the majority of STS and that the underlying principles are actually sound. However, how are we meant to know this if the ridiculous elements are over-represented in the public sphere? Maybe someone could do some kind of consensus study that highlights the majority view, and then communicates this to society and to the broader scientific community. The problem with this, of course, is that according to some STS researchers consensus messaging is polarising and ineffective.

However, I did find the interview with Bruno Latour quite interesting and it is worth reading (it’s not very long). It may well be that STS does have a constructive and positive role to play. If they do, then – in my view – they will have to do a much better job than they’ve done to date.


Economics and Values

Michael Tobis has a post in which he argues that what we are doing to the climate will persist for many generations and, consequently, that it is immoral to continue what we’re doing and that we should address this as soon as possible (at least, that is my interpretation, but MT can correct me if I’m wrong). Stoat, unsurprisingly, disagrees and seems to argue that we should treat global warming as an economic, rather than a moral, issue.

The problem I have with Stoat’s post is not that I necessarily disagree, it’s that I don’t even really understand what he’s actually suggesting. As this response says

posing potential solutions as economics versus ethics is profoundly misleading, mostly because they are inextricably intertwined.

I have no economic expertise, nor do I claim any. However, my understanding of economics as a discipline is that the goal is to understand aspects of the world/society, and to use that understanding to develop models/theories that allow us to potentially influence society. By itself, however, economics does not tell us if we should do something. That depends on our judgement as to whether or not there is some kind of problem to solve and the implications of the various possible options. I don’t claim that this is a complete description of the motivation behind economics, but I think it’s roughly right (feel free to disagree, if you wish). Essentially, I don’t really see how – in the real world – one can separate economic decisions from value judgements.

Credit: Nordhaus, 2016

Maybe one could argue that we can develop as objective as possible an economic framework and that the result of analyses using this framework would be the objectively optimal solutions. However, I’m not even convinced that this is really possible. Let me try and give an example.

There was a recent paper by William Nordhaus about climate change projections and uncertainties. It includes various possible scenarios, a number of which are illustrated in the figure on the right. They include a baseline scenario, a less-than-2.5°C scenario, and a cost-benefit optimum scenario.

My understanding of the cost-benefit optimum scenario (which I never seem to explain properly) is essentially that you determine a social cost of carbon by estimating the future damages and discounting them to today. This is then implemented as a carbon tax. A carbon tax would be expected to reduce emissions, and therefore reduce damages due to climate change (i.e., a cost leading to a benefit). You can then project forward, but also continually adjust the carbon tax until the incremental benefit would no longer be larger than the incremental increase in carbon tax (or, the marginal benefit matches the marginal cost). Many might argue that this is, therefore, the pathway that we should aim to follow.
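As a cartoon of that logic (with entirely made-up functional forms, nothing like the actual DICE model), the optimum is just the tax level at which the marginal benefit of further abatement stops exceeding the marginal cost:

```python
# A cartoon of the marginal cost/benefit logic behind a "cost-benefit
# optimum" carbon tax. The functional forms are invented for illustration;
# they are not Nordhaus's damage or abatement functions.

def abatement_cost(tax):
    # Assumed: the cost of cutting emissions rises quadratically with the tax
    return 0.5 * tax**2

def avoided_damages(tax):
    # Assumed: avoided damages rise with the tax but saturate
    return 20.0 * (1.0 - 1.0 / (1.0 + tax))

def net_benefit(tax):
    return avoided_damages(tax) - abatement_cost(tax)

# Scan tax levels; the optimum sits where marginal benefit ~ marginal cost
taxes = [i * 0.01 for i in range(1, 1000)]
optimal_tax = max(taxes, key=net_benefit)
```

The point at issue in this post is that a function like avoided_damages already embeds value judgements about what counts as damage and how to discount it.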

Credit: Schellnhuber et al. (2016)

Here’s where I have a problem, though. The optimal scenario leads to a temperature change of about 3.5°C in 2100 (mean), with a standard deviation of about 0.7°C (you need to look at Table A-2 in the appendix, since it looks like Table 4 in the paper is wrong). If you consider the figure on the left, though, this would suggest that we would almost certainly pass summer Arctic sea ice, Greenland, Alpine glacier, and coral reef tipping points. We might also pass a tipping point for the West Antarctic Ice Sheet. Given the uncertainty, we can’t rule out crossing other tipping points either (Amazon rain forest, Boreal forests, etc.).

What I’m getting at is that this optimal pathway has the potential to lead to quite specific changes that are irreversible and very uncertain. Therefore, there seems nothing wrong with people objecting to this as an outcome. This would presumably mean that they value some of the things that we might lose, more highly than was assumed in this analysis. Others could well argue that these valuations are the best representations of how society actually values these systems, but that still doesn’t mean that everyone has to accept this. People are perfectly entitled to argue for a different set of societal values; there’s no guarantee that others will agree, but we certainly do change our values with time.

The above, however, doesn’t mean we shouldn’t use economics to address these issues (I’m not even sure that it’s possible to do something that doesn’t qualify as economics). However, accepting the premise of an economic analysis doesn’t mean accepting all that it implies, if the outcome would be something that some regard as morally objectionable. This doesn’t necessarily even mean that one disagrees with the underlying economic framework; it may simply mean disagreeing with some of the assumptions that were used to produce the analysis.

Anyway, I’m going to stop there. Maybe someone can try to convince me that we can indeed separate economics and values. I’m not sure they’ll succeed, but I have a suspicion that some may indeed try, and I’m open to being convinced.


The Virial Theorem

I had another brief Twitter discussion with Ned Nikolov, whose paper I discussed in this post. Ned seems to think that there is no atmospheric greenhouse effect and that the enhanced surface temperature is due to atmospheric pressure somehow enhancing the energy provided by the Sun. Well, this is wrong, and I thought I would try to illustrate why by explaining something that I find interesting.

I’m currently teaching our core Astrophysics course. A big part of what I’m doing in this course is deriving the equations of stellar structure, which includes the equation of hydrostatic equilibrium. A star, like the Sun, will settle into a state of hydrostatic equilibrium, in which the inward gravitational force is balanced by an outward pressure force. Any self-gravitating system (by which I mean something for which its own gravity is important) in a state of hydrostatic equilibrium satisfies something called the Virial Theorem. This is essentially that the gravitational potential energy of the system is about the same as its thermal, or kinetic, energy (in fact, it is that the magnitude of the gravitational potential energy is twice the thermal/kinetic energy).

We know the mass, M, and radius, R, of the Sun and – hence – can estimate its gravitational potential energy; it will be of order GM^2/R. From the Virial Theorem we also then know the total thermal energy of the Sun. We also know the Sun’s luminosity (how much energy it is losing per second). This means that we can estimate how long it would live if it was simply radiating its thermal energy into space. The answer is that it would live for only a few tens of millions of years.
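The estimate above is a one-liner; a quick order-of-magnitude check with standard solar values:

```python
# Order-of-magnitude check of the Kelvin-Helmholtz argument, using standard
# solar values.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30  # solar mass, kg
R_sun = 6.957e8   # solar radius, m
L_sun = 3.828e26  # solar luminosity, W
YEAR = 3.156e7    # seconds per year

E_grav = G * M_sun**2 / R_sun  # gravitational potential energy, of order GM^2/R
t_KH = E_grav / L_sun / YEAR   # lifetime if simply radiating this energy away
# t_KH comes out at roughly 30 million years
```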

The idea that the Sun might simply be radiating thermal energy into space was first suggested by Kelvin and Helmholtz in the 19th century. However, at that time it was also known that the Earth (and, hence, the Sun) was probably billions of years old, rather than only a few tens of millions of years old. This meant that the Sun’s energy source could not simply be gravitational potential energy being converted into thermal energy as it slowly contracted, because that would imply a much, much younger Sun than geological, and fossil, evidence suggested.

This paradox was resolved with the discovery of nuclear reactions, specifically nuclear fusion. In the core of the Sun, protons combine to form Helium, and this process releases energy (Helium has a lower mass than the total mass of 4 protons, and this mass deficit is released as energy – E = mc^2). It is this that allows the Sun to remain in a roughly steady state for billions of years, rather than for only a few tens of millions of years.
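The mass deficit per reaction is straightforward to check with standard particle masses (the full proton-proton chain involves intermediate steps, but the overall bookkeeping is the same):

```python
# The mass deficit when four protons end up as one helium-4 nucleus.
c = 2.998e8             # speed of light, m s^-1
m_proton = 1.6726e-27   # proton mass, kg
m_helium4 = 6.6447e-27  # helium-4 mass, kg

dm = 4 * m_proton - m_helium4   # mass deficit, kg
E = dm * c**2                   # energy released per helium nucleus, J (~26 MeV)
fraction = dm / (4 * m_proton)  # ~0.7% of the rest mass is converted
```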

The above isn’t strictly relevant to the Earth’s atmosphere, because the gravity of the atmosphere itself isn’t all that important; the Earth’s atmosphere is in hydrostatic equilibrium because the outward pressure force is balancing the gravitational force from the central, rocky planet. It is, however, relevant for big gas giant planets, like Jupiter and Saturn. However, we can still consider much of the same basic physics.

If the Earth’s atmospheric pressure is to contribute to the enhanced surface temperature, then that would mean that the atmosphere would need to continually provide energy to the surface. It could only do this through the conversion of gravitational potential energy to thermal energy. This would then require the continual contraction of the Earth’s atmosphere. However, we can work out how much energy is available in the Earth’s atmosphere and there is far, far too little to explain the enhanced surface temperature.
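A rough check (all the numbers here are order-of-magnitude assumptions of mine) of just how little energy is available:

```python
# How long could the atmosphere's entire thermal energy sustain the
# greenhouse enhancement of the surface flux, if contraction were somehow
# the energy source? All values are rough, order-of-magnitude assumptions.
M_atm = 5.1e18           # mass of the atmosphere, kg
c_p = 1000.0             # specific heat of air, J kg^-1 K^-1 (rough)
T_atm = 255.0            # a typical atmospheric temperature, K
A_earth = 5.1e14         # Earth's surface area, m^2
greenhouse_flux = 150.0  # W m^-2, rough greenhouse enhancement at the surface

E_thermal = M_atm * c_p * T_atm  # total thermal energy of the atmosphere, J
t_years = E_thermal / (greenhouse_flux * A_earth) / 3.156e7
# well under a year, versus billions of years of climate history
```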

As many already know, the enhanced surface temperature is due to radiatively active gases in the atmosphere that act to reduce the outgoing energy flux, causing the surface to warm until the amount of energy we’re losing into space matches the amount we’re receiving from the Sun. It is not simply a consequence of atmospheric pressure. Those who argue that it is due to atmospheric compression are essentially failing to understand something that was well understood by physicists in the 19th century.

As pointed out in this comment I’ve probably somewhat over-stated the discrepancy, in the 19th century, between geological and fossil evidence for the Earth’s age, and how long the Sun could live if it were simply radiating away thermal energy (10s of millions of years). At the time of Kelvin the estimated age of the Earth was probably more like 100s of millions of years, rather than billions. Today, however, we would estimate the Earth to be about 4.56 billion years old.


A bit more about clouds

A few years ago I posted a video by Andrew Dessler that was discussing whether or not Equilibrium Climate Sensitivity could be less than 3°C. The bottom line was that the best estimate for ECS is about 3°C. Given that we’re quite confident about water vapour feedback, lapse rate feedback, and ice albedo feedback, the main way in which ECS could be much lower than this (say < 2°C) is if there were a strongly negative cloud feedback. Cloud feedbacks are probably the feedbacks about which there is the greatest uncertainty, and so this is not necessarily impossible.

More recently, however, I highlighted a TED talk by Kate Marvel that discussed her work on clouds. The basic conclusion was that the observations are pointing towards clouds acting to intensify the warming – they’re a positive feedback. In fact, Kate Marvel indicates that there is no observational evidence that clouds will substantially slow down global warming.

Credit: Zelinka et al., Nature, 2017.

The reason I’m writing this is because there is a new Nature Commentary called Clearing Clouds of Uncertainty by Mark Zelinka, David Randall, Mark Webb and Steven Klein. Their commentary is really a summary of our recent understanding and – as illustrated by the figure on the right – they conclude that the evidence is converging on the cloud feedback likely being positive. The circles indicate the multi-model average feedback, and the coloured lines show the across-model standard deviation. The thin grey lines extend to the model extrema. Essentially, the total cloud feedback is probably positive and has a likely range from about 0.2 Wm-2K-1 to about 0.7 Wm-2K-1.

The implication of this – as Andrew Dessler highlighted – is that it is unlikely that the ECS can be less than 2°C. We have a pretty good understanding of the other feedback processes (water vapour, lapse rate, and ice albedo) and the cloud feedback being positive strongly implies an ECS > 2°C. Some energy balance estimates of climate sensitivity suggest that the ECS is more likely below 2°C than above it and – as I think I may have suggested before – I do think that those who promote these results should put some effort into explaining how this is possible.

If water vapour, lapse rate and ice albedo, by themselves, suggest an ECS > 2°C, and if cloud feedbacks probably amplify this, then how can the ECS be less than 2°C? My view is that this is simply because these energy balance estimates are a bit too simple and don’t necessarily capture all the relevant processes. I would, however, be happy to hear some kind of physically motivated argument for ECS < 2°C.
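One way to see the arithmetic, using typical literature-style values that I'm assuming for illustration (they are not taken from the Zelinka et al. commentary, apart from the cloud-feedback range):

```python
# Back-of-envelope feedback arithmetic: ECS = F_2x divided by the Planck
# response minus the summed feedbacks. All values are typical assumed
# numbers for illustration, not from the commentary.
F_2x = 3.7           # forcing from doubling CO2, W m^-2
lambda_planck = 3.2  # Planck response, W m^-2 K^-1
f_wv_lapse = 1.1     # combined water vapour + lapse rate feedback (assumed)
f_albedo = 0.3       # surface albedo feedback (assumed)

def ecs(f_cloud):
    return F_2x / (lambda_planck - f_wv_lapse - f_albedo - f_cloud)

ecs_no_cloud = ecs(0.0)  # already above 2 K even with zero cloud feedback
ecs_low = ecs(0.2)       # lower end of the likely cloud-feedback range
ecs_high = ecs(0.7)      # upper end
```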
