## High emission scenarios

I thought I might briefly reflect, again, on the whole RCP8.5 discussion. In case anyone missed it, there has been a lengthy online discussion about RCP8.5, which is a concentration/forcing pathway that leads to a change in forcing of 8.5 W m-2 by 2100 and is associated with high emission pathways. The criticism is that this pathway is seen as unrealistic by energy modellers. For example, here is a paper that suggests that a vast expansion in 21st century coal use is implausible. Here is another paper that suggests that re-carbonisation is unlikely.

So, why do we continue to see the use of RCP8.5 in climate modelling and climate impact studies? One reason, as this Carbon Brief explainer highlights, may be that the concentration pathways used as inputs to climate models were finalised before the associated socio-economic pathways had been fully developed. There may also have been a communication breakdown between those who develop these socio-economic models and those who run climate models and do impact studies.

Ultimately, it’s great that energy modellers seem to be in agreement that the socio-economic pathways underpinning the high emission pathways are unrealistic. I think this is a positive outcome. However, I do think that one should be careful of then concluding that the more extreme climate outcomes are no longer possible. After all, there are a lot of steps between what we do as a society and the resulting climate impacts.

Even if it seems unlikely that we will suddenly start to re-carbonise, can we actually rule out that someone will develop a clever way of extracting fossil fuels that we had thought were not recoverable? Can we be sure that some future event won’t lead us to decide to increase our fossil fuel use? In addition to that, there isn’t a simple relationship between emissions and atmospheric concentrations, and it is the latter that really determines how much our climate is going to change. Given uncertainties in how the natural sinks will respond, and the possibility of some sinks becoming carbon sources, we can’t rule out ending up on a higher concentration pathway even if we follow a lower emission pathway.

Finally, what we’d really like to understand is the impact of climate change. If climate sensitivity turns out to be on the high side, then the impacts could be severe even if we do follow a lower emission pathway. Similarly, some of the impacts could be more severe than we expect even if climate sensitivity doesn’t turn out to be on the high side; consider the Great Barrier Reef, Arctic sea ice, Greenland, and the West Antarctic ice sheet. So, even if the emission pathways associated with RCP8.5 are very unlikely, we still can’t rule out experiencing some of the impacts typically associated with this high concentration pathway.

I guess my point is that even though energy modellers seem to think the energy pathways associated with RCP8.5 are unrealistic, this doesn’t mean that we should then conclude that the more severe impacts of climate change are also unrealistic. There are many uncertainties in the chain that takes us from what we do as a society to the resulting climate impacts. In some respects this doesn’t substantially change what we should probably do – reduce emissions. We should, in my view, simply be careful of becoming complacent because energy modellers regard the socio-economic pathways associated with RCP8.5 as unrealistic.

## This. Is. Not. Science’s. Job.

My title is a paraphrase of something Michael Tobis said during the marathon Twitter discussion about RCP8.5, which I thought I would use to discuss something about science communication that I’ve mentioned a number of times before. During the RCP8.5 discussion, someone highlighted an article they’d written about how to best communicate about climate change. Specifically, their argument was to tell a positive story. I largely agree.

However, their article also included the following:

The time is long overdue for scientists to learn to tell as compelling a story about energy, climate change and resource scarcity as do advertisers or lobbyists.

This is where I disagree. As the title of my post suggests, this is not science’s job. There’s nothing wrong with scientists thinking of ways to communicate about science effectively, but science communication is not really about persuasion. In a formal sense, science communicators should not be thinking like lobbyists or advertisers; they’re not trying to sell a particular idea, they’re simply trying to provide information.

This is not to say that others shouldn’t be thinking about how to craft a convincing message, or even that a scientist shouldn’t become an activist/lobbyist. However, expecting scientists to do this in general seems to completely miss what a scientist’s job is; it’s to do research so as to understand some system and – ideally – to then communicate that research, in the scientific literature, at conferences, and – if appropriate – to the public and to policy makers. Being able to do so effectively is indeed a benefit, but the focus should be on making the information accessible, not on how to make the presentation more persuasive.

However, there also seems to be an element of irony in these kinds of suggestions. As I mentioned, this article was highlighted during the somewhat contentious discussion about RCP8.5, which included claims that it was mainly being used to generate headlines and to scare gullible people. This illustrates the other problem with scientists being encouraged to generate as compelling a story as advertisers or lobbyists do; they have to do so while also appearing to satisfy all the expected norms of science.

My impression is that when people suggest that scientists develop compelling stories, they have a pretty good idea of what kind of compelling story they mean. They want compelling stories that suit their narrative, not any old compelling story. This is the other problem. Scientists aren’t necessarily experts in how to deal with something like climate change, nor are they the ones who should be making these decisions. How can they know what they should be persuading people to do, or accept?

I’m completely in favour of scientists thinking about how to communicate effectively and there are a number who are excellent communicators. However, science communication should – in my opinion – be based on trying to make the information as accessible as possible, not on how to make the message most persuasive. There’s nothing wrong with activists, or anyone else with an explicit agenda, thinking about how to persuade people to accept their arguments, but that’s not the role for science communicators. If we fail to adequately address climate change, it’s not going to be because scientists were insufficiently persuasive, it’s going to be because people who should have been able to understand the information, failed to take it seriously enough.

## A thin bench

A Nature Communications paper came out yesterday called Discrepancy in scientific authority and media visibility of climate change scientists and contrarians. It generates a list of what they call climate change contrarians and a list of climate change scientists and shows that contrarians are given disproportionate representation in the media.

The result seems pretty self-evident. Those who hold contrarian views about climate change seem to have much more visibility in the media when compared to how visible these views are in the scientific literature. However, I do feel a bit uncomfortable about a paper that labels individuals; I certainly wouldn’t be too happy if it happened to me (okay, it might depend on the label 🙂 ).

The list of climate change contrarians was generated from the Heartland Institute, DeSmogblog’s database, and signatories of the NIPCC. This immediately created one issue, because some of those included, such as Scott Denning, are clearly not climate change contrarians. The list of climate change scientists was generated from the most highly cited in the Web of Science database. This is the bit that I found interesting. Some of those already included in the list of contrarians also ended up being amongst the most highly cited. They were then removed from the list of climate change scientists, and the list was topped up with the next most highly cited researchers. They then ended up with a list of 386 climate change contrarians (their term) and a list of 386 climate change scientists.

However, their Supplementary Information suggests there were only 8 people who were in the list of climate change contrarians and also – initially – in the list of the most highly cited researchers; R. Bradley, J. Clark, J. Curry, C. Johnson, R. Pielke (Jr + Sr), J. Taylor, and R. Tol. At this stage, 8 out of the 386 most highly cited researchers are also listed as climate change contrarians (2.2%).

However, the R. Bradley in the climate change contrarian list is probably someone called Rob Bradley, while the highly cited R. Bradley is probably Ray Bradley (from Mann, Bradley & Hughes). So, at least one is probably mis-identified. I’m not familiar with all of the other names, but I am aware of the work of J. Curry, R. Pielke (Jr and Sr) and R. Tol. As far as I’m concerned, there is no way you could describe their research as contrarian; it’s pretty mainstream (this doesn’t necessarily mean that what they say in public would be regarded as mainstream, but their research doesn’t appear particularly contrarian). There are others in the list of climate change contrarians who publish papers disputing key aspects of mainstream climate science (W. Soon, N. Shaviv, H. Svensmark) but none of them make it into the list of highly cited researchers.

Whatever you think of the merits of the paper, it does seem to nicely illustrate that in a relatively long list of highly cited researchers, there are virtually none who publish papers that substantively dispute our basic understanding of climate change. There’s a pretty thin bench of climate change contrarians who would also be regarded as leading researchers. So, maybe it’s worth acknowledging what was being suggested (climate scientists should be more visible in the public discourse) even if one doesn’t particularly like the idea of publishing papers in which people are labelled in some way.

Update (16/08/2019):

It now seems that the C. Johnson mentioned above is Claes Johnson (see comments), who is highly cited in Mathematics, but not in climate (he appears to think there is no such thing as a planetary greenhouse effect). The J. Taylor is James Taylor (Heartland), but the highly cited J. Taylor is probably John Taylor from CSIRO. The J. Clark is John Clark, but the highly cited J. Clark is probably Jorie Clark from Oregon. The highly cited R. Pielke is probably only Roger Pielke Sr. Hence, they seem to have mis-identified 4 of the 7 highly cited researchers who they claim are also in the climate change contrarian list. This means that maybe only 3 in the climate change contrarian list were also initially in the highly cited researcher list. Also – as I said in the post – I think many would regard their publications as pretty mainstream (well, in the sense of not substantially criticising our understanding of AGW).

## Sigh

There’s been a rather contentious Twitter thread about RCP8.5, a concentration/forcing pathway I’ve discussed before. It started with a claim that it was “bollox” followed by a suggestion that it was mainly used for generating headlines, scaring gullible folk and children, and giving climate contrarians a reason to ignore the need for urgent action on emission mitigation.

A number of us pointed out that there were still valid reasons for using an RCP8.5 concentration/forcing pathway and that suggesting that it was mainly used for generating headlines and scaring gullible people was just promoting a denialist conspiracy theory. Fortunately, some other sensible people also chipped in and pointed out that we probably couldn’t yet rule out an RCP8.5 concentration pathway, that it was still useful for inter-model comparison, and that it was a useful pathway for impact studies because of the large signal to noise. None of this means that there aren’t some valid criticisms, but claiming it’s “bollox” and simply used to scare gullible people is just nonsense.

What I found frustrating is that I think this is an interesting/important issue and it would be worthwhile to be able to discuss it sensibly. However, I particularly dislike suggestions that the reason we haven’t effectively implemented climate policy is because of the behaviour of climate scientists, and so I failed to hide my frustration as well as I probably should have. I did learn some things from some of the comments, but the overall discussion was unfortunate and I think it ultimately created some artificial divisions between people who probably mostly agree.

Unfortunately, I think this is becoming all too common. My impression is that we’re now in a position where people who probably mostly agree about the issues are in conflict over details that probably don’t really matter. Fundamentally, whether we use RCP8.5 in climate models, or not, the basic message is the same; we need to start reducing emissions soon. It’s possible that some contrarians will use the fact that climate scientists use RCP8.5 to argue that they’re intentionally exaggerating the risks. However, if climate scientists stopped using RCP8.5, the same people would simply find something else to criticise. The idea that scientists should stop doing something in order to counter those who are clearly engaging in bad faith doesn’t make any sense to me.

I really do wish it were possible to have these nuanced discussions without them turning contentious; that it were possible to have a discussion where maybe people didn’t end up agreeing, but still learned something. It’s probably human nature that this is the exception rather than the rule, but it’s still unfortunate. It does make me wonder if we’ll ever really get into a position where we can implement any kind of effective climate policy. Hopefully, we’ll either overcome this, or what we do end up managing to do will be enough to avoid the more serious consequences.

## A little knowledge

There is apparently a paper from a couple of years ago that is currently doing the rounds and that argues that the Molar Mass Version of the Ideal Gas Law Points to a Very Low Climate Sensitivity. The suggestion is that it is difficult to determine the surface atmospheric temperature but that it can be done with

a gas constant and the knowledge of only three gas parameters; the average near-surface atmospheric pressure, the average near surface atmospheric density and the average mean molar mass of the near-surface atmosphere.

and, given this, no one gas has an anomalous effect on atmospheric temperatures that is significantly more than any other gas and there can be no 33°C ‘greenhouse effect’ on Earth, or any significant ‘greenhouse effect’ on any other planetary body with an atmosphere of >10kPa.

The problem is that the method the paper is applying is the ideal gas law. It’s simply a relationship between pressure, density, temperature and molar mass that essentially applies anywhere in an Earth-like planet’s atmosphere. It’s a truism; if you know the values for three of these terms, then you can determine the fourth. It doesn’t tell you anything about why these terms have these values. That you can use it to determine the surface atmospheric temperature from the density, pressure and molar mass doesn’t imply that there is no greenhouse effect, because the greenhouse effect doesn’t imply that the Earth’s atmosphere would no longer satisfy the ideal gas law.

In the case of an Earth-like planet’s atmosphere, the surface atmospheric pressure is essentially the weight of the atmospheric column; this is fixed. The molar mass depends on the composition, so is also fixed. The only two that can vary are the density and temperature. So, should we regard the temperature as depending on the density, or the density as depending on the temperature? The ideal gas law – by itself – can’t tell us, but we can consider other physics.
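This truism is easy to check numerically. The sketch below plugs standard textbook values for Earth’s mean surface pressure, near-surface air density, and the molar mass of dry air into the ideal gas law; it recovers roughly 288 K, but only because the measured density already reflects an atmosphere warmed by the greenhouse effect:

```python
# Ideal gas law: P = rho * R * T / M, rearranged for temperature.
R = 8.314        # universal gas constant, J mol-1 K-1
P = 101325.0     # mean sea-level pressure, Pa
rho = 1.225      # measured near-surface air density, kg m-3
M = 0.02897      # mean molar mass of dry air, kg mol-1

T = P * M / (rho * R)
print(round(T))  # ~288 K: recovered, not explained -- the density was
                 # measured in an atmosphere that the greenhouse effect
                 # has already warmed
```

In other words, the calculation tells you the four quantities are mutually consistent; it says nothing about why they take the values they do.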

The density profile in the atmosphere depends on the scale height, which is the vertical distance over which the density decreases by a factor of $e$ ($e \approx 2.718$). The scale height is set by the atmospheric temperature; if the atmospheric temperature is high, the scale height will be large, the atmosphere will extend to large heights, and the density (mass per unit volume) will be low. If the temperature is low, the scale height will be small, the atmosphere will be compressed near the surface, and the density will be high.
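As a rough illustration (using the isothermal approximation $H = RT/(Mg)$ and standard constants), the scale height does indeed grow with temperature:

```python
R = 8.314    # universal gas constant, J mol-1 K-1
M = 0.02897  # mean molar mass of dry air, kg mol-1
g = 9.81     # surface gravity, m s-2

def scale_height_km(T):
    """Isothermal scale height H = R*T/(M*g): the altitude over which
    density falls by a factor of e."""
    return R * T / (M * g) / 1000.0

# A warmer atmosphere is more extended, and hence less dense near the surface.
print(scale_height_km(288.0))  # ~8.4 km
print(scale_height_km(230.0))  # ~6.7 km
```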

So, if the surface temperature is low, the atmospheric density will be high, and if the surface temperature is high, the atmospheric density will be low. This is simply a consequence of the ideal gas law; we still haven’t determined why an atmosphere has a certain set of properties. For example, why is the surface atmospheric temperature on the Earth around 288 K (15°C)? Well, that’s a consequence of the greenhouse effect.

In the absence of an atmosphere, energy balance would require that the surface temperature were 255 K; the presence of an atmosphere enhances this by about 33 K. Having done so, the atmosphere still satisfies the ideal gas law, so if you know the surface pressure, density, and molar mass, you can certainly determine the surface atmospheric temperature. This does not mean, though, that there is no greenhouse effect, or that climate sensitivity is low.
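The 255 K figure follows from a simple balance between absorbed sunlight and emitted thermal radiation, using standard values for the solar constant and planetary albedo:

```python
S = 1361.0        # solar constant, W m-2
albedo = 0.3      # planetary albedo
sigma = 5.670e-8  # Stefan-Boltzmann constant, W m-2 K-4

# Absorbed = (1 - albedo) * S / 4 (a sphere intercepts a disc of sunlight
# but radiates from its whole surface); emitted = sigma * T^4.
T_eff = ((1 - albedo) * S / (4 * sigma)) ** 0.25
print(round(T_eff))  # ~255 K, about 33 K below the observed 288 K surface
```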

I thought I would end with the relevant part of the poem that gave us the phrase a little knowledge is a dangerous thing:

A little learning is a dang’rous thing;
Drink deep, or taste not the Pierian spring:
There shallow draughts intoxicate the brain,
And drinking largely sobers us again.

## The Popper Ratio

I hereby propose the Popper Ratio (n.): a unit obtained by calculating the number of times “Popper” appears in a long-form text compared to the number of times Sir Karl is really cited. By “really cited” I mean (a) a quote and (b) a reference. No mere mention. No handwaving. Proper quote and citation.
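The mention count is mechanical; judging what counts as a “real” citation is not. In this hypothetical sketch, the citation tally therefore has to be supplied by a human reader:

```python
import re

def popper_ratio(text, real_citations):
    """Mentions of 'Popper' versus the number of proper quote-plus-reference
    citations (the latter tallied by hand; no regex can judge 'proper')."""
    mentions = len(re.findall(r"popper", text, flags=re.IGNORECASE))
    return f"{mentions}:{real_citations}"

print(popper_ratio("Popper held... Popper's paradox... as Popper wrote", 1))
```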

The Ratio is inspired by a SpeedoScience fight between Nassim and Claire. Both misrepresent our curmudgeon. Why? The simplest hypothesis is that paying lip service to authors makes one forget to pay due diligence to their points.

As a proof of concept, here are the results of a simple search at Claire’s. It may not be an exhaustive list. It sure exhausted me. Numbers are involved. Caveat emptor.

* * *

Confusion About -Isms is Compounding Schisms has a 2:0 ratio, 4:0 if we add the author’s comments. In it we learn that “Popper formulates fallibilism as a core principle in liberalism.” The Open Society and its Enemies gets a mention.

Intersectionality and Popper’s Paradox has 3:1. The quote is partial, “unlimited tolerance must lead to the disappearance of tolerance” from Open Society.

The Poverty of Cosmopolitan Historicism gets a 7:1 ratio. The quote:

In The Open Society and its Enemies, Karl Popper wrote that “we may become the makers of our fate when we have ceased to pose as its prophets”. In Popper’s view, historicism was defined by its simplistic understanding of history, viewed as an unfolding of inexorable iron laws.

Who’s Afraid of Tribalism has a 9:0 ratio, again Open Society. The Unconstrained Vision of David Deutsch has a 7:0 ratio, yet Hayek and Hobbes get long quotes.

A ratio of 1:0 indicates a cameo appearance, like in Giving the Devil His Due, this time about Conjectures and Refutations. A sad 2:0 for Remain vs Leave: Elite Technocracy vs Liberal Democracy. Open Society, again. What the Alt-Right Gets Wrong About Jews gets a 3:0 ratio. The authors discuss falsification:

Any legitimate scientific theory, he said, should specify some state of the world which, if it is observed, would make us logically compelled to reject the theory. One of the problems with [the] criterion is that there is no such thing as falsification in the strong sense that he envisaged. Any theory can be salvaged in the face of any evidence, though this may require some fanciful theorizing.

The falsificationist-in-chief is well aware of that problem. See below.

A perfect 1:1 for a review of Identity, Islam, and the Twilight of Liberal Values by Terri Murray. Again, tolerance:

[I]f we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed and tolerance with them.

There are two other pieces (2:0 and 1:0), but you get the drift. There are two main reasons to namedrop Pop – Open Society and falsification. Let’s check the books.

* * *

There are 31 occurrences of “tolera” in Open Society. The most relevant place, in note 6 of chapter 7 (The Principle of Leadership), introduces three paradoxes – Freedom (to cause injustice), Democracy (i.e. choosing tyranny), and Tolerance. Here’s the whole presentation on tolerance, a paradox that can be traced back to Voltaire:

Less well known is the paradox of tolerance : Unlimited tolerance must lead to the disappearance of tolerance. If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them. In this formulation, I do not imply, for instance, that we should always suppress the utterance of intolerant philosophies ; as long as we can counter them by rational argument and keep them in check by public opinion, suppression would certainly be most unwise. But we should claim the right even to suppress them, for it may easily turn out that they are not prepared to meet us on the level of rational argument, but begin by denouncing all argument ; they may forbid their followers to listen to anything as deceptive as rational argument, and teach them to answer arguments by the use of their fists. We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant. We should claim that any movement preaching intolerance places itself outside the law, and we should consider incitement to intolerance and persecution as criminal, exactly as we should consider incitement to murder, or to kidnapping ; or as we should consider incitement to the revival of the slave trade.

Some limitation to freedom of speech is being acknowledged. Its discussion lies outside our remit. The paragraph that follows offers a general solution:

All these paradoxes can be easily avoided if we frame our political demands in some such manner as this. We demand a government that rules according to the principles of equalitarianism and protectionism ; that tolerates all who are prepared to reciprocate, i.e. who are tolerant ; that is controlled by, and accountable to, the public. And we may add that some form of majority vote, together with institutions for keeping the public well informed, is the best, though not infallible, means of controlling such a government. (No infallible means exist.) Cp. also chapter 6, the last four paragraphs in the text prior to note 42 ; text to note 20 to chapter 17 ; note 7 (4), to chapter 24 ; and note 6 to the present chapter.

Assuming equality and reciprocation ought to solve the paradox. Hard to see how it would escape a kingdom of blindness, but let’s not ask too much. This is only an endnote. Any parent or moderator could have told you something similar.

The solution matters more than the paradox itself. For instance, “protectionism” is opposed to laissez-faire in Chapter 6. No wonder the paradox remains an empty battle cry for Freedom Fighter self-defense against Claire’s scapegoats.

* * *

Many quotes could be presented to illuminate the concept of falsifiability. Since it may boost our ratio, let’s stick to one, from The Logic of Scientific Discovery:

It might be said that even if the asymmetry [between verification and falsification] is admitted, it is still impossible, for various reasons, that any theoretical system should ever be conclusively falsified. For it is always possible to find some way of evading falsification, for example by introducing ad hoc an auxiliary hypothesis, or by changing ad hoc a definition. It is even possible without logical inconsistency to adopt the position of simply refusing to acknowledge any falsifying experience whatsoever. Admittedly, scientists do not usually proceed in this way, but logically such procedure is possible; and this fact, it might be claimed, makes the logical value of my proposed criterion of demarcation dubious, to say the least. I must admit the justice of this criticism; but I need not therefore withdraw my proposal to adopt falsifiability as a criterion of demarcation. For I am going to propose (in sections 20 f.) that the empirical method shall be characterized as a method that excludes precisely those ways of evading falsification which, as my imaginary critic rightly insists, are logically possible. According to my proposal, what characterizes the empirical method is its manner of exposing to falsification, in every conceivable way, the system to be tested. Its aim is not to save the lives of untenable systems but, on the contrary, to select the one which is by comparison the fittest, by exposing them all to the fiercest struggle for survival.

Here’s why the quote in the 3:0 piece was incorrect. The authors were right to say that judgment calls were required. They were incorrect in saying this confuted falsificationism. A jump from possibility to necessity appears to be the culprit.

According to my calculations, this piece has a 7:7 ratio. Not bad.

## Climate sensitivity and decadal temperature variability

There are some who argue that natural/internal variability can play a role in driving long-term warming, and – hence – could explain a substantial fraction of recent warming. This, however, creates a bit of a paradox; if the system responds strongly to internally-driven warming, then it should also respond strongly to externally-driven warming. Consequently, we’d expect climate sensitivity to be high which would then make it difficult for a large part of our recent warming to be due to natural/internal variability.

Credit: Figure 1b from Nijsse et al. (2019).

The reason I thought I’d mention this again is that I came across a recent paper by Femke Nijsse and colleagues that considers this issue. The paper is called Decadal global temperature variability increases strongly with climate sensitivity and the title pretty much gives away the punch-line. The paper shows that models that are more sensitive to GHG emissions (that is, with a higher equilibrium climate sensitivity (ECS)) also have higher temperature variability on timescales of several years to several decades, which is illustrated in the figure on the right.

The paper also points out that

high-sensitivity climates, as well as having a higher chance of rapid decadal warming, are also more likely to have had historical ‘hiatus’ periods than lower-sensitivity climates.

and consequently, that

the slowdown in global warming during the period 2002–2012 was more likely in a high-ECS world.

So, rather than the supposed slowdown being an indication of a low climate sensitivity, it could well imply the opposite.
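The qualitative link can be illustrated with a toy stochastic energy-balance model: a higher ECS corresponds to a weaker net feedback, so random forcing fluctuations decay away more slowly and decadal averages wander further. All parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def simulate(lam, c=8.0, sigma_f=0.5, n_years=20000, seed=0):
    """Integrate C dT/dt = F(t) - lam*T with annual white-noise forcing F.
    lam is the net feedback parameter (W m-2 K-1); ECS ~ 3.7 / lam."""
    rng = np.random.default_rng(seed)
    temps = np.zeros(n_years)
    for i in range(1, n_years):
        forcing = rng.normal(0.0, sigma_f)
        temps[i] = temps[i - 1] + (forcing - lam * temps[i - 1]) / c
    return temps

def decadal_std(temps):
    """Standard deviation of consecutive 10-year mean temperatures."""
    return temps.reshape(-1, 10).mean(axis=1).std()

low_ecs = simulate(lam=2.0)   # strong feedback, ECS ~ 1.9 K
high_ecs = simulate(lam=0.8)  # weak feedback, ECS ~ 4.6 K
print(decadal_std(low_ecs), decadal_std(high_ecs))  # high-ECS run varies more
```

The same mechanism produces both larger decadal warming spurts and more pronounced ‘hiatus’ periods in the weak-feedback run, which is the fluctuation-dissipation intuition behind the paper’s result.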

The paper then concludes that

[a]chieving a better consensus on the risk that we live in a high-ECS climate is therefore of critical importance to both the climate mitigation challenge and also to inform efforts to build resilience to climate variability.