Cancel culture?

The talking point on social media at the moment (in my bubble, at least) seems to be the letter on justice and open debate, signed by 150 luminaries. It’s not been universally well-received. There were some quite measured comments in this article, and somewhat blunter ones in this article.

I find this quite a confusing issue. This is partly because people whose views I generally respect seem to disagree quite strongly about this, and make some compelling arguments both for and against. I certainly agree that there are some serious problems with current public discourse; it would be nice if it were easier to have good-faith discussions about contentious issues. On the other hand, I’m not convinced that there is some major problem that we might describe as a “cancel culture”.

To be quite honest, I’m not even quite sure what “cancel culture” is, or even if it has been clearly defined. Where do you draw the line between a society-threatening “cancel culture” and robust disagreements that might have gone further than we might like? How do we distinguish between someone justifiably objecting to what another person is promoting, and them trying to unacceptably silence/cancel the other person? When is it okay for an organisation to penalise one of their members for what they’ve said publicly, and when should we expect organisations to defend their members in the interests of free speech, even if they also object to what was said?

My issue with this narrative is partly based on my experiences in the public climate debate. Most of those who complain about censorship, or being silenced/cancelled, seem to be those who say things that deserve to be criticised and simply don’t want to engage with their critics; it’s more about delegitimising one’s critics than defending free speech. My understanding is that a number of those who signed the letter have similar reputations.

This, of course, doesn’t mean that one shouldn’t be concerned about attacks on free speech. It doesn’t mean that some of what is highlighted in the context of “cancel culture” isn’t something that decent people should object to. However, we should also be careful of dealing with things like this in ways that end up delegitimising valid criticisms, and undermining valid social movements. In fact, I can’t quite see how we can deal with some kind of “cancel culture” (however defined) in a way that doesn’t end up doing the very thing we’re trying to avoid.

Of course, I may well misunderstand many aspects of this; it is clearly a complex issue. I had intended to make this a bit of an open thread but, as usual, have written too much. I’d certainly be interested to hear what others think about this issue.

Posted in advocacy, Personal, Philosophy for Bloggers | 129 Comments

Extreme precipitation events

This post is partly motivated by something I think I either heard Michael Shellenberger say, or write, but I can’t find it anymore. I have tried re-reading some of the articles and re-listening to some of the podcasts, including the Heartland Institute one – where Michael Shellenberger thanks them for what they’re doing – and the one with Alex Epstein, where Michael Shellenberger suggests that one of the chapters in the book was motivated by some of what Alex Epstein promotes. If you’re not sure why I’m highlighting this, it might be worth looking up the Heartland Institute and Alex Epstein.

My memory is that Shellenberger was suggesting that precipitation changes would be modest, so it would be relatively straightforward to develop adaptation strategies to cope with these changes. I may have remembered incorrectly, but it is still a topic worth highlighting.

Depending on one’s definition of modest, it is probably true that the overall change in precipitation will be modest; the change in mean precipitation is estimated to be around 2% per K. The problem, though, as this paper points out, is that

the intensity of extreme precipitation increases more strongly with global mean surface temperature than mean precipitation

and that

[g]lobally, the observed intensity in daily heavy precipitation events, i.e. the rainfall per unit time, increases with surface temperature at a rate similar to that of vapour pressure (6–7% per K).

In other words, even though mean precipitation will only increase by about 2% per K, the intensity of extreme precipitation events will increase by more than this, largely (as I understand it) because the vapour pressure increases by 6-7% per K.

Credit: Myhre et al. (2019)

The paper then goes on to point out that it’s not just the intensity of extreme events that changes, but also their frequency. For example, the figure on the right suggests that the most extreme precipitation events could double with every degree of warming, while their intensity also increases by more than 10% per degree of warming.

So, the changes in the frequency and intensity of the most extreme precipitation events are certainly not small. What’s more, this also means that – as we warm – we’ll shift the distribution of precipitation towards more extreme events; i.e., more and more precipitation will occur in what we, today, would regard as an extreme event.
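
To get a rough sense of how these numbers compound, here is a minimal sketch using the scaling rates quoted above (roughly 2% per K for mean precipitation, 6–7% per K for the intensity of heavy precipitation, and something like a doubling per degree for the frequency of the most extreme events). Treating these as constant, compounding per-degree factors is my simplification for illustration, not something taken from the paper.

```python
# Rough illustration of how mean precipitation, heavy-rainfall intensity and
# extreme-event frequency scale with warming, using the per-degree rates
# quoted in the post (treating them as constant compounding factors is a
# simplification; the most extreme events scale even faster in intensity).

MEAN_SCALING = 0.02       # ~2% per K increase in mean precipitation
INTENSITY_SCALING = 0.07  # ~6-7% per K increase in heavy-rainfall intensity
FREQUENCY_SCALING = 1.0   # most extreme events roughly double per K (figure)

for warming in (1.0, 2.0, 3.0, 4.0):
    mean_change = (1 + MEAN_SCALING) ** warming - 1
    intensity_change = (1 + INTENSITY_SCALING) ** warming - 1
    frequency_factor = (1 + FREQUENCY_SCALING) ** warming
    print(f"{warming:.0f} K: mean +{mean_change:.0%}, "
          f"heavy-rain intensity +{intensity_change:.0%}, "
          f"extreme frequency x{frequency_factor:.0f}")
```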

Given that the events that impact us the most are the extreme events, it seems a bit overly optimistic to think that we can easily deal with quite large changes in both the intensity and frequency of such events. As the paper itself even says:

Such large increases are not taken into account by adaptation management, and our findings imply that society may not be adequately prepared for the coming changes in extreme rainfall.

We will, of course, have to adapt to some of the changes we’re facing. However, there is a difference between recognising that some amount of adaptation is unavoidable, and suggesting that adaptation will be sufficient to effectively cope with any possible change we might experience. At the end of the day, it’s going to be a combination of mitigation, adaptation and suffering, and

we’re going to do some of each. The question is what the mix is going to be. The more mitigation we do, the less adaptation will be required and the less suffering there will be.

Links:
Frequency of extreme precipitation increases extensively with event rareness under global warming, paper by Myhre et al. (2019).

Posted in Climate change, ClimateBall, Environmental change, Global warming | 36 Comments

Apocalypse never?

I guess the current entertainment in the climate world relates to Michael Shellenberger’s new book, Apocalypse Never, which is due to come out next month and is already doing well on Amazon. In a somewhat amusing twist, Michael wrote a Forbes article to promote his book, which was fairly quickly removed for reasons that are not entirely clear. What was slightly more amusing was the article itself, which Michael chose to frame as an apology, on behalf of environmentalists, for the climate scare. This is now being framed as a reformed climate activist condemning alarmism.

The problem is that nothing I’ve seen presented by Michael Shellenberger in this context is particularly different to what I’ve seen him present before. One of the chapters in his book is called Greed saved the Whales, not Greenpeace. The title would suggest that it’s just a variant of what he presented in his 2015 TED talk about how to save nature, which I discussed in this post. The basic argument is essentially that we didn’t save the whales, we simply stopped needing them. Not only is it somewhat disturbing to think that we shouldn’t explicitly try to save nature, the argument is apparently also wrong.

Michael Shellenberger is also an author of the Ecomodernist Manifesto, which Eli dissects quite nicely here. When he and Ted Nordhaus came to the UK to promote this in 2015, they invited Owen Paterson and Matt Ridley to join them at the launch event. Neither is typically regarded as an environmentalist, and Owen Paterson even used the event to bash what he calls the green blob.

If you go back even further, Shellenberger’s 2004 book with Ted Nordhaus is called The Death of Environmentalism: Global Warming Politics in a Post-Environmental World. There’s a 2007 book called Break Through: From the Death of Environmentalism to the Politics of Possibility. There’s even a recent paper on the origin and evolution of post-environmentalism that focuses on the Breakthrough Institute, formed by Shellenberger and Nordhaus in 2003. Shellenberger did leave the Breakthrough Institute a few years ago, though.

If Michael Shellenberger was ever what would be regarded as a climate activist, or an environmentalist as commonly understood, it doesn’t seem like it was recently. Apologising on behalf of environmentalists for the climate scare would then seem a rather bizarre thing to do. On the other hand, it’s very clever. It certainly gets the media’s attention. It also seems to make some people think that – if Shellenberger is changing his mind – maybe the climate scare is overblown. Not many seem to be actually considering whether or not he really is a reformed climate activist. Essentially, he’s managed to undermine a movement he’s trying to challenge, by apologising on their behalf, while also getting lots of coverage for his book.

Although this all seems rather cynical, and disingenuous, you do have to give Shellenberger credit for his ability to get media attention. If this wasn’t such a serious topic, it might even be quite funny.

Posted in Climate change, ClimateBall, Environmental change, ethics | 194 Comments

The Neoclassical Economics of Climate Change

I thought I would advertise a post by Steve Keen that may be of interest to some of my regular readers. It’s about the neoclassical economics of climate change and is extremely critical of the assumptions used to drive Integrated Assessment Models (IAMs). I’ve written about these a number of times myself, and have also been rather critical.

For example, IAMs don’t self-consistently model the evolution of our economies. They typically assume our economies continue growing and then estimate – using a pre-defined damage function – how much damage is done by climate change. Additionally, even though the damage functions are non-linear, they still produce modest levels of damage even for large changes in global temperature. For instance, this paper points out that the damage function in DICE (William Nordhaus’s model) estimates damage due to climate change at 4.7% of world output at 6°C of warming, and that it would only halve world output at 19°C. In a sense, these models would conclude that the impact of climate change is modest, by construction.
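
As a rough illustration of why this is damage “by construction”, here is a sketch of a purely quadratic damage fraction calibrated so that output halves at 19°C. The functional form and coefficient are my back-of-the-envelope choices, not necessarily exactly what DICE uses, but they reproduce numbers close to those quoted above.

```python
# Toy quadratic damage function, D(T) = a * T^2, calibrated so that damages
# reach 50% of world output at 19 C of warming. This illustrates the general
# shape of such functions, not the exact form or coefficients used in DICE.

def damage_fraction(warming_c: float, a: float = 0.5 / 19**2) -> float:
    """Fraction of world output lost at a given level of warming (deg C)."""
    return a * warming_c**2

for t in (2, 4, 6, 10, 19):
    print(f"{t:>2} C of warming -> {damage_fraction(t):.1%} of output lost")
```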

On a similar note, in a recent paper, William Nordhaus (who won the Nobel Memorial Prize in Economics) estimated that the economically optimal pathway would lead to 3.5°C of warming (with a standard deviation of 0.75°C). So, a level of warming that many scientists would regard as potentially leading to catastrophic impacts is optimal, according to an economic model.

Steve Keen’s post highlights a few other things that I hadn’t entirely appreciated. It seems that these models don’t take into account how inter-connected our economies are. They seem to assume that there will be some activities that will be largely unaffected by climate change, and that the economic impact of climate change depends largely on the economic value of the activities that are directly impacted. But if climate change severely damages agriculture, it’s not just the agriculture sector that will be adversely affected. If climate change severely impacts regions of the world that aren’t particularly wealthy (as seems likely) it’s not only these regions that will be adversely affected.

Something else I hadn’t appreciated is that the damage function used in IAMs is calibrated by considering how economic activity today varies with temperature. In other words, how economic activity varies with climate now is used to estimate how global warming will impact economic activity. It may be a reasonable first guess, but global warming doesn’t mean that, as a region warms, it simply becomes equivalent to a similarly warm region today. Not only is this clearly too simplistic, there are also many other impacts. You might think that economists clearly don’t hold such simplistic views, but Steve Keen’s post reminded me that we’d engaged in a discussion with Richard Tol in which he appeared to express exactly this view.
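
To make the kind of reasoning being criticised concrete, here is a deliberately simplistic sketch: fit present-day output against present-day temperature across regions, then assume a warming region simply moves along that fitted curve. The numbers are entirely made up; the point is just the logic, not any actual calibration.

```python
import numpy as np

# Entirely synthetic illustration: present-day output vs annual-mean
# temperature for a handful of hypothetical regions.
temperature_c = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
output_index = np.array([95.0, 100.0, 102.0, 98.0, 90.0, 80.0])

# Fit a quadratic cross-sectional relationship (output peaks at mild climates).
coeffs = np.polyfit(temperature_c, output_index, deg=2)
fitted = np.poly1d(coeffs)

# The criticised step: assume a region that warms by 3 C simply moves along
# this present-day curve, ignoring everything else that changes with warming.
for t_now in (10.0, 20.0, 25.0):
    change = fitted(t_now + 3.0) - fitted(t_now)
    print(f"Region at {t_now:.0f} C today: "
          f"projected output change {change:+.1f} index points")
```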

Anyway, I’ve already said too much. I encourage people to read Steve’s post. I’ve put another link below, plus links to some other articles that may also be relevant. I should also probably clarify that there are two quite distinct types of IAMs. There are some that are used to actually model the evolution of energy systems, and others that are used to do cost-benefit analyses. The criticisms discussed above refer to the latter, rather than the former. I’m also not an expert at this, so if I do misunderstand how these models work, I’m more than happy to be corrected.

Links:
The Appallingly Bad Neoclassical Economics of Climate Change – Steve Keen’s post about IAMs.
IAMs – other posts I’ve written about IAMs.
Fat tails, exponents, extreme uncertainty: Simulating catastrophe in DICE – paper by Ackerman, Stanton and Bueno about the DICE damage function.
Projections and Uncertainties About Climate Change in an Era of Minimal Climate Policies – William Nordhaus’s paper suggesting that the optimal pathway would lead to about 3.5oC of warming.
Limitations of integrated assessment models of climate change – paper by Ackerman et al.
Climate Change Policy: What Do the Models Tell Us? – paper by Robert Pindyck suggesting that IAMs have fundamental flaws.

Posted in Climate change, economics, The scientific method | 39 Comments

A modelling manifesto?

There’s a recent Nature comment led by Andrea Saltelli called Five ways to ensure that models serve society: a manifesto. Gavin Schmidt has already posted a Twitter thread about it. I largely agree with Gavin’s points and thought I would expand on this a bit here.

The manifesto makes some perfectly reasonable suggestions. We should be honest about model assumptions. We should acknowledge that there are almost certainly some unknown factors that models might not capture. We should be careful of suggesting that model results are more accurate, and precise, than is actually warranted. We should be careful of thinking that a complex model is somehow better than a simple model. Essentially, we should be completely open and honest about a model’s strengths and weaknesses.

However, the manifesto has some rather odd suggestions and comes across as being written by people who’ve never really done any modelling. For example, it says

Modellers must not be permitted to project more certainty than their models deserve; and politicians must not be allowed to offload accountability to models of their choosing.

How can the above possibly be implemented? Who would get to decide if a modeller projected more certainty than their model deserved and what would happen if they were deemed to have done so? Similarly, how would we prevent politicians from offloading accountability to models of their choosing? It’s not that I disagree with the basic idea; I just don’t see how it’s possible to realistically enforce it.

The manifesto also discusses global uncertainty and sensitivity analyses, and says

Anyone turning to a model for insight should demand that such analyses be conducted, and their results be described adequately and made accessible.

Certainly a worthwhile aspiration, but it can be completely unrealistic in practice. If researchers get better resources, they often use them to improve the model. A consequence is typically that there is then a limit to how fully one can explore the parameter space. A researcher can, of course, choose to make a model simpler so that it is possible to do a global uncertainty and sensitivity analysis, but this may require leaving out things that might be regarded as important, or reducing the model resolution. This is a judgement that modellers need to make: do they focus on improving the model now that the available resources allow for this, or do they focus on doing global uncertainty and sensitivity analyses? There isn’t always a simple answer to this.
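
To illustrate why this becomes expensive, here is a toy Monte Carlo sensitivity analysis in which the “model” is just a cheap analytic function standing in for a real simulation. With a real model, each of the samples below would be a full simulation, which is why adding parameters, or making each run more costly, quickly makes a thorough global analysis impractical.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_model(params: np.ndarray) -> float:
    """Stand-in for an expensive simulation: returns a single scalar output."""
    return params[0] ** 2 + 0.5 * params[1] + 0.1 * np.sin(5 * params[2])

n_params = 3
n_samples = 2000  # each sample would be one full model run in practice

# Sample the parameter space uniformly on [0, 1]^n_params.
samples = rng.uniform(size=(n_samples, n_params))
outputs = np.array([toy_model(p) for p in samples])

# Crude sensitivity measure: correlation of each parameter with the output.
for i in range(n_params):
    r = np.corrcoef(samples[:, i], outputs)[0, 1]
    print(f"parameter {i}: correlation with output = {r:+.2f}")

# If one model run took an hour, this would already be ~2000 CPU-hours,
# and a proper variance-based analysis needs far more runs than this.
```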

We could, of course, insist that policy makers only consider results from models that have undergone a full uncertainty and sensitivity analysis. The problem I can see here is that if policy makers ignore a model for this reason, and it turns out that maybe they should have considered it, I don’t think the public will be particularly satisfied with “but it hadn’t undergone a full uncertainty and sensitivity analysis” as a justification for this decision.

I don’t disagree with the basic suggestions in the manifesto, but I do think that some of what they propose just doesn’t really make sense. Also, the bottom line seems to be that modellers should be completely open and honest about their models and should be upfront about their model’s strengths and weaknesses. Absolutely. However, this shouldn’t just apply to modellers, it should really apply to anyone who is in a position where they’re providing information that may be used to make societally relevant decisions. I don’t think hubris is something that only afflicts modellers.

Posted in ClimateBall, Philosophy for Bloggers, Research, Scientists, Sound Science (tm), The philosophy of science, The scientific method | 73 Comments

Extreme event attribution and the nature-culture duality

I’ve been reading a paper by Shannon Osaka and Rob Bellamy called Weather in the Anthropocene: Extreme event attribution and a modelled nature–culture divide. I’ve written about event attribution before, and I’m largely in favour of the storyline approach; given that an event has occurred, how might climate change have influenced this event?
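
For context, the approach the storyline framing is usually contrasted with is what’s often called the “risk-based”, or probabilistic, approach: ask how the probability of exceeding some threshold differs between a world with and without anthropogenic forcing. A minimal sketch of that calculation, using entirely made-up ensemble output, might look something like this.

```python
import numpy as np

rng = np.random.default_rng(7)

# Made-up ensembles of, say, annual-maximum daily rainfall (mm) from a
# counterfactual ("natural forcings only") world and the factual world.
natural = rng.gumbel(loc=50.0, scale=10.0, size=5000)
actual = rng.gumbel(loc=55.0, scale=11.0, size=5000)

threshold = 90.0                      # the observed extreme event

p0 = (natural >= threshold).mean()    # probability in the counterfactual world
p1 = (actual >= threshold).mean()     # probability in the factual world

print(f"P0 = {p0:.4f}, P1 = {p1:.4f}")
print(f"Probability ratio P1/P0 = {p1 / p0:.1f}")
print(f"Fraction of attributable risk = {1 - p0 / p1:.2f}")
```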

This new paper is somewhat critical of extreme event attribution. There are a number of aspects that are considered, some of which I broadly agree with (although some I agree with because they’re true, not because they’re all that relevant – yes, event attribution relies on models, so what?). However, there was one in particular that I found very confusing. The claim is that in trying to separate the human influence from the natural variability of weather, extreme event attribution creates a new nature-culture divide.

Okay, but this is kind of the whole point of event attribution; we’ve gone from living in a world where atmospheric CO2 had been at around 280ppm for a very long time, to a world where it’s at 410ppm, and rising. We’d like to understand how this is impacting, and will continue to impact, climatic events. If you don’t like the terminology, you could change the framing, but the reason that atmospheric CO2 has gone from 280ppm to 410ppm is because we’ve been dumping CO2, and other greenhouse gases, into the atmosphere through our use of fossil fuels.

The paper suggests that

The danger of such approaches is that they might obscure, elide, or distract from the many other forms of causality: for example, human influence that cannot be modelled as greenhouse gas emissions, but which is instead enacted along axes of vulnerability, inequality, and other socio-political dimensions.

The problem here is that extreme event attribution typically tries to understand how the event might be different because of anthropogenic-driven climate change, rather than trying to understand how the actual impact of that event might be different. The above does complicate some events, such as flooding, but – as far as I’m aware – this is acknowledged. I’ll add that anthropogenically-driven climate change isn’t only due to greenhouse gas emissions, but this is the dominant factor.

The paper goes on to suggest that

Modellers could attempt to incorporate other aspects of risk, to make visible those aspects of causality that are currently elided.

I think it would be very good to consider how these other factors influence the impact of extreme weather events, but putting them into models (at least the ones used for event attribution) is extremely difficult. I also think that this is a slightly different question. Event attribution is typically trying to understand how the properties of a climatic event have been influenced by anthropogenically-driven global warming. How this then impacts communities, how we might respond, and the influence of other socio-political factors is a related, but different, issue.

These other factors are clearly important, but it’s not clear why they should really be considered by those doing event attribution studies. I will agree that those who do these studies should think a little about the implications of what they present, but – at the same time – we should be cautious of suggesting that scientists should take socio-political considerations into account. There is a difference between being careful about how you present your results because of socio-political sensitivities, and letting these socio-political sensitivities influence how you do your research.

One obvious concern with what is suggested in this paper is that if we don’t distinguish between natural and anthropogenic influences, how do we then avoid people simply concluding that it’s natural, or using this to argue that it’s natural? It’s bad enough that some already use the complexity of attribution studies to suggest that we still don’t know if climate change is influencing extreme weather events, without also blurring the distinction between natural and anthropogenic.

Of course, maybe I really just misunderstand what’s being suggested in this paper. If so, I’d be more than happy to have it clarified.

Posted in Climate change, Environmental change, Global warming, Philosophy for Bloggers, Severe Events, The philosophy of science, The scientific method | 138 Comments

Can climate sensitivity be really high?

The answer to the question in my post title is – unfortunately – yes. The generally accepted likely range for equilibrium climate sensitivity (ECS) is 2°C – 4.5°C. This doesn’t mean that it has to fall within this range; it means that it probably falls within this range. There is still a chance that it could be below 2°C and a chance that it could be greater than 4.5°C. However, there is a difference between it possibly being higher than 4.5°C and this being likely.

There’s been quite a lot of recent coverage of studies suggesting that the ECS may be higher than 5°C. My understanding is that one reason for this increase in the ECS in some climate models is an enhanced short-wavelength cloud feedback in these models. There are also some indications that these high-sensitivity models do a better job of representing some of the cloud processes than was the case for the earlier generation of models.

However, there are also indications that the high-sensitivity models struggle to fit the historical temperature record, and that lower sensitivity models (at least in terms of the transient climate responses) better match some observational constraints. As I understand it, it’s also difficult to reconcile these very high climate sensitivity estimates with paleoclimatological constraints.

So, I think it is interesting, and somewhat concerning, that some of the newest generation of climate models are suggesting that the equilibrium climate sensitivity (ECS) could be higher than 5°C. That these newer models seem to also represent some relevant processes better than the previous generation does provide some indications that it could indeed be that high. However, it’s also possible that these models are poorly representing some other processes that may be unrealistically inflating their ECS values.

Given that there are also other lines of evidence suggesting that the ECS is unlikely to be as high as 5°C, I think we should be cautious of accepting these high ECS estimates just yet. It’s worth being aware that it could be this high, but I don’t think it’s yet time to abandon the assessment that the ECS likely falls between 2°C and 4.5°C, with it probably lying somewhere near 3°C.

Links:

Climate worst-case scenarios may not go far enough, cloud data shows – Guardian article about the new high climate sensitivity studies.
Short-term tests validate long-term estimates of climate change – Nature article about a recent study that tested one of these high climate sensitivity models.
CMIP6 – some of my recent posts about the newest generation of climate models.

Posted in Climate change, Climate sensitivity, Research, Science, The philosophy of science | 36 Comments

Mitigation, adaptation, suffering

I’ve been struggling, more than usual, to find things to write about. Everything seems to just be a bit of a mess. The pandemic itself, how it’s been handled in some cases, and the protests in the USA, especially how the protestors are being treated by the police. I just don’t feel that I really have the words to describe what’s currently happening in a way that would do it justice.

However, given that it’s been rather quiet here, I thought I would just highlight one paper that I found interesting, and useful. The lead author is Flavio Lehner, and the paper is called Partitioning climate projection uncertainty with multiple large ensembles and CMIP5/6. The paper seems to be open access, so I don’t need to say too much. Essentially, it uses ensembles of models to estimate the sources of uncertainty in climate projections and their magnitudes.

One of the key figures is below. It shows 3 different model ensembles, their global surface temperature projections for different scenarios, and – finally – the fractional contribution of each source to the total uncertainty. The key results are that for long timescales (many decades) internal variability contributes little to the total uncertainty (essentially, it averages out), and the largest source of uncertainty is scenario uncertainty (i.e., how much we are going to emit). A similar result is obtained if you consider changes in global mean precipitation.

Credit: Lehner et al. (2020)

Although the model uncertainty (defined as uncertainty in the forced response and structural differences between models) is not negligible, it’s clear that a dominant source of uncertainty essentially relates to what we do (i.e., how much we emit). I realise that using “we” is a bit simplistic, since a small fraction of the world’s population dominates the emissions budget, but it’s still clear that future climate change, and what we will have to deal with, depends mostly on future emissions. This is something that we can influence, even if determining how we do so is not trivial. I also realise that some might argue that the scenario uncertainty is somewhat smaller than indicated in this paper, since some of the scenarios are much less likely than others. Although true, I don’t think it really changes the basic message.
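
For anyone curious about what this kind of partitioning involves, here is a minimal sketch of a variance decomposition over made-up projections (in the spirit of the Hawkins & Sutton-style partitioning this kind of analysis builds on). The real analysis is, of course, considerably more careful than this.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up projections: temperature anomaly at some future date for
# n_models models, n_scenarios scenarios and n_members ensemble members.
n_models, n_scenarios, n_members = 10, 3, 20
scenario_means = np.array([1.5, 2.5, 4.0])            # scenario spread
model_offsets = rng.normal(0.0, 0.4, size=n_models)   # model spread
projections = (scenario_means[None, :, None]
               + model_offsets[:, None, None]
               + rng.normal(0.0, 0.2, size=(n_models, n_scenarios, n_members)))

# Simple variance decomposition (ignoring interaction terms):
internal = projections.var(axis=2).mean()               # spread across members
model = projections.mean(axis=2).var(axis=0).mean()     # spread across models
scenario = projections.mean(axis=2).mean(axis=0).var()  # spread across scenarios
total = internal + model + scenario

for name, value in (("internal", internal), ("model", model), ("scenario", scenario)):
    print(f"{name:>8} variability: {value / total:.0%} of total variance")
```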

I have noticed some discussion about how we tend to focus on emission reductions while neglecting adaptation. There’s some truth to this, and we will certainly have to develop some adaptation strategies to deal with the changes that are now unavoidable. However, until we get net emissions to ~zero, the climate will continue to change, as will the adaptation required. I still think that John Holdren’s comment that [w]e basically have three choices: mitigation, adaptation and suffering. We’re going to do some of each. The question is what the mix is going to be. The more mitigation we do, the less adaptation will be required and the less suffering there will be, is worth bearing in mind.

Links:
Partitioning climate projection uncertainty with multiple large ensembles and CMIP5/6, paper by Flavio Lehner et al.
Mitigation, adaptation and suffering – short post by the late Andy Skuce, where I found the John Holdren quote.

Posted in advocacy, Climate change, Climate sensitivity, Philosophy for Bloggers | 41 Comments

Across the lines

I haven’t really come across anything to write about recently. I’ve been thinking a bit about models and how they are used to inform decision making. I’ve been thinking a bit about the use of scientific advice. I also had an interesting discussion on Twitter with Jean Goodwin about what to expect from scientists who engage publicly. I may write a bit more about this at some stage, but my concern with this is that we should – in my view – be careful of constructing a narrative that then allows people to blame scientists if the decisions we end up making aren’t regarded as optimal (I will admit that the current situation has confused me somewhat).

For some reason, I’ve been listening to quite a lot of Tracy Chapman recently. I remember listening to her music a lot many years ago, and only came across it again recently. The song below seems quite apt, unfortunately.

Posted in advocacy, Personal, Philosophy for Bloggers, Policy, Politics | 20 Comments

The Imperial College code

The Imperial College code, the results from which are thought to have changed the UK government’s coronavirus policy, has been available for a while now on github. Since being made available, it’s received criticism from some quarters, as discussed by Stoat in this post. The main criticism seems to be that if you run the model twice, you don’t get exactly repeatable results.

As Stoat points out, this could simply be due to parallelisation; when you repeat a simulation the processors won’t necessarily return their results in the same order as before. However, it could also be due to other factors, like not quite using the same random number seed. These simulations are intended to be stochastic. The code uses random numbers to represent the probability of an outcome given some event (for example, a susceptible person contracting the virus if they encounter an infected person). Different runs won’t produce precisely the same results, but the general picture should be roughly the same (just like the difference between weather and climate in GCMs).
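
As a toy illustration of this point (nothing to do with the actual Imperial College code), here is a minimal stochastic, chain-binomial SIR-type model: repeating a run with the same seed gives identical numbers, while different seeds give slightly different numbers but essentially the same epidemic curve.

```python
import numpy as np

def stochastic_sir(seed: int, n: int = 100_000, beta: float = 0.25,
                   gamma: float = 0.1, days: int = 200) -> int:
    """Very simple chain-binomial SIR model; returns the peak number infected."""
    rng = np.random.default_rng(seed)
    s, i, r = n - 10, 10, 0
    peak = i
    for _ in range(days):
        p_inf = 1.0 - np.exp(-beta * i / n)   # chance a susceptible is infected today
        new_inf = rng.binomial(s, p_inf)
        new_rec = rng.binomial(i, 1.0 - np.exp(-gamma))
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

print(stochastic_sir(seed=1), stochastic_sir(seed=1))  # identical: same seed
print(stochastic_sir(seed=2), stochastic_sir(seed=3))  # differ slightly: different seeds
```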

For a while now I’ve been playing around with the Imperial College code. I should be clear that I’m not an epidemiologist and I haven’t delved into the details of the code. All I’ve been doing is seeing if I can largely reproduce the results they presented in the first paper. The paper gives much more detail about the code than I intend to reproduce here. However, it is an individual-based model in which individuals reside in areas defined by high-resolution population density data. Census data were used to define the age and household size distributions, and contacts with other individuals in the population are made within the household, at school, in the workplace and in the wider community.

I’ve run a whole suite of simulations, the results of which are shown on the right. It shows the critical care beds, per 100,000 of the population, occupied under different scenarios. If you’ve downloaded the paper, you should see that this largely reproduces their Figure 2, although I did have to adjust some of the parameters to get a reasonable match. The different scenarios are Do nothing, Case Isolation (CI), Case Isolation plus Household Quarantine (CI + HQ), Case Isolation, Household Quarantine plus Social Distancing of the over 70s (CI + HQ + SD70), and Place Closures (PC). To give a sense of the severity, the UK has just under 10 ICU beds per 100,000 of population.

I’ve also included (dashed line) the scenario where you impose Case Isolation, Place Closure (Schools and Universities) and general Social Distancing for 150 days (which they show in their Figure 3). As you can see, this really suppresses the infection initially, but there is a large second peak when the interventions are lifted. This, of course, is what is concerning people at the moment: will the lifting of the lockdown in some parts of the UK lead to a second wave?

So, I seem to be able to largely reproduce what they presented in the paper. This doesn’t really say anything about whether or not the results are reasonable representations of what might have been expected, but it’s a reasonable basic test. I will add, though, that there are a large number of parameters and I can’t quite work out how to implement the somewhat more dynamic intervention strategies.

Something else I wanted to add is that I’ve also played around with some other codes, including a simple SIR code, a SEIR code, and one that included an age distribution and a contact matrix. Whatever you might think of the Imperial College code, all of the models seem to suggest that without some kind of substantive intervention, we would have overrun the health service.
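
For what it’s worth, here is the sort of simple deterministic SEIR model I have in mind, with my own illustrative parameter choices (an R0 of about 2.4, and an assumed 2% of infections needing critical care). Even this crude “do nothing” scenario puts peak critical-care demand far above the roughly 10 ICU beds per 100,000 people mentioned above.

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma):
    """Standard deterministic SEIR equations (fractions of the population)."""
    s, e, i, r = y
    return [-beta * s * i, beta * s * i - sigma * e, sigma * e - gamma * i, gamma * i]

# Illustrative parameters: R0 ~ 2.4, ~5 day incubation, ~5 day infectious period.
beta, sigma, gamma = 2.4 / 5.0, 1.0 / 5.0, 1.0 / 5.0
y0 = [1.0 - 1e-5, 1e-5, 0.0, 0.0]
t = np.linspace(0, 365, 366)
s, e, i, r = odeint(seir, y0, t, args=(beta, sigma, gamma)).T

icu_fraction = 0.02  # assumed (illustrative) share of infections needing critical care
peak_icu_per_100k = i.max() * icu_fraction * 100_000
print(f"Peak critical-care demand: ~{peak_icu_per_100k:.0f} per 100,000 "
      f"(vs roughly 10 ICU beds per 100,000 in the UK)")
```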

Posted in Policy, Research, Scientists, The scientific method | 609 Comments