Should climate scientists admit failure?

Hopefully my readers will recognise Betteridge’s Law. James Dyke highlighted an article on Twitter that suggested that [t]he climate crisis demands new ways of thinking – scientists should be first to admit failure and move on. The suggestion is that …

… in some strange way, and despite the warnings over the past decades of many individuals such as Roger Revelle, Jim Hansen, Kevin Anderson, to name but a few – it appears the latest generation of protesters – from Fridays for Future to Extinction Rebellion – have done far more to hammer home the real message that climate crisis cannot be taken lightly, and is urgently and ultimately a most horrifying question of life and death.

As much as the above seems to be true, it was never really the job of scientists to hammer home the real message that climate crisis cannot be taken lightly. It was always going to require some other group (which may also have included scientists) to do this. It’s maybe a pity that it’s taken this long, and a pity that we’re relying on school children, rather than stepping forward ourselves, but I don’t think this is really the fault of scientists.

Scientists have been speaking out for a long time, and there have been numerous reports highlighting the issue. In many respects, this has been remarkably successful; most countries of the world have accepted the science of climate change and the need to act. That this hasn’t produced any particularly meaningful action is hardly the fault of those who’ve been presenting the information.

I don’t, however, disagree with the article entirely. I’m sure there are things that could have been more effective. However, I suspect those are likely to have been marginal; I really don’t think the position we’re in today is because scientists didn’t speak out forcefully enough. I also think there is some pressure for scientists to rationalise things and to try to not sound alarmed or concerned. Some of this can be healthy scientific reticence, but maybe this does go too far in some cases.

I also broadly agree with the suggestion that we should rely more on the expertise of social anthropologists, historians, psychologists, and political and social activists. However, I don’t think that the reason we’re in the position we’re in today is because we haven’t done so sufficiently yet. Our lack of substantive action is not because we haven’t had a good idea of what we should do, it’s mostly – in my view – because we haven’t really wanted to do anything. I think it would be good to broaden the range of expertise involved in looking at this issue, but I don’t think this will be some kind of panacea.

Although I partly agree with the general suggestion that climate scientists shouldn’t be seen as having the superior expertise, I don’t think that this is why we have been so stunningly unable to react to climate change. That’s because of the powerful influences who have actively worked against climate action, and because of policy makers who were not really willing to take things as seriously as they probably should have. I think things are starting to change, but I do think we have to be really careful of suggesting that the reason we’ve reacted so poorly to climate change is because climate scientists have failed.


IAMs – Open Thread

There’s been an interesting debate about IAMs. IAMs are Integrated Assessment Models that are used to develop mitigation pathways. In this article, Kevin Anderson argues that IAMs are simply the wrong tools for the job, while Jessica Jewell clarifies the role of IAMs, suggesting that they play a central part in the climate debate. Given that things have been somewhat quiet here, I thought it would be interesting to see if we could get a discussion going about this issue.

I wanted to just highlight two comments that caught my eye. Kevin Anderson says

The algorithms embedded in these models assume marginal changes near economic equilibrium, and are heavily reliant on small variations in demand that result from marginal changes in prices.

As I understand it, models of socio-economic systems are largely empirical; the system is too complicated to describe via fundamental equations, and so the models are based on what has happened in the past. I think it’s possible to run these models under scenarios that differ greatly from what we’ve experienced before, but I suspect they’re mostly valid when considering how we, and our economies, would respond to small perturbations. Given what would be required to achieve some of our goals, it’s not really clear that such models are all that useful.

Jessica Jewell, however, argues that IAMs are used to answer what-if questions about the future consequences of decisions or developments. I think this is a perfectly valid argument. All models are essentially wrong, but can still be very useful for understanding how a system might evolve.

However, Jessica Jewell also says

The challenge of evaluating feasibility is that the pathways are constrained not only by economic costs and technical complexity, but also by socio-political acceptability.

This is something I’ve always struggled with. Of course, we do need to be aware of socio-political reality, but physical reality doesn’t really care about what we regard as socio-politically acceptable. To achieve some of our climate targets would require doing things that are essentially unprecedented (for example, changing our entire energy infrastructure in a matter of decades). How do we do so if we limit ourselves to doing things that we’re comfortable doing? Of course, I’m not suggesting that we consider doing things that would be regarded as morally unacceptable, but it seems likely that we may have to consider pathways that will change our lifestyles in ways that we may not be entirely happy about.

Of course, maybe socio-politically acceptable includes pathways that we would regard as socio-politically uncomfortable, but this never seems clear to me. It’s also quite possible that I simply don’t really understand IAMs and that they are a perfectly suitable tool for helping us determine mitigation pathways. However, there just seems to be a disconnect between economists suggesting optimal pathways that would lead to around 3.5°C of warming, and scientists who are suggesting that it’s imperative that we limit warming to 1.5°C.

As usual, I’ve written too much; the idea was to try and provoke a discussion. I’ve also tried to be slightly provocative, so feel free to challenge this. It would certainly be interesting to hear from someone who thinks IAMs really are useful tools for developing mitigation pathways.

Update:
I should probably have made clearer that there are essentially two types of IAMs (which I have found rather confusing, and may still not fully understand). The ones being discussed in the debate between Kevin Anderson and Jessica Jewell are – I think – ones that try to actually model the evolution of the energy system and other GHG-emitting systems and are (I think) coupled to a simple climate model. The other type has simplified relationships between the various factors (economic growth, costs of mitigation, damages due to climate change, etc.) and is used for cost-benefit analyses. The one that suggested an optimal pathway leading to warming of 3.5°C was one of the latter, the DICE model developed by William Nordhaus. Thanks to Steve Forden for pointing this out.

Links:
Debating the bedrock of climate-change mitigation scenarios – Nature article with the debate between Kevin Anderson and Jessica Jewell.
The human imperative of stabilizing global climate change at 1.5°C – article by some of the authors of the IPCC SR1.5 report.
How Two Wrongs Make a Half-Right – 2013 post by Michael Tobis highlighting some potential issues with IAMs.
A Review of Criticisms of Integrated Assessment Models and Proposed Approaches to Address These, through the Lens of BECCS – a paper reviewing the criticisms of IAMs.


Tortoise ThinkIn

I’m just back from a Tortoise ThinkIn. If you don’t know what that is, I didn’t either until this evening, and I’m still not sure I quite get it. According to this, it’s about building a different kind of newsroom. There are events, called ThinkIns, in which people can discuss a specific topic, which then – as I understand it – determine future news items. It’s meant to be slow news; they take time to see the fuller picture, to make sense of the forces shaping our future, to investigate what’s unseen.

The topic of the event I went to was [w]hat can this generation do about global warming? There was a panel that included someone from the Green party, someone from Greenpeace, an engineer, and – I think – one of the editors from Tortoise.

Even though there is a panel, it’s mainly meant to guide the discussion, rather than dominate it. The audience is meant to participate, but not by asking questions; they’re meant to express their views on the topic. On the screen behind the panel is a tortoise logo that moves from left to right. When it gets to the right-hand side, the discussion is meant to wrap up.

The event I went to was okay, but – as is probably usual – the discussion was dominated by a few, quite vocal, people. Nobody said anything that I really disagreed with, but no one said anything particularly enlightening. I left feeling rather underwhelmed.

It also turns out that the business model is that people become members of the organisation, which entitles them to attend a number of these ThinkIns and gives them access to a slow news feed, editorial briefing emails, and a quarterly printed copy of the journal. Although I realise that we do need to find a way to properly fund the news media, this seemed rather expensive. I realise, though, that there is merit to thinking about innovative new media outlets, and also about how the public might engage with the media. I’m not convinced, though, that this is it. What do I know, though; I just write a blog 🙂 . Maybe others have had better experiences than I had.


Moderation at The Conversation

The Conversation has a new set of moderation policies which is motivated by a desire to improve [their] climate change coverage. It involves a zero-tolerance approach to moderating climate change deniers and sceptics. Not only will their comments be removed, but their accounts will be locked.

I have to admit a slight negative bias towards moderation at The Conversation. This is mostly because I had my comments removed and my account locked. This was – I think – because I was violating one of their comment policies by not using a real name. Fair enough, it is one of their policies. However, it did happen just as I added my actual name to my Twitter profile (which was how I was logging in to my account at The Conversation), and also happened at about the same time as I was involved in a discussion with a well-known “skeptic” who was using multiple identities in a single comment thread. It also came without any warning and appears to be final.

The motivation behind requiring real names is to maintain a transparent forum. The problem with this is that anyone can make up a real name; it all seems rather pointless if there isn’t some way to actually check that this really is someone’s actual name. An identifiable pseudonym seems as transparent as someone with virtually no online profile who happens to be using a real name (their own, or otherwise). Anyway, their site, their rules.

As far as their new moderation rules go, I’m somewhat uncertain as to whether or not these are a good idea. I’m all for strong moderation, and I think the help I’ve had on this site has improved the comment threads. However, I don’t simply delete comments and ban people because they’re climate change deniers/sceptics. It normally requires a series of comments that don’t conform to the moderation policy, or comments policy.

If the plan is to simply delete the comments, and lock the accounts, of climate change deniers/sceptics, who gets to decide? How do you avoid banning those who really are trying to engage in good faith, but are saying things that make them sound like they’re “sceptics”? I know it’s unlikely, but maybe some people are actually trying to learn something, rather than be disruptive. Also, where do you draw the line? I think there is a difference between someone who disputes that CO2 is a greenhouse gas, and someone who thinks climate sensitivity is probably going to be low. It’s also possible to agree about the science and disagree about what we should do. How are they defining a climate change denier/sceptic?

I think arguing about the science is often pointless, but there are certainly some things worth discussing, and we should be able to discuss the possible policy responses, even with those with whom we largely disagree. So, I tend to think that this new moderation policy at The Conversation is not well thought out and will probably backfire. On the other hand, having spent a number of years running quite an active blog, I’m well aware of how difficult moderation can be. So, maybe it is worth trying something like this. Wait and see, I guess. Of course, since I can’t comment there, I’m not going to be too bothered one way or the other 🙂


Potentially habitable?

The exciting news in astronomy is the discovery of water in the atmosphere of a relatively small planet, known as K2-18b, that happens to lie in what we often refer to as the habitable zone of its parent star. The result was reported in this Nature Astronomy paper, and also in this arXiv paper that appeared on the same day. I know some of the authors of both papers, have published with some of the authors of the second paper, and am currently working with one of the authors of the first.

The reason this is such a fascinating result is that it’s the detection of water in the atmosphere of an exoplanet, the exoplanet itself is actually quite small, and it lies in a region often referred to as the habitable zone. That, though, really just means that conditions could be suitable for the existence of liquid water.

Credit: Rice et al. (2019)

However, some of the coverage has been less than satisfactory. It’s being presented as a planet that could support life. The paper itself, however, reports that it has a mass of M_p = 8.4 \pm 1.4 Earth masses and a radius of R_p = 2.37 \pm 0.22 Earth radii. You can actually put this onto a mass-radius diagram, which I discovered I’d essentially already done. The figure on the right is a mass-radius diagram with all known exoplanets with masses below 20 Earth masses and radii below 2.75 Earth radii. You can clearly see the location of K2-18b, which I’ve also highlighted.

The dashed curves are composition curves. Those shown in the figure include one for a planet that would be 100% iron, one for a planet with an Earth-like composition (~25% iron, ~75% silicates), one for a planet that would be entirely silicates, curves for planets in which water would make up a substantial fraction of their mass, and one for a planet with a hydrogen-rich atmosphere making up about 1% of the planet’s mass. You can see that K2-18b lies somewhere in the region where it would either have a substantial amount of water (~50%) or it would have a reasonably substantial (~1%, or more) hydrogen-rich atmosphere. In neither case would this be a planet that would typically be regarded as habitable.

Furthermore, the observations are transit observations at different wavelengths. In other words, what we’re observing is the planet as it passes between us and its host star, and comparing the amount of light blocked at different wavelengths. Given that these differences must be due to the atmosphere blocking different amounts of light at different wavelengths, we can use this to say something about the atmospheric composition. However, the signal depends strongly on the scale height of the atmosphere, which is larger for a lighter atmosphere (one that contains substantial amounts of hydrogen) than it is for a heavier atmosphere (one that is predominantly water vapour, for example). The paper itself makes clear that a non-negligible fraction of the atmosphere must be hydrogen and helium.
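To get a feel for why the scale height matters so much, here’s a rough sketch comparing a hydrogen-dominated atmosphere with a water-vapour-dominated one, using the reported mass and radius. The ~270 K equilibrium temperature and the mean molecular weights are round numbers I’ve assumed for illustration, not values taken from the papers.

```python
# Rough scale-height comparison for K2-18b. The equilibrium temperature and
# mean molecular weights below are assumed round numbers, not from the papers.
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
k_B = 1.381e-23      # Boltzmann constant [J K^-1]
m_H = 1.673e-27      # mass of a hydrogen atom [kg]
M_earth = 5.972e24   # Earth mass [kg]
R_earth = 6.371e6    # Earth radius [m]

M_p = 8.4 * M_earth   # reported mass
R_p = 2.37 * R_earth  # reported radius
T_eq = 270.0          # assumed equilibrium temperature [K]

g = G * M_p / R_p**2  # surface gravity [m s^-2]

def scale_height_km(mu):
    """Isothermal scale height H = k_B T / (mu m_H g), in km."""
    return k_B * T_eq / (mu * m_H * g) / 1e3

for label, mu in [("hydrogen/helium-dominated", 2.3), ("water-vapour-dominated", 18.0)]:
    print(f"{label:26s} mu = {mu:4.1f}  H ~ {scale_height_km(mu):5.1f} km")
```

With these assumptions, the hydrogen-rich case gives a scale height of roughly 65 km, compared with under 10 km for the water-vapour case, which is why a light, extended atmosphere is so much easier to detect in transit.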

Additionally, a paper published in early 2019 has suggested that the radius of K2-18b is actually R_p = 2.711 \pm 0.065 Earth radii, which would put K2-18b into a region of mass-radius space that suggests it almost certainly retains a substantial hydrogen-rich atmosphere; essentially, it’s a mini-Neptune. Maybe life could exist in the upper regions of such a planet’s atmosphere, but this is almost certainly not what most people would think of when they hear that a planet is potentially habitable.
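As a quick sanity check, you can also just compute the bulk density implied by the reported mass and the two radius estimates. This is only a back-of-the-envelope sketch using the numbers quoted above.

```python
# Back-of-the-envelope bulk densities for K2-18b, using the reported mass and
# the two published radius estimates quoted in the post.
import math

M_earth_g = 5.972e27   # Earth mass [g]
R_earth_cm = 6.371e8   # Earth radius [cm]
rho_earth = 5.51       # Earth's mean density [g cm^-3]

M_p = 8.4              # planet mass [Earth masses]

for label, R_p in [("Nature Astronomy radius", 2.37), ("early-2019 radius", 2.711)]:
    volume = (4.0 / 3.0) * math.pi * (R_p * R_earth_cm) ** 3
    rho = M_p * M_earth_g / volume
    print(f"{label:24s} R = {R_p:5.3f} R_earth  rho ~ {rho:4.2f} g/cm^3 "
          f"({rho / rho_earth:4.2f} x Earth)")
```

Both values come out well below Earth’s ~5.5 g/cm^3, which is consistent with a substantial water fraction and/or an extended hydrogen-rich envelope, rather than an Earth-like rocky world.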

You might think that this is mostly a problem with the media over-hyping a news story. But, no, it appears to be the narrative presented in the university press release and in the lead author’s article in The Conversation. It would be good if the media were to talk to researchers not involved in the actual study, but it’s hard to blame them for presenting a story that is similar to what is being presented by the researchers themselves.

I think that this is all rather unfortunate. This kind of result is very exciting without needing to present a narrative that is probably not true. We also live in an era where it would seem important to not provide more ammunition for those who would like to undermine the public’s trust in experts.

Unfortunately, though, I suspect that these kinds of over-hyped stories are likely to continue happening. In the context of habitability on an exoplanet, the public and the media should be dubious of any such claims for at least the next few years, if not longer. These kinds of observations are very difficult and, although the result presented here is impressive, we can still only do this for planets that are quite a bit larger than the Earth. This doesn’t mean that such planets can’t be habitable, but it’s almost certainly not going to be the kind of life that people think of when they hear such claims. I think it’s important to be clear about this when presenting such results.


Propagation of nonsense – part II

I thought I would look again at Pat Frank’s paper that we discussed in the previous post. Essentially Pat Frank argues that the surface temperature evolution under a change in forcing can be described as

\Delta T(K) = f_{CO2} \times 33K \times \left[ \left( F_o + \sum_i \Delta F_i\right) / F_o \right] + a,

where f_{CO2} = 0.42 is an enhancement factor that amplifies the GHG-driven warming, F_o = 33.946 W m^{-2} is the total greenhouse gas forcing, \Delta F_i is the incremental change in forcing, and a is the unperturbed temperature (which I’ve taken to be 0).

Pat Frank then assumes that there is an uncertainty, \pm u_i, that can be propagated in the following way

u_i = f_{CO2} \times 33K \times 4 Wm^{-2}/F_o,

which assumes an uncertainty in each time step of 4 Wm^{-2} and leads to an overall uncertainty that grows with time, reaching very large values within a few decades.
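Just to put some numbers on this, the sketch below plugs the values above into the expression for u_i and combines the annual steps in quadrature, which is my reading of how the envelope is built up.

```python
# Per-step uncertainty from the expression above, combined in quadrature over
# annual steps (my reading of how the envelope is constructed).
import math

f_co2 = 0.42   # enhancement factor
F_o = 33.946   # total greenhouse-gas forcing [W m^-2]
u_F = 4.0      # assumed per-step forcing uncertainty [W m^-2]

u_step = f_co2 * 33.0 * u_F / F_o   # per-step temperature uncertainty [K]
print(f"per-step uncertainty: +/- {u_step:.2f} K")

for years in (10, 50, 100):
    u_total = u_step * math.sqrt(years)   # root-sum-square over annual steps
    print(f"after {years:3d} years: +/- {u_total:5.1f} K")
```

With these numbers, the per-step uncertainty is about ±1.6 K and the envelope exceeds ±10 K within about 50 years, which is essentially the behaviour described above.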

Since I’m just a simple computational physicist (who clearly has nothing better to do than work through silly papers) I thought I would code this up. That way I can simply run the simulation many times to try and determine the uncertainty. Since it’s not quite clear which term the uncertainty applies to, I thought I would start by assuming that it applies to F_o. However, F_o is constant in each simulation, so I simply randomly varied F_o by \pm 4 Wm^{-2}, assuming that this variation was normally distributed. I also assumed that the change in forcing at every step was \Delta F_i = 0.04 Wm^{-2}.

The result is shown in the figure on the upper right. I ran a total of 300 simulations, and there is clearly a range that increases with time, but it’s nothing like what is presented in Pat Frank’s paper. This range is also really a consequence of the variation in F_o ultimately being a variation in climate sensitivity.

The next thing I can do is assume that the \pm 4 Wm^{-2} applies to \Delta F_i. So, I repeated the simulations, but added an uncertainty to \Delta F_i at every step by randomly drawing from a normal distribution with a standard deviation of 4 Wm^{-2}. The result is shown on the left and is much more like what Pat Frank presented; an ever growing envelope of uncertainty that produces a spread with a range of \sim 40 K after 100 years.
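For anyone who wants to play with this, below is a minimal sketch of the two sets of simulations described above. The 300 runs, the normally distributed \pm 4 Wm^{-2}, and the 0.04 Wm^{-2} annual forcing increment are as described in the post; the exact implementation details are my own assumptions.

```python
# A minimal sketch of the two sets of simulations described above: 300 runs of
# the simple emulator, first with F_o perturbed once per run, then with
# +/- 4 W m^-2 noise added to the forcing increment at every annual step.
import numpy as np

f_co2, F_o, a = 0.42, 33.946, 0.0
dF_per_year = 0.04          # annual forcing increment [W m^-2]
n_years, n_runs = 100, 300
rng = np.random.default_rng(0)

def delta_T(F_o_eff, dF_cumulative):
    """Temperature from the emulator equation in the post."""
    return f_co2 * 33.0 * ((F_o_eff + dF_cumulative) / F_o_eff) + a

years = np.arange(1, n_years + 1)

# Case 1: vary F_o once per run by a normally distributed +/- 4 W m^-2.
F_o_samples = rng.normal(F_o, 4.0, size=n_runs)
case1 = np.array([delta_T(F, years * dF_per_year) for F in F_o_samples])

# Case 2: add normally distributed noise (sigma = 4 W m^-2) to each annual
# forcing increment, then accumulate the increments.
noisy_dF = rng.normal(dF_per_year, 4.0, size=(n_runs, n_years))
case2 = delta_T(F_o, np.cumsum(noisy_dF, axis=1))

for label, runs in [("vary F_o", case1), ("noise on each dF_i", case2)]:
    sigma = runs[:, -1].std()
    print(f"{label:20s} std of Delta T after {n_years} years: ~{sigma:5.1f} K")
```

In this sketch, the first case produces a spread of well under a Kelvin after 100 years, while the second produces a standard deviation of roughly 16 K, i.e. an envelope spanning several tens of Kelvin.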

Given that in any realistic scenario the annual change in radiative forcing is going to be much less than 1 Wm^{-2}, Pat Frank is essentially assuming that the uncertainty in this term is much larger than the term itself. I also extracted 3 of the simulation results, which I plot on the right. Remember that in each of these simulations the radiative forcing is increasing by 0.04 Wm^{-2} per year. However, according to Pat Frank’s analysis, the uncertainty is large enough that even if the radiative forcing increases by 4 Wm^{-2} in a century, the surface temperature could go down substantially.

Pat Frank’s analysis essentially suggests that adding energy to the system could lead to cooling. I’m pretty sure that this is physically impossible. Anyway, I think we all probably know that Pat Frank’s analysis is nonsense. Hopefully this makes that a little more obvious.


Propagation of nonsense

A couple of years ago, I had a guest post about Pat Frank’s suggestion that the propagation of errors invalidates climate model projections. The guest post mainly highlighted a very nice video that Patrick Brown had produced to explain the problems with Pat Frank’s suggestion. You can watch the video in my post, or on Patrick Brown’s post.

Pat Frank has, after many rejections, managed to get his paper published. If you want to understand the problems with this paper, I suggest you watch Patrick Brown’s video, and read the comments on my post and on Patrick’s post. Nick Stokes also has a new post about this that is also worth reading.

However, I’ll briefly summarise what I think is the key problem with the paper. Pat Frank argues that there is an uncertainty in the cloud forcing that should be propagated through the calculation, and which then leads to a very large, and continually growing, uncertainty in future temperature projections. The problem, though, is that this is essentially a base state error, not a response error. A base state error means that we can’t accurately determine the base state; there is a range of base states that would be consistent with our knowledge of the conditions that led to this state. However, these base state errors do not cause the uncertainty to grow with time.

As Gavin Schmidt pointed out when this idea first surfaced in 2008, it’s like assuming that if a clock is off by about a minute today, then tomorrow it will be off by two minutes, and in a year it will be off by 365 minutes. In reality, the errors over a long time are completely unconnected with the offset today.
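To make the clock analogy a little more concrete, here’s a toy illustration (with purely made-up numbers) contrasting a constant base-state offset with the kind of treatment in Pat Frank’s paper, where essentially the same error is re-drawn and accumulated every year.

```python
# Toy illustration of the clock analogy: a constant base-state offset does not
# grow with time, whereas accumulating the same error every step (as in the
# propagation argument) produces a random walk. Numbers are purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_years = 100
true_trend = 0.02 * np.arange(n_years)   # some underlying warming [K]

# Base-state error: the model starts offset by ~1 K, but the offset is constant.
offset = rng.normal(0.0, 1.0)
base_state_error = (true_trend + offset) - true_trend

# Propagated error: a ~1 K error is re-drawn and accumulated every year, as if
# it were an independent random error at each step.
accumulated_error = np.cumsum(rng.normal(0.0, 1.0, n_years))

print(f"base-state error after 100 years:  {abs(base_state_error[-1]):.1f} K (constant)")
print(f"accumulated error after 100 years: {abs(accumulated_error[-1]):.1f} K "
      f"(grows roughly like sqrt(t))")
```

The constant offset simply stays put, while the accumulated version grows roughly like the square root of the number of steps, which is the behaviour that the propagation argument mistakenly attributes to base-state errors.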

Maybe the most surprising thing about the publication of this paper is that the reviewers (who are named) both seem to be quite reasonable choices. It seems highly unlikely that they missed the obvious issues with this paper. Did it get published despite their criticisms? Did they eventually just give up and decide it wasn’t worth arguing anymore? Or, did someone decide that this was something that should play out in the literature? I think the latter can sometimes be a reasonable outcome, but only if the paper has something that’s actually interesting, even if it is wrong. Pat Frank’s paper really doesn’t qualify; it’s simply wrong, and not even in an interesting way.
