Outgoing longwave radiation

Something that often strikes me is that when I think I understand something quite well, there often turns out to be an aspect that I haven’t understood particularly well. I sometimes think that this can be an important thing to realise; just because you think you have a good understanding of something doesn’t always make it so.

The context here is that a typical way to explain global warming is to point out that adding a greenhouse gas to the atmosphere reduces the outgoing longwave radiation (OLR) flux, and that warming of the surface and troposphere then returns the system to energy balance. This tends to suggest that the OLR will go down when the greenhouse gases are added, and then slowly recover as the system returns to equilibrium.

Credit: Dewitte & Clerbaux (2018)

A particularly persistent climate “skeptic” has, on a number of occasions, promoted the figure on the right, which is from this paper. Because it shows a larger increase in OLR than would be expected if global warming simply involved the OLR recovering, it has been suggested that this disproves anthropogenic global warming (AGW).

As indicated at the beginning of this post, I was also somewhat confused by this. However, thanks to a comment from Chris Colose on this Realclimate post, I’ve cleared up my confusion. Chris highlights this paper, which points out that the response to an initial external perturbation (such as increasing atmospheric CO2) involves both longwave and shortwave feedbacks.

Credit: Donohoe et al. (2014)

In fact, if we increase atmospheric CO2, the expectation is that the shortwave feedback will lead to an increase in absorbed shortwave radiation (ASR). As illustrated by the figure on the left, this means that the OLR will end up increasing to above the level it had prior to the increase in atmospheric CO2. Indeed, it potentially recovers in less than 20 years, which means that the subsequent warming is due to the increased ASR. To be clear, this does not mean that global warming isn’t due to increased atmospheric CO2, since the increased ASR is a response (feedback) to this increased atmospheric CO2.
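To see why the OLR can end up above its pre-forcing value, it may help to look at a minimal zero-dimensional energy-balance sketch in Python. All the parameter values below are my own illustrative assumptions, not numbers from the papers above; the point is simply that with a positive shortwave feedback, the new equilibrium has a higher ASR, and hence a higher OLR, than before the forcing was applied.

```python
# A zero-dimensional energy-balance sketch (all parameter values are
# illustrative assumptions). C dT/dt = ASR(T) - OLR(T): a CO2 forcing F
# initially reduces the OLR, the longwave response restores it as the
# system warms, and a positive shortwave feedback increases the ASR.

C = 8.0        # effective heat capacity, W yr m^-2 K^-1 (assumed)
F = 3.7        # CO2 forcing, W m^-2 (roughly a doubling of CO2)
lam_lw = 2.2   # increase in OLR per K of warming, W m^-2 K^-1 (assumed)
lam_sw = 0.8   # increase in ASR per K of warming, W m^-2 K^-1 (assumed)

olr0 = asr0 = 240.0   # initial balanced fluxes, W m^-2
dt, T = 0.1, 0.0      # timestep (years) and warming relative to the start

for _ in range(int(150 / dt)):        # integrate to near-equilibrium
    olr = olr0 - F + lam_lw * T       # forcing cuts OLR; warming restores it
    asr = asr0 + lam_sw * T           # shortwave feedback raises ASR
    T += dt * (asr - olr) / C

# At the new equilibrium ASR = OLR, so the OLR must end up *above* olr0
# whenever lam_sw > 0; the equilibrium warming is F / (lam_lw - lam_sw).
print(f"Warming: {T:.2f} K, OLR change: {olr - olr0:+.2f} W m^-2")
```

With these assumed feedbacks the warming settles at F/(lam_lw − lam_sw) ≈ 2.6 K, and the equilibrium OLR ends up about 2 W m^-2 above its initial value, which is qualitatively the behaviour described above.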

I always find it quite fun to solve what was a bit of a puzzle (to me, at least). I would also argue that if someone thinks that they’ve found some supposedly obvious reason why a large number of other experts are wrong, it’s often best to take a step back and consider that there might be something that they’ve missed.

Links:
Realclimate comment – Chris Colose.
Shortwave and longwave radiative contributions to global warming under increasing CO2 – Donohoe et al. (2014).
Global warming due to increasing absorbed solar radiation – Trenberth & Fasullo (2009).

Posted in Global warming, Greenhouse effect, Philosophy for Bloggers, The philosophy of science, The scientific method, Uncategorized | 37 Comments

Scenario use in climate research

If you’ve been following the blog for a while, you will be aware that I’ve commented on a number of occasions about the whole RCP8.5 issue. You may also be aware that one of the chief protagonists in that whole discussion is Roger Pielke Jr, whose work I’ve also discussed from time to time. If so, you may be interested in his latest attempt to police the climate science community.

It’s a working paper on the Systemic Misuse of Scenarios in Climate Research and Assessment. The other author is Justin Ritchie who, in the past, wrote quite a sensible article with Zeke Hausfather.

I don’t really want to say too much about the new article, as I’m wary of incurring the wrath of Roger. One could, however, play a reasonable game of climate bingo with it. It includes Climategate, cites Grundmann (2013) unironically, of course discusses RCP8.5, implies a lack of research integrity amongst the climate research community, discusses problems with the IPCC and the US National Climate Assessment, and explains how climate research can get back on track (and avoid a growing credibility crisis).

I won’t say much more, as my main goal was to simply highlight the paper for those who might be interested. I’ll end, though, with a response from Nico Bauer, who is an Integrated Assessment Modeller at the Potsdam Institute for Climate Impact Research.

Links:
Systemic Misuse of Scenarios in Climate Research and Assessment – new working paper by Pielke & Ritchie.
https://www.carbonbrief.org/analysis-how-carbon-cycle-feedbacks-could-make-global-warming-worse – new Carbon Brief article by Zeke Hausfather and Richard Betts. I didn’t mention this in the post, but it does seem relevant to the whole RCP8.5/scenarios debate.

Posted in ClimateBall, Contrarian Matrix, Philosophy for Bloggers, Roger Pielke Jr, Scientists, The philosophy of science | 44 Comments

Chanting to the choir?

Before I head off to the office (or, more correctly, go from watching the news in the living room, to the dining room table) I thought I would briefly mention a recent paper that has analysed blog comments. It’s by Jenni Metcalfe and is about Chanting to the choir: the dialogical failure of antithetical climate change blogs.

The basic premise of the study was to assess the comments on two prominent climate blogs. The analysis suggests that

both blogsites were dominated by a small number of commenters who used contractive dialogue to promote their own views to like-minded commenters. Such blogsites are consolidating their own polarised publics rather than deliberately engaging them in climate change science.

The problem is that the two sites were Skeptical Science and JoNova. They are vastly different in terms of credibility. Skeptical Science mostly uses the peer-reviewed literature to rebut climate myths, is regularly highlighted as a reliable source by climate scientists, and has been given numerous awards for their climate communication. JoNova, on the other hand, largely promotes pseudoscience.

Of course, this is a topic that is of interest to researchers, and there is nothing wrong with asking a research question, and trying to determine the answer. I’m just not sure what this particular analysis tells us that those involved in blogging didn’t already know. The views about this topic tend to be rather polarised. If you run a mainstream climate blog, and don’t want your comment threads to degenerate into a stream of abuse, you end up moderating in ways that will tend to discourage a typical JoNova commenter from commenting.

Also, if you accept that anthropogenic global warming is real and presents risks that we should be taking seriously, and you don’t enjoy being verbally abused, you’ll tend to avoid commenting on a site like JoNova’s. So, it’s not a surprise that the comment threads end up with like-minded people.

In some sense the paper has simply pointed out something that few who are familiar with climate blogging will be surprised about. On the other hand, it has implied that there is some equivalence between Skeptical Science and JoNova, which is rather sub-optimal, given that Skeptical Science focuses on debunking myths, and JoNova mainly promotes them. This will, unfortunately, promote a sense of false balance.

To be fair, the author does defend this in this Making Science Public post, and ends with

we clearly need to find ways other than blogs to engage laypeople in credible climate science which leads to political and individual action.

which I was going to say I agree with, but I’m not sure I do. Clearly there are many different ways to engage with the public; it certainly shouldn’t just be blogs. However, that climate blog comment threads tend to become filled with like-minded people doesn’t mean that blogs can’t play a useful role.

Not only have many people put a lot of effort into providing reliable scientific information on blogs (Skeptical Science, Realclimate, Tamino, etc), the comment threads aren’t necessarily a good reflection of the readership. For example, I have many more unique visitors than I have unique commenters. So, I do think that one has to be really careful of using the comment threads to assess the value of climate blogs. I’m pretty sure mine would be far less reliable if I hadn’t had a strong moderation policy when I started (which, to be fair, was mostly enforced by Rachel and Willard, rather than by me).

Conflict of interest
I should acknowledge an association with Skeptical Science, which may introduce a bias. On the other hand, one reason I have an association with them is because I regard them as a reliable source of information about climate change.

Links:
Chanting to the choir: the dialogical failure of antithetical climate change blogs – paper by Jenni Metcalfe.
Chanting to the choir: The dialogical failure of antithetical climate change blogs – Making Science Public guest post by Jenni Metcalfe.

Posted in Climate change, ClimateBall, Global warming, Philosophy for Bloggers, Scientists | 70 Comments

Seven years

Once again, WordPress has reminded me that this is the anniversary of me starting this blog. It’s been going for seven years now. If you’re interested in numbers, I’ve written about 1080 posts. There have also been about 20 guest posts, from people such as Richard Betts, Collin Maessen, Steven Mosher, John Russell, An Oil Man, Very Tall Guy, Brigitte Nerlich and Warren Pearce, Richard Erskine, and Zeke Hausfather (apologies if I’ve missed anyone). There have also been contributions from guest authors, such as Willard, Rachel, Michael Tobis and Lawrence Hamilton. It’s remained reasonably active, but I have found it more and more difficult to find things to write about. I realise that effective communication can involve repeating oneself, but it does get tedious after a while.

I’m also finding it tricky to write posts at the moment, given the current pandemic crisis. I find I’m most comfortable writing about topics where either various opinions can be valid, or where I think I have sufficient expertise to express a view. I don’t think I have sufficient expertise to express a scientific view about the coronavirus pandemic, and I certainly don’t want to express views that might misinform. I do think there may come a time when it may be interesting to look back and reflect on how science advice influenced our response to this crisis. Now is probably not yet that time.

Anyway, I just thought I’d highlight the seventh anniversary of this blog. I hope some have found what’s been written here useful and that it’s made a constructive contribution to the public discussion about climate change and the general role of science in society. I can’t remember if I’ve made this offer before, but since I am finding it more and more difficult to find the time to write posts, if anyone would like to contribute a guest post, do get in touch. I hope everyone keeps well and manages to successfully navigate what is likely to be a challenging few months, or maybe even longer.

Posted in Personal, Philosophy for Bloggers | 14 Comments

Models

I have a feeling that our response to this pandemic may lead to some reflections on the role of scientific models in the decision making process. I would normally err on the side of defending scientific advisors, but I have a sense that they might face some justified criticism. I, of course, don’t know all the details of what information was presented, how it was presented, what pressures the scientific advisors faced, and how the decisions were made. However, it does seem as though many – who should probably have known better – failed to recognise the strengths and weaknesses of the scientific models that were being used.

Scientific models typically allow us to ask “what if” questions: What will happen if we do nothing? What will happen if we encourage social distancing? What will happen if we enforce a partial lockdown? What about a full lockdown? What will happen if we wait a week before doing something, rather than starting now? etc. They more properly present projections, rather than predictions; that is, predictions that are conditional on us actually following the scenario that was modelled. There’s also always some level of uncertainty, so the questions should maybe more properly be phrased as “what could happen if….?”

However, we seem to be treating these models as predictions without always making clear that there is quite a lot of uncertainty involved, both in terms of how they model the infection itself and in terms of how they’re handling the various possible societal scenarios. Of course, it’s important to test models against real world data and if there is a good match, and if there is confidence that the assumptions in the model closely match what happened in reality, then one can be confident that the model is capturing many of the important processes. However, it’s still important to remember that all scientific models are simplified representations of reality that can never really capture all the complexity.

Another important aspect of using scientific models is to sanity check the results; do they make sense? It’s not clear that this has been done particularly well in this current context. James has been highlighting this in a number of his posts. Specifically, some of the leading researchers were still presenting numbers that no longer seemed reasonable. For example, suggesting that the lower limit to the number of deaths might be around 7000, when we were already pretty close to getting there [edit: see update at bottom of post.].

There’s probably a lot more that could be said, and I may return to this topic at a later stage. I think it’s important for people to recognise both the strengths and limitations of scientific models. They can be very powerful tools, but they’re never going to perfectly represent reality. The scientists involved should be willing to acknowledge this and should, in my view, also be checking that their model results make sense. Decision makers should also be aware that scientific models have strengths and limitations; they can certainly guide decision making but can’t really define it. I don’t think this takes anything away from the usefulness of such models, it is simply something that I think is important to recognise.

Update:
As Steve Forden points out, there was a stage where the lower bound for the number of deaths (5000) was presented at the same time as the group was projecting this number of deaths for the following week.

Posted in Philosophy for Bloggers, Research, Scientists, The philosophy of science, The scientific method | 461 Comments

Stay in your own lane?

Even though there are scientists who have the kind of expertise that might help us to better understand this pandemic, there’s a tendency to suggest that it would probably be best if they stayed in their own lane. Although I do have some sympathy for this, I think it’s too simplistic; researchers should be free to study what they wish, and those within a discipline should be willing to listen to people with relevant expertise from outside their discipline. However, there are certainly examples when researchers have tackled problems outside their core area and not made particularly constructive contributions. The problem, though, isn’t that people don’t stay in their lane, it’s that they don’t do their homework properly when they move outside their lane.

One of the most difficult things about doing research isn’t the technical aspects, it’s being very familiar with the details of a topic, and knowing what questions are worth asking. Just taking some data and throwing some analysis method at it isn’t very useful if you don’t understand how the data was collected, its limitations, or the significance of the analysis in this particular context. For example, the impact of this virus is almost certainly going to depend on the strategy that is employed. Hence, you can’t really infer anything from an analysis if you don’t take into account what strategy has already been employed and how this strategy might evolve. There are, of course, many other factors that should also be considered; it’s clearly not simple.

What motivated this post was a recent post by Andrew Gelman that highlights how to be curious instead of contrarian about COVID-19. It was itself motivated by an article by Rex Douglass that provided Eight Data Science Lessons, using an article written by a rather contrarian, and not very curious, lawyer to illustrate what not to do.

In my view, the key thing is that even though these are unprecedented times, it doesn’t mean that we should take research short-cuts. As the articles above highlight, we should be familiar with the topic, care about the research questions, be careful about the design of the research project, be willing to revise our understanding if the model doesn’t match the data, or if new data becomes available, and be very clear about assumptions, uncertainties, and the overall context. I would add that we should also be willing to “trust” other experts. If we want to live in a world where people listen to us when our expertise is relevant, we should be willing to listen to other experts when their expertise is relevant.

Posted in Contrarian Matrix, Philosophy for Bloggers, Scientists, The philosophy of science, The scientific method | 250 Comments

Sometimes it’s never good enough

I’ve, in the past, suggested that climate scientists could end up being criticised whatever happens. If the impact of climate change ends up being less severe than it could have been, climate scientists will probably be criticised for being alarmists. This will probably happen even if the reason why the impacts were less severe was because we actively did things to limit our emissions and to adapt to the changes that were unavoidable. On the other hand, if climate change does end up being severely disruptive, climate scientists will probably be criticised for not speaking out enough.

I may, of course, be wrong and most commenters may appreciate that giving scientific advice about a complex topic is very difficult and that scientists can’t really be held responsible for the decisions that were made. I have a suspicion, though, that we might be about to get some idea of whether or not this is likely on a much shorter timescale than would be the case for climate change.

My guess is that those giving scientific advice about the coronavirus may end up in a similar position. If the mitigation strategies are successful at limiting the impact of the virus, they’ll probably be criticised for suggesting strategies that were too extreme. On the other hand, if the impact is extreme (as I hope it won’t be) they’ll probably be criticised for not having spoken out early enough, or for not having suggested more stringent constraints.

Again, I may be wrong, but it will be interesting to see what happens once this crisis is mostly over. We might expect some criticism from some of the more vocal media critics, but it will also be interesting to see the response from some of the more vocal policy experts. In particular, from those who spend their time suggesting that scientists are naive for thinking that there is a simple path from scientific advice to policy making. You’d like to think that they would appreciate the complexity of this situation and realise that if there isn’t a simple relationship between science advice and policy, you can’t then simply judge the scientific advice on the basis of the effectiveness of the subsequent policy. You might, of course, be wrong.

We’ll have to wait and see. Whatever happens, it will probably still be an opportunity to learn something about the complex relationship between scientific advice, policy making, and how this is then received by the broader public.

Posted in Philosophy for Bloggers, Policy, Politics, Scientists, The philosophy of science | 105 Comments

Richard’s Decoupling

Richard did it again and forgot to say “oops”:

Richard’s a high decoupler

Negative feedback was to be expected. Some argue that only by decoupling can we understand Richard’s point. Facts don’t care about feelings and all that jazz.

I love thought experiments. They seldom work, but I love them nevertheless. Let’s risk a few, with Richard himself as our main character. Applying decoupling to the decoupler reveals how self-serving it can be.

§1. A Muslim child is about to fall in a well. You are Richard Dawkins, and could save him without much effort. Do you (a) feel distressed or (b) tweet about Islam?

§2. Richard Dawkins has experienced all of human morality except decency. Would he be able to fill in the concept of decency using his own tweets?

§3. You are abducted and tied to Richard Dawkins so that he can stay alive. He may need your blood or your kidneys. The procedure does not hurt you. You just need to stay in that foreign location for a year. Do you think you are morally obliged to stay, free to go, or allowed to eat him?

§4. The Experience Machine can give you any experience you like or want. You could for instance make Richard Dawkins realize how silly his Gedankenexperiment usually sounds. Would you plug yourself into this machine forever and be free to imagine the rest of your life however you please?

§5. You are Richard Dawkins and take part in an experiment. Researchers put you to sleep with a drug. If you tweeted no bad takes during the weekend, you will wake up Monday, otherwise only Wednesday. What are your odds for seeing Monday?

§6. If Richard Dawkins follows a rule that turns him into delicatessen, of what use was the rule to him?

§7. Richard Dawkins is not saying that mass extermination via virus infection is a Good Thing. But you got to admit it would work.

***

Thought experiments help illustrate claims but don’t replace making them explicit. In this post I would argue that decoupling can easily lead to dogwhistling, as themes carry connotations. When a high-decoupler with a big following entertains ideas about eugenics that could work, distanciation cannot hide that the ideas entertained are not value neutral.

Besides, there’s no fact of the matter regarding Richard’s eugenic suggestion, which is why Richard relies on a counterfactual in the first place. Even if we grant him that what goes for cows and dogs goes for humans (which is far from obvious), Richard needs a set of policies.

If you ever feel like decoupling, please mind your audience.

Addendum. I adapted many thought experiments from Helen’s post. It’s good. Go read it.

Posted in Freedom Fighters, Philosophy for Bloggers | 93 Comments

Responsible SciComm

Yesterday, a group in Oxford released a paper that implied that a significant fraction of those in the UK may already have been infected. This was quickly picked up by numerous media outlets, who highlighted that coronavirus could already have infected half the British population. James Annan has already discussed it in a couple of posts, but I thought I would comment briefly myself.

To be clear, I certainly have no expertise in epidemiology, but I do have expertise in computational modelling. So, I coded up their model, which is described in Equations 1-4 in their paper. They were also doing a parameter estimation, while I’m simply going to run the model with their parameters.

The key parameter is \rho, which is the proportion of the population that is at risk of severe disease, a fraction of whom will die (14%). They explicitly assume that only a very small proportion of the population is at risk of hospitalisable illness. Consequently, they focus on scenarios where the proportion requiring hospitalisation is 1% (\rho = 0.01) and 0.1% (\rho = 0.001). The Figure on the right, which considers \rho = 0.1, \rho = 0.01, and \rho = 0.001, is from my model and seems to largely match what’s been presented in the paper.

The curves that start at 1 and then drop are the proportion of the population that is still susceptible (left-hand y-axis) while the diagonal straight lines are the logs of the cumulative deaths (right-hand y-axis). I’ve also shifted the models so that the latter overlap. This Figure illustrates why this study was picked up by the media. Cumulative deaths to date is just over 400. If the proportion of the population at risk of hospitalisation is small (\rho \sim 0.001) then just over 30% of the total population would still be susceptible. In other words, more than half of the UK population would already have been infected. On the other hand, if the proportion at risk of hospitalisation is large (\rho \sim 0.1) then the proportion susceptible is still large (> 0.9) and the fraction that has already been infected is small.
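The dependence on \rho described above can be sketched with a minimal SIR-type model in Python. To be clear, this is my own simplification, not the paper’s actual Equations 1-4; the values for R0, the recovery rate, the infection-to-death lag \psi, and the UK population are rough assumptions. The idea is just that, for a given number of observed deaths, a small \rho implies that far more people must already have been infected.

```python
# A minimal SIR-type sketch (my own simplification, not the paper's exact
# Equations 1-4): a fraction rho of those infected is at risk of severe
# disease, and 14% of that at-risk group dies, with deaths lagging
# infection by roughly psi days. All parameter values are illustrative.

R0 = 2.25               # basic reproduction number (assumed)
sigma = 1.0 / 4.5       # recovery rate per day (assumed)
death_frac = 0.14       # fraction of the at-risk group that dies
N = 66.8e6              # approximate UK population

def susceptible_when_deaths_reach(rho, target_deaths=400.0,
                                  psi=17.0, dt=0.1, max_days=365):
    """Return the current susceptible fraction at the moment cumulative
    deaths (lagged by psi days behind infection) reach target_deaths."""
    s, i = 1.0 - 1.0 / N, 1.0 / N      # start from a single case
    beta = R0 * sigma                  # transmission rate
    history = []                       # cumulative infected fraction
    lag = int(psi / dt)
    for _ in range(int(max_days / dt)):
        history.append(1.0 - s)
        if len(history) > lag:
            deaths = history[-1 - lag] * rho * death_frac * N
            if deaths >= target_deaths:
                return s
        new = beta * s * i * dt        # forward-Euler update
        s, i = s - new, i + new - sigma * i * dt
    return s

for rho in (0.1, 0.01, 0.001):
    s = susceptible_when_deaths_reach(rho)
    print(f"rho = {rho}: susceptible fraction ~ {s:.2f}")
```

With these assumed parameters, the \rho = 0.1 case leaves the vast majority of the population still susceptible when deaths reach ~400, while the \rho = 0.001 case implies that most of the population has already been infected, which is essentially the contrast described above.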

One way to estimate \rho is from the date at which the first case is reported. If \rho is small then the lag between the first case and the first death is larger than if \rho is large. The paper implies that the current data is more consistent with a small \rho than a large \rho. The problem, as this critique highlights, is that this implies that this first case is the progenitor of most of the subsequent cases. Given the small numbers involved, this may well not be the case, since a localised outbreak may not have taken hold. Hence, there doesn’t really seem to be strong evidence in support of \rho being small and, consequently, there is little evidence to suggest that a significant fraction of the UK population has already been infected.

Okay, despite the lengthy pre-amble, this is really what I wanted to focus on in this post. I think it’s perfectly fine to play around with models and to try and estimate various parameters. However, especially when the results have societal significance, it’s very important to be clear about what’s been done when presenting the work publicly. This research has not demonstrated that more than half the UK population has already been infected, it’s simply illustrated that it’s possible. Clearly if most of the UK population has already been infected, then this virtual lockdown could probably be relaxed. However, if \rho is not small, then the lockdown would seem justified. As James points out in this post, even though the paper implies that the current data is consistent with \rho being small, there do seem to be regions where this seems not to be the case.

So, I think it’s highly irresponsible to present a result like this without being extremely careful to minimise the chances of it being misconstrued. It’s clearly not possible to completely avoid research being misrepresented, but researchers do – in my view – have a responsibility to ensure that this is not an easy thing to do. It would be great if the impact of this virus is far less severe than we currently think. However, until we have more evidence to support such a conclusion, we really should be very careful of presenting results that imply that this is the case.

Addendum:
This post ended up being much longer than I intended. I was mostly wanting to highlight how I think the presentation of this result was highly irresponsible. The first bit was just meant to illustrate what they’d done in their model. Since I’m not an expert in this field, and have no interest in spreading misinformation about an important topic, if any experts think I’ve made some kind of mistake, feel free to point it out.

I also wanted to post another figure, which is essentially the same as James highlighted in this post. The curves that rise and fall are the number of people who are infectious (left-hand y-axis) while the curves that rise and then level off are the cumulative deaths (right-hand y-axis).

This again illustrates (given that cumulative deaths to date is just over 400) that if the proportion requiring hospitalisation is small (\rho \sim 0.001) then the number of people who have already been infected is already quite high, while if the proportion needing hospitalisation is large (\rho \sim 0.1) then the number of people who have already been infected is much smaller. It also illustrates that the overall cumulative deaths depends quite strongly on this parameter; if we relax current conditions based on this work and it turns out that \rho isn’t small, the impact could be substantial.

In the interests of transparency, if you would like the code that produced the two figures, you can download it from here.

Posted in Scientists, The philosophy of science, The scientific method | 114 Comments