The Last Glacial Maximum

There’s an interesting paper by Seltzer et al. called [w]idespread six degrees Celsius cooling on land during the Last Glacial Maximum, which I became aware of through a Twitter thread by Werner Aeschbach. The reason it’s interesting is that it uses noble gases in groundwater to estimate the cooling over land in low-to-mid latitudes during the last glacial maximum (LGM). They find that it cooled by 5.8 ± 0.6°C.

Credit: Seltzer et al., Nature, 2021.

As the figure on the right illustrates, this is somewhat cooler than other estimates and potentially resolves a slight discrepancy between climate sensitivity estimates based on LGM cooling, and other estimates. Although they have typically been consistent, estimates based on LGM cooling have tended to suggest that the equilibrium climate sensitivity (ECS) may be on the low side of the range.

This new estimate of LGM cooling, however, suggests an ECS of around 3.4°C (I’m not sure of the uncertainty), potentially rules out the lowest part of the range, and is somewhat more consistent with other ECS estimates. It also probably rules out some of the very high ECS estimates coming from some of the CMIP6 models.

Of course, this is just one study, so it would be interesting to know what others think of this. It does, though, seem to be a very useful update to our understanding of the last glacial maximum and what this might imply with regards to climate sensitivity.

Posted in Climate change, Climate sensitivity, Global warming, Research | 4 Comments

Halting the vast release of methane is critical

A week or so ago there was a New York Times article called Halting the Vast Release of Methane Is Critical for Climate, U.N. Says. As the title suggests, it was reporting on a United Nations Report that (according to the article) is likely to suggest that slashing emissions of methane, the main component of natural gas, is far more vital than previously thought.

However, when discussing carbon dioxide emissions, the article claimed that:

while it remains critical to keep reducing carbon [dioxide] emissions, which make up the bulk of our greenhouse gas emissions, it would take until the end of the century to see the climate effects.

The problem was that this was wrong. As I point out in this post, if you consider pulses of carbon dioxide emissions, then the maximum warming from such a pulse would occur after about a decade. In other words, the warming from carbon dioxide emissions peaks relatively quickly. Consequently, any emissions we avoid will have an impact on a similar timescale. Hence, it’s wrong to claim that it will take till the end of the century to see the climate effects of reductions in carbon dioxide emissions.

What was impressive was that, after I pointed this out on Twitter, the author (Hiroko Tabuchi) responded to say that they were updating the article. They changed it from it would take until the end of the century, to it will take to the second half of the century. I still don’t entirely agree, but I thought it good that they were willing to take on board the criticism and didn’t really feel like quibbling.

I did want to add, though, that I think that what is being presented in the article is potentially what Ray Pierrehumbert was warning against in this Realclimate post. As I’ve pointed out in previous posts, short-lived greenhouse gases like methane behave differently to long-lived greenhouse gases, like carbon dioxide.

The warming impact of methane emissions largely depends on how these emissions have been changing with time. In fact, if we can get methane emissions to decrease, then that would actually reverse some of the methane-driven warming. When it comes to carbon dioxide, though, how much we warm depends essentially on how much we emit in total. So, limiting carbon dioxide-driven warming requires limiting how much we emit in total, and reversing carbon dioxide-driven warming would require artificially removing carbon dioxide from the atmosphere.
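This difference can be illustrated with a toy two-box sketch (not a real carbon-cycle model; the lifetime and emission numbers are purely illustrative): a gas with a short atmospheric lifetime tracks the emission rate, while an effectively permanent gas tracks cumulative emissions.

```python
# Toy illustration: methane has an atmospheric lifetime of roughly a
# decade, so its concentration tracks the emission *rate*; CO2 (in this
# sketch) doesn't decay at all, so it tracks *cumulative* emissions.
# All numbers are illustrative, in arbitrary units.

TAU_CH4 = 12.0  # approximate methane lifetime in years

def run(years, ch4_rate, co2_rate):
    """Integrate both gases with a 1-year timestep; the rate functions
    give the emission rate in each year."""
    ch4, co2 = 0.0, 0.0
    for t in range(years):
        ch4 += ch4_rate(t) - ch4 / TAU_CH4  # relaxes toward rate * tau
        co2 += co2_rate(t)                  # simply accumulates
    return ch4, co2

# Constant emissions for 100 years...
ch4_a, co2_a = run(100, lambda t: 1.0, lambda t: 1.0)
# ...versus the same 100 years followed by 50 years at half the rate.
ch4_b, co2_b = run(150, lambda t: 1.0 if t < 100 else 0.5,
                        lambda t: 1.0 if t < 100 else 0.5)

print(ch4_a, ch4_b)  # methane falls once the emission rate falls
print(co2_a, co2_b)  # CO2 keeps rising while emissions stay positive
```

Cutting the methane emission rate lowers the methane concentration within a few decades, whereas the CO2 concentration only stops rising when emissions reach zero, which is the asymmetry described above.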

There are certainly good reasons for cutting methane emissions now, and it would almost certainly have an impact on a very short timescale. However, we do need to be careful of framing this as some trade-off between reductions in methane emissions and reductions in carbon dioxide emissions. Limiting long-term warming requires limiting how much carbon dioxide we emit in total. In the absence of some kind of negative emission technology, any delay in carbon dioxide emission reductions will either commit future generations to greater warming, or will require much more drastic emission reductions in the future.

So, I do think we should be cautious of suggesting that we should focus on methane now because reductions in methane emissions can have a large short-term climate impact, partly because reductions in carbon dioxide emissions also have a near-term impact, and partly because long-term warming is going to be dominated by how much carbon dioxide we emit. Of course, if there are easy ways to reduce methane emissions now, then we should take them. There may also be other reasons for reducing methane emissions now, and we certainly don’t want methane concentrations to continue increasing. Let’s just not forget that limiting long-term warming requires limiting how much carbon dioxide we emit.

Posted in Climate change, Environmental change, Global warming, Policy, Science | 93 Comments

Some thoughts about net-zero

There’s been a reasonably vigorous, but pleasant, debate on Twitter about “net-zero”. It was largely motivated by a Conversation article by James Dyke, Robert Watson, and Wolfgang Knorr called Climate scientists: concept of net zero is a dangerous trap. The basic idea is that framing emission reductions in terms of reaching net-zero has allowed there to be plans that delay actual emission reductions on the basis of us being able to develop and implement negative emission technologies.

To be clear, I agree with the concern that we are using the net-zero framing to delay making actual emission reductions. What I don’t agree with is that this is some trap created by climate scientists. The requirement that we get (net) CO2 emissions to (roughly) zero emerges from the scientific evidence. As Kimberly Nicholas pointed out, this essentially means that we must no longer be adding CO2 to the atmosphere. It doesn’t, though, tell us how we should do so.

The simplest approach would be to stop emitting CO2 into the atmosphere altogether. However, there are some sectors that are challenging to decarbonise, so we might need some kind of negative emission technology that artificially removes as much CO2 as we emit. If it’s possible to implement these kinds of technologies at a large enough scale, then we could also continue emitting CO2 under the assumption that these technologies could then be used not only to eventually balance our emissions, but also to allow us to end up in a position where our emissions are net negative.

This is where the problem comes in. By developing plans that rely on these kinds of technologies, we’re not only gambling on these technologies being able to operate at scale, we’re also delaying making the kind of emission reductions that would potentially allow us to meet our targets without relying on them. We can, of course, choose to take this risk, but we should be open about doing so.

So, yes, I do agree with people’s concerns about these types of plans and about the possibility that organisations can make promises about getting to net-zero that they may not be able to keep. However, I don’t think the problem is that scientists are pointing out that the requirement for stabilising global surface temperatures is that we need to get (net) CO2 emissions to ~zero.

In addition, I do think that scientists need to be careful. If people are misinterpreting scientific information, or making decisions that don’t seem consistent with that information, that doesn’t mean scientists should decide to change the information they present. Scientists shouldn’t, IMO, be second-guessing how information might be used and adjusting their messaging to steer things in a way that they regard as optimal.

I also think we should be careful of suggesting that the problem is how scientists have presented their information. Of course, scientists should aim to explain information as clearly as possible, but they’re not responsible for how that information is then used. This isn’t to suggest that we should avoid criticising scientists, but I do think we should be cautious of doing so because we disagree with how others are using the scientific information that is being presented.

Links:

Climate scientists: concept of net zero is a dangerous trap – Conversation article by James Dyke, Robert Watson & Wolfgang Knorr.

Posted in Climate change, Environmental change, Policy, Research, Scientists | 13 Comments

Mind Your Units

With both sadness and joy I must report that the Sky Dragons {1} invaded Roy’s.

Joy, because I’m having fun. As an editor friend observed (pers. corr.): this place looks like the perfect Thunderdome for you. She’s not wrong. To follow the comments I reinstalled an RSS reader, like the good ol’ times at Judy’s.

Sadness, because the intensity of denial is too damn high! Higher than Tony’s, where guests can sound like voices of reason nowadays. Climateball veterans might recognize MikeF, who took puppet names circa August 2019. A circus of Dark Triad clowns enables him. In short, Roy forfeited. He can’t even be contacted.

The following gem cited by a Sky Dragon made me look into energy balance models:

Out of the mathematical convenience of not having to treat the system in real-time, and with the real power of sunshine, climate scientists average the real-time power of sunshine over the entire surface of the Earth at once, so that they can get rid of day and night, and also so that they can treat the Earth as flat, which makes things easier for them in the math. By spreading the power of sunshine over the entire Earth at once, so that they don’t have to worry about the difference between day and night, the mathematical number required to do this works out to a division of the real incoming power P by the number 4. It is a result of a geometric math problem of transforming a sphere into a flat plane, which is how climate scientists make the simplifications of the real system to something which is not real but is a convenient approximation.

Source: Joe’s

Joe’s story stinks: “real-time” and “at once” strain credulity. Zero-dimensional models express with a single equation the balance between the energy in and out of the Earth {3}:

[EMB] (Disc) x (Sun) x (1 – Albedo) = (Area) x (Emissivity) x (SB) x (Temp + Conv)^4

Disc is the Earth’s shadow, Sun the solar constant, Area the Earth’s area, SB the Stefan-Boltzmann constant, Temp the Earth’s temperature, and Conv the conversion constant from Celsius to Kelvin. The notation is adapted from (Kleeman); (ACS), (Lindsey) and (UCAR) provide good intros; (Kiehl & Trenberth) remains the Climateball battleground.

The only parameters we need to discuss here are Disc and Area. The reason why we “divide by 4” for the Sun’s input is simple. The Earth receives light over its shadow {2}:

But the outgoing energy leaves from the whole Earth area. As AT puts it (pers. comm.), the energy we receive per unit time depends on the cross-sectional area (pi r^2), but the energy radiated per unit time depends on the surface area of the sphere (4 pi r^2).

Note how AT lays out the problem in time and space. We’re looking for the energy flux, a specific rate of energy transferred through a surface. Climate scientists speak of Watts per square meter. In SI units, that’d be W⋅m⁻², or (equivalently) J⋅m⁻²⋅s⁻¹. When isolating the temperature, the energy balance model states an equality between two quantities, the left one in Watts per square meter, the right one in Celsius. How can’t it work in real time? A Watt is a rate of work per second, for Newton’s sake!
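For concreteness, here’s the standard back-of-envelope computation behind [EMB] (illustrative constants, blackbody emissivity, working in Kelvin so Conv drops out):

```python
# Balance from [EMB], with Emissivity = 1 and temperature in Kelvin:
#   pi*r^2 * S * (1 - albedo) = 4*pi*r^2 * sigma * T^4
# The pi*r^2 cancels, which is exactly the "divide by 4".

S = 1361.0              # solar constant, W/m^2
ALBEDO = 0.3            # planetary albedo
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

T = (S * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
print(round(T, 1), "K, i.e.", round(T - 273.15, 1), "C")  # ~255 K, ~-18 C
```

That ~-18°C is the usual effective temperature of the Earth without a greenhouse effect, the same figure that appears in Joe’s diagrams discussed below.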

Joe’s trouble may be geometric. He insists on making the light fall on the hemisphere. Sometimes the disc isn’t real enough, like when, with the tip of Archimedes’ hat, he averages flux over a hemisphere, which increases solar input and intriguingly gets him 15.5°C. More often the sphere is the target of his ire: his hemispherical equilibrium equation supports diagrams that display 30°C in contrast to the usual -18°C {5}. How does he pull his trick? Probably misspecification {6}. Perhaps fiddling too, for he conceals the -273°C on the unlit side. Joe’s model for the whole Earth remains elusive. It needs to balance the same way as the other ones or it’s humbug. Meanwhile, I see no computational reason to prefer dividing by two over dividing by four {7}.

A more piecemeal way to account for the flux over a hemisphere would be to correct each part of the surface according to the angle at which the Sun hits it, by applying Lambert’s law. Using a disc saves that integration, since the whole of it faces the Sun directly. As AT calculates (pers. comm.), this correction gives the same flux as taking a disc {8}. So I don’t buy Joe’s appeal to the naturalness of a hemisphere over a disc or a sphere. These toy models are no GCMs!
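The equality AT mentions can be checked numerically (a quick sketch; the flux and radius values are arbitrary, only the equality matters): integrating S·cos θ over the lit hemisphere recovers exactly the power intercepted by the disc.

```python
import math

# Numerical check: total solar power intercepted by the lit hemisphere,
# weighting each surface ring by cos(theta) per Lambert's law, equals
# the power falling on the shadow disc (S * pi * R^2).

S, R = 1361.0, 1.0  # flux in W/m^2, radius in m (illustrative)
N = 100_000         # integration steps

# Hemisphere: ring area dA = 2*pi*R^2*sin(theta) dtheta,
# incidence factor cos(theta), theta from 0 (subsolar) to pi/2.
d = (math.pi / 2) / N
hemisphere = sum(
    S * math.cos(t) * 2 * math.pi * R**2 * math.sin(t) * d
    for t in ((i + 0.5) * d for i in range(N))  # midpoint rule
)

disc = S * math.pi * R**2  # the whole disc faces the Sun directly
print(hemisphere, disc)    # the two agree
```

Analytically the hemisphere integral is S·2πR²·∫cos θ sin θ dθ = S·πR², so the numerical agreement is no accident: the disc already is the Lambert-corrected hemisphere.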

It’s as if Joe wanted to have the zero-dimensional cake and eat it in one dimension {9}. Instead of modeling the Earth like a single point, climate scientists can split the Earth into regions, each one with e.g. different temperatures or albedos. See (Huber) for such a model.

The argument I offered appeals to basic algebraic and geometric intuition and is supported by my references. I also contacted AT, who’s solely responsible for any mistake I made in the post. Kidding. Comments and corrections welcome.

§ Notes

{1} By Sky Dragon I am referring to someone who denies the Tyndall Gas Effect.

{2} The expression “Earth’s shadow” can refer to a different astronomical phenomenon from the geometrical fact outlined in the video, i.e. the cross-section of the Earth.

{3} No idea why the new WP editor fails to grok LaTeX. Nevermind. Let’s do it the programmer’s way. Perhaps one day scientists will borrow it: it solves many notation problems.

{5} Notice how Joe’s diagram omits the Disc and how the 0.5 appears on the input side, as if the hemisphere were the receptor. In our EBM, Power would be Sun (in W/m^2) x Disc (in m^2), which gives us Watts, or Joules per second. Only when Joe divides back by the hemisphere (from the output side of the equation) does he get back a flux in W/m^2.

{6} Also called Mathurbation. A friend swayed me to shy away from denigrating masturbation.

{7} See AT’s proof that illustrates how the light already falls on the equivalent of a hemisphere.

{8} Paragraph revised thanks to Rajinder’s feedback. I owe him a copy of The Spanish Prisoner.

{9} Joe is not even wrong about his “flat Earth” jab: the model he attacks has zero dimensions! I guess climate scientists call their models that because they take the Earth as a point. They still measure areas, which means their usage of “zero-dimensional” differs from geometry’s. In any event, a sphere is not flat.

[Last revision: 2021-05-13]

§ References on Energy Balance Models

ACS; Energy from the Sun

Huber; 1997-07; One-Dimensional Energy Balance Model

Lindsey; 2009-01; Climate and Earth’s Energy Budget

Kiehl & Trenberth; 1997-02; Earth’s Annual Global Mean Energy Budget

Kleeman; Zero-Dimensional Energy Balance Model

UCAR; 2021; Calculating Planetary Energy Balance & Temperature

Posted in ClimateBall, Pseudoscience, Roy Spencer, SpeedoScience | 550 Comments

Do lockdowns work?

Preamble: I wrote this post for another site that was considering myth-busting type posts, which is why it’s written in the third person. However, the myth-busting part of the site never really took off, so I thought I would post it here. There’s no specific reason to do so now. It just seemed unfortunate to waste a post and it allows me to not think about writing another post for a while.

There have been some who have argued that lockdowns don’t work, or that they’re not effective.  Here we want to clarify the current understanding of the impact of lockdowns.  One issue is that the term lockdown hasn’t always been well-defined.  We define it as a set of restrictions that substantially reduces contact between individuals who are not in the same household.

It is very well understood that viruses mostly spread through people coming into close contact with others.  The spread could be through direct contact with another person, touching something that has been touched by someone else, or through aerosol transmission.  Consequently, restrictions that limit contact between people will also limit the spread of the virus.  The stricter the restrictions, the more they will do so.

Consequently, if we implement restrictions that we might describe as a lockdown (stay at home apart from essential travel and some exercise) then we would expect this to substantially reduce the spread of the virus.  Not only is this consistent with our basic understanding of virus transmission, there are also plenty of studies that indicate that this is indeed the case.

An issue, though, is that a very effective lockdown that brings down the number of cases and limits the spread of the virus, will leave a large fraction of the population still susceptible.  Hence, if the restrictions are then lifted while the virus is still circulating in the community, the infection can start spreading again, cases can start rising again, and limiting this would then require implementing new restrictions. 

So, the issue isn’t so much the lockdown itself, but what we do to avoid cases rising again when we exit the lockdown.  Consequently, public health experts would argue that lockdown-like restrictions should be used to bring cases down to the point where we can then implement alternative interventions, such as effective border controls, and test, trace and isolate, to control the spread of the virus in the community once the lockdown restrictions have been relaxed.

Of course, it may be possible to control the spread of the virus without initially implementing lockdown-like restrictions. However, for a highly transmissible virus this would require acting while case numbers are still low so that these alternatives can be effective.  Any delay means that more stringent restrictions would then need to be implemented. 

Also, during the early stages of an epidemic, cases increase exponentially; in the UK cases were doubling every 3-4 days in early March 2020.  Consequently, if less stringent interventions are implemented and they aren’t effective, cases will continue to rise, as will the number of deaths and the incidence of long-term illnesses. 
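To make that arithmetic concrete, here’s a minimal back-of-envelope sketch (illustrative numbers only, not an epidemiological model) of how fast case numbers grow during a delay:

```python
# With a doubling time of 3-4 days, how much do case numbers grow
# during a delay before acting? Purely illustrative arithmetic.

def growth_factor(days, doubling_time):
    """Multiplicative growth in cases over the given number of days."""
    return 2 ** (days / doubling_time)

for delay in (7, 14, 28):
    lo = growth_factor(delay, 4)  # slower doubling (4 days)
    hi = growth_factor(delay, 3)  # faster doubling (3 days)
    print(f"{delay}-day delay: cases grow {lo:.0f}x to {hi:.0f}x")
```

A two-week delay already means an order-of-magnitude more cases, which is why the window for less stringent interventions is so short.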

So, there is a very short period of time over which we can attempt to control the spread of the virus using interventions that are less stringent than a lockdown.  If these don’t work effectively, then we will have missed an opportunity to limit the number of cases, and deaths, and may have to implement lockdown-like restrictions anyway.

This can also lead to the somewhat counter-intuitive result that the outcome in countries that implemented very strict, lockdown-like restrictions can be worse than in countries that implemented less stringent restrictions.  However, this is because stricter interventions become necessary if the initial interventions have little effect, or if there is a delay to the implementation of restrictions.

Hence, it’s not lockdowns themselves that cause the outcome to be worse, it’s that regions that have failed to act appropriately at an early stage of the epidemic need to then implement much stricter interventions than those regions that took effective action at an early stage of the epidemic.  Taking decisive action early can limit the spread of the virus without having to implement lockdown-like restrictions, or can do so with shorter, more focussed, lockdowns.

The motivation here is not to debate whether or not lockdowns are good, or bad, but to simply highlight that lockdown-like restrictions clearly have an impact on the spread of the virus.  We also want to stress that the implementation of lockdown-like restrictions, and how invasive they are, depends on how promptly action has been taken to limit the spread of the virus.  More stringent restrictions will tend to be associated with worse outcomes because of delays in implementing effective interventions, not because these stringent restrictions led to these worse outcomes. 

Posted in advocacy, Philosophy for Bloggers, Research | 22 Comments

Did a physicist become a climate truth teller?

Steven Koonin, a theoretical physicist, has been profiled in a recent Wall Street Journal article that suggests he’s become a climate truth teller. If you’re aware of Betteridge’s Law of Headlines you’ll already have worked out that I think this is very unlikely to be the case.

credit : xkcd

I’ve written some previous posts about Steven Koonin’s views on climate science and also highlighted a fantastic comment made by Andy Lacis when Steven Koonin had a guest post on Judith Curry’s blog. We also introduced the term Kooninism to indicate a situation when someone suggests that because a number is small the impact is also small. Realclimate also has a recent post about Steve Koonin’s suggestion that we should have a review of climate science.

I do think it’s unfortunate when physicists live up to the stereotype. I’m not a fan of suggesting that researchers should stay in their own lane, but I do think the lone genius who overturns a well-accepted paradigm is a rarity. There’s, of course, nothing wrong with challenging our current understanding, but continually repeating well-debunked talking points is not the ideal way to do so. I also think that if you have the kind of profile that allows you to be promoted in the mainstream media, you should be cautious about the views that you promote.

I was thinking that I would rebut some of what Koonin says in the Wall Street Journal article, but realised that I’d mostly done so in this post, and that Gavin has been even more thorough in this Realclimate post. I’m not really sure what else to say, other than be cautious of accepting the views of (retired) physicists who think they’ve uncovered some truths that have been missed by a large number of experts in another field.

Update:

It turns out that Ray Pierrehumbert wrote an article in 2014 that rebutted many of the arguments that Koonin continues to make today.

Links:

How a Physicist Became a Climate Truth Teller – Wall Street Journal article about Steven Koonin.
Steve Koonin and the small percentage fallacy – one of my previous posts.
Kooninisms – another of my posts.
Andy Lacis responds to Steve Koonin – post highlighting Andy Lacis’ response to Steven Koonin.
Koonin’s case for yet another review of climate science – Gavin Schmidt’s Realclimate post reviewing a Steven Koonin video.
Climate Science is Settled Enough – The Wall Street Journal’s fresh face of climate inaction – 2014 article by Ray Pierrehumbert.

Posted in Uncategorized | 24 Comments

Eight years

I’ve just realised that I started this blog eight years ago today. This past year has been relatively quiet, partly because it’s been a rather unusual year and I’ve not really felt all that motivated to write blog posts, and partly because I’m not all that sure what to write about anymore. I do think that the climate debate has shifted so that we are more discussing what to do, rather than focusing on whether or not we should be doing anything.

IPCC AR5 (2013)

It feels like there’s less of a need for scientists to engage in debates about the science, which seems like progress. I do still wish that there were some things that were better appreciated. Carbon dioxide accumulates in the atmosphere, so (without negative emissions) a reasonable fraction of what we’ve emitted will remain in the atmosphere for a very long time. The changes (again, without negative emission technologies) are essentially irreversible on human timescales. The only real way to stop climate change is to get (net) anthropogenic emissions to zero. How much we will warm depends largely on how much we end up emitting (Figure). Just because we might miss a target doesn’t mean that we shouldn’t try; just missing it is still better than giving up.
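The “how much we warm depends on how much we emit” point is often summarised with the transient climate response to cumulative emissions (TCRE); the ~0.45°C per 1000 GtCO2 value used below is an approximate central estimate, included purely for illustration:

```python
# Rough sketch of the near-linear relation between cumulative CO2
# emissions and warming (the TCRE). The 0.45 C per 1000 GtCO2 figure
# is an approximate central estimate; treat all numbers as illustrative.

TCRE = 0.45  # degrees C of warming per 1000 GtCO2 (approximate)

def warming(cumulative_gtco2):
    """Approximate CO2-driven warming for given cumulative emissions."""
    return TCRE * cumulative_gtco2 / 1000.0

for total in (1000, 2000, 3000, 4000):
    print(f"{total} GtCO2 emitted -> roughly {warming(total):.1f} C")
```

The linearity is the key point: warming scales with the total emitted, which is why stopping warming requires (net) emissions to reach zero rather than merely to stop growing.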

In other news, I got the first dose of my vaccine today, so I’m supposed to take it easy for the rest of the day. I have, though, ended up in a rather convoluted discussion on Twitter about plausible emission scenarios, so I may be failing to do so. It’s a lovely day, so maybe I should go and read a book in the garden. Hope everyone is keeping safe and well.

Posted in Climate change, Personal, Philosophy for Bloggers | 15 Comments

Plausible emission scenarios

A paper by Roger Pielke Jr, Matthew Burgess and Justin Ritchie has been submitted that suggests that the most plausible 2005-2040 emission scenarios project less than 2.5°C of warming by 2100. It’s generated a bit of debate on social media, so I thought I might write a post with some thoughts.

What the paper does is compare emission growth rates in IPCC scenarios with observed growth rates over the period 2005-2020, or with a combination of observed growth rates and those suggested by a set of IEA scenarios that go out to 2040. Those that most closely matched were then assumed to be the most plausible when it comes to projecting emissions, and warming, to 2100. The most plausible suggest a change in forcing of about 3.4 W m⁻² by 2100, only a small fraction are consistent with 6 W m⁻² by 2100, and the even higher emission scenarios (RCP7.0 and RCP8.5) lie far outside the envelope of plausible scenarios.

However, as Ken Caldeira pointed out on Twitter, would you expect growth rates in the early part of a century to be a good predictor of century-scale emissions? Imagine going back to the early 20th century, developing a set of emission scenarios for the 20th century, and then basing their plausibility on how well they match early 20th century growth rates. How well do you think you would have done?
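Ken Caldeira’s point can be illustrated with simple compounding arithmetic (no claim about any actual scenario): small differences in an assumed constant growth rate diverge enormously over 80 years.

```python
# Why early-century growth rates are a weak guide to century-scale
# emissions: modest differences in an assumed constant annual growth
# rate compound dramatically over 80 years. Illustrative only.

def scale_after(years, annual_growth):
    """Factor by which annual emissions change after compounding."""
    return (1 + annual_growth) ** years

for g in (-0.01, 0.0, 0.01, 0.02):
    print(f"{g:+.0%}/yr for 80 years -> emissions x{scale_after(80, g):.1f}")
```

A one-percentage-point difference in the assumed rate roughly doubles (or halves) end-of-century emissions, so matching scenarios to a 15-year window leaves a lot of room for divergence.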

On the other hand, given that there are active efforts to limit how much we emit, and that the Paris agreement has the aim of limiting global warming to well below 2°C (with aspirations to limit it to below 1.5°C), we may well now be in a position where our current trajectory is taking us towards lower emission pathways and, hence, lower levels of global warming. So, my general sense is that the basic result in the paper is not unreasonable.

However, this is a very complex socio-economic system. How much we emit in future depends quite strongly on what we do in future. We could be stupid and find some way to extract hydrocarbons from clathrates, making the higher emission pathways suddenly more likely, or we could be clever and implement effective negative emission technologies that suddenly make the very low emission pathways plausible. I think both are unlikely, but neither is impossible.

Also, this is not an independent system. If we suddenly believe that we’ve done enough to limit warming to 2.5°C and relax efforts to limit future emissions, the higher emission pathways become more plausible. On the other hand, if we believe that we should re-double our efforts to limit future emissions, then the lower emission pathways become more plausible. I do think we should be cautious of making probabilistic claims about complex socio-economic systems.

Also, it appears that there are a number of factors that this paper ignores. For example, it seems to be considering only CO2 emissions from fossil fuels and industry. It doesn’t consider emissions due to land use change, or (as far as I’m aware) consider the emission of non-CO2 greenhouse gases, such as methane. Both of these are factors that can influence how much we would need to emit to reach a certain change in forcing, and hence level of global warming. Similarly, it doesn’t include carbon released through thawing of the permafrost, which would act as an additional emission source. There’s also some uncertainty when it comes to associating emissions with concentrations. We could end up at a higher forcing level even if we follow an emission pathway typically associated with a lower forcing level.

I’m not suggesting that this implies that the basic conclusion is wrong; I do think that our current trajectory is taking us towards something like ~2.5°C (± ~1°C) but I’m less convinced that the confidence in this is warranted. I do think the paper would have benefitted from a broader discussion of the various caveats and uncertainties. In fact, given that two of the authors have also published a paper suggesting that the misuse of scenarios is a research integrity issue, you might expect this to have been more explicit. Of course, if you’ve been involved in the public climate debate for a while, you probably wouldn’t.

Update:

For some reason, Roger decided to challenge myself and Richard Betts to a bet on the basis of his claim that “In 2040 global CO2 emissions from energy will be closer to levels projected by SSP1-2.6 than to SSP4-6.0”. This seemed a bit odd given that there seemed little reason to bet on something that we would rather didn’t happen, and have never suggested is necessarily all that likely to happen. However, it seems that this may have been an attempt to somehow test our arguments. If we’re suggesting that something close to SSP4-6.0 is plausible, but won’t take a bet on this actually materialising, then that supposedly says something about the plausibility. One should probably, though, convolve this with the probability of someone betting on an outcome that they neither want to happen, nor think is all that likely to happen.

Posted in ClimateBall, Global warming, Policy, Roger Pielke Jr | 57 Comments

Solar Radiation Management

It seems the latest controversy in the climate debate is whether or not we should be studying Solar Radiation Management (SRM). As Stoat points out, there is a new National Academy of Sciences report on Reflecting Sunlight, which seems to have divided some people. It turns out, I also have a previous post about Solar Radiation Management.

There seem to be two related criticisms of SRM. One is that studying SRM might be seen as promoting an alternative to actually reducing emissions. SRM researchers do, though, make clear that they’re not proposing SRM as an alternative. The other issue is that SRM, itself, might be risky. It works by reflecting sunlight, which will clearly lead to cooling. However, it doesn’t exactly reverse global warming, so there are potential effects that are possibly difficult to predict. It also does little to counter ocean acidification. Additionally, if the impact of SRM is large, there’s the risk of what is called a termination shock if the SRM level isn’t maintained.

On the other hand, if we don’t study SRM and we end up in a situation where we might want to seriously consider some kind of geo-engineering solution, we could end up using something like SRM without having a particularly good understanding of the potential impact. Also, as Andrew Dessler has argued, if we do think that negative emission technologies are viable, then SRM could be used to suppress a relatively small amount of global warming, until negative emissions technologies can be implemented at a suitable scale.

As a researcher, I tend to be biased in favour of understanding things. However, I’m also in favour of being aware of the potential implications of developing that understanding. When the research is specifically aimed at understanding a technology that we could end up developing, then I do think it’s important to have some kind of governance arrangement. Partly because I don’t think it’s the researchers alone who should be making these decisions, and partly because I don’t think it’s in their interests to do so.

So, I am generally in favour of studying SRM, but I am slightly concerned that some of the narrative is shifting towards being more positive about its use, which I do not think is a good thing. I still think that we should be prioritising emissions reductions and that SRM is not something we should yet be seriously considering. Maybe there will come a point where we might consider implementing it, but I don’t think we’re anywhere close to that.

Links:
Reflecting Sunlight – Stoat’s post.
Solar Radiation Management – a previous post of mine about SRM.

Posted in Climate change, Policy, Research, Scientists, The philosophy of science | Tagged , , , , | 49 Comments

Science in the Time of COVID-19

There was an interesting BBC Radio 4 item, hosted by Sonia Sodha, on Science in the Time of COVID-19. If you can’t access it, there is a related Guardian article. I’ve listened to it a few times, and I’m still not quite sure what to make of it. It essentially focuses on issues with the science, and with scientists, highlighting how some of the scientific analyses have been flawed, how scientists can have quite contentious discussions on social media, and how some double down even when it’s clear that what they presented was wrong.

My first thought was, haven’t people been paying attention to what’s been going on with climate change? What’s happening now with COVID-19 seems to be mirroring what’s been happening in the climate debate for years. Also, the basic narrative seemed to be that the presenter was surprised that the process wasn’t as simple as: the scientists do the science, then they tell the rest of us what to do, and lives get saved. However, few think that that is really how things work, so why do people still get surprised when it becomes clear that the actual process is much more complex than the supposed ideal?

Also, many of the examples of flawed science were things that were strongly called out at the time. I even wrote a post about one of the examples myself. Yes, it’s not great that some scientific analyses are horribly flawed and that some double down when challenged, but it is an unfortunate reality of what is ultimately a social process. This is why one should be cautious of trusting single studies, or of paying too much attention to an individual’s credentials. It’s one reason why I think it’s important to have some idea of what the consensus is. It could be wrong, but it’s probably a reasonable guide at the time.

The suggestion was that this is all an example of post-normal science, when science takes place in conditions of great uncertainty, where values are in dispute, stakes are high, and decisions are urgent. My problem is that I don’t think it is; it’s just normal science. Science isn’t perfect, scientists do sometimes promote ideas that are wrong, scientists do sometimes refuse to acknowledge their errors, and scientists are clearly sometimes far less objective than we might expect them to be.

This might be more obvious when the stakes are high and decisions are urgent, but I don’t think it’s unique to these situations. I don’t think we benefit from suggesting that under these circumstances science is different to what it is when stakes aren’t as high, and decisions aren’t as urgent. In some sense, what this seems to do is validate flawed science. We should be calling out flawed science, not suggesting that it’s just a part of post-normal science.

There’s more that I could say, but I realise that this is now getting rather long. This is clearly an interesting and important issue, and I do agree with some of what is presented, but I think some of it missed the point, or was too simplistic. I also tend to think that some of what was presented was essentially knocking down strawmen; criticising a simplistic caricature of science/scientists that isn’t really consistent with reality. I’m going to stop there, though, but would be interested in other people’s views.

Posted in Philosophy for Bloggers, Science, Scientists, Sound Science (tm), The philosophy of science, The scientific method, We Are Science | Tagged , , , , , , | 62 Comments