A real hiatus

For reasons I certainly won’t go into here, I intend to take a break from all of this blogging. I don’t know for how long, but for a few weeks at least, and maybe longer. Before I do, though, I thought I might comment on Judith Curry’s recent post about Criticizing with kindness. It’s another attempt to discuss/criticise the tone of the online climate science debate. As much as I agree that the tone is poor, my personal view is that anyone who thinks this, and would like it to be better, can very easily do something about it: improve their own tone.

As far as I’m concerned, if you think discussions should take place in good faith, just make sure you do so yourself. Be prepared to consider the argument being made by the other person. If they tell you that you’ve misunderstood what they’re saying, consider that you have. Be willing to simply disagree. Consider that at least some of what the other person says may have merit. Don’t just nitpick a minor point in order to undermine what they’re saying. It’s not actually all that difficult. I’m sure we all engage in such discussions on a daily basis. Here’s maybe the crucial point: if you think what the other person says is absurd, just stop. There’s no way you can have a good faith discussion with someone who you think is talking nonsense.

So, I think there are certain things that we can regard – given the evidence we have today – as essentially true. For example:

  • The rise in atmospheric CO2 since the mid-1800s is virtually all a consequence of anthropogenic emissions. If you want to know why, you can read this.
  • There have been numerous millennial temperature reconstructions using a variety of different proxies and a variety of different techniques. They almost all produce a hockey stick-like shape and indicate that temperatures today are probably higher, and have risen faster, than at any time in more than a thousand years. You can read more here.
  • The instrumental temperature record has been replicated/reproduced by numerous different groups. All the different records show that we’ve warmed by more than 0.8 degrees since 1880. Homogenization is a crucial part of generating these temperature records and is not an indicator of data tampering, nor a sign that scientists want to show that it’s warming faster than it actually is (e.g., here).
  • Our understanding of climate change is not primarily based on global climate models (GCMs). They provide some evidence for how our climate may change if we continue to increase anthropogenic forcings. Also, claiming that climate models have failed because they didn’t specifically predict the so-called “pause” is like suggesting that you can’t be sure the river will flow downhill because you can’t predict the winner at pooh sticks (H/T Richard Betts).
  • We may not know, precisely, the equilibrium climate sensitivity and the transient climate response, but we do have evidence that provides a range for each of these quantities. Claiming that it will probably be on the low side of these ranges is simply wrong. The probability distribution tells us the likelihood of each portion of these ranges, and deciding that a particular interval is more likely than this probability distribution suggests is simply ignoring some (or most) of the evidence (a quick numerical sketch of this point follows the list).
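To make that last point concrete, here’s a minimal numerical sketch. The distribution below is purely hypothetical (a lognormal chosen only to loosely resemble published sensitivity ranges, not taken from any actual study); the point is simply that declaring the low end “probable” gives it far more weight than the distribution itself does.

    # A purely hypothetical sensitivity distribution, for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    ecs = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=1_000_000)

    # How much probability actually sits on the "low side" (below 2 degrees, say)?
    p_low = np.mean(ecs < 2.0)
    print(f"P(sensitivity < 2.0) = {p_low:.2f}")  # roughly 0.15 for these made-up numbers

    # Treating the low side as "probable" elevates this ~15% to near-certainty,
    # which simply discards the ~85% of the distribution that sits above it.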

There are probably more, but the point I’m getting at is that it really isn’t possible to have a good faith discussion with anyone who disputes the points above. There are certainly perfectly valid reasons for discussing the above points, but doing so on a blog (or on Twitter) with someone who disputes them would just seem to be a waste of time. Our current understanding is based on a large amount of published scientific evidence. It’s highly unlikely that a bunch of non-experts on blogs are going to overturn this understanding. Of course, if someone does some actual research, publishes some papers, and convinces the scientific community that some of the points above are wrong, great. Do it. That’s how science works. I just don’t see the point in having blog discussions with people who are arguing against well-established science, or how such discussions could possibly take place in good faith.

Anyway, for what it’s worth, that’s my view. I’m certainly not suggesting that everyone has to agree with the above points; simply that I don’t see the point in arguing with those who don’t. It’s certainly a waste of my time, if not of theirs. Of course, if anyone does disagree with any of the points I’ve made above, they’re welcome to explain why in the comments. However, it will require that they do more than point to a single (or a few) paper that disputes the mainstream view. Similarly, if anyone wants to add to the list, feel free.

As I said at the beginning, I’m going to take a break for a while. I certainly don’t plan to write any posts, but may respond to comments if I get a chance.


Fraudulent?

Greg Laden has a good post about Judith Curry’s recent post in which she implies that the Hockey Stick might be fraudulent. I recommend reading Greg’s post, but essentially he points out that it’s perfectly normal to take a large number of different datasets and combine them to try and illustrate something relatively simple. In the case of the Hockey Stick graph, it’s to try and present our global (or Northern Hemisphere in some cases) temperature history over the last millennium.

If you want to know more about the Hockey Stick controversy, you can read Greg’s post. I thought I might just make a broader point. Maybe I’m just odd (yes, yes, okay, I am), but as far as I’m concerned, the important question – when it comes to something like the Hockey Stick – is whether or not what it presents is a reasonable representation of our millennial temperature history. All these claims of fraud, misconduct, etc. just seem to be attempts to undermine a result without actually showing that what it presents is wrong. In fact, I would argue that if a scientific result is based on fraud/misconduct, it should be trivial to show that it’s wrong (i.e., redo the work in a non-fraudulent way and present the correct result, or show that you can’t reproduce the result). It’s certainly my opinion that all these accusations of fraud/misconduct really arise just because the Hockey Stick graph presents a result that some find inconvenient.

I mentioned in an earlier comment that I was engaged in a scientific debate with another group who, in my view, are presenting their work in a way that somewhat overplays its significance. However, what they’ve actually shown is interesting and quite important, but not really for the reasons they suggest. I do find it quite annoying that they’ve written some papers presenting their results in a way that sounds much more interesting than – in my view – is warranted (their papers are getting more citations than mine :-) ). On the other hand, I’ve managed to write a couple of papers in response and can show that what they’re suggesting is wrong without needing to make any suggestions of scientific misconduct. At the end of the day, we gain understanding even if there are some blips along the way.

It would be much better, in my view, if people were willing to be more careful about what they present and not overplay the significance of their results, but scientific debates are perfectly normal and can, typically, take place without throwing around accusations of fraud and misconduct. There are certainly occasions when it is valid to make an accusation of fraud or misconduct, but this would normally be when someone cannot replicate a result and it becomes clear that the original researchers were fundamentally dishonest in some way. A mistake does not constitute fraud, nor does doing something in a way that others might disagree with.

That’s really all I was going to say. I just still find myself being amazed by what some people seem willing to say. I know that by now I should no longer be amazed by what anyone says, but I still am. I don’t really see how throwing around accusations of fraud and misconduct helps us to gain scientific understanding, but my guess is that that isn’t the goal of those who do so.


Matt Ridley, you seem a little too certain!

I thought I might add a new chapter to my series, which I’ve called helpful tips for the Global Warming Policy Foundation. The first was a quick science lesson for Lord Lawson. The second was the cheerfully titled Come on Andrew, you can get this. My new instalment is an attempt to explain, to Matt Ridley, the significance of an uncertainty interval, something that someone with a science PhD should understand, but appears not to. I actually get paid to teach this kind of stuff, so you’d think they might appreciate the free and friendly advice. I get the impression, though, that they don’t :-)

It relates to a recent article that Matt has written for the Wall Street Journal, in which he says

As a “lukewarmer,” I’ve long thought that man-made carbon-dioxide emissions will raise global temperatures, but that this effect will not be amplified much by feedbacks from extra water vapor and clouds, so the world will probably be only a bit more than one degree Celsius warmer in 2100 than today.

When I asked Matt on what basis he was making this claim, he directed me to the GWPF report OverSensitive, written by Nic Lewis and Marcel Crok, and – in particular – to Table 3 (below).
[Table 3 from the Lewis and Crok OverSensitive report]
So, here’s the basic problem: the values that Matt appears to think justify his claim are based on a single value for the Transient Climate Response (TCR). In fact, this appears to be based on the work of Otto et al., which uses observations (plus forcings from models) to estimate the TCR, and concludes that it has a 5 – 95% range of 1 to 2 degrees (with a best estimate of 1.35 degrees). He also argues that the RCP8.5 emission pathway is completely unrealistic and so should be ignored. Therefore he’s concluding that the worst-case scenario is RCP6.0 with a probable TCR of 1.35 degrees. However, the point, Matt, is that you can’t just pick a single number; you should really consider the range.

I thought I would illustrate this using a basic one-box model

C \frac{dT(t)}{dt} = F(t) - \lambda T(t)

where F is the change in anthropogenic forcing, C is the heat capacity of the system, and \lambda is the climate sensitivity. I determined \lambda values that resulted in TCR values of 1, 1.35, and 2 degrees (with the forcing assumed to change because of a 1% per year rise in CO2). The main reason for this TCR range is uncertainty in the aerosol forcing, so I then adjusted the anthropogenic forcing from the RCP dataset so that my model results over the period 1880 – 2010 roughly matched the instrumental temperature record. I then extended the forcing dataset to 2100 along an RCP6.0 pathway (i.e., reaching a change in anthropogenic forcing of 6 Wm⁻² by 2100). The basic result is in the figure below.
[Figure: one-box model warming to 2100 along RCP6.0, relative to 1880, for TCR values of 1, 1.35, and 2 degrees]
So, indeed, if the TCR is 1.35 degrees it would be a bit over 1 degree warmer than today in 2100 (the y-axis in the figure is relative to 1880, so take away about 0.8 degrees to get relative to today). However, the work that Matt is basing his views on suggests that the TCR is as likely to be above 1.35 degrees as below. Hence the warming is as likely to be higher than suggested by Matt as it is to be lower. Also, there is a non-negligible chance that it could be as much as 2 degrees higher than today in 2100. I should add that this is more like a 95% range, rather than the 66% range presented by the IPCC. Anyway, the point is, Matt, that this is roughly what everyone was getting at on Twitter; basing your estimate for future warming on a single TCR value from a single study is not very scientific. Some might call it a cherry-pick.
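For anyone who wants to play with this themselves, here’s a minimal sketch of the kind of calculation behind the figure above. The heat capacity and the forcing pathways are crude stand-ins (an assumed fixed mixed-layer value, and linear ramps rather than the actual RCP forcing data I used), so don’t expect it to reproduce my numbers exactly.

    # Minimal one-box sketch: C dT/dt = F(t) - lambda*T. Heat capacity and
    # forcing ramps are simplified stand-ins, not the actual RCP data.
    import numpy as np
    from scipy.optimize import brentq

    F2X = 3.7   # forcing from doubled CO2, W m^-2
    C = 13.3    # ~100 m ocean mixed layer, W yr m^-2 K^-1 (assumed)
    DT = 0.1    # timestep, years

    def integrate(forcing, lam):
        """Forward-Euler integration of C dT/dt = F - lambda*T."""
        T, out = 0.0, []
        for F in forcing:
            T += DT * (F - lam * T) / C
            out.append(T)
        return np.array(out)

    def tcr(lam):
        """Warming at doubling under a 1%/yr CO2 rise (doubling takes ~70 years)."""
        t = np.arange(0.0, 70.0, DT)
        return integrate(F2X * t / 70.0, lam)[-1]

    for target in (1.0, 1.35, 2.0):
        lam = brentq(lambda x: tcr(x) - target, 0.5, 5.0)
        # Crude RCP6.0-like pathway: forcing ramps from 0 to 6 W m^-2 over 1880-2100.
        t = np.arange(0.0, 220.0, DT)
        warming = integrate(6.0 * t / 220.0, lam)[-1]
        print(f"TCR {target}: lambda = {lam:.2f}, warming since 1880 = {warming:.2f} K")

Subtracting the roughly 0.8 degrees we’ve already had then gives the warming relative to today.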

There are also some additional points that scientists should really be willing to acknowledge. The range for the TCR from the study used by Matt (1 – 2 degrees) is lower than the IPCC estimate (1 – 2.5 degrees). However, the study used by Matt is an observationally-based approach that suffers from a number of possible issues. In addition to the uncertainty in the aerosol forcing, it’s also sensitive to variability in the surface temperature, cannot capture possible non-linearities in the feedback response, and cannot easily compensate for inhomogeneities in the forcings. This doesn’t mean that it’s wrong, but this is evidence that it might be underestimating the TCR. So, not only could we warm more than Matt suggests (using exactly the same evidence as he’s using), there’s additional evidence suggesting that even this could be an underestimate.
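To see why these observationally-based estimates are so sensitive to the aerosol forcing, here’s a sketch of an Otto et al.-style energy-budget calculation, TCR = F_2x ΔT / ΔF. All of the numbers are illustrative stand-ins, not the actual inputs used in that paper.

    # Energy-budget TCR sketch: TCR = F2x * dT / dF. Illustrative numbers only.
    import numpy as np

    rng = np.random.default_rng(1)
    F2X = 3.7                            # W m^-2 per CO2 doubling
    dT = 0.75                            # observed warming, K (a stand-in value)
    # The net forcing change is uncertain largely because the (negative)
    # aerosol forcing is poorly known; represent that with a wide spread.
    dF = rng.normal(2.0, 0.4, 100_000)   # W m^-2 (stand-in mean and spread)

    tcr = F2X * dT / dF
    lo, mid, hi = np.percentile(tcr, [5, 50, 95])
    print(f"TCR 5-95% range: {lo:.2f} - {hi:.2f} K (median {mid:.2f} K)")

Shift the assumed aerosol forcing slightly and the whole range moves with it, which is the basic reason these estimates remain so uncertain.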

So, I hope this helps Matt to understand what people were getting at when they were questioning him on Twitter. I should add that I did this all rather quickly, so I’m not claiming that the range I’ve got for the warming by 2100 is exactly right, but I think it’s reasonable. This was just meant to be an illustration, rather than being exact. Of course, if anyone has any thoughts or corrections, feel free to make them through the comments.


Sometimes all you can do is laugh!

I see Andrew Montford has authored a new Global Warming Policy Foundation report about The Warming Consensus and its critics. Personally, I find this whole argument about the consensus tedious and childish. It exists; knowing it exists could be important; its existence doesn’t mean that the scientific view is correct; science doesn’t work via consensus; and denying its existence is infantile and foolish.

So, who are the critics that Andrew’s report highlights? Well, there’s a single quote from Mike Hulme made – if I remember correctly – on a Making Science Public blog post. There’s Richard Tol’s paper, which took something like 5 submissions to 4 different journals, and which, at best, simply points out what the original paper already acknowledged (and, at worst, is simply complete and utter bollocks). There are quotes from blog posts written by a ranty PhD student from Arizona, whose views appear so absurd that I can’t bring myself to mention their name or link to their blog posts. And, last but not least, a paper co-authored by Christopher Monckton.

So, here’s my new theory of how some people hope to “win”. Write reports (and say things) that are so absurd that anyone sensible simply bursts out laughing, assumes that they’re joking, and moves on without commenting. That way, the author can then claim that no one has yet contradicted what they’ve written and that, therefore, they must be right. In a similar vein, I spent some of my day discussing, with Judith Curry on Twitter, whether or not we’re virtually certain that the rise in atmospheric CO2 is anthropogenic. Again, what else can you do but laugh?


The best evidence we have!

Brian Cox took a bit of flak because of a Guardian article about a speech he’d given, in which he appeared to be suggesting that – when discussing climate science – scientists should sound more certain than they are. I don’t think this is what he was suggesting. What I think he was arguing was that we should be careful not to sound less certain than we actually are; almost the opposite of what some have interpreted him as saying.

Brian has clarified what he meant in a post on his own blog. He argues that the term uncertainty is often misunderstood and misused. It doesn’t mean we’re uncertain. If anything, it’s the opposite: it represents a confidence interval; it tells us how confident we are about a particular result. He went on to say something that really resonated with me. He suggested that there is something that we can say with certainty, which is:

The consensus scientific view is the best we can do at any given time, given the available data and our understanding of it. It is not legitimate and certainly of no scientific value (although there may be political value) to attack a prediction because you don’t like the consequences, or you don’t like the sort of people who are happy with the prediction, or you don’t like the people who made the prediction, or you don’t like the sort of policy responses that prediction might suggest or encourage, or even if you simply see yourself as a challenger of consensus views in the name of some ideal or other.

This probably describes my reason for starting this blog. We have an immense amount of information about climate science, and all of the information is presented with suitable confidence intervals. All of this information is available to be used by our policy makers to decide what we should, or should not, do with regards to climate change. The confidence intervals include the possibility that we will not warm much and, hence, that the impact of climate change will be minimal. However, our understanding at the moment is that this is very unlikely. Similarly, it is possible that warming will be extremely high and the impacts will be severe even if we do reduce our emissions but, again, this is unlikely.

Essentially, the best evidence we have today is the consensus scientific view, which represents our best understanding. It could, and will, change as we gather more data and improve our models and theories. It could even end up being very wrong, but it’s still the best we have today. All of those who like to point out that consensus views have been wrong in the past should recognise two very obvious truths: there are also very many consensus views that have turned out to be correct, and – even though some consensus views did turn out to be wrong – those views were still the best evidence of their day. It makes no sense to argue against a consensus position simply because it could be wrong. That’s an argument for ignoring evidence and basing policy decisions on ideology alone.

Many also try to argue against the consensus position on the basis of it having no significance with respect to science itself. This sounds good, but really doesn’t make much sense. If you’re an ecologist who would like to study the possible impact of climate change on some ecological system, you need to understand the consensus position. You can’t be expected to re-invent the wheel and redo all the climate modelling so as to inform your own research. You use what others have already done. It’s true that the consensus position should not define our understanding indefinitely, in the sense that someone could make a discovery that overturns it, but that doesn’t mean that the consensus position has no relevance whatsoever.

Generally speaking, what Brian Cox seems to be saying is very much what I’ve been trying to say here. The consensus position represents our best understanding today, and includes confidence intervals that tell us something of the likelihood of the different possible outcomes. Sensible policy should be based on this position. If your policy preference requires arguing against this consensus position, then your policy is in trouble if the consensus position turns out to be right. You may turn out to be right, but that would be more by luck than design, and I see no reason why basing policy on luck makes any sense whatsoever. Of course, our understanding will change with time, but that doesn’t mean that the consensus position is not the best evidence we have today.

I may have said this before, but I’d be really interested to see if those who typically argue against mainstream climate science (Andrew Montford being a prime example) can actually construct an argument for their preferred policy option that doesn’t require claiming that there is something fundamentally wrong with the consensus position. I don’t think they can, but it would be interesting to see them try. Of course, individuals are welcome to believe whatever they like, but I see no reason why our policy makers should base their decisions on anything other than the best evidence we have today.


The “pause” that probably isn’t

Matt Ridley has a particularly silly article in the Wall Street Journal called Whatever Happened to Global Warming? In his article he says

Well, the pause has now lasted for 16, 19 or 26 years—depending on whether you choose the surface temperature record or one of two satellite records of the lower atmosphere. That’s according to a new statistical calculation by Ross McKitrick, a professor of economics at the University of Guelph in Canada.

It has been roughly two decades since there was a trend in temperature significantly different from zero. The burst of warming that preceded the millennium lasted about 20 years and was preceded by 30 years of slight cooling after 1940.

This is based on a new paper by Ross McKitrick, in which he determines the longest period prior to 2009 over which the trend ± 2σ uncertainty interval in the temperature record still includes zero (i.e., over which the warming is not statistically significant). Richard Telford has already done a wonderful take-down of this work by showing that 15-20 year periods without a significant trend are quite possible even if the underlying long-term trend is constant and rising. What Ross McKitrick and Matt Ridley are almost certainly doing is making a Type II error: accepting the null hypothesis (that we’re not warming) when we almost certainly are. The comments by Dikran and Chris Colose on Richard’s post are also worth reading.
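Richard’s point is easy enough to check for yourself. Here’s a minimal simulation (all parameters made up, chosen only to be vaguely temperature-like): a constant underlying warming trend plus AR(1) noise, asking how often the most recent 15-year trend is statistically indistinguishable from zero.

    # A constant trend plus autocorrelated noise still produces frequent "pauses"
    # (recent trends whose 2-sigma interval spans zero). Made-up parameters.
    import numpy as np

    rng = np.random.default_rng(42)
    TREND, PHI, SIGMA = 0.015, 0.6, 0.1    # K/yr trend, AR(1) coefficient, noise sd
    N_YEARS, WINDOW, N_SIMS = 60, 15, 2000

    years = np.arange(WINDOW)
    hits = 0
    for _ in range(N_SIMS):
        noise = np.zeros(N_YEARS)
        for t in range(1, N_YEARS):
            noise[t] = PHI * noise[t - 1] + rng.normal(0.0, SIGMA)
        series = TREND * np.arange(N_YEARS) + noise
        recent = series[-WINDOW:]
        # Plain OLS trend and standard error over the last WINDOW years
        slope, intercept = np.polyfit(years, recent, 1)
        resid = recent - (slope * years + intercept)
        se = np.sqrt(resid.var(ddof=2) / np.sum((years - years.mean()) ** 2))
        if slope - 2.0 * se < 0.0:         # 2-sigma interval reaches zero
            hits += 1

    print(f"'Pause' in {hits / N_SIMS:.0%} of simulations, despite a constant trend")

Concluding “no warming” from such a period is exactly the Type II error described above.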

Matt Ridley goes on to say

This has taken me by surprise. I was among those who thought the pause was a blip. As a “lukewarmer,” I’ve long thought that man-made carbon-dioxide emissions will raise global temperatures, but that this effect will not be amplified much by feedbacks from extra water vapor and clouds, so the world will probably be only a bit more than one degree Celsius warmer in 2100 than today. By contrast, the assumption built into the average climate model is that water-vapor feedback will treble the effect of carbon dioxide.

Well, here Matt has completely ignored the ocean heat content data and the continually decreasing ice mass, both of which indicate that we’re accruing energy at a rate consistent with what we’d expect. The surface warming is associated with only a few percent of the planetary energy imbalance, so it’s not at all surprising that it shows significant variability.

And then, quite remarkably, he concludes with,

But now I worry that I am exaggerating, rather than underplaying, the likely warming.

Don’t worry, Matt, I think many people still regard you as an out-and-out denier. Of course, I would never call you that, but your concern that you’ve been a little alarmist is entirely misplaced.

When I see this kind of thing it makes me realise that there’s no real point in discussing this with such people. We’re speaking completely different languages. If you want to consider anthropogenic global warming, you really should consider all the evidence. You can’t consider a subset of the evidence and then draw conclusions about whether we’re warming or not. Additionally, these discussions invariably end up being ones in which you’re challenged to show where there’s an error in their calculation. There’s not necessarily an error, but context is crucial. It’s a little like (and I exaggerate) someone doing a simple calculation (2 + 2 = 4), claiming that they’ve shown that Einstein is wrong, and then insisting that you can’t prove them wrong until you find the error in their calculation. I’m sure Willard would have some term that describes such forms of argument.

I’ll finish this post by mentioning a relevant article by Richard Betts on Climate Revolution called Pooh sticks, pauses, and predictability. It’s a good post, and it reminded me that – at university – we had a student magazine that once included the classic drawing of Pooh and Piglet playing Pooh sticks, but in which they were facing the other way. My mother claimed that it had ruined Winnie-the-Pooh for her. Richard finishes his post with

Claiming that long-term warming won’t happen because the ‘pause’ was not specifically predicted is like saying that you can’t be sure the river will flow downhill because you can’t predict the winner at pooh sticks.


The length of the “pause”?

In a recent post, Judith Curry highlighted a new paper by Shaun Lovejoy called Return periods of global climate fluctuations and the pause. This is a follow-up to an earlier paper, which concluded that

the probability of a centennial scale giant fluctuation was estimated as ≤0.1%, a new result that allows a confident rejection of the natural variability hypothesis.

In other words, the rise in temperature over the last 100 years or so is almost certainly anthropogenic. This new paper also says

The hypothesis is that while the actual series Tnat(t) does depend on the forcing, its statistics do not. From the point of view of numerical modeling, this is plausible since the anthropogenic effects primarily change the boundary conditions not the type of internal dynamics and responses,

which is interesting given the discussion that prompted an earlier post. The basic result of this paper is essentially that the variability we’ve seen in the instrumental temperature record is simply a consequence of natural variability around a long-term anthropogenic trend. Additionally, the estimated climate sensitivities are entirely in line with IPCC estimates.

So, I should quite like this paper, as it is not only a follow-up to a paper that ruled out the possibility that the rise in temperature over the last 100 years or so could be natural, but also illustrates that the variability in the instrumental temperature record is simply natural variability around a long-term anthropogenic trend; something I’ve been stressing on this blog. But I don’t really. This is a rather odd paper and – unless I’m missing something – I’m rather surprised that it got published.

Why? Well, the fundamental equation is essentially the one below:

T(t) = \lambda_{2 \times CO_2, eff} \log_2 \left( \rho_{CO_2}(t) / \rho_{CO_2, pre} \right) + T_{nat}(t).

This seems to be a rather unusual form of such an equation, as it suggests that the temperature rise is a linear function of the increasing forcing, plus some term representing natural variability. So, if the forcing stops rising, the forced response stops instantly, rather than continuing to rise to equilibrium. The coefficient in front of the first term on the right-hand side is therefore really only some kind of effective transient response. The bigger issue, in my view, is that the forcing is CO2 only. The paper actually says

Two things should be noted: first, Tnat includes any temperature variation that is not anthropogenic in origin, i.e., it includes both “internal” variability and responses to any natural (including solar and volcanic) forcings.

So, as stated, the natural variability being investigated in this paper includes both external natural forcings and internal variability. What’s more, this isn’t even quite right, because if the only forcing included is CO2, then the natural variability term also includes other anthropogenic influences (aerosols, black carbon, land use). Therefore, not only is the natural variability in this paper not what Judith would regard as natural variability (the term normally refers to unforced natural influences), it’s not even natural in any reasonable interpretation of the term, as it includes anthropogenic influences.

So, what this paper seems to have done is determine the variability associated with non-CO2 anthropogenic influences and both forced and unforced natural influences. Given that some of these influences are stochastic, some have cycles (solar), and some are monotonically increasing or decreasing (aerosols, land use, black carbon), how can any kind of pattern really make sense? If there is a pattern, it surely has to be purely coincidental. Furthermore, what’s of real interest is the magnitude of the influence of internal variability, which this paper appears entirely unable to determine (and which is, I think, what some have assumed it is doing).

What it really should be doing, I think, is using a standard one-dimensional model

C \frac{d T}{dt} = dF(t) - \lambda T + T_{nat}(t),

where C is the heat capacity of the system, dF(t) is the change in external forcing (natural and anthropogenic), \lambda is the climate sensitivity term, and T_{nat}(t) could be some term representing internal variability. If this was done (and it may already have been), I think one would find that the unforced variability is much smaller than indicated in this paper.
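To illustrate why the difference matters, here’s a small sketch (with arbitrary parameter values) comparing the paper’s purely instantaneous response with the one-box model above, for a forcing that ramps up and then holds steady. The instantaneous form stops warming the moment the forcing flattens, while the one-box model keeps relaxing towards equilibrium.

    # Instantaneous response T = F/lambda versus one-box C dT/dt = F - lambda*T,
    # for a forcing that ramps up and then stays flat. Arbitrary parameters.
    import numpy as np

    C, LAM, DT = 13.3, 1.3, 0.1                # W yr m^-2 K^-1, W m^-2 K^-1, years
    t = np.arange(0.0, 200.0, DT)
    F = np.clip(t / 100.0, 0.0, 1.0) * 3.7     # ramp to 3.7 W m^-2 over 100 yr, then flat

    T_instant = F / LAM                        # Lovejoy-style: tracks the forcing exactly
    T_box = np.zeros_like(t)                   # one-box: lags, and keeps rising
    for i in range(1, len(t)):
        T_box[i] = T_box[i - 1] + DT * (F[i - 1] - LAM * T_box[i - 1]) / C

    i100 = int(round(100 / DT))
    print(f"Year 100: instantaneous = {T_instant[i100]:.2f} K, one-box = {T_box[i100]:.2f} K")
    print(f"Year 200: instantaneous = {T_instant[-1]:.2f} K, one-box = {T_box[-1]:.2f} K")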

So, I think Judith likes the paper because it suggests that natural variability could be quite large and because it suggests “pauses” could return every 20 years or so. Given that “natural” in this paper doesn’t really mean natural and that any kind of pattern is presumably entirely coincidental, it’s a rather unconvincing result. The paper finishes with

To be fully convincing, GCM-free approaches are needed: we must quantify the natural variability and reject the hypothesis that the warming is no more than a giant century scale fluctuation.

I don’t really agree with this. There’s only so much one can do with simple models. They’re very useful, but the idea that we can completely characterise the anthropogenic and natural influences using simple models seems a little unrealistic. In my view, the role of simple models is to provide a way of checking that the results of more complex models make sense. Of course, those who don’t like GCMs appear to like this conclusion.
