There is no tribe!

David Rose has a new article about Judith Curry called I was tossed out of the tribe. Well, here’s problem number one. There is no tribe. If you’re a scientist/researcher, then you should be aiming to do research that is honest and objective, the results of which should not depend on who you regard as being your contemporaries. If you think there’s some kind of tribe to which you need to belong, then you’re doing it wrong.

Apparently, also, Judith Curry's record of peer-reviewed publication in the best climate-science journals is 'second to none'. Sorry, but this is simply not true. It's pretty decent, but it's not second to none. The article also says: 'Warming alarmists are fond of proclaiming how 97 per cent of scientists agree that the world is getting hotter, and human beings are to blame.' Ignoring the term 'warming alarmists', the reason people say this is that it is essentially true.

Judith Curry apparently also says

‘…..A sensitivity of 2.5˚C makes it much less likely we will see 2˚C warming during the 21st century. There are so many uncertainties, but the policy people say the target is fixed. And if you question this, you will be slagged off as a denier.’

Firstly, a sensitivity (ECS, I assume) of 2.5°C does not make it much less likely that we will see 2°C of warming during the 21st century. Not only do the ranges of projected warming already include the possibility that the sensitivity might be 2.5°C, but what we will see depends largely on how much we emit. Also, the target is fixed in the sense that it is defined according to giving us some chance of keeping warming below 2°C; normally a 66% chance. It already includes the uncertainty about climate sensitivity and the uncertainty about carbon cycle feedbacks. Maybe when Judith questions this, she gets slagged off for appearing to not understand this basic concept; something a scientist with a record that is apparently second to none should be able to understand.

Judith Curry also added that

her own work, conducted with the British independent scientist Nic Lewis, suggests that the sensitivity value may still be lower, in which case the date when the world would be 2˚C warmer would be even further into the future.

Well, yes, but there are many reasons why their ECS estimate is probably too low. Just because you're proud of your own work doesn't mean you get to dismiss everything else. That climate sensitivity could be lower than current best estimates does not mean that it probably is.

There are numerous other examples of nonsense, such as

Meanwhile, the obsessive focus on CO2 as the driver of climate change means other research on natural climate variability is being neglected.

No, it’s not.


solar experts believe we could be heading towards a ‘grand solar minimum’ — a reduction in solar output (and, ergo, a period of global cooling) similar to that which once saw ice fairs on the Thames. ‘The work to establish the solar-climate connection is lagging.’

Firstly, there isn’t some lag in work on the solar-climate connection, and the solar experts were rather clueless about climate.

The article finishes with

She remains optimistic that science will recover its equilibrium, and that the quasi-McCarthyite tide will recede:

Rather than it receding, Judith Curry appears to be helping to get it started.

So, as far as I can tell, Judith Curry gets criticised because she says things that – for a senior scientist who has a record that is apparently second to none – are embarrassingly wrong. She also appears to have ejected herself from a tribe that only exists in her imagination. Good thing there are credulous journalists, like David Rose, who are willing to write supportive articles.

Posted in Climate change, Climate sensitivity, ClimateBall, Judith Curry, Science | 58 Comments

Energy budgets

Stoat highlights an interesting paper suggesting that the rates of ancient climate change may have been underestimated. I haven't had a chance to really look at it, so if anyone has, and has any views, that would be interesting. I might suggest that you post any comments at Stoat, though, as I think he's looking for them :-)

This, however, gives me an opportunity to mention another paper that I found quite interesting. It's by Xie, Kosaka, and Okumura, and is called Distinct energy budgets for anthropogenic and natural changes during global warming hiatus. The basic idea is that if you consider the standard energy balance formalism, then the system heat uptake rate, Q, is essentially given by

Q = F - \lambda T,

where F is the change in forcing, T is the change in temperature and \lambda is the feedback factor. If there is a slowdown in surface warming (dT/dt \rightarrow 0) while we continue to increase anthropogenic forcings, then – if this equation applies – we'd expect to see an increase in the system heat uptake rate (dQ/dt \sim dF/dt > 0). However, this isn't really what we've seen in the last decade or so, when surface warming has been slower than expected.
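Just to make that expectation concrete, here's a minimal sketch of the arithmetic (the forcing trend, feedback parameter and temperature series are made-up numbers, purely for illustration, not observations):

```python
import numpy as np

# Illustrative values only: these are not the observed forcings or temperatures.
years = np.arange(2000, 2016)
F = 0.04 * (years - 2000) + 1.6   # forcing rising ~0.04 W/m^2 per year (assumed)
lam = 1.3                         # feedback parameter, W/m^2/K (assumed)

# Case 1: temperature keeps rising along with the forcing.
T_warming = 0.02 * (years - 2000) + 0.8
# Case 2: surface warming stalls (dT/dt ~ 0).
T_stalled = np.full_like(F, 0.8)

# Q = F - lambda * T : the system heat uptake rate.
Q_warming = F - lam * T_warming
Q_stalled = F - lam * T_stalled

# With T flat and F still rising, Q should rise at roughly dF/dt.
print("dQ/dt (warming case): %.3f W/m^2 per year" % np.polyfit(years, Q_warming, 1)[0])
print("dQ/dt (stalled case): %.3f W/m^2 per year" % np.polyfit(years, Q_stalled, 1)[0])
```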

What Xie, Kosaka & Okumura suggest is that the feedback response is different when the warming is internally-driven, compared to when it is externally forced. Specifically, when the temperature variation is internally-driven, the resulting spatial pattern produces a feedback response that leads to a top-of-the-atmosphere imbalance that is somewhat out of phase with the temperature variation.

This is illustrated in the figure below, which shows (in the left-hand panel) the externally-forced temperature response (black line), the internally-driven temperature variation (green line) and the net temperature response (red line). The right-hand panel shows the change in system heat uptake rate due to the externally forced component only (black line), and what would be expected if the response to the internally-driven warming were the same as that due to externally-driven warming (brown line). This shows that we'd expect – if the above equation applied – an increase in system heat uptake rate as the internal variability produced a temperature slowdown. The green line, however, shows a TOA response that is out-of-phase with the internally-driven temperature variation, and the red line shows how this influences the net system heat uptake rate, and might explain why the system heat uptake rate hasn't increased during the surface warming slowdown.

Credit: Xie, Kosaka & Okumura (2015)
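As a very rough illustration of the idea (this is not the paper's calculation; the amplitudes, periods and the quarter-cycle lag below are just assumptions for the sketch), you can add an internally-driven temperature oscillation to a forced trend and let the TOA response to the internal component lag the temperature:

```python
import numpy as np

t = np.linspace(0, 30, 301)                       # years
lam = 1.3                                         # feedback parameter, W/m^2/K (assumed)

T_forced = 0.02 * t                               # externally-forced warming (assumed trend)
T_internal = 0.1 * np.sin(2 * np.pi * t / 15.0)   # internally-driven variation (assumed)
F = 0.04 * t                                      # rising anthropogenic forcing (assumed)

# Forced component follows the standard energy-balance relation, Q = F - lambda*T.
Q_forced = F - lam * T_forced

# If the internal component behaved the same way, Q would rise whenever T_internal dips.
Q_internal_inphase = -lam * T_internal

# The paper's suggestion, sketched here as a quarter-cycle lag: the TOA response to
# internally-driven temperature change is out of phase with the temperature itself.
Q_internal_lagged = -lam * 0.1 * np.sin(2 * np.pi * (t - 15.0 / 4) / 15.0)

print("corr(T_internal, in-phase response): %+.2f"
      % np.corrcoef(T_internal, Q_internal_inphase)[0, 1])
print("corr(T_internal, lagged response):   %+.2f"
      % np.corrcoef(T_internal, Q_internal_lagged)[0, 1])
# The net heat uptake, Q_forced + Q_internal_lagged, then need not increase during an
# internally-driven slowdown in the way the simple in-phase version would suggest.
```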

I don’t really know if what they’re suggesting is plausible, or not. Given some of the discussion here, that the spatial pattern of the warming could influence the feedback response seems entirely reasonable. They also show that this out-of-phase response to internally-driven warming is consistent with what is seen in climate models.

That’s really all I was really going to say. I found it an interesting paper, as I had wondered why we weren’t see an increase in system heat uptake rate during the temperature slowdown, and the suggestion in the paper seems plausible. If anyone has any other views, though, feel free to make them in the comments. I should probably add that I’ve written this quite fast, and it’s getting late here, so apologies if I’ve made some kind if basic blunder in explaining this.

Posted in Climate change, Climate sensitivity, Science | 20 Comments

One graph to rule them all

Given that I’ve written a number of posts about the so-called “pause”, I thought I would mention a recent paper by Lewandowsky, Risbey & Oreskes called On the definition and identifiability of the alleged “hiatus” in global warming. I don’t want to say too much about it as there are various articles about it already.

Credit: Lewandowsky, Risbey & Oreskes (2015)

All I really wanted to do is mention the figure on the right. The top panel shows the linear trend as a function of vantage year (the end year, looking back in time) and the number of years included. The colour indicates the trend itself, and the dots mark those combinations of vantage year and years included for which the trend is statistically significantly different from 0 (no trend). The bottom panel essentially just illustrates this, showing which vantage years and years included have p-values less than 0.05 (the standard test of statistical significance). What it shows is that if you include fewer than 17 years, trends are typically not statistically significant, while if you include 17 years or more, they are almost always significant – for the time period considered, at least.

So, this figure essentially shows what's been pretty obvious to many; if you define the "pause" as the trend being statistically consistent with 0 (no trend), then if you consider periods that are too short, you will almost always find a "pause". Given the natural variability in the data, you typically need to consider periods of more than 17 years if you want to determine whether or not there is a non-zero trend. This is essentially what Richard Telford pointed out when he illustrated that Ross McKitrick's paper was really just describing a recipe for a hiatus.
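If you want to play with this yourself, here's a rough sketch of the kind of calculation involved (ordinary least squares with no correction for autocorrelation, so simpler than what a careful analysis would do, and run here on synthetic data standing in for an observed anomaly series):

```python
import numpy as np
from scipy import stats

def trends_back_from(vantage_year, years, temps, max_length=30):
    """OLS trend and p-value for windows of increasing length ending at vantage_year."""
    results = {}
    for n in range(5, max_length + 1):
        mask = (years > vantage_year - n) & (years <= vantage_year)
        if mask.sum() < n:
            continue
        slope, _, _, pval, _ = stats.linregress(years[mask], temps[mask])
        results[n] = (slope, pval)
    return results

# Synthetic stand-in data: a modest trend plus year-to-year noise (assumed values).
rng = np.random.default_rng(0)
yrs = np.arange(1970, 2015)
anoms = 0.017 * (yrs - 1970) + rng.normal(0.0, 0.1, yrs.size)

for n, (slope, pval) in sorted(trends_back_from(2014, yrs, anoms).items()):
    flag = "significant" if pval < 0.05 else "not significant"
    print(f"{n:2d} years: {slope * 10:+.3f} C/decade, p = {pval:.3f} ({flag})")
```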

So, I think the recent Lewandowsky, Risbey & Oreskes paper is simply illustrating nicely what has been pretty obvious to many people for a while now. This doesn't mean that the recent period isn't interesting from a model/observation perspective, or that interesting and unexpected things didn't happen. It's just clear that there hasn't really been any kind of "pause".

Posted in Climate change, ClimateBall, ENSO, Science | 74 Comments

Be hated?

I thought I would post this video (H/T Victor) by Veritasium who, last year, posted a video called 13 misconceptions about global warming. This new video is him discussing the fallout from his climate change video and, given the title, focuses on being hated. His suggestion is that maybe it's okay to be hated, as it means that you're probably sticking to your principles and saying things that challenge what other people might believe.

One problem I have with the "be hated" idea is that – as he himself says – simply being hated doesn't mean that you're doing something right. On the other hand, if you're going to stick your neck out and publicly express your views, it's probably hard to avoid encountering some who might appear to hate you. He also discusses whether or not one should engage with such people – "don't feed the trolls" – and suggests that you lose something by not responding. Willard might be pleased – ClimateBall™: the only losing move is not to play the game :-)

My own personal take is to simply be true to yourself. Be honest, say what you believe, take heed of criticism when it seems measured and thoughtful, and largely ignore those who seem incapable of not being vitriolic and unpleasant (I say “largely”, because sometimes there may be a grain of truth in their vitriol). Sadly, if you do choose to engage publicly about a contentious topic like climate change, it’s hard to avoid being insulted and potentially discovering people who appear to hate you because of what you say.

Personally, I think this is unfortunate, as it is an important topic and it would be good if more people engaged publicly. On the other hand, I can understand why many might choose not to. Certainly my advice to anyone who doesn't like the idea of being hated is to not get involved. I have found it quite difficult at times, because it is something I've never encountered before. You do, however, learn how to deal with it and how to mostly ignore those who seem incapable of being reasonable. One thing to be careful of is to not start responding in kind; easier said than done at times, though. Another thing to be careful of is starting to ignore all your critics; you can't be right all the time. I'd like to say that you eventually learn how to do this, but I'm not sure that's strictly true; it's a continual learning experience.

Posted in Climate change, ClimateBall, Personal, Science | 93 Comments

A (stupid) $100,000 bet

I was going to ignore this, but since this is a blog and I've nothing better to write about, I thought I would comment on Doug Keenan's $100,000 challenge. If you want some insights into Doug Keenan, Richard Telford's blog is a reasonable place to start. I've also written about some of his antics.

So, what is his big challenge? Well, it appears to be to identify (with 90% accuracy) which of his 1000 time series are simply random and which have had a trend added to them. Doing so would, according to Doug,

demonstrate, via statistical analysis, that the increase in global temperatures is probably not due to random natural variation.

Really? No, this is just silly. Doing so would simply demonstrate that one can identify which of a set of randomly generated time series have had a trend added to them. It will tell you absolutely nothing as to whether the increase in global temperature is due to random natural variation or not. If you want to establish this, you would need to base your analysis on what could cause changes in global temperature, and try to determine the most likely explanation for the observations. You cannot do it using statistical analysis alone. This should be obvious to anyone with a modicum of understanding of the basics of data analysis.
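To be clear about what "doing so" actually involves, here's a toy version of the task (not Keenan's actual series, which were generated from a statistical model specifically chosen to make the classification hard; the trend size, noise level and test below are all just assumptions). You generate noise series, add a trend to half of them, and try to classify them. Even if you succeed, you've learned something about a statistics exercise, not about what caused the observed warming.

```python
import numpy as np
from scipy import stats

# Toy version of the classification task; the numbers here are assumptions, and
# succeeding at this says nothing about the physical causes of observed warming.
rng = np.random.default_rng(42)
n_series, n_years = 1000, 135
years = np.arange(n_years)
has_trend = rng.random(n_series) < 0.5

correct = 0
for i in range(n_series):
    series = rng.normal(0.0, 0.1, n_years)           # white noise (assumed)
    if has_trend[i]:
        series = series + 0.01 * years               # add a 1 C/century trend (assumed)
    _, _, _, pval, _ = stats.linregress(years, series)
    guess = pval < 0.05                              # call it "trend added" if significant
    correct += int(guess == has_trend[i])

print(f"correctly classified: {correct / n_series:.1%}")
```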

We are also pretty certain that it’s not simply a random walk. You can read Tamino’s post if you want an explanation as to why it isn’t a random walk. We understand the energy flows pretty well, and the idea that global temperatures could randomly drift up, or down, is simply bizarre and represents a remarkable level of ignorance.

In principle, I should just ignore this as being silly, but it is actually particularly frustrating. It is very obviously complete nonsense. Anyone who promotes this, or sees it as somehow interesting or worthwhile, is either somewhat clueless or particularly dishonest. There isn't really a third option. This is the kind of thing about which there should be little disagreement. The failure to dismiss this challenge as silly is why I object to the idea that there aren't really people who deserve to be labelled as climate science deniers, since it is the epitome of climate science denial. It's why I think we need a better class of climate "sceptic".

There are, however, some conclusions one might be able to draw from this whole episode:

  • Doug Keenan has promoted this basic idea on numerous occasions and, on numerous occasions, people have explained why it is nonsense. Maybe Doug Keenan is simply particularly dense.
  • Possibly Doug Keenan thinks that all mainstream climate scientists are particularly dense and that they've been doing something fundamentally wrong for decades. However, given that they haven't actually been doing what he seems to think they've been doing, this means that – at best – he's simply savaging a strawman, which returns us to the point above.
  • Possibly Doug Keenan knows exactly what he's doing, but thinks that all the climate science deniers to whom he's trying to appeal are simply particularly dense. I might be insulted if I were them, since it's pretty obviously nonsense. Of course, those who think this is a good idea would probably deny being climate science deniers, putting them into some kind of infinite loop of denial.
  • Some combination of the above.

Whatever conclusion one might draw, it does seem to put Doug Keenan’s comment into a different light. When he said

the best time-series analysts tend to be in finance. Time-series analysts in finance generally get paid 5–25 times as much as those in academia; so analysts in finance do naturally tend to be more skillful than those in academia—though there are exceptions.

I assumed he meant that there were exceptions in academia. I hadn’t appreciated that he might have been talking about himself. That seems more plausible, given recent evidence.

Posted in Climate change, ClimateBall, Comedy, Science | 99 Comments

Feedbacks, climate sensitivity and the limits of linear models

Since I’ve discussed climate sensitivity here quite a lot, I thought I would highlight a recent paper, by Knutti & Rugenstein, called feedbacks, climate sensitivity and the limits of linear models. It’s a very nice, and readable, summary and – as the title suggests – focuses somewhat on the linear models that are a favourite of Nic Lewis, for example. The basic message, as the abstract says, is

Our results suggest that the state- and forcing-dependency of feedbacks are probably not appreciated enough, and not considered appropriately in many studies. A non-constant feedback parameter likely explains some of the differences in estimates of equilibrium climate sensitivity from different methods and types of data. Clarifying the value and applicability of the linear forcing feedback framework and a better quantification of feedbacks on various timescales and spatial scales remains a high priority in order to better understand past and predict future changes in the climate system.

It also says:

…it becomes clear that ECS and TCR are rather limited characterizations of a much larger and interactive system. Other feedbacks such as vegetation, chemistry or land ice are now included in some climate models as their relevance is better understood. Some feedbacks operate on very long timescales that are determined by the internal dynamics of the system, and their response is not proportional to temperature.

which reminded me of this Michael Tobis comment, where he essentially argues that focusing on a simple metric, like climate sensitivity, ignores a great deal of important complexity. Climate sensitivity might give us some broad brush idea of the magnitude of the change, but it tells us little of what will happen where we live, which will – of course – differ from place to place.

The interesting issue at the moment, however, is that different methods for estimating climate sensitivity can produce quite different results. For example:

… some but not all recent studies on the twentieth-century warming find rather low ECS values (median at or less than 2°C) [17–19,21]. Climate models show a large spread in ECS, with the spread half as big as the actual value. The highest uncertainty can be attributed to the cloud feedbacks (traceable to certain cloud types and regions), and the lapse rate feedback [50–53]. But all comprehensive climate models indicate sensitivities above 2°C, and those that simulate the present-day climate best [54–57] even point to a best estimate of ECS in the range of 3–4.5°C.

The paper then goes on to discuss the basic linear model, largely represented as

N = F - \lambda \Delta T,

where N is the system heat uptake rate, F is the change in forcing, \Delta T is the change in temperature, and \lambda is the feedback parameter. These linear models assume that \lambda is constant, but the paper discusses in quite some detail why this may not be – and probably isn't – the case, and says:

it has been suggested that the non-constancy in the global \lambda is caused by the evolving spatial surface temperature pattern, which (through \Delta T) enhances certain local feedbacks at different times [62]. Further, it has been shown that the evolving sea surface temperature pattern alone could explain the time or state dependency of \lambda
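To see why a non-constant \lambda matters for the estimates, here's a small sketch of what happens if you apply the standard energy-budget expression, ECS = F_{2x} \Delta T / (\Delta F - \Delta N), over a period in which the effective feedback parameter is larger than its eventual equilibrium value (the numbers below are purely illustrative, not taken from any particular study):

```python
F2x = 3.7                               # forcing from doubled CO2, W/m^2 (standard value)

# Suppose the effective feedback parameter weakens as the warming pattern evolves
# (values here are illustrative assumptions).
lam_early, lam_late = 1.5, 1.0          # W/m^2/K
ECS_true = F2x / lam_late               # equilibrium response set by the late-time lambda

# Energy-budget estimate from an early period, ECS = F2x * dT / (dF - dN),
# which effectively recovers the early-period lambda instead.
dF, dT = 2.0, 0.9                       # assumed forcing and temperature changes
dN = dF - lam_early * dT                # implied heat uptake over that period
ECS_estimated = F2x * dT / (dF - dN)

print(f"true ECS (late-time lambda):     {ECS_true:.1f} C")
print(f"energy-budget estimate (early):  {ECS_estimated:.1f} C")
```

With these made-up numbers the energy-budget estimate comes out around 2.5°C while the equilibrium value is 3.7°C; qualitatively, that's the kind of gap the paper suggests a non-constant \lambda could help explain.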

Anyway, I’ve actually said more than I meant to. The paper itself is very accessible, so I would recommend those who are interested in this, and who would like to know more, to go ahead and read it.

Posted in Climate change, Climate sensitivity, ClimateBall, Science | 11 Comments

Richard Tol’s climate sensitivity estimate

In his interview with Roger Harrabin, Richard Tol said something quite interesting. He claimed:

RT: The very start of my career was about trying to show that CO2 and other greenhouse gases cause climate change. We were one of the first to establish that on a satisfactory statistical basis.

Quite a remarkable claim. So, I went and looked at Richard's publication list and found a paper called Greenhouse Statistics – Time Series Analysis. It really is what it says in the title: simply a time series analysis. The approach is to compare various time series with the known temperature record – some with a CO2 term, some without, some with an additional linear term – and find which correlates best.

One obvious problem is, as it says in the Conclusion:

many climatologists classify this type of results as ‘correlation calculations’, which refers to the many wrong and misleading results obtained by this type of analysis.

So, yes, it might show that a time series with a CO2 term correlates best with the observed temperature time series, but claiming that this shows that CO2 and other greenhouse gases cause climate change seems a bit strong. However, they do state:

We have casted the hypothesis that the increase in atmospheric greenhouse gases causes global warming in a sophisticatedly simple model which enables efficient statistical testing; ….. The hypothesis of no influence is rejected. We have not found a proof or an explanation of the phenomenon though we can describe it. We have shown with much statistical care that the data are in line with the climatological hypothesis; the combination of the econometric techniques and climatological theory confirmed it in sign;

There are a couple of interesting issues relating to this. On an earlier post there was a rather acrimonious and contentious exchange between myself and Richard, mainly as a result of Richard defending Doug Keenan. Doug Keenan's claim to fame (in the climate arena, at least) is making accusations of fraud and claiming that the trend in the surface temperature dataset is not statistically significant. Richard said things like

The context is whether or not there is a statistically significant trend in the temperature record. That is a time series question if there ever was one.


He is correct in this case

At no point did Richard mention that he had published a time series analysis of his own, which he claims shows that CO2 and other greenhouse gases cause climate change. That would seem rather significant, given the context of the thread about Keenan. I wonder why he failed to mention this.

There is, however, something else interesting about Richard's paper; it makes estimates of climate sensitivity. For a CO2 rise of 300 ppm (which I assume is intended to be about a doubling since pre-industrial times) it estimates a 95% range of 2.99°C to 7.02°C for one model, and a 95% range of 3.4°C to 5.7°C for the other. Hmmm, quite a bit higher than the IPCC range, even at that time.

Credit: Tol & de Vos (1993)

However, in the paper they then argue that they used CO2, rather than CO2-equivalent, which rose more than CO2 only, and that they should therefore reduce their climate sensitivity estimates accordingly. They then produced the table on the left, which shows that their estimate for climate sensitivity hovers around 3°C.

Now, there are a few things to bear in mind. Their model is simply a statistical time series model, with no real physics. They do introduce a lag of 20 years between the CO2 time series and the temperature time series, but this still means that their climate sensitivity estimate is probably something like a mixture of an Effective Climate Sensitivity and a Transient Climate Response. Also, even though CO2-equivalent rose more than CO2 alone, when you include all the anthropogenic forcings (aerosols, for example, offset much of the non-CO2 greenhouse forcing), the net anthropogenic effect is actually quite close to the CO2-only influence. Hence, maybe they shouldn't have adjusted their estimates down.
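Just to illustrate the kind of conversion their estimate involves (this is not their actual statistical model, which was considerably more careful; the CO2 series, noise level and "true" sensitivity below are synthetic), you can regress temperature on log CO2, lagged by 20 years as the post mentions, and rescale the coefficient to degrees per doubling:

```python
import numpy as np
from scipy import stats

lag = 20                                              # years, as mentioned in the post
yrs = np.arange(1880, 2015)
co2 = 280.0 * np.exp(0.004 * (yrs - 1880))            # assumed smooth CO2 rise, ppm
rng = np.random.default_rng(1)
true_per_doubling = 3.0                               # synthetic "truth", degrees C (assumed)
temps = (true_per_doubling / np.log(2)) * np.log(co2 / 280.0) + rng.normal(0, 0.1, yrs.size)

x = np.log(co2[:-lag])                                # log CO2, lagged by 20 years
y = temps[lag:]                                       # temperature 20 years later
slope, intercept, r, pval, stderr = stats.linregress(x, y)

# A unit increase in log(CO2) corresponds to `slope` degrees; a doubling is log(2) of that.
print(f"estimated sensitivity: {slope * np.log(2):.1f} C per doubling")
```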

Either way, though, we can now state that Richard Tol's published estimate for climate sensitivity – which is based on a sophisticated time series analysis – suggests climate sensitivity is probably around 3°C, maybe even higher. Who'd thunk it? ;-)

Posted in Climate change, Climate sensitivity, ClimateBall, Comedy, Satire, Science | 62 Comments