Since I’m sitting at the station waiting for a train that is delayed 40 minutes, I thought I would post on something that I’ve been thinking about for the last couple of days. There is quite a lot of discussion about open science. The idea is that we should make everything available: data, code, papers, etc. Fundamentally this is a good thing, and so what I’m about to say isn’t an argument against it. However, there are almost always unintended consequences, and open science is no exception, as is nicely illustrated by the recent furore over the role of El Niño in surface warming.
It all started with David Whitehouse presenting an analysis on the Global Warming Policy Foundation (GWPF) site showing that temperatures have dropped substantially in the last month or so. The argument being made is that this indicates that most of the recent record warming was due to the El Niño, despite what has been claimed by climate scientists. This was then picked up by David Rose in the Mail Online, who claimed that “stunning new data” indicates that El Niño drove record highs in global temperature, suggesting the rise may not be down to man-made emissions. This was then followed by Ross Clark in the Spectator, who asked “global temperatures have fallen – so why isn’t it being reported?”. There was also the standard Delingpole response, but I won’t bother linking to that.
So, what is the issue? Let’s start at the beginning. The analysis by climate scientists suggests that although the El Niño clearly contributed to recent global surface temperature records, the contribution was small enough that recent years would still have been records without it. The claim being made now is that the recent large falls in temperature show that this is not the case.
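The arithmetic behind the climate scientists’ claim can be sketched very simply. To be clear, the anomaly values and the assumed El Niño contribution below are rough, stand-in numbers for illustration, not the actual published analysis:

```python
# Illustrative only: approximate global surface temperature anomalies
# (in deg C, relative to a mid-20th-century baseline); the assumed
# El Nino boost is a rough stand-in, not a published estimate.
annual_anomaly = {
    2010: 0.72,  # previous record before the recent run
    2014: 0.74,
    2015: 0.87,
    2016: 0.99,
}

# Estimates of the El Nino boost to 2015/2016 are very roughly
# 0.1-0.2 deg C; take the high end to be conservative.
assumed_enso_boost = 0.2

adjusted_2016 = annual_anomaly[2016] - assumed_enso_boost
prior_record = max(v for y, v in annual_anomaly.items() if y < 2015)

# Even after removing the assumed El Nino contribution, 2016 still
# exceeds the pre-El-Nino record.
print(adjusted_2016, prior_record, adjusted_2016 > prior_record)
```

In other words, subtracting even a generous El Niño contribution still leaves the recent years above the previous record, which is what the “would still have been records” claim amounts to.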
Given that they have data showing a sudden drop in temperatures, why isn’t this analysis valid? Well, the first problem is simply that they’re looking at satellite data: the temperature is for the lower troposphere (which extends from just above the surface to about 10km). You can’t refute a claim about global surface temperatures using data that doesn’t measure the surface.
The next problem is also pretty obvious: the data they’re using is land-only; it is intended to be lower tropospheric temperature over land. You can’t refute a claim about global temperatures using data that isn’t global. The next issue is somewhat subtler. The data they used was the RSS land-only TLT version 3.3. RSS have upgraded some of their datasets to version 4.0, but say
The lower tropospheric (TLT) temperatures have not yet been updated at this time and remain V3.3. The V3.3 TLT data suffer from the same problems with the adjustment for drifting measurement times that led us to update the TMT dataset. V3.3 TLT data should be used with caution.
So, the TLT data has not yet been updated, suffers from a problem with the adjustment for drifting measurement times, and should be used with caution. The TTT data, which has been updated to version 4, also does not show the same kind of sudden drop in temperature as shown in the TLT data.
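The land-only point is worth dwelling on, because land temperatures swing more than ocean temperatures, so a land-only average will exaggerate short-term dips. Here’s a minimal sketch with entirely synthetic data (the grid, the land fraction, and the cooling values are all made up for illustration) showing an area-weighted land-only mean overstating a global dip:

```python
import numpy as np

# Sketch, not real data: a coarse lat-lon grid where we place ~29%
# of cells as "land" at random, to mimic Earth's land fraction.
rng = np.random.default_rng(0)
nlat, nlon = 36, 72
lats = np.linspace(-87.5, 87.5, nlat)
is_land = rng.random((nlat, nlon)) < 0.29

# Area weights: a lat-lon cell's area scales with cos(latitude).
w = np.cos(np.radians(lats))[:, None] * np.ones((nlat, nlon))

# Suppose a short-term cooling event: land cools 0.6 deg C, ocean only
# 0.2 deg C (land responds more strongly to swings like ENSO).
anomaly = np.where(is_land, -0.6, -0.2)

global_mean = np.average(anomaly, weights=w)
land_mean = np.average(anomaly[is_land], weights=w[is_land])

print(round(global_mean, 2), round(land_mean, 2))
```

The land-only mean comes out at the full −0.6, while the properly area-weighted global mean is much smaller in magnitude, because the ocean majority of the surface barely moved. A land-only series can therefore show a “substantial drop” that the global series does not.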
I guess I’ve probably made the point I was going to make. If you’re going to present an analysis of some data, you need to know what that data actually represents and you need to know if there are reasons why that data should be used with caution, or if there are reasons why that data might not be appropriate. You can’t simply get data, plot graph, draw conclusion; it typically takes more work than that. There is a reasonably simple rule of thumb that is worth considering. If your options are a global conspiracy to hide something that this data indicates is clearly present, or you’ve misunderstood what the data is really indicating, it’s often best to go with the latter, rather than the former. Your mileage may vary, of course.