This is a guest post by AnOilMan
No, but it sure is an enticing title.
It was 4 years ago when I first started getting concerned about what climate change skeptics were saying. This was the first article foisted on me: The smoking gun at Darwin zero. It’s an article by Willis Eschenbach which claims that the adjustments made to temperature records – also called homogenisation – are evidence that scientists are fudging the results to further a political agenda.
At the time, I reasoned that even if the data was fudged badly, it couldn’t affect the overall results because there are thousands of measurement stations and Willis’ article refers to just one station in Darwin. I felt that it claimed there was something nefarious going on without actually proving it. That’s when I decided that WattsUpWithThat was a waste of time.
“Consequently, each station record was corrected for discontinuities caused by changes in site location and exposure, and other known data problems (Peterson et al. 1998). Such discontinuities can be as large, or larger than, real temperature changes (e.g. the approximate 1°C drop in maximum temperatures associated with the switch to Stevenson screen exposure (Nicholls et al. 1996)) and consequently confound the true long term trend.”
It was the 1940s when they installed the Stevenson screens for weather stations, and if you look at Willis’ data, you can see a 1°C discontinuity quite clearly. Hardly something to say Yikes over …
On another note, some years later, the fellow who showed me Darwin Ground Zero told me that he thought it was all a conspiracy to fake the data, just like UFO landings. He went on to tell me that only special people like him could spot it. I laughed …
So what does Temperature Data contain?
In a word, ‘errors’. A good temperature dataset is one that reflects changes in climate only, but real records contain variations caused by non-climatic factors: in the case of Darwin, the installation of Stevenson screen shelters to house the measurement instruments. Historically, weather stations may also have had their IDs reused from other stations, been moved, or had their measurement apparatus upgraded. There are not always logs of these changes, so the data simply looks different at those points. Thus, in order to detect long-term trends in temperature datasets, non-climatic influences on temperature must be removed.
Victor Venema discusses this over at Variable Variability.
This article shows an example of why homogenization needs to be done. It’s not a perfect example, since it’s probably from a different location, but you get the point. It shows what appears to be a cold spot in the middle of the dataset, as though there had been a sudden local refrigeration event, like so:
How can they determine that the data is indeed wrong and in need of repair? It’s simpler than it sounds: as data and grid quality improve, it becomes possible to see just how inhomogeneous a given location is. For instance, precipitation used to be measured on the roofs of buildings, until someone noticed that rooftop gauges reported precipitation levels about 10% lower than measurements taken on the ground. If you move the station from the roof to the ground, there’s a jump in the dataset.
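To make that concrete, here’s a minimal sketch in Python (my own toy example with made-up numbers, and much cruder than the statistical tests climate scientists actually use, such as SNHT) of how a step like that roof-to-ground jump shows up and can be located in a single record:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 40 years of annual anomalies with a station change
# after year 20 that shifts readings down by 1.0 (a non-climatic step).
years = np.arange(1950, 1990)
true_climate = 0.01 * (years - years[0])           # small real trend
raw = true_climate + rng.normal(0, 0.2, years.size)
raw[20:] -= 1.0                                     # the artificial discontinuity

def find_break(series):
    """Return the index that best splits the series into two segments
    with different means (a crude change-point test, not SNHT)."""
    best_i, best_score = None, 0.0
    for i in range(5, series.size - 5):             # avoid tiny end segments
        score = abs(series[:i].mean() - series[i:].mean())
        if score > best_score:
            best_i, best_score = i, score
    return best_i, best_score

i, jump = find_break(raw)
print(f"Suspected break at {years[i]}, size ~{jump:.2f} °C")
```

A real test would also check statistical significance; the point here is simply that a non-climatic step stands out against the noise.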
Records of changes in measurement stations really help in making these determinations as is the case for ‘Ground Zero Darwin’ … 🙂
What is done to clean the data?
Homogenization means adjusting a station’s temperatures based on what other stations are measuring. Two nearby stations ought to show similar climate trends. If these trends diverge from each other, then this probably indicates the existence of a non-climatic influence. The interesting part is figuring out which station has been affected by it.
It’s relatively easy to determine how accurate particular stations are relative to other stations using modern data, which is known to be more accurate and has denser coverage. We can then work back in time through the older data and re-evaluate the measurements. The point is that you know approximately what a measurement is supposed to be, and thus you can identify outliers.
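Here’s another toy sketch of that pairwise idea, again in Python with invented numbers rather than real station data (the break point and jump size are made up): difference a candidate station against a nearby neighbour, look for a step in the difference series, and shift the offending segment so the two pieces line up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pair of nearby stations that share the same regional climate.
n = 60
climate = np.cumsum(rng.normal(0.01, 0.1, n))       # shared regional signal
station_a = climate + rng.normal(0, 0.1, n)
station_b = climate + rng.normal(0, 0.1, n)
station_a[35:] += 0.8                               # station A relocated: spurious jump

# The difference series cancels the shared climate; what remains is
# non-climatic, so a step here points at one station's site or equipment change.
diff = station_a - station_b
scores = [abs(diff[:i].mean() - diff[i:].mean()) for i in range(5, n - 5)]
brk = int(np.argmax(scores)) + 5

# Shift the post-break segment of the affected station so the two pieces agree.
jump = diff[brk:].mean() - diff[:brk].mean()
station_a_homog = station_a.copy()
station_a_homog[brk:] -= jump

print(f"Break at index {brk}, estimated jump {jump:.2f} °C")
```

Real pairwise methods compare a station against many neighbours at once and test candidate breaks statistically; this is only the bare bones of the idea.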
Victor Venema has a how-to on Statistical homogenisation for dummies. Of course, being a climate scientist himself, he may be biased. 🙂
Is this clear evidence for Willis’ assertion: “clumsy fingerprints of someone messing with the data Egyptian style … they are indisputable evidence that the “homogenized” data has been changed to fit someone’s preconceptions about whether the earth is warming.” Hardly; that statement is supposition.
But hey, what does this do to the global temperature measurement?
Not much. Here’s a great article at Skeptical Science comparing raw data to adjusted data. You can see a side-by-side comparison of global adjusted data with global unadjusted data in Figure 5:
Interesting … is the homogeneity adjustment artificially creating the ‘pause’ by amplifying El Nino? Can anyone answer that?
BEST uses automated statistical methods to arrive at similar conclusions.
Here’s a good breakdown of the computational methods used by BEST: Berkeley Earth Temperature Averaging Process. When it comes to discontinuities in the data, they don’t homogenize; they simply break the time series at the discontinuity and evaluate the temperature sequences separately. Instead of one trend line, they generate two, one for each time interval. (What is important is that they are trying to measure station temperature trends, not absolute temperatures. Look up ‘scalpel’ in the BEST paper.)
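As a rough illustration of that scalpel idea (my own toy sketch with made-up numbers, not BEST’s actual code or weighting scheme), you can cut a record at a known break and fit a trend to each piece on its own:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical station record with a documented break (e.g. a station move) in 1980.
years = np.arange(1950, 2010)
temps = 0.02 * (years - 1950) + rng.normal(0, 0.15, years.size)
temps[years >= 1980] -= 0.6                          # the discontinuity
break_year = 1980

# Scalpel-style handling (sketch): instead of adjusting the data, cut the
# record at the break and fit a trend to each piece independently.
for mask in [(years < break_year), (years >= break_year)]:
    slope, intercept = np.polyfit(years[mask], temps[mask], 1)
    print(f"{years[mask][0]}-{years[mask][-1]}: trend {slope * 10:.2f} °C/decade")
```

As I understand it, the two pieces then go into the averaging as if they were separate stations, so the trend information is kept without anyone having to decide how big the adjustment should be.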
AnOilMan is an electrical engineer who works in oil and gas.