Is the GWPF ‘avin a larf – again?

I’ve been doing quite a lot of cycling during our holiday, and today cycled over Duke’s Pass (a classic Scottish cycling route, apparently) and then cycled back around the Loch on the banks of which we’re staying. Very nice, but very tiring. Given that I need a bit of a break, I thought I might highlight the Global Warming Policy Foundation’s (GWPF’s) latest illustrative series. A previous one was how to whine like a 7-year-old. This one appears to have a running title of how to lack irony and self-awareness. After years of highlighting anyone who suggested that global warming stopped in 1997/1998, they finally say

When estimating trends, especially for such short periods in a noisy data set such as global surface temperatures, care must be taken with start and end points as they can affect the trend obtained.

Of course, the only reason they’ve done so is because Karl et al. (2015) have suggested that the trend since 1998 may now exceed the 2σ confidence interval (i.e., it might be statistically significant, for those who think that’s appropriate in this circumstance). I guess it’s good that the GWPF now recognise that one should take care when estimating trends over short time intervals in noisy data. It would have been much better if they’d recognised this when the trend did not exceed the 2σ confidence interval, and they were using this to claim that global warming had stopped.

The GWPF report also includes a discussion on statistical significance by Professor Gordon Hughes, from the University of Edinburgh (!!!). The only time I’ve encountered Gordon Hughes was when Nic Lewis repeated some of what he’d said about the Marotzke & Forster paper. In that instance it was clear that Gordon Hughes did not understand the physical sciences particularly well. His most recent foray into this topic would seem to further confirm this lack of understanding. His basic argument seems to be that to estimate the variability in 17-year trends, one should use all possible 17-year trends. This might be true if all 17-year time intervals were essentially equivalent, apart from some kind of random variability, but they’re not. It’s true that the variability for different 17-year time intervals isn’t the same, but that doesn’t change the fact that the variability in a particular 17-year time interval is determined by the data for that time interval, not by the data from a large sample of other 17-year time intervals.

If anything, what Gordon Hughes has illustrated is that it is quite likely that variability on 17-year timescales can mask an anthropogenic/forced trend. That’s why one should be careful of claiming that global warming has stopped if the time interval is short. It’s certainly not a reason for claiming that it has, as the GWPF has done, time and time again. In fact, one problem with over-promoting the Karl et al. result is that it’s not all that surprising that variability could have produced a slowdown in surface warming.

Anyway, that’s all I was going to say. There’s more that I could say, but the Wi-Fi here is very slow and I’m not even sure if this will actually post. It’s also why I haven’t included many links in this post. I’m also struggling to access the comments, so apologies if I don’t respond, and let’s keep everything civil and polite.


36 Responses to Is the GWPF ‘avin a larf – again?

  1. blied7656 says:

    You always manage to thread intricate lines of thought into a narrative I can follow, and provide enough context that I can follow along even if I don’t know the references.

  2. Magma says:

    When estimating trends, especially for such short periods in a noisy data set such as global surface temperatures, care must be taken with start and end points as they can affect the trend obtained. The GWPF (!!!!!)

    In 1973 comedian Tom Lehrer remarked that Kissinger’s Nobel peace prize “made political satire obsolete”. Scientific(ish) irony held on for another 42 years.

  3. I see the GWPF are still taking care with start and end points so they can affect the trend obtained.

  4. Tom Curtis says:

    Could some mathematically adept person lay out for me what Hughes has done? From his chart he has determined the 90% confidence intervals for the full period to be +0.15/-0.23. I do not see how he could obtain so large an asymmetry using Karl’s method, even allowing for the fact that he used all the data.

    For context, using Tamino’s method of determining the uncertainty, but using the full 1880-2014 period to determine the autocorrelation, gives a 2 sigma uncertainty of +/- 0.156 C/decade. For a 90% confidence interval, that is approximately +/- 0.129, but that does not include a Monte Carlo determination of the effect of the error in determining GMST. Allowing for that, Hughes’ upper bound may be correct, but his much larger lower bound would be surprising (not to mention entirely mysterious to me).
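
    As a quick back-of-the-envelope check on that conversion (assuming the quoted 2σ interval corresponds to a Gaussian sampling distribution):

    \[
    \sigma \approx \frac{0.156}{2} = 0.078\ \mathrm{^{\circ}C/decade}, \qquad
    90\%\ \mathrm{CI} \approx \pm 1.645\,\sigma \approx \pm 0.128\ \mathrm{^{\circ}C/decade},
    \]

    which is consistent with the ±0.129 quoted above.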

  5. Tom Curtis says:

    I should have added to the previous comment that it was entirely remiss of Hughes not to point out (given the setting of the post) that the null hypothesis for the claim that there was a “pause” in the trend is that the trend continued as before, and that the ongoing trend therefore needs to be excluded statistically before a claim of a pause amounts to anything more than an unconfirmed hypothesis (and is really just politics masquerading as pseudoscience).

  6. matt says:

    Are you there, Richard? It’s me, matt.

    Can you fill us in on the process of the GWPF and its academic advisory council? From ‘everyone gets sent the material to be published and there is unanimous agreement’ to ‘it doesn’t get sent to anyone on the council before publication’, there are a lot of alternatives. To what extent does the academic advisory council actually advise? And does it advise both arms (the GWPFoundation and the GWPForum)?

    Thanks

  7. harrytwinotter says:

    This hiatus thing reminds me of the IPCC 2001 Lamb chart Medieval Warming Period hoopla all over again.

    The IPCC says there might be a hiatus in global warming, allowing for the uncertainty of such a short trend, start and end dates etc. New evidence then says the hiatus didn’t really exist. But the climate change deniers say, no we like the hiatus, how dare you take it away from us?

    I can smell the cognitive dissonance from here…

  8. Tom,
    Yes, I wondered the same thing myself. I don’t see how Hughes can have got such an asymmetric confidence interval. I also agree with your second comment and had meant to make that point myself. That someone can write on statistical significance without making that point is indeed remiss.

  9. Lars Karlsson says:

    The part that starts with “Think of an interested observer at the end of 1998.” is just plain weird. It refers to Figure 2, where trend lines (central estimate, upper and lower limits of the c.i.) are drawn starting from the value for 1998. To me it seems to be an irrelevant argument.

  10. Lars,
    Yes, I thought the same.

    I also noticed this comment at the end, which probably tells you all you need to know.

    There is no dispute that temperatures have increased since 1950. Subject to any qualifications about the ways in which global temperatures are measured, that is a matter of fact. On the other hand, there is no reasonable statistical certainty that the increase signifies an underlying trend which will lead to an increase in global temperatures of 2°C or more by the middle or end of the 21st century unless drastic policies are adopted. Climate models may support such a conclusion but the statistical evidence on global temperatures does not.

    What happens by the mid to end of the 21st century will depend on what we do from now till then, not on what has happened in the past. That we could emit as much CO2 in the next 40-50 years as we have in the last 130 suggests that another ~1°C of warming by the middle of the 21st century is clearly not implausible, and that by the end of the 21st century it is almost unavoidable.

  11. dikranmarsupial says:

    “If, as a separate exercise, we estimate a trend that is constrained to start from the actual temperature in 1998, the slope is 0.016 +/- 0.068 °C per decade using the Karl et al method – ”

    This is utterly bogus; there is no statistical or physical justification for doing so, and it is bound to systematically underestimate the trend even more than the standard least-squares regression approach, given that the trend is calculated to start with a cherry-picked El Niño spike. Do Karl et al. constrain the trend to go through any of the datapoints? If not, then it is disingenuous to suggest the trend is estimated “using the Karl et al method”.

    “It turns out that the residual for 1998 – i.e. the difference between the actual and the predicted values for 1998 – is the largest of all of the residuals for the 17-year period. There is no surprise about this: we know now that 1998 was an extreme outlier in the global temperature series.”

    Great, so constrain the trend to go through a point that you know is an “extreme outlier”; well, that won’t be misleading at all! This is exactly the thing that you shouldn’t put in a report on understanding statistical tests!

    “Think of an interested observer at the end of 1998. She is told by Karl et al that the trend increase in temperatures from 1998 to 2014 will be 0.106 +/- 0.058 °C per decade. So, in Figure 2, she draws the solid black line as her forecast of the average global temperatures and draws the two dashed blue lines to define the range of outcomes that would fall within the 90% confidence range for the trends.”

    Do Karl et al. actually do that? I rather doubt it; it is the way Monckton misrepresents climate projections for WUWT.

    Estimating the variance using previous 17-year trends would be a reasonable idea if the data were uncorrelated and if a linear trend model were satisfactory for all 17-year periods, but sadly neither of these things is likely to be true.
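
    To illustrate why pinning a trend to the 1998 value pulls the estimate down, here is a minimal sketch on synthetic data (this is not Karl et al’s actual method or data; the trend, noise level and spike size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

years = np.arange(1998, 2015)            # 17 annual values, 1998-2014
t = years - years[0]

# Synthetic anomalies: a modest underlying trend plus noise, with an
# extra spike added to the first point to mimic the 1998 El Nino.
true_trend = 0.01                        # degrees C per year
y = true_trend * t + rng.normal(0.0, 0.08, t.size)
y[0] += 0.15                             # the "extreme outlier" start point

# Ordinary least-squares trend (slope and intercept both free).
ols_slope = np.polyfit(t, y, 1)[0]

# Trend forced to pass through the first data point:
# minimise sum((y - y[0] - b*t)^2)  =>  b = sum(t*(y - y[0])) / sum(t^2)
pinned_slope = np.sum(t * (y - y[0])) / np.sum(t ** 2)

print(f"OLS trend:    {10 * ols_slope:+.3f} C/decade")
print(f"Pinned trend: {10 * pinned_slope:+.3f} C/decade")
```

    Because the line is anchored to an inflated starting value, the pinned slope typically comes out well below the ordinary least-squares slope, which is the bias being described above.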

  12. Tom Curtis says:

    I’ll just note David Whitehouse’s bizarre disconnect from reality. He claims to have used the same properties as the Karl et al data from 1998-2014 (what properties is not specified – is it just autocorrelation, or something else as well?) to perform a Monte Carlo analysis (again unspecified – does he generate random data with a base trend equal to that of Karl et al, or to zero, or to some other value?). From that he determines an error range. The key points are that the error range for the 1998-2014 trend is +0.10/-0.08, while the Karl et al trend is 0.106 C per decade; and for 2000 to 2014 he has an error range of +0.11/-0.09, while Karl et al’s trend is 0.116 C per decade.

    Whitehouse declaims:

    “My result indicates that the trends reported by Karl et al 2015 – which were only ever marginally significant at the 10% level – are much less significant. Comparing their trends – 0.086°C per decade for 1998-2012, 0.106°C per decade for 1998-2014 and 0.116°C per decade for 2000-2014 – with the outcome of the Monte Carlo simulation revealed positive trends between 0.08-0.12°C per decade 1,133 times out of the 10,000 simulations. We conclude that, irrespective of their quoted small errors in their trends, none of them are robust or provide evidence that the “hiatus” does not exist”

    His evidence that the Karl et al trends are not significant turns out to be the fact that they are included in the 88.67 to 100% range of the statistical distribution. That is, they are included in a group of trends, some of which are statistically significant and some of which are not. That is the claim in the second quoted sentence, and as a statistical test it is complete nonsense.

    Indeed, rounding to two significant figures to match the uncertainty range, the Karl et al 1998-2014 trend is 0.11 C per decade, comfortably exceeding the statistical range generated by Whitehouse. The 2000-2014 trend is 0.12 C per decade, again comfortably exceeding the uncertainty interval given. So actually looking at the trends and confidence intervals, based on Whitehouse’s analysis Karl et al did find statistically significant trends, a fact he tries to obscure with that bizarre claim in the second quoted sentence.

  13. They say:


    “When estimating trends, especially for such short periods in a noisy data set such as global surface temperatures, …”

    That data is not noisy. It’s composed of individual components that can be modelled as deterministic behaviors. Like Curry, they keep up with the charade of the uncertainty monster.

  14. WHT,
    Yes, I agree, the noise in this data isn’t uncorrelated random noise, but variability due to things like ENSO events, volcanoes, solar variability, etc. Hence, as Dikran is pointing out, arguing that the variance should be based on using all previous 17-year time intervals is clearly wrong.

  15. Tom,
    I don’t even fully understand what Whitehouse has done there. It seems that he has generated random data with a mean trend of zero, and then used that to argue that the Karl et al. trend is not significant. However, that seems so bizarre I can’t really believe that that is what he’s actually done.

  16. Curry’s hubby Peter Webster has been studying ENSO long enough that you would think he would have it figured out. That whole CSU contingent of Salby, Curry, Webster, Gray have really been a bust when it comes to ENSO. The feds should ask for their money back.

  17. verytallguy says:

    …today cycled over Duke’s Pass…

    Good effort! Bealach na Ba surely awaits…

    http://www.theguardian.com/travel/2012/jul/20/britains-top-10-toughest-cycle-climbs

  18. Given that Duke’s Pass doesn’t even make the top 10, I suspect that it doesn’t.

  19. Tom,
    Does the last figure in the GWPF post maybe illustrate what Hughes has done? He seems to have drawn the trend from the actual temperature at the beginning of the time interval, rather than done a best-fit trend through the data. As such, if the starting temperature has a tendency to be high, then the confidence interval will end up negatively skewed. Having said that, I’m still not quite sure how he can have done that since, even if he fixes the starting point, he should then have drawn the trend through the data. I guess one possibility is that he’s computed the trend correctly, but then shifted it so that it starts at the actual temperature at the beginning of the time interval. That, however, would seem rather bizarre.

  20. Tom Curtis says:

    Anders, I assume he has generated random data with a mean trend of zero and with the same autocorrelation as the Karl et al data. If so, the resulting spread could reasonably be supposed to model random variation with zero underlying trend; from which it would follow that trends lying outside the 95% range of the random data would not be random, i.e., would have a non-zero underlying trend. Unfortunately for him, if that is what he has done, the Karl et al data do lie outside his random spread, and he has inadvertently used an alternative method to show that it is statistically significant. But it is far from clear that that is what he has done.

  21. Tom Curtis says:

    Anders, the last figure in the GWPF post relates to the argument that Dikran dissects above.

    Frankly, I am astonished that a professor at a top university would put his name to such rubbish. If, in an academic career, I had to rely on a paper published by somebody willing to do that, I would make a point of clearly highlighting those parts of the paper that have not been independently confirmed, draw attention to the public rubbish, and make plain that I do not trust their work unverified because of the clearly faulty reasoning they are prepared to employ. If they are willing to use such rubbish in public discourse, the assumption must be that they are likewise willing to do so in academic work, which makes that work of zero value until independently verified.

  22. jgnfld says:

    Another key piece of misinformation on the graph, an old bromide in the denial literature I might add, is that Hughes constructs the “prediction” line in his Figure 2 as starting from the local max at 1998, because the “prediction” is that “temps will rise at trend X for the next n years”. Note that by starting at the 1998 max he has subtly CHANGED the prediction to “temps will rise at trend X starting at value C” without actually telling us he has done that. This is a totally different matter statistically, and leads to fewer degrees of freedom and larger CIs to boot.

    It is easily possible to find, over the years, that trend X does in fact occur, but is defined by a regression line with a lower C. In fact, that is quite likely to be the case when you start from a local max, given regression effects.

    Old trick, but effective on the statistically illiterate. Either Hughes is statistically illiterate or he is intentionally misinforming. I have no hypothesis as to which is the case.

  23. jgnfld says:

    Whoops, after you made an update I see that dikran made the same observation.

  24. dikranmarsupial says:

    To see why Prof. Hughes’ Moncktonian trend analysis is so bogus, consider generating lots of random time series of 17 points with zero underlying trend. Throw away all those samples for which the first data point is not the one with the largest residual, or where that residual is not positive (modelling the cherry-picking that is going on). Repeat this procedure until you have a large number of samples, say 1024 (being a computer scientist these days, integer powers of 2 are what I would consider to be a round number ;o). Now calculate the trend according to the usual linear regression method. You will find it gives an expected trend that is less than zero (because the cherry-picking will mean that the data are slightly higher at the start than at the end). Now constrain the trend to go through the first datapoint. Now ALL the trend lines will be negative, as the first datapoint will be the highest. So if we start off with the actual trend being flat, by construction, Hughes’ method would be guaranteed to give a negative trend!
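
    A minimal sketch of that thought experiment in Python (not code from the GWPF report; it interprets “largest residual” as the deviation from the underlying flat series, so the accepted samples are those whose first point is the highest, positive value):

```python
import numpy as np

rng = np.random.default_rng(42)

n_points = 17      # length of each series
n_keep = 1024      # number of accepted ("cherry-picked") samples
t = np.arange(n_points)

ols_slopes, pinned_slopes = [], []
while len(ols_slopes) < n_keep:
    y = rng.normal(0.0, 1.0, n_points)   # zero underlying trend
    # For a flat, zero-mean series the residual from the underlying model
    # is just the value itself, so "largest positive residual at the start"
    # means the first point is the highest (and positive) value.
    if not (y[0] > 0 and y[0] == y.max()):
        continue
    ols_slopes.append(np.polyfit(t, y, 1)[0])              # usual OLS trend
    # Trend constrained to pass through the first data point.
    pinned_slopes.append(np.sum(t * (y - y[0])) / np.sum(t ** 2))

ols_slopes = np.asarray(ols_slopes)
pinned_slopes = np.asarray(pinned_slopes)
print(f"mean OLS slope of accepted samples:   {ols_slopes.mean():+.4f}")          # below zero
print(f"fraction of pinned slopes below zero: {(pinned_slopes < 0).mean():.3f}")  # 1.0
```

    The accepted samples have an average least-squares trend below zero, and every pinned trend comes out negative, as described above.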

  25. jgnfld says:

    dikran…

    I did this a slightly different way recently and have the data from several million Monte Carlo runs stored on this computer. I am considering writing it up for a blog in the fall when I should have the time.

    What I did was to generate random sequences of lengths between 15 and 20, with trends and CIs in the range of normal climate data. Then I counted only those instances for which BOTH the first point was 0.5, 1, 1.5, 2, etc. SDs above the trend line AND the trend was significant. The effect of cherry-picking local maxes is immediately and seriously obvious. By looking at various series lengths and combinations of them you can get a handle on how cherry-pickers “see”.

    Of course this can be worked out analytically as well.
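
    A rough, much smaller version of that kind of experiment (the trend and noise values here are arbitrary, climate-ish choices, and the start point’s deviation is measured from the underlying trend rather than from the fitted line):

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)

n, true_trend, sigma = 17, 0.012, 0.1    # points, degC/yr, noise SD (arbitrary)
t = np.arange(n)

start_dev, significant = [], []
for _ in range(20000):
    noise = rng.normal(0.0, sigma, n)
    y = true_trend * t + noise
    fit = linregress(t, y)
    start_dev.append(noise[0] / sigma)    # start point, in SDs above the underlying trend
    significant.append(fit.pvalue < 0.05 and fit.slope > 0)

start_dev = np.asarray(start_dev)
significant = np.asarray(significant)

# Fraction of runs with a "significant" warming trend, conditional on how far
# above the underlying trend the cherry-picked starting point sits.
for k in (0.0, 0.5, 1.0, 1.5, 2.0):
    mask = start_dev >= k
    print(f"start >= {k:.1f} SD above trend: "
          f"{significant[mask].mean():.2f} significant ({mask.sum()} runs)")
```

    The fraction of runs that come out “significant” drops noticeably as the starting point is chosen further above the underlying trend, which is the regression-to-the-mean effect being described here.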

  26. Tom,

    Anders, the last figure in the GWPF post relates to the argument that Dikran dissects above.

    Okay, yes, I see that now. It’s still a bit confusing in that he’s claiming a reasonably large positive confidence interval, even though none of the data points are above the trend line that starts on the data point for 1998. Very strange.

  27. Dikran,

    So if we start off with the actual trend being flat, by construction, Hughes’ method would be guaranteed to give a negative trend!

    Maybe he’s taking lessons from the new Chairperson of the GWPF’s Academic Advisory council who developed a recipe for a hiatus.

  28. A says:

    It’s because all they do is work to find ways to preserve their belief/desire that AGW is either not real or not significant, and thus to continue believing it, by dismissing everything that gets in the way and reformulating arguments that, in their minds, fit that belief.

    Thus, AGW “stopped” or “paused” (really, surface temperatures only slowed in terms of geologically short-term variance, and even though air temps are only part of it, in the total Earth picture – which is what is relevant – there was no pause, but if anything acceleration). So they conveniently ignored the noise thing, as well as everything else relevant, namely that air temperature is a complex interaction of the entire picture, including the oceans; if more heat goes there, only a small uptick will have a huge impact on atmospheric temperatures – and ocean heat ratcheted upward, yet air temps STILL increased.

    Turns out even air temps over this fairly short recent time period, after a LOT of climbing upward, have continued to do so. The “fake” pause – http://bit.ly/1HYY3Mt – fit their belief. This now doesn’t. So NOW they cling to the noise thing as a way to similarly ignore/dismiss the new and more relevant data analysis.

    It is also a group-solidarity-reinforcing thing. They do this under the group’s self-reinforcing belief that they are practicing “better science.” Judy Curry is a good example of both of these. She mangles the basic issue, and more importantly the risk-range aspect, so badly that it is almost a caricature – it’s not just that she’s not logical, it’s that her absurd lack of logic also “just happens” to reinforce the notion that she’s fallen into on this… and then reinforces it. Thus, it is part of this same broader pattern (which really needs to be shown, and to the media – what’s left of it): http://bit.ly/1LY86C

    So naturally the U.S. Congress turns repeatedly to Curry for “expert” testimony on AGW, and her widely popular blog – given credibility by her apparent pedigree (and Congress’ professional reliance on her) – as with many other similar sites, helps feed the inflammatory, self-reinforcing impugnment and demonization of climate scientists, and the ongoing illogical but fervently believed skepticism, that fuel so much misinformation on this subject.

  29. entropicman says:

    “Of course, the only reason they’ve done so is because Karl et al. (2015) have suggested that the trend since 1998 may now exceed the 2σ confidence interval (i.e., it might be statistically significant, …”

    For GISTEMP, at least, the linear trend has increased by 4.25 standard deviations since 1998.

    The 4SD threshold is a quick and dirty method of estimating significance, but perhaps the trend is already statistically significant?

  30. WebHubTelescope says:

    Playing into their game by summoning statistical arguments. Most of natural variability is described by deterministic physical behavioral models.

  31. ontspan says:

    Gordon Hughes may be a newcomer in the global warming FUD department, but he has long been spouting FUD about renewables via the Renewable Energy Foundation. As you might already have guessed, REF is to renewables what Friends of Science is to science; look it up at SourceWatch.

    One example of Hughes’ top-notch statistical chops would teach us that wind turbines only last ~10 years, based on Danish and UK wind turbine production data. The well-known David MacKay took a good look and concluded otherwise (PDF).

  32. ontspan,
    Yes, I was aware of that one. There’s also this, where he apparently claimed that gas could provide energy almost 10 times cheaper than renewables, but forgot to make clear that he hadn’t included the cost of the gas in his calculation.

  33. ontspan says:

    ATTP, ouch! It proves again that even a professor can make a fool of himself when arguing towards a predetermined answer. Looking at some of the local FUD blogs’ reactions to the report, I can confirm everyone there uncritically (surprise!) accepted the idea that it showed wind power to be 10x more expensive than gas, while the press release only talked about construction costs, not actual generation costs.

  34. Pingback: The GWPF is funny | …and Then There's Physics

  35. Pingback: It woz El Nino wot dunnit! | …and Then There's Physics
