The GWPF is funny

Well, no, not really, but sometimes all you can do is larf. They’ve released a new report called Statistical forecasting: How fast will future warming be?. It is by Terence Mills, a statistics professor who specialises in time series analysis, and has already been picked up by The Times and The Australian. The main motivation was to

set out a framework that encompasses a wide range of models for describing the evolution of an individual time series.

Bottom line: he used basic time series analysis to develop models that he could then use to make forecasts of future temperatures. Was there any physics, I hear you ask? The answer, as I’m sure you’ve guessed, is no.

The basic result is shown in the figure on the right, which presents forecasts based on two different models. The forecast (red line) indicates no future warming, essentially suggesting that climate sensitivity is 0. Well, this is obvious nonsense. Furthermore, Gavin Schmidt has added the more recent observations (thin blue line), which already fall outside the models’ 95% confidence intervals (green and black lines). So, a year in and the models are already diverging from reality.

Here’s the key point: projecting future warming requires some kind of estimate for future emissions. Trying to forecast future warming using some model with no physics and based only on past temperatures is obvious nonsense. Even a Professor of Statistics should be able to get this utterly trivial point. Maybe Terence Mills is so clueless that he really can’t grasp what is a pretty straightforward concept. Alternatively, maybe £3000 was enough for him to put his name to a report that he knew was garbage. Whichever it is, I fully expect Richard Tol to come along and defend it.


212 Responses to The GWPF is funny

  1. dikranmarsupial says:

    “The forecast (red line) indicates no future warming, essentially suggesting that climate sensitivity is 0.” this ought to be a wake-up call for the GWPF, apparently some on their academic advisory panel have called for a carbon tax! ;o)

  2. Apparently they have, but – of course – if there’s no future warming then the carbon tax is zero. Convenient.

  3. Trying to forecast future warming using some model with no physics and based only on past temperatures is obvious nonsense.

    Statisticians working in finance also have no physics for their forecasts. Using Prof. Tol’s argument they are paid more and are thus more qualified than academic statisticians that would argue for the use of physics in the selection of your statistical models.

  4. Statisticians working in finance also have no physics for their forecasts. Using Prof. Tol’s argument they are paid more and are thus more qualified than academic statisticians that would argue for the use of physics in the selection of your statistical models.

    Yes, I almost pointed that out, but Terence Mills does not work in finance, so maybe – according to Richard’s criterion – he’s just not very good.

  5. MartinM says:

    I wonder which members of the ‘academic’ advisory council reviewed this work? Would be interesting to see if any of them are willing to put their name to it.

  6. BBD says:

    Was there any physics, I hear you ask? The answer, as I’m sure you’ve guessed, is no.

    [Canned laughter]

  7. verytallguy says:

    Martin,

    The GWPF say their review process is “peer review”.

    So that’s Nigel Lawson and Matt Ridley 😉

    It’s a “paper” so obviously shit you do wonder whether the academic who wrote it was deliberately taking the piss.

    On a slightly more serious note, the purpose of this crap is to get some publicity and shed some doubt, not to be academically credible. So it’s already met the objectives of those who commissioned it.

  8. Maybe Terence Mills is an atheist who thinks that makes him rational …

    [potholer54 strikes again, and boy, is he on form]

  9. Magma says:

    The GWPF report by Mills was so awful that I thought it must be a one-off by a statistician who had never worked with climate data before. But I was shocked to find he’s authored or co-authored at least 20 peer-reviewed papers on climate-related topics over the past decade or so.

    Even if he was motivated by remuneration, £3000 wasn’t nearly enough to cover the damages.

  10. Magma says:

    That copy and paste of McKitrick’s fulsome praise didn’t come out well. Here’s another try. Mod, feel free to delete the first version.

    From Foreword by Professor Ross McKitrick:

    In this insightful essay, Terence Mills explains how statistical time-series forecasting methods can be applied to climatic processes. The question has direct bearing on policy issues since it provides an independent check on the climate-model projections that underpin calculations of the long-term social costs of greenhouse gas emissions. In this regard, his conclusion that statistical forecasting methods do not corroborate the upward trends seen in climate model projections is highly important and needs to be taken into consideration.

    As one of the leading contributors to the academic literature on this subject, Professor Mills writes with great authority, yet he is able to make the technical material accessible to a wide audience. While the details may seem quite mathematical and abstract, the question addressed in this report is of great practical importance not only for improving the science of climate forecasting, but also for the development of sound long-term climate policy.

  11. Now I have to redo my comment 🙂

    The question has direct bearing on policy issues since it provides an independent check on the climate-model projections that underpin calculations of the long-term social costs of greenhouse gas emissions.

    It’s only independent in the sense of having no basis in reality.

  12. Mills finds that the three temperature records are non-stationary rather than trend-stationary. The forecasts follow mechanically. Note that 1 in 20 observations is supposed to be outside the 95% confidence interval, so the probability of observing 1 in 12 outside the confidence interval is about 35%.

    The challenge is to explain why trend-stationarity — which corresponds to a greenhouse signal — is rejected in favour of non-stationarity — which corresponds to natural variability. None of the commenters above rises to this challenge
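
    For the record, the arithmetic behind that figure, assuming twelve independent observations and a correctly calibrated 95% interval (the “about 35%” corresponds to exactly one point falling outside):

        # Binomial arithmetic for 12 independent points and a 95% interval.
        from math import comb

        p_out, n = 0.05, 12
        p_exactly_one = comb(n, 1) * p_out * (1 - p_out) ** (n - 1)
        p_at_least_one = 1 - (1 - p_out) ** n
        print(round(p_exactly_one, 2))   # ~0.34, i.e. "about 35%"
        print(round(p_at_least_one, 2))  # ~0.46 for one or more points outside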

  13. Richard,
    Beautiful, you’ll defend any old crap if it comes from one of your GWPF mates.

  14. MartinM says:

    None of the commenters above rises to this challenge

    The answer is literally the name of this blog.

  15. The Very Reverend Jebediah Hypotenuse says:


    None of the commenters above rises to this challenge.

    We’ve already got one. It’s very nice.

  16. Pingback: Considera l'armadillo - Ocasapiens - Blog - Repubblica.it

  17. Jim Hunt says:

    As luck would have it I gave Benny Peiser a piece of my mind over that very article this very morning. However you’ll have to scroll down past all my many missives about the GWPF’s recent “funny” articles about the Arctic in order to read it:

    The Great Global Warming Policy Forum Con

    Good morning Benny,

    I note that the GWPF webmaster has still not taken on board any of the helpful advice I have proffered over the last few weeks, and has now posted some inaccurate information about “global warming”. Will he or she never learn?

  18. wehappyfew says:

    Dr Tol,

    I see no back-testing or retro-diction. Apply this “method” of prediction to HadCrut4 data from 1850 to 1975, for example. See how well it predicts 1976 to 2016 data.

    I predict it will not do well.

    I predict Mills’ “method” will not rise to this challenge.
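
    As a sketch of what such a back-test might look like (the file name hadcrut4_annual.csv and its year/anomaly columns are hypothetical, and the ARIMA order is illustrative rather than the one Mills fitted):

        # Fit a purely statistical model to the record up to 1975, then compare
        # its forecast with the years that actually followed. No physics anywhere.
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        data = pd.read_csv("hadcrut4_annual.csv")      # hypothetical: year, anomaly
        train = data[data.year <= 1975]["anomaly"]
        test = data[data.year > 1975]["anomaly"]

        fit = ARIMA(train, order=(0, 1, 3)).fit()
        pred = fit.get_forecast(steps=len(test)).predicted_mean

        errors = test.to_numpy() - pred.to_numpy()
        print(f"mean forecast error: {errors.mean():+.2f} °C")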

  19. The Very Reverend Jebediah Hypotenuse says:

    Actually, physics gets a brief nod near the end of the piece – only to be dismissed as making the science of climate become as problematic an area of study as… economics or finance.


    It may be thought that including ‘predictor’ variables in the stochastic models will improve both forecasts and forecast uncertainty. Long experience of forecasting non-stationary data in economics and finance tells us that this is by no means a given, even though a detailed theory of such forecasting is available. Models in which ‘forcing’ variables have been included in this framework have been considered, with some success, when used to explain observed behaviour of temperatures. Their use in forecasting, where forecasts of the forcing variables are also required, has been much less investigated, however: indeed, the difficulty in identifying stable relationships between temperatures and other forcing variables suggests that analogous problems to those found in economics and finance may well present themselves here as well. (p18)

    Forcing the membership of the GWPF to see the wonderful irony in that fallacious passage could involve turning heads inside out.

  20. Has Keenan had a name change?

  21. Richard,
    I wondered something similar myself 🙂

  22. The temperature series investigated so far are both ‘global’ and hence contain no seasonal fluctuations. (page 9)

    This is utterly clueless. There are no seasonal fluctuations because the timeseries are anomalies. This was peer-reviewed wasn’t it.

  23. Morbeau says:

    It’s a “paper” so obviously shit you do wonder whether the academic who wrote it was deliberately taking the piss.

    And there’s the challenge right there. The bar isn’t very high, but it is wide.

    On a slightly more serious note, the purpose of this crap is to get some publicity and shed some doubt, not to be academically credible. So it’s already met the objectives of those who commissioned it.

    Credibility is a double-edged sword.

  24. This is utterly clueless. There are no seasonal fluctuations because the timeseries are anomalies.

    Clueless on many different levels.

    This was peer-reviewed wasn’t it.

    Possibly by a Viscount, but other than that, I don’t think so.

  25. Willard says:

    > Mills finds that the three temperature records are non-stationary rather than trend-stationary.

    “But random walk”:

    Global average temperature increase GISS HadCRU and NCDC compared

    Only 2,190 comments to read.

  26. As Richard Betts points out on BH, Mills published a paper on How robust is the long-run relationship between temperature and radiative forcing?. So, it seems that he wasn’t unaware of some of the basic physics.

  27. @willard
    Indeed, it is not a new discussion. Mills has been arguing this same point, in a series of peer-reviewed papers, for 15 years now. As you know, I’m with Estrada & Perron.

  28. MarkB says:

    Tol: The challenge is to explain why trend-stationarity — which corresponds to a greenhouse signal — is rejected in favour of non-stationarity — which corresponds to natural variability.
    Trend-stationarity is rejected because, without justification, Mills segmented the data such that “the current regime” is virtually trendless. For HADCRUT he uses data since 2002, for RSS since 2000, and for CET he uses everything since 1660 (real temperatures, not anomalies). If one arbitrarily picks segments minimizing the recent trend, then it’s not surprising that performing the statistical mechanics doesn’t detect a trend.
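
    A rough illustration of that point, assuming a hypothetical hadcrut4_annual.csv with year and anomaly columns: the trend over a hand-picked 13-year “regime” carries far more uncertainty than the trend over the full modern record, so its apparent trendlessness tells you very little.

        # Ordinary least-squares trends from two start years, with standard errors.
        import pandas as pd
        from scipy.stats import linregress

        data = pd.read_csv("hadcrut4_annual.csv")      # hypothetical: year, anomaly
        for start in (1975, 2002):
            seg = data[(data.year >= start) & (data.year <= 2014)]
            res = linregress(seg.year, seg.anomaly)
            print(f"{start}-2014: {10*res.slope:+.2f} ± {10*res.stderr:.2f} °C/decade")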

  29. Richard,
    You’d think after 15 years of doing this, he’d have worked it out by now.

  30. Mills (2009) reports a climate sensitivity of 2 +/- 1 °C/doubling. Low but not absurd. Don’t know why he thinks sensitivity is now zero. Or does he think that emissions might randomly fluctuate to zero in the next decade?

  31. Richard Telford,
    Well I have seen Matt Ridley suggest that the range of RCPs used by the IPCC suggests that we might – by chance – follow a low emission pathway. Quite how we can do so without actually reducing our emissions is, however, somewhat beyond me.

  32. Quaint as the customs of Lawsonland may be, America’s Heartland is far funnier.

  33. jamesannan says:

    Thanks for the pointer, I’ve emailed him suggesting a bet on the basis of his forecast 🙂

  34. James,
    Let me know what he says 🙂 . I thought of emailing him too, but decided against it.

  35. James, whatever you offered, may I join that bet?

  36. Michael Hauber says:

    Easy enough to measure the trend from 1975 to 2015, extrapolate that into the future and call that a forecast. Not even any physics required – although I’d appeal to principles of physics to justify this as reasonable – and I bet it will be a pretty reasonable forecast for at least 20 or 30 years.

    But somehow the author measures the trend for the entire HADCRUT data series to be not significantly different from 0, and that the term that represents the trend in his time series model can therefore be replaced by 0.
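
    The naive extrapolation described above really is only a few lines (hadcrut4_annual.csv with year and anomaly columns is again a hypothetical file):

        # Fit a straight line to 1975-2015 and extend it: a "forecast" with no
        # physics, just persistence of the recent trend.
        import numpy as np
        import pandas as pd

        data = pd.read_csv("hadcrut4_annual.csv")      # hypothetical: year, anomaly
        recent = data[(data.year >= 1975) & (data.year <= 2015)]

        slope, intercept = np.polyfit(recent.year, recent.anomaly, 1)
        for yr in (2030, 2050):
            print(yr, round(slope * yr + intercept, 2))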

  37. jsam says:

    Who cares if it’s scientific nonsense? It feeds the denialist press. The paper’s purpose is not to advance knowledge, it is to advance a cause.

  38. John Hartz says:

    Propaganda by any other name, including pseudo-science poppycock like Mills’ paper, is still propaganda.

  39. Jim Hunt says:

    @RichardTol – Have you taken a close look at the reams of utter hogwash the GWPF is printing about Arctic sea ice at the moment?

    Forgive me if I harp on about my personal hobby horse, but even a practitioner of the dismal science can surely see it for what it is? “Clueless” doesn’t even get close!

  40. Ken Fabian says:

    There is what Mills and the GWPF say and then there’s physics; no contest which can be safely ignored without consequence and which cannot.

    But we live in an age when corporations and economies are built on foundations of self interest without responsibility – responsibility adding unwanted burdens of costs that are readily avoidable by ordinary means (PR, judicious donations, lobbying, tankthink and economic alarmism). It is a system aided by ‘free’ markets wherein advertised enticements of vicarious lust, gluttony and envy displace disclosure of information. It is a system that is, rather ironically, championed by lots of religious leaders as morally superior – in a deal that exchanges non-binding statements of intent to eliminate evolution from education and climate change from energy policy for influential endorsements. Inconvenient physics based foresight is displaced by convenient beliefs and appropriate illusions.

    Change apparently requires the highest levels of certainty, whilst continuing business as usual, irrespective of the high levels of certainty of strong, ongoing and irreversible planet-changing consequences, requires none.

  41. L Hamilton says:

    Magic terms are specified to separate the HadCRUT time series into 5 “regimes” each with different slope:
    1 1850–1919
    2 1920–1944
    3 1945–1975
    4 1976–2001
    5 2002–2014
    Perhaps someone will claim that 2015-16 ventured out of prediction bounds, and Mills should decline any betting with James, because a new “regime” has just started. Or maybe not, the magic terms (permitting instantaneous slope changes) appear to dominate overall fit and are not predicted.

  42. Some statisticians use a statistical package like R. Others rely on spreadsheets like Excel. A few, apparently, use a Ouija board.

  43. David Hamilton says:

    Climate science is founded on physics, because it’s about physics – it is about how the heat balance of the planet is being changed by increasing greenhouse gases in the atmosphere. While many details of climate science are about a lot more than the physics, when it is the big picture that is being considered, it is all about the physics. The key question is: does the increase in greenhouse gases result in the accumulation of heat in the earth/atmosphere system? The answer is “yes”, and I don’t see any deniers question that basic physics. Where are the papers questioning the measurements and the radiative forcing calculations? The thing is, if any individual or group of deniers question the scientific consensus while not addressing the basic physics, then to me that is proof that they are not doing science, they are peddling doubt. Either the physics is correct, or it is wrong; if it is correct, then the rest is detail.

  44. anoilman says:

    Next up, a statistician will predict the future path of a hockey puck during a game with only past puck trajectory data to assist him. Afterwards, physicists and hockey players share a beer…

  45. Marco says:

    ATTP, more interesting is that Tol essentially admits that Mills’ work puts severe doubts on Tol’s earlier work from 1993 and 1994, in which he and his co-authors used statistics to show the hypothesis that “increased CO2 is not the cause of the increased global temperature” should be rejected (P<0.01).

    I thought the Tol was self-professed infallible?

  46. @marco
    As you can see from those papers, we test our model (which does not have a linear trend, but rather a trend that follows radiative forcing) against ARIMA and reject ARIMA. Others, particularly Estrada&Perron, did the same.

    So, I think that Mills is wrong, as accomplished an econometrician as he may be. Hamilton apart, no one on this thread has offered any valid argument as to why Mills is wrong. Abuse aplenty, but little substance.

  47. Lars Karlsson says:

    I don’t find the body of the report especially problematic. It reads mostly like a tutorial. As far as I can see, Mills doesn’t make any claims that these methods are comparable to or better than GCMs.
    The exaggerations start with McKitrick in the foreword and in the news item of the GWPF, and seem to escalate when they reach the media.
    The Times writes:
    “The global average temperature is likely to remain unchanged by the end of the century, contrary to predictions by climate scientists that it could rise by more than 4C, according to a leading statistician.”

    Seems to me that Mills has been conned by the con men of the GWPF.

  48. Richard,

    Hamilton apart, no one on this thread has offered any valid argument as to why Mills is wrong.

    Apart from this bit in the post?

    Here’s the key point; projecting future warming requires some kind of estimate for future emissions. Trying to forecast future warming using some model with no physics and based only on past temperatures is obvious nonsense.

    Abuse aplenty, but little substance.

    A fair amount, but then this is someone who was paid to write something that it’s hard to believe they did not know was nonsense. They’ve also succeeded in getting this obvious nonsense promoted in the mainstream media.

  49. Lars,

    Seems to me that Mills has been conned by the con men of the GWPF.

    Maybe, but there are still things in the report that are wrong, as Richard Telford highlights.

  50. @wotts
    Instead of “abuse” I should have written “abuse and misunderstanding”. Mills explicitly tests a linear trend (greenhouse forcing) against natural variability, and comes out in favour of the latter. His forecast follows immediately from his test, so you should find fault with his test (unless of course you want to argue, pre-Enlightenment, that you reject the method because you don’t like the result).

  51. Jim Hunt says:

    Richard,

    You frequent Twitter I believe? How about this from the other Richard for starters?

  52. Richard,

    Mills explicitly tests a linear trend (greenhouse forcing) against natural variability, and comes out in favour of the latter.

    I don’t think you get to say the above and claim that others misunderstand. A linear trend is NOT greenhouse forcing. Greenhouse forcing typically requires some knowledge of past forcings or some estimate for future forcings. Once again, you appear to be confusing descriptive statistics and inferential statistics.

    Look this isn’t even all that complicated. You cannot make projections/predictions/forecasts for a physical system using time series analysis alone unless you happen to know that your time series is – somehow – a good representation of that system (throwing dice, for example). Given that our climate is not simply random, using time series analysis to make forecasts is clearly wrong. That you would end up defending this is, however, not surprising.

  53. Lars Karlsson says:

    The report would have been more useful for climate modelers if Mills had addressed how to include ‘predictor variables’ (forcings). He only briefly mentions these in the discussion section. That might be a way to get some physics into the statistical models.

  54. jimt says:

    @Richard Tol,
    Apart from obvious problems of 1) ignoring physics and 2) trying to predict future trends purely on the basis of past trends, here is just one major problem with Mills’ “segmented regression” analysis:
    He arbitrarily splits the series into “regime” periods, with no statistical justification for choosing the start of each period. Actually it’s not completely arbitrary, it is obviously cherry-picked to ensure that the most recent regime has a non-positive trend. This is barely a step above Monckton’s cherry-picked ‘pause’ posts on WUWT (in fact it’s worse, because a professor of statistics should know better). Mills goes to such lengths to ensure the most recent trend is not positive, that he invents a ‘regime’ of < 2 years in length in the RSS data, into which he crams about 15 years of warming (see Figure 2, Fig 1 is no better). It is laughable.
    There are perfectly good methods for objectively testing for changes in linear slope – they are called change-point models. When applied properly to global temperature data, they consistently fail to find any evidence of a change in linear trend in the last 40+ years (in any dataset). So any objective attempt to predict future trends based on past trends using segmented regression would predict a continuation of the last 40 years upward trend.
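
    To make “objectively” concrete, here is a bare-bones sketch of a breakpoint search: fit a continuous two-segment trend at every candidate break year and let an information criterion, not the analyst’s eye, pick the break. It uses ordinary least squares with no AR term, so it is only the skeleton of the idea, and hadcrut4_annual.csv is a hypothetical year/anomaly file.

        # Compare a single linear trend with the best continuous broken trend,
        # where "best" is chosen by BIC over all candidate break years.
        import numpy as np
        import pandas as pd

        data = pd.read_csv("hadcrut4_annual.csv")
        t = data.year.to_numpy(dtype=float)
        y = data.anomaly.to_numpy()
        n = len(y)

        def bic(sse, k):                               # Gaussian BIC, k free parameters
            return n * np.log(sse / n) + k * np.log(n)

        sse_line = np.sum((y - np.polyval(np.polyfit(t, y, 1), t)) ** 2)

        scores = []
        for t_break in t[10:-10]:                      # keep 10 points on each side
            X = np.column_stack([np.ones(n), t - t_break, np.clip(t - t_break, 0, None)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            scores.append((bic(np.sum((y - X @ beta) ** 2), 4), int(t_break)))

        print("single trend BIC:", round(bic(sse_line, 3), 1))
        print("best (BIC, break year):", min(scores))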

  55. Lars,
    Indeed, but then he said

    Long experience of forecasting non-stationary data in economics and finance tells us that this is by no means a given, even though a detailed theory of such forecasting is available.

    which suggests that he rather dismisses this idea. Maybe he could explain what conservation laws apply in economics and finance, since the forcings in climate modelling are fundamentally linked to the conservation of energy.

  56. jimt,
    Thanks, your comment seems to cover most of the relevant issues. I hope Richard is happy now 😉

  57. verytallguy says:

    This is all very simple.

    The point of the GWPF was to get publicity for work that can be portrayed as casting doubt on global warming. It succeeded, excellently – it’s in a reputable newspaper.

    The point of Richard Tol’s posts is to gain attention and demonstrate his superior intelligence. He’s doing very well on the former at least.

    The point of Mills’ involvement is the only interesting bit. Was he really in it for the money? I suspect this is more about having some fun and getting some publicity, plus the opportunity for a bit of abstruse academic debate. Academics do tend to enjoy that sort of thing (apologies for the stereotyping).

    No-one, but no-one involved in any of this actually believes the “forecasts” are at all relevant to anything real or physical. Responding in that vein is probably necessary but largely irrelevant to the purpose of the exercise.

  58. vtg,
    I suspect that sums it up pretty well. It’s hard to believe that the GWPF or Mills could really believe that these forecasts had some merit.

  59. Lars Karlsson says:

    ATTP,
    Yes, that was not a very convincing dismissal. It would certainly make sense to try it out. Not using anything like ‘predictor variables’ on the other hand, is most likely a bad idea.

  60. Jim Hunt says:

    VTG,

    Do you have any hard evidence to support your unsubstantiated assertion that The Times is “a reputable newspaper”?

  61. dikranmarsupial says:

    Richard wrote “Mills finds that the three temperature records are non-stationary rather than trend-stationary. The forecasts follow mechanically.”

    Do you think that forecast of no further warming, even though radiative forcing will almost certainly increase, is correct?

  62. verytallguy says:

    Jim,

    “Newspaper of Record” is the phrase I should have used

    https://en.wikipedia.org/wiki/Newspaper_of_record#Examples

    a major newspaper that has a large circulation and whose editorial and news-gathering functions are considered professional and typically authoritative

    It is actually the only serious point of the whole thing – that the GWPF have sufficient influence in the press to get this obvious tosh reported in the Times as an apparently credible piece of work.

    That’s the issue.

  63. BBD says:

    The Murdoch press. WTF do you expect?

  64. Jim Hunt says:

    @VTG/@BBD – That’s precisely what I expect.

    @RichardTol – You do realise that you’re defending the indefensible?

  65. dikranmarsupial says:

    RichardTol wrote “Note that 1 in 20 observations is supposed to be outside the 95% confidence interval, so the probability of observing 1 in 12 outside the confidence interval is about 35%.”

    Richard, would you agree then that the CMIP5 models are not called into serious question by the observations (for some but not all datasets) being briefly outside the 95% spread of the model runs:

    And even then the difference is partially explainable by the forcings not being exactly as the scenario?

  66. Dikran,
    Indeed, thanks for pointing that out. It is rather amusing: I think I pointed this out to Richard a while back, so to see him now using it to defend what is clearly an incredibly poor model is rather bizarre.

  67. So, Loughborough University is actually promoting this

  68. Lars Karlsson says:

    It will be interesting to see how Mills reacts to these misrepresentations of his work. Is he an unwitting victim or is he a con man too?

  69. BBD says:

    Somebody at Loughborough NEEDS to speak to the idiot PR now, now, now…

  70. dikranmarsupial says:

    Richard wrote: “Mills explicitly tests a linear trend (greenhouse forcing) against natural variability, and comes out in favour of the latter. … so you should find fault with his test”

    Not difficult, the “exogenous” choice of the segmentation boundaries invalidates the statistical assumptions of the test (the period is not a random sample from some underlying distribution, but has been chosen after looking at the data).

    “(unless of course you want to argue, pre-Enlightenment, that you reject the method because you don’t like the result).”

    Richard, is designing the test to give you the result you want (by cherry picking a 2002 start date to minimise the trend) any better?

  71. I long for the days of the ‘Stadium Wave.’

  72. Lars,

    It will be interesting to see how Mills reacts to these misrepresentations of his work. Is he an unwitting victim or is he a con man too?

    The tweet appears to have gone, which may be because they linked to the Geography department, rather than Economics. However, it would be interesting to know if he approved the claim that he says it won’t warm by 2100, or if they simply lifted that from the GWPF press release.

  73. dikranmarsupial says:

    It would be interesting to know if Mills discussed his work with anybody from the Geography department.

  74. Jim Hunt says:

    I see that you had a word in Loughborough’s shell-like, Anders:

  75. Mills’ forecast looks similar to what pseudonymous blogger “VS” predicted based on similarly physics-devoid time series analysis (https://ourchangingclimate.wordpress.com/2010/04/01/a-rooty-solution-to-my-weight-gain-problem/ ). Blind application of statistical tools without considering the physical characteristics of the system leads to meaningless results. Unless of course one would like to throw conservation of energy out of the window.

  76. Bart,
    Eli just tweeted that post. Really good.

    Blind application of statistical tools without considering the physical characteristics of the system leads to meaningless results.

    Exactly.

  77. verytallguy says:

    Somebody at Loughborough NEEDS to speak to the idiot PR now, now, now…

    Feature or bug, BBD? No such thing as bad publicity and all that.

  78. @wotts
    CO2 is the most important greenhouse gas. It’s concentration has risen exponentially. Radiative forcing is the natural logarithm of CO2 concentration. A linear trend is therefore a reasonable approximation.

    @jimt
    The number and location of the breakpoints are estimated, rather than set by the analyst.

    @dikran
    No. As I said, I think that Mills is wrong, because, focussing on significance rather than power, he misinterprets his test results, and he also ignores data.

  79. Radiative forcing is the natural logarithm of CO2 concentration. A linear trend is therefore a reasonable approximation.

    I agree that it’s a reasonable approximation. However, estimates of the forcings suggest that it isn’t actually linear. I’m not averse to reasonable approximations. However, given that we don’t expect the underlying trend to be linear, pointing out that a linear approximation doesn’t work very well is not an argument in favour of natural variability, whatever the statistical tests might suggest.

  80. @wotts
    A reasonable but unnecessary approximation (as data on radiative forcing are readily available) … if a student would do this, we would deduct points for lack of diligence.

  81. A reasonable but unnecessary approximation (as data on radiative forcing are readily available) … if a student would do this, we would deduct points for lack of diligence.

    Isn’t that essentially my point? You wouldn’t be suggesting that I’m suddenly arguing in favour of a simple linear approximation?

  82. jamesannan says:

    No interest in a bet, he’s just doing a bit of mathturbation that he doesn’t believe in, SOP for a maths professor, I think. Bit disappointing for an applied maths prof, though. Think I’d rather deal with the delusional idiots who think it’s all a conspiracy than the deliberately disingenuous who know it isn’t but generate misleading nonsense regardless.

  83. verytallguy says:

    I don’t like to say “I told you so” but Prof Mills confirms my analysis via the excellent James Annan:

    No-one, but no-one involved in any of this actually believes the “forecasts” are at all relevant to anything real or physical.

    http://julesandjames.blogspot.com/2016/02/no-terence-mills-does-not-believe-his.html

  84. verytallguy says:

    Ha! Crossed.

  85. dikranmarsupial says:

    “@dikran No. As I said, I think that Mills is wrong, because, focussing on significance rather than power, he misinterprets his test results, and he also ignores data.”

    “challenge is to explain why trend-stationarity — which corresponds to a greenhouse signal — is rejected in favour of non-stationarity — which corresponds to natural variability.” perhaps you have answered your own question then?

  86. L Hamilton says:

    @Tol
    “The number and location of the breakpoints are estimated, rather than set by the analyst.”

    No, the number and location of the breakpoints are not estimated, but rather set by the analyst. Mills explains:
    “The break-points were determined ‘exogenously’, in other words by visual examination of a plot of the series.”

    The 4 eyeball-selected and forecast-determining break points Mills specifies for HadCRUT give these 5 regimes:
    1. 1850-1919
    2. 1920-1944
    3. 1945-1975
    4. 1976-2001
    5. 2002-2014

    Mills’ eyeballed HadCRUT breakpoints are unrelated to his eyeballed RSS breakpoints, which likewise make 3 forecast-determining “regimes” (one only 2 years in length) for that series:
    1. 1979–1997
    2. 1998-1999
    3. 1999–2014

  87. @L Hamilton
    Really? In response to Wotts, I gave Mills an “L” for lazy. If you’re right, he deserves an “XL” for extra lazy (particularly since he has a theory paper on estimating multiple breakpoints).

  88. @dikran
    Sometimes we ask questions because we want to know how the other person would answer.

    I guess we have now reached the point where we have taken the paper almost completely apart without discussing the author’s choice of clothes.

  89. dikranmarsupial says:

    Richard “Sometimes we ask questions because we want to know how the other person would answer.”

    Yes, but more often we ask questions because we want to know the answer or just to be sure that we understand their position correctly, which is why it is generally a bad idea to answer their questions cryptically or evasively and instead give a straight answer to the question as posed.

    “The number and location of the breakpoints are estimated, rather than set by the analyst.”

    Indeed, but at least he explicitly said exactly what form of breakpoint analysis was performed, and not all authors do that. It was just the consequences of this that were not made explicit (e.g. invalidation of the assumptions of the tests).

  90. @dikran
    “The number and location of the breakpoints are estimated, rather than set by the analyst.”

    Note that Hamilton corrected me.

  91. The Very Reverend Jebediah Hypotenuse says:

    Tol:

    A linear trend is therefore a reasonable approximation.

    GWPF Report (p18):

    What the analysis also demonstrates is that fitting a linear trend, say, to a pre-selected portion of a temperature record, a familiar ploy in the literature, cannot ever be justified.

    A familiar ploy in the literature has now gone to the blogs.

  92. It also says, about linear trends,

    At best such trends can only be descriptive exercises,

    Well, yes, that’s how they’re typically used.

  93. dikranmarsupial says:

    Richard wrote “The number and location of the breakpoints are estimated, rather than set by the analyst.” I obviously misread what Richard wrote, as it is the other way round. Apparently Richard didn’t read that bit of the report (or misread it).

  94. dikranmarsupial says:

    ATTP indeed, if climatologists actually thought that physics predicted a linear trend, one wonders why they bother with those GCM things! ;o)

  95. Indeed, we love simple models 🙂

  96. The Very Reverend Jebediah Hypotenuse says:

    >>> one wonders why they bother with those GCM things!

    Another familiar ploy in the literature that cannot ever be justified.

  97. anoilman says:

    …and Then There’s Physics says:
    February 24, 2016 at 4:05 pm

    Indeed, we love simple models 🙂

    Indeed, I seem to recall a simple time series model you discussed recently where climate sensitivity was calculated to be 6. Who did that I wonder? Someone who thinks he’s famous I think.

  98. Jamie says:

    Ben Webster doesn’t tweet a great deal, but he does seem to tweet most of his own articles; he has been curiously silent about this one, though. Could it be he was ordered to write this knowing full well it was a load of bollocks?

    https://twitter.com/bwebster135/with_replies

  99. MarkR says:

    Based on the Mills GWPF approach there has been a change in the background trend from 2014 and we are now in a new regime. HadCRUT4 gives a background trend of +2.7 C/decade, how long until the GWPF update their forecast with a +2.7 C/decade trend?

  100. MarkR says:

    If I understand it right then once you’ve got your ARIMA parameters there’s basically no information going into the forecasts except for the previous 3 years of data and some arbitrarily chosen trends on breakpoints. So the same prediction should work starting from any sufficiently long period. So back around 1970 this method would have forecast flat temperatures. Same if you’d started around 1900.

    Can anyone show that I’m wrong?

  101. Mark,
    I’m not totally sure, but I think that’s right. He should be able to test his model by considering some earlier period and comparing what his model forecasts with what actually happened.

  102. MarkR says:

    I’ve dropped Professor Mills an email to ask if he’s tested his model with out of sample data, whether he’s willing to try it now with series where we know the answer (e.g. CMIP5), and how he chose his break points.

    We’ll see, maybe I’ve misunderstood.

  103. Would be interested to know what he says.

  104. MartinM says:

    You could also fit an ARIMA model to, say, a historical CMIP5 run, then compare the resulting forecast to the corresponding RCP4.5 run. Surprisingly, it doesn’t do very well!

  105. MartinM says:

    Well, there was supposed to be an image in there, but apparently I screwed up the tags. Have a direct link instead:

    http://img.photobucket.com/albums/v124/MartinM/arima.png~original

  106. MarkB says:

    MarkR,
    If I understand correctly, the (HADCRUT) ARIMA model is (currently) trendless because it is based on data spanning 2002-2014 only. Mills implicitly acknowledges a “regime change” prior to that so one wouldn’t expect it to be valid over the previous intervals.

    Perhaps the most generous interpretation of this work is that it has no predictive power for the next regime change so it could go off the rails at any time. The less generous would observe that it was off the rails before it was published so predictive power of the model is not the primary objective of this work.

  107. MartinM says:

    If I understand correctly, the (HADCRUT) ARIMA model is (currently) trendless because it is based on data spanning 2002-2014 only.

    Mills actually gives two models for HadCRUT: a trendless ARIMA model, and a segmented trend model with AR(4) noise. The former model will indeed forecast constant temperatures, no matter when you choose to end the analysis, albeit constant with fairly huge error bars.
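
    The flat forecast is built into that model. With no drift term the differenced series is a pure MA(3),

        \Delta x_t = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2} + \theta_3 \varepsilon_{t-3},

    so every future innovation enters with zero expectation and \hat{x}_{T+h} = \hat{x}_{T+h-1} for all h > 3. The point forecast therefore levels off after three steps, while the forecast variance keeps accumulating with the horizon, which is exactly the “constant with fairly huge error bars” behaviour.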

  108. @markr
    Just read Box & Jenkins (1970) to discover how wrong you are.

  109. Richard,
    A bit too much to expect you to add a few more words explaining why? No, sorry, that would be silly.

  110. dikranmarsupial says:

    MarkR wrote “Can anyone show that I’m wrong?”

    Richard Tol wrote “Just read Box & Jenkins (1970) to discover how wrong you are.”

    That would be a “no” then.

  111. Richard,
    Apart from some possible technicality, I fail to see how MarkR is wrong. An ARIMA process still forecasts based on some number of past data points in the time series. Therefore, as long as you have enough past datapoints, you should be able to test the model to see how it does for a period where we know what actually happened.

  112. dikranmarsupial says:

    Richard wrote “Wikipedia?”

    still a “no” then. Seriously Richard, if you really want to demonstrate your statistical prowess, give a detailed demonstration of where MarkR is wrong, your current approach is not creating a good impression.

    The segmented regression approach is unlikely to have useful predictive skill simply because the segments of the regression do not necessarily correspond to physically meaningful “regimes” where there is e.g. some particular change in the forcings. If the data stopped in 1970, the resulting model would not have predicted the rise in temperatures that followed immediately after because it is accounted for in the model of the whole dataset by a new “regime”, not the ARMA/ARIMA component of the model.

  113. MarkR says:

    @ Richard Tol,

    I’ve checked through the equations on wiki and cross-referenced against Mills and I still can’t see that I’m wrong; it still seems that the only information going into Mills’ forecast is (1) the parameters estimated from the full series and (2) the previous few years of data. Perhaps you could help me understand by explaining what the forecast would be:

    1) If we run the models from the end of the first “regime” in 1919, what are the forecasts?

    2) Or how about making an RSS forecast using the Jan 1998-Oct 1999 regime?

  114. I’ve checked through the equations on wiki and cross referenced against Mills and I still can’t see that I’m wrong

    There’ll probably be something. We’ll spend a long time trying to get Richard to explain and when we work it out, it’ll turn out to be irrelevant, or something silly, like the wrong terminology. It won’t, typically, be worth the effort.

  115. jimt says:

    If anyone is interested in a more objective ‘segmented’ regression approach, or checking how well Mills’ cherry-picked regimes stack up, I’ve been working on an “R-Shiny” app that does a form of Bayesian change-point analysis…

    tanytarsus.shinyapps.io/changepoint/

    It’s a work in progress, and can be slow if you have a long time series and try for 2 change-points (if you upload monthly data, use decimal years as time, not month!…and include an AR term).

    I would welcome constructive feedback (via the email given under help).

    PS Obviously, this is not a forecasting tool! Just a way of objectively testing for past changes in trend.
    PPS It takes a few seconds to load up (loading R packages etc)

  116. jim,
    That’s pretty impressive. What would it take to do the kind of projections that Mills’ report does?

  117. @markr
    AR is a linear difference equation, but MA is the multiplicative inverse of a linear difference equation. In other words, the MA part uses the complete history in forecasts.

  118. Richard,
    Unless I’m mistaken, the MA part relates to the error terms, so how does that make MarkR’s point wrong?

  119. jimt says:

    thanks ATTP,
    The simplest projection would be the final model averaged trend estimate and its uncertainty applied to future years, tacked on to the end of the fitted line.
    I could add a ‘project’ feature that would do the calcs properly (model average the predictions from all possible models)…but apart from providing a counter-projection to Mills (using his own ‘ignore physics’ approach), is there any value in that?

  120. Pingback: Terence Mills does not believe his “forecast” and other hits – Stoat

  121. Also (and, of course, this may be intentional) I think MarkR’s point is that if you’re trying to do a forecast, then the unknown future terms depend only on a few of the essentially known past terms, even if those past terms themselves depend on even earlier terms.

  122. but apart from providing a counter-projection to Mills (using his own ‘ignore physics’ approach), is there any value in that?

    Probably not, given that it sounds like it’s just an obvious extension. I was just thinking it might be nice to see it illustrated.

  123. L Hamilton says:

    Curious to try this out, I fit a time series regression model segmented as Mills describes, with ARIMA(4,0,0) errors. My results resemble but don’t match Mills’, not sure why. Regardless, experimenting with this approach and these data quickly reveals how much forecasts are controlled by the operator’s choice of break points. Leaving out 2015 data also helps to reduce the slope of the final segment and forecasts.

  124. MarkR says:

    @ Richard Tol

    What’s the forecast for those models at each of the other breakpoints? Say 1925 or 1975 in HadCRUT4 or 1999 in RSS? This would really help me visualise what’s going on.

  125. jimt says:

    FWIW, here is the HADCRUT projection based on a Bayesian changepoint model with up to 5 trend changes allowed (it finds only 3 up to 2015)…

    The CI’s for the projected trend (and predicted values, in blue) reflect the fact that the prior says more change-points are possible, and the data says they do occur occasionally, so for the future it averages over random, low prob. change-points of random magnitude and direction.

  126. MarkR says:

    @ jimt

    So that means that a statistical model that allows change points that are automatically selected based on some criterion (criteria?) would currently project continued warming, which is different from the GWPF-type approach in which the author chooses their own preferred change points?

  127. jimt says:

    exactly MarkR.
    GWPF choose changepoints to ensure non-positive trends.
    When the change points are chosen objectively, in this case based on likelihood functions (see “About” under help section here…https://tanytarsus.shinyapps.io/changepoint/), there is no evidence of a changepoint at any time since 1970….
    so if you must use recent past trends to project future trends, you’d be projecting continued warming…(which in this case happens to agree with physical reality)

  128. If I understand correctly,
    – we don’t need to study the forcings to understand the temperature trend
    – we get better trend lines if we create arbitrary “regimes” of variable length
    – “regimes” are best chosen through pareidolic examination of a graph
    – nothing is too wrong for Tol not to try and garner attention through defending it

  129. Arthur Smith says:

    Maybe a little late on this, but way up there Richard Tol said “CO2 is the most important greenhouse gas. It’s concentration has risen exponentially. Radiative forcing is the natural logarithm of CO2 concentration. A linear trend is therefore a reasonable approximation.”

    This is commonly asserted, but false. The (roughly) exponential growth is in the fossil C we have added to the atmosphere. That is on TOP of a pre-industrial natural level of CO2. The formula would be something like log(A + B exp(c t)) – and this is NOT a straight line. In early stages (while B exp(ct) is much less than A) it is itself close to (constant plus) exponential. A linear trend for response to our CO2 forcing is therefore not a reasonable approximation over any long-enough period of time.
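
    In symbols, taking the standard simplified expression for CO2 forcing, F ≈ 5.35 ln(C/C0) W/m², and writing the concentration as a pre-industrial level A plus an exponentially growing addition B exp(ct):

        F(t) \approx 5.35 \,\ln\!\frac{A + B e^{ct}}{C_0}, \qquad \ln\!\left(A + B e^{ct}\right) \approx \ln A + \frac{B}{A} e^{ct} \quad \text{while } B e^{ct} \ll A,

    so in the early stages the forcing grows roughly exponentially (with a small coefficient), and only approaches linear-in-time growth, ln B + ct, once the anthropogenic term dominates.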

  130. KR says:

    This cr– I’m sorry, this nonsense again??? Tamino has a lovely analysis of M. Beenstock’s claim of a non-stationary climate, and it’s really quite laughable. All credit to him for the following:

    Not a Random Walk

    Still Not

    Making an (erroneous) determination of non-stationary behavior requires abusing the Augmented Dickey-Fuller test, which can only give a weak indication of unit roots and hence non-stationary behavior for variations around a linear trend. It completely fails if the forcings (as in the case of climate) are nonlinear, for example with all forcing included, not just CO2. The proper test, the Covariant ADF, clearly demonstrates that temperatures are trend-stationary with forcings. Or you could check with a number of other unit root tests, such as the Phillips-Perron test, or check explanatory power with the AIC or BIC tests – all of which reject non-stationary behavior.

    In short, climate temps are trend-stationary with forcings, not a non-physical random walk that ignores the Conservation of Energy, and if you encounter a claim of random walks you can stop reading the paper right there and save yourself a waste of time. It’s nonsense through and through.

    It’s a shame Mills has wasted so many years and papers on such twaddle.
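
    For anyone who wants to see where such numbers come from, a minimal sketch using the standard tests in statsmodels; this is the plain trend-inclusive ADF and KPSS, not the covariate-augmented test described above, and hadcrut4_annual.csv is a hypothetical year/anomaly file.

        # Standard unit-root / stationarity checks with a deterministic trend term.
        import pandas as pd
        from statsmodels.tsa.stattools import adfuller, kpss

        temps = pd.read_csv("hadcrut4_annual.csv")["anomaly"]

        adf_stat, adf_p, *_ = adfuller(temps, regression="ct")   # H0: unit root
        kpss_stat, kpss_p, *_ = kpss(temps, regression="ct")     # H0: trend-stationary

        print(f"ADF  p = {adf_p:.3f}   (small => reject a unit root)")
        print(f"KPSS p = {kpss_p:.3f}   (small => reject trend-stationarity)")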

  131. KR says:

    It’s also interesting to note that (by my quick count) fifteen of the thirty-one references in Mills’ non-peer-reviewed work (and no, I don’t count peers of the Realm as scientific peers by default) are to Mills’ own work. That level of self-referencing is never a good sign.

  132. Marco says:

    “GWPF choose changepoints to ensure non-positive trends.”
    @jimt, make no mistake here: while the GWPF can be accused of many things, only one person is responsible for doing what he did, and that is Terence Mills. Even if the GWPF asked him to do what he did, ultimately he is still the one to blame.

    And with Tol apparently also seeing the flaws in Mills’ work, the question once again arises which “peers” reviewed a GWPF paper.

  133. @wotts, markr
    You both have PhDs. You should be able to work out, as third year undergrads in economics can, that MA(1) = AR(inf). If you can’t, read Box & Jenkins.

  134. Richard,
    I’ve just had a look at Box and Jenkins. Are you referring to equation 2.2 which seems to be suggesting that AR(1) = MA(inf)? Also, I’m not sure how this is relevant as it appears to require an infinite number of points, which clearly we do not have.

  135. dikranmarsupial says:

    jimt, nice! The lack of a breakpoint around 2000 is an interesting finding.

  136. dikranmarsupial says:

    The second of Steven Mosher’s links is well worth reading. Structural break models are quite useful for “descriptive statistics”, but as I mentioned earlier there is no good reason to think the breakpoints actually mean something (unless you can put some physics behind it) and it does complicate the statistical testing procedure, especially if the breakpoints are chosen by hand.

  137. Jim Hunt says:

    So to summarise for the benefit of those who follow bull channels on graphs of stock prices but not global temperatures:

    1) Prof. Mills’ magnum opus has no basis in physical reality
    2) It is “curve fit” using all the available data, and thus the “backtest” employs no “walk forward analysis”
    3) Consequently if you were to “bet the farm” on it you’d in all probability be sleeping in shop doorways shortly thereafter.
    4) Using Tamino’s methodology instead, you’d get rich slowly:

  138. MarkR says:

    @ Richard Tol

    I checked before posting which is why I was confused. It seems that projections done in 1925 and 1975 are what I would describe as pretty shitty, consistently underestimating later warming.

    But perhaps I did it wrong. With your economics insight perhaps you could correct me: what are the predictions if made in 1925 or 1975?

  139. @wotts
    I guess you’re looking at the original edition. In the revised edition of 1976, this is discussed in section 3.3.5, pp 72-73.

    Lacking a complete history, you would need to make an approximation for the deviation between actual and equilibrium temperature before your first observation. People typically set this to zero, which is an accurate approximation (for a record of 160 years) unless one of your roots is on or beyond the unit circle.

  140. Richard,
    What you appear to be highlighting is that your time series depends on all previous data values, which is fairly obvious given that the value at n depends on the value at n – 1, but the value at n-1 also depends on the value at n-2, etc. However, that doesn’t change that if you want to use your model to make a forecast, all you need to know are some finite number of past data values. That those past data values may technically depend on even earlier data values doesn’t make this not true. Hence you should be able to make a forecast from any time, as long as you have sufficient past data values. Therefore, you should be able to test the model by considering some earlier time period and comparing the forecast with what was known to have happened.

  141. Jim Hunt says:

    Therefore, you should be able to test the model by considering some earlier time period and comparing the forecast with what was known to have happened.

    Precisely. “Walk forward analysis” as those who wander in technical trading circles call it.

  142. dikranmarsupial says:

    Richard Tol wrote “@markr Just read Box & Jenkins (1970) to discover how wrong you are.”

    Richard Tol wrote “I guess you’re looking at the original edition. In the revised edition of 1976, this is discussed in section 3.3.5, pp 72-73.”

    Mildly amused 🙂

  143. dikranmarsupial says:

    “Therefore, you should be able to test the model by considering some earlier time period and comparing the forecast with what was known to have happened.”

    Indeed, if this were 2020, it would be very silly to say that we couldn’t have built the model in 2015 to see what it would have forecast as that is just what Mills actually did.

  144. lerpo says:

    Mills projects out to 2020. The Times drew the graph out to 2100 and claimed “The global average temperature is likely to remain unchanged by the end of the century,” but they could have drawn it out much further and written “New study shows that global average temperature can never change”.

  145. jamesannan says:

    To be fair (not sure why I should be really) the uncertainty bounds grow such that it’s not the case that the temperature will not change, but rather that it will go up and down randomly with no expected persistent trend.

  146. Arthur Smith said on February 25, 2016 at 2:22 am,

    “Maybe a little late on this, but way up there Richard Tol said “CO2 is the most important greenhouse gas. It’s concentration has risen exponentially. Radiative forcing is the natural logarithm of CO2 concentration. A linear trend is therefore a reasonable approximation.”

    This is commonly asserted, but false. The (roughly) exponential growth is in the fossil C we have added to the atmosphere. That is on TOP of a pre-industrial natural level of CO2. The formula would be something like log(A + B exp(c t)) – and this is NOT a straight line. In early stages (while B exp(ct) is much less than A) it is itself close to (constant plus) exponential. A linear trend for response to our CO2 forcing is therefore not a reasonable approximation over any long-enough period of time.”

    Thank you so much for this. Every so often for some time now I have tried to get folks to see that the response does not need to be a linear one. This is especially so when thinking about it from a purely mathematical viewpoint.

    I’ve noticed for some time now that one of the strains of thought in mainstream climate science denial seems to contradict some purely mathematical ideas in basic analysis. This strain of thought seems to involve two mathematically false ideas: If we “combine” an exponential function with a logarithmic function, then we get an approximation of a straight line, and that if given a logarithmic function, we actually need to “combine” it with an exponential function to obtain something like a straight line.

    They seem to not get that it’s very easy to obtain increasing convex functions (which give upward accelerated graphs, graphs that grow faster and even much faster than a straight line) from logarithmic functions without having to “combine” exponential functions with these logarithmic functions at all. They seem to not get that we can “combine” logarithmic functions with merely polynomial functions (including polynomial functions that are merely linear) to “counteract” logarithmic functions to obtain these increasing convex functions – it’s one of the basic facts of analysis that we don’t need to “combine” exponential functions with logarithmic functions to “counteract” logarithmic functions to obtain these increasing convex functions.

    That is, if it’s very easy to obtain a graph that grows even much faster than a straight line from “combining” a logarithmic function with a polynomial function (even just a linear one), then, with all the more force, it’s very easy to obtain a graph that grows even much faster than a straight line from “combining” a logarithmic function with an exponential function.

    What all this means is this: That which seems to be some assumptions behind what Tol said (and what so many deniers of mainstream climate science say) are quite mathematically false. Especially from just a purely mathematical viewpoint, it is most certainly not reasonable to expect a linear function (or a linear trend) to result from “combining” exponential and logarithmic functions.

    See such as this

    Click to access growth.pdf

    for some of the analysis in question.

    (It’s easy to forget these basic ideas in analysis on growth of functions. Back in the late 1990s, I was working on a problem whose solution required the creation of a function that required no more than polynomial growth. I asked for some help from a gifted mathematician more knowledgeable than I in analysis. He spent most of a day creating some very complicated formulas covering the entire backboard at the front for the room. But later, as soon as I walked into the room and scanned all of that, I immediately realized that it could not possibly work due to some exponential functions present in some of the numerators, giving long term exponential rather than merely polynomial growth. Needless to say, he was pissed. All that work was for naught because some basic analysis did not occur to him when he saw and used the formulas I gave to him to work from.)

  147. MartinM says:

    What you appear to be highlighting is that your time series depends on all previous data values, which is fairly obvious given that the value at n depends on the value at n – 1, but the value at n-1 also depends on the value at n-2, etc. However, that doesn’t change that if you want to use your model to make a forecast, all you need to know are some finite number of past data values. That those past data values may technically depend on even earlier data values doesn’t make this not true.

    No, that’s not what he’s getting at. An AR(p) series depends on the last p datapoints. An MA(q) series depends on the last q error terms. But we don’t have the error terms, only the datapoints. We have to estimate the error terms, and those estimates will depend on all the data.

    Therefore, you should be able to test the model by considering some earlier time period and comparing the forecast with what was known to have happened.

    This, on the other hand, is entirely correct. Tol’s digression into the properties of MA series is particularly pointless, given that the segmented trend model which MarkR was discussing has no MA component. It’s technical, pedantic trolling completely devoid of any relevance to the substantive points under discussion. Sounds suspiciously similar to your earlier prediction, in fact. It’s almost as if Tol has a habit of doing this, or something.

    I do hope he’s not teaching any 3rd year undergrads that MA(1) = AR(inf), though, since it’s not true in general; it applies only in the case of invertible processes.
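
    For reference, the inversion behind both of those points is standard textbook material (a sketch, nothing to do with the report itself). An MA(1) process

    $$ x_t = \varepsilon_t + \theta\,\varepsilon_{t-1} $$

    can be rearranged as $\varepsilon_t = x_t - \theta\,\varepsilon_{t-1}$ and substituted into itself repeatedly, giving

    $$ \varepsilon_t = \sum_{j=0}^{\infty} (-\theta)^j\, x_{t-j}, $$

    which only converges when $|\theta| < 1$, the invertibility condition. So MA(1) = AR($\infty$) holds only for invertible processes, and the estimated error terms really do depend on the entire history of the data.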

  148. lerpo says:

    The Times gets around that by dropping the uncertainty bounds. Then again, the fitted segmented-trend model doesn’t seem to have expanding uncertainty, so maybe they are not misrepresenting Mills.

  149. Martin,

    But we don’t have the error terms, only the datapoints. We have to estimate the error terms, and those estimates will depend on all the data.

    Yes, okay, that makes sense. Thanks.

    It’s almost as if Tol has a habit of doing this, or something.

    Well, yes, and he’s pretty good at it too.

  150. MartinM says:

    To be fair (not sure why I should be, really), the uncertainty bounds grow, so the forecast isn’t that the temperature will not change, but rather that it will wander up and down randomly with no expected persistent trend.

    In fact, the ARIMA(0,1,3) forecast from HadCRUT is consistent with the lower end of RCP8.5.
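
    For anyone who wants to poke at this themselves, here is a minimal sketch of the sort of calculation MartinM describes, using statsmodels; the `temps` array below is a synthetic stand-in (trend plus noise), so substitute the real HadCRUT annual anomalies to reproduce the actual numbers:

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Synthetic stand-in for 1850-2015 annual anomalies (replace with the real HadCRUT series).
    rng = np.random.default_rng(0)
    n_years = 166
    temps = 0.005 * np.arange(n_years) + 0.1 * rng.standard_normal(n_years)

    # ARIMA(0,1,3): difference once, MA(3) errors, no deterministic trend term.
    fit = ARIMA(temps, order=(0, 1, 3)).fit()

    fc = fit.get_forecast(steps=85)       # roughly out to 2100
    print(fc.predicted_mean[:5])          # point forecast settles to a constant within a few steps
    print(fc.conf_int(alpha=0.05)[-1])    # 95% interval at the far horizon: very wide
    ```

    The point forecast flattens out almost immediately, while the 95% band keeps widening with the horizon, which is the behaviour being discussed above.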

  151. KR says:

    Keep in mind that a physical process constrained by the Conservation of Energy cannot vary its temperature in a random walk – it will always be trend stationary WRT the forcings.

    Mills’ post on climate is therefore twaddle, and all subsequent discussion on that topic boils down to asking how many angels can dance on the head of a pin.

  152. @Martin
    We indeed tell our students about the virtues of invertibility.

  153. @james
    “To be fair (not sure why I should be really)”
    because your mother taught you

  154. The Very Reverend Jebediah Hypotenuse says:

    There is a new charitable foundation that will have a mandate to inform policy-makers, media outlets, and the general public about walrus populations.

    It’s called the Global Walrus Population Forum.

    Our most current publication presents the results of a thorough and quantitative study of recorded walrus sightings.

    Based on the assumption that walruses are not hunted by humans, we have concluded that the global walrus population will remain stable in the future.

    Models in which ‘hunting’ variables have been included in this framework have been considered, with some success, when used to explain observed behaviour of walrus populations. Their use in forecasting, where forecasts of the hunting variables are also required, has been much less investigated, however: indeed, the difficulty in identifying stable relationships between hunting and other walrus population variables suggests that analogous problems to those found in elephant and whale population studies may well present themselves here as well.

    Although we suggest that the global walrus population will remain stable in the future, we object to the use of linear approximations because they are a common ploy in the walrus literature. In fact, since our Professor Chris Sussex denies the existence of global averages, we find the very idea of a global walrus population to be deeply problematic.

    It is difficult not to wonder whether a parallel with modern climatology will arise. Like the walrus population, the climate is a deeply complex system that defies simple representation. Giant computer modelling systems have been developed to try and simulate its dynamics, but their reliability as forecasting tools is proving to be very weak.

    Foreword by Professor Kitrick McMoss.

    Keywords:
    Walrus, Semolina Pilchard, Eiffel tower, Elementary Penguin, Hare Krishna, Edgar Allan Poe

  155. John Hartz says:

    KR: Well said, Bravo!

  156. guthrie says:

    The question is: how much can we save the world from Richard Tol by keeping him busy in places like this?

  157. MarkR says:

    Tol said “Hamilton apart, no one on this thread have offered any valid argument as to why Mills is wrong” and seemed unconcerned by my questions about whether the prediction works when you test it against known results. I’m not arrogant enough to assume I know more about time series analysis, so I didn’t want to leap in too hard to begin with. After a response from Professor Mills, and checking MartinM’s ARIMA figure against what I worked out, I’m now more confident that the prediction is bullshit.

    In 1925 Mills would have predicted no warming, but then it warmed. In 1975 Mills would have predicted no warming, but then it warmed. The only way it appears to match past observations is if you wait until the real world happens and then manually add in global warming by “eyeballing” trends. And in the most basic sense the ARIMA model’s uncertainties balloon so much that it’s hard to go outside its bounds, but that’s true of any prediction with large uncertainties, bullshit or otherwise.

    I’m still hoping I’ve got the wrong end of the stick here, maybe Richard Tol could point out that actually the predictions do work when we know the answer, say from 1925 or 1975. But if I’m right then it’s embarrassing that any competent reviewer would let this through if the purpose of their review is to ensure typical scientific standards.
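
    The backtest MarkR is asking for can be sketched in the same way (again with a placeholder series standing in for HadCRUT, and 1975 as the illustrative cutoff; with synthetic data this only demonstrates the mechanics, not the result):

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Placeholder annual anomalies for 1850-2015 (substitute the real HadCRUT values).
    rng = np.random.default_rng(1)
    years = np.arange(1850, 2016)
    temps = 0.004 * (years - 1850) + 0.1 * rng.standard_normal(years.size)

    past = years <= 1975                    # pretend it is 1975 and nothing later is known
    fit = ARIMA(temps[past], order=(0, 1, 3)).fit()

    steps = int((~past).sum())              # forecast 1976-2015
    fc = fit.get_forecast(steps=steps)
    lo, hi = fc.conf_int(alpha=0.05).T      # 95% bounds over the holdout period

    outside = np.mean((temps[~past] < lo) | (temps[~past] > hi))
    print(f"fraction of 1976-2015 values outside the 95% band: {outside:.2f}")
    print(f"mean forecast: {fc.predicted_mean.mean():.3f}  vs observed mean: {temps[~past].mean():.3f}")
    ```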

  158. @MarkR
    As I said, read Box and Jenkins. That would tell you that the best forecast from an ARIMA without trend indeed (rapidly) converges to a constant value.

    That would tell you that the best forecast from an ARIMA without trend indeed (rapidly) converges to a constant value.

    Which probably tells us that it’s completely unsuitable for determining forecasts for a system that responds to external stimuli.
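
    For readers without Box and Jenkins to hand, the result being invoked is standard, and a sketch (textbook material, not specific to Mills’ fit) goes like this. An ARIMA(0,1,q) model says

    $$ \Delta X_t = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \dots + \theta_q \varepsilon_{t-q}. $$

    Conditional on data up to time $t$, every error term appearing in $\Delta X_{t+h}$ is dated after $t$ once $h > q$, so its expected value is zero and the point forecast $\hat{X}_{t+h}$ stops changing after $q$ steps. The forecast error, by contrast, is a sum of $h$ future increments, so its variance keeps growing with the horizon; that is why the point forecast is flat while the uncertainty bands balloon.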

  160. izen says:

    Dismissing the Terence Mills contribution to climate science as (allowable) academic stupidity ignores the fact that since 2003 (in between econometric analysis that notably failed to predict the financial crash), Mills has been publishing, at least once a year, some sort of paper on the uncertainty of climate prediction predicated on unit-roots, level-shifts or other mathtubation.

  161. MarkR says:

    So Richard Tol, I was correct that either with ARIMA or Mills’ arbitrary selection of trends, he would have failed to predict the warming that happened after 1925 or 1975.

    Since it contradicts physics, and the real world works on physics, I think “useless” might be a charitable description. Do you agree?

  162. dikranmarsupial says:

    Richard wrote “As I said, read Box and Jenkins. That would tell you that the best forecast from an ARIMA without trend indeed (rapidly) converges to a constant value.”

    The question is not what the best forecast from an ARIMA model without a trend is, but whether Prof. Mills’ method gives sensible results; to quote MarkR: “So back around 1970 this method would have forecast flat temperatures. Same if you’d started around 1900. Can anyone show that I’m wrong?”. AFAICS he isn’t wrong, it is just that Richard has substituted a different question from the one that was actually asked.

  163. AFAICS he isn’t wrong, it is just that Richard has substituted a different question from the one that was actually asked.

    One of Richard’s skills. As long as Richard can construct a question using the words you used, then he seems to feel entitled to answer the question he’s constructed, rather than the one you actually asked.

  164. dikranmarsupial says:

    He is so mild and modest in the way he goes about it though that nobody minds at all! ;o)

  165. I’d like to say that it must be what his mother taught him, but that just seems wrong 😉

  166. izen says:

    Ask not for whom the Tol bells…

  167. jamesannan says:

    I’m sure that Richard is delighted that this thread is almost entirely about him. Well done Richard!

  168. John Hartz says:

    Meanwhile, back in the real world…

    The world was a vastly different place 250 years ago. There weren’t 50 states, Taylor Swift feuds or viral videos anywhere in sight.

    Another thing that was also less plentiful: carbon dioxide in the atmosphere. Since then, CO2 has risen and with it, a host of other impacts have befallen our planet. That includes the rapid acidification of our seas at a rate unseen in at least 300 million years.

    Scientists Turned Back the Clock on Climate Change by Brian Kahn, Climate Central, Feb 24, 2016

    and

    If the world hopes to avoid the most catastrophic effects of climate change, humanity must emit less than half the carbon dioxide than previously thought in the coming years, a new study shows.

    In order to keep global warming to no more than 2°C (3.6°F) — the basis for the Paris climate agreement struck last year — scientists have devised a “carbon budget” for how much carbon can be emitted before warming crosses into catastrophic territory.

    Study Calls For Leaner ‘Carbon Budget’ to Slow Warming by Bobby Magill, Climate Central, Feb 24, 2016

  169. jsam says:

    On the energy front, the most crucial part of the letter centers on an equation cooked up by Gates: P x S x E x C = carbon dioxide. He shows that changes to P (the world’s population), S (services used by each person) and E (energy) will not be dramatic enough to get carbon dioxide production down to zero—something that has to happen, according to Gates, to avoid catastrophic consequences from global warming. The factor that matters most is C (carbon dioxide produced by energy).

    Bill Gates Q&A on Climate Change: ‘We Need a Miracle’
    http://www.bloomberg.com/news/articles/2016-02-23/bill-gates-q-a-on-climate-change-we-need-a-miracle
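
    For what it’s worth, the Gates equation is essentially the Kaya identity. Reading S as services per person, E as energy per service and C as carbon emitted per unit of energy (my reading of the Bloomberg summary, not Gates’s own notation), the product telescopes:

    $$ \mathrm{CO_2} = P \times \frac{S}{P} \times \frac{E}{S} \times \frac{\mathrm{CO_2}}{E}, $$

    so driving the left-hand side to zero requires at least one factor on the right to go to zero, and the quoted passage argues that the carbon intensity of energy is the only factor for which that is plausible.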

  170. John Hartz says:

    Speaking of Bill Gates…

    After Bill Gates explained his strategy for boosting energy access while limiting climate change in a videotaped interview we published on Tuesday, readers were invited to submit questions for the Microsoft co-founder, philanthropist and investor.

    Below are his answers to a few of the hundreds of questions he received on The Times and on Facebook, covering everything from artificial meat to Americans’ gas guzzling driving preferences (with some light editing of his dictated responses):

    Bill Gates Explains How to Make Climate Progress in a World Eating Meat and Guzzling Gas by Andrew C Revkin, Dot Earth, New York Times, Feb 25, 2016

  171. Michael Hauber says:

    Why does everyone keep feeding the tol?

  172. izen says:

    The Gates ‘Breakthrough Energy coalition’ sounds like the French response to the advent of the steam engine.

    The French got together a group of experts to research the optimal design and deployment of this new technology. This was when the Newcomen atmospheric engine was pretty much state of the art.
    Inevitably there was a disincentive for any private investment in existing technology while the authoritative expert group was still deciding on the best type.

    In the UK, without such official oversight there was investment in existing machines, with minor tweaks and fundamental improvements arising out of the practicalities and competition of the deployment.

    It’s a fable of evolutionary development versus intelligent design. French utilisation of steam power was delayed by several decades.

  173. @MarkR
    ARIMA has been classified as an agnostic model. Agnostic models are useful when you know little. They are less useful in this case, as we know a lot.

    Ditto for breakpoints in trends. Wonderful tools to describe the past, but without a model to predict the next trend break, fairly hopeless for forecasting the future (or indeed understanding the past).

  174. Andrew dodds says:

    Izen –

    Problem is that in this case, there is no incentive without government.

    With no regulation/tax/subsidy of some sort, the cheapest way to supply electricity is by burning coal with an absolute minimum of waste treatment. Possibly natural gas at the moment. And given that we already know the ‘menu’ of power sources, unless physics is wrong, someone really does have to make an expert decision. Indeed, pretending that a magical new energy source will solve the problem is dangerous in itself because it justifies inaction.

    Allowing markets to refine any solution – fine. They are an optimising function. But there is no natural economic incentive to avoid CO2 emissions.

    But isn’t Izen’s point that you really just need to get on and try things? A committee deciding, in advance, on the best solution is likely not to realise the many problems that a more proactive approach would encounter and solve along the way.

  176. @izen
    But where the French experts were fleecing the taxpayers of France, Gates and co intend to fleece the taxpayers of the world.

  177. Joshua says:

    Putting together a coalition of experts wouldn’t preclude new developments from being made outside that coalition.

  178. Lars Karlsson says:

    Gates in the Bloomberg interview:
    “I do think with some tuning, the Breakthrough Energy Coalition group that we’re putting together will have some characteristics of a venture fund to invest in these breakthrough ideas.”

    Doesn’t sound like the French steam engine commission.

  179. Pingback: Le Oche con mandria - Ocasapiens - Blog - Repubblica.it

  180. Roger Jones says:

    “Ditto for breakpoints in trends. Wonderful tools to describe the past, but without a model to predict the next trend break, fairly hopeless for forecasting the future (or indeed understanding the past).”

    Except that’s what scenarios are for and they are far more useful than linear projections into the dark. Forecasting is oversold.

    Also, the physics of these breakpoints is getting better understood – the criticality of the climate system will soon be diagnosable. For instance, we are in a shift at the moment, but we don’t have a great idea physically of what’s going on. We will know in hindsight.

    And this “The challenge is to explain why trend-stationarity — which corresponds to a greenhouse signal — is rejected in favour of non-stationarity — which corresponds to natural variability. None of the commenters above rises to this challenge”

    Has it occurred to anyone that both sides of the argument are wrong? Colleagues and I are trying to publish on this and keep getting knocked back by people who have a stake in one side or the other. The bias in the science community isn’t a lot different to that in the contrarian community; after all, they’re all people.
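
    For anyone curious what is actually being tested in the trend-stationarity versus unit-root argument quoted above, here is a minimal sketch using the standard tests in statsmodels (the `temps` series is again a synthetic placeholder; the real argument is about which test, and which noise model, is appropriate for the actual temperature records):

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import adfuller, kpss

    rng = np.random.default_rng(2)
    temps = 0.005 * np.arange(166) + 0.1 * rng.standard_normal(166)  # placeholder anomalies

    # ADF: null hypothesis = unit root; regression='ct' allows a deterministic trend.
    adf_stat, adf_p, *_ = adfuller(temps, regression='ct')

    # KPSS: null hypothesis = trend-stationary, i.e. the reverse of the ADF null.
    kpss_stat, kpss_p, *_ = kpss(temps, regression='ct', nlags='auto')

    print(f"ADF p-value (H0: unit root)        : {adf_p:.3f}")
    print(f"KPSS p-value (H0: trend-stationary): {kpss_p:.3f}")
    ```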

  181. @roger
    The fight between the temperature-has-unit-root and temperature-is-stationary-around-a-trend camps has been long and bitter. Writing that both are wrong is bound to land you in hot water.

  182. Eli Rabett says:

    How about something interesting? We know Mills’ price. What was Richard’s? Is there a statement of outside income that can be FOIed?

  183. @eli
    You know the way. Our FOI team loves you.

  184. guthrie says:

    I think Richard does it for the delusional egoboo. Prove me wrong!!!!

  185. It seems that people are forgetting, or choosing to violate, Tol’s 2nd Law of Blog. Since the discussion of outside money has come up, this question was asked and – I think – answered.

    I say, “I think” because of course the answer implies that Richard wasn’t paid (as is common practice) but doesn’t say that he wasn’t. I will assume he wasn’t until I learn otherwise.

  186. Eli Rabett says:

    Too trusting. All they said is they didn’t pay him for an intro. Not that they didn’t pay him.

  187. True, and technically they didn’t even say that they didn’t pay him for the intro. However, I’ll still give the benefit of the doubt till shown otherwise, too trusting or not 🙂

  188. Pingback: A little surprised it took this long | …and Then There's Physics

  189. jamesannan says:

    Poor Richard, they’ve even given me some money 🙂

  190. Pingback: Think tank throws out centuries of physics, climate scientists laugh, conservative media fawns | Dana Nuccitelli – Enjeux énergies et environnement

  191. Pingback: Lord Lawson thinktank’s report ignores everything we know about climate science – The Guardian | Dude Times

  192. Pingback: Lord Lawson thinktank’s report ignores everything we know about climate science – The Guardian | Online News of The World

  193. Pingback: Lord Lawson thinktank’s report ignores everything we know about climate science – The Guardian | Only Share News of the World

  194. Pingback: Lord Lawson thinktank’s report ignores everything we know about climate science – The Guardian | Today News Update

  195. Pingback: Lord Lawson thinktank’s report ignores everything we know about climate science – The Guardian | News Arab Update

  196. Roger Jones says:

    “@roger
    The fight between the temperature-has-unit-root and temperature-is-stationary-around-a-trend camps has been long and bitter. Writing that both are wrong is bound to land you in hot water.”

    It already has, but I’m not giving it away

  197. Pingback: GWPF throws out centuries of physics, climate scientists laugh, conservative media fawns | Daily Green World

  198. Pingback: An unchallengeable strategy? | …and Then There's Physics

  199. franktoo says:

    Richard Tol wrote (February 24, 2016 at 9:09 am) “Mills explicitly tests a linear trend (greenhouse forcing) against natural variability, and comes out in favour of the latter. His forecast follows immediately from his test, so you should find fault with his test (unless of course you want to argue, pre-Enlightenment, that you reject the method because you don’t like the result).”

    Unfortunately, Mills runs this test against the full length of the temperature record. The rising forcing from GHGs has only become significant since the middle of the 20th century. When considering the full length of the CET record (350+ years) or HadCRUT4 (160+ years), it is not surprising that the linear trend is lost in the noise – one isn’t expecting such a trend in 85% or 60% of the data. It is not surprising that the statistical model he derives describes unforced, not forced, variability. From his 2009 paper mentioned above, however, Mills knows that radiative forcing contributes 2+/-1 K/doubling of warming to the HadCRUT4 record. There is plenty of room for responsible skepticism about CAGW in his value of 2+/-1 K/doubling. It is irresponsible of him not to include this knowledge in his statistical forecasts.

    This leaves the RSS record. Since the troposphere has a much lower heat capacity than the surface, unforced variability is greater. If the surface temperature record had occasional 0.5-1.0 degC warming spikes from major El Nino events and were only 35 years long, one might expect that unforced variability could overwhelm any forced linear trend. Unfortunately, Mills chooses to focus on an analysis with a breakpoint during the 1997-98 El Nino. A breakpoint implies that the factors driving the forced and unforced variability of temperature changed in 1998, which appears to be an absurd suggestion. It is not absurd to suggest that new (anthropogenic) factors became important some time in the first half of the 20th century.
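
    franktoo’s dilution point is easy to illustrate with a toy series in which the forced trend is confined to the last 65 years of a CET-length record (entirely synthetic numbers, and an ordinary least-squares fit that ignores the autocorrelation Mills’ models are built around, so this shows the mechanics of the objection rather than settling it):

    ```python
    import numpy as np
    from scipy.stats import linregress

    # Toy series: CET-length record, forced trend only after 1950, large unforced noise.
    rng = np.random.default_rng(3)
    years = np.arange(1659, 2016)
    forced = np.where(years >= 1950, 0.015 * (years - 1950), 0.0)
    temps = forced + 0.5 * rng.standard_normal(years.size)

    full = linregress(years, temps)                                # trend tested over the whole record
    late = linregress(years[years >= 1950], temps[years >= 1950])  # trend tested where forcing matters

    print(f"full record: slope = {full.slope:.4f} +/- {full.stderr:.4f} K/yr, p = {full.pvalue:.3g}")
    print(f"post-1950  : slope = {late.slope:.4f} +/- {late.stderr:.4f} K/yr, p = {late.pvalue:.3g}")
    ```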

  200. @franktoo
    Beenstock agrees with you: The time series of radiative forcing is more than linear.

  201. Pingback: Il modello econometrico del clima – OggiScienza

  202. Pingback: Matt Ridley doesn’t understand free speech | …and Then There's Physics

  203. Pingback: How do Climate Change Denialists rationalise their position? - Skeptical Science

  204. Pingback: It woz El Nino wot dunnit! | …and Then There's Physics

  205. Pingback: Clutching at straws GWPF style | …and Then There's Physics
