Well, no, not really, but sometimes all you can do is larf. They’ve released a new report called Statistical forecasting: How fast will future warming be?. It is by Terence Mills, a statistics professor who specialises in time series analysis, and has already been picked up by The Times and The Australian. The main motivation was to

set out a framework that encompasses a wide range of models for describing the evolution of an individual time series.

Bottom line: he used basic time series analysis to develop models that he could then use to make forecasts of future temperatures. Was there any physics, I hear you ask? The answer, as I’m sure you’ve guessed, is no.

The basic result is shown in the figure on the right, which shows forecasts based on two different models. The forecast (red line) indicates no future warming, essentially suggesting that climate sensitivity is 0. Well, this is obvious nonsense. Furthermore, Gavin Schmidt has added the more recent observations (thin blue line) which already fall outside the models’ 95% confidence intervals (green and black lines). So, a year in and the models are already diverging from reality.

Here’s the key point: projecting future warming **requires** some kind of estimate for future emissions. Trying to forecast future warming using some model with no physics and based only on past temperatures is obvious nonsense. Even a Professor of Statistics should be able to get this utterly trivial point. Maybe Terence Mills is so clueless that he really can’t grasp what is a pretty straightforward concept. Alternatively, maybe £3000 was enough for him to put his name to a report that he knew was garbage. Whichever it is, I fully expect Richard Tol to come along and defend it.


“The forecast (red line) indicates no future warming, essentially suggesting that climate sensitivity is 0.” this ought to be a wake-up call for the GWPF, apparently some on their academic advisory panel have called for a carbon tax! ;o)

Apparently they have, but – of course – if there’s no future warming then the carbon tax is zero. Convenient.

“Trying to forecast future warming using some model with no physics and based only on past temperatures is obvious nonsense.”

Statisticians working in finance also have no physics for their forecasts. Using Prof. Tol’s argument, they are paid more and are thus more qualified than academic statisticians who would argue for the use of physics in the selection of your statistical models.

Yes, I almost pointed that out, but Terence Mills does not work in finance, so maybe – according to Richard’s criterion – he’s just not very good.

I wonder which members of the ‘academic’ advisory council reviewed this work? Would be interesting to see if any of them are willing to put their name to it.

[Canned laughter]

Martin,

The GWPF say their review process is “peer review”.

So that’s Nigel Lawson and Matt Ridley 😉

It’s a “paper” so obviously shit you do wonder whether the academic who wrote it was deliberately taking the piss.

On a slightly more serious note, the purpose of this crap is to get some publicity and shed some doubt, not to be academically credible. So it’s already met the objectives of those who commissioned it.

Maybe Terence Mills is an atheist who thinks that makes him rational …

[potholer54 strikes again, and boy, is he on form]

The GWPF report by Mills was so awful that I thought it must be a one-off by a statistician who had never worked with climate data before. But I was shocked to find he’s authored or co-authored at least 20 peer-reviewed papers on climate-related topics over the past decade or so.

Even if he was motivated by remuneration, £3000 wasn’t nearly enough to cover the damages.

That copy and paste of McKitrick’s fulsome praise didn’t come out well. Here’s another try. Mod, feel free to delete the first version.

From Foreword by Professor Ross McKitrick:

In this insightful essay, Terence Mills explains how statistical time-series forecasting methods can be applied to climatic processes. The question has direct bearing on policy issues since it provides an independent check on the climate-model projections that underpin calculations of the long-term social costs of greenhouse gas emissions. In this regard, his conclusion that statistical forecasting methods do not corroborate the upward trends seen in climate model projections is highly important and needs to be taken into consideration.

As one of the leading contributors to the academic literature on this subject, Professor Mills writes with great authority, yet he is able to make the technical material accessible to a wide audience. While the details may seem quite mathematical and abstract, the question addressed in this report is of great practical importance not only for improving the science of climate forecasting, but also for the development of sound long-term climate policy.

Now I have to redo my comment 🙂

It’s only independent in the sense of having no basis in reality.

Mills finds that the three temperature records are non-stationary rather than trend-stationary. The forecasts follow mechanically. Note that 1 in 20 observations is supposed to be outside the 95% confidence interval, so the probability of observing 1 in 12 outside the confidence interval is about 35%.

The challenge is to explain why trend-stationarity — which corresponds to a greenhouse signal — is rejected in favour of non-stationarity — which corresponds to natural variability. None of the commenters above rises to this challenge.
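The 35% figure quoted above is a straightforward binomial calculation and can be checked in a few lines. This is a sketch only: it assumes the 12 observations are independent, which autocorrelated temperature data are not.

```python
from math import comb

# Probability that exactly 1 of 12 independent observations falls
# outside a 95% confidence interval (p = 0.05 per observation).
p = 0.05
p_exactly_one = comb(12, 1) * p * (1 - p) ** 11   # binomial pmf at k = 1
p_at_least_one = 1 - (1 - p) ** 12                # complement of "none outside"
```

The "exactly one" probability is about 0.34, matching the 35% quoted; "at least one" is higher, about 0.46.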

Richard,

Beautiful, you’ll defend any old crap if it comes from one of your GWPF mates.

The answer is literally the name of this blog.

“None of the commenters above rises to this challenge.”

We’ve already got one. It’s very nice.


As luck would have it I gave Benny Peiser a piece of my mind over that very article this very morning. However you’ll have to scroll down past all my many missives about the GWPF’s recent “funny” articles about the Arctic in order to read it:

“The Great Global Warming Policy Forum Con”

Good morning Benny, I note that the GWPF webmaster has still not taken on board any of the helpful advice I have proffered over the last few weeks, and has now posted some inaccurate information about “global warming”. Will he or she never learn?

Dr Tol,

I see no back-testing or retro-diction. Apply this “method” of prediction to HadCrut4 data from 1850 to 1975, for example. See how well it predicts 1976 to 2016 data.

I predict it will not do well.

I predict Mills’ “method” will not rise to this challenge.
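The back-test proposed above can be sketched on synthetic data (all numbers here are made up for illustration; this is not the HadCRUT series itself): a driftless “no further warming” forecast issued at the end of a training period systematically under-predicts a series that actually contains a trend.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an annual temperature record: a steady
# warming trend of 0.007 per step plus observational noise.
n_train, n_test = 126, 40            # e.g. 1850-1975, then 1976-2015
t = np.arange(n_train + n_test)
y = 0.007 * t + rng.normal(0.0, 0.05, t.size)

# A driftless "no further warming" forecast: every future value equals
# the last training observation, as a random-walk model would predict.
flat_forecast = np.full(n_test, y[n_train - 1])

# Back-testing: the flat forecast systematically under-predicts.
bias = float((y[n_train:] - flat_forecast).mean())
```

On trending data the mean forecast error is positive and grows with horizon, which is exactly the failure the back-test would expose.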

Actually, physics gets a brief nod near the end of the piece – only to be dismissed as making the science of climate become as problematic an area of study as… economics or finance.

“It may be thought that including ‘predictor’ variables in the stochastic models will improve both forecasts and forecast uncertainty. Long experience of forecasting non-stationary data in economics and finance tells us that this is by no means a given, even though a detailed theory of such forecasting is available. Models in which ‘forcing’ variables have been included in this framework have been considered, with some success, when used to explain observed behaviour of temperatures. Their use in forecasting, where forecasts of the forcing variables are also required, has been much less investigated, however: indeed, the difficulty in identifying stable relationships between temperatures and other forcing variables suggests that analogous problems to those found in economics and finance may well present themselves here as well.” (p18)

Forcing the membership of the GWPF to see the wonderful irony in that fallacious passage could involve turning heads inside out.

Has Keenan had a name change?

Richard,

I wondered something similar myself 🙂

“The temperature series investigated so far are both ‘global’ and hence contain no seasonal fluctuations.” (page 9)

This is utterly clueless. There are no seasonal fluctuations because the time series are anomalies. This was peer-reviewed, wasn’t it?

“It’s a ‘paper’ so obviously shit you do wonder whether the academic who wrote it was deliberately taking the piss.”

And there’s the challenge right there. The bar isn’t very high, but it is wide.

“On a slightly more serious note, the purpose of this crap is to get some publicity and shed some doubt, not to be academically credible. So it’s already met the objectives of those who commissioned it.”

Credibility is a double-edged sword.

Clueless on many different levels.

Possibly by a Viscount, but other than that, I don’t think so.

> Mills finds that the three temperature records are non-stationary rather than trend-stationary.

“But random walk”:

https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/

Only 2,190 comments to read.

As Richard Betts points out on BH, Mills published a paper on How robust is the long-run relationship between temperature and radiative forcing?. So, it seems that he wasn’t unaware of some of the basic physics.

@willard

Indeed, it is not a new discussion. Mills has been arguing this same point, in a series of peer-reviewed papers, for 15 years now. As you know, I’m with Estrada & Perron.

Tol: “The challenge is to explain why trend-stationarity — which corresponds to a greenhouse signal — is rejected in favour of non-stationarity — which corresponds to natural variability.”

Trend-stationarity is rejected because, without justification, Mills segmented the data such that “the current regime” is virtually trendless. For HADCRUT he uses data since 2002, for RSS since 2000, and for CET he uses everything since 1660 (real temperatures, not anomalies). If one arbitrarily picks segments minimising the recent trend, then it’s not surprising that the statistical machinery doesn’t detect a trend.

Richard,

You’d think after 15 years of doing this, he’d have worked it out by now.

Mills (2009) reports a climate sensitivity of 2 +/- 1 °C/doubling. Low but not absurd. Don’t know why he thinks sensitivity is now zero. Or does he think that emissions might randomly fluctuate to zero in the next decade?

Richard Telford,

Well I have seen Matt Ridley suggest that the range of RCPs used by the IPCC suggests that we might – by chance – follow a low emission pathway. Quite how we can do so without actually reducing our emissions is, however, somewhat beyond me.

Quaint as the customs of Lawsonland may be, America’s Heartland is far funnier.

Thanks for the pointer, I’ve emailed him suggesting a bet on the basis of his forecast 🙂

James,

Let me know what he says 🙂 . I thought of emailing him too, but decided against it.

James, whatever you offered, may I join that bet?

#FeeTheTol£3000

Easy enough to measure the trend from 1975 to 2015, extrapolate that into the future and call that a forecast. Not even any physics required – although I’d appeal to principles of physics to justify this as reasonable – and I bet it will be a pretty reasonable forecast for at least 20 or 30 years.

But somehow the author measures the trend for the entire HADCRUT data series to be not significantly different from 0, and that the term that represents the trend in his time series model can therefore be replaced by 0.
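The trend-extrapolation forecast suggested above can be sketched in a few lines, using made-up anomalies rather than the real HADCRUT series (the trend and noise values are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical annual anomalies standing in for 1975-2015 observations:
# a 0.018 per-year trend plus noise (both values are made up).
years = np.arange(1975, 2016)
anoms = 0.018 * (years - 1975) + rng.normal(0.0, 0.08, years.size)

# Fit a straight line by least squares and extrapolate it as the forecast.
slope, intercept = np.polyfit(years, anoms, 1)
forecast_2040 = slope * 2040 + intercept
```

With 41 data points the fitted slope recovers the underlying trend closely, and the extrapolated value sits well above the recent observations, unlike a flat forecast.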

Who cares if it’s scientific nonsense? It feeds the denialist press. The paper’s purpose is not to advance knowledge, it is to advance a cause.

Propaganda by any other name, including pseudo-science poppycock like Mills’ paper, is still propaganda.

@RichardTol – Have you taken a close look at the reams of utter hogwash the GWPF is printing about Arctic sea ice at the moment?

Forgive me if I harp on about my personal hobby horse, but even a practitioner of the dismal science can surely see it for what it is? “Clueless” doesn’t even get close!

There is what Mills and GWPF says and then there’s physics; no contest which can be safely ignored without consequence and which cannot.

But we live in an age when corporations and economies are built on foundations of self interest without responsibility – responsibility adding unwanted burdens of costs that are readily avoidable by ordinary means (PR, judicious donations, lobbying, tankthink and economic alarmism). It is a system aided by ‘free’ markets wherein advertised enticements of vicarious lust, gluttony and envy displace disclosure of information. It is a system that is, rather ironically, championed by lots of religious leaders as morally superior – in a deal that exchanges non-binding statements of intent to eliminate evolution from education and climate change from energy policy for influential endorsements. Inconvenient physics based foresight is displaced by convenient beliefs and appropriate illusions.

Change apparently requires the highest levels of certainty, whilst continuing business as usual, irrespective of the high levels of certainty of strong, ongoing and irreversible planet-changing consequences, requires none.

Magic terms are specified to separate the HadCRUT time series into 5 “regimes” each with different slope:

1 1850–1919

2 1920–1944

3 1945–1975

4 1976–2001

5 2002–2014

Perhaps someone will claim that 2015-16 ventured out of prediction bounds, and Mills should decline any betting with James, because a new “regime” has just started. Or maybe not, the magic terms (permitting instantaneous slope changes) appear to dominate overall fit and are not predicted.

Some statisticians use a statistical package like R. Others rely on spreadsheets like Excel. A few, apparently, use a Ouija board.

Climate science is founded on physics, because it’s about physics – it is about how the heat balance of the planet is being changed by increasing greenhouse gases in the atmosphere. While many details of climate science are about a lot more than the physics, when it is the big picture that is being considered, it is all about the physics. The key question is: does the increase in greenhouse gases result in the accumulation of heat in the earth/atmosphere system? The answer is “yes”, and I don’t see any deniers question that basic physics. Where are the papers questioning the measurements and the radiative forcing calculations? The thing is, if any individual or group of deniers question the scientific consensus while not addressing the basic physics, then to me that is proof that they are not doing science, they are peddling doubt. Either the physics is correct, or it is wrong; if it is correct, then the rest is detail.

Next up, a statistician will predict the future path of a hockey puck during a game with only past puck trajectory data to assist him. Afterwards, physicists and hockey players share a beer…

ATTP, more interesting is that Tol essentially admits that Mills’ work puts severe doubts on Tol’s earlier work from 1993 and 1994, in which he and his co-authors used statistics to show the hypothesis that “increased CO2 is not the cause of the increased global temperature” should be rejected (P<0.01).

I thought the Tol was self-professed infallible?

@marco

As you can see from those papers, we test our model (which does not have a linear trend, but rather a trend that follows radiative forcing) against ARIMA and reject ARIMA. Others, particularly Estrada & Perron, did the same.

So, I think that Mills is wrong, as accomplished an econometrician as he may be. Hamilton apart, no one on this thread has offered any valid argument as to why Mills is wrong. Abuse aplenty, but little substance.

I don’t find the body of the report especially problematic. It reads mostly like a tutorial. As far as I can see, Mills doesn’t make any claims that these methods are comparable to or better than GCMs.

The exaggerations start with McKitrick in the foreword and in the news item of the GWPF, and seem to escalate when they reach the media.

The Times writes:

“The global average temperature is likely to remain unchanged by the end of the century, contrary to predictions by climate scientists that it could rise by more than 4C, according to a leading statistician.”

Seems to me that Mills has been conned by the con men of the GWPF.

Richard,

Apart from this bit in the post?

A fair amount, but then this is someone who was paid to write something that it’s hard to believe they did not know was nonsense. They’ve also succeeded in getting this obvious nonsense promoted in the mainstream media.

Lars,

Maybe, but there are still things in the report that are wrong, as Richard Telford highlights.

@wotts

Instead of “abuse” I should have written “abuse and misunderstanding”. Mills explicitly tests a linear trend (greenhouse forcing) against natural variability, and comes out in favour of the latter. His forecast follows immediately from his test, so you should find fault with his test (unless of course you want to argue, pre-Enlightenment, that you reject the method because you don’t like the result).

Richard,

You frequent Twitter I believe? How about this from the other Richard for starters?

Richard,

I don’t think you get to say the above and claim that others misunderstand. A linear trend is NOT greenhouse forcing. Greenhouse forcing typically requires some knowledge of past forcings or some estimate for future forcings. Once again, you appear to be confusing descriptive statistics and inferential statistics.

Look this isn’t even all that complicated. You cannot make projections/predictions/forecasts for a physical system using time series analysis alone unless you happen to know that your time series is – somehow – a good representation of that system (throwing dice, for example). Given that our climate is not simply random, using time series analysis to make forecasts is clearly wrong. That you would end up defending this is, however, not surprising.

The report would have been more useful for climate modelers if Mills had addressed how to include ‘predictor variables’ (forcings). He only briefly mentions these in the discussion section. That might be a way to get some physics into the statistical models.

@Richard Tol,

Apart from obvious problems of 1) ignoring physics and 2) trying to predict future trends purely on the basis of past trends, here is just one major problem with Mills “segmented regression” analysis:

He arbitrarily splits the series into “regime” periods, with no statistical justification for choosing the start of each period. Actually it’s not completely arbitrary: it is obviously cherry-picked to ensure that the most recent regime has a non-positive trend. This is barely a step above Monckton’s cherry-picked ‘pause’ posts on WUWT (in fact it’s worse, because a professor of statistics should know better). Mills goes to such lengths to ensure the most recent trend is not positive that he invents a ‘regime’ of less than 2 years in length in the RSS data, into which he crams about 15 years of warming (see Figure 2; Fig 1 is no better). It is laughable.

There are perfectly good methods for objectively testing for changes in linear slope – they are called change-point models. When applied properly to global temperature data, they consistently fail to find any evidence of a change in linear trend in the last 40+ years (in any dataset). So any objective attempt to predict future trends based on past trends using segmented regression would predict a continuation of the last 40 years upward trend.
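A minimal sketch of an objective change-point search, on synthetic data with a genuine slope change (the series and break location are invented for illustration): every admissible breakpoint is tried and the one minimising the residual sum of squares is kept, rather than being chosen by eye.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k_true = 60, 30
t = np.arange(n)
# Hypothetical series with a real slope change at k_true
# (flat, then warming at 0.05 per step), plus noise.
y = np.where(t < k_true, 0.0, 0.05 * (t - k_true)) + rng.normal(0.0, 0.05, n)

def sse_linear(tt, yy):
    """Residual sum of squares of an OLS straight-line fit."""
    coef = np.polyfit(tt, yy, 1)
    return float(np.sum((yy - np.polyval(coef, tt)) ** 2))

def best_break(tt, yy, margin=5):
    """Exhaustive single-break search: try every admissible breakpoint
    and keep the one minimising the combined residual sum of squares."""
    sses = {k: sse_linear(tt[:k], yy[:k]) + sse_linear(tt[k:], yy[k:])
            for k in range(margin, len(tt) - margin)}
    return min(sses, key=sses.get)

k_hat = best_break(t, y)
```

A proper change-point analysis would also penalise the extra parameters (e.g. via BIC or a sup-F test) before accepting a break at all; the point here is only that the break location is estimated from the data, not set by visual inspection.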

Lars,

Indeed, but then he said

which suggests that he rather dismisses this idea. Maybe he could explain what conservation laws apply in economics and finance, since the forcings in climate modelling are fundamentally linked to the conservation of energy.

jimt,

Thanks, your comment seems to cover most of the relevant issues. I hope Richard is happy now 😉

This is all very simple.

The point of the GWPF was to get publicity for work that can be portrayed as casting doubt on global warming. It succeeded, excellently – it’s in a reputable newspaper.

The point of Richard Tol’s posts is to gain attention and demonstrate his superior intelligence. He’s doing very well on the former at least.

The point of Mill’s involvement is the only interesting bit. Was he really in it for the money? I suspect this is more about having some fun and getting some publicity, plus the opportunity for a bit of abstruse academic debate. Academics do tend to enjoy that sort of thing (apologies for the stereotyping).

No-one, but no-one involved in any of this actually believes the “forecasts” are at all relevant to anything real or physical. Responding in that vein is probably necessary but largely irrelevant to the purpose of the exercise.

vtg,

I suspect that sums it up pretty well. It’s hard to believe that the GWPF or Mills could really believe that these forecasts had some merit.

ATTP,

Yes, that was not a very convincing dismissal. It would certainly make sense to try it out.

Not using anything like ‘predictor variables’, on the other hand, is most likely a bad idea.

VTG,

Do have any hard evidence to support your unsubstantiated assertion that The Times is “a reputable newspaper”?

Richard wrote

“Mills finds that the three temperature records are non-stationary rather than trend-stationary. The forecasts follow mechanically.”

Do you think that forecast of no further warming, even though radiative forcing will almost certainly increase, is correct?

Jim,

“Newspaper of Record” is the phrase I should have used

https://en.wikipedia.org/wiki/Newspaper_of_record#Examples

It is actually the only serious point of the whole thing – that the GWPF have sufficient influence in the press to get this obvious tosh reported in the Times as an apparently credible piece of work.

That’s the issue.

The Murdoch press. WTF do you expect?

@VTG/@BBD – That’s precisely what I expect.

@RichardTol – You do realise that you’re defending the indefensible?

RichardTol wrote

“Note that 1 in 20 observations is supposed to be outside the 95% confidence interval, so the probability of observing 1 in 12 outside the confidence interval is about 35%.”

Richard, would you agree then that the CMIP5 models are not called into serious question by the observations (for some but not all datasets) being briefly outside the 95% spread of the model runs? And even then the difference is partially explainable by the forcings not being exactly as the scenario?

Dikran,

Indeed, thanks for pointing that out. It is rather amusing that I think I pointed this out to Richard a while back, so to see him now using it to defend what is clearly an incredibly poor model is rather bizarre.

So, Loughborough University is actually promoting this

It will be interesting to see how Mills reacts to these misrepresentations of his work. Is he an unwitting victim or is he a con man too?

Somebody at Loughborough NEEDS to speak to the idiot PR now, now, now…

Richard wrote:

“Mills explicitly tests a linear trend (greenhouse forcing) against natural variability, and comes out in favour of the latter. … so you should find fault with his test”

Not difficult: the “exogenous” choice of the segmentation boundaries invalidates the statistical assumptions of the test (the period is not a random sample from some underlying distribution, but has been chosen after looking at the data).

“(unless of course you want to argue, pre-Enlightenment, that you reject the method because you don’t like the result).”

Richard, is designing the test to give you the result you want (by cherry-picking a 2002 start date to minimise the trend) any better?

I long for the days of the ‘Stadium Wave.’

Lars,

The tweet appears to have gone, which may be because they linked to the Geography department, rather than Economics. However, it would be interesting to know if he approved the claim that he says it won’t warm by 2100, or if they simply lifted that from the GWPF press release.

It would be interesting to know if Mills discussed his work with anybody from the Geography department.

I see that you had a word in Loughborough’s shell-like, Anders:

Mills’ forecast looks similar to what pseudonymous blogger “VS” predicted based on similarly physics-devoid time series analysis (https://ourchangingclimate.wordpress.com/2010/04/01/a-rooty-solution-to-my-weight-gain-problem/ ). Blind application of statistical tools without considering the physical characteristics of the system leads to meaningless results. Unless of course one would like to throw conservation of energy out of the window.

Bart,

Eli just tweeted that post. Really good.

Exactly.

Feature or bug, BBD? No such thing as bad publicity and all that.

@wotts

CO2 is the most important greenhouse gas. Its concentration has risen exponentially. Radiative forcing is proportional to the natural logarithm of the CO2 concentration. A linear trend is therefore a reasonable approximation.
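The logic here is easy to verify: an exponentially growing concentration, pushed through a logarithmic forcing law, gives a forcing that is exactly linear in time. The growth rate below is a made-up illustrative value; 280 ppm and 5.35 W/m² are the conventional pre-industrial concentration and CO2 forcing coefficient.

```python
import numpy as np

# If CO2 concentration grows exponentially, C(t) = C0 * exp(g * t), then
# the forcing F = k * ln(C(t) / C0) = k * g * t is exactly linear in time.
t = np.linspace(0.0, 100.0, 101)   # years
C0, g, k = 280.0, 0.005, 5.35      # ppm, made-up growth rate, W/m^2
C = C0 * np.exp(g * t)
F = k * np.log(C / C0)

# Linearity check: second differences of F vanish (up to rounding).
curvature = float(np.abs(np.diff(F, 2)).max())
```

Of course, the historical forcing is only approximately linear, because CO2 has not grown at a constant exponential rate and other forcings matter too, which is the objection raised below.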

@jimt

The number and location of the breakpoints are estimated, rather than set by the analyst.

@dikran

No. As I said, I think that Mills is wrong, because, focussing on significance rather than power, he misinterprets his test results, and he also ignores data.

I agree that it’s a reasonable approximation. However, estimates of the forcings suggest that it isn’t actually linear. I’m not averse to reasonable approximations. However, given that we don’t expect the underlying trend to be linear, pointing out that a linear approximation doesn’t work very well is not an argument in favour of natural variability, whatever the statistical tests might suggest.

@wotts

A reasonable but unnecessary approximation (as data on radiative forcing are readily available) … if a student would do this, we would deduct points for lack of diligence.

Isn’t that essentially my point? You wouldn’t be suggesting that I’m suddenly arguing in favour of a simple linear approximation?

No interest in a bet, he’s just doing a bit of mathturbation that he doesn’t believe in, SOP for a maths professor, I think. Bit disappointing for an applied maths prof, though. Think I’d rather deal with the delusional idiots who think it’s all a conspiracy than the deliberately disingenuous who know it isn’t but generate misleading nonsense regardless.

I don’t like to say “I told you so” but Prof Mills confirms my analysis via the excellent James Annan:

http://julesandjames.blogspot.com/2016/02/no-terence-mills-does-not-believe-his.html

Ha! Crossed.

@dikran

“No. As I said, I think that Mills is wrong, because, focussing on significance rather than power, he misinterprets his test results, and he also ignores data.”

“The challenge is to explain why trend-stationarity — which corresponds to a greenhouse signal — is rejected in favour of non-stationarity — which corresponds to natural variability.”

Perhaps you have answered your own question then?

@Tol

“The number and location of the breakpoints are estimated, rather than set by the analyst.”

No, the number and location of the breakpoints are not estimated, but rather set by the analyst. Mills explains:

“The break-points were determined ‘exogenously’, in other words by visual examination of a plot of the series.”

The 4 eyeball-selected and forecast-determining break points Mills specifies for HadCRUT give these 5 regimes:

1. 1850-1919

2. 1920-1944

3. 1945-1975

4. 1976-2001

5. 2002-2014

Mills’ eyeballed HadCRUT breakpoints are unrelated to his eyeballed RSS breakpoints, which likewise make 3 forecast-determining “regimes” (one only 2 years in length) for that series:

1. 1979–1997

2. 1998-1999

3. 1999–2014

@L Hamilton

Really? In response to Wotts, I gave Mills an “L” for lazy. If you’re right, he deserves an “XL” for extra lazy (particularly since he has a theory paper on estimating multiple breakpoints).

@dikran

Sometimes we ask questions because we want to know how the other person would answer.

I guess we have now reached the point where we have taken the paper almost completely apart without discussing the author’s choice of clothes.

Richard “Sometimes we ask questions because we want to know how the other person would answer.”

Yes, but more often we ask questions because we want to know the answer or just to be sure that we understand their position correctly, which is why it is generally a bad idea to answer their questions cryptically or evasively and instead give a straight answer to the question as posed.

“The number and location of the breakpoints are estimated, rather than set by the analyst.”

Indeed, but at least he explicitly said exactly what form of breakpoint analysis was performed, and not all authors do that. It was just the consequences of this that were not made explicit (e.g. invalidation of the assumptions of the tests).

@dikran

“The number and location of the breakpoints are estimated, rather than set by the analyst.”

Note that Hamilton corrected me.

Tol: “A linear trend is therefore a reasonable approximation.”

GWPF Report (p18): “What the analysis also demonstrates is that fitting a linear trend, say, to a pre-selected portion of a temperature record, a familiar ploy in the literature, cannot ever be justified.”

A familiar ploy in the literature has now gone to the blogs.

It also says, about linear trends,

Well, yes, that’s how they’re typically used.

Richard wrote

“The number and location of the breakpoints are estimated, rather than set by the analyst.”I obviously misread what Richard wrote, as it is the other way round. Apparently Richard didn’t read that bit of the report (or misread it).ATTP indeed, if climatologists actually thought that physics predicted a linear trend, one wonders why they bother with those GCM things! ;o)

Indeed, we love simple models 🙂

>>> one wonders why they bother with those GCM things!

Another familiar ploy in the literature that cannot ever be justified.

…and Then There’s Physics says (February 24, 2016 at 4:05 pm): “Indeed, we love simple models 🙂”

Indeed, I seem to recall a simple time series model you discussed recently where climate sensitivity was calculated to be 6. Who did that I wonder? Someone who thinks he’s famous I think.

Ben Webster doesn’t tweet a great deal but he does seem to tweet most of his own articles but has been curiously silent about this one. Could it be he was ordered to write this knowing full well it was a load of bollocks?

https://twitter.com/bwebster135/with_replies

Based on the Mills GWPF approach there has been a change in the background trend from 2014 and we are now in a new regime. HadCRUT4 gives a background trend of +2.7 °C/decade, so how long until the GWPF update their forecast with a +2.7 °C/decade trend?

If I understand it right then once you’ve got your ARIMA parameters there’s basically no information going into the forecasts except for the previous 3 years of data and some arbitrarily chosen trends on breakpoints. So the same prediction should work starting from any sufficiently long period. So back around 1970 this method would have forecast flat temperatures. Same if you’d started around 1900.

Can anyone show that I’m wrong?
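MarkR’s reading can be illustrated with a hand-rolled driftless ARIMA(1,1,0) on synthetic trending data (a sketch, not Mills’ actual specification): wherever you truncate the sample, the long-horizon forecast flattens out near the last observation rather than continuing the trend.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical trending series with noise (stand-in for a temperature record).
n = 150
y = 0.007 * np.arange(n) + rng.normal(0.0, 0.1, n)

def arima110_forecast(series, horizon):
    """Driftless ARIMA(1,1,0) sketch: AR(1) on first differences with no
    constant, fitted by least squares. Forecast increments decay
    geometrically, so the forecast level quickly flattens out."""
    d = np.diff(series)
    phi = float(d[1:] @ d[:-1] / (d[:-1] @ d[:-1]))  # OLS, no intercept
    level, inc = float(series[-1]), phi * float(d[-1])
    path = []
    for _ in range(horizon):
        level += inc
        path.append(level)
        inc *= phi
    return np.array(path)

# Truncate the sample at 100 points: the 30-step-ahead forecast stays
# near the last observation, missing the trend that actually continues.
fc = arima110_forecast(y[:100], 30)
```

The same flat forecast appears whichever truncation point you pick, which is exactly MarkR’s point: the forecast carries essentially no information about the trend.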

Mark,

I’m not totally sure, but I think that’s right. He should be able to test his model by considering some earlier period and comparing what his model forecasts with what actually happened.

I’ve dropped Professor Mills an email to ask if he’s tested his model with out of sample data, whether he’s willing to try it now with series where we know the answer (e.g. CMIP5), and how he chose his break points.

We’ll see, maybe I’ve misunderstood.

Would be interested to know what he says.

You could also fit an ARIMA model to, say, a historical CMIP5 run, then compare the resulting forecast to the corresponding RCP4.5 run. Surprisingly, it doesn’t do very well!

Well, there was supposed to be an image in there, but apparently I screwed up the tags. Have a direct link instead:

http://img.photobucket.com/albums/v124/MartinM/arima.png~original

MarkR,

If I understand correctly, the (HADCRUT) ARIMA model is (currently) trendless because it is based on data spanning 2002-2014 only. Mills implicitly acknowledges a “regime change” prior to that so one wouldn’t expect it to be valid over the previous intervals.

Perhaps the most generous interpretation of this work is that it has no predictive power for the next regime change so it could go off the rails at any time. The less generous would observe that it was off the rails before it was published so predictive power of the model is not the primary objective of this work.

Mills actually gives two models for HadCRUT: a trendless ARIMA model, and a segmented trend model with AR(4) noise. The former model will indeed forecast constant temperatures, no matter when you choose to end the analysis, albeit constant with fairly huge error bars.
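That flat-forecast-with-ballooning-error-bars behaviour is easy to reproduce. A toy Python sketch for a stationary AR(1), with all parameter values invented for illustration:

```python
# Toy stationary AR(1) (phi, sigma2, mean, last are invented values, not taken
# from the report): the h-step forecast decays geometrically to the process
# mean, while the forecast variance grows to the unconditional variance, i.e.
# a flat central forecast with wide, saturating error bars.

phi, sigma2, mean, last = 0.5, 1.0, 0.0, 2.0

horizons = range(1, 21)
forecasts = [mean + phi**h * (last - mean) for h in horizons]
variances = [sigma2 * (1 - phi**(2 * h)) / (1 - phi**2) for h in horizons]

assert abs(forecasts[-1] - mean) < 1e-5                   # forecast ~ constant
assert abs(variances[-1] - sigma2 / (1 - phi**2)) < 1e-5  # error bars saturate
```

The central forecast flattens to the mean almost immediately, while the forecast variance saturates at the unconditional variance: hence a constant forecast with fairly huge error bars.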

@markr

Just read Box & Jenkins (1970) to discover how wrong you are.

Richard,

A bit too much to expect you to add a few more words explaining why? No, sorry, that would be silly.

MarkR wrote “Can anyone show that I’m wrong?”

Richard Tol wrote “Just read Box & Jenkins (1970) to discover how wrong you are.”

That would be a “no” then.

Wikipedia? https://en.wikipedia.org/wiki/Autoregressive%E2%80%93moving-average_model

Richard,

Apart from some possible technicality, I fail to see how MarkR is wrong. An ARIMA process still forecasts based on some number of previous past data points in the time series. Therefore, as long as you have enough past datapoints, you should be able to test the model to see how it does for a period where we know what actually happened.

Richard wrote “Wikipedia?”

still a “no” then. Seriously Richard, if you really want to demonstrate your statistical prowess, give a detailed demonstration of where MarkR is wrong, your current approach is not creating a good impression.

The segmented regression approach is unlikely to have useful predictive skill simply because the segments of the regression do not necessarily correspond to physically meaningful “regimes” where there is e.g. some particular change in the forcings. If the data stopped in 1970, the resulting model would not have predicted the rise in temperatures that followed immediately after because it is accounted for in the model of the whole dataset by a new “regime”, not the ARMA/ARIMA component of the model.

@ Richard Tol,

I’ve checked through the equations on wiki and cross-referenced against Mills and I still can’t see that I’m wrong; it still seems that the only information going into Mills’ forecast is (1) the parameters estimated from the full series and (2) the previous few years of data. Perhaps you could help me understand by explaining what the forecast would be:

1) we run the models from the end of the first “regime” in 1919, what’re the forecasts?

2) Or how about making an RSS forecast using the Jan 1998-Oct 1999 regime?

There’ll probably be something. We’ll spend a long time trying to get Richard to explain and when we work it out, it’ll turn out to be irrelevant, or something silly, like the wrong terminology. It won’t, typically, be worth the effort.

If anyone is interested in a more objective ‘segmented’ regression approach, or in checking how well Mills’ cherry-picked regimes stack up, I’ve been working on an “R-Shiny” app that does a form of Bayesian change-point analysis…

tanytarsus.shinyapps.io/changepoint/

It’s a work in progress, and can be slow if you have a long time series and try for 2 change-points (if you upload monthly data, use decimal years as time, not month!…and include an AR term).

I would welcome constructive feedback (via the email given under help).

PPS it takes a few seconds to load up (loading R packages etc)

PS Obviously, this is not a forecasting tool! Just a way of objectively testing for past changes in trend.

jim,

That’s pretty impressive. What would it take to do the kind of projections that Mills’ report does?

@markr

AR is a linear difference equation, but MA is the multiplicative inverse of a linear difference equation. In other words, the MA part uses the complete history in forecasts.

Richard,

Unless I’m mistaken, the MA part relates to the error terms, so how does that make MarkR’s point wrong?

thanks ATTP,

The simplest projection would be the final model averaged trend estimate and its uncertainty applied to future years, tacked on to the end of the fitted line.

I could add a ‘project’ feature that would do the calcs properly (model average the predictions from all possible models)…but apart from providing a counter-projection to Mills (using his own ‘ignore physics’ approach), is there any value in that?

Pingback: Terence Mills does not believe his “forecast” and other hits – Stoat

Also (and, of course, this may be intentional) I think MarkR’s point is that if you’re trying to do a forecast, then the unknown future terms depend only on a few of the essentially known past terms, even if those past terms themselves depend on even earlier terms.

Probably not, given that it sounds like it’s just an obvious extension. I was just thinking it might be nice to see it illustrated.

Curious to try this out, I fit a time series regression model segmented as Mills describes, with ARIMA(4,0,0) errors. My results resemble but don’t match Mills’, not sure why. Regardless, experimenting with this approach and these data quickly reveals how much forecasts are controlled by the operator’s choice of break points. Leaving out 2015 data also helps to reduce the slope of the final segment and forecasts.

@ Richard Tol

What’s the forecast for those models at each of the other breakpoints? Say 1925 or 1975 in HadCRUT4 or 1999 in RSS? This would really help me visualise what’s going on.

FWIW, here is the HADCRUT projection based on a Bayesian changepoint model with up to 5 trend changes allowed (it finds only 3 up to 2015)…

The CI’s for the projected trend (and predicted values, in blue) reflect the fact that the prior says more change-points are possible, and the data says they do occur occasionally, so for the future it averages over random, low prob. change-points of random magnitude and direction.

@ jimt

So that means that a statistical model that allows change points that are automatically selected based on some criterion (criteria?) would currently project continued warming, which is different from the GWPF-type approach in which the author chooses their own preferred change points?

exactly MarkR.

GWPF choose changepoints to ensure non-positive trends.

When the change points are chosen objectively, in this case based on likelihood functions (see “About” under help section here…https://tanytarsus.shinyapps.io/changepoint/), there is no evidence of a changepoint at any time since 1970….

so if you must use recent past trends to project future trends, you’d be projecting continued warming…(which in this case happens to agree with physical reality)

if you know R

http://robjhyndman.com/hyndsight/forecasting/

I’ll post something up later if I get time

for now..

http://robjhyndman.com/hyndsight/structural-breaks/

If I understand correctly,

– we don’t need to study the forcings to understand the temperature trend

– we get better trend lines if we create arbitrary “regimes” of variable length

– “regimes” are best chosen through pareidolic examination of a graph

– nothing is too wrong for Tol not to try and garner attention through defending it

Maybe a little late on this, but way up there Richard Tol said “CO2 is the most important greenhouse gas. It’s concentration has risen exponentially. Radiative forcing is the natural logarithm of CO2 concentration. A linear trend is therefore a reasonable approximation.”

This is commonly asserted, but false. The (roughly) exponential growth is in the fossil C we have added to the atmosphere. That is on TOP of a pre-industrial natural level of CO2. The formula would be something like log(A + B exp(c t)) – and this is NOT a straight line. In early stages (while B exp(ct) is much less than A) it is itself close to (constant plus) exponential. A linear trend for response to our CO2 forcing is therefore not a reasonable approximation over any long-enough period of time.
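This is straightforward to verify numerically; a small Python check (the constants A, B and c are arbitrary, chosen only to give the right qualitative shape):

```python
import math

# Numeric check with arbitrary illustrative constants (hypothetical ppm-like
# numbers, chosen only for shape): f(t) = log(A + B*exp(c*t)) is convex, not
# a straight line, because the anthropogenic exponential sits on top of a
# fixed natural baseline A.

A, B, c = 280.0, 10.0, 0.02

def f(t):
    return math.log(A + B * math.exp(c * t))

# A positive second difference means upward curvature at t, so a linear
# trend is at best a local approximation.
t = 50.0
second_diff = f(t + 1) - 2 * f(t) + f(t - 1)
assert second_diff > 0
```

In fact f''(t) is proportional to A·B·c²·e^(ct)/(A + B·e^(ct))², which is positive whenever A > 0, so the curve is convex everywhere and a linear trend can only ever be a local approximation.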

This cr– I’m sorry, this nonsense again??? Tamino has a lovely analysis of M. Beenstock’s claim of a non-stationary climate, and it’s really quite laughable. All credit to him for the following:

https://tamino.wordpress.com/2010/03/11/not-a-random-walk/

https://tamino.wordpress.com/2010/03/16/still-not/

Making an (erroneous) determination of non-stationary behavior requires abusing the Augmented Dickey-Fuller test, which can only give a weak indication of unit roots and hence non-stationary behavior for variations around a linear trend. It completely fails if the forcings (as in the case of climate) are nonlinear, for example with all forcing included, not just CO2. The proper test, the Covariant ADF, clearly demonstrates that temperatures are trend-stationary with forcings. Or you could check with a number of other unit root tests, such as the Phillips-Perron test, or check explanatory power with the AIC or BIC tests – all of which reject non-stationary behavior.

In short, climate temps are trend- stationary with forcings, not a non-physical random walk that ignores the Conservation of Energy, and if you encounter a claim of random walks you can stop reading the paper right there and save yourself some waste of time. It’s nonsense through and through.

It’s a shame Mills has wasted so many years and papers on such twaddle.

It’s also interesting to note that (by my quick count) fifteen of the thirty-one references in Mills’ non-peer-reviewed work (and no, I don’t count peers of the Realm as scientific peers by default) are to Mills’ own work. That level of self-referencing is never a good sign.

“GWPF choose changepoints to ensure non-positive trends.”

@jimt, make no mistake here: while the GWPF can be accused of many things, only one person is responsible for doing what he did, and that is Terence Mills. Even if the GWPF asked him to do what he did, ultimately he is still the one to blame.

And with Tol apparently also seeing the flaws in Mills’ work, the question once again arises which “peers” reviewed a GWPF paper.

@wotts, markr

You both have PhDs. You should be able to work out, as third year undergrads in economics can, that MA(1) = AR(inf). If you can’t, read Box & Jenkins.

Richard,

I’ve just had a look at Box and Jenkins. Are you referring to equation 2.2 which seems to be suggesting that AR(1) = MA(inf)? Also, I’m not sure how this is relevant as it appears to require an infinite number of points, which clearly we do not have.
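For anyone who wants to see the textbook identity concretely, here is a toy Python illustration (invented phi and random shocks, not climate data) of an AR(1) rewritten as a truncated MA(infinity), which is the expansion in question:

```python
import random

# Toy illustration (invented phi, random shocks; not climate data): an AR(1)
# with |phi| < 1 equals an MA(infinity) with weights phi**k, so a truncated
# MA expansion reproduces the AR recursion to geometric accuracy.

random.seed(1)
phi = 0.7
shocks = [random.gauss(0, 1) for _ in range(500)]

# Build the final value directly from the AR(1) recursion x_t = phi*x_{t-1} + e_t.
x = 0.0
for e in shocks:
    x = phi * x + e

# Rebuild it from the MA(infinity) form, truncated at lag 200.
x_ma = sum(phi**k * shocks[-1 - k] for k in range(200))

assert abs(x - x_ma) < 1e-8  # truncation error ~ phi**200, utterly negligible
```

The truncation error shrinks like phi^k, which is why, in practice, only a modest number of recent terms carry any real weight.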

jimt, nice! The lack of a breakpoint around 2000 is an interesting finding.

The second of Steven Mosher’s links is well worth reading. Structural break models are quite useful for “descriptive statistics”, but as I mentioned earlier there is no good reason to think the breakpoints actually mean something (unless you can put some physics behind it) and it does complicate the statistical testing procedure, especially if the breakpoints are chosen by hand.

So to summarise for the benefit of those who follow bull channels on graphs of stock prices but not global temperatures:

1) Prof. Mills’ magnum opus has no basis in physical reality

2) It is “curve fit” using all the available data, and thus the “backtest” employs no “walk forward analysis”

3) Consequently if you were to “bet the farm” on it you’d in all probability be sleeping in shop doorways shortly thereafter.

4) Using Tamino’s methodology instead, you’d get rich slowly:

@ Richard Tol

I checked before posting which is why I was confused. It seems that projections done in 1925 and 1975 are what I would describe as pretty shitty, consistently underestimating later warming.

But perhaps I did it wrong. With your economics insight perhaps you could correct me: what are the predictions if made in 1925 or 1975?

@wotts

I guess you’re looking at the original edition. In the revised edition of 1976, this is discussed in section 3.3.5, pp 72-73.

Lacking a complete history, you would need to make an approximation for the deviation between actual and equilibrium temperature before your first observation. People typically set this to zero, which is an accurate approximation (for a record of 160 years) unless one of your roots is on or beyond the unit circle.

Richard,

What you appear to be highlighting is that your time series depends on all previous data values, which is fairly obvious given that the value at n depends on the value at n − 1, but the value at n − 1 also depends on the value at n − 2, etc. However, that doesn’t change that if you want to use your model to make a forecast, all you need to know are some finite number of past data values. That those past data values may technically depend on even earlier data values doesn’t make this not true. Hence you should be able to make a forecast from any time, as long as you have sufficient past data values. Therefore, you should be able to test the model by considering some earlier time period and comparing the forecast with what was known to have happened.

Precisely. “Walk forward analysis”, as those who wander in technical trading circles call it.

Richard Tol wrote “@markr “Just read Box & Jenkins (1970) to discover how wrong you are.”

Richard Tol wrote “I guess you’re looking at the original edition. In the revised edition of 1976, this is discussed in section 3.3.5, pp 72-73.”

Mildly amused 🙂

“Therefore, you should be able to test the model by considering some earlier time period and comparing the forecast with what was known to have happened.”

Indeed, if this were 2020, it would be very silly to say that we couldn’t have built the model in 2015 to see what it would have forecast, as that is just what Mills actually did.

Mills projects out to 2020. The Times drew the graph out to 2100 and claimed “The global average temperature is likely to remain unchanged by the end of the century,” but they could have drawn it out much further and written “New study shows that global average temperature can never change”.

To be fair (not sure why I should be really) the uncertainty bounds grow such that it’s not the case that the temperature will not change, but rather that it will go up and down randomly with no expected persistent trend.

Arthur Smith said on February 25, 2016 at 2:22 am,

“Maybe a little late on this, but way up there Richard Tol said “CO2 is the most important greenhouse gas. It’s concentration has risen exponentially. Radiative forcing is the natural logarithm of CO2 concentration. A linear trend is therefore a reasonable approximation.”

This is commonly asserted, but false. The (roughly) exponential growth is in the fossil C we have added to the atmosphere. That is on TOP of a pre-industrial natural level of CO2. The formula would be something like log(A + B exp(c t)) – and this is NOT a straight line. In early stages (while B exp(ct) is much less than A) it is itself close to (constant plus) exponential. A linear trend for response to our CO2 forcing is therefore not a reasonable approximation over any long-enough period of time.”

Thank you so much for this. Every so often for some time now I have tried to get folks to see that the response does not need to be a linear one. This is especially so when thinking about it from a purely mathematical viewpoint.

I’ve noticed for some time now that one of the strains of thought in mainstream climate science denial seems to contradict some purely mathematical ideas in basic analysis. This strain of thought seems to involve two mathematically false ideas: If we “combine” an exponential function with a logarithmic function, then we get an approximation of a straight line, and that if given a logarithmic function, we actually need to “combine” it with an exponential function to obtain something like a straight line.

They seem to not get that it’s very easy to obtain increasing convex functions (which give upward accelerated graphs, graphs that grow faster and even much faster than a straight line) from logarithmic functions without having to “combine” exponential functions with these logarithmic functions at all. They seem to not get that we can “combine” logarithmic functions with merely polynomial functions (including polynomial functions that are merely linear) to “counteract” logarithmic functions to obtain these increasing convex functions – it’s one of the basic facts of analysis that we don’t need to “combine” exponential functions with logarithmic functions to “counteract” logarithmic functions to obtain these increasing convex functions.

That is, if it’s very easy to obtain a graph that grows even much faster than a straight line from “combining” a logarithmic function with a polynomial function (even just a linear one), then, with all the more force, it’s very easy to obtain a graph that grows even much faster than a straight line from “combining” a logarithmic function with an exponential function.

What all this means is this: That which seems to be some assumptions behind what Tol said (and what so many deniers of mainstream climate science say) are quite mathematically false. Especially from just a purely mathematical viewpoint, it is most certainly not reasonable to expect a linear function (or a linear trend) to result from “combining” exponential and logarithmic functions.

See such as this

http://www.math.uconn.edu/~kconrad/blurbs/analysis/growth.pdf

for some of the analysis in question.

(It’s easy to forget these basic ideas in analysis on growth of functions. Back in the late 1990s, I was working on a problem whose solution required the creation of a function that required no more than polynomial growth. I asked for some help from a gifted mathematician more knowledgeable than I in analysis. He spent most of a day creating some very complicated formulas covering the entire blackboard at the front of the room. But later, as soon as I walked into the room and scanned all of that, I immediately realized that it could not possibly work due to some exponential functions present in some of the numerators, giving long-term exponential rather than merely polynomial growth. Needless to say, he was pissed. All that work was for naught because some basic analysis did not occur to him when he saw and used the formulas I gave to him to work from.)

No, that’s not what he’s getting at. An AR(p) series depends on the last p datapoints. An MA(q) series depends on the last q error terms. But we don’t have the error terms, only the datapoints. We have to estimate the error terms, and those estimates will depend on all the data.

This, on the other hand, is entirely correct. Tol’s digression into the properties of MA series is particularly pointless, given that the segmented trend model which MarkR was discussing has no MA component. It’s technical, pedantic trolling completely devoid of any relevance to the substantive points under discussion.

Sounds suspiciously similar to your earlier prediction, in fact. It’s almost as if Tol has a habit of doing this, or something.

I do hope he’s not teaching any 3rd year undergrads that MA(1) = AR(inf), though, since it’s not true in general; it applies only in the case of invertible processes.
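MartinM’s point about the estimated error terms can be made concrete with a little Python (the data, theta and starting value e0 are all made up for illustration):

```python
# Toy MA(1) innovation recursion (data, theta and e0 are all made up): the
# estimate e_t = x_t - theta*e_{t-1} is started from an assumed e0, so the
# estimate at the end of the series depends on every earlier observation.

def last_innovation(series, theta, e0=0.0):
    e = e0
    for x in series:
        e = x - theta * e
    return e

theta = 0.8
data = [0.3, -0.1, 0.4, 0.2, -0.3, 0.1]
perturbed = [data[0] + 1.0] + data[1:]  # change only the FIRST observation

# The final innovation estimate shifts by (-theta)**(n-1): the whole history
# matters, unlike a pure AR(p) where only the last p points enter.
delta = last_innovation(perturbed, theta) - last_innovation(data, theta)
assert abs(delta - (-theta) ** 5) < 1e-12
```

Perturbing only the first observation shifts the final innovation estimate by (−theta)^(n−1), so the estimates do depend on the whole record, although for an invertible process (|theta| < 1) that dependence decays geometrically.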

The Times gets around that by dropping the uncertainty bounds. Although the fitted segmented trend model doesn’t seem to have expanding uncertainty so maybe they are not misrepresenting Mills.

Martin,

Yes, okay, that makes sense. Thanks.

Well, yes, and he’s pretty good at it too.

In fact, the ARIMA(0,1,3) forecast from HadCRUT is consistent with the lower end of RCP8.5.

Keep in mind that a physical process constrained by the Conservation of Energy cannot vary its temperature in a random walk – it will always be trend-stationary WRT the forcings.

Mills’ post on climate is therefore twaddle, and all subsequent discussion on that topic boils down to asking how many angels can dance on the head of a pin.

@Martin

We indeed tell our students about the virtues of invertibility.

@james

“To be fair (not sure why I should be really)”

because your mother taught you

There is a new charitable foundation that will have a mandate to inform policy-makers, media outlets, and the general public about walrus populations.

It’s called the Global Walrus Population Forum.

Our most current publication presents the results of a thorough and quantitative study of recorded walrus sightings.

Based on the assumption that walruses are not hunted by humans, we have concluded that the global walrus population will remain stable in the future.

Models in which ‘hunting’ variables have been included in this framework have been considered, with some success, when used to explain observed behaviour of walrus populations. Their use in forecasting, where forecasts of the hunting variables are also required, has been much less investigated, however: indeed, the difficulty in identifying stable relationships between hunting and other walrus population variables suggests that analogous problems to those found in elephant and whale population studies may well present themselves here as well.

Although we suggest that the global walrus population will remain stable in the future, we object to the use of linear approximations because they are a common ploy in the walrus literature. In fact, since our Professor Chris Sussex denies the existence of global averages, we find the very idea of a global walrus population to be deeply problematic.

It is difficult not to wonder whether a parallel with modern climatology will arise. Like the walrus population, the climate is a deeply complex system that defies simple representation. Giant computer modelling systems have been developed to try and simulate its dynamics, but their reliability as forecasting tools is proving to be very weak.

Foreword by Professor Kitrick McMoss.

Keywords:

Walrus, Semolina Pilchard, Eiffel tower, Elementary Penguin, Hare Krishna, Edgar Allen Poe

KR: Well said, Bravo!

The question is how much can we save the world from Richard Tol by keeping him busy on places like this?

Tol said “Hamilton apart, no one on this thread have offered any valid argument as to why Mills is wrong” and seemed unconcerned by my questions about whether the prediction works when you test it against known results. I’m not arrogant enough to assume I know more about time series analysis so I didn’t want to leap in too hard to begin with. After a response from Professor Mills and checking MartinM’s ARIMA figure against what I worked out I’m now more confident that the prediction is bullshit.

In 1925 Mills would have predicted no warming, but then it warmed. In 1975 Mills would have predicted no warming, but then it warmed. The only way it appears to match past observations is if you wait until the real world happens and then manually add in global warming by “eyeballing” trends. And in the most basic sense the ARIMA model’s uncertainties balloon so much that it’s hard to go outside its bounds, but that’s true of any prediction with large uncertainties, bullshit or otherwise.

I’m still hoping I’ve got the wrong end of the stick here, maybe Richard Tol could point out that actually the predictions do work when we know the answer, say from 1925 or 1975. But if I’m right then it’s embarrassing that any competent reviewer would let this through if the purpose of their review is to ensure typical scientific standards.
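The kind of backtest being asked for is easy to mock up. A purely synthetic Python sketch (invented ‘warming’ data standing in for the historical record; this is not Mills’ model or data), showing why a trendless AR fit underpredicts a trending future:

```python
import random

# Purely synthetic backtest (invented data; not Mills' model or data): fit a
# trendless, mean-reverting AR(1) to the first part of a steadily warming
# series, forecast forward, and compare with what "actually happened".

random.seed(0)
series = [0.01 * t + random.gauss(0, 0.1) for t in range(100)]

train, future = series[:60], series[60:]

# Crude moment estimates for x_t - mu = phi*(x_{t-1} - mu) + e_t.
mu = sum(train) / len(train)
dev = [x - mu for x in train]
phi = sum(a * b for a, b in zip(dev[1:], dev[:-1])) / sum(d * d for d in dev)

# The forecast relaxes to the training-period mean, whatever happens next.
fc = [mu + phi ** h * (train[-1] - mu) for h in range(1, len(future) + 1)]

mean_forecast = sum(fc) / len(fc)
mean_actual = sum(future) / len(future)
assert mean_forecast < mean_actual  # the trendless model underpredicts
```

With the trend absorbed into the ‘noise’, the forecast just relaxes back to the training-period mean, which is exactly the failure mode a 1925 or 1975 forecast origin would expose.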

@MarkR

As I said, read Box and Jenkins. That would tell you that the best forecast from an ARIMA without trend indeed (rapidly) converges to a constant value.

Which probably tells us that it’s completely unsuitable for determining forecasts for a system that responds to external stimuli.

Dismissing the Terence Mills contribution to climate science as (allowable) academic stupidity ignores the fact that since 2003, (in between econometric analysis that notably failed to predict the financial crash), Mills has been publishing some sort of paper on the uncertainty of climate prediction predicated on unit-roots, level-shifts or other mathtubation at least once a year.

So Richard Tol, I was correct that either with ARIMA or Mills’ arbitrary selection of trends, he would have failed to predict the warming that happened after 1925 or 1975.

Since it contradicts physics and the real world works on physics I think “useless” might be a charitable description. Do you agree?

Richard wrote “As I said, read Box and Jenkins. That would tell you that the best forecast from an ARIMA without trend indeed (rapidly) converges to a constant value.”

The question is not what the best forecast from an ARIMA model without a trend is, but whether Prof. Mills’ method gives sensible forecasts; to quote MarkR:

“So back around 1970 this method would have forecast flat temperatures. Same if you’d started around 1900. Can anyone show that I’m wrong?” AFAICS he isn’t wrong; it is just that Richard has substituted a different question for the one that was actually asked.

One of Richard’s skills. As long as Richard can construct a question using the words you used, then he seems to feel entitled to answer the question he’s constructed, rather than the one you actually asked.

He is so mild and modest in the way he goes about it though that nobody minds at all! ;o)

I’d like to say that it must be what his mother taught him, but that just seems wrong 😉

Ask not for whom the Tol bells…

I’m sure that Richard is delighted that this thread is almost entirely about him. Well done Richard!

I keep forgetting about the Tol’s 2nd blog Law.

Meanwhile, back in the real world…

“The world was a vastly different place 250 years ago. There weren’t 50 states, Taylor Swift feuds or viral videos anywhere in sight. Another thing that was also less plentiful: carbon dioxide in the atmosphere. Since then, CO2 has risen and with it, a host of other impacts have befallen our planet. That includes the rapid acidification of our seas at a rate unseen in at least 300 million years.”

Scientists Turned Back the Clock on Climate Change, by Brian Kahn, Climate Central, Feb 24, 2016

and

“If the world hopes to avoid the most catastrophic effects of climate change, humanity must emit less than half the carbon dioxide than previously thought in the coming years, a new study shows. In order to keep global warming to no more than 2°C (3.6°F) — the basis for the Paris climate agreement struck last year — scientists have devised a “carbon budget” for how much carbon can be emitted before warming crosses into catastrophic territory.”

Study Calls For Leaner ‘Carbon Budget’ to Slow Warming, by Bobby Magill, Climate Central, Feb 24, 2016

Bill Gates Q&A on Climate Change: ‘We Need a Miracle’

http://www.bloomberg.com/news/articles/2016-02-23/bill-gates-q-a-on-climate-change-we-need-a-miracle

Joe Romm wasn’t that impressed.

Speaking of Bill Gates…

“After Bill Gates explained his strategy for boosting energy access while limiting climate change in a videotaped interview we published on Tuesday, readers were invited to submit questions for the Microsoft co-founder, philanthropist and investor. Below are his answers to a few of the hundreds of questions he received on The Times and on Facebook, covering everything from artificial meat to Americans’ gas guzzling driving preferences (with some light editing of his dictated responses):”

Bill Gates Explains How to Make Climate Progress in a World Eating Meat and Guzzling Gas, by Andrew C Revkin, Dot Earth, New York Times, Feb 25, 2016

Why does everyone keep feeding the tol?

The Gates ‘Breakthrough Energy coalition’ sounds like the French response to the advent of the steam engine.

The French got together a group of experts to research the best design and deployment of this new technology. This was when the Newcomen atmospheric engine was pretty much state of the art.

Inevitably there was a disincentive for any private investment in existing technology while the authoritative expert group was still deciding on the best type.

In the UK, without such official oversight there was investment in existing machines, with minor tweaks and fundamental improvements arising out of the practicalities and competition of the deployment.

It’s a fable of evolutionary development versus intelligent design. French utilisation of steam power was delayed by several decades.

@MarkR

ARIMA has been classified as an agnostic model. Agnostic models are useful when you know little. They are less useful in this case, as we know a lot.

Ditto for breakpoints in trends. Wonderful tools to describe the past, but without a model to predict the next trend break, fairly hopeless for forecasting the future (or indeed understanding the past).

Izen –

Problem is that in this case, there is no incentive without government.

With no regulation/tax/subsidy of some sort, the cheapest way to supply electricity is by burning coal with an absolute minimum of waste treatment. Possibly natural gas at the moment. And given that we already know the ‘menu’ of power sources, unless physics is wrong, someone really does have to make an expert decision. Indeed, pretending that a magical new energy source will solve the problem is dangerous in itself because it justifies inaction.

Allowing markets to refine any solution – fine. They are an optimising function. But there is no natural economic incentive to avoid CO2 emissions.

But isn’t Izen’s point that really you need to just get on and try to do things? A committee deciding, in advance, on the best solution is likely not to realise the many problems that will be encountered, and that a more proactive approach would encounter and solve along the way.

@izen

But where the French experts were fleecing the taxpayers of France, Gates and co intend to fleece the taxpayers of the world.

Putting together a coalition of experts wouldn’t preclude new developments from being made outside that coalition.

Gates in the Bloomberg interview:

“I do think with some tuning, the Breakthrough Energy Coalition group that we’re putting together will have some characteristics of a venture fund to invest in these breakthrough ideas.”

Doesn’t sound like the French steam engine commission.

Pingback: Le Oche con mandria - Ocasapiens - Blog - Repubblica.it

“Ditto for breakpoints in trends. Wonderful tools to describe the past, but without a model to predict the next trend break, fairly hopeless for forecasting the future (or indeed understanding the past).”

Except that’s what scenarios are for and they are far more useful than linear projections into the dark. Forecasting is oversold.

Also, the physics of these breakpoints are getting better understood – the criticality of the climate system will soon be diagnosable. For instance, we are in a shift at the moment but we don’t have a great idea physically of what’s going on. We will know in hindsight.

And this “The challenge is to explain why trend-stationarity — which corresponds to a greenhouse signal — is rejected in favour of non-stationarity — which corresponds to natural variability. None of the commenters above rises to this challenge”

Has it occurred to anyone that both sides of the argument are wrong? Colleagues and I are trying to publish on this and keep getting knocked back by people who have a stake in one side or the other. The bias in the science community isn’t a lot different to that in the contrarian community – after all, they’re all people.

@roger

The fight between the temperature-has-unit-root and temperature-is-stationary-around-a-trend camps has been long and bitter. Writing that both are wrong is bound to land you in hot water.

How about something interesting? We know Mills’ price. What was Richard’s? Is there a statement of outside income that can be FOIed?

@eli

You know the way. Our FOI team loves you.

I think Richard does it for the delusional egoboo. Prove me wrong!!!!

It seems that people are forgetting, or choosing to violate, Tol’s 2nd Law of Blog. Since the discussion of outside money has come up, this question was asked and – I think – answered.

I say, “I think” because of course the answer implies that Richard wasn’t (common practice) but doesn’t say that he wasn’t. I will assume he wasn’t until I learn otherwise.

Too trusting. All they said is they didn’t pay him for an intro. Not that they didn’t pay him.

True, and technically they didn’t even say that they didn’t pay him for the intro. However, I’ll still give the benefit of the doubt till shown otherwise, too trusting or not 🙂

Pingback: A little surprised it took this long | …and Then There's Physics

For completeness.

Poor Richard, they’ve even given me some money 🙂

Pingback: Think tank throws out centuries of physics, climate scientists laugh, conservative media fawns | Dana Nuccitelli – Enjeux énergies et environnement

Pingback: Lord Lawson thinktank’s report ignores everything we know about climate science – The Guardian | Dude Times

“@roger

The fight between the temperature-has-unit-root and temperature-is-stationary-around-a-trend camps has been long and bitter. Writing that both are wrong is bound to land you in hot water.”

It already has, but I’m not giving it away

Pingback: GWPF throws out centuries of physics, climate scientists laugh, conservative media fawns | Daily Green World

Pingback: An unchallengeable strategy? | …and Then There's Physics

Richard Tol wrote (February 24, 2016 at 9:09 am) “Mills explicitly tests a linear trend (greenhouse forcing) against natural variability, and comes out in favour of the latter. His forecast follows immediately from his test, so you should find fault with his test (unless of course you want to argue, pre-Enlightenment, that you reject the method because you don’t like the result).”

Unfortunately, Mills runs this test against the full length of the temperature record. The rising forcing from GHGs only became significant in the middle of the 20th century. When considering the full length of the CET record (350+ years) or HadCRUT4 (160+ years), it is not surprising that the linear trend is lost in the noise – one isn’t expecting such a trend in 85% or 60% of the data. Nor is it surprising that the statistical model he derives describes unforced, not forced, variability. From his 2009 paper mentioned above, however, Mills knows that radiative forcing contributes 2+/-1 K/doubling of warming to the HadCRUT4 record. There is plenty of room for responsible skepticism about CAGW in his value of 2+/-1 K/doubling. It is irresponsible of him not to include this knowledge in his statistical forecasts.

This leaves the RSS record. Since the troposphere has a much lower heat capacity than the surface, unforced variability is greater. If the surface temperature record has occasional 0.5-1.0 degC warming spikes from major El Nino events and the record were only 35 years long, one might expect that unforced variability could overwhelm any forced linear trend. Unfortunately, Mills chooses to focus on an analysis with a breakpoint during the 1997-98 El Nino. A breakpoint implies that the factors driving the forced and unforced variability of temperature changed in 1998, which appears to be an absurd suggestion. It is not absurd to suggest that new (anthropogenic) factors became important some time in the first half of the 20th century.

@franktoo

Beenstock agrees with you: The time series of radiative forcing is more than linear.

Pingback: Il modello econometrico del clima – OggiScienza

Pingback: Matt Ridley doesn’t understand free speech | …and Then There's Physics

Pingback: How do Climate Change Denialists rationalise their position? - Skeptical Science

Pingback: It woz El Nino wot dunnit! | …and Then There's Physics

Pingback: Clutching at straws GWPF style | …and Then There's Physics