Andrew Dessler’s paper (technically Dessler and Forster), which he mentioned in this comment, has now appeared as a pre-print. Essentially, they use an energy balance approach to estimate equilibrium climate sensitivity (ECS), but – as I mentioned in this post – they use the tropical average temperature at 500-hPa, rather than using global average surface temperatures.

Typically, one can estimate the ECS by using that the planetary energy imbalance is given by

$\Delta N = \Delta F - \lambda \Delta T_s$,

where $\Delta F$ is the change in external forcing, $\Delta T_s$ is the change in surface temperature, and $\lambda$ relates to the ECS through

ECS $= F_{2xCO2}/\lambda$.

$F_{2xCO2}$ is the change in forcing due to a doubling of atmospheric CO_{2}.

The problem, though, is that it seems that – due to internal variability – the planetary energy imbalance, $\Delta N$, correlates poorly with changes in surface temperature, $\Delta T_s$. It appears, however, that there is a better correlation with changes in 500-hPa temperatures in the tropics, $\Delta T_a$. Hence it seems that it is better to use

$\Delta N = \Delta F - \Theta \Delta T_a$,

where $\Theta$ is the equivalent of $\lambda$. The ECS can then be estimated using

ECS $= \dfrac{F_{2xCO2}}{\Theta} \times \dfrac{\Delta T_s}{\Delta T_a}$,

where the latter term is because the ECS refers to the surface, not to tropical temperatures at 500-hPa. I’m also slightly glossing over what we would estimate $\Theta$ to be from actual observations, and what it would be after a doubling of atmospheric CO_{2}.
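To make the bookkeeping concrete, here is a minimal sketch of the estimate described above, with entirely made-up placeholder values for $\Theta$ and the temperature ratio (these are illustrative assumptions, not the paper's actual numbers or uncertainties):

```python
# Hedged sketch of the energy-balance ECS estimate described above.
# All numbers are illustrative placeholders, not the paper's results.

F_2x = 3.7          # W/m^2, forcing from doubling atmospheric CO2
theta = 1.0         # W/m^2/K, regression of imbalance against tropical 500-hPa temperature (assumed)
dTs_over_dTa = 0.8  # ratio of surface warming to tropical 500-hPa warming (assumed)

ecs = (F_2x / theta) * dTs_over_dTa  # the post's simplified relation
print(f"ECS estimate: {ecs:.2f} K")  # prints "ECS estimate: 2.96 K"
```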

The bottom line, however, is that they estimate the ECS to likely be between 2.4^{o}C and 4.5^{o}C (17-83% confidence interval), with a mode of 2.9^{o}C and a median of 3.3^{o}C. It seems, therefore, to be another example of an analysis suggesting that the ECS is probably above 2^{o}C. However, unlike the recent Cox et al. paper, it doesn’t rule out some of the higher values (around 4^{o}C, or slightly higher). This is maybe interesting, given another recent paper that suggested greater future global warming.

However, I don’t think we should read too much into this. I think the key points are that it again seems to largely rule out an ECS less than 2^{o}C, finds a best estimate of around 3^{o}C, and does not yet confidently rule out ECS values of around 4^{o}C, or higher. It would be nice if we could do the latter, but it would be wrong to think we have, when we probably haven’t.

I think there’s a problem in the last equation in your post. Units don’t work out.

Andrew,

I think I’ve now fixed that (I had , rather than – simply a latex error). Is that now correct?

Yes, now correct.

Thanks.

Andrew Dessler (@AndrewDessler) says: We use interannual variability to estimate equilibrium climate sensitivity (ECS)

Good to see a second descriptive model put up, overcoming those “we will have to wait until the CO2 doubles” concerns.

Not happy about not using the 10 bad climate models out of the 25, One feels you should have done an estimate with the lot and then one with the good models.

The time period 2000 to 2017 seems to suffer the same problems as Clive’s simple box model,

Some people expect ongoing bracket creep for a further 200 years meaning you may be underestimating as well.

Thank you for pointing out the difficulties that lie in trying to make these estimations.

And for putting in a “where might the errors be” section.

As a skeptic I note that unlikely to be < 2.0 C translates to either a 5% or 11% probability in a 13-87% range.

That keeps me happy.

Do I understand the basic approach correctly: If there are no climate feedbacks, a global temperature increase of x, would correspond to a change in energy balance of y. And if feedbacks are strong enough to double the warming response then temperature increase of x should correspond to a change of energy balance of 2y?

We then look at small changes in temperature over short time frames, and find a relatively large change in energy balance, suggesting a relatively large climate sensitivity?

But aren’t short term changes in global temperature driven by ENSO variability? So increases in temperature in the short term are usually associated with a distinctive pattern of changes in SSTs, cloudiness, balance of radiation over land vs sea, rates of mixing of subsurface ocean heat content etc. These changes are probably not going to happen in response to increases in temperature over the long term, rendering the approach suspect? Especially if limited to the tropics? And global temperature seems to correlate better with ENSO in the mid atmosphere, hence stronger response of satellite global temps to ENSO than surface temp series?

Michael,

The limit isn’t 2y. If we consider climate change specifically, then the no-feedback response is about 1.2K (i.e., the no-feedback response to doubling atmospheric CO2). This is largely because the Planck response is 3.2W/m^2/K and the change in forcing due to a doubling of atmospheric CO2 is 3.7W/m^2. Feedbacks can more than double this (the best estimate is a total change of about 3K), but we can’t rule out (with high confidence) changes of just over 4K.
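The arithmetic in that first step is simple enough to check directly; a quick sketch (the net feedback value of 1.23 W/m^2/K below is just an assumed illustration chosen to reproduce the ~3 K best estimate, not a measured quantity):

```python
# No-feedback response: forcing from doubling CO2 divided by the Planck response.
F_2x = 3.7      # W/m^2, forcing from doubling CO2
planck = 3.2    # W/m^2/K, Planck response

print(f"No-feedback warming: {F_2x / planck:.2f} K")  # prints "No-feedback warming: 1.16 K"

# Feedbacks reduce the effective response parameter below the Planck value
# (illustrative, assumed number):
lam = 1.23      # W/m^2/K, assumed net response parameter
print(f"With feedbacks: {F_2x / lam:.1f} K")  # prints "With feedbacks: 3.0 K"
```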

In principle – I think – this shouldn’t matter if there is a good correlation between the temperature changes and the planetary energy imbalance. However, because of internal variability (and how it impacts both the temperature and the feedback response) there isn’t a good correlation between the planetary energy imbalance and changes in surface temperature. That’s why Dessler and Forster suggest it is better to use the tropical temperature at 500-hPa. This seems to show a better correlation.

Hi Ken,

are you aware of this yet another paper on ECS?

“We report the actual ECS from multi-millenial simulations of two GFDL general circulation models (GCMs), ESM2M and CM3 of 3.3 K and 4.8 K, respectively. Both values are ~1 K higher than estimates for the same models reported in the Fifth Assessment Report of the Intergovernmental Panel on Climate Change obtained by regressing the Earth’s energy imbalance against temperature.” http://onlinelibrary.wiley.com/doi/10.1002/2017JD027885/abstract

There are a number of papers which consider +3°C/2xCO2 with non-zero and quite significant probability. Looking at paleo-data, I am not sure if less than 3 °C/2xCO2 (not to say 2°C!) really is supportable…

best,

Alex

P.S. Oh well, here is another ECS paper derived from observed temp. increase and well, its not a sleep-well message (for some reason, it is not included in that famous Knutti review paper “Beyond ECS…”):

” A key finding is that the sensitivity can be constrained by harmonising historical records of land and ocean temperatures with observations of potential climate-change drivers in a non-steady state, energy-balance equation via a least-squares optimisation. The global temperature increase, for a CO2 doubling, is found to lie (95 % confidence limits) between 3.0oC and 6.3oC, with a best estimate of +4oC. Under a business-as-usual scenario, which assumes that there will be no significant change in people’s attitudes and priorities, Earth’s surface temperature is forecast to rise by 7.9oC over the land, and by 3.6oC over the oceans, by the year 2100.”

https://www.cambridge.org/core/journals/earth-and-environmental-science-transactions-of-royal-society-of-edinburgh/article/climate-sensitivity/5FFF37C02923D92C3054B87448FB512A

And also this, sorry if it was mentioned before. New study from early Eocene, claiming models are (significantly) underestimating polar amplification (and also ECS (ESS?)) even using the CONSERVATIVE data (i.e. deep sea temps.):

“We find that tropical SST are characterized by a modest warming in response to CO2. Coupling these data to a conservative estimate of high-latitude warming demonstrates that most climate simulations do not capture the degree of Eocene polar amplification.”

http://www.pnas.org/content/early/2018/01/12/1714744115

best,

Alex

Here is another study I missed few days ago in Nat Comm:

“By capturing the timing and thickness of the daily cloud cycle on a global scale, however, Yin and Porporato have provided scientists with a tool for confirming if climate models aptly portray cloud formation and the interaction between clouds and the atmosphere.”

https://www.princeton.edu/news/2018/01/10/spotty-coverage-climate-models-underestimate-cooling-effect-daily-cloud-cycle

Alex

Alex,

I hadn’t seen those. Thanks, I’ll have a look at them.

You are welcome.

Here is another recent paleo-study, which I did not find here (just a brief search though). It claims: “Nevertheless, this paleodata-based analysis suggests that the equilibrium climate sensitivity for present-day is more at the high end with respect to reported values in the IPCC AR5 report (e.g., Thematic Focus Element 6 in Stocker et al., 2013).”

Full paper here (Koehler et al., 2017): https://epic.awi.de/46076/1/koehler2017p.pdf

Alex

Alex,

Thanks. Should probably read that one, given that I published a paper with Peter Koehler last year 😉

I have it on good authority from a skeptic that the Koehler paper is a joke. He claims emphatically that he is a scientist.

There was a bit of a lag, but I recently noticed that the spreadsheet kindly provided by Karsten Haustein and co-authors (11/26/17 ATTP blog) provides an “observation-based” method to estimate ECS which better accounts for ocean mixing. The paper used HadCRUT and a 2-box model with time constants of 4 and 209 years.

The spreadsheet coefficient relating man-made forcing to temperature is: 0.87 deg C of warming per W/m2 forcing. So the expected temperature for 2XCO2 forcing = 3.7 W/m2 x 0.87 = 3.2C per 2XCO2.

The higher ECS for the 2-box model vs. the classic EBM is caused by the relative weighting given to recent observations. In the classic EBM, the base and current periods are given equal weighting. As lag increases in box models, more weight is given to recent periods. Note that 1-box models, discussed in Mark Richardson’s 1/17/18 blog, provide an intermediate result of around 2.5C.

A second benefit is obtained from giving the recent data more weight: the recent observations and forcing estimates are of higher quality. It appears that the early data is degrading instead of improving ECS estimates.
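For anyone who wants to see how the lag weighting works, here is a minimal two-box step-response sketch. The time constants come from the comment above (4 and 209 years); the per-box amplitudes are an assumed split that sums to the quoted 0.87 K per W/m^2, so only the equilibrium number is tied to the spreadsheet:

```python
import math

# Two-box response to a step forcing: each box relaxes toward equilibrium
# with its own time constant. Amplitudes q are an assumed (hypothetical) split.
tau = [4.0, 209.0]   # years (from the comment above)
q = [0.44, 0.43]     # K per W/m^2 per box; sums to 0.87 (assumed split)
F = 3.7              # W/m^2, step forcing from doubling CO2

def response(t):
    """Warming at time t (years) after a step forcing F."""
    return F * sum(qi * (1 - math.exp(-t / ti)) for qi, ti in zip(q, tau))

print(f"After 70 years: {response(70):.2f} K")  # slow box still mostly unrealized
print(f"Equilibrium:    {F * sum(q):.2f} K")    # 3.7 x 0.87, about 3.2 K
```

The gap between the 70-year and equilibrium numbers is essentially the ECS/TCR distinction being discussed.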

angech:

“Not happy about not using the 10 bad climate models out of the 25, One feels you should have done an estimate with the lot and then one with the good models”

We did exactly that — see Fig. 6 and accompanying text.

“The time period 2000 to 2017 seems to suffer the same problems as Clive’s simple box model,

Some people expect ongoing bracket creep for a further 200 years meaning you may be underestimating as well.”

Take a look at our other recent paper (https://www.atmos-chem-phys-discuss.net/acp-2017-1236/) to see some details about the $\Theta$ parameter we use. In models at least, this parameter seems robust and does not creep — in fact, that’s one of its primary advantages over the traditional energy balance framework. As far as 17 years of data goes, the shortness of the record is incorporated in our uncertainty estimate; see Fig. 1.

Michael Hauber:

“But aren’t short term changes in global temperature driven by ENSO variability? So increases in temperature in the short term are usually associated with a distinctive pattern of changes in SSTs, cloudiness, balance of radiation over land vs sea, rates of mixing of subsurface ocean heat content etc. These changes are probably not going to happen in response to increases in temperature over the long term, rendering the approach suspect? ”

Yes, this is correct. That’s why we specifically account for the difference between interannual variability and the long-term response in our analysis (see Eq. 4 and 6 and accompanying text).

To follow up on what Andrew has said, I simplified things in this post. The full equation for estimating the ECS (i.e., Equation 6 in the paper, rather than the last equation in my post) is actually

ECS $= F_{2xCO2} \times \dfrac{\Theta_{iv}}{\Theta_{2xCO2}} \times \dfrac{1}{\Theta_{iv}} \times \dfrac{\Delta T_s}{\Delta T_a}$,

which introduces a correction ($\Theta_{iv}/\Theta_{2xCO2}$) that accounts for the measured/observed $\Theta$ being slightly different to what it would be were we to wait for atmospheric CO2 to double.

> We did exactly that — see Fig. 6 and accompanying text.

Doc has a knack for having an opinion on stuff he hasn’t even read.

Willard: “Doc = angech”?

JH,

Yes.

Andrew E Dessler says:

“angech:“Not happy about not using the 10 bad climate models out of the 25, One feels you should have done an estimate with the lot and then one with the good models” We did exactly that — see Fig. 6 and accompanying text.”

Thanks.

“Willard says: Doc has a knack for having an opinion on stuff he hasn’t even read.”

–

Cut a bit of slack this time, Willard.

I looked at the pre print in numbered form.

I read it all until the Bibliography and thought that was the end [normally it is].

I now see the graphs are appended after that.

–

Including the 2 “bad” graphs from GISS does change one of the conclusions a tiny bit.

–

The take home message though is that Andrew does feel there are ways to try to assess ECS without waiting hundreds of years.

This process is very input- and opinion-dependent; however, he has put in a good framework for more than a starting point.

Not sure why you think it rules out lower values. That’s only a 67% CI. Difference between lower bound and median is 0.9 C. So that means that the lower bound of the 95% CI is around 1.5 C.

> I read it all until the Bibliography

You might have missed lines 163, 170, 177, and 191, Doc.

Do you have a quote that may explain your unhappiness about “not using the 10 bad climate models out of the 25”?

You might have missed lines 163, 170, 177, and 191, Doc.

[see Fig. 6 and accompanying text.]

Did not miss them. No graphs there, though.

–

Do you have a quote that may explain your unhappiness about “not using the 10 bad climate models out of the 25”?

Several.

“There is also a puzzling peak below 1°C. These low values come from the GISS models (Fig. 7a) and if they are removed from the ensemble, the bump below 1K disappears.

We find that 15 of the 25 CMIP5 models produce estimates in agreement with the CERES observations. If we limit the distributions to just those models, we obtain the ECS distribution in Fig. 6c (hereafter referred to as the “good” distribution).

We consider the “good” ECS distributions to be the best estimates of ECS from this analysis. Those ECS distributions have 17-83% confidence intervals (corresponding to the IPCC’s likely range) of 2.4-4.4 K.”

–

Do you see why ” If we limit the distributions to just those models” might be a concern?

-1,

Yes, but I said “largely”, and I was meaning that, compared to other estimates, it has increased the lower bound of the 67% CI to above 2K.

“Cut a bit of slack this time, Willard.”

angech, when you repeatedly indulge in intellectually dishonest activities, such as misleading selective quoting, even after it has been pointed out to you before that this kind of thing is dishonest, why would anyone cut you some slack at this point?

Why am I not surprised that angech is being a bit selective in his quoting yet again?

The bits that angech omitted more strongly suggest to me (I’m no expert) that the “bad” models were left out because they have a property that is inconsistent with observed values, a property that is required for low ECS. That is a pretty good reason for leaving them out as it implies their ECS estimates are unreliable. Reading angech’s selection, one might think they were left out because they had low ECS.

“Do you see why “If we limit the distributions to just those models” might be a concern?”

No, it looks to me like an attempt at scientific diagnosis of an anomaly in the models and a reasonable approach to preventing that from biasing the conclusions.

Angech – either quote in full and emphasise the bits you want to highlight, or at least annotate your edits to show where you have edited the quote, e.g. use … to show something is missing – especially if it is within a sentence.

“We consider the “good” ECS distributions”

I am willing to put that down to a problem with the greek symbol; it should actually be “We consider the “good $\Theta$” ECS distributions”, which makes it clear that it isn’t the ECS that makes the model good, but the $\Theta$, which seems an important distinction, so please check your quotes.

This method is independent of other ECS estimates and uses recent observations and model metrics that haven’t been used before so a nice add to the weight of evidence. In general agreement with several other recent studies using novel approaches.

I agree with Angech that you have to look carefully whenever people throw out data that gives them weird results. This kind of post hoc data screening can lead to really bad science (I’ve called Roy Spencer out on exactly that kind of chicanery). That said, we have a good reason to question those models — they do disagree with observations of $\Theta_{iv}$ and the bump below 1 K is almost certainly unreasonable. In any event, we provide both calculations, so if you don’t believe the good-model calculation, you can rely on the all-model calculation, which yields (generally) similar values — the main difference is the 5% percentile value, with a smaller difference for the 17% percentile values.

“Not sure why you think it rules out lower values. That’s only a 67% CI. Difference between lower bound and median is 0.9 C. So that means that the lower bound of the 95% CI is around 1.5 C”

Per Table 3, the lower bound of the 5-95% confidence interval is 1.9 and 2 K for the good-$\Theta$ calculations. Note that the ECS distributions are not normal and they fall off rapidly as one heads to low ECS. In addition, the IPCC typically uses the 66% confidence interval, so our estimated ECS range is narrower than the IPCC’s.

> Do you see why ” If we limit the distributions to just those models” might be a concern?

No, Doc. I don’t. Show me.

I hope your unhappiness rests on something more tangible than a rhetorical question.

“In any event, we provide both calculations, so if you don’t believe the good-model calculation, you can rely on the all-model calculation, which yields (generally) similar values — the main difference is the 5% percentile value, with a smaller difference for the 17% percentile values.”

Now folks understand why one asks for the code (the exact calculations, so others can do the same thing).

I have not found an analysis that is assumption free. When you supply the actual math you used, you give others the ultimate power they need to question your assumptions. They can do it for themselves, choose different assumptions, defend them, and hopefully clarify (not necessarily resolve) the issues.

What’s not to like?

My bet is Dr D is a great classroom teacher.

Read through the paper today. There are a number of issues with it, which make their claim that this paper rules out ECS < 2K unjustified.

1. They completely ignore ENSO. ENSO causes short term warming, which is greatly amplified in the tropical troposphere compared to the surface. Given that their time period starts with a La Nina and ends with a strong 2015-2016 El Nino, this time period should cause an overestimation of ECS. Even worse, not taking into account ENSO variability should cause them to be overconfident in their estimates.

2. They essentially assume a constant ECS/TCR ratio. Which is clearly ridiculous and strongly falsified by climate models. ECS/TCR ratio increases with sensitivity. so if climate models are oversensitive then they are likely overestimating this ratio.

3. The treatment of autocorrelation could be better, which likely results in overconfident estimates. They use OLS everywhere, even though in all cases it is inappropriate due to strong temporal autocorrelation. They try to correct for autocorrelation in 1 of the many regressions they do using an approach by Santer et al. But there are so many other places where autocorrelation is neglected.

4. Usage of 67% CI as I stated earlier.

-1,

I may have mentioned this before, but you’re remarkably confident for someone who has never done any published research.

1. I’m not sure this is the case since I think they’re using a metric that correlates quite well with the planetary energy imbalance.

2. I don’t know where this comes in. The ratios they use are the feedback ratios and the ratio of the change in surface temperature to the change in 500-hPa temperature.

3. Again, not sure of the relevance of this; they’re not estimating the uncertainty in the temperature trends.

4. I think Andrew has already responded to this; the distribution is skewed.

Willard

“No, Doc. I don’t. Show me”.

Not unhappy.

Both Andrew here and Mosher in the past have given good reasons why some sets of observations could be excluded from results.

Outliers in particular.

But one has to have an adequate observational range to know what constitutes a genuine outlier.

The problem here, as DM alludes to, is that the theory behind working out an ECS causes two data sets to have results that appear as outliers, not that the observation sets themselves are considered to be outliers.

Several choices, work with the observations that agree with your theory or rework your theory or accept the fact that natural variation at times obscures the theory.

I believe you may be being disingenuous here, why I am not sure but I do not think there is any ability to productively discuss the matter.

DM

I am not trying to upset you. Please shoot down the ideas or the construct as you see fit and proper. We have simply made different choices on our assessment of how to best treat observation v theory differences.

Plus I cannot highlight sentences.

“but you’re remarkably confident for someone who has never done any published research”

How would you know?

“I’m not sure this is the case since I think they’re using a metric that correlates quite well with the planetary energy imbalance.”

It also correlates well with ENSO.

“I don’t know where this comes in.”

They use lambda_iv / lambda_2xCO2 and theta_iv / theta_2xCO2. This usage is effectively assuming a constant ECS/TCR ratio.

“Again, not sure the relevance of this”

I’ll give you an example: in one of their approaches they linearly detrend the temperature time series and then use that time series in the calculation where they actually try to take autocorrelation into account. They don’t say if they take into account the uncertainty in the estimate of the line of best fit in the OLS during the detrending. But assuming they do, this estimate of uncertainty will be an underestimate since they neglect autocorrelation in the temperature time series during the detrending. If the autocorrelation factor between the residuals of consecutive years is rho, then they are underestimating the variance of this detrending line by a factor of 1/(1 – rho^2).

Anders –

I know you’d rather keep this thread free of this shit, but I thought this might merit mention (feel free to delete, of course).

angech –

Lest anyone not see this comment of yours:

https://judithcurry.com/2018/02/05/marvel-et-al-s-new-paper-on-estimating-climate-sensitivity-from-observations/#comment-865699

Including such nuggets as this:

I see numbers of otherwise sensible scientific bloggers making irrational statements.

David Young complains that Anders bans anyone who is knowledgeable and disagrees, and you, who writes comments here in disagreement, feel no obligation to correct his error, but instead respond with a conspiratorial screed.

Should we conclude that you aren’t banned by Anders because you aren’t knowledgeable?

The range in people’s notions of integrity and accountability is always fascinating.

-1=e^iπ says:

It’s your lauded highly-credentialed climate skeptics such as Tsonis and Lindzen that got all these cyclic phenomena completely wrong, and if consensus science is behind the curve on any of this, blame them.

Of course it could be better. Any cyclic behavior is by definition autocorrelated, and so you should again be pointing fingers at Tsonis and Lindzen for derailing the research.

-1,

It was just a comment and I may well be wrong, but I think I know who you are and I think you’ve never published any research.

Yes, but I think the issue is more to do with how this then impacts the planetary energy imbalance, rather than the correlation with ENSO events.

Joshua,

Nothing really surprises me anymore.

I’m sure someone can correct me if I’m wrong, but I thought including auto-correlation when estimating the errors in the linear temperature trends reduced the uncertainty, rather than increased it.

-1=e^iπ:

1. They completely ignore ENSO. ENSO causes short term warming, which is greatly amplified in the tropical troposphere compared to the surface. Given that their time period starts with a La Nina and ends with a strong 2015-2016 El Nino, this time period should cause an overestimation of ECS. Even worse, not taking into account ENSO variability should cause them to be overconfident in their estimates.

No, we don’t ignore ENSO. ENSO is most of the observed signal, as we discuss prominently in the paper. And we do regressions of TOA flux vs. temperature, not trend calculations, so starting in a La Nina and ending in an El Nino is irrelevant. This seems so far from accurately describing our analysis that I don’t know how to respond further.

2. They essentially assume a constant ECS/TCR ratio. Which is clearly ridiculous and strongly falsified by climate models. ECS/TCR ratio increases with sensitivity. so if climate models are oversensitive then they are likely overestimating this ratio.

I don’t think this is right. I can’t really respond further because I have no idea where one could even get that impression.

3. The treatment of autocorrelation could be better, which likely results in overconfident estimates. They use OLS everywhere, even though in all cases it is inappropriate due to strong temporal autocorrelation. They try to correct for autocorrelation in 1 of the many regressions they do using an approach by Santer et al. But there are so many other places where autocorrelation is neglected.

Can you name one or two places where we ignore autocorrelation?

4. Usage of 67% CI as I stated earlier.

Asked and answered.

ATTP: Including autocorrelation has the effect of reducing the number of degrees of freedom in a data set, thereby increasing the uncertainty.
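For readers following along, the effect can be seen with a short simulation using a lag-1 (Santer-style) effective-sample-size adjustment; the AR(1) series below is synthetic, not real temperature data:

```python
import numpy as np

# Synthetic trend + AR(1) noise, then OLS trend with naive and
# autocorrelation-adjusted standard errors (effective-sample-size method).
rng = np.random.default_rng(0)
n, rho, trend = 200, 0.6, 0.01

noise = np.zeros(n)
for i in range(1, n):
    noise[i] = rho * noise[i - 1] + rng.normal(scale=0.1)
t = np.arange(n)
y = trend * t + noise

slope, intercept = np.polyfit(t, y, 1)
resid = y - (slope * t + intercept)

# Lag-1 autocorrelation of residuals shrinks the effective sample size.
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
n_eff = n * (1 - r1) / (1 + r1)

s2 = np.sum(resid**2) / (n - 2)
se_naive = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))
se_adj = se_naive * np.sqrt((n - 2) / (n_eff - 2))

print(f"lag-1 r = {r1:.2f}; naive SE = {se_naive:.5f}; adjusted SE = {se_adj:.5f}")
```

With positively autocorrelated residuals the adjusted standard error comes out larger, which is the "increasing the uncertainty" point above.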

-1=e…:

“I’ll give you an example: in one of their approaches they linearly detrend the temperature time series and then use that time series in the calculation where they actually try to take autocorrelation into account. They don’t say if they take into account the uncertainty in the estimate of the line of best fit in the OLS during the detrending. But assuming they do, this estimate of uncertainty will be an underestimate since they neglect autocorrelation in the temperature time series during the detrending. If the autocorrelation factor between the residuals of consecutive years is rho, then they are underestimating the variance of this detrending line by a factor of 1/(1 – rho^2).”

Yes, we use detrending in ONE approach. What you forgot to add was that we also do the $\Theta_{iv}$ calculation a different way, which doesn’t require detrending (the “R-F method”, see the paper for a discussion). You get the same answer either way. Thus, this is clearly not an issue.

Andrew,

Thanks, not that surprised that I had it the wrong way around.

> I have no idea where one could even get that impression.

That’s the problem when ClimateBall players mismanage quotes and cites.

> Not unhappy.

Here’s what you said:

***

> I do not think there is any ability to productively discuss the matter.

To “discuss” implies a bit more than rhetorical questions about some unidentified concerns based on misreading the paper, Doc.

Autocorrelation can mean something completely different to a physicist than to a statistician. For example, to a physicist, the power spectrum of a signal is the Fourier transform of the autocorrelation of the data series. That data series could be in the time domain or in the spatial domain. For the latter, a diffraction pattern is a Fourier power spectrum, so by removing the autocorrelation from the measured pattern, one is essentially removing everything! Quantum mechanics autocorrelates everything during diffraction, so one has little control over that.

However, to a statistician, the definition has to be much narrower in that it indicates some nuisance part of the data that they want to get rid of. So that if there is a relatively clear deterministic signal, but it is sitting on top of red noise, it is beneficial to somehow remove (or estimate the content of) that red noise to better isolate the magnitude and character of the targeted signal.

The complication comes about when what you think is red noise, either an unbounded random walk or a bounded Ornstein-Uhlenbeck random walk (physics terms for the continuous autoregressive models that statisticians use), is actually a deterministic process with a known time-series behavior. These parts can easily be compensated out of the signal. It could be a daily signal (highly autocorrelated and mean value deterministic), a seasonal (same), a tidal signal (same), and even a volcanic signal (well-known after the fact). And the same can be said about the ENSO signal, which is just a deterministic lunisolar-driven dipole, easily separated from the data, and not close to red noise.

So what’s left after all this is some indeterminate noise signal, some hint of multidecadal fluctuation, and what appears to be a gradual trend up due to CO2. In this case, all the statistical analysis is in determining whether this latter gradual trend is some vestige of a long-scale random walk term, instead of being the deterministic response to a CO2 forcing. Of course, to somebody who wants to play games – like our imaginary friend e^iPi here – he will say that one has to account for the “strong temporal autocorrelation”. All this means is that there is some character to that trend that may have a random walk component. Yet, from the residual fluctuations, it doesn’t appear to have a real strong red noise character at all — since all those other contributions from ENSO, etc. have already been compensated out. In that case, they will claim that it is another unknown random walk that has a fractal property with a Hurst exponent, which can undergo jumps via a fat-tailed probability distribution. I think this is what Mr. Imaginary is referring to because ordinary least squares (OLS) is thin-tail only, and Dessler says they are accounting for the minuscule red noise remaining just by looking at the frequency response in the residuals. Dessler is right that this does increase the uncertainty, because some fraction of the trend could now be a random walk component, but I am certain that i-man will come back with the Hurst argument.

The issue is that each one of these additional assumptions invokes some premise that leads one down a more and more implausible path. All they have left is the uncertainty from this one fractal noise term favored by the skeptic Koutsoyiannis, whom Lovejoy essentially shot down with a paper called “Why the Warming Can’t be Natural: The Nonlinear Geophysics of Climate Closure”.

I recall Lovejoy posted something over at Curry’s blog a while ago on this and got all the regulars there in a snit.

https://judithcurry.com/2015/10/23/climate-closure/

– and –

https://judithcurry.com/2015/11/03/natural-climate-variability-during-1880-1950-a-response-to-shaun-lovejoy/

Lacking actual data on historical forcings, I find Lovejoy’s analysis to be a bit weak. It gives us a plausible model for disambiguating natural variability from forced warming, but, you know what they say about parameters and elephants. Just because the model is plausible doesn’t mean it’s true. Curry can just as easily claim that the climate varies on multi-decadal or centennial timescales, and that these variations are a partial explanation for the current warming.

Lovejoy’s arguments would be strengthened somewhat if we had better paleoclimate data — then we could make firmer claims about the range of natural variability (whether forced or unforced).

Alternatively, if we can understand the physics of internal unforced variability, we’d also be able to rule out (or in) the possibility of large, unforced centennial variations. From a physics perspective, the evidence for these is lacking, but not conclusively so: we can’t rule them out or in.

Instead, we go with Occam’s Razor: we expect to see warming because of CO2, we do see warming and about as much as we’d expect, and we have no other plausible mechanisms right now —> CO2 is the best bet.

“But how do you know it’s CO2 and not internal variability?”

We don’t. We can’t be 100% certain. It’s just very likely.

Ahhh, the uncertainty monster rears its head again.

Found some more issues.

1. delta_T_s / delta_T_a is estimated via linear regression. But given that it uses monthly time series data, this is going to have significant autocorrelation. As a result, its uncertainty is underestimated.

2. The methodology used to estimate error assumes that the errors from F_2xCO2, theta_iv / theta_2xCO2, 1/theta_iv and delta_T_s / delta_T_a are independent. This is wrong due to a few factors:

A. Models with higher sensitivity tend to have lower forcings and models with higher sensitivity tend to have lower theta_2xCO2 / theta_iv ratios. Therefore, F_2xCO2 and theta_iv /theta_2xCO2 are positively correlated and neglecting this correlation results in an underestimation of uncertainty.

B. The claim that theta_iv / theta_2xCO2 and 1/theta_iv are independent might initially be reasonable since they come from different sources (observations vs climate model data). But the ‘good’ estimates that use observations of theta_iv to restrict theta_iv / theta_2xCO2 throw this possible independence completely out the window. From figure 7 it is clear that a low theta_iv corresponds to a high theta_iv / theta_2xCO2. Thus theta_iv / theta_2xCO2 and 1/theta_iv are positively correlated and neglecting this correlation results in an underestimation of uncertainty.

A better method is to estimate theta_iv / theta_2xCO2 as a function of theta_iv (a linear function would probably suffice) and then use observational estimates to infer theta_2xCO2, all while adequately propagating error. Also, you want to take into account the positive correlations with F_2xCO2.
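The effect claimed in points A and B above can be sanity-checked with a small Monte Carlo. This is only a sketch with made-up means, spreads, and a made-up correlation (not values from the paper): for a product of positive quantities, neglecting a positive correlation between the factors understates the spread of the product.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative (made-up) means, spreads, and correlation for two positive
# factors of a product, e.g. an F_2xCO2-like term and a ratio-like term.
mu = np.array([3.7, 1.0])
sd = np.array([0.4, 0.15])
rho = 0.5  # assumed positive correlation between the two factors

cov = np.array([[sd[0]**2,            rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1]**2           ]])

# Sample the factors jointly (correlated) and, for comparison, independently
corr = rng.multivariate_normal(mu, cov, size=n)
indep = np.column_stack([rng.normal(mu[0], sd[0], n),
                         rng.normal(mu[1], sd[1], n)])

spread_corr = np.std(corr[:, 0] * corr[:, 1])
spread_indep = np.std(indep[:, 0] * indep[:, 1])

# Treating positively correlated factors as independent understates the
# spread of their product
print(spread_corr, spread_indep)
```

The same logic applies to however many factors enter the product; the variance of a product of positive, positively correlated quantities picks up an extra covariance term that the independence assumption drops.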

@ ATTP –

“I think I know who you are and I think you’ve never published any research.”

Wow, am I famous?

“I think the issue is more to do with how this then impacts the planetary energy imbalance, rather than the correlation with ENSO events.”

You want to take into account the changes in TOA flux explained by ENSO when trying to estimate the theta_iv. This is especially true if you use a very short observational period that starts with a La Nina and ends with a very strong El Nino.

“I thought including auto-correlation when estimating the errors in the linear temperature trends reduced the uncertainty, rather than increased it.”

No

@ Andrew E Dessler

I’m glad you are participating in these comments, because now we have the opportunity to point out issues and suggest improvements. It would be nice to see what the more appropriate estimates are.

“we do regressions of TOA flux vs. temperature, not trend calculations”

Please put an ENSO index in the regression. I think that will improve things and make the results more reliable.

“I don’t think this is right. I can’t really respond further because I have no idea where one could even get that impression.”

The fact that the ECS/TCR ratio increases with climate sensitivity is well understood and expected based upon the physics of feedbacks. It is surprising that you are not aware of this.

If you would like to verify it for yourself, I suggest you take the data of tables 1 and 2 from http://onlinelibrary.wiley.com/doi/10.1002/jgrd.50174/full. It takes 1 minute in excel to verify this relationship. Furthermore, you can easily verify that radiative forcing (specifically non-CO2 radiative forcing such as from aerosols) is negatively related with climate sensitivity.

Please instead treat theta_2xCO2 as a function (linear will probably suffice) of theta_iv, and then use theta_iv estimates to infer theta_2xCO2, I think that makes more sense than multiplying theta_iv / theta_2xCO2 and 1/theta_iv. Also taking the positive correlations between F_2xCO2 and 1/theta_2xCO2 into account instead of treating them as independent is a good idea.

“Can you name one or two places where we ignore autocorrelation?”

The estimate of delta_T_s / delta_T_a is particularly concerning.

“Yes, we use detrending in ONE approach.”

Yes, so please correct for this approach by taking autocorrelation into account when estimating the error caused by linear detrending. Or just stick with the forcing approach, which is probably better anyway.

Actually, maybe treating F_2xCO2/theta_2xCO2 as a function (say linear) of theta_iv would be even better (and simpler) as this would allow you to more directly obtain estimates while taking a lot of these correlations into account.

-1,

No, I don’t think so.

A few comments. Rather than moving on to a bunch of new issues, why not resolve the ones you already claim to have found, which appear not to be issues. Also, you really should probably avoid following the Roger Pielke Sr approach of telling other people what they should do. At best, this is a discussion, not a chance for you to suggest that Andrew writes a completely different paper.

-1

“Please put an ENSO index in the regression. I think that will improve things and make the results more reliable.”

Does that mean that you were in error when you wrote?:

“1. They completely ignore ENSO.”

Windchaser said:

“Alternatively, if we can understand the physics of internal unforced variability, we’d also be able to rule out (or in) the possibility of large, unforced centennial variations. From a physics perspective, the evidence for these is lacking, but not conclusively so: we can’t rule them out or in.”

This makes a lot more sense than whatever the imaginary root guy is mumbling about.

The largest of the multidecadal variations has been suggested by scientists at NASA JPL as being correlated to the length-of-day (LOD) deviation. They think it is either a slow momentum transfer in the ocean, or perhaps glacial mass, or some other active mantle process which is associated with a slowly varying global temperature change. At some point Curry and her student latched on to this idea and rebranded it as a “Stadium Wave”, since the delta LOD also seemed to track longer cycles of the PDO and AMO with a specific phase lag. Just about all the other variations in LOD are directly or indirectly tied into lunisolar tidal cycles, but this one has a longer variable period of 60 years and is obviously disconnected from the shorter cycles. It appears that the magnitude has yet to exceed +/-0.1 C when correlated to LOD so it’s not close to making up the 1C excursion yet.

“No, I don’t think so.”

Famous enough for you to claim to know who I am, apparently.

“the Roger Pielke Sr approach of telling other people what they should do”

That was not my intention. I was merely suggesting improvements that could be made. Dessler’s paper is interesting and it’s nice when people try to find new and creative ways to estimate things such as climate sensitivity. Surely we all want better estimates in order to get closer to the truth.

-1:

Taking autocorrelation into account only affects the uncertainty, not the central value. Given that models provide centuries of data (compared to 17 years for CERES), the error bars (with or without autocorrelation) are tiny, so we don’t consider uncertainty in our calculation of the ratio. It’s probably reasonable for us to add something about that to the text.

Any correlation that exists is likely a quality of the model ensemble. There is no reason to think that these quantities are actually correlated in real life, so no reason to correlate them in our calculations. We did some tests on correlating various model parameters in our calculations and it tends to reduce, not enlarge, the uncertainty (although the effect is small).

That makes zero sense to me. I think you need to look at Fig. 7 again.

That’s an emergent constraint approach. Me no likey.

This doesn’t make any sense. What we are trying to get from the regression IS the response to ENSO, our theta_iv.

I don’t see how this has any relevance for our analysis.

That’s essentially an emergent constraint approach. Per above, me no likey that.

See response to first point above.

-1,

Doesn’t follow, but anyway.

Okay.

A reasonable way forward, in my view, is for you to go back to your first set of issues, and check which of them have been addressed by Andrew, and then acknowledge that. Once those have been resolved, one could then move on to some new points of discussion.

angecj wrote

“The problem here, as DM alludes to, is that the theory behind working out an ECS causes two data sets to have results that appear as outliers, not that the observation sets themselves are considered to be outliers.”

I alluded to no such thing. Please refrain from assigning opinions to me that I don’t actually hold. This is not the first time I have asked you not to do this!

“I believe you may be being disingenuous here, why I am not sure but I do not think there is any ability to productively discuss the matter.”

I don’t believe Willard is being remotely disingenuous here, just that the text you quoted (especially if you put back the bits you edited out) is not grounds for objecting to models being left out of the analysis. In fact the text gives a good reason for leaving them out.

“DM I am not trying to upset you.”

Well, repeatedly using dishonest selective quoting is likely to upset people. It shows utter disrespect for the people you are talking to (it suggests that they don’t deserve honest treatment). If you don’t want to upset people, try treating them with a bit more respect. I’m actually not that upset on this occasion, just sad that we can’t have a rational discussion about science on blogs without this sort of behaviour.

“Please shoot down the ideas or the construct as you see fit and proper.”

I did, but you have singularly failed to deal with the fact that the text of the quote (especially the bits you left out) gives a good reason for leaving out some of the models. What is the point in me “shooting down” the arguments if you just carry on repeating them?

“We have simply made different choices on our assessment of how to best treat observation v theory differences.”

No, you gave a dishonest selective quote, that is the issue.

“Plus I cannot highlight sentences.”

Yes, you can. Just write the characters >b< before the text you want to highlight and >/b< at the end (hopefully I have implemented that correctly here).

Not being able to highlight is no justification for editing out the parts of the quote that destroy your argument – that is just dishonest.

Oops, should have been <b> and </b> in the penultimate paragraph.

PP wrote “However, to a statistician, the definition has to be much narrower in that it indicates some nuisance part of the data that they want to get rid of. ”

This is not true. Whether the autocorrelation is a nuisance or the property of interest depends on the purpose and nature of the analysis. Please can we give the uncharitable caricatures of other fields a rest? Unless of course they are amusing.

ATTP wrote

“A reasonable way forward, in my view, is for you to go back to your first set of issues, and check which of them have been addressed by Andrew, and then acknowledge that.” [emphasis mine]

If there was one thing that would improve the discussion of science on blogs it would be that.

How not to do it: Dessler writes:

-1 responds:

DM

Thanks. Unfortunately I am in the position where just commentating causes upset.

Two things that might help are to know that I do try very hard not to comment on your opinions since being a nuisance 18 months ago though I sometimes reply to criticism or constructive comments when you directly involve me. I will try harder to only respond to the constructive comments and help that you give me in future.

If I appear rude in not being able to change my style of writing, pasting etc that is my personality and ability, which should be accepted, just like my weight etc. Advice is good, tolerance is much appreciated.

Double thanks.

Highlight tip worked.

I only put it around the thanks and have no idea why the whole field has highlighted!

[Mod: fixed. You have to close the bold section using the second of the HTML commands that Dikran presented in his earlier comment.]

Calling something a “nuisance” is not being denigrating. A nuisance variable is a technical term that is well-defined, e.g. here in Wikipedia —

“fundamental to the probabilistic model, but that is of no particular interest in itself or is no longer of interest”

or

“used in the context of statistical surveys to refer to information that is not of direct interest but which needs to be taken into account in an analysis”.

To me it is still jarring to see how the phrase “account for autocorrelation” is used, considering that the forcing response due to CO2 is entirely autocorrelated. That’s why I said that this is a narrow (or perhaps selective) definition that statisticians use.

Hoping that this would help others to see how different fields understand concepts.

“Calling something a “nuisance” is not being denigrating.”

Nobody said otherwise. The thing that is uncharitable is the claim that a statistician would have only one view of autocorrelation – i.e. that it is a nuisance part of the data:

“However, to a statistician, the definition has to be much narrower in that it indicates some nuisance part of the data that they want to get rid of.”

Of course they only have that view of autocorrelation. A statistician is not going to calculate the autocorrelation of a structure to determine, for example, a diffraction pattern — that’s the domain of a physicist or materials scientist. Wikipedia says

“Different fields of study define autocorrelation differently, and not all of these definitions are equivalent.”

A statistician is going to use the term autocorrelation in terms of statistical analysis, whereas a physicist is going to use it in terms of the fundamental property or behavior of the target being studied.

“Of course they only have that view of autocorrelation. ”

I am a statistician (more specifically, I work in machine learning, which is essentially computational statistics), and there is more to statistics than regression analysis. The way in which a statistician would use autocorrelation depends on the purpose of the analysis (and that often involves engagement with the physics of the problem, c.f. Tukey’s quote about the best thing about statistics being that you get to play in everybody’s back yard). Sometimes the autocorrelation structure is the information you are looking for, sometimes it is a nuisance (hint: there is considerable overlap between time series analysis and signal processing); see the list of applications at the end of your Wikipedia page.

“Wikipedia says “Different fields of study define autocorrelation differently, and not all of these definitions are equivalent.””

A different mathematical definition does not imply that different fields cannot use something in more than one way.

If you want a specific example, you could try using linear predictive analysis to estimate the resonant modes of a cavity from signals recorded from inside. If you are interested in bats, then the autocorrelation is probably a nuisance, but if you are interested in the cave, then the autocorrelation is the signal.
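The cavity example can be sketched with a toy resonator. Here an AR(2) process stands in for a resonant cavity (the pole radius and resonant frequency are invented for illustration), and the AR coefficients — and hence the resonance — are recovered from the sample autocorrelation alone via the Yule-Walker equations. In this setting the autocorrelation is the quantity of interest, not a nuisance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "cavity": an AR(2) process with a known resonance
r, theta = 0.95, 2 * np.pi / 20           # pole radius, resonant frequency
a1, a2 = 2 * r * np.cos(theta), -r**2     # true AR(2) coefficients

n = 200_000
x = np.zeros(n)
eps = rng.normal(size=n)                  # driving white noise
for t in range(2, n):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + eps[t]

def acf(y, lag):
    """Sample autocorrelation at a given lag."""
    y = y - y.mean()
    return np.dot(y[:-lag], y[lag:]) / np.dot(y, y)

# Yule-Walker: recover the AR coefficients from the autocorrelation alone
r1, r2 = acf(x, 1), acf(x, 2)
a1_hat = r1 * (1 - r2) / (1 - r1**2)
a2_hat = (r2 - r1**2) / (1 - r1**2)
print(a1_hat, a2_hat)  # close to the true (a1, a2)
```

From (a1_hat, a2_hat) one can read off the estimated pole radius and resonant frequency of the "cavity", which is exactly the linear-predictive-analysis idea: the autocorrelation carries the signal.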

“A different mathematical definition does not imply that different fields cannot use something in more than one way.”

That’s why I brought the topic up. When the imaginary one said

“They use OLS everywhere, even though in all cases it is inappropriate due to strong temporal autocorrelation.”

and I have no idea what his background is, then it’s time for some elaboration. If anything shows a strong autocorrelation, it’s the CO2-influenced trend. The trend fits to ln(CO2) with OLS as a simple fitting technique, so the inappropriateness of using this is specifically defined in a statistical sense, not in terms of the basic physical model.

A good example of this is tidal analysis. There might be 6 or 7 main tidal factors, and one can use multiple linear regression to create a composite signal to match the tidal time series. But, according to the imaginary one, this can’t be done because tides show “strong temporal autocorrelation”. Of course tides show this! That’s the entire point! If there was no autocorrelation, you wouldn’t be able to deterministically model the tides at all. There might be other autocorrelations hiding in there, and if those are unknown, then of course appropriate statistical approaches should be used, but for tides, that’s a second-order effect.

It’s truly an overloaded term and when put into the hands of deniers that want to murk things up, it will lead to lots of confusion.
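The tidal-analysis point above can be illustrated with a toy harmonic regression. The constituent periods and amplitudes below are invented for illustration; the point is that a strongly autocorrelated but deterministic signal is exactly what multiple linear regression on sine/cosine regressors is built to recover:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(2000.0)

periods = [12.42, 12.00, 25.82]   # hypothetical tidal constituents
true_amps = [1.0, 0.5, 0.3]       # invented amplitudes

signal = sum(a * np.sin(2 * np.pi * t / p)
             for a, p in zip(true_amps, periods))
y = signal + rng.normal(scale=0.2, size=t.size)   # add measurement noise

# Multiple linear regression: a sine and cosine column per constituent
X = np.column_stack([f(2 * np.pi * t / p)
                     for p in periods for f in (np.sin, np.cos)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

amps = np.hypot(beta[0::2], beta[1::2])   # recovered amplitudes
print(amps)  # close to true_amps despite the strong autocorrelation
```

The deterministic autocorrelation here is the model, not a violation of it; it is only autocorrelation remaining in the residuals that would call the OLS uncertainties into question.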

Why is it that acknowledging a mistake is so difficult on blogs? You gave an unfair caricature of statisticians, then doubled down, and now are evading the key point that statisticians do not necessarily view autocorrelation as a nuisance component.

Why is it that having to win an argument is so important in a blog comment section?

You didn’t win the argument so I suggest that you let it go.

> Why is it that having to win an argument is so important in a blog comment section?

How many comments from you on Judy’s physics book should I quote to deflate that rhetorical question, Web?

The “this” in DM’s “this is not true” refers to the narrowness of the definition of autocorrelation, hence why he said:

Again, quotes can do marvels.

Gives you all a great chance to show that a lunisolar-forced model of ENSO is a statistical artifact instead of being clearly deterministic using only a minimal number of degrees of freedom (3 lunar periods plus the annual cycle).

I do have a dog in this hunt. Curry will be welcome to criticize my book when it comes out later this year. If she finds something wrong, I will add an erratum sheet and also put it on the blog.

PP wrote “Why is it that having to win an argument is so important in a blog comment section?

You didn’t win the argument so I suggest that you let it go.”

The juxtaposition of those two sentences is somewhat ironic! ;o)

It isn’t about “winning”, it is about trying to get things right. Your characterization of statisticians as only viewing autocorrelation as a nuisance component is incorrect. I even gave you a concrete example where that wasn’t the case, what more do you want? People get things wrong all the time, it isn’t a big deal, a better approach is just to acknowledge the error and let it go, rather than dig the hole deeper still.

“Why is it that having to win an argument is so important in a blog comment section?”

Petitio principii at 12 o’clock!

“I do have a dog in this hunt.”

Safe to assume that the squirrels are already in training, Web.

If you are as lucky as the IPCC, Curry may not even read your book before declaring that while it may not be ‘wrong’, it is very uncertain, and most of it is not very useful.

However, I’d pay serious attention to criticisms from the marsupial.

> Gives you all a great chance to show that a lunisolar-forced model of ENSO

Not that “but ENSO” again, Web.

One drive-by per thread ought to be enough, don’t you think?

“Your characterization of statisticians as only viewing autocorrelation as a nuisance component is incorrect.”

True that often autocorrelation (as defined in statistical analysis) is considered to be a nuisance variable. It is also true that often autocorrelation in the residual can be taken as evidence of unmodeled, but potentially new, sources of variability. There is a sliding scale of how certain one is in regard to a signal. If you are trying to dig out a known signal from a background, then it’s a nuisance variable (as defined). The second is simply the physics definition, in that physicists are always trying to figure out what is causing some odd or unknown behavior, or characterizing it by looking at the autocorrelation or pair-correlation function.

Also a fact by definition is that white noise is the only source of a truly non-autocorrelated data series. In this case, the autocorrelation is unity for zero displacement and vanishes to zero for any other displacement. The reason that a zeroed autocorrelated residual is cause for excitement is that all that is left is white noise, and any further searching for a source of variability becomes exceedingly less plausible.

What I have noticed when looking at skeptic blogs, over and over again, is the knee-jerk response that calling out autocorrelation always reveals. They always claim victory when any residual autocorrelation is found, much like imaginary# was trying to infer. It’s actually quite difficult to get a completely white noise residual. Even the most accurate tidal analysis can have autocorrelated residuals, and it’s why they can increase the number of tidal periods from a handful to a hundred.
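The residual check described above is simple to sketch: the sample autocorrelation at lag 1 is near zero for white noise but large for red (AR(1)) noise. The AR coefficient below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

def lag1_acf(y):
    """Sample autocorrelation at lag 1."""
    y = y - y.mean()
    return np.dot(y[:-1], y[1:]) / np.dot(y, y)

white = rng.normal(size=n)         # a truly non-autocorrelated series

phi = 0.8                          # illustrative AR(1) coefficient
red = np.zeros(n)
for t in range(1, n):
    red[t] = phi * red[t - 1] + rng.normal()

print(lag1_acf(white))   # near zero for white noise
print(lag1_acf(red))     # near phi for red noise
```

In practice one checks several lags (or uses a portmanteau test), since a residual can have near-zero lag-1 autocorrelation while still being structured at longer lags.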

> What I have noticed when looking at skeptic blogs, over and over again, is the knee-jerk response that calling out autocorrelation always reveals.

I blame the Auditor and his army of econometrists.

***

> They always claim victory when any residual autocorrelation is found, much like imaginary# was trying to infer.

That phenomenon may be autocorrelated with the fact that contrarians always do:

https://andthentheresphysics.wordpress.com/2018/01/11/can-contrarians-lose

I would like to retract my claim that El Nino causes a bias (or at least any significant bias) in the estimates of theta_iv. Putting an El Nino term in when estimating theta_iv, as I suggested earlier, is the incorrect approach. What Dessler and Forster do in estimating theta_iv is the correct approach. I apologize for my mistake.

Rather, choosing a short time period that starts with a strong La Nina and ends in a strong El Nino causes issues because it makes the use of the delta_T_s / delta_T_a term inappropriate. delta_T_s / delta_T_a was estimated using climate models, and is the long term ratio of delta_T_s / delta_T_a. However, in the short term, the ratio delta_T_s / delta_T_a can vary significantly from its long term average. In particular, El Nino has radically different warming patterns than the long term average warming patterns due to CO2. El Nino has high localized warming in equatorial regions while the long term average has high localized warming in polar regions. Because the change in the El Nino index over the time period is significantly positive, this means that over the time period, delta_T_s / delta_T_a is in reality significantly smaller than it is in the long term. As a result, the suggested estimate is an overestimate.

To correct for this, using empirical estimates of delta_T_s / delta_T_a over the time period would be more appropriate.

“Taking autocorrelation into account only affects the uncertainty, not the central value.”

I never said it does. My concern was regarding underestimating the uncertainty. Although if you want me to get technical, autocorrelation causes the OLS estimate to be inefficient, so the most efficient linear unbiased estimator changes in the presence of autocorrelation.

Anyway, this point is moot, because as I explained in my last post, El Nino makes the usage of the climate-model-derived delta_T_s / delta_T_a inappropriate. The long term average delta_T_s / delta_T_a from climate models is higher than what occurs in reality over the period of interest due to the behaviour of El Nino.
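The underestimation claim a couple of comments up — that autocorrelated noise makes the naive OLS trend uncertainty too small — can be checked by simulation. This is a sketch with illustrative parameters, not the paper's data: compare the actual sampling spread of an OLS trend estimate under AR(1) noise with the average naive (iid-assumption) standard error.

```python
import numpy as np

rng = np.random.default_rng(4)
n, phi, n_sims = 200, 0.8, 2000    # illustrative length and AR(1) coefficient
t = np.arange(n, dtype=float)
tc = t - t.mean()

slopes, naive_ses = [], []
for _ in range(n_sims):
    # AR(1) ("red") noise around a weak linear trend
    w = rng.normal(size=n)
    e = np.zeros(n)
    for i in range(1, n):
        e[i] = phi * e[i - 1] + w[i]
    y = 0.01 * t + e

    b = np.dot(tc, y) / np.dot(tc, tc)          # OLS slope
    resid = (y - y.mean()) - b * tc             # OLS residuals
    s2 = np.dot(resid, resid) / (n - 2)
    slopes.append(b)
    naive_ses.append(np.sqrt(s2 / np.dot(tc, tc)))   # iid-assumption SE

# The actual sampling spread of the slope is well above the naive SE
print(np.std(slopes), np.mean(naive_ses))
```

With phi = 0.8 the true spread comes out roughly three times the naive standard error, which is the sense in which ignoring autocorrelation underestimates the uncertainty (the central value of the slope is unaffected, as Dessler says).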

“Any correlation that exists is likely a quality of the model ensemble. There is no reason to think that these quantities are actually correlated in real life, so no reason to correlate them in our calculations.”

In reality, they probably aren’t correlated. But the issue is ‘are the estimates of forcing and theta_iv/theta_2xCO2 that you use to estimate climate sensitivity correlated?’ The answer is yes.

“We did some tests on correlating various model parameters in our calculations and it tends to reduces, not enlarge, the uncertainty”

F_2xCO2 is positively correlated with theta_iv/theta_2xCO2. Therefore, not taking this correlation into account results in an underestimate of uncertainty.

“That makes zero sense to me. I think you need to look at Fig. 7 again.”

Figure 7 shows that climate models with high theta_iv have low theta_iv/theta_2xCO2. Therefore, if you use empirical estimates of theta_iv to justify constraining theta_iv/theta_2xCO2, then the estimates of 1/theta_iv and theta_iv/theta_2xCO2 are now positively correlated. Which means that neglecting their correlation results in an underestimate of the uncertainty of the estimate of their product.

In case that wasn’t clear enough, I’m not saying that the restriction of theta_iv/theta_2xCO2 using theta_iv causes bias. Rather, I am saying that it causes an underestimate of the uncertainty.

@ Paul Pukite –

“If anything shows a strong autocorrelation, it’s the CO2-influenced trend. The trend fits to ln(CO2) with OLS as a simple fitting technique so the inappropriateness of using this is specifically defined in a statistical sense, not in terms of the basic physical model.”

Yes, indeed the CO2-influenced trend does have strong autocorrelation. Therefore, usage of OLS to estimate this trend is inappropriate. Better to use something else like GLS, Cochrane-Orcutt, Prais-Winsten or maximum likelihood estimation.

With respect to your claim that autocorrelation in the temperature time series has no physical basis, this is completely ridiculous. The Earth’s surface has a heat capacity; this causes autocorrelation in temperature.

“It’s actually quite difficult to get a completely white noise residual.”

Why would you want completely white noise? If you got zero autocorrelation, then that would be completely unphysical, as it would imply that the Earth has a heat capacity of zero.
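For anyone wanting to try one of the estimators listed above, here is a minimal sketch of the Cochrane-Orcutt idea on illustrative data (not the paper's): estimate the AR(1) coefficient from the OLS residuals, quasi-difference the data to whiten the errors, re-fit, and iterate.

```python
import numpy as np

def cochrane_orcutt(x, y, n_iter=10):
    """Slope of y on x with AR(1) errors, via Cochrane-Orcutt iteration."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rho = 0.0
    for _ in range(n_iter):
        resid = y - X @ beta
        # lag-1 autocorrelation of the residuals
        rho = np.dot(resid[:-1], resid[1:]) / np.dot(resid[:-1], resid[:-1])
        # quasi-difference both sides to whiten the AR(1) errors;
        # differencing X (including the ones column) keeps beta in the
        # original parameterization
        Xs, ys = X[1:] - rho * X[:-1], y[1:] - rho * y[:-1]
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta[1], rho

# Illustrative data: weak trend plus AR(1) noise (invented parameters)
rng = np.random.default_rng(5)
n, phi = 2000, 0.8
w = rng.normal(size=n)
e = np.zeros(n)
for i in range(1, n):
    e[i] = phi * e[i - 1] + w[i]
t = np.arange(n, dtype=float)
y = 0.01 * t + e

slope, rho_hat = cochrane_orcutt(t, y)
print(slope, rho_hat)  # near the true trend 0.01 and the true phi 0.8
```

GLS, Prais-Winsten, and maximum likelihood differ mainly in how they handle the first observation and the joint estimation; all address the same AR(1)-error problem.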

PP wrote

“However, to a statistician, the definition has to be much narrower in that it indicates some nuisance part of the data that they want to get rid of.” [emphasis mine]

“That’s why I said that this is a narrow (or perhaps selective) definition that statisticians use.”

“Of course they only have that view of autocorrelation.” [emphasis mine]

“A statistician is going to use the term autocorrelation in terms of statistical analysis, whereas a physicist is going to use it in terms of the fundamental property or behavior of the target being studied.”

and then after repeated correction PP writes:

“True that often autocorrelation (as defined in statistical analysis) is considered to be a nuisance variable.”

I suspect this rather small weakening of his position is the closest we are going to get to an acknowledgement that he was wrong and that this is not the only way a statistician would view autocorrelation, despite the fact I gave a concrete example where it might be the quantity of interest. This inability to simply admit error is a very bad sign in a scientist, especially one that writes:

“Why is it that having to win an argument is so important in a blog comment section?”

“What I have noticed when looking at skeptic blogs, over and over again, is the knee-jerk response that calling out autocorrelation always reveals.”

That is not justification for misrepresenting statisticians as taking only a narrow view of autocorrelation as only a nuisance component, which simply isn’t true.

-1 wrote

“Yes, indeed the CO2-influenced trend does have strong autocorrelation. Therefore, usage of OLS to estimate this trend is inappropriate.”

Can you give an example of a trend that is not autocorrelated? It is correlated noise that causes problems for OLS.

One point is that a statistician’s skills are invoked when problems associated with general issues are involved. But when the statistics are subsumed into the physics, it becomes the sub-discipline of statistical mechanics. In terms of autocorrelation, I know this area well. For example, all forms (electron, x-ray, optical) of simple diffraction are analyzed by taking the Fourier transform of the autocorrelation of the scattering structure. That’s just a definition, and that’s why it has been subsumed as a part of physics knowledge and not statistics. If you think I am so confused by this topic, I have to wonder why my models for autocorrelated structures are being used in experimental characterization:

http://sergey.gmca.aps.anl.gov/TRDS_sl.html

Autocorrelation is all applied physics. The definition of autocorrelation to a statistician is much narrower, otherwise you go crazy trying to figure out how it is routinely being applied by an economist such as the skeptic dude -1=e^iπ who comments here.

LOL, now the “no true Scotsman” ploy to avoid admitting your sweeping generalisation was incorrect. Incidentally, the concrete example I gave where the autocorrelation is the property of interest rather than the nuisance component clearly isn’t statistical mechanics, so it pre-bunks your ploy.

There has always been a big overlap between statistics and physics, e.g. Gauss, the Bernoullis, Jaynes, Jeffreys, etc., and only one of those overlaps is statistical mechanics. Statistics that doesn’t involve itself in the physics/biology/ecology etc. of the problem is liable to go badly wrong; it isn’t an analysis of abstract numbers.

That’s why ENSO is such an interesting behavior. Going through the research literature, about half of the papers suggest that it is a stochastic behavior and the other half think it arises from a chaotic non-linear mechanism. How can statistics concepts help resolve this conflict?

Dikranmarsupial said:

That’s a spatiotemporal autocorrelation. Unless the bats are dead, they will end up flying in and out of the cave and so the impacts of the cave and the bats are easily separated by either filtering over time or differencing over time.

“That’s a spatiotemporal autocorrelation.”

Actually I was thinking of audio signals, but so what, it is still an autocorrelation, so that is a red herring w.r.t. whether it is a nuisance component or the signal. Statisticians are not confined to viewing autocorrelation as nuisance.

“Unless the bats are dead, they will end up flying in and out of the cave and so the impacts of the cave and the bats are easily separated by either filtering over time or differencing over time.”

Whether it is easy or difficult is irrelevant to the question of whether autocorrelation is nuisance or signal, so that is a second red herring.

It is amusing, but at the same time rather sad, that you would write

“Why is it that having to win an argument is so important in a blog comment section?”

and then use transparent “no true Scotsman” and red herring ploys like these just to keep the argument going a bit longer and evade admitting your sweeping generalisation was incorrect.

An interesting spatiotemporal autocorrelation exists with respect to ENSO. The primary physical behavior of ENSO is the standing wave dipole observed in Tahiti and Darwin. The spatial autocorrelation is obvious with the reversed sign between Tahiti and Darwin. The temporal part is also autocorrelated but the origin of this behavior is what drives everyone up the wall.

That’s the quandary: How is it that the spatial aspect is so fixed, while the temporal behavior appears so chaotic?

Those that understand how to interpret resonant nodes of a cavity (i.e. Dikranmarsupial’s Bat Cave) realize that the spatial behavior often (but not always) can be separated from the temporal behavior. That’s essentially the case with standing nodes in a cavity — just as what happens with ENSO.

So that premise formed the basis of what I presented at the last two meetings of the AGU. What I tried to do was solve the Navier-Stokes equation along the equator, with the intent of cleanly separating the spatial component of the standing wave from the temporal component. I did come up with a closed-form, analytic solution. This only required a temporal forcing to sustain the standing wave oscillation. It didn’t take long to find that a tidal forcing signal would robustly match the temporal response observed, while keeping the spatial response fixed to the standing modes.

I didn’t have to use any special autocorrelation techniques to make it this far, but I do realize that parts of the residual still show a non-white-noise autocorrelation. So I didn’t use statistics up to this point, but I do realize that other statistical techniques will be needed to help verify the physical model and rule out that a spurious correlation may still be responsible for the entire match.

That is the way that I am applying my knowledge of autocorrelation to a current problem in climate science.

And now a non-sequitur that has nothing to do with whether statisticians only view autocorrelation as a nuisance component. I am not interested in seeing the hole PP has dug for himself reach its geographical antipode, so I’ll leave it there.

-1

You wrote:

Please put an ENSO index in the regression. I think that will improve things and make the results more reliable.

Does that mean that you were in error when you wrote the following?:

1. They completely ignore ENSO.

angech –

Unfortunately I am in the position where just commentating causes upset.

I disagree. I don’t think it is a matter of “just” commentating that “causes upset.” That you would say so, looks to me, like playing the victim rather than holding yourself accountable.

As to whether the simple fact of you disagreeing is what “causes upset” might be, IMO, more reasonably arguable. However, relatedly, what is clear is that it isn’t simply a matter of disagreeing that gets one banned here, as David Young claimed over at Judith’s, a claim to which you responded but didn’t bother to correct.

Of course, David also added the condition of being “knowledgeable.” And I suppose it is possible that no correction of David would be required to maintain accountability – if you meant your lack of correction to imply a self-assessment on your part that indeed, you are not “knowledgeable.”

Should your lack of correcting David be interpreted as an acknowledgement of such on your part?

C’mon, angech. Step up to the accountability plate and take a stance in the batter’s box.

Accusation:

My response:

I said that sarcastically because I had another argument with Dikranmarsupial in this blog comment section several months ago. The topic was identifying deterministic versus random processes, and he demanded that I admit that I was wrong on something I claimed. Eventually, I said

“OK, I will admit I was wrong, as this is only a semantic argument which can be interpreted in many different ways.”

But he apparently wasn’t satisfied with my response, as he then said:

“Basically you have just doubled down. It seems that this sort of tedious rhetorical BS is pretty much inescapable on climate blogs. I give up.”

David seems to have problems with moderation at more places than just here.

https://scienceofdoom.com/2017/12/24/clouds-and-water-vapor-part-eleven-ceppi-et-al-zelinka-et-al/#comment-123671

Angech…

Appears to simultaneously admit to an inability to grasp the absolute basics:

https://scienceofdoom.com/2014/06/26/the-greenhouse-effect-explained-in-simple-terms/#comment-122625

Yet here claims to be reading the latest papers and to have a worthwhile opinion on them.

Better class of sceptics required. ( SOD is very good on the sceptic front, in the original meaning of the word)

Even XKCD knows your “narrower” was misplaced, Web:

It’s usually the other way around.

Your last “but ENSO” was your last one in this thread.

[Enough machismo, Web. Go play elsewhere. -W]

-1:

No. The point of multiplying by $\Delta T_S$/$\Delta T_A$ is to convert the long-term warming of $\Delta T_A$ into the long-term warming of $\Delta T_S$. Thus, we want to use the long-term ratio. If you don’t like the model values, you can use the values from the reanalyses, which gives the same answer.

“No. The point of multiplying by $\Delta T_S$/$\Delta T_A$ is to convert the long-term warming of $\Delta T_A$ into the long-term warming of $\Delta T_S$. Thus, we want to use the long-term ratio.”

You are correct. I was wrong. I apologize.

The previous paper you did with Stevens and Mauritsen says that delta_Ts/delta_Ta is 0.86 +/- 0.10 (1 standard deviation). Why not use that estimate, since it tries to account for model error by comparing the estimates of different models?

In any case, from what I can tell, the amplification factor isn’t correlated with forcing or climate sensitivity, so the impact of this error is not very large.

Main issue to me seems to be that uncertainty of forcing is positively correlated with uncertainty of theta_iv/theta_2xCO2, and that with your methodology for the ‘good’ estimates, theta_iv/theta_2xCO2 becomes positively correlated with 1/theta_iv.

@dikran –

“It is correlated noise that causes problems for OLS.”

Yes, that is what I meant. Sorry for not being clear.

Joshua

“I don’t think it is a matter of “just” commentating that “causes upset.” That you would say so, looks to me, like playing the victim rather than holding yourself accountable.”

I do not feel or claim that I am a victim. It is a bit like being in a workplace. Some people are just difficult for some to get on with. Not views, perhaps knowledgeability at times.

“what is clear is that it isn’t simply a matter of disagreeing that gets one banned here, as David Young claimed over at Judith’s, a claim to which you responded but didn’t bother to correct.”

David also added the condition of being “knowledgeable.”

Willard corrected the part about knowledgeable people being banned from commentating,

Nic Lewis and Clive Best should feel thrilled.

“And I suppose it is possible that no correction of David would be required to maintain accountability – if you meant your lack of correction to imply a self-assessment on your part that indeed, you are not “knowledgeable.”

Willard had fun as well emphasising this point as you do

“Should your lack of correcting David be interpreted as an acknowledgement of such on your part?

C’mon, angech. Step up to the accountability plate and take a stance in the batter’s box.”

Will the bean balls stop if I do?

–

“Why is it that acknowledging a mistake is so difficult on blogs?”

One example,

When one does, this happens.

VTG

Angech…. “Appears to simultaneously admit to an inability to grasp the absolute basics:

“I thought I had “rationalised” it out. Sorry for the obtuseness. Back to thinking square one”

Yet here claims to be reading the latest papers and to have a worthwhile opinion on them.”

People take your admission of an error and extend it to everything else.

Insights into Atlantic multidecadal variability using the Last Millennium Reanalysis framework

Ouch:

Angech,

It’s not an admission of an error, it’s a consistent failure to understand the most basic principles. A failing which you are aware of. Yet you continue to make authoritative statements about cutting edge research:

If you want to be seen as posting in good faith you need to come with a learning attitude, not a knowing attitude.

JCH, interesting. I’ve just gone back and re-read the Cowtan et al. paper on reconstructing SST from coastal observations. Their results indicate a large uncertainty in the pre-1900s SST estimates. Substituting their adjusted SST series for HadSST increases EBM TCR estimates by 20%; add in poor Arctic coverage and aerosol uncertainty, and the constraint on climate model predictions becomes rather loose.

“I’ve just gone back and re-read the Cowtan et al. paper on reconstructing SST from coastal observations.”

That is a very good paper. I’m surprised there hasn’t been a post on it here.

I’ve been rather busy recently.

“The previous paper you did with Stevens and Mauritsen says that delta_Ts/delta_Ta is 0.86 +/- 0.10 (1 standard deviation). Why not use that estimate, since it tries to account for model error by comparing the estimates of different models?”

Oh, I’m wrong again. I apologize. You do take this into account. I misread things.

And in summary, another iteration of the standard -1 playbook.

Begin by being confidently wrong:

And:

And:

Then start walking it back:

And:

And finally attempt to change the subject:

Gets old, quickly.

Then don’t be so over-confident.

Well, at least -1 has acknowledged their errors, which is better than most (agree, though, that a bit more initial circumspection would be preferable).

angech –

Some people are just difficult for some to get on with.

Once again, playing the victim. It is not that you are “just” difficult for some to get along with. You have been criticized for specific reasons. Own up to why you have been criticized rather than handwave to some kind of mysterious factor that lies beyond control.

Be accountable.

I’ll be specific about my criticism.

I am criticizing you for playing along with David’s obviously incorrect broadside. Doing so displays a lack of integrity and accountability, IMO.

That is a choice you made. You could have chosen to comment in response to David that your continued participation here makes it clear that his attack was wrong.

I think that your continuing to participate here without making that correction lacks integrity.

There are paths forward from this point that would display integrity and accountability. They aren’t mysterious. You can step up to the plate. If you don’t do so, then it isn’t “just” a matter of being difficult for “some” to get on with.

Dig in to the batter’s box.

-1

You wrote:

Does that mean that you were in error when you wrote the following?:

1. They completely ignore ENSO.

I made two more mistakes. I forgot that the theta_iv values are negative in Figure 7. Really, I should have thought about the correlation between (-1/theta_iv) and theta_iv/theta_2xCO2 for the good estimates. These values clearly have a negative relationship according to Figure 7.

Digitizing the values from Figure 7, I estimate that the correlation coefficient between (-1/theta_iv) and theta_iv/theta_2xCO2 is -0.69.

As for my second mistake, theta_iv/theta_2xCO2 has a strong positive relation to the ECS/TCR ratio (as opposed to TCR/ECS as I implied earlier). Using data from http://onlinelibrary.wiley.com/doi/10.1002/jgrd.50174/full, I estimate that the correlation coefficient between ECS/TCR and forcing is -0.53.

From this we can infer that the correlation coefficient between (-1/theta_iv) and forcing is about 0.37.

So overall, the approach by Dessler is overestimating uncertainty due to these correlations.

Based on values from the paper and digitization of graph data, I have (assuming normal distributions for simplicity and convenience):

theta_iv/theta_2xCO2: 0.99 +/- 0.40

theta_iv/theta_2xCO2 (good): 1.11 +/- 0.26

ta/ts: 0.86 +/- 0.10

this suggests ts/ta is 1.16 +/- 0.13

theta_iv: -0.975 +/- 0.15

this suggests (-1/theta_iv) is 1.03 +/- 0.15

forcing: 3.69 +/- 0.13

above uncertainties are standard deviations

(A +/- dA)(B +/- dB)(C +/- dC)(D +/- dD) is approximately

ABCD(1 +/- sqrt( (dA/A)^2 + (dB/B)^2 + (dC/C)^2 + (dD/D)^2 + (dA/A)(dB/B)corr(A,B) + (dA/A)(dC/C)corr(A,C) + (dA/A)(dD/D)corr(A,D) + (dB/B)(dC/C)corr(B,C) + (dB/B)(dD/D)corr(B,D) + (dC/C)(dD/D)corr(C,D)))

If I set all the correlations to zero and use the good theta_iv/theta_2xCO2 then I get 4.89 +/- 1.47 K. If I use the 3 above mentioned correlations I get 4.89 +/- 1.24 K.

Yes, this is a lazy approximation. But it suggests that Dessler and Forster are overestimating uncertainty by ~19% due to treating the 4 estimates as independent.
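For what it’s worth, the independent-error case above is easy to check numerically. This is just a sketch using the central values and standard deviations quoted in this comment (not values taken from the paper itself), with relative variances added in quadrature:

```python
import math

# Central values and 1-sigma uncertainties as quoted above (assumed normal).
factors = {
    "theta_iv/theta_2xCO2 (good)": (1.11, 0.26),
    "Ts/Ta":                       (1.16, 0.13),
    "-1/theta_iv":                 (1.03, 0.15),
    "F_2xCO2":                     (3.69, 0.13),
}

product = 1.0
rel_var = 0.0
for value, sigma in factors.values():
    product *= value
    rel_var += (sigma / value) ** 2  # independent errors: relative variances add

sigma_product = product * math.sqrt(rel_var)
print(f"ECS = {product:.2f} +/- {sigma_product:.2f} K")  # -> ECS = 4.89 +/- 1.47 K
```

This reproduces the 4.89 +/- 1.47 K figure above; adding the (negative, on balance) correlation terms then shrinks the spread, which is the point being made.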

So if these correlations were taken into account, then the 95% CI would now exclude ECS values below 2 K (as opposed to below 1.9 K). So the claims by Dessler and Forster are correct.

gotta laugh

-1 spoils his earlier admission of error by responding to

“It is correlated noise that causes problems for OLS.”

with

“Yes, that is what i meant. Sorry for not being clear.”

However, I was criticising his earlier comment

-1 wrote: “Yes, indeed the CO2-influenced trend does have strong autocorrelation. Therefore, usage of OLS to estimate this trend is inappropriate.”

The CO2-influenced trend is the signal, not the noise, so clearly this is not a reason not to use the OLS estimate for the trend. Clearly either it wasn’t what you meant or you didn’t know what you meant.

@ dikran, the temperature trend still has autocorrelation even after you remove the influence of CO2.
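To illustrate the point at issue, here is a minimal simulation (my own sketch, with made-up AR(1) parameters, not anyone’s actual analysis): when the residual noise is autocorrelated, the OLS slope estimate is still fine, but the naive OLS standard error, which assumes white residuals, badly understates the true spread of the slope.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100, 2000
t = np.arange(n, dtype=float)
phi, sigma = 0.8, 1.0  # assumed AR(1) coefficient and innovation std

slopes, naive_se = [], []
for _ in range(trials):
    # AR(1) noise: e[i] = phi * e[i-1] + innovation
    e = np.zeros(n)
    for i in range(1, n):
        e[i] = phi * e[i - 1] + rng.normal(0.0, sigma)
    y = e  # zero true trend, so any fitted slope is pure noise
    A = np.vstack([t, np.ones(n)]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    slopes.append(coef[0])
    # naive OLS standard error, which assumes white (uncorrelated) residuals
    resid = y - A @ coef
    s2 = resid @ resid / (n - 2)
    naive_se.append(np.sqrt(s2 / ((t - t.mean()) ** 2).sum()))

print("empirical slope std:", np.std(slopes))
print("mean naive OLS s.e.:", np.mean(naive_se))
# The empirical spread comes out several times the naive estimate.
```

So correlated noise doesn’t invalidate the OLS point estimate of a trend; it invalidates the white-noise uncertainty attached to it, which is the distinction dikranmarsupial is drawing.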

Contrast angech’s sensitivity here with his rather robust approach elsewhere

[emphasis mine]

This suggests that Joshua is correct and angech is “playing the victim” here.

I wasn’t going to mention this until I saw the above, but angech wrote:

No. In a scientific discussion we should not accept or tolerate dishonest or disingenuous arguments (e.g. misleading selective quoting). Science is a search for truth, and if you are not interested in being truthful in your arguments, you are not really interested in science.

Ciaran,

Weirder still, he does all this

whilst aware of his own inability to understand the content

Bizarre behaviour.

> Hiding the real 10 year and longer flat pauses in the verbiage of a mendacious longer time period.

I’ve heard that “verbiage” word recently:

https://judithcurry.com/2018/02/05/marvel-et-al-s-new-paper-on-estimating-climate-sensitivity-from-observations/#comment-865749

I’m confused. Why is this post — “So if these correlations were taken into account, then the 95% CI would now exclude ECS values below 2 K (as opposed to below 1.9 K). So the claims by Dessler and Forster are correct.” — visible, but the post before it (which the visible post is responding to) is not?

@ dikranmarsupial @ February 10, 2018 at 5:23 pm

Well that’s a shock. You could have knocked me down with a feather, etc.

-1,

Because I got home quite late last night and missed the earlier one.

Okay, thanks.

Now imagine if you had worked this out before launching a 20-post epic that began:

Next time…?

Yesterday David Stern posted a non-linear fit to temperature and forcing data using statistical approaches developed by economists. Heat accumulating in the ocean is modeled like the accumulation of inventory. Spoiler – ECS=2.8C.

http://stochastictrend.blogspot.no/2018/02/a-multicointegration-model-of-global.html

Does anyone know anything about this… from the Stern paper?:

Though we managed to publish a paper in Nature early on (Kaufmann and Stern, 1997), I became discouraged by the resistance we faced from the climate science community….

Joshua,

I suspect that it’s simply that these econometric methods mostly ignore physics and can often simply be curve-fitting exercises. Hence, they often aren’t accepted by those who prefer to use physically-motivated models, rather than purely statistical methods (Doug Keenan, for example).

ATTP, I don’t know the history, but Stern’s approach appears to be as physically-motivated as any other energy-balance method. It has the same advantage as a box model: use of more recent delta T/delta F instead of estimating the deltas relative to an uncertain 1859-82. Even if the physics is over-simplified, better use of the recent data improves the estimate.

Chubbs,

Yes, I’ve just had another look, and it does seem that way. It may, however, still have been a reluctance to consider analyses that appeared to be based more on statistical methods, than physical models.

Explain again why these econometrics models seem to proliferate in climate science. I don’t see them being used in any other areas of the hard sciences, where the problems aren’t any easier to solve. Take for example the concept of Granger Causality, which an economist “invented”. Look up its definition and it says this:

“According to Granger causality, if a signal X1 Granger-causes a signal X2, then past values of X1 should contain information that helps predict X2 above and beyond the information contained in past values of X2 alone.”

That subsumes essentially every forced differential equation that has found practical use. It’s probably why other scientific and engineering disciplines don’t need to refer to Granger causality. They would essentially say “Are you freaking kidding me?”

If we look at this more closely, we could set up a Granger causality model such that the variates chosen are:

X1=CO2

X2=Temperature

But because of the Arrhenius outgassing properties of dissolved CO2, there is also this:

X1=Temperature

X2=CO2

This is all physics from that point on. Everyone knows that both causalities occur and it’s really more of a matter of constructing the correct physical models, with all the lags due to heat capacities, etc.

Again, my question is: if Granger causality is so useful, why isn’t it being cited in multiple disciplines? It’s because economics is game theory, and economists will forever be chasing the causality tail. They will never be able to establish causality, as human behavior is full of gaming strategies such as reverse psychology that will always break predictive models. The model of Granger causality is about the furthest they can go, whereas the hard sciences are well beyond this in terms of maturity.
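For readers unfamiliar with the idea: the lagged-regression test behind Granger causality can be sketched in a few lines. This is a toy example with invented coefficients (0.5, 0.8), not any real climate or economic series — the question is simply whether adding lagged X1 to an autoregression of X2 reduces the residual sum of squares:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
# x2 is driven by the previous value of x1 plus its own persistence:
x2 = np.zeros(n)
for i in range(1, n):
    x2[i] = 0.5 * x2[i - 1] + 0.8 * x1[i - 1] + 0.1 * rng.normal()

# Restricted model: predict x2[i] from x2[i-1] alone.
A_r = np.vstack([x2[:-1], np.ones(n - 1)]).T
coef_r, *_ = np.linalg.lstsq(A_r, x2[1:], rcond=None)
rss_r = np.sum((x2[1:] - A_r @ coef_r) ** 2)

# Unrestricted model: also include the lagged x1[i-1].
A_u = np.vstack([x2[:-1], x1[:-1], np.ones(n - 1)]).T
coef_u, *_ = np.linalg.lstsq(A_u, x2[1:], rcond=None)
rss_u = np.sum((x2[1:] - A_u @ coef_u) ** 2)

print(f"RSS without x1 lag: {rss_r:.1f}")
print(f"RSS with x1 lag:    {rss_u:.1f}")
# A large drop in RSS means past x1 helps predict x2: x1 "Granger-causes" x2.
```

(The usual formulation adds an F-test on the RSS drop; statsmodels provides one, but the regression comparison above is the core of the idea, and it is indeed just a lagged forced-response model by another name.)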

> Explain again why these econometrics models seem to proliferate in climate science.

Because, otherwise you can easily get spurious regressions in just about any time series. Start here:

https://climateaudit.org/econometric-references/

You’re welcome.

> if Granger causality is so useful, why isn’t is being cited in multiple disciplines?

That rhetorical question presumes something that is false:

https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=granger+causality&oq=granger

In fairness, auditors (e.g. HAS) over interpret it:

http://www.scholarpedia.org/article/Granger_causality#Personal_account_by_Clive_Granger

Re: Stern calculation. I hadn’t seen that, but I am very skeptical of calculations of ECS based on combining 20th century obs. with the global average linearized energy balance equation. The results can be significantly confounded by internal variability, as described here: https://www.atmos-chem-phys-discuss.net/acp-2017-1236/ or in Marvel et al. 2018.

PP writes:

It isn’t so straightforward to define what we mean by causality in a mathematical sense, rather than “I know it when I see it”. I would suggest that any mathematical definition/test for causality is going to have its limitations (we cannot unequivocally establish causation by purely empirical means anyway, but I’ll defer to Willard on that sort of thing). Granger causality is a useful concept, as long as you don’t conflate it with the everyday meaning of causality. Some years ago I helped (in a fairly minor way) run a machine learning challenge on causal feature selection; it is a fascinating subject, and obviously one of great utility, but rather difficult.

If I were to work on a mathematical definition of causality, it would be more to do with the ability to predict the effect of interventions to the system, rather than just sequential ordering.

If you want to know what Granger causality is used for outside economics, it isn’t difficult to find out. It seems to be used a fair bit in neurobiology; here is one example.

It really isn’t a good idea to be disparaging about ideas in other fields without taking the time to understand what they are used for and why; incredulity does not encourage self-skepticism. Sometimes there are good reasons why people research things that seem silly to you, and given they work on that topic and you don’t, it is much more likely that the misunderstanding lies with you rather than them, c.f. most of the discussion on climate skeptic blogs.

It’s intellectually safe to criticize economic modeling since even the economists do it, as exemplified by Goodhart’s law, the Lucas Critique, Campbell’s law, and Newcomb’s Paradox. They all revolve around the fact that once a model of economics is proposed, all the agents involved will take advantage of the knowledge of that model and use it to subvert the projections of that model to gain financial advantage. This follows from individuals trying to anticipate the effect of a policy and then taking actions which alter its outcome. This leads to a kind of Aha! moment for lots of people, and then it’s straightforward to make a connection to game theory.

This is a CompSci take on economic game theory: http://news.mit.edu/2009/game-theory

At best, all the stochastic econometric models (the quant models) are trying to do is gain subtle advantages over the other players in the game.

The connection to ECS and predicting AGW is that it is difficult to make projections on future emissions because that will largely be based on economics and how that plays into fossil fuel production economics. I have studied this as part of a book that I and my co-authors will have published by AGU/Wiley later this year called “Mathematical Geoenergy: Oil Discovery, Depletion, and Renewable Energy Analysis”. This is open for reviewers, and if anyone is interested they can volunteer and I can pass your name along to the editor.

“It’s intellectually safe to criticize economic modeling since even the economists do it, as exemplified by Goodhart’s law, Lucas Critique, Campbell’s law, and Newcomb’s Paradox.”

Climate modelers criticize climate models, particle physicists criticize models of particle physics, cosmologists criticize cosmological models. Therefore it is “intellectually safe” to criticize general relativity ;o)

This sort of inter-disciplinary hubris, like the “two cultures” thing, is rather silly IMHO and not a good advertisement for your book.

It’s not me with the two cultures, as per the link supplied by Chubbs

And I don’t understand how you are such an expert at what advertising entails for promoting a book, and what is silly or not. The publisher suggests that

“The Authors will provide reasonable marketing assistance upon request of the Publisher”, yet I am not about to obtain an MBA just for that.

You do know what IMHO means, don’t you? (Hint: it isn’t claiming expertise.)

Looking at his background, Stern is in the Ecological Economics field, which is much more in tune with geological constraints. In his paper on ECS, he references stock-and-flow diagrams, which are well known data flow models (related to compartment models) that have been used in system dynamics by Forrester and Meadows and others since the late 1960’s.

I can easily see how the use of these kinds of models for estimating ECS will work, because what they are doing is capturing the flow of heat using simplified 2-box approximations. My comment would be that they aren’t doing the full diffusion, so they don’t get the correct asymptotic fat tail.

Variations of this article appear regularly, re: “misplaced hubris”

John Rapley, political economist, University of Cambridge, 9 February 2018

https://aeon.co/ideas/few-things-are-as-dangerous-as-economists-with-physics-envy