## Confounding ECS estimates

Kate Marvel and colleagues have just published an interesting paper on how [i]nternal variability and disequilibrium confound estimates of climate sensitivity from observations. Essentially they compare three different ways of estimating Equilibrium Climate Sensitivity (ECS):

• atmosphere-only simulations with observed sea surface temperatures, sea ice concentrations, and natural and anthropogenic forcings,
• historical simulations using the same forcings, but in which sea surface temperatures and sea ice are predicted, not prescribed,
• and abrupt 4xCO2 simulations that are run to equilibrium.

ECS estimates from abrupt 4xCO2 simulations (yellow), historical simulations (purple) and ones with observed sea surface temperatures and sea ice concentrations (red). (Credit: Marvel et al. 2018).

The basic result is shown on the right. The abrupt 4xCO2 simulations produce a broad ECS distribution with the highest best estimate of ECS. The historical simulations produce a lower estimate, which suggests that feedbacks might become stronger in the future. The simulations with prescribed sea surface temperatures and sea ice concentrations produce even lower ECS estimates. This suggests that

recent decades appear to have experienced a pattern of sea surface temperatures that excited unusually negative feedbacks in tropical marine low clouds, leading to an even lower estimate of climate sensitivity than would have been expected under more usual historical conditions.

I don’t know that I need to say much more. This is another paper suggesting that observationally-based ECS estimates tend to be biased low, partly because feedbacks are likely to become stronger in the future, and partly because the pattern of sea surface warming in recent decades has led to less warming than might have been typically expected. It doesn’t mean that this is necessarily right, but it does suggest that

ECS estimates inferred from recent observations are not only biased, but do not necessarily provide any simple constraint on future climate sensitivity.
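For context, the observational estimates under discussion are typically energy-budget calculations: an effective sensitivity is inferred from observed changes in temperature, forcing, and planetary heat uptake. A minimal sketch of that calculation, with illustrative placeholder numbers rather than values from any particular study:

```python
# Energy-budget estimate of effective climate sensitivity:
#   ECS_eff = F_2x * dT / (dF - dN)
# dT: observed warming, dF: change in radiative forcing, dN: change in
# top-of-atmosphere imbalance, F_2x: forcing from a CO2 doubling.
# All numbers below are illustrative placeholders.

F_2x = 3.7   # W/m^2, canonical forcing for doubled CO2
dT = 0.9     # K
dF = 2.3     # W/m^2
dN = 0.6     # W/m^2

ecs_eff = F_2x * dT / (dF - dN)
print(f"effective sensitivity ~ {ecs_eff:.2f} K")  # ~1.96 K
```

If feedbacks strengthen in future, or the recent warming pattern is unrepresentative, a snapshot estimate of this kind will sit below the true equilibrium value, which is essentially the bias being described.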


### 193 Responses to Confounding ECS estimates

1. John Hartz says:

Here we go again! 🙂

2. RickA says:

Yep. Recent observations suck!!!!

We should just compare models with each other.

That’s the ticket.

3. JCH says:

I’m laughing at RickA. It’s like, he just doesn’t want to know.

4. Jon Kirwan says:

Thanks for the heads-up regarding the paper. I’m very interested in analyses regarding simulation models, since I do create and use simulation models almost every day for other work I do. (Nothing to do with climate, but it does involve skills using multiple approaches to estimating complex problems that while less complex than climate are still pretty complex.) Sometimes, I learn something from seeing how others approach their work. (The conclusions aren’t nearly as important to me as “how they see what they see.”)

I know that no single paper, nor even some sub-selection of papers is enough. There’s always more to learn on what’s already been done and of course still more yet to understand that we don’t now. But I appreciate the work you (and others) do in selecting interesting work to consider. I’ve requested and loaded down the paper and I’m adding this one to my reading list. Heads-up appreciated.

5. Steven Mosher says:

i think the conclusion is rather..
observational estimates tend to be lower than model based ones.

the word bias implies a knowledge of the true state.

still not seeing any information that suggests we can conclude that estimates over 3c are more probable than those less than 3c.

personally i think this is a metric that ought to be explicitly reported.

probability that the true value is greater than 3c.

6. Clive Best says:

Isn’t this paper really saying that the observed temperature record is anomalously low simply because it disagrees with CMIP5 models?

7. Willard says:

The best way to answer your question would be a quote, CliveB. You have one?

8. JCH says:

What if this:

lasted just as long as:

9. KeefeAndAmanda says:

Thank-you ATTP for calling attention to this paper.

The general takeaway I get for educational purposes is that it shows yet again what everyone should know, which is that emphasizing observations over theory can be bad science, especially when we limit the observations to time frames that are way too short. Of course, emphasizing theory over observations can also be bad science. When we give both their due, we get good science, which we can also call mainstream science, the ongoing product of the professionally refereed scientific literature taken in its ongoing aggregate published in reputable journals.

The moral of the story for all educators as to what to tell all the laypeople of the world: Trust mainstream science. It’s the best that humanity will ever have at getting the truth or correcting mistakes and getting closer and closer to the truth not yet obtained about the physical world.

10. The Economist has taken a very dim view of the EPA’s Red Team approach to the parametrization problem:

11. Michael Hauber says:

The comments by Rick and Clive suggest they have no understanding of the issues involved and make no effort to gain understanding beyond looking for a few key words such as models vs observation to then run the standard denier memes.

This sure makes it hard to have a sensible discussion on what evidence there may be for a sensitivity on the lower end. Or at least not on the higher end. I find it really hard to fully understand the issues involved in these studies and certainly cannot make blanket claims about what they do or don’t prove, but I’m trying my best.

The only thing I can come up with currently is the end of the abstract ‘One interpretation is that observations of recent climate change constitutes a poor direct proxy for long term sensitivity’. What other interpretations are reasonable? Is there reason to prefer one interpretation over the other? Would these questions have obvious answers if I had time to do more than read the abstract and skim the start of the body…

I’ve often thought that if the recent pattern provides lower sensitivity, then the pattern may continue and sensitivity may end up being lower. But this paper seems to address this by finding evidence that the specific reason the current pattern exists is imbalances between land and ocean warming rates, so as soon as warming slows down, the pattern changes and the negative feedback disappears. Concerning to some level, but maybe not so relevant to the next century or so?

12. ATTP – to clarify the use of the term ‘historical’ (for those of us without access to the paper), does this mean ‘instrumental period’? It does not, I assume, include paleo history, which would give another answer again.

13. Richard,
Yes, in this case historical would mean the period over which we have instrumental data. In fact, the simulations with prescribed sea surface temperatures and sea ice concentrations covered the period 1979-2005 (which is, I think, the satellite era). I’m not sure if the historical covered the same period, or covered the period from about 1880-2005.

14. Clive,
No.

15. SM “still not seeing any information that suggests we can conclude that estimates over 3c are more probable than those less than 3c.”

Why 3C? As the loss function is non-linear and increases with ECS, it isn’t much of a comfort for the probability of estimates > 3C to be less than that for < 3C, unless it is much lower. I’d rather have the whole PDF as the whole PDF is relevant to a cost benefit analysis.
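To illustrate the point about the whole PDF: a hypothetical sketch in which two ECS distributions give exactly the same answer under a single "P(ECS > 3C)" metric, yet imply quite different expected losses once the loss grows non-linearly. The distributions and the quadratic loss are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two hypothetical ECS distributions, both with median 3 K, so both
# give P(ECS > 3) = 0.5 -- identical under the single-number metric.
narrow = rng.lognormal(np.log(3.0), 0.2, n)
wide = rng.lognormal(np.log(3.0), 0.5, n)

p_narrow = (narrow > 3).mean()  # ~0.50
p_wide = (wide > 3).mean()      # ~0.50

# Under a convex (here quadratic) loss, the fatter upper tail of the
# wide distribution dominates the expectation:
loss_narrow = (narrow ** 2).mean()  # ~9.7
loss_wide = (wide ** 2).mean()      # ~14.8

print(p_narrow, p_wide, loss_narrow, loss_wide)
```

The threshold probability is identical in both cases, but the expected loss is not: the tail shape that only the full PDF conveys is what matters for a cost-benefit calculation.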

16. Steven Mosher says:

Why 3C?
Why Not?

“As the loss function is non-linear and increases with ECS, it isn’t much of a comfort for the probability of estimates > 3C to be less than that for < 3C, unless it is much lower. I’d rather have the whole PDF as the whole PDF is relevant to a cost benefit analysis."

1. You assume the loss function is non linear, that assumption of course comes with unstated uncertainties.
2. of course I too would rather have the whole PDF. Never said I didnt also want that.
3. Im not so sure cost benefit analysis is the right approach as it leads to the notion
that you can somehow find an optimal solution, and it often relies on a discount rate that
is highly debatable. I'd rather say, get to zero as soon as politically practicable.
4. Even if everything you say is true I'd still like to have the metric. NOTHING prevents
you from preferring a different metric. In fact you prefer a metric which tells you 100% is
below X. I'm not denying you the right to that metric and not denying that it also might
be interesting. I'm merely saying I would like to see a specific metric quoted. If folks
gave all the data for the PDF as opposed to just a picture of it, I'd be equally happy.

A simple practical example will help. I'm looking at non linear growth. I have a bunch of factors
I can fiddle with. I have to pick some numbers. I pick them. I document why I picked them
The answer is I have a 50% chance of exceeding this number, given these choices. Everyone
in the room can see what I did and follow the decisions. They ask what ifs. They get answers.
When one of my assumptions changes I update. Everyone can follow the update. We now
have a 50% chance of being greater than Y. Its no different conceptually than picking a
2C limit. Somewhat arbitrary (we dont pick 1.95), somewhat grounded in some real consequences..
We update, everyone can follow the story. Its a way to keep all decision makers on the same
spreadsheet same story line, same playing field. It organizes a complex problem into a simple
set of numbers.. and its transparent about the simplification. There are other approaches. meh.

Nice chart. Nobody believes in the exact curvature of the lines. except the mathturbators… "so ya George there is a 72.3% chance we will exceed P. " I love george, we make the decisions that need to be made and he is still fiddling with the spreadsheet I gave him. Next week the curves change but the decisions dont.

get to zero.

17. Steven,

1. You assume the loss function is non linear, that assumption of course comes with unstated uncertainties.

Except, we can be pretty confident that there is a $\Delta T$ for which the loss would be everything and there is probably a $\Delta T$ for which the loss might be essentially zero. The former is also probably not far away from something like 10K. It therefore seems quite reasonable to assume that the damage doesn’t increase linearly with $\Delta T$ as – I think – is typically assumed.

18. 1. Of course there are uncertainties, but that doesn’t mean that the loss function is linear. I am not an expert on economics; if you want to provide evidence that the loss function is linear, go ahead and provide it. Given evidence, I am willing to change my mind and admit I was wrong, but I require evidence.

2. I didn’t say otherwise, I was just asking why the probability of being over a three degree threshold was a useful metric, and so far there is nothing to suggest it is.

3. “Im not so sure cost benefit analysis is the right approach as it leads to the notion
that you can somehow find an optimal solution, and it often relies on a discount rate that
is highly debatable. I’d rather say, get to zero as soon as politically practicable.”

I’d say that from a scientific/technological/economic viewpoint it is the right approach. The problem is a sociological one – as a society we are not rational enough to follow the rational course of action, but that doesn’t mean that cost-benefit analysis is irrelevant, because it is useful to know what the science/economics say even if we choose not to act on it because of other considerations. The problem with deciding the discount rate (an input into the economics rather than a part of it) is a pretty good demonstration of that.

4. “Even if everything you say is true I’d still like to have the metric. NOTHING prevents
you from preferring a different metric. In fact you prefer a metric which tells you 100% is
below X. I’m not denying you the right to that metric and not denying that it also might
be interesting. I’m merely saying I would like to see a specific metric quoted. If folks
gave all the data for the PDF as opposed to just a picture of it, I’d be equally happy.”

This is just evasion, nobody is preventing you from having any metric you like, I was asking *why* it is useful and you have provided no substantive argument whatsoever as to why a 3 degree threshold is relevant, beyond “why not”.

Of course if you want others to provide a particular metric, but can’t suggest a reason why it is useful (other than “why not”), don’t expect anyone to take the trouble to report it (although if we have the pdf, then you can calculate the metric for yourself).

19. As an example of non-linear losses, warming causes sea levels to rise. London has a tidal barrier to prevent the Thames inundating the city. While the rise in sea levels is below the limit the barrier can cope with, the incremental losses from AGW-induced sea level rise are fairly low. As soon as the level rises above what the barrier can cope with, we are immediately faced with a large loss, either due to flooding or the construction of further barriers. I don’t think there is much uncertainty in that.
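The barrier example corresponds to a loss function with a step in it. A sketch of that shape, with entirely made-up numbers and arbitrary units:

```python
def flood_loss(rise_m, barrier_limit_m=1.5,
               gradual_cost_per_m=10.0, breach_cost=1000.0):
    """Illustrative threshold loss (arbitrary units, made-up numbers).

    Below the barrier's design limit, losses grow gently with sea level
    rise; beyond it, a large fixed cost (flooding, or building new
    defences) is incurred all at once.
    """
    loss = gradual_cost_per_m * rise_m
    if rise_m > barrier_limit_m:
        loss += breach_cost
    return loss

print(flood_loss(1.0))  # 10.0   -- within the barrier's capacity
print(flood_loss(2.0))  # 1020.0 -- limit exceeded: a large jump
```

With a loss of this shape, the expected loss depends heavily on the probability mass above the threshold, which is why the upper tail of an ECS PDF matters.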

20. Clive Best says:

It is interesting to contrast this paper with the Millar et al. carbon budget paper, which is also based on CMIP5 models. They found that the remaining carbon budget before we reach 1.5C was ~3 times greater than models predicted. In other words, the climate sensitivity to cumulative emissions is lower than models predict.

The one-third of ESMs that simulate the warmest temperatures for a given cumulative amount of inferred CO2 emissions determine the carbon budget estimated to keep temperatures “likely below” 1.5C. By the time estimated cumulative CO2 emissions in these models reach their observed 2015 level, this subset simulates temperatures 0.3C warmer than estimated human-induced warming in 2015 as defined by the observations used in AR5 (see figure below). It is this level of warming for a given level of cumulative emissions, not warming by a given date, that is relevant to the calculation of carbon budgets.

That simply means that Earth System Models have got the carbon budget wrong. If they have the carbon cycle wrong then they must also have carbon feedbacks wrong, which has an impact on the usual definition of ECS (a doubling of CO2).

The Kate Marvel paper seems to be simply saying that the models are right, and slow feedbacks only kick in far into the future so there is no point in looking at the temperature record as a guide to ECS.

This is the Tablets of Stone argument which only an elite priesthood are qualified to interpret.

21. Clive,
I know I’ve tried to explain this to you before, but one of the reasons why Millar et al. got a carbon budget 3 times greater than other estimates is that we’re so close to reaching 1.5C that a small absolute correction can appear relatively large. In other words, if the total carbon budget is originally estimated as 700GtC and we’ve already emitted 600GtC, that leaves 100 GtC. If someone else comes along and suggests that the total should be 900 GtC, that means that what is left is now 3 times greater than originally thought. It doesn’t mean that the total carbon budget is 3 times greater.
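The arithmetic behind this point, using the illustrative numbers above:

```python
# Illustrative numbers from the comment above, in GtC.
emitted = 600.0
old_total, new_total = 700.0, 900.0

old_remaining = old_total - emitted  # 100 GtC
new_remaining = new_total - emitted  # 300 GtC

# The *remaining* budget is 3x larger...
print(new_remaining / old_remaining)  # 3.0
# ...even though the *total* budget grew by less than 30%.
print(new_total / old_total)          # ~1.29
```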

Also, the Millar et al. result did depend on some assumptions that may not be agreed. For example, using HadCRUT4 data only (which suggests less global warming than we may have experienced and, hence, that we’re further from 1.5C than other datasets suggest).

The Kate Marvel paper seems to be simply saying that the models are right, and slow feedbacks only kick in far into the future so there is no point in looking at the temperature record as a guide to ECS.

No, this isn’t what the Marvel et al. paper suggests (why don’t you actually read something before commenting on it?). It’s mainly suggesting that there could be reasons why the observational estimates lie below the model estimates. These are that models suggest feedbacks should become stronger in future and that the pattern of warming we’ve actually experienced has led to less warming than we might have expected. The latter is actually quite reasonable given that we’re not yet in a position in which the forced response should completely swamp internally-driven warming/cooling.

22. verytallguy says:

This is the Tablets of Stone argument which only an elite priesthood are qualified to interpret.

This, on the other hand, is a rhetorical flourish entirely divorced from the facts.

23. Clive Best says:

ATTP,
That is not relevant here. The evidence from Millar et al. is that ESMs are not correctly modelling the carbon cycle. The sinks have not saturated as predicted and the airborne fraction remains approximately constant. There are immensely complicated biological and geochemical processes at work which ESMs simply parameterise.

Science is about comparing theory to experiments. It is a very dangerous track once you start arguing that observations are wrong because they disagree with models.

24. verytallguy says:

Science is about comparing theory to experiments. It is a very dangerous track once you start arguing that observations are wrong because they disagree with models.

This, again, is rhetoric rather than content.

First of all, it is perfectly legitimate to question measurements on the basis of theory. There are many examples where observations have been proved wrong after models questioned their validity.

Secondly, here, it’s actually questioning the model used in EBM based estimates, NOT the observations themselves.

It is a very dangerous track when you start arguing your understanding is right because it agrees with your rhetoric.

25. Clive,

That is not relevant here.

It is relevant to your point about it being 3 times bigger.

The evidence from Millar et al. is that ESMs are not correctly modelling the carbon cycle.

This is indeed a possibility, but let’s not fall into the single study syndrome trap. It is indeed possible that the sinks have taken up more of our emissions than was expected and, hence, that the carbon budget is somewhat bigger than we initially thought. It is also possible that this is not the case.

Science is about comparing theory to experiments. It is a very dangerous track once you start arguing that observations are wrong because they disagree with models.

Good thing no one is doing this then.

26. Just for clarification, I mentioned the Millar et al. carbon budget issue in this post:

If I have interpreted this correctly (which I may not have) the interesting question then becomes whether or not natural sinks are indeed taking up more of our emissions than expected and, if so, if we would expect this to continue. I don’t know the answer to this, and it would be good to get some clarification.

So, yes, I think this is an interesting aspect of Millar et al. I haven’t, however, yet had a clear sense of what this actually implies, or if this really does mean that we should expect the natural sinks to continue taking up more of our emissions than expected.

Also, bear in mind that I don’t think that Millar et al. corrected for coverage bias when they made their comparison. Their suggestion is that we should have warmed about 0.3C more than we did (given an emission of 545GtC). However, this is based on HadCRUT4 which suffers from coverage bias. If we correct for this (or use datasets that suffer from less coverage bias) the discrepancy is less than 0.3C.

27. Clive Best says:

http://clivebest.com/blog/?p=8232

28. verytallguy says:

A good example of theory being used to question observations, and proved right.

https://en.m.wikipedia.org/wiki/Faster-than-light_neutrino_anomaly

29. paulski0 says:

Steven Mosher,

i think the conclusion is rather..
observational estimates tend to be lower than model based ones.

the word bias implies a knowledge of the true state.

I think putting it that way would be missing the point. The question it’s addressing is whether the sampling used to extrapolate ECS in simple energy balance model observational estimates is representative.

They address the question using GCMs and find two things:

1) That sampling fully-coupled historical simulations up to around the present tends to indicate a lower sensitivity using a simple EBM framework than the official model ECS estimates inferred from Abrupt4xCO2 simulations. In other words, effective sensitivity appears to increase over time and therefore sampling the present will produce low-biased ECS estimates.

2) That prescribing the particular spatial pattern of observed temperatures over the relevant period tends to produce weaker net global average feedback than when the GCM is running freely. Under the very reasonable assumption that this spatial pattern is more indicative of decadal/multi-decadal internal variability than a “normal” long term pattern, this also indicates that sampling the recent past using the simple EBM method will be biased low.

So, yes, this analysis does indicate that sampling used in EBM estimates appears to produce unrepresentative results. In particular it suggests those results are unrepresentative in the direction of a low bias. The GCMs could be wrong, which should be taken into account, but the results should be taken seriously. It would perhaps be better to think in terms of indicating greater uncertainty rather than bias through an EffCS-to-ECS translation range – so an EBM result indicating 2ºC EffCS could be compatible with 4ºC ECS, but it could also be compatible with 2ºC ECS. Even so, this would imply a higher median than suggested by simple EBM studies.
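For readers unfamiliar with how the "official" model ECS is obtained from Abrupt4xCO2 runs: the standard approach is Gregory regression, fitting the top-of-atmosphere imbalance N against warming T and extrapolating to N = 0. A sketch on synthetic data (a perfectly linear response with made-up forcing and feedback values; real runs show curvature, which is part of the point at issue):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic abrupt4xCO2 response: N = F - lambda*T, plus noise.
# F (forcing) and lambda (feedback) are made-up values.
F_4x, lam = 7.4, 1.2
T = np.linspace(0.5, 5.5, 140)                    # annual warming, K
N = F_4x - lam * T + rng.normal(0, 0.3, T.size)   # TOA imbalance, W/m^2

# Gregory method: linear fit of N against T; ECS is the T-intercept,
# halved because 4xCO2 is two doublings.
slope, intercept = np.polyfit(T, N, 1)
ecs = -intercept / slope / 2
print(f"ECS ~ {ecs:.2f} K")  # should recover ~7.4/1.2/2 = 3.08 K
```

An effective sensitivity sampled early in such a run, or over a historical-length window, will differ from this extrapolated value whenever the N-T relationship is not actually a straight line.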

30. Clive Best says:

Einstein’s theory of general relativity makes exact predictions that can be tested experimentally. Quantum electrodynamics predicts the value of the anomalous magnetic dipole moment of the electron (g-2). Experiments have confirmed QED theory to a precision of 1 part in a trillion: g/2 = 1.001 159 652 180 85

Climate Models only ‘project’ ECS to be in the ‘likely’ range of 1.5 to 4.5C

31. verytallguy says:

Clive,

so, you agree on the point of principle that theory is a legitimate reason to challenge observations.

Do you also agree that the Marvel paper does not actually challenge any observations, but rather the EBM model used to interpret those observations?

32. Clive,

Indeed, and other datasets suggest a little more. Hence, the discrepancy is probably smaller than suggested in Millar et al.

33. Clive Best says:

Yes.

She is saying they are too simplistic. However, she implies that long term feedback effects only emerge hundreds of years from now. Even if true, it is hard to react now to a future hypothetical problem if it cannot be tested.

34. clive wrote “Einstein’s theory of general relativity makes exact predictions that can be tested experimentally. Quantum electrodynamics predicts the value of the anomalous magnetic dipole moment of the electron (g-2). Experiments have confirmed QED theory to a precision of 1 part in a trillion: g/2 = 1.001 159 652 180 85

Climate Models only ‘project’ ECS to be in the ‘likely’ range of 1.5 to 4.5C”

some scientific issues are known with more certainty than others – news at 11.

ECS can be tested experimentally, it is just we only have one set of experimental apparatus and the experiment will take a very long time to run. So what?

35. Clive,

However she implies that long term feedback effects only emerge hundreds of years from now.

IIRC, the abrupt 4xCO2 simulations run for 140 years beyond the time that CO2 has doubled.

Even if true, it is hard to react now to a future hypothetical problem if it cannot be tested.

Yes, we know this is your view. There’s a quote (that I can’t find) which goes something like what’s the point of developing a science that can make predictions if we simply wait for them to come true. You would presumably regard that as the point?

36. Steven Mosher says:

yes some confidently assume there is a point where we might lose everything. and they assume everything is some big number, assuming as well that it maps onto numbers.

37. “Even if true it is hard react now to a future hypothetical problem if it cannot be tested.”

It is called “decision making under uncertainty” and people do it all the time, for instance whenever they buy insurance.

38. Steven,
Do I take it that you think that there isn’t some $\Delta T$ at which we would essentially lose everything?

39. SM that still isn’t evidence that the loss function is linear (or sub-linear).

40. Steven Mosher says:

dk.
i told you why i find it useful. i even gave you an example.
1. is easy to understand. basically a coin flip.
2. is easy to communicate, like the 2c boundary.
you are free to choose another metric. i wont even make you defend it. like i said, day in and day out i find this approach raises no issues in ordinary discussions. and of course there are those who like the illusion of accuracy a continuous pdf gives them. I like a different illusion. id say theres a greater than 50 percent chance you will respond.

41. Steven Mosher says:

never said it was linear.
i said you assume its non linear without stating uncertainties.
youre free to defend that assumption.

42. Easy to get an ECS of 3C. Take land-only measurements and assume that the other GHG’s get dragged along with the CO2. Voila, measure 3C right from the GISS/CO2 log relationship, and this agrees with the 3C known since the Charney report of 1979.
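For what it’s worth, the back-of-envelope approach described here amounts to regressing temperature against log2(CO2). A sketch on synthetic data (not GISS), constructed so that the underlying sensitivity is 3 K per doubling:

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up CO2 (ppm) and temperature anomalies (K), constructed so the
# true sensitivity is 3 K per doubling; illustrative only.
co2 = np.linspace(315.0, 410.0, 60)
temp = 3.0 * np.log2(co2 / 315.0) + rng.normal(0, 0.05, co2.size)

# Slope of T against log2(CO2) recovers the sensitivity per doubling
# (under the stated assumption that other forcings scale with CO2).
sensitivity, _ = np.polyfit(np.log2(co2 / 315.0), temp, 1)
print(f"{sensitivity:.2f} K per doubling")  # close to 3.0
```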

43. “i told you why i find it useful. i even gave you an example.”

You gave no reason why 3C was a useful metric. I argued that it wasn’t a good metric because the loss function is thought to be non-linear, and you questioned that, but are not able to provide any argument or evidence that it is linear or sub-linear. Raising specious objections is not a good thing in scientific discussion.

A metric that is easy to understand and easy to communicate but useless/meaningless remains useless/meaningless despite being easy to understand and communicate.

” i wont even make you defend it.”

I wasn’t asking you to defend it, I was asking you to explain it. When I don’t understand something and may be missing something, I find asking questions is quite a good way of finding things out (and giving my position so you have an indication of where the disagreement may lie).

“of course there are those who like the illusion of accuracy a continuous pdf gives them.”

there is no illusion of accuracy, the whole point of a pdf is to give an indication of uncertainty. The accuracy of a PDF depends on its method of construction. Some are accurate, e.g. a PDF of people’s heights from a sample of 100,000 people, some less so. A paper that shows the PDF is different depending on the way it is estimated is specifically demonstrating that there is uncertainty in the PDF.

44. “never said it was linear.
i said you assume its non linear without stating uncertainties.”

You used the assumption as an argument against PDFs rather than your chosen metric. If you have no evidence or argument that the loss function is linear or sub-linear, that is just raising a specious counter-argument without caring about its validity. That is the very essence of bullshit, in the sense of Frankfurt.

45. Steven Mosher says:

of course you might lose everything. i dont know that there is a ‘we’ that agrees on an ‘everything’ or even its value. some see their current state of being as everything and they discount the future.
however if you want to assume that we can put numbers on everything then i suppose if you drew a curve it could look non linear.

not sure of the usefulness of that approach.
lets look at how its worked.. hmm

its not that hard. as bbd will pop up and say ..its about 3c.
however if i note.. yup less than 3c is slightly more probable than more than 3c people have a cow.

its a framing folks dont like and not because its useless as dk suggests.

however if i told a story about ecs that was as improbable as the rcp 8.5 story.. what would the reaction be.. well you can talk about some tails but they have to be the right tails.

again. i prefer to hear how much of the pdf falls below 3c. just my way of tracking my personal benchmarks. 5 years from now you have my promise i will still look at the same way. it makes continued dialog upredictable.

not arguing you should use it. i prefer it.

46. Steven Mosher says:

predictable

47. JCH says:

The Eastern Pacific, a vast area, cooled from around 1983 until 2013. That cooling is in the observations. It cooled because of anomalously intensified trade winds. It racked up a series of powerful La Niña events. Those trade winds have subsided. A recent paper indicated in the years since 2013, the GMST has gone up .24 ℃. That is in the observations. The 30-year trend rebounded and is, unless a powerful La Niña shows up between now and 2020, on the way to .2 ℃ per decade over the first two decades of the 21st century, which, sorry model bashers, is known in the Wild West where I grew up as a freakin’ bullseye prediction. This is actually a paper about simple physics. Push hard one way with something big enough, a cool Eastern Pacific, and things move that way: models are running hot. Stop pushing, they stop moving: models start looking more accurate. If we get a full decade of no pushing, 3 ℃ will be in the rearview mirror, and models will be running cold.

Clive needs to huddle up with Professor Curry and pray for the return of the divine wind. Or, there’s always the little North Atlantic’s stadium wave.

48. Clive Best says:

The insurance argument is daft.

A nuclear war would annihilate most of civilisation within days. If the annual risk of an attack is estimated to be 0.1% does that mean we should spend 0.1% of global GDP (US\$0.78trillion/year) building nuclear bomb shelters ?

What is the probability that every Climate Model has bugs? The answer for sure is 100%. You cannot develop 1 million lines of code which are bug free.

The risk is unquantifiable.

49. angech says:

Paul Pukite (@WHUT) says:
“Easy to get an ECS of 3C. Take land-only measurements and assume that the other GHG’s get dragged along with the CO2. Voila, measure 3C right from the GISS/CO2 log relationship, and this agrees with the 3C known since the Charney report of 1979.”
Thanks. An argument with facts. One of the few I have ever seen.
Appreciated.
Mosher could do the same, he says, but won’t because it is too easy.
If I get smart enough to understand it Paul I will give you a pat on the back or argue with you about it. At the moment it is enough that you put up a reason.
ATTP.
“This is indeed a possibility, but let’s not fall into the single study syndrome trap”
If it is valid more studies should emerge with all the interest in the matter.
This would of course place pressure on Marvel’s assumptions as per CB.

50. verytallguy says:

Clive,

speaking of daft, so far we’ve been through your “Marvel is challenging obs”, which was daft and which we’ve established to be wrong, and “Models can’t be used to challenge obs”, which was daft and which we’ve established to be wrong.

Now we have a non sequitur, What is the probability that every Climate Model has bugs? and a daft answer, The risk is unquantifiable.

Do we need to go through the same again, or would it be OK if we went directly to “wrong” this time without wasting our time debating exactly why first?

51. “The insurance argument is daft.” well, that settles it then! ;o)

“The risk is unquantifiable”

This is the old “we don’t know anything exactly/we don’t know anything” conflation. There is uncertainty in the estimate of risk, but that doesn’t mean that the rational thing to do is not to act on our best estimate of the risks.

As I said earlier, as a society we don’t act rationally; if we did we would end the arms race rather than build bomb shelters. The reason we don’t do either is not because they are irrational but that we can’t convince the majority electorate to support them. People tend to be slightly more rational when it comes to personal risks.

52. Chubbs says:

The Marvel paper is a nice build on the recent Dessler paper, which found a large spread in ECS estimates from EBM due solely to variation in initial conditions/evolution in a single model. Both of these papers indicate that the non-linear climate system doesn’t behave in a convenient manner for EBM. Regarding the ability of models to be tested with observations, I would argue that the most recent warming spike was well flagged by climate models, while EBM proponents generally were caught flat-footed.

53. Magma says:

I see Clive Best’s strawman amplifier goes to 11.

54. Magma says:

A nuclear war would annihilate most of civilisation within days. If the annual risk of an attack is estimated to be 0.1%, does that mean we should spend 0.1% of global GDP (US$0.78 trillion/year) building nuclear bomb shelters? — Clive Best

Since global GDP is about US$78 trillion, 0.1% would be US$78 billion, or US$0.078 trillion. Order of magnitude errors are a known risk when engaging in sloppy strawmanning.
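Magma’s order-of-magnitude correction is easy to check:

```python
# 0.1% of a global GDP of roughly US$78 trillion.
global_gdp_trillion = 78.0
spend_trillion = 0.001 * global_gdp_trillion
print(f"US${spend_trillion:.3f} trillion, i.e. US${spend_trillion * 1000:.0f} billion")
# → US$0.078 trillion, i.e. US$78 billion
```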

But would it be irrational to spend that much on preventing nuclear war between major powers? (Summarizing this as “building nuclear bomb shelters” is an insult to readers of this blog.) Let’s see… the raison d’être of the UN was to prevent war between great powers. Its budget, which of course covers many other things, is $5B/yr. NATO, a nominally defensive alliance, has a budget of $2B/yr. The IAEA, whose core purpose is to limit nuclear proliferation, has a budget of $1B/yr. And the U.S. spends about $16B/yr on maintaining its nuclear arsenal, a cost that is only 3% of its (nominal) military budget.

55. Clive Best says:

@VTB

Some things we can quantify
1. The spread in values from CMIP5 models gives a 95% probability that ECS lies between 1.5C and 4.5C, with a most probable value of between 2.5 and 3.0C.
2. All climate models have bugs. A known unknown
3. All climate models use linear parameterisations of various complex Biological, Geochemical and Human processes. Known Unknowns
4. Various Unknown Unknowns

Given all that the only sensible policy is to assume ECS = 2.8C

56. Clive,

Given all that the only sensible policy is to assume ECS = 2.8C

This is – I would argue – not obviously the only sensible policy.

57. I should add, though, that basing policy on ECS = 2.8C and then acting on that would probably be an improvement on what we’re currently doing.

58. My risk of a car accident is not exactly quantifiable either, but I would still have car insurance even if it were not a requirement. As I said, people manage “decision making under uncertainty” all the time without undue difficulty; we don’t require certainty in most aspects of life, which is a good thing as it generally isn’t available concerning the future.

59. paulski0 says:

I’d be surprised if that amount isn’t being spent on some forms of nuclear bunker around the world. Also lots of money spent on missile detection systems, spy satellite and aeroplane missions, development and enforcement of nuclear non-proliferation agreements.

60. verytallguy says:

Clive,

excellent, direct to wrong. That was easy 🙂

What a shame to end it with another bald assertion “Given all that the only sensible policy is to assume ECS = 2.8C”

Firstly, “ECS” is not a policy, it’s a metric. Secondly, it’s not at all obvious that assuming a single number is sensible, rather than a range. Thirdly, if we were to base policy on a single number, it’s far from clear that the median value is the most sensible for policy.

But if you meant that “given all that, a reasonable mid-point estimate for ECS to use in policy development is 2.8C”, then I’d certainly not argue with you.

61. Clive Best says:

@dk

The risk of a car accident for men (higher) and women (lower) within certain age groups is easily calculated. That’s the job of Actuaries and determines your premium.

62. Clive,
Dikran said “my risk”.

63. Joshua says:

Dikran said “my risk”.

Indeed.

This perhaps demonstrates how bad even smart, knowledgeable people are at understanding risk. As just one example, confusing relative risk with absolute risk (I think a form of Clive’s apparent confusion) is pretty common.

64. Joshua says:

That said…

As I said, people manage “decision making under uncertainty” all the time without undue difficulty.

I think it is often quite difficult for people to manage that type of risk. I think evidence shows we have a lot of built in biases when we evaluate risk, that are rather difficult to manage.

65. Mitch says:

More obsession about ECS. As pointed out above, ECS is a metric of the models not an input parameter. The scatter shows that there is significant path dependence on the resulting ECS which is troubling. Nevertheless 3 deg C seems to be the center of the range.

To point out the obvious, the first order risk reduction is to do faster the things that need to be done eventually. For example, reduce fossil fuel use since the resources are finite. And, of course, more research to better define the climate system and the potential solutions.

66. “The risk of a car accident for men (higher) and women (lower) within certain age groups is easily calculated. That’s the job of Actuaries and determines your premium.”

As ATTP said, I am talking about MY risk, not the risk of a broad category of drivers that are a bit like me (most of the attributes other than claims having little to do with my driving style/ability). However, I don’t need an individually tailored premium that exactly matches MY risk (plus acceptable overheads) before I buy insurance as the ball park approach is good enough.

FWIW, I have actually taught survival analysis IIRC to a class that included actuaries. ;o)

Joshua – sure we have biases, but we do still manage to take rational (within our cognitive limitations) decisions under uncertainty.

67. BBD says:

Only because contrarians mistakenly imagine that they can use it as a rhetorical device to impede emissions policy.

68. “Given all that the only sensible policy is to assume ECS = 2.8C”

No, that would be an obviously biased policy, as the loss function is non-linear and neglecting the upper tail would be irrational under those circumstances. The better approach is to act on the best estimate of the PDF of ECS, or better still to marginalise (average over) plausible candidate PDFs (weighted by their relative plausibility).

Acting on a point estimate of ECS is rather like assuming there is no uncertainty in ECS, rather than acknowledging it. We KNOW there is substantial uncertainty in our knowledge of ECS, so that is obviously an incorrect assumption to which the conclusions are likely to be substantially sensitive.
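Dikran’s point can be shown in a few lines. The distribution and damage function here are stand-ins chosen only to illustrate the argument, not anything from the literature:

```python
import numpy as np

# With a convex (non-linear) loss function, the expected loss over an ECS
# distribution exceeds the loss evaluated at the central estimate, so acting
# on a point value understates risk (Jensen's inequality). The lognormal
# parameters and the quadratic loss are invented for illustration.
rng = np.random.default_rng(42)
ecs = rng.lognormal(mean=np.log(3.0), sigma=0.3, size=100_000)  # stand-in ECS PDF

def loss(x):
    return x ** 2  # stand-in convex damage function

expected_loss = loss(ecs).mean()    # act on the full distribution
point_loss = loss(np.median(ecs))   # act on a single central value
print(expected_loss > point_loss)   # True: the point estimate understates risk
```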

69. Eli Rabett says:

Clive Best said “What is the probability that every Climate Model has bugs?” and a daft answer, “The risk is unquantifiable”

Actually Steve Easterbrook has published on that http://www.easterbrook.ca/steve/2010/11/validating-climate-models/

70. Eli Rabett says:

Chubbs said

The Marvel paper is a nice build on the recent Dessler paper, which found a large spread in ECS estimates from EBM due solely to variation in initial conditions/evolution in a single model.

Good trick. The Marvel paper was submitted in November 2017, the Dessler paper in January 2018. FWIW, Andy was heard to mutter after Marvel’s talk about having to speed up writing lest he be scooped.

71. Joshua says:

Dikran –

Joshua – sure we have biases, but we do still manage to take rational (within our cognitive limitations) decisions under uncertainty.

Rationality is a bit of a pickle there. Lots of folks make lots of decisions about risk that might be portrayed as “irrational.” I wouldn’t use that term, actually, but I think there’s a lot of evidence that people, often, reach outcomes in evaluating risk that are heavily influenced by biases.

I’m not sure how to reconcile your argument with stuff like this:

https://en.wikipedia.org/wiki/Prospect_theory

72. Everett F Sargent says:

                1/20 (5%)   IPCC (17%)   Median (50%)   IPCC (83%)   19/20 (95%)
AMIP               1.09        1.38          1.82           2.47         3.02
Historical         1.42        1.77          2.33           3.12         3.90
Abrupt 4xCO2       1.87        2.31          3.15           4.13         4.66
(all values in degrees C)

Taken from the included Figure 1, kernel density estimates overplotted for visual clarity (e.g. the curve-fitted lines).

73. Clive Best says:

Eli Rabett

I was really talking about coding errors rather than wrong parameterisation. Defect rates vary on average from about 20 errors per 1000 lines of code down to, in the best cases, 10 errors per 1000 lines. Some can be eliminated by careful testing. Others remain hidden for ever.

So I estimate each model probably has at least 1000 coding errors yet to be discovered. The other problem is that the programmer who wrote the original code often no longer works there, having left or found another job. The routine is embedded in the system and no-one really understands it anymore. Legacy FORTRAN code must be a huge problem for models that have been developed over the last 30 years or so.
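Clive’s arithmetic, made explicit. The defect rates here are his assumptions (disputed downthread, where measured figures for climate models are quoted), not measured values:

```python
# Clive's back-of-envelope: assumed industry-average defect rates applied
# to a ~1M-line code base. These rates are HIS assumptions, not data.
lines_of_code = 1_000_000
rate_low, rate_high = 10, 20  # assumed defects per 1000 lines

total_low = lines_of_code // 1000 * rate_low     # 10,000 defects
total_high = lines_of_code // 1000 * rate_high   # 20,000 defects

# Even if careful testing removed 90% of the low-end total,
# over 1000 defects would remain undiscovered on these assumptions.
remaining = total_low // 10
print(total_low, total_high, remaining)  # → 10000 20000 1000
```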

74. verytallguy says:

I was really talking about coding errors…

Really? I could have sworn it was squirrels that you were *really* talking about.

75. Hyperactive Hydrologist says:

I can tell you from working as a flood risk modeller that the loss function is certainly non-linear.

76. John Hartz says:

Out of curiosity…

Have climate scientists/modelers developed an index comparable to ECS for other components of the Earth’s climate system such as the cryosphere?

77. Joshua, are you arguing that people are not generally being rational in buying insurance?

78. Clive’s comments on the software engineering of climate models seem rather facile, and casually dismissive of what is, I believe, a remarkable achievement.

I worked on a 50 person team to create ASIC chip design software, and the code volume was at least as great as for climate models. Some of the core modules written in FORTRAN were so stable and long-standing, we hardly touched them, because they dealt with the core behaviours, and had been tested many times in multiple models and contexts.

But even where there were defects, that did not make the software impossible to use, or the impact of defects impossible to manage. There are numerous strategies for dealing with defects. If that was not the case, then society would collapse (electricity networks, flight control systems, telco billing systems, etc.). The existence of defects does not mean uncontrollable errors. For flight control systems, one can go as far as proving the logic in the software, but this is not a pre-requisite, even for mission critical systems.

Further to Eli’s reference … Easterbrook & Johns (2008) analysed the software engineering practices at the Met Office.

The findings were very interesting. For example, with large commercial software packages the code size eventually levels off (it reaches a functional saturation or complexity limit), whereas the Met Office climate software grew linearly, despite its inherent complexity. They found a lot in common between the Met Office practices and agile development, but on a larger scale. One of the things that helps in managing defects is that the team(s) do a lot of regression testing, intermodel comparisons, etc., and of course, unlike say Oracle e-Commerce, they have physics to help them in spotting and resolving defects. And of course, modules can be tested to ensure they work independently, and that they obey fundamental laws.

On the error rate, they said …

“We have not attempted a detailed analysis of post-release code defects. Over the last six releases, there were an average of 24 “bug fix” tickets per release, against an average of 50,000 SLOC [Source Lines Of Code] touched per release, suggesting that 2 defects per 1,000 changed lines of code made it through the testing and review process for the previous release (and were subsequently discovered) [this suggests a defect density for the current release of 0.03 defects per KSLOC (≈ 24 latent defects in 831,157 SLOC)]. However, these numbers must be interpreted carefully because many of these “bug fixes” represent defects that were detected, but treated as acceptable model imperfections in previous releases.”

Radically less than Clive’s wet finger in the air.

79. Chubbs says:

Eli – In hindsight, complementary would have been a better choice to describe the two papers.

80. Joshua says:

Dikran –

I would say that with regard to buying insurance, there is more that comes into play than just rationality.

Peace of mind comes into play. An ethical calculation – about whether I would want someone else to not have access to proper medical care if I don’t carry insurance and don’t have enough money to cover their costs should they try to sue me – comes into play.

From a purely “rationality” frame, I suppose it might not be “rational” to carry insurance, as the very fact that insurance companies make money is evidence that as a matter of pure risk assessment, they are going to pay out less than what they take in. Chances are it’s a losing contractual agreement for me.

Is that rational? Only if I consider wanting to have peace of mind to be rational.

Hard to assess, which is why I think the question of rationality gets complicated. A lot depends on subjective evaluations and attitudes towards risk (risk aversion). Is it rational to risk my money buying a lottery ticket? Invest in cryptocurrency?

I’m mostly thinking about how stuff like availability bias and recency bias come into play. I think they are quite prevalent biases in how people reason about risk in the face of uncertainty. They certainly come into play, a lot, in how people approach the risks of climate change, it seems to me.

81. MarkR says:

Clive Best, have you read the paper?

A simple model: T=T_F + T_N, i.e. temperature is forced + natural. Also given T you think you can estimate ECS via P(ECS|T).

Marvel et al. use real-world T_N data for a like-with-like model-observation comparison. A standard method of doing P(ECS|T) underestimates ECS because the real-world sample of T_N is somewhat of an outlier.

From:
A- Marine stratocumulus expands when inversion strength increases
B- The inversion strength change is an extreme temporary variation linked to changes in the Pacific until around 2012, and not part of a new normal
It follows:
C- historical estimates of ECS are biased low

A just comes from the laws of physics and has been repeatedly confirmed by aircraft & satellites. B seems pretty likely and the burden of proof is on showing otherwise. So long as these hold and without some extreme other error, C follows.
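MarkR’s argument can be illustrated numerically. Everything here is a toy with invented numbers, not Marvel et al.’s actual calculation:

```python
import numpy as np

# Toy version of T = T_F + T_N: a naive estimator that attributes all
# observed warming to forcing is unbiased across many realisations of
# internal variability, but a single realisation whose natural component
# T_N happens to be unusually negative reads low. All numbers invented.
rng = np.random.default_rng(0)
t_forced = 3.0                                 # stand-in forced response
t_natural = rng.normal(0.0, 0.3, size=10_000)  # internal variability draws

naive_estimates = t_forced + t_natural         # one estimate per realisation
print(abs(naive_estimates.mean() - t_forced) < 0.05)  # unbiased on average

unlucky_draw = -0.5                            # an outlier natural excursion
print(t_forced + unlucky_draw < t_forced)      # that realisation is biased low
```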

82. Magma says:

Clive’s comments on the software engineering of climate models seems rather facile, and casually dismissive of what is I believe, a remarkable achievement. — Richard Erskine

Best’s entire comment on software quality appears to be more cartoonish handwaving with numbers being pulled out of thin air. Researchers such as Steve Easterbrook have examined the software quality of several leading climate models and found them to be exceptionally high. A few links, for those interested.

83. Eli Rabett says:

Chubbs, agreed on complementary. What was interesting was that Dessler at the AGU was surprised by how far Marvel had gotten.

84. Eli Rabett says:

Another talk from Steve Easterbrook on ESM errors

http://www.easterbrook.ca/steve/2014/05/tedx-talk-should-we-trust-climate-models/

—————————

When I started this study, I asked one of my students to attempt to measure how many bugs there are in a typical climate model. We know from our experience with software there are always bugs. Sooner or later the machine crashes. So how buggy are climate models? More specifically, what we set out to measure is what we call “defect density” – How many errors are there per thousand lines of code. By this measure, it turns out climate models are remarkably high quality. In fact, they’re better than almost any commercial software that’s ever been studied. They’re about the same level of quality as the Space Shuttle flight software. Here’s my results (For the actual results you’ll have to read the paper):

85. “I worked on a 50 person team to create ASIC chip design software, and the code volume was at least as great as for climate models. “

That’s probably the worst comparative example to apply. That is logic design, not algorithmic floating point models. Totally different testing strategies, often using assertions, and then to top it off, simulators to test the code virtually and then in silicon with hardware. Plus huge dependence on independently tested library logic blocks. Perfectly controlled experiments with absolutely known input and output preconditions and postconditions.

I go for simple models in climate science, and see how far one can take them.

86. izen says:

@-Joshua
“….as the very fact that insurance companies make money is evidence that as a matter of pure risk assessment, they are going to pay out less than what they take in.”

This may perpetuate a common misconception of how insurance companies work.
They utilise the steady income from premiums to earn interest on investments and savings. The pay-outs come from that income stream. Premiums rise when rates of return on the invested premiums fall, rather than being set by the size of payouts.

Insurance is a version of the ‘Prisoner’s Dilemma’ game used to test rational choice at the individual/group level.
It may be rational at the individual level to defect from insurance, you save the premiums for a low risk.
But as a member of a group, the small individual sacrifice is of such benefit to the Group that communal pay-off is greater.

87. Joshua says:

Izen –

Yeah, good point.

Still, your point notwithstanding, I’m not sure that as a purely “rational” decision, insurance holders as a group are likely to break even when comparing premiums they pay to payouts they receive. (Maybe you know the answer to that?) I think my point about the complicated nature of determining rationality still applies – there are subjective factors like risk tolerance, the subjective value of peace of mind, etc.

88. Joshua “I would say that with regard to buying insurance, there is more that comes into play than just rationality.”

Of course, but that doesn’t mean that people do not manage tolerably rational decision making under uncertainty, which was my basic point. We don’t need proof or high levels of certainty when making decisions. Of course our cognitive biases are perhaps partly there as short-cuts to help us make those decisions (and may not be optimal in modern civilisation), but that does not make buying insurance irrational or non-rational.

89. izen says:

@-Joshua
Very few policy owners make a ‘profit’ on their insurance and get back more, directly, than they pay in. I am sure that emotional reasoning affects insurance choices, especially cultural compliance.

The group benefits of insurance are best revealed by those places where they are absent or partial. Events that are easily survivable in one Nation may be the largest cause of bankruptcy and home foreclosure in another where the insurance system is ineffective.
What value car would you drive if there was no possibility of re-couping any loss from accident or theft ?

90. MarkR says:

I’m confused by the bug argument.

Five days before Sandy whomped New York, the ECMWF said that the storm would make a turn and smack into the coast. There are bugs in all computer code, so does that mean that Sandy didn’t impact New York?

91. “From a purely “rationality” frame, I suppose it might not be “rational” to carry insurance, as the very fact that insurance companies make money is evidence that as a matter of pure risk assessment, they are going to pay out less than what they take in. Chances are it’s a losing contractual agreement for me. ”

This shows a lack of understanding of insurance from the buyer’s perspective as well as from the seller’s. Of course I am not going to break even from an insurance policy, even in the long run! If I cause a car accident, the liability might plausibly exceed my lifetime income, so I cannot possibly afford it, but I can afford insurance (because the likelihood of that is very small and spread across many policies). Likewise “peace of mind” is something I am buying from the insurers, so I should expect to have to pay for that – I am buying a service.

I think you are over-thinking this. Nobody is claiming that insurance policies are bought as the result of purely rational calculations, just that humans manage this sort of (o.k. “reasonably”) rational behaviour (I didn’t say optimal either) in problems involving decision making under uncertainty.

92. The definition of a trivial program is one that you know doesn’t contain any bugs. The code for GISS models has been publicly available for years, how many bugs have climate skeptics found?

Easterbrook’s video is excellent (if it is the same one I remember), great t-shirt as well. I gather he has a book in preparation, which I am looking forward to reading.

93. Paul – with respect, it was not ‘logic design’. We were not designing chips, but designing a process and system for the design of any chip, including a high level language for behavioural design; a meet-in-the-middle process for ASIC chip design. And the reference to FORTRAN was key – many systems still use large volumes of (very reliable) linear programming. My initial point was that there are many large s/w systems and that defects do not need to be fatal. BUT that was a preamble really, the bulk of my comment was based on the published paper (Easterbrook & Johns (2008)) that analysed the Met Office situation and provided data, whereas Clive made no references and simply waved his arms around.

94. verytallguy says:

I’m confused by the bug argument.

It’s not an argument. It’s an attempt at distraction.

95. Clive Best says:

MarkR,

Thanks for the clarification. I don’t have access to the full paper, so I accept that making sweeping generalisations based only on reading the Abstract was a little hasty. However, I can’t help having the feeling that there is a campaign to undermine any low results for ECS obtained from the data, such as those of Nic Lewis and Judith Curry.

96. Clive Best says:

Mark,

I think there is a big difference between Weather Forecasting and Climate Modelling. The former uses data assimilation in real time to re-normalise the forecasts for the next 24 hours up to the next week. Climate models are trying to forecast the climate over the next 100 years. They also include modelling of far less certain phenomena.

Quote from “The Art and Science of Climate Model Tuning”

While the fundamental physics of climate is generally well established, submodels or parameterizations are approximate, either because of numerical cost issues (limitations in grid resolution, acceleration of radiative transfer computation) or, more fundamentally, because they try to summarize complex and multiscale processes through an idealized and approximate representation. Each parameterization relies on a set of internal equations and often depends on parameters, the values of which are often poorly constrained by observations. The process of estimating these uncertain parameters in order to reduce the mismatch between specific observations and model results is usually referred to as tuning in the climate modelling community.
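As a caricature of the tuning procedure the quote defines (choosing a poorly-constrained parameter to minimise the model-observation mismatch), here is a minimal sketch with synthetic numbers, not any real model or dataset:

```python
import numpy as np

# Pick the parameter value that minimises the squared mismatch between a
# toy submodel and pretend observations. All numbers are invented.
obs = np.array([0.10, 0.35, 0.62, 0.88, 1.15])  # pretend observations
forcing = np.array([0.1, 0.3, 0.5, 0.7, 0.9])   # pretend forcing levels

def toy_model(param):
    return param * forcing  # idealised parameterized submodel

candidates = np.linspace(0.5, 2.0, 301)  # plausible parameter range
mismatch = [np.sum((toy_model(p) - obs) ** 2) for p in candidates]
tuned = candidates[int(np.argmin(mismatch))]
print(f"tuned parameter: {tuned:.2f}")
```

Real tuning involves many interacting parameters and many observational targets at once, which is part of why it is an art as well as a science.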

97. Clive,

However, I can’t help having the feeling that there is a campaign to undermine any low results for ECS obtained from the data, such as those of Nic Lewis and Judith Curry.

It’s called doing research. The goal of research is to try and understand something. Discrepancies can be particularly interesting. Just because the conclusion doesn’t suit what you might like, does not somehow mean that the intent is to undermine.

98. verytallguy says:

I can’t help having the feeling that…

The post is about quantitative estimates of a metric.

And your reason to disregard research findings is a “feeling” that seems to amount, frankly, to paranoia?

That’s really very poor. Even by “sceptic” standards. Why should anyone pay the slightest regard to these “feelings”?

99. “However, I can’t help having the feeling that there is a campaign to undermine any low results for ECS obtained from the data, such as those of Nic Lewis and Judith Curry.”

How would you differentiate between that and a campaign to evaluate different approaches and determine their advantages/disadvantages/validity (i.e. normal scientific progress)? How would they differ?

It shouldn’t be much of a surprise that ideas that go against the mainstream view are subject to scrutiny and criticism. Incorrect ideas that attract interest need to be rapidly evaluated to prevent misleading others and wasting their time. If the ideas are valid then passing strict scrutiny is the best advertisement they could have and it should be welcomed.

100. Clive Best wrote “Climate models are trying to forecast the climate over the next 100 years. ”

No, they are trying to simulate climate over the next hundred years, which is not the same thing. The code base is often very similar (or the same), so any bugs would cast doubt on both. I suspect weather models also have parameterised components, but would be interested to hear otherwise. If that is the case, your quote seems irrelevant.

101. JCH says:

“However, I can’t help having the feeling that there is a campaign to undermine any low results for ECS obtained from the data, such as those of Nic Lewis and Judith Curry.”

There was also a campaign to take the results of Nic Lewis and Judith Curry and have them rammed down the world’s throat. She practically had the chairman – “stay tuned” – of the house science committee on speed dial. They tried to do the congressional version of a career lynching.

Nope, no advocacy or politicized science in that story.

102. -1=e^iπ says:

I don’t have time to read the paper right now. How are sea ice and sea surface temperatures predicted?

103. -1,
Not sure what you mean. In one set of simulations (amip), the SSTs and sea ice are set by observations. In the historical runs, they are not. I think that the SSTs are simply allowed to evolve throughout the simulation. I’m not sure if the sea ice is fixed or also allowed to evolve.

104. “Paul – with respect, it was not ‘logic design’. We were not designing chips, but designing a process and system for the design of any chip”

That’s even worse, because you were designing a process for human interaction. That’s far removed from software that models physics. I am interested in what makes you think that there is a connection.

105. dikranmarsupial says:

PP, processes for human interaction very rarely involve linear programming. Sounds to me like a computer-aided design (CAD) program for designing integrated circuits, which is likely to involve a lot of maths for placement, routing, capacitance, timing etc., not just logic design (the signals are not nice 0s and 1s on the chip when it is running fast).

106. Paul, you are trying to extrapolate from what I said. I was talking about the issue of faults in large volumes of code, of whatever sort, not equating the types of s/w designs.

107. Magma says:

@ dikranmarsupial

The NWS/NOAA lists these as being some of the physical processes/parameters that are typically parameterized in weather modeling (order as listed in their brochure):

2. Scattering by aerosols and molecules
3. Absorption by the atmosphere
4. Reflection/absorption by clouds
5. Emission of longwave radiation from Earth’s surface
6. Condensation
7. Turbulence
8. Reflection/absorption at Earth’s surface
9. Snow
10. Soil water/snow melt
11. Snow/ice/water cover
12. Topography
13. Evaporation
14. Vegetation
15. Soil properties
16. Rain (cooling)
17. Surface roughness
18. Sensible heat flux
19. Deep convection (warming)
20. Emission of longwave radiation from clouds

108. Magma – Thanks for sharing the talk by Easterbrook. There was an interesting quote by someone in the audience regarding the interplay of data observation systems and the models. I tracked down the paper and quote (Bengtsson & Shukla (1988)):
“… a realistic global model can be viewed as a unique and independent observing system that can generate information at a scale finer than that of the conventional observing system.”
https://journals.ametsoc.org/doi/pdf/10.1175/1520-0477%281988%29069%3C1130%3AIOSAIS%3E2.0.CO%3B2
While not directly related to the Marvel paper, this interplay between observation systems and the climate models is instructive, and rather demolishes the attempt to place either theory or observation in primacy; they work in synergy.

109. Cheers Magma, IIRC the Met Office used to use (essentially) the same program for weather forecasting and climate work, so it seemed natural that the parameterisations were necessary for weather forecasting as well, hence Clive’s point is incorrect.

110. Magma says:

@ dikranmarsupial and others

As you might guess, I get annoyed when those who can and should know better engage in lazy or bad faith arguments (strawman arguments, red herrings, making wildly inaccurate ‘guesstimates’ rather than looking things up, etc.).

Tol and Best are capable of better work; they should ask themselves why they often don’t bother.

111. The Very Reverend Jebediah Hypotenuse says:

Talking about global climate change risks in frequentist terms and as a risk-management problem is just dumb.

The insurance analogy is particularly inappropriate. Insurance only works because there is a large number of premium-paying policy-holders, not all of whom will claim against their policies simultaneously.
Fractional-reserve financing only performs when there is no large scale run on the assets.

So when I hear assertions like
– RCP 8.5 is “improbable”,
– ECS less than 3 C is “more probable” than ECS greater than 3 C,
offered up with all the confidence of the engineering team that designed the Chernobyl Nuclear Power Station, I cannot help but cringe.

We ain’t got but one planet to frack up.
RCP 8.5 only has to happen once.
If ECS is in fact 5 C, the fact that it seemed unlikely to be 5 C in 2018 will be a very small comfort.

We’re not going to get multiple trials with the global climate, and since N=1, “within 5% 19 times out of 20” is not applicable.

Hedging our own existential bets on likelihood assessments of future scenarios is probably not the best way to go.

Planning for the worst case may be more prudent.
There are substantial uncertainties in our knowledge of the carbon cycle, and pretending that we have a solid grasp on its future evolution is nothing but hubris.

Here are two things we know for sure:
1) The last time the global climate was changing at the rate it is right now, there was a mass extinction event.
2) Mother Nature is not affected by the gambler’s conceit.

112. John Hartz says:

The Very Reverend Jebediah Hypotenuse: Well said! Time is not on our side!

113. magma – yes, me too. It would help if they were to acknowledge it when such dodgy arguments are refuted, but I am not going to hold my breath. Sadly on-line discussions tend to be about “winning” some argument, rather than actually trying to get to the truth of the matter, and it gets very boring rather quickly.

114. TVRJH “The insurance analogy is particularly inappropriate.”

FWIW I only brought up insurance as an example of decision making under uncertainty, I wasn’t using it as an analogy for action on climate change. There is a place for risk analysis, but it isn’t itself a full solution. The real problem is getting people to do something about it, especially when it largely isn’t those that have caused the problem that will suffer the worst of the impacts.

115. John Hartz says:

In case anyone needs to be reminded that the computation of global mean surface temperatures is not the be-all and end-all of climate science…

The temperature in Siberia rose 100 degrees. The northern U.S. may pay a frigid price. by Jason Samenow, Capital Weather Gang, Washington Post, Jan 31, 2018

116. John (Hartz) – Yes! … and to underline this further, the new Communications Handbook for IPCC Scientists states in Principle 2 (of 6):
“Although they define the science and policy discourse, the ‘big numbers’ of climate change (global average temperature targets and concentrations of atmospheric carbon dioxide) don’t relate to people’s day-to-day experiences. Start your climate conversation on common ground, using clear language and examples your audience is more likely to be familiar with.”

117. “I was talking about the issue of faults in large volumes of code,”

Again, it gets back to designing software for an extremely controlled environment, which is what ASIC chip design is about. You have complete control over every aspect and everything is characterized to the greatest detail. Nothing in the semiconductor industry proceeds unless the characterization and process control is nearly 100% repeatable. If it’s not repeatable, the yield goes way down. So while engineers are designing this software, they have many ways of evaluating the correct operation of their source code.

Contrast that to climate science, where very little is under control and there are very few environments in which to do controlled experiments. There are very few realistic test harnesses for anything. The software can be buggy and, more importantly, the physics algorithms can be wrong. The latter is of most interest to me, which is why I am interested in simplifying climate science and geophysics models.

I am also trying to find out if these climate scientists have some secrets that they can divulge on how they can avoid all the shortcomings of the lack of a controlled environment. It seems like it is much more of an organic process, where the “blob of correctness” is some amorphous target that wanders around and gets course-corrected very gradually.

Contrast that to the infamous Pentium FDIV bug which was a single flaw due to an improperly devised shortcut in a lookup table. That is like a pinpoint of correctness in comparison.

118. Clive Best says:

The aim of research is to improve our understanding of physical phenomena based on theory matching experimental data. It is not to justify why theory will be proved right if only we wait long enough.

119. John Hartz says:

Clive Best: Endless discussions about research by people who may or may not be knowledgeable about the subject matter has an opportunity cost for each person participating in the discussions.

120. Clive Best says:

Richard,

“… a realistic global model can be viewed as a unique and independent observing system that can generate information at a scale finer than that of the conventional observing system.”

That is true for weather forecasting General Circulation Models. It is also true of reanalysis data, because both are driven by observations (satellite and stations). However it is not true when extrapolating climate many years into the future.

121. BBD says:

The aim of research is to improve our understanding of physical phenomena based on theory matching experimental data. It is not to justify why theory will be proved right if only we wait long enough.

MarkR’s already explained why what you just said is confused.

122. “However it is not true when extrapolating climate many years into the future.”

climate models are not extrapolating climate, they are SIMULATING climate; that is not the same thing.

123. > That’s far removed from software that models physics. I am interested in what makes you think that there is a connection.

That interest is interesting, Web, since it mostly fills the void of your repeated empty assertion.

Testing software is testing software. In “ASIC chip design software,” the operative word is “software.” If someone tells you about ways to “manage the impact of defects,” you focus on that. Your “but floating points” doesn’t cut it anyway – it’s quite possible to use logic-based tools (like Coq) to verify functional properties of floating-point arithmetic. Search for “Hoare logic” for more. Ironically, semantic problems arise from specific implementations, e.g.:

Current critical systems often use a lot of floating-point computations, and thus the testing or static analysis of programs containing floating-point operators has become a priority. However, correctly defining the semantics of common implementations of floating-point is tricky, because semantics may change according to many factors beyond source-code level, such as choices made by compilers. We here give concrete examples of problems that can appear and solutions for implementing in analysis software.

https://hal.archives-ouvertes.fr/hal-00128124/document

That our scientific apparatus has defects has been known since at least Galileo. Now that I mentioned Galileo, we need to conclude that CliveB was right all along. Why?

Because, contrarians always win.
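The compiler-dependence the quoted abstract describes is easy to illustrate: floating-point addition is not associative, so any optimizer (or analysis tool) that reorders a sum can change the rounded result. A minimal sketch in Python, using the classic 0.1/0.2/0.3 example (my own illustration, not from the linked paper):

```python
# Floating-point addition is not associative: reordering a sum,
# as an optimizing compiler may do, changes the rounded result.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # rounds 0.1 + 0.2 first
right = a + (b + c)   # rounds 0.2 + 0.3 first

print(left == right)   # False
print(left - right)    # a one-ulp discrepancy, about 1.1e-16
```

The same source expression can therefore yield different results depending on evaluation order, which is exactly the kind of beyond-source-level semantics the quoted abstract is about.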

124. The Very Reverend Jebediah Hypotenuse says:

The aim of research is to improve our understanding of physical phenomena based on theory matching experimental data. It is not to justify why theory will be proved right if only we wait long enough.

Setting aside your confusion over the conclusions of Marvel et al, those two research aims are not mutually exclusive, Clive Best.

Given the content of your comments so far on this thread, presuming to lecture others here about the aim of research is, well, somewhat ironic.

You don’t like Marvel et al.
That’s OK.
But your thinly-veiled attempts to imply malpractice or malfeasance are ugly, and frankly, quite boring.
Consider the possibility that you might not be the smartest and most informed person in the room.

Constructive criticism is welcome in science – Concern-trolling is not.

125. Clive – I think you completely missed the point of the quote. Without theory, the data is meaningless. Especially for sparse data, theory makes sense of the data and can indeed fill in gaps in the observational record. These data can in turn help with climate models over extended time series. Your comment is a complete non sequitur.

126. Clive Best says:

John
Endless discussions about research by people who may or may not be knowledgeable about the subject matter has an opportunity cost for each person participating in the discussions.

Look if you want to have a discussion on this site where everyone agrees with each other, then that’s just fine by me. I’ll simply check out because I also have other things to do.

127. Clive Best says:

@Richard,

My last comment: The weather forecasting models have been extremely successful and are a great achievement. However there is a fundamental limit of forecasts to about 15 days because of the growing effects of chaotic processes that cannot be modelled.

128. BBD says:

Clive

Confusing weather (models) with climate (models) is a newbie error. Tell me you didn’t just do that.

129. > if you want to have a discussion on this site where everyone agrees with each other,

You must be new here, CliveB. In fact, you must not have read this very thread. Many disagreements are going on as we speak: Mosh vs otters regarding the proper metric to wedge the luckwarm position into the public’s mind (he’s despicably right, IMO), Joshua vs Dikran on the rationality of insurance buyers (both are right and wrong, IMO), Web vs RichardE on testing software that contains physics (a strawman if you ask me), etc.

On the other hand, you’re the only one here rope-a-doping from one talking point to the next:

– Isn’t this paper really saying that the observed temperature record is anomalously low simply because it disagrees with CMIP5 models ?

– It is interesting to contrast this paper with the Miller et al. Carbon Budget paper which is also based on CMIP5 models.

– Science is about comparing theory to experiments. It is a very dangerous track once you start arguing that observations are wrong because they disagree with models.

– Einstein’s theory of general relativity makes exact predictions that can be tested experimentally.

– Even if true, it is hard to react now to a future hypothetical problem if it cannot be tested.

– The risk is unquantifiable.

– Given all that the only sensible policy is to assume ECS = 2.8C

– So I estimate each model probably has at least 1000 coding errors yet to be discovered.

– However, I can’t help having the feeling that there is a campaign to undermine any low results for ECS obtained from the data, such as those of Nic Lewis and Judith Curry.

– I think there is a big difference between Weather Forecasting and Climate Modelling.

– However it is not true when extrapolating climate many years into the future.

I mean, come on.

You could peddle these talking points in just about any ClimateBall thread whatsoever.

In any case, here would be my take-home:

Thanks for the clarification. I don’t have access to the full paper so I accept that me making sweeping generalisations based only on reading the Abstract was a little hasty.

Thanks for playing!

130. verytallguy says:

Look if you want to have a discussion on this site where everyone agrees with each other, then that’s just fine by me. I’ll simply check out because I also have other things to do.

Wot Willard sed.

Clive is a Grandmaster, it appears…

…like trying to play chess with a pigeon; it knocks the pieces over, craps on the board, and flies back to its flock to claim victory.

131. Windchaser says:

Yeah, the constant bouncing from one assertion to the next, dropping them the moment they’re challenged rather than backing them up with reason and evidence… well, that’s half of why I’m no longer a “skeptic”.

The scientists address arguments, discuss them, debate them, break them down piece-by-piece and understand things… while the “skeptics” are motivated to push a line, and will change the subject the moment it is no longer fruitful for that purpose, instead of following it to its conclusion.

TL;DR: Clive, make a point and stick to it. The discussion of observations vs models is a worthwhile discussion, and no, this paper does not toss out observations in support of models, but looks to understand a potential discrepancy.

132. A few favourite memes amongst some contrarians who should know better:

– All programs have bugs, therefore climate models can’t make projections;
– The climate is chaotic, therefore we cannot know the future, beyond what weather models do a week or so ahead;
– The warming response to increased CO2 is logarithmic, so things will plateau.

Do we really have to keep explaining why these are nonsense?

Clive, you have managed to deploy two of these in one thread. Are you going for the flush? It could be a new low.

133. Clive wrote “My last comment: The weather forecasting models have been extremely successful and are a great achievement. However there is a fundamental limit of forecasts to about 15 days because of the growing effects of chaotic processes that cannot be modelled.”

perhaps someone should tell climate modellers that? ;o)

This is now the third time I have said this, but climate models simulate weather (from which you can estimate its statistical properties, i.e. climate); they don’t predict it.
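The simulate-versus-predict distinction can be shown with a toy chaotic system. In the sketch below the logistic map stands in, very loosely, for the atmosphere: two trajectories started a billionth apart become completely different within tens of steps (the “weather” is unpredictable), yet their long-run statistics agree closely (the “climate” is predictable). The map and parameters are my own illustrative choices, nothing from an actual climate model:

```python
# Toy illustration: chaotic trajectories diverge quickly, but their
# long-run statistics are insensitive to the initial conditions.
def logistic_trajectory(x0, r=3.9, n=100_000):
    xs = []
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)   # chaotic for r = 3.9
        xs.append(x)
    return xs

run1 = logistic_trajectory(0.4)
run2 = logistic_trajectory(0.4 + 1e-9)   # tiny initial-condition error

# "Weather": the two runs decorrelate completely within ~100 steps.
max_gap = max(abs(a - b) for a, b in zip(run1[:200], run2[:200]))

# "Climate": time-averaged statistics still agree closely.
mean1 = sum(run1) / len(run1)
mean2 = sum(run2) / len(run2)

print(max_gap)              # order 0.1-0.9: trajectories have fully diverged
print(abs(mean1 - mean2))   # small: the long-run mean is barely affected
```

This is why a 15-day predictability limit for individual weather states says nothing about the ability to project climate statistics decades ahead.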

134. Clive,

The aim of research is to improve our understanding of physical phenomena based on theory matching experimental data. It is not to justify why theory will be proved right if only we wait long enough.

I would argue that the aim is to gain understanding through modelling, theory, experimentation, and observation. There can be many reasons why theory/models may not match observations/data. One doesn’t simply reject a theory (or throw away a model) simply because there is some discrepancy, or assume that the observations/data have issues. What you seem to be missing is that much of the work on climate sensitivity is trying to resolve a discrepancy. It’s not trying to justify a theory.

135. Clive wrote “Look if you want to have a discussion on this site where everyone agrees with each other, then that’s just fine by me. I’ll simply check out because I also have other things to do.”

I am happy with both agreeing and disagreeing with others (in most cases there is some truth/value in both sides), however there is little point in having a discussion where one side refuses to acknowledge their errors and merely repeats them. For example, the fact that we can only predict weather a short time in advance does not cast any doubt on our ability to make centennial-scale projections of climate, because climate models do not work by predicting weather but by simulating it. Sure, check out, but in doing so you are neglecting an opportunity to correct your misunderstanding and realise why you are wrong.

136. BBD,

Confusing weather (models) with climate (models) is a newbie error. Tell me you didn’t just do that.

Yup, I think he just did.

137. Magma says:

Without theory, the data is meaningless. Especially for sparse data, theory makes sense of the data and can indeed fill in gaps in the observational record. — Richard Erskine

That’s one of the reasons that it seems very unlikely to me that global temperature reconstructions miss significant departures from longer-term trends. Our historical observational record is short, but we know that large highly explosive or SO2-rich volcanic eruptions can lead to ~0.5 °C cooling for one to two years. We can reasonably extrapolate (and model!) those effects to larger plinian eruptions, longer-lasting degassing episodes from S-rich magmas, or closely spaced eruptions from different volcanoes. There is also the separate case of rapid drainage of enormous cold inland glacial lakes with their complex effects on oceanic currents.

But as far as I know — and I’m open to being corrected — there are no known natural physical mechanisms for significant short to medium-term warming departures from long-term global temperature trends. That would undercut one of the contrarian arguments against recent AGW being unprecedentedly rapid because “scientists don’t know what happened in the past”. (In its stupidest form this reduces to the likes of “So how warm was it on July 3rd, 3015 B.C.?”)

138. Chubbs says:

Clive misses the fundamental difference between weather and climate models. Weather models are forecasting the details of chaos, hence the short time-window for good forecasts. Climate models focus on the long-term energy balance, not the chaotic details, an easier task in many respects.

A better question relative to the Marvel paper, is whether models get the structure of chaos right. The recent Cox paper gives observational support by showing that models with around 3C ECS match variability in the observational record. So Marvel, Dessler, and Cox are all complementary in showing what the observations are (and are not) telling us about ECS.

139. Chubbs,

The recent Cox paper gives observational support by showing that models with around 3C ECS match variability in the observational record.

Is this a fair summary of the Cox et al. paper? I thought it was more that it used the variability to infer that the best estimate for ECS was around 3K, and that the uncertainty was narrower than implied by the IPCC.

140. It’s testing software to make certain that it’s doing what you want it to do. That’s straightforward for engineered systems where the goal is to produce something like ASICs.

The issue with climate science and any other types of basic research is that you can’t direct the outcome in any one direction, instead you are trying to infer the laws of nature. Trying to find patterns, but avoiding patterns that are artifacts of the software.

Clive does lots of interesting work and has excellent physics insight. I just don’t understand his mild (by skeptic standards) antagonism against the AGW proponents. To me, the combination of peak oil and buildup of atmospheric CO2 is serious business.

141. verytallguy says:

Hilariously, Clive has, indeed, returned to his flock and claimed victory.

https://cliscep.com/2018/02/01/the-sensitivity-of-climate-scientists/

142. vtg,
Yes, I noticed that. Seems to think that it’s climate scientists who are the sensitive type?

143. verytallguy says:

His “argument” seems to be something along the lines of:

“When ‘sceptics’ misinterpret science, it shows that scientists have something to hide because they try to correct the misunderstanding.

And bugs exist!!”

At least that’s as much as I could glean from it.

It’s been a while since I’ve seen a gish gallop quite as impressive as Clive was on this thread.

144. vtg,
I’m not quite sure what its argument is either, but it seems to be along the lines of it not being fair that scientists keep publishing papers criticising/contradicting work that Clive happens to like.

145. Chubbs says:

ATTP, Let me start by giving a qualifier, I haven’t read the paper, so my statement above is just based on the Figures. Several figures in the paper compare a variability parameter in models to observations. The models have a broader range in variability than the observations, but there is an overlap region, which is used to narrow the model ECS range. Judging from the figures the high ECS models are excluded, so presumably the low and mid-range ECS models are in rough agreement with the observations.

146. Chubbs,
Thanks. Yes, that makes sense now. I hadn’t quite appreciated that you could also look at it that way around.

147. The Very Reverend Jebediah Hypotenuse says:

Hilariously, Clive has, indeed, returned to his flock and claimed victory…

Wherein Clive says:

A real physical theory like Quantum Electrodynamics (QED) predicts the value of the anomalous electron magnetic dipole moment and experiments have confirmed QED theory to a precision of 1 part in a trillion. Climate Models on the other hand can only manage to ‘project’ ECS to be in the ‘likely’ range of 1.5 to 4.5C.

Of course – Real Climate Science must only deal in 5-sigma results.

As someone who has been around long enough to remember when the Hubble constant was known to only a factor of two, I can appreciate a good No True Scotsman fallacy when I see one.

BTW, Clive, regarding the anomalous electron magnetic dipole moment, the current experimental value and uncertainty is:

a(e) = 0.001 159 652 180 73 (28)

Thus, a(e) is known to an accuracy of around 1 part in 1 billion (10^9).

But, hey, what’s a factor of a thousand between friends?
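The arithmetic behind that factor of a thousand is easy to check. Taking the quoted value and reading the parenthesized “(28)” as an uncertainty in the last two quoted digits, the relative uncertainty comes out at a few parts in 10^10 — order 1 part in 10^9, not Clive’s 1 part in 10^12:

```python
# Relative uncertainty of the quoted a(e) = 0.001 159 652 180 73 (28).
# The "(28)" denotes an uncertainty of 28 units in the last two digits,
# i.e. in the 14th decimal place.
a_e = 0.00115965218073
sigma = 28e-14

rel = sigma / a_e
print(rel)   # ~2.4e-10, i.e. roughly 1 part in 4 billion
```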

148. The Very,
Maybe it’s just me, but it would be pretty unscientific to claim a precision that was unwarranted. Maybe if climate scientists appeared more confident and claimed more precision, Clive would be happier?

149. I think it was my high school physics teacher who said that knowing the order of magnitude of a value is highly valuable. (And often hard to do.)

It was easier to have a positive view of humanity before the internet.

150. KiwiGriff says:

His post is full of little pieces of self-deception, like:
“Climate Models on the other hand can only manage to ‘project’ ECS to be in the ‘likely’ range of 1.5 to 4.5C.”
I was under the impression that climate models constrain the likely range to 2 to 4.5 C, as in AR4; it is later work, like Lewis and Curry’s observation-based estimates, that resulted in a widening of the low end in AR5.
He even says as much:
“Lewis & Curry and others all base their estimates of ECS on an energy balance model combined with observed temperatures. See also A new measurement of Equilibrium Climate Sensitivity (ECS)”

One would almost think he is trying hard to delude himself.

151. KiwiGriff,

I was under the impression that climate models constrain the likely range to 2 to 4.5 C, as in AR4; it is later work, like Lewis and Curry’s observation-based estimates, that resulted in a widening of the low end in AR5.

Yes, climate sensitivity is an emergent property from climate models, not something that is prescribed. However, the overall IPCC range is set by a combination of all the evidence, which includes climate models, paleo estimates, etc. It is the case that the energy balance work (like that of Lewis and Curry) probably contributed to the lower limit dropping from 2C to 1.5C. However, it seems that more recently people are arguing that the lower limit should be back at 2C.
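For readers unfamiliar with the energy balance approach mentioned above: it estimates ECS from observed changes in surface temperature, radiative forcing, and the planetary heat uptake. A minimal sketch of that calculation, with illustrative round numbers rather than the actual inputs used by Lewis and Curry:

```python
# Energy-budget ECS estimate: ECS ~ F_2x * dT / (dF - dN),
# where dN is the change in the top-of-atmosphere energy imbalance
# (heat uptake). All numbers below are illustrative, not from any paper.
F_2x = 3.7   # W/m^2, assumed forcing from a doubling of CO2
dT = 0.8     # K, assumed warming between base and final periods
dF = 2.3     # W/m^2, assumed change in total forcing
dN = 0.6     # W/m^2, assumed change in heat uptake

ecs = F_2x * dT / (dF - dN)
print(round(ecs, 2))   # 1.74 K with these made-up inputs
```

The sensitivity of the result to the inputs is part of the debate: shifting the assumed aerosol forcing or the temperature dataset (e.g. HadCRUT vs BEST coverage) moves the estimate noticeably.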

152. John Hartz says:

Re Clive Best’s concern trolling…

One of my buddies in grade and high school was wont to say,

Just because you brush your teeth with gunpowder in the morning, doesn’t mean you have the right to shoot off your mouth all day!

153. John Hartz says:

Re the difference between forecasting weather short-term and simulating climate in the long term, the Met Office continues its efforts to forecast climate over a short time span…

154. JCH says:

Annan does not seem to think the lower number should move back to 2, but he also sort of deferred to Dessler just recently. I’m waiting for the imminent Dessler-Mauritsen-Stevens paper using their new approach: 2.4 to 4.5.

155. JCH says:

JH – I’ve been following their decadal forecast since Smith et al came out. Given it is generally described as impossible to do, I think they have had surprising success.

156. JCH,
I was thinking of Royal Society report which says

A value below 2 °C for the lower end of the likely range of equilibrium climate sensitivity now seems less plausible.

157. John Hartz says:

Possible fodder for a new OP…

Climate Impact Lab: measuring the social cost of climate change by Erica Chen, Research IT, University of California Berkeley, Jan 31, 2018

158. Steven Mosher says:

thanks willard.

here is what i found.
once i accepted my metric
i was committed to a path where i could change my mind as data came in.

i will come back to the convo.. fighting vpn and the great fire wall.

its a blessing.

that and no gunpowder toothpaste.

159. Steven Mosher says:

modelling chip performance is no joke.

gds out.

pray the fucker works.

160. Steven Mosher says:

Poor enginering kids never made million dollar bets before.
the simulation worked..ya close enough.
tape out and pray kid.
trust the old man.

161. niclewis says:

Marvel et al find that ECS estimated from models’ historical simulation data for the period 1979-2005 is lower than that estimated from abrupt4xCO2 simulations. However, I found that this is an artefact resulting from unbalanced volcanism during that period. When years affected by volcanism are excluded, the apparent low bias disappears: see https://climateaudit.org/2018/02/05/marvel-et-al-s-new-paper-on-estimating-climate-sensitivity-from-observations/

162. Nic,
I saw that. Haven’t had much of chance to look at it, or think about it. I noticed this, though

An alternative explanation for the models as a group misestimating the actual temporal evolution of SST change patterns is that the models as a group are imperfect. To my mind that should be the null hypothesis, rather than that internal variability over the last few decades results in an unusually low estimate of ECS.

In my view, this is a somewhat unfortunate use of the idea of a null hypothesis. As you should know, a null hypothesis is something against which you test hypotheses; it’s not really an assumption about the status quo. If we knew absolutely nothing else, then maybe we would assume that some set of observations were the best guess as to what would be expected. However, sometimes we know more, and have indications that that assumption may be wrong. In a sense, Marvel et al. are providing plausible explanations for a potential discrepancy and producing a result that includes all these uncertainties. We shouldn’t – in my view – narrow our confidence interval until we’re pretty certain that we’re justified in doing so.

163. Nic,
Okay, so if your criticism is valid, it might resolve the discrepancy between the historical runs and the full GCM ECS values (which seems a bit odd, given that the models suggest that the feedback should change as we warm to equilibrium). However, it still doesn’t (as far as I can see) resolve the discrepancy between the amip runs and the historical runs.

164. niclewis says:

ATTP,
“which seems a bit odd, given that the models suggest that the feedback should change as we warm to equilibrium”
Marvel uses the Caldwell et al 2016 ECS estimates, which are derived from regression over years 1-150 of abrupt4xCO2 and are lower. I calculated my ECS values on the same basis, but I made allowance for the fact that 4xCO2 forcing is nearly 5% more than double the 2xCO2 forcing (Byrne & Goldblatt 2014; Etminan et al 2016). Averaging across all CMIP5 models for which I have data, I would only expect the historical run ECS estimates to be ~5% lower than my year 1-150 abrupt4xCO2 regression-based ECS estimates. That there is in fact no shortfall in the historical ECS estimates may be due to ECS estimation over such a short period being very noisy; there is also only a subset of models involved.
If Marvel had instead used regression over years 21-150 of the abrupt4xCO2 simulations to estimate long-run ECS, as e.g. Armour 2017 does, then (on my 2xCO2 vs 4xCO2 conversion basis) I would have expected their long-run ECS estimates to be ~10% higher than those from the historical simulation data.

“However, it still doesn’t (as far as I can see) resolve the discrepancy between the amip runs and the historical runs.”
Agreed. That discrepancy seems to me, based on current evidence, to be due not to the pattern of historical SST warming being caused by particularly unusual internal variability, but to it being a forced response that the CMIP5 coupled models fail to simulate.

165. Nic,

That discrepancy seems to me, based on current evidence, to be due not the pattern of historical SST warming being caused by particularly unusual internal variability, but to it being a forced response that the CMIP5 coupled models fail to simulate.

Except, it seems clear that internal variability can play a role on decadal timescales, and there’s no reason why the real world should be expected to lie somewhere in the middle of all possible worlds (i.e., the range of possible outcomes given different initial conditions). So, it’s not entirely unreasonable that what we’ve actually experienced has led to somewhat less warming than might have been expected. Unless I’m mistaken, you don’t have any particular evidence that it is more likely to be a forced response that CMIP5 coupled models fail to simulate (it could be, as I think is acknowledged in Marvel et al., but it may well not be).

166. Andrew E Dessler says:

I would love to know what evidence Nic Lewis has that the pattern of warming is forced and not internal variability. Please do share.

167. Andrew,
Indeed, I would quite like to know too.

168. niclewis says:

Andrew

I plan to write an article setting out evidence that lower estimates of ECS from historical observations than from CMIP5 models are not due to unforced variability affecting SST warming patterns in an unusual way over recent decades.

In the meantime, I have a question on your new paper estimating ECS from interannual variability. You say that all data used in it are publicly available on the internet, but that is not so for the radiative forcing estimates that you use. You say that yours were based on AR5 forcing estimates, and give a general explanation in lines 267-276 of various changes that you made to the AR5 estimates, but you do not give sufficient detail to enable actual values to be confidently derived. It is not even fully clear whether you revised AR5 values for years up to 2011 (I imagine so) as well as extending them thereafter. Moreover, you refer to 14 different forcing terms in 2015, but AR5 Table AII.1.2 only lists 11 terms.

Please can you respond by stating the 2011 and 2016 annual mean central estimates for each of your forcing terms. In case you have any concerns, I am not planning to use this information to criticize your paper.

169. Nic,

I plan to write an article setting out evidence that lower estimates of ECS from historical observations than from CMIP5 models are not due to unforced variability affecting SST warming patterns in an unusual way over recent decades.

Can you briefly illustrate what evidence supports this suggestion? There’s little risk of someone stealing your ideas. In a sense, I’m interested because it’s hard to see how you could make such an argument without applying some kind of physical/modelling framework. So, either you’ve developed some kind of model to test your idea, or you’ve found a way to do so without much in the way of physics/modelling, or something else altogether. I’m interested in how you’re likely to support your suggestion.

170. Chubbs says:

ATTP, I don’t think the low ECS estimates are due to recent decades either. Recent warming is proceeding about as forecast by climate models and faster than CBM. As pointed out in several recent papers, problems arise from limitations in observations and aerosol forcing estimates, and the non-linear ramp due to ocean-induced lags. Note that merely substituting BEST for HadCRUT increases CBM TCR/ECS by 21%. As I pointed out in another thread, using a non-linear fit, which gives recent years a heavier weighting, allows observation-based methods to match model predictions.

171. Nic Lewis: “In case you have any concerns, I am not planning to use this information to criticize your paper.”

Bwahaha, rubes.

172. John Hartz says:

This just in…

[05. February 2018] On the basis of a unique global comparison of data from core samples extracted from the ocean floor and the polar ice sheets, AWI researchers have now demonstrated that, though climate changes have indeed decreased around the globe from glacial to interglacial periods, the difference is by no means as pronounced as previously assumed. Until now, it was believed that glacial periods were characterised by extreme temperature variability, while interglacial periods were relatively stable. The researchers publish their findings advance online in the journal Nature.

173. Andrew E Dessler says:

Nic Lewis: Indeed, we did accidentally leave the RF out. I’ve updated the pre-print to include a link, and you can download the RF data here: http://bit.ly/2FRE83c.

I certainly understand not wanting to talk about work in progress. I normally don’t even like to circulate pre-prints. We posted the one on EarthArxiv as an experiment to see if we get useful feedback. Thus, I’d encourage you to criticize the paper — that’s why we posted it. To make sure I see them, please email any comments to me.

If you can separate the pattern of forced warming from internal variability, that would be a huge scientific coup. Good luck! Until you do, however, you should realize that no one in the scientific community is going to take low values of ECS from the 20th century seriously.

174. John Hartz says:

Andrew E Dessler: Out of curiosity, when did climate scientists/modelers first begin to use the ECS index?

175. Andrew E Dessler says:

Arrhenius 1896 is generally credited as the first attempt to calculate the warming response to CO2. First person to talk about the response to doubling CO2 (that I know about, at least) is Hulburt (10.1103/PhysRev.38.1876) in 1931.

176. JCH says:

January NOAA PDO solidly up.

This paper is also new, and might be somewhat on topic:

Disentangling global warming, multi-decadal variability, and El Niño in Pacific temperatures

Abstract

A key challenge in climate science is to separate observed temperature changes into components due to internal variability and responses to external forcing. Extended integrations of forced and unforced climate models are often used for this purpose. Here we demonstrate a novel method to separate modes of internal variability from global warming based on differences in timescale and spatial pattern, without relying on climate models. We identify uncorrelated components of Pacific sea-surface temperature (SST) variability due to global warming, the Pacific Decadal Oscillation (PDO), and the El Niño–Southern Oscillation (ENSO). Our results give statistical representations of PDO and ENSO that are consistent with their being separate processes, operating on different timescales, but are otherwise consistent with canonical definitions. We isolate the multi-decadal variability of the PDO and find that it is confined to midlatitudes; tropical SSTs and their teleconnections mix in higher-frequency variability. This implies that midlatitude PDO anomalies are more persistent than previously thought.

It’s all about the Eastern Pacific.

178. John Hartz says:

Andrew E Dessler: Thank you.

179. niclewis says:

Andrew Dessler
Thanks for making your study’s forcing values available. I expect there is a reason why only June to December monthly values are present for 2015. I’m not fussed; I only mention it in case the missing data was omitted by mistake.

“… no one in the scientific community is going to take low values of ECS from the 20th century seriously.”

I’m sure you’re right that most people employed in the field of climate science prefer to trust GCM based ECS estimates to instrumental observation based ones.

180. BBD says:

I’m sure you’re right that most people employed in the field of climate science prefer to trust GCM based ECS estimates to instrumental observation based ones.

Of course an EBM isn’t a model. They really should change the acronym. And of course the ‘observational’ prefix to ‘estimates’ isn’t misleading rhetoric…

No wonder your brief moment in the sun is nearly over, Nic.

181. Jon Kirwan says:

Nic Lewis:
I’m just trying to follow along right now and I’d like to ask you if I’ve understood your writing here so far, by putting it into my own words and seeing how close you feel I am to following your comments so far. Let me summarize what I think I am hearing. (I want to do this because of comments others are also making, which do not entirely seem to capture what I think I’m reading from you, overall.)

So here are my quoted thoughts, initially: “You will be making available at some point your thoughts which show that (assumed true) claim of ECS(historical obs) < ECS(CMIP5 models) isn't caused by 'unusual recent SST warming patterns due to unforced variability'. This doesn't necessarily mean (to me) that you have an alternative explanation that you plan to communicate. It may merely mean that you feel you can show it's not due to the reasons given."

May I fairly conclude that those suggesting (or assuming in what they suggest) you have a mechanism here may be misreading what you actually said here? Or do you expect to propose a specific mechanism explaining the 'assumed true comparison,' as well? Or do you expect to undermine the comparison, itself, and show that it isn't true and therefore any resulting conclusions from assuming it is true are themselves entirely irrelevant due to unsound reasoning from incorrect assumptions?

The reason for this last question above is that you just wrote (perhaps wryly) something that may appear to question the preference of CMIP5 model ECS values vs instrumental observation based ones. Which is entirely a different matter from the idea that you accept some shared idea that ECS(historical obs) < ECS(CMIP5 models) is true.

Just curious what exactly you are saying, here. It's not yet clear to me into which path ahead you are pointing your finger.

182. verytallguy says:

BBD,

A suggestion that as Nic has done what virtually no other “sceptics” have and publish his findings in the literature, he should be made welcome here, along with prof Dessler.

183. Andrew E Dessler says:

Nic Lewis: Let’s please dispense with the fiction that your estimate is based purely on observations. In fact, it is based on its own model – the linearized energy balance model. I think there are many good reasons to conclude that that model, which parameterizes TOA flux in terms of global-average surface temperature, has deficiencies that can lead to substantial errors.

This is the tip of the iceberg of problems with these estimates, as I’m sure most of you know. Other issues include: ECS increases with warming, so 20th-century ECS is biased low; uncertainty in forcing; differences in forcing efficacy, which can bias ECS; and biases from geographically incomplete or inhomogeneous observations. Taken as a whole, I simply can’t put much weight in these estimates.
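The linearized energy-balance model Dessler refers to parameterizes the TOA imbalance as N = F − λT, so that λ = (ΔF − ΔN)/ΔT and ECS = F_2x/λ. A minimal sketch of that relation, with illustrative placeholder numbers rather than any published values:

```python
# Hedged sketch of the linearized energy-balance ECS estimate:
# TOA net flux is parameterized as N = F - lambda * T, so
# lambda = (dF - dN) / dT and ECS = F_2x / lambda.
# All input numbers are illustrative placeholders.

F_2x = 3.7  # W/m^2, commonly used forcing for doubled CO2

def ebm_ecs(dT, dF, dN):
    """Effective climate sensitivity (K per CO2 doubling) from changes in
    global-mean temperature dT (K), forcing dF (W/m^2), and TOA
    imbalance dN (W/m^2)."""
    lam = (dF - dN) / dT  # net feedback parameter, W/m^2/K
    return F_2x / lam

# Illustrative inputs of roughly the right magnitudes (placeholders):
print(ebm_ecs(dT=0.85, dF=2.3, dN=0.6))  # ~1.85 K
```

The point of the thread is that this single global regression hides the spatial pattern of warming, which is exactly where the confounding discussed above enters.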

As a scientist, I don’t want to rely on any one estimate. Rather, I look at ALL of the estimates and try to find the most parsimonious explanation for them all. To the extent that there are disagreements, my goal is to explain them. Since most of the estimates of ECS are > 2 K, and there are lots of problems with the “observational” estimates, the most parsimonious is that the low ECS estimates are wrong.

Of course, I’m open to being convinced otherwise. I look forward to seeing your research separating forced changes from interannual variability. And, for the sake of future generations, I hope you’re right that ECS is low.

Yes, those months are not in the file.

184. Windchaser says:

I’m sure you’re right that most people employed in the field of climate science prefer to trust GCM based ECS estimates to instrumental observation based ones

To calculate ECS, you have to be able to distinguish internal variability from the forced response. In GCMs, this is relatively simple. In observation-based estimates, it isn’t.

That’s the problem, Nic — your preferred methodology doesn’t really give us a clean picture of ECS. If you could do as good of a job separating internal from forced as the GCMs do, then you’d actually be accomplishing something quite significant.

185. Joshua says:

Willard –

Does passive-aggressiveness cross over all domains in the matrix, or does it deserve its own node? If the latter, might suggest a certain eponymous label.

186. Clive Best says:

Andrew,

You write: “If you can separate the pattern of forced warming from internal variability, that would be a huge scientific coup.”

But isn’t that exactly what was claimed in the AR5 attribution analysis?

187. dikranmarsupial says:

Clive, I think the point was that Nic plans to do so without using a model of the physics (i.e. a GCM).

188. Clive, I think the point was that Nic plans to do so without using a model of the physics (i.e. a GCM).

I’m not sure that we know how Nic plans to do it. However, I think the AR5 analysis does try to constrain this, but there is still quite a wide range (the best estimate is – I think – that internal variability has suppressed the forcing warming by about 10%).

189. A suggestion that as Nic has done what virtually no other “sceptics” have and publish his findings in the literature, he should be made welcome here, along with prof Dessler.

Agreed.

Nic,

I’m sure you’re right that most people employed in the field of climate science prefer to trust GCM based ECS estimates to instrumental observation based ones.

Others have already pointed this out, but even the instrumental/observational based estimates are still essentially model based; it’s just a pretty simple model. Also, these estimates typically rely on assumptions (linearity of the feedback response, efficacy of forcings, the influence of internal variability) that may well not hold. As Andrew suggests, the goal should really be to produce the best possible estimate of reality, which should involve considering all the evidence, not only some of it.

190. paulski0 says:

Chubbs,

ATTP, I don’t think the low ECS estimates are due to recent decades either. Recent warming is proceeding about as forecast by climate models…

There is a fairly wide range of rates for recent warming across climate models, so I don’t see that observed warming proceeding within that range necessarily precludes some impact from internal variability on the real Earth.

Also, rate of warming is not especially well-determined by ECS. Surface warming rate should be generally proportional to TOA net flux, which will be determined by both forcing and feedback. Over the period in question, forcing is perhaps a larger factor. However, the rate of surface warming relative to TOA net flux is expected to vary with spatial pattern, which can be thought of as ocean heat uptake variability. It’s entirely plausible that you could get a spatial pattern which causes a reduced feedback strength, meaning a lower rate of warming, as well as reduced heat uptake rate, meaning a higher rate of surface warming. There is then little net effect on surface warming rate, but feedback strength and calculated effective sensitivity is lower.

Nic Lewis appears to argue against the idea that the spatial internal variability pattern over 1979-2005 caused reduced feedback strength by invoking AMO as a positive internal variability factor in that period. To the extent that AMO would affect these things positively, the most obvious impact would be a higher rate of warming due to presence of warmer surface waters in areas connected to the fast-responding NH continents. However, there’s no obvious reason why that would alter net feedback strength positively. It could well be the case that AMO, perhaps in concert with Pacific variability, caused both an enhanced surface warming rate and reduced feedback strength.

One caution I would give against the AMIP results is that I believe the SST data they use is either Reynolds OISST or HadISST1. Both produce markedly less warming over recent decades compared to HadSST3 and ERSSTv5, partly due to lack of ship-buoy adjustment. I’d like to see how results would be affected by using corrected datasets.

191. JCH says:

Coldest month in years: AMO torching; Eastern Pacific very cold. The AMO is for fibbers.

193. Chubbs says:

paulski0, Thanks for the comment. Agree with your points about internal variability. What I was trying to convey is that the relatively low ECS produced by EBMs is mainly driven by the 1800s baseline data, since recent warming has been faster than the EBMs would indicate. The Otto et al. paper showed that adding a warm recent decade doesn’t do much to the EBM estimates, since they are anchored by the 1800s data.

As covered in recent papers, EBMs have the following issues: 1) insufficient 1800s energy-balance data, 2) poor coverage and missed warming in HadCRUT, 3) large, uncertain, and globally non-uniform aerosol effects since the 1800s, and 4) the slow response of the oceans, particularly the Southern Ocean, with knock-on feedback effects.

As I pointed out in another thread, using 1- and 2-box models to estimate ECS from observations produces higher ECS estimates than standard EBMs. For instance, the recent Haustein et al. paper’s 2-box model with 4- and 209-year time constants implies an ECS of 3.2C. The box models provide a better estimate by giving the recent temperature observations and forcing estimates more weight, which could help address all four of the EBM shortcomings listed above.
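A minimal sketch of how such a two-box step-response model behaves, using time constants of roughly 4 and 209 years as mentioned above. The response coefficients q1 and q2 below are hypothetical placeholders, not the fitted Haustein et al. values; they are chosen only to show why the slow box keeps transient warming well below equilibrium:

```python
import math

# Hedged sketch of a two-box response model to an abrupt CO2 doubling.
# Time constants follow the ~4 and ~209 year values quoted above; the
# response coefficients q are hypothetical placeholders, not fitted values.

tau = (4.0, 209.0)   # fast and slow time constants, years
q = (0.45, 0.40)     # response per unit forcing, K/(W/m^2) (hypothetical)
F_2x = 3.7           # W/m^2, forcing for doubled CO2

def step_response(t):
    """Warming (K) at year t after an abrupt CO2 doubling."""
    return F_2x * sum(qi * (1 - math.exp(-t / ti)) for qi, ti in zip(q, tau))

ecs = F_2x * sum(q)  # equilibrium warming as t -> infinity
print(step_response(100), ecs)  # transient warming after a century is still well below ECS
```

Because the slow box has barely responded after a century, any estimate anchored to the transient record will sit below the equilibrium value unless the slow response is modelled explicitly, which is the advantage claimed for the box-model approach.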
