Lewis and Curry, again

I should probably say something about the new Lewis & Curry paper. It’s mostly an update to their earlier paper that I’ve discussed before. Bottom line: there are reasons to be cautious.

The basic formalism is that one can use an energy balance approach to estimate the transient climate response (TCR) and the equilibrium climate sensitivity (ECS). Given the change in forcing, \Delta F, the change in surface temperature, \Delta T, and the change in system heat uptake rate, \Delta Q, we get

TCR = F_{2 \times CO2} \dfrac{\Delta T}{\Delta F},

and

ECS = F_{2 \times CO2}\dfrac{\Delta T}{\Delta F - \Delta Q},

where F_{2 \times CO2} is the change in forcing due to a doubling of atmospheric CO2.
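
To make this concrete, here is a minimal sketch of the arithmetic in Python. The numbers are purely illustrative, chosen only to land roughly in the Lewis & Curry ballpark; they are not the values used in the paper.

    # Energy-budget estimates of TCR and ECS; all inputs are illustrative.
    F_2xCO2 = 3.7  # forcing from a doubling of CO2 (W m^-2)
    dF = 2.5       # assumed change in forcing (W m^-2)
    dT = 0.9       # assumed change in surface temperature (K)
    dQ = 0.6       # assumed change in system heat uptake rate (W m^-2)

    TCR = F_2xCO2 * dT / dF          # ~1.33 K
    ECS = F_2xCO2 * dT / (dF - dQ)   # ~1.75 K
    print(f"TCR = {TCR:.2f} K, ECS = {ECS:.2f} K")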

If new estimates suggest that the change in forcing is slightly higher, and the change in system heat uptake rate is slightly lower, than previously thought, then the estimates for the TCR and ECS go down, which I think is the main difference between the new paper and the old one.

As I’ve already mentioned, there are reasons to be cautious about these results. One other issue I have with the Lewis and Curry result is that they suggest a TCR-to-ECS ratio of about 0.8, which I think is probably too high. Given the heat capacity of the oceans, and the current planetary energy imbalance, a ratio below 0.8 seems likely. I have wondered if there would be a way to incorporate this into the analysis, but it may not be straightforward.
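
For what it’s worth, within this formalism the ratio follows directly from the two expressions above, TCR/ECS = (\Delta F - \Delta Q)/\Delta F, so a quick check with illustrative numbers (assumptions, not Lewis & Curry’s values) suggests something nearer 0.7:

    # Implied TCR-to-ECS ratio in the energy-budget formalism: 1 - dQ/dF.
    dF = 2.5  # assumed change in forcing (W m^-2)
    dQ = 0.7  # assumed current planetary energy imbalance (W m^-2)
    print(1 - dQ / dF)  # ~0.72, below the ~0.8 ratio discussed above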

However, a key problem with the Lewis & Curry approach, which Andrew Dessler has already discussed on Twitter, is that it essentially assumes that the one-dimensional energy-balance approach exactly represents what will happen. The implication is that if there is some change in planetary energy imbalance (or change in system heat uptake rate), driven by some well-defined change in forcing, this will then lead to a very well-defined change in surface temperature.

The problem, as is discussed in this paper (Dessler, Mauritsen & Stevens), is that this isn’t correct. Internal variability confounds the relationship between changes in system heat uptake rate and changes in surface temperature. In a sense, this is obvious. We know that the climate system is very complex and that the exact path that it follows depends on the state of the system. If you average over long enough timescales, this becomes less significant, but over periods of many decades, it can be reasonably substantial. We don’t expect that the relationship between change in surface temperature, change in forcing, and change in system heat uptake rate over a period comparable to that of the instrumental temperature record will exactly represent the long-term sensitivity of the system.

In a sense, Lewis & Curry are taking one realisation of reality and assuming that it is an exact representation of the typical response of the system. It probably isn’t. This doesn’t mean that climate sensitivity can’t be low (even mainstream estimates do not rule this out). It simply means that we should be cautious of assuming that it is low based on an estimate that can’t fully account for how internal variability may have influenced the path that we’ve actually followed.
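
To illustrate the point, here is a toy Monte Carlo calculation (emphatically not the analysis in Dessler, Mauritsen & Stevens): perturb the energy-budget inputs with an assumed level of internal-variability noise and watch the inferred ECS spread.

    import numpy as np

    # Toy Monte Carlo: perturb dT and dQ with assumed internal-variability
    # noise and see how the inferred ECS spreads. All numbers illustrative.
    rng = np.random.default_rng(0)
    F_2xCO2, dF, dT, dQ = 3.7, 2.5, 0.9, 0.6
    dT_i = dT + rng.normal(0.0, 0.08, 10000)  # ~0.08 K noise on dT
    dQ_i = dQ + rng.normal(0.0, 0.15, 10000)  # ~0.15 W m^-2 noise on dQ
    ecs = F_2xCO2 * dT_i / (dF - dQ_i)
    print(np.median(ecs), np.percentile(ecs, [5, 95]))

A single realisation drawn from a spread like this can easily sit well away from the central value, which is essentially the concern being raised here.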


691 Responses to Lewis and Curry, again

  1. Chubbs says:

    L&C 2015+18 predict current warming of 0.12–0.13 C/decade, but warming over the past 40 years has been 0.18 C per decade. Recent warming is much closer to climate model predictions than energy-balance estimates.

  2. Chubbs,
    That’s related to an issue I have. We’re only likely to double atmospheric CO2 by the middle of the century. If the TCR is as low as suggested by L&C, then either we’re under-estimating the net change in anthropogenic forcing (i.e., we will have effectively doubled it soon), or we’ll warm slowly in the next few decades (even slower than the current long-term trend).

  3. frankclimate says:

    chubbs, L/C don’t say anything about the warming trends in time. They predict a warming of about 0.35 °C per 1 W/m² of forcing.

  4. frankclimate,
    Essentially they do, because it will clearly take longer than 120 years to double atmospheric CO2 and they’re suggesting a TCR of around 1.4K. The rough trend would then be 1.4/12 ≈ 0.12K/decade.

  5. I should probably have added that even if we ignore the internal variability issue, another issue is that they don’t (I think) lag the temperature relative to the forcing. Even though the upper ocean equilibrates quite quickly, it is still not instantaneous. Therefore, there should be a lag of a few years to take this into account (for example, average the change in forcing over a decadal period offset by about 5 years compared to the period over which you average the temperature change). This won’t be a huge effect, but it is not completely negligible.
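
    A rough sketch of the sort of lagged averaging I have in mind (illustrative only, not what Lewis & Curry actually do; F and T are assumed to be annual numpy arrays):

        import numpy as np

        # Average the forcing over decadal windows offset ~5 years earlier
        # than the windows used for temperature, then difference the final
        # and base periods. 'base' and 'final' are start indices (years).
        def lagged_deltas(F, T, base, final, width=10, lag=5):
            dT = np.mean(T[final:final + width]) - np.mean(T[base:base + width])
            dF = (np.mean(F[final - lag:final - lag + width])
                  - np.mean(F[base - lag:base - lag + width]))
            return dF, dT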

    There’s also this paper (Cawley et al.) that uses a similarly simple model, but actually fits to the temperature data (and includes a small lag). They get a TCR of 1.66K.

  6. JCH says:

    Observations of Local Positive Low Cloud Feedback Patterns, and Their Role in Internal Variability and Climate Sensitivity

    Abstract
    Modeling studies have shown that cloud feedbacks are sensitive to the spatial pattern of sea surface temperature (SST) anomalies, while cloud feedbacks themselves strongly influence the magnitude of SST anomalies. Observational counterparts to such patterned interactions are still needed. Here we show that distinct large‐scale patterns of SST and low‐cloud cover (LCC) emerge naturally from objective analyses of observations and demonstrate their close coupling in a positive local SST‐LCC feedback loop that may be important for both internal variability and climate change. The two patterns that explain the maximum amount of covariance between SST and LCC correspond to the Interdecadal Pacific Oscillation (IPO) and the Atlantic Multidecadal Oscillation (AMO), leading modes of multidecadal internal variability. Spatial patterns and time series of SST and LCC anomalies associated with both modes point to a strong positive local SST‐LCC feedback. In many current climate models, our analyses suggest that SST‐LCC feedback strength is too weak compared to observations. Modeled local SST‐LCC feedback strength affects simulated internal variability so that stronger feedback produces more intense and more realistic patterns of internal variability. To the extent that the physics of the local positive SST‐LCC feedback inferred from observed climate variability applies to future greenhouse warming, we anticipate significant amount of delayed warming because of SST‐LCC feedback when anthropogenic SST warming eventually overwhelm the effects of internal variability that may mute anthropogenic warming over parts of the ocean. We postulate that many climate models may be underestimating both future warming and the magnitude of modeled internal variability because of their weak SST‐LCC feedback.

  7. Chubbs says:

    Frankclimate – TCR is the warming after a 70 year constant 1% per year forcing ramp which produces 2X CO2. So TCR is a warming rate. For the past 40 years forcing has increased by close to 1% a year. Much better to estimate TCR from the ramp over the past 40 years, when very good data is available, than to go back to one or two decades in the 19th century when data is very limited.
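
    For concreteness, the doubling arithmetic behind that (illustrative):

        # A 1% per year compound CO2 increase reaches a doubling after ~70
        # years, so TCR can be read as the warming at the end of the ramp.
        print(1.01 ** 70)  # ~2.01, i.e. CO2 has doubled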

  8. Roger Jones says:

    Surface temperature change is not a direct result of increasing greenhouse gases in the atmosphere (the atmosphere cannot warm independently of the ocean – the natural condition is a steady-state relationship). All available additional heat is absorbed by the ocean. Warming occurs when the ocean-atmosphere system cannot do enough work to transport heat to the top of the atmosphere and the poles, and heat is released from the ocean as part of regime change, and the ocean redistributes its own heat to maintain a warmer state.

    The basic mistake in both this work and in conventional interpretations of global warming is to treat heat accumulated through radiative forcing and heat distributed through dissipative mechanisms as one and the same thing. They are not. They are separate processes, and by definition the Lewis and Curry paper must be wrong because it models the response as a single process. Added heat is trapped in the atmosphere but because the atmosphere cannot warm independently of the ocean, it is stored in the ocean until instability leads to reorganisation and release.

    However, all the analyses on the climate side that continue to insist that atmospheric warming is a gradual process that conforms to a trend are also wrong. Climate change is enhanced climate variability and is a series of stable states separated by regime changes.

  9. frankclimate says:

    ATTP: “Essentially they do…”. No, they do not. The core message is: +0.35K/W/m² forcing. This was observed during the last ~150 years and deduced with the T-record (C/W) and the latest, best available forcing data. With the known forcing for doubling CO2 (3.8 W/m²) this gives a TCR of 1.33. This is very straightforward and very clear and simple. Occam’s razor.

  10. Dave_Geologist says:

    frank
    My car can go from 0 to 60 in 12 seconds. So after a minute it will be doing 300mph.

    This is very straightforward and very clear and simple. Occam’s razor.

  11. frankclimate says:

    Chubbs: No, it’s not a good idea to take only 40-year-long periods, because the shorter the time span, the more the observations are influenced by internal variability, which means more “noise” in a result where one only wants to get the forcing-dependent warming. AFAIK in L/C 18 (full: https://niclewis.files.wordpress.com/2018/04/lewis_and_curry_jcli-d-17-0667_accepted.pdf ) one can find the uncertainties which are introduced from various sources.

  12. frankclimate says:

    Dave: please let me know after what time of acceleration your car will overtake light!

  13. Chubbs says:

    Frankclimate: If 40 years is too short of a time period for reliable results how can 1869-1882 produce a reliable baseline?

  14. Curry is an expert at everything. She can predict ECS, TCR, the time of the next El Nino, all hurricane paths, everything about cloud physics, blah blah blah 🙂

  15. Willard says:

    Let’s try to focus on the paper, please.

  16. frankclimate,
    Yes, I know what they did. However, it is interesting that their result might suggest lower trends than have been observed. I also highlighted the Cawley et al. paper, because that is also a simple 1D model, but it tries to match the observed temperatures with a forcing time series and a short lag. They recover a TCR of 1.66K.

  17. Roger,
    Yes, I think what you describe is essentially the argument being made in Dessler, Mauritsen and Stevens; there isn’t a simple relationship between changes in system heat uptake rate (planetary energy imbalance) and changes in surface temperature.

  18. Steven Mosher says:

    “We don’t expect that the relationship between change in surface temperature, change in forcing, and change in system heat uptake rate over a period comparable to that of the instrumental temperature record will exactly represent the long-term sensitivity of the system.”

    1. why would we not expect that? The instrumental period is nearly 170 years long?
    2. Can we expect that Paleo estimates will not be subject to the same objection?

    It would have been one thing if scientists in 2000 had objected to observationally derived estimates with this objection. But when those papers (take Gregory, for example) had results consistent with models and paleo, nobody was skeptical of the approach.

    That said, there is an argument to be made that over “short” one- or two-hundred-year periods the actual realisation of the planet’s physics might not capture ALL that the physics has up its sleeve (whereas paleo might), but then it’s up to science to identify explicitly what the actual realisation of physics is missing. No fair appealing to unicorns.

    “In a sense, Lewis & Curry are taking one realisation of reality and assuming that it is an exact representation of the typical response of the system. It probably isn’t. This doesn’t mean that climate sensitivity can’t be low (even mainstream estimates do not rule this out). It simply means that we should be cautious of assuming that it is low based on an estimate that can’t fully account for how internal variability may have influenced the path that we’ve actually followed.”

    That’s a fair, well-balanced take on the issue. Which raises the question: how exactly might “internal variability” have turned out differently? In the end it cannot create or destroy energy.

    I suppose I would like to see Dessler’s pattern argument made more explicit and detailed.

  19. The model used in the Cawley et al. paper also included an ENSO signal as one of the inputs to the model, so it has some limited ability to account for the transfer of heat between ocean and atmosphere. I’ve only skimmed the LC paper and they seem to do some ENSO “matching”, but I don’t really understand what they did yet. As pointed out in the paper, this perhaps biases the model towards low TCR/ECS as any forced response that is correlated with the ENSO signal may be mis-attributed to ENSO, rather than AGW. Personally I would be wary of quantitative results from a model as simple as this and would put more (subjective) confidence in GCMs, which include more of the physics, even if they have problems of their own. “Everything should be made as simple as possible, but no simpler!”.

  20. Steven,

    1. why would we not expect that? The instrumental period is nearly 170 years long?

    The whole point of the Dessler, Mauritsen and Stevens paper is to show (using climate models) that if you run many realisations of the same system (with different initial conditions) and then infer climate sensitivity in a way comparable to what is done with these observationally-based methods, you get a range of answers. In fact (IIRC) even the best estimate from this range tends to be lower than the actual sensitivity.

    2. Can we expect that Paleo estimates will not be subject to the same objection?

    I’m not 100% sure, but I would think this would be far less of a problem if you consider timescales over which the system can reach a new quasi-equilibrium, as is the case in most paleo estimates.

  21. Willard says:

    > nobody was skeptical of the approach

    We’re on the Internet. Everyone is skeptical of everything.

    ***

    > I would like to see Dessler’s pattern argument made more explicit and detailed.

  22. “Which raises the question: how exactly might “internal variability” have turned out differently? In the end it cannot create or destroy energy.”

    If we had perfect knowledge of surface warming and OHC, we would in principle be able to work out the planetary energy budget irrespective of internal variability, but sadly we don’t. Internal variability affects the distribution of heat between the ocean and the air/land surface, so if there is any difference between the quality of atmospheric and oceanic measurements, internal variability will propagate that through to produce variability/uncertainty in the estimate of TCR/ECS. That would be my guess anyway.

  23. Marco says:

    “It would have been one thing if scientists in 2000 had objected to observationally derived estimates with this objection. But when those papers (take Gregory, for example) had results consistent with models and paleo, nobody was skeptical of the approach.”

    That’s maybe because Gregory hasn’t presented his work as superior, but clearly discussed several shortcomings. That is, he and his co-authors have made it clear to the community that appropriate caution about their results is required – readers, be skeptical of our work!

  24. JCH says:

    Internal variability affects the distribution of heat between the ocean and the air/land surface, so if there is any difference between the quality of atmospheric and oceanic measurements, internal variability will propagate that through to produce variability/uncertainty in the estimate of TCR/ECS. That would be my guess anyway.

    1st 1/4 OHC number should be reported any day now. All La Niña months. Up or down?

  25. Dave_Geologist says:

    frank

    Dave: please let me know after what time of acceleration your car will overtake light!

    Based purely on my observations – a very long time plus-or-minus a very long time.

    Based on a physics-based model courtesy of Einstein – never. But you’d never have guessed that based solely on observations up to 60mph, 300mph or even 3,000mph.

    In case I’m being too obscure, Dessler’s point AFAICS is that if you take a suite of physics-based models, where you know the ECS in advance, and try to calculate it the way LC13 and LC18 do, you get the wrong answer. And not just any old wrong answer, but a systematic underestimate. So they’re not measuring what they think they’re measuring. Or the physics is wrong. But consilience. Or indeed, Ockham’s Razor. Everyone’s out of step but them, and they’re the only ones who are right. Or alternatively, they’re wrong. Given that there are physically coherent explanations for why they’re wrong, I’m pretty sure William would side with me.

    Just as I know the top speed of my car, or can work it out from physics-based models involving engine power, gearing and wind resistance. And it’s not 300mph. Even though I can reach that conclusion by measuring its 0-60 time and taking a derivative.

  26. I think Marco has it right. I don’t have a problem with these estimates. I think they add to our overall understanding. However, they’re quite simple and I don’t agree with suggestions that they’re somehow superior. In my view, if this was a less contentious topic, these estimates would be regarded as generally supporting the other estimates, rather than being used to imply that the other estimates are somehow wrong. In other words, if a simple estimate gives you a result that is in the same ballpark as more detailed estimates, that gives you some confidence in the more detailed estimates. If there was some discrepancy, you would try to understand why, but that has been done in this case too.

  27. JCH says:

    Lewis and Curry went straight to the United States Congress. How could it get more political? There were no clothes on that bare naked end run. Dressed in a camouflage of complaints about politicized science and those awful activist scientists.

  28. Steven Mosher says:

    “In case I’m being too obscure, Dessler’s point AFAICS is that if you take a suite of physics-based models, where you know the ECS in advance, and try to calculate it the way LC13 and LC18 do, you get the wrong answer. And not just any old wrong answer, but a systematic underestimate.”

    The question
    is why are the models wrong.

    Just kidding.

    If this were any other field people would just live with the two different estimates and over time the tension would be resolved.

  29. Fully agree with what ATTP has just written. Expecting a simple model to do more than put you in the ball park is unreasonable; we have many ways of estimating TCR/ECS and we shouldn’t ignore any of them, but judge their consilience, whilst considering their advantages and disadvantages (and ignore their effect on taxation ;o).

  30. Steven Mosher says:

    “Lewis and Curry went straight to the United States Congress. How could it get more political? There were no clothes on that bare naked end run. Dressed in a camouflage of complaints about politicized science and those awful activist scientists.”

    I believe Dessler covered Congressional appearances in his paper… oh wait, I thought we were discussing science.

  31. Willard says:

    L18 only mentions Dessler for Kummer & Dessler, 2014. No mention of Dessler in that short review:

    Considerable effort has been expended in attempts to reconcile the observationally based ECS values with values determined using climate models. Most of these efforts have focused on arguments that the methodologies used in the energy balance model determinations result in ECS and/or TCR estimates that are biased low (e.g., Marvel et al. 2016; Richardson et al. 2016; Armour 2017).

    Not sure that I’d say three papers amount to “considerable effort,” but I guess that’s how lichurchur rolls.

    I don’t see any timestamp regarding the editorial process of that paper.

  32. Steven,

    If this were any other field people would just live with the two different estimates and over time the tension would be resolved.

    Indeed, which would mean accepting that the simple estimate doesn’t somehow rule out the more complex estimate. You would need, for example, to show that internal variability could not have an impact on climate sensitivity over timescales comparable to the timescale of the instrumental temperature record.

  33. Dave_Geologist says:

    “Which raises the question: how exactly might “internal variability” have turned out differently? In the end it cannot create or destroy energy.”

    Imagine the doctor is taking your temperature with an infra-red thermometer. You bury yourself in a duvet. Your apparent temperature has dropped. Was energy created or destroyed?

    A bunch of heat went into the oceans during the “pause”. It came back out again in the last few years. Was it created or destroyed? Did it impact surface temperatures? If the answers are no and yes, is an energy balance calculation for the Earth, using a proxy which can bounce around with no net addition of energy to the planet, and even fall during a net addition of energy, a particularly reliable approach?

  34. Andrew Dessler says:

    Mosher: a lot of your objections are implicitly or explicitly addressed in our ACP paper. According to our model ensemble, 155 years is not enough to eliminate the impact of variability on the estimate of ECS. Also, you asked how internal variability could’ve turned out differently. Well, that’s exactly what our model ensemble tells us. And the answer is that it can turn out differently enough to confound our estimates of ECS. I realize that people may be distrustful of model results, but in the absence of any other arguments, models are our best view of reality.

  35. Willard says:

    > I thought we were discussing science.

    Depends on how we interpret this:

    The implications of our results are that high estimates of ECS and TCR derived from a majority of CMIP5 climate models are inconsistent (at a 95% confidence level) with observed warming during the historical period.

    https://judithcurry.com/2018/04/24/impact-of-recent-forcing-and-ocean-heat-uptake-data-on-estimates-of-climate-sensitivity/

    One could also argue that a more important implication of L18 is that their derivation of lowball estimates based on a simplistic energy balance model is incompatible (at a 97% confidence level) with the majority of CMIP5 climate models.

    Conflating one’s lowballing with the observed warming itself may not be that scientific.

  36. The implications of our results are that high estimates of ECS and TCR derived from a majority of CMIP5 climate models are inconsistent (at a 95% confidence level) with observed warming during the historical period.

    There are climate models with high sensitivity that compare well with the observed warming over the instrumental period, so this claim seems wrong.

  37. Dave_Geologist says:

    SM

    1. why would we not expect that? The instrumental period is nearly 170 years long?
    2. Can we expect that Paleo estimates will not be subject to the same objection?

    1a. There was a recent paper which matched observations using three time-lags, with the longest 300 years. 300 > 170.
    1b. We didn’t add all the CO2 in one slug 170 years ago. Some went in this year. Most of it in the last 30 years. Even the 170 year-old slug hasn’t yet equilibrated its 300-year lag. The more recent ones haven’t equilibrated their intermediate lag yet. Last year’s hasn’t equilibrated its short-term lag yet.

    2. Too many negatives, so for clarity, the palaeo estimates are generally not prone to that problem. Here their low resolution is a feature, not a bug. They’re generally going from one stable(ish) state to another stable(ish) state, over a period of hundreds to thousands of years. But geologically that still looks like an instant, so you’re comparing equilibrium before with equilibrium after. The exception may be the PETM, but that is controversial. There are one or a few workers who think they’ve identified annual cycles analogous to varves in shallow-marine sediments (IIRC) offshore US, which would make the PETM onset as rapid as AGW. But annual cycles in oceans are a tricky and controversial subject. Most workers would attribute them to multi-decadal oscillations and/or chaotic/stochastic effects like falling under a hurricane track once per century.

  38. Dave_Geologist says:

    SM

    If this were any other field people would just live with the two different estimates and over time the tension would be resolved.

    No they wouldn’t. FTL neutrinos. Cold fusion. Memory of water. They’d go with the ordinary claims until the extraordinary ones presented extraordinary evidence.

  39. “The implications of our results are that high estimates of ECS and TCR derived from a majority of CMIP5 climate models are inconsistent (at a 95% confidence level) with observed warming during the historical period. “

    seems a slightly odd thing to say, regarding TCR at least, given that the warming itself observed during the historical period is clearly consistent with the models. That would suggest to me that there is something amiss with the assumptions of the statistical test.

  40. Never any mention of effective TCR and ECS

  41. Paul,
    It is discussed in the paper.

  42. frankclimate says:

    Andrew: I understand it in this way: From 155 years of observations the TCR is estimated at 1.33 K per 2×CO2; the ECS (with the involvement of the best available heat-uptake data) comes out around 1.8. And you don’t dispute this outcome of L/C 18. You argue that when one uses your model ensemble it is not impossible that there is still some internal variability hidden at work (in the observations) and the TCR (ECS) could increase when this variability appears in the real climate system, because the models do not eliminate this.
    But how can you eliminate an ECS of below 2 with this method? You use models (with a mean ECS of about 3.2) to conclude that an observed TCR (ECS) must be too low because there is a possibility that the observed time span is “infected” with much more internal variability than estimated. But where is the confidence that “Unfortunately for them, it’s already shown to be wrong!” (your tweet)?

    [Typo fixed. -W]

  43. Willard says:

    > this claim seems wrong.

    I’m sure FrankB will relay the typo to the proper authorities in a moment, even if the “observational” brand seems lukewarmingly important.

  44. verytallguy says:

    If this were any other field people would just live with the two different estimates and over time the tension would be resolved

    I’m sceptical. The Hubble controversy comes immediately to mind:

    “He was so angry,” recalls Freedman, now at the University of Chicago in Illinois, “that you sort of become aware that you’re the only two people in the building. I took a step back, and that was when I realized, oh boy, this was not the friendliest of fields.”

    http://www.sciencemag.org/news/2017/03/recharged-debate-over-speed-expansion-universe-could-lead-new-physics

    and of course the famous adage

    Academic politics is the most vicious and bitter form of politics, because the stakes are so low

    The thing that’s different about climate change, I think, is that so many people manifestly unqualified to comment on the issues have such strong opinions.

  45. frankclimate,
    I don’t think one can eliminate an ECS below 2K, but given our understanding of the various feedbacks, it seems likely that it will be above 2K.

    there is still some internal variability hidden at work (in the observations) and the TCR (ECS) could increase when this variability appears in the real climate system, because the models do not eliminate this.

    I wouldn’t put it like that. I think the idea is that the internal variability means that there is not a unique path that we will follow for a given change in external forcing. Hence, assuming that there is a unique relationship between \Delta F, \Delta T, and \Delta Q is probably wrong. It also appears that if one assumes this relationship then the tendency is to produce a low bias in climate sensitivity.

  46. Andrew Dessler says:

    Frankclimate: I agree that my ACP paper only identifies the bias in the method, but does not tell you that the L&C estimates are actually biased. But other work (Marvel et al. 2018, Zhou et al., Andrews and Webb) shows that the actual pattern that we have experienced is biasing the ECS estimates low.

  47. Dave_Geologist says:

    SM

    If this were any other field people would just live with the two different estimates and over time the tension would be resolved.

    Or they’d treat it as a non-binary choice. They’d observe that we have dozens, probably hundreds of estimates, most of which fall into a +/-50% range. And some which are a lot more, and some which are a bit less. We’ve now got one more which is a bit less (but it’s not really independent of a previous one in that group). But it’s just one more out of many (or perhaps just a half more to allow for the non-independence). So it barely shifts the dial when it comes to the range and its midpoint. You’d need to have dozens of independent LC18s to do that, and using multiple methods, not just surface-temperature energy balance.

    You could formalise it using Bayes’ Theorem. But not by assuming we start from a blank slate. By assuming ignorance you implicitly deny all pre-existing science.

  48. niclewis says:

    Andrew Dessler:

    my ACP paper only identifies the bias in the method, but does not tell you that the L&C estimates are actually biased

    Your ACP paper doesn’t identify any bias in the L&C energy budget method, other than a median ECS estimate for the MPI-ESM1.1 model that is 7% lower than the estimate for MPI-ESM1.2 derived from 1000 years of abrupt2xCO2 simulation data. That is due to time variation of feedback strength in the model, an effect that is addressed in the L&C paper.

    A fair comparison would be with ECS estimated from regression over the first 50 to 100 years of MPI-ESM1.2 abrupt2xCO2 simulation data, excluding year 1 data for reasons explained in the L&C paper. I think you’ll find that such ECS estimates are if anything marginally lower than your ensemble-median historical period ECS estimate of 2.72 K.

    What your analysis of the MPI-ESM1.1 historical run data in fact shows is that, unsurprisingly, internal variability (whether in AOGCMs or in the real climate system) induces uncertainty in estimates of feedback strength and of ECS. That is well known.

  49. Willard says:

    > Your ACP paper doesn’t identify any bias in the L&C energy budget method, other than […]

    I don’t always fail to identify a bias, when I do it’s none other than the bias that I identify.

  50. Nic,
    Do you at least accept that internal variability can indeed lead to a range of temperature pathways for a given pathway; there is not a unique relationship between temperature change, change in planetary energy imbalance, and change in forcing? If you’re arguing that the results of your energy balance approach present the most robust estimates (for example, we should expect the actual climate sensitivity to be close to your best estimates), then it would seem that you are.

  51. Steven Mosher says:

    I’m sceptical. The Hubble controversy comes immediately to mind:

    I raise you solar neutrinos.

  52. Mosher: “If this were any other field people would just live with the two different estimates and over time the tension would be resolved.

    Within climate science we have two(?) different estimates. Well, actually many more than just two, including estimates with the same methodology as the above paper, but with biases removed, that find very high climate sensitivities. All these estimates with their strengths and weaknesses are considered when making an overall estimate.

    It is the blog science community that insists that only the Lewis and Curry estimates are right and the rest is wrong. It is the blog science community that does what you accuse science of doing.

  53. Willard says:

    > we should expect the actual climate sensitivity to be close to your best estimates

    That may explain why L18 refers to ze best estimates or best estimates simpliciter when talking about its own best estimate, and qualifies the best estimates from other sources.

    There are 24 occurrences of “best estimate” in L18, so it may compete with “observational” in the economy of the lukewarm brand, as there are 26 occurrences of “observational.”

  54. If they are calculating an effective ECS, then why aren’t they finding a mean of around ~3C ?

  55. Steven Mosher says:

    “Mosher: a lot of your objections are implicitly or explicitly addressed in our ACP paper. ”

    I missed the implicit ones. could you spell them out or cite?

    “According to our model ensemble, 155 years is not enough to eliminate the impact of variability on the estimate of ECS. ”

    how many years would you need and why?

    “Also, you asked how internal variability could’ve turned out differently. Well, that’s exactly what our model ensemble tells us. ”

    that assumes the models can tell you anything about internal variability which is not modelled but is rather an emergent property.

    “And the answer is that it can turn out differently enough to confound our estimates of ECS. I realize that people may be distrustful of model results, but in the absence of any other arguments, models are our best view of reality.”

    err no. theres no evidence that models give you the best view of the reality of internal variability. best compared to what? best compared how? best using what metric.

    they give you a view that represents the current state of understanding. they tell us what we might look harder at in observations.

    so what would you look at to reinforce your notions about internal variability.

  56. Some other things we should consider (and some we probably shouldn’t ;o)

    from Knutti, Rugenstein & Hegerl, “Beyond equilibrium climate sensitivity”, Nature Geoscience volume 10, pages 727–736 (2017) doi:10.1038/ngeo3017. Being a good Bayesian, I’ll marginalise over all estimates, weighting them by their posterior plausibility.

  57. “err no. theres no evidence that models give you the best view of the reality of internal variability. best compared to what? best compared how? best using what metric.”

    Given we only have one realisation of the observed internal variability, it is pretty hard to define a metric that will measure how good a model of the variability of internal variability is. What metric should we use to determine how many sides our (D&D) dice have if we are only told the outcome of a single roll? Sometimes you just have to acknowledge that there are things we know we don’t know and use physics to try and estimate them. If the variability is large or non-symmetrical, then any reasonable model is likely to be better than a Dirac delta function centered on the observations.
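
    A toy version of the dice problem (purely illustrative):

        import numpy as np

        # Infer the number of sides of a d20 from rolls: the maximum-
        # likelihood estimate is simply the largest roll seen, so a single
        # roll is typically biased well below the true 20.
        rng = np.random.default_rng(1)
        print(rng.integers(1, 21))              # one roll, e.g. "7"
        print(rng.integers(1, 21, 1000).max())  # 1000 rolls: almost surely 20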

  58. Paul,

    If they are calculating an effective ECS, then why aren’t they finding a mean of around ~3C ?

    Well, effective ECS refers to that determined using an energy balance approach (such as in Lewis & Curry) and tends to produce best estimates closer to 2K than to 3K.

  59. Best estimates of TCR are closer to 2C, and those of ECS are 3C.

  60. Paul,
    Yes, I know. However, the term “effective ECS” refers to estimates that use the approach used in Lewis & Curry which return lower best estimate values.

  61. Andrew E Dessler says:

    Nic Lewis: Clearly, my comments above were not written as clearly as they could have been. For everyone else’s info, let me be clear: my paper showed that the energy balance method L&C used is IMPRECISE and one could infer values of ECS that are far from the actual value. Thus, one should not assume that values in your paper, obtained from one ensemble of reality, are accurate estimates of the system’s true values.

    When you combine my paper with others that you’re familiar with (Zhou et al., Andrews and Webb, Marvel et al., etc.), it seems very very likely that the surface pattern variability is making your estimate too low. Your argument in Sect. 7 of your paper that the surface pattern is forced is not well constructed or convincing (but I’m guessing you already know that).

  62. Christian says:

    @ ATTP and rest

    Really? There is nothing really new in the “new” paper; we are talking about the same things as years ago, classic circular talk, and we all knew it: if we want to play with the forcing, observations and method, it’s easy to find best estimates at the lower bound, or at the upper bound if we want; one only needs to adjust the forcing or take another dataset (Haustein et al 2017; Richard J. Millar, Pierre Friedlingstein 2018, see Supps for Berkeley temps) and TCR and ECS will increase; play with the uncertainty in aerosol forcing, even more increase in TCR/ECS; play with the system heat uptake; play with non-time-constant feedbacks…

    And so on, boring Stuff

    For this, last comment on this topic

  63. Andrew E Dessler says:

    Mosher:

    “According to our model ensemble, 155 years is not enough to eliminate the impact of variability on the estimate of ECS. ”

    how many years would you need and why?

    I don’t know. But I do know it’s a lot longer than 155 years.

    “Also, you asked how internal variability could’ve turned out differently. Well, that’s exactly what our model ensemble tells us. ”

    that assumes the models can tell you anything about internal variability which is not modelled but is rather an emergent property.

    Obviously, if the models misrepresent variability then any conclusions you draw from them are wrong. However, I don’t see any reason to think they’re wrong about the existence of the issue (although the details might not be simulated correctly).

    “And the answer is that it can turn out differently enough to confound our estimates of ECS. I realize that people may be distrustful of model results, but in the absence of any other arguments, models are our best view of reality.”

    err no. theres no evidence that models give you the best view of the reality of internal variability. best compared to what? best compared how? best using what metric.

    Models are really the ONLY way to look at this problem. I agree that validating internal variability remains a difficult challenge for the community.

    they give you a view that represents the current state of understanding. they tell us what we might look harder at in observations.

    so what would you look at to reinforce your notions about internal variability.

    This has been one of the big problems in climate science — evaluating internal variability. Only recently have we had computer power to run these big ensembles, which give us one view of the problem. Other than models, I don’t have any good ideas how to separate forced changes from internal variability.

  64. Thanks. I take the most basic approach to this analysis. Go to the Wikipedia global warming page and look up the land temperature rise, note the CO2 increase over that time frame, and do the calculation for ECS.

    Like Christian said : “And so on, boring Stuff”

  65. Christian says:

    @ Dessler

    Ya, and this seems clearly normal, because we have one climate system, so internal variability and anthropogenic changes are interacting. Therefore if we use those methods we always get TCR/ECS values which could vary a lot with the state of synchronicity between internal variability and the forced response, as your paper indicates…

    Anyway:
    https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2017GL076500

    Greets

  66. Christian says:

    @ Mosher

    “how many years would you need and why?”

    1) Long
    2) Because the climate system was not in equilibrium with its past state before human-caused change began; if there really is a slow signal from the past, it would work against the strong signal of warming

    So it is often not as easy as people think it is

  67. paulski0 says:

    If new estimates suggests that the change in forcing, and the change in system heat uptake rate, are slightly lower than previously thought, then the estimates for the TCR and ECS go down, which I think is the main difference between the new paper and the old one.

    Both the forcing and heat uptake rate are higher, significantly so*. In the case of forcing there is a big upward revision of methane forcing, and a fairly substantial upward revision of Ozone forcing. Both of those seem reasonable AFAICS, but then there is also a big revision of aerosol forcing, which is very much not reasonable, and clearly wrong. Despite insisting repeatedly that they have retained the AR5 best estimate of -0.9W/m2 for 2011, the best estimate figure they actually use for 2011 is -0.777. Work that one out.

    One interesting thing about the results of EffCS calculations is that they have been trending upwards, and will almost certainly continue to trend upwards for the next several years, due to the recent large uptick in warming and TOA imbalance. By my calculations, including some reasonable extrapolations, if we were using the forcing calibrations from the original L&C2015 and simply updating then EffCS would reach over 2K for 2010-2019 using HadCRUT4, compared with 1.64K for 1995-2011 in the original paper (and similar for 2002-2011). Of course, these forcing calibration revisions complicate that picture.

    * Though the heat uptake rate is lower than indicated by all other sources I’ve seen for the 2007-2016 period. I think the issue there is that their method is simple subtraction involving just the two end point years, and 2016 happens to be a bit of a low outlier for OHC. The latest CERES-EBAF data, calibrated to Argo OHC, says it’s 0.8W/m2, compared to 0.65W/m2 in L&C2018.

  68. niclewis says:

    ATTP:

    Do you at least accept that internal variability can indeed lead to a range of temperature pathways for a given pathway;

    That doesn’t make sense. I assume that you mean “for a given forcing pathway”?

    Of course I accept that. If you had read and understood the new Lewis & Curry paper, or the earlier one, you wouldn’t ask such an odd question. Both papers make proper allowance for internal variability in both surface temperature and radiative imbalance/heat uptake.

  69. Ed Davies says:

    I can understand how observational studies like this can give a good estimate of TCR but fail to see why they’re likely to help with ECS.

    Looking at AR5 Figure SPM.5 the largest uncertainties in the total anthropogenic forcings are those associated with aerosols. Aerosols have presumably been roughly proportional to the emissions of CO₂ (either because they’re directly caused by them or just because of the association with general levels of industrial activity) and so, because the growth in emissions and the increase in CO₂ in the atmosphere have been roughly exponential, also to the general CO₂ level.

    Consequently, I can’t see how observational studies would disentangle the forcings of CO₂ and aerosols, but once you reach 2xCO₂ and want to let things equilibrate with no further emissions the aerosols will presumably rapidly go to zero (in weeks or months, at most, I’d guess) which would then affect the ECS to an extent which is not well known.

  70. Nic,

    That doesn’t make sense. I assume that you mean “for a given forcing pathway”?

    Well, yes, that is what I meant.

    If you had read and understood the new Lewis & Curry paper

    Thanks, I see that your attempts to avoid condescension are failing, or you’re not even trying (the latter, I suspect).

    Both papers make proper allowance for internal variability in both surface temperature and radiative imbalance/heat uptake.

    I realise that you choose your starting and end points to try to account for variability, but I do not think you can compensate for all possible impacts of internal variability, which is essentially Andrew’s point.

    So, my question was whether or not you accept Andrew Dessler’s point. If you do, then you should be willing to acknowledge that your results may not be indicative that climate sensitivity is probably lower than other estimates suggest. It may simply be as Andrew suggests: the method you’ve used is imprecise and the ECS values that you infer from it could be far from the actual value.

    If you disagree and think that your results are accurate/reasonable estimates of the system’s true values, then you don’t accept what I asked and should acknowledge this. Ideally, you should explain how the pathway we’ve followed is precisely determined by the forcing pathway and largely insensitive to internal variability.

  71. Willard says:

    > Both papers make proper allowance for internal variability in both surface temperature and radiative imbalance/heat uptake.

    In fairness, there are indeed 36 occurrences of “variability,” the first being in the abstract, which refers to this paragraph, which indicates that “proper” means “modest,” something that is intriguing considering Judy’s reliance on Mr. T in other contexts:

    It is notable that the best estimates for both ECS and TCR are almost identical across all four combinations of base period and final period. This is consistent with a modest influence of shorter-term climate system internal variability and of measurement/estimation error on energy budget sensitivity estimates. The estimates using the 1869−1882 base period and 2007−2016 final period combination are preferred; they have the highest \Delta T and \Delta F values and as a result are best constrained. Moreover, with the Argo ocean-observing network fully operational throughout 2007–2016, there is also higher confidence in the reliability of the ocean heat uptake estimate when using that final period. Although HadCRUT4 observational coverage was modest during 1869−1882, the fact that TCR estimation is very similar using the higher-coverage 1930−1950 base period gives confidence in the ECS and TCR estimates using the former base period.

    Note the “best estimate” simpliciter.

    Another occurence reveals a typo:

    If follows that there is no reason to believe energy budget sensitivity estimates based on changes over the full historical period are biased downwards by internal variability in SST patterns.

  72. Willard: “In fairness, there are indeed 36 occurrences of “variability,” the first being in the abstract … which indicates that “proper” means “modest,” something that is intriguing considering Judy’s reliance on Mr. T in other contexts:

    That fits Judith “Uncertainty Monster” Curry, with this paper again producing an estimate of the climate sensitivity with one of the smallest uncertainty ranges.

    But I am reasonably confident the Uncertainty Monster will still have a cameo appearance in a future blog post.

  73. Victor,
    It is interesting that Judith has, in the past, argued that internal variability could explain a lot of the observed warming, but now authors a paper essentially suggesting it plays little to no role.

  74. Joshua says:

    FWIW, her contributions to this paper may not actually necessitate contradictory reasoning on her part, if you catch my drift.

  75. Chubbs says:

    Cowtan, Rohde and Hausfather (2018) highlight the uncertainty in SST estimates prior to 1970 due to differing measurement techniques and adjustment factors. Their alternative series derived from coastal and island stations is cooler than HadCRUT prior to 1900 and when combined with BEST land gives a TCR of 1.76C using the same methods as L+C 2015. So it is possible that our single climate realization isn’t that far from the climate model mean, but the data is inadequate to confirm.

  76. Willard says:

    Seems that Andy’s comment got a comment from Roy:

    Dessler’s objections to LC18 sound much like what he did several years ago in response to our papers demonstrating that time-varying internal radiative forcing pretty much prevents feedback (and thus ECS) diagnosis from short-term temperature and radiative flux variations. I’m not sure he actually understands what others have done, now including the LC (and Otto et al.) alternative methodology of diagnosing feedback from long-term changes in Tsfc, ocean heat content, and assumed radiative forcings. None of these methods are great, because of associated assumptions. But he seems too quick to discount ANY study that suggests low ECS. I wonder why? I’m willing to consider high or low, whereas he just published a paper saying there is NO evidence of ECS below 2 deg. C.

    https://judithcurry.com/2018/04/24/impact-of-recent-forcing-and-ocean-heat-uptake-data-on-estimates-of-climate-sensitivity/#comment-871148

    As anyone can see, ClimateBall ™ is all about science.

  77. Joshua says:

    Climateball (IMO) is very frequently about pricing the ignorance of one’s interlocutor.

    If what they say “doesn’t make sense” or if they “don’t understand,” one doesn’t have to actually be correct. Someone arguing “nonsense” is also very useful (Nic hit a twofer above). Someone being an “advocate” or “alarmist” – or in all fairness, a “denier” – is an effective shortcut. And it’s certainly good that Nic, Judith, and Roy can be proven to not be activists.

  78. Joshua says:

    Nic –

    Since you’re reading here and I can’t post at Judith’s: you said:

    Stevens and Mauritsen (another “good guy”) supplied the data that Dessler used. It appears to be common practice in climate science for suppliers of non-published /non-publicly available data that a paper is strongly dependent on to be invited to be, and to become, co-authors. I don’t think you can read much into who the co-authors are in a case like this.

    Not to question your skills in reverse engineering Stevens’ contributions to papers he co-authors (with a nice play of plausible deniability, I might add)…

    Would you mind describing the criteria you use for co-authorship on your own papers?

    I.e., what kinds of contributions do you require from your co-authors?

  79. Willard says:

    FWIW, here are some items that the Journal requires:

    Fragmentation of research papers should be avoided. A scientist who has done extensive work on a topic or a group of related topics should organize publications so that each paper gives a complete account of a particular aspect of the general study.

    It is unethical for an author to publish manuscripts describing essentially the same research in more than one peer-reviewed paper.

    It is inappropriate to submit manuscripts with an obvious commercial intent.

    […]

    A criticism of a published paper may be justified and is allowed in a “Comment and Reply” sequence; however, personal criticism is never considered acceptable.

    Only individuals who have made a substantive intellectual contribution to the published research should be listed as coauthors. The contributions usually involve significantly helping with the acquisition of data or analysis and/or contributions to the interpretation of information. […] It is unethical for the corresponding author to submit work without all living coauthors having seen the final version of the article, agreed with the major conclusions, and agreed to its submission for publication.

    https://www.ametsoc.org/ams/index.cfm/publications/ethical-guidelines-and-ams-policies/author-disclosure-and-obligations/

  80. Joshua says:

    Nic once explained to me that he felt it was better to write a blog post implying an ethical breach on the part of a scientist rather than first ask the scientist directly for an explanation for their work.

    He said that he didn’t want to put the scientist in an uncomfortable position by being faced with direct questions (with the implied logic that it would be less uncomfortable for them to have posts up at Judith’s where other “skeptics” piled on with accusations of scientific fraud).

    So I guess that can explain why he recently wrote a comment on Judith’s blog that implied an ethical breach on Stevens’ part – rather than ask Stevens directly about his contribution to Dessler’s paper? Prolly wanted to avoid making Stevens uncomfortable.

  81. Andrew E Dessler says:

    I started to write a comment defending Bjorn and Thorsten’s contributions, but then I suddenly realized that that argument is a masterstroke of diversion. So let’s focus on the actual science: how Lewis and Curry’s work produces an ECS that is biased low.

  82. HAS says:

    One quick question. A number of people comment that Dessler et al replicate L&C’s method. Is this correct? It seems the attempt to select start and end periods in the latter is missing in the former.

  83. Olof R says:

    Andrew Dessler,
    I see two reasons why Lewis and Curry (2018) produce a low ECS.
    They avoid the Berkeley Earth land/ocean dataset, which has a 12% higher trend than Cowtan and Way.
    They talk away the fact that observations are blended SST/SAT whereas models are global SAT (unlike Richardson et al 2016, who pinpoint the blend bias).
    It’s possible that the method would still produce a low ECS, though…

  84. angech says:

    “In a sense, Lewis & Curry are taking one realization of reality and assuming that it is an exact representation of the typical response of the system. It probably isn’t. This doesn’t mean that climate sensitivity can’t be low (even mainstream estimates do not rule this out). It simply means that we should be cautious of assuming that it is low based on an estimate that can’t fully account for how internal variability may have influenced the path that we’ve actually followed.”
    All very true as you have consistently argued above.
    We do however have only one real set of data observations to go off. The chance that the baseline chosen was affected by natural variability is real; the chance that the comparison periods were affected is real. The trend over the baseline compared to that of the other periods could be worked out and, if different, noted. If not, it suggests that natural variability was unlikely to have affected the result. Why would it happen the same way over different time intervals and at different times?
    Hence L and C have a chance of being right, though, as you and AD suggest, a possibility of unusual internal variability exists. A shame that in the previous post everyone was so insistent that internal variability was small or unimportant over long time periods.

  85. angech,

    one should not assume that values in your paper, obtained from one ensemble of reality, are accurate estimates of the system’s true values.

    There is a difference between it being unlikely that internal variability could cause something comparable to our modern warming, and it being significant enough to potentially bias observationally-based, energy balance climate sensitivity estimates.

  86. Dave_Geologist says:

    paulski0

    Both of those seem reasonable AFAICS, but then there is also a big revision of aerosol forcing, which is very much not reasonable, and clearly wrong. Despite insisting repeatedly that they have retained the AR5 best estimate of -0.9W/m2 for 2011, the best estimate figure they actually use for 2011 is -0.777. Work that one out.

    I also had some difficulty reconciling “Reflecting recent evidence against strong aerosol forcing, its AR5 uncertainty lower bound is increased slightly.” in the Abstract with “using the original AR5 uncertainty distributions except for aerosol forcing. For aerosol forcing a normal distribution with unchanged −0.9 Wm−2 median but the revised −0.1 to −1.7 Wm−2 5−95% uncertainty range is used” (the only other occurrence of “AR5 uncertainty”). No mention of the recent evidence AFAICS, either in the text or in the form of references. I’d certainly have picked up on that as a reviewer. If you keep everything the same as IPCC, but change one parameter, I’d want to do a deep dive into why that was changed and what difference it made. Unless it was something that had appeared after AR5 and is so obvious everyone agrees it’s a weakness in AR5 that needs to be corrected, i.e. AR5 is out-of-date.

    One interesting thing about the results of EffCS calculations is that they have been trending upwards, and will almost certainly continue to trend upwards for the next several years,

Hmmm, maybe the result is overly sensitive to start or end dates, or to adding a few more years’ data. Wonder if that’s due to internal variability leaking into the calculation and biasing the forcing?

  87. dikranmarsupial says:

    niclewis wrote “Both papers make proper allowance for internal variability in both surface temperature and radiative imbalance/heat uptake.” [emphasis mine]

Clearly the papers make some allowance for internal variability, but to assert that they make proper allowance suggests it has been established that no further steps need be taken in order to avoid bias, or that the credible/confidence interval on the conclusions does not need to be further extended to account for the residual variance caused by internal variability. Without using GCMs, it is hard to see how we could establish that from the single realisation we have actually observed.

  88. Dikran,
    Exactly, I think to allow fully for internal variability would require some kind of model. It’s hard to see how this wouldn’t require using some kind of GCM and – as Andrew Dessler indicates – if you do consider GCM results, they suggest that these observationally-based, energy-balance approaches tend to be biased low.

As I understand it (and happy to be corrected if wrong) Lewis & Curry’s analysis attempts to reduce the impact of variability on the observed surface temperatures over the instrumental temperature record. This, however, does not mean that the result is minimally biased by internal variability, since you can’t know how internal variability influenced the pathway that we actually followed.

  89. dikranmarsupial says:

    Andrew Dessler wrote “For everyone else’s info, let me be clear: my paper showed that the energy balance method L&C used is IMPRECISE and one could infer values of ECS that are far from the actual value.”

    I think part of the problem is there is too much focus on bias, and that people mean different things by it. In a statistical sense, the energy balance model may be unbiased in the sense that if you were to use it many times on independent realisations of the climate (parallel Earths / research equipment from Magrathea) the average value you got from the estimator would be the true value. However that doesn’t mean that the value you get from one particular realisation won’t be substantially lower or higher than the true value. In statistics we would call that “variance” rather than “bias”.
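To make the bias/variance distinction concrete, here is a minimal sketch (purely hypothetical numbers, not from any paper) of an estimator that is unbiased across many parallel realisations, yet has enough variance that any single realisation can land well away from the truth:

```python
import numpy as np

rng = np.random.default_rng(42)

true_ecs = 3.0           # hypothetical "true" sensitivity (K)
n_realisations = 10_000  # parallel Earths / Magrathean replicas
record_length = 150      # years of observations per realisation

estimates = np.empty(n_realisations)
for i in range(n_realisations):
    # Red-ish noise standing in for internal variability; each realisation's
    # estimate picks up whatever the noise happened to do over its record.
    noise = np.cumsum(rng.normal(0.0, 0.02, record_length))
    estimates[i] = true_ecs + (noise[-1] - noise[0])

print(f"bias (mean error): {estimates.mean() - true_ecs:+.3f} K")  # ~0: unbiased
print(f"spread (std dev):  {estimates.std():.3f} K")               # clearly not 0
```

On any one run of this experiment (i.e. the one Earth we have), the estimate can easily sit a couple of tenths of a kelvin from the true value even though the estimator is unbiased on average.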

    Anyway, self-skepticism is important in science, so we should always avoid overstating the robustness of our approach, and state the uncertainties (rather than just acknowledging them), and assume that there is something wrong with our analysis if it disagrees with others. Of course being only human we still make errors (sometimes stupid ones), but self-skepticism is the best guard against it.

  90. dikranmarsupial says:

    * of course in the absence of parallel Earths, one could always use computer simulations of parallel Earths to estimate the bias and variance of an estimator of TCR/ECS, if only someone had thought of trying that! ;o)

  91. I noticed this paper (whilst looking for something else), which might be relevant:

    Transient Climate Sensitivity Depends on Base Climate Ocean Circulation

Jie He, Michael Winton, Gabriel Vecchi, Liwei Jia and Maria Rugenstein

    Journal of Climate, Volume 31 No. 10, pp 1493-1504, May 2018.

    doi: 10.1175/JCLI-D-16-0581.1

    Abstract
    There is large uncertainty in the simulation of transient climate sensitivity. This study aims to understand how such uncertainty is related to the simulation of the base climate by comparing two simulations with the same model but in which CO2 is increased from either a preindustrial (1860) or a present-day (1990) control simulation. This allows different base climate ocean circulations that are representative of those in current climate models to be imposed upon a single model. As a result, the model projects different transient climate sensitivities that are comparable to the multimodel spread. The greater warming in the 1990-start run occurs primarily at high latitudes and particularly over regions of oceanic convection. In the 1990-start run, ocean overturning circulations are initially weaker and weaken less from CO2 forcing. As a consequence, there are smaller reductions in the poleward ocean heat transport, leading to less tropical ocean heat storage and less moderated high-latitude surface warming. This process is evident in both hemispheres, with changes in the Atlantic meridional overturning circulation and the Antarctic Bottom Water formation dominating the warming differences in each hemisphere. The high-latitude warming in the 1990-start run is enhanced through albedo and cloud feedbacks, resulting in a smaller ocean heat uptake efficacy. The results highlight the importance of improving the base climate ocean circulation in order to provide a reasonable starting point for assessments of past climate change and the projection of future climate change.

  92. Steven Mosher says:

“The results highlight the importance of improving the base climate ocean circulation in order to provide a reasonable starting point for assessments of past climate change and the projection of future climate change.”

    helpful

  93. JCH says:

    Preprint:

    Radiative feedbacks from stochastic variability in surface temperature and radiative imbalance

Several other issues with regression-based feedback estimates have been identified. Regression estimates rely on an often unstated assumption that variability in TOA radiation arises primarily as a response to variability in surface temperature which is, in turn, driven by nonradiative processes. Spencer and Braswell [2010, 2011] noted that if unforced TOA radiation itself plays an important role in driving surface temperature variability, then regression-based feedback estimates will be biased towards higher sensitivity – although the importance of unforced radiation anomalies has been challenged on methodological grounds [Murphy and Forster, 2010], and on the basis that air-sea heat flux variability, particularly associated with the El-Niño/Southern Oscillation (ENSO), appears to be large relative to radiative variability [Dessler, 2011]. Additionally, the net regression-based estimate of feedbacks associated with internal variability depends on the lag at which the regression is performed, and on whether monthly or annual data are used [Forster, 2016].

  94. Steven Mosher says:

    “I see two reasons why Lewis and Curry (2018) produce a low ECS.
    They avoid the Berkeley Earth land/ocean dataset with 12 % higher trend than Cowtan and Way.”

To be fair, Nic wrote to me on April 7th. He noted a change in our product that gave a higher trend.
He asked what change we made. So, a couple of points.

A) The dataset has never been peer reviewed, so on Climate Etc he said he has no confidence in it.
B) I could not recall all the improvements we’ve made (Robert and Zeke did some work),
so I passed him on to Robert. I don’t know if he has had an answer from Robert yet, so
on Climate Etc he argues that our changes were unexplained.

Nic has a tendency to pick the temperature series with the lowest trend, AND THEN argue against the other data, rather than showing the impact of selecting different data, and then arguing why x or y may be preferred.

  95. Steven Mosher says:

    “I started to write a comment defending Bjorn and Thorsten’s contributions, but then I suddenly realized that that argument is a masterstroke of diversion. So let’s focus on the actual science: how Lewis and Curry’s work produces an ECS that is biased low.”

At the risk of being a dummy, there are 4 ways.

1) Underestimate the change in T.
a) selection of time periods
b) selection of data product
c) selection of metric (SST+SAT)
d) foolishly picking the only Earth we have to get observations.
2) Overestimate the increase in forcing.
a) hmm, I don’t pay attention to this stuff, help!
3) Assume internal variability isn’t an issue over the time period in question.
a) ignoring model “data” and findings
4) Bork up the TCR-to-ECS ratio.

  96. As I suggested on a previous thread about Lewis’ work; if there are a lot of model choices that tend to bias the estimate low, it still has value as a lower bound of sorts on TCR/ECS. It suggests that we can’t plausibly use energy balance models to give a lower estimate. While an upper bound would be of more practical use, it does at least argue against some of the nuttier low ECS claims made in the blogs and occasionally papers (e.g. Loehle).

  97. Willard says:

    > I think part of the problem is there is too much focus on bias

There are indeed 25 occurrences of “bias” in L18, the same ballpark as “variability” and “observational.” One important hit:

    TCR can be estimated using (5) with a recent final period and a base period ending circa 1950. Although occurring mainly over the last 70 years, the effect on surface temperature of the development of forcing over the whole historical period (post ~1850) has been estimated to be broadly equivalent to that of a 100-year linear forcing ramp (Armour 2017). TCR may therefore also be estimated using a base period early in the historical period, with a possible marginal upwards bias since with a longer ramp period the climate system will have had more time to respond to the ramped forcing. LC15 found that estimating TCR using (5) with a recent final period and a base period either early in the historical period or of 1930−1950 provided an estimate of TCR closely consistent with its definition.

Table 3 contains the values that are “closely consistent with its definition.” Using 1930−50, the TCR best estimate is 1.27, compared to 1.33 for L18’s favored choice, and L15’s 1.33. That changes the ECS 95% range from 1.05−3.15 to 1.15−2.7. This choice thus seems to have more than a 7% impact.
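For concreteness, the relative shifts implied by those numbers are (1.33 − 1.27)/1.33 \approx 4.5\% for the best-estimate TCR, and (3.15 − 2.70)/3.15 \approx 14\% for the upper bound of the ECS 95% range, which is where the “more than a 7% impact” shows up.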

    How estimating a value can be fairly consistent with a definition isn’t quite clear.

  98. This is a great question that Joshua asks, after the opening is provided:

    “Would you mind describing the criteria you use for co-authorship on your own papers? “

    I can guess what the situation is with respect to Nic Lewis.

  99. Willard, “variability” and “variance” are not quite the same thing – “variability” is probably meant in most instances as a property of the climate system, whereas “variance” (in the point I was making) is a property of the estimator.

Bias means the estimator is systematically wrong (i.e. its average value over multiple realisations of the experiment is not equal to the true value); variance is the error due to estimating some quantity on a particular finite sample of data (i.e. the difference you would see if you repeatedly performed the experiment on independent samples). The use of bias in the paragraph you quote seems likely to be bias in a statistical sense, whereas the “bias” due to internal variability is more likely to be “variance”.

  100. Willard says:

    > “variability” and “variance” are not quite the same thing

I never suggested otherwise. What I said was that the counts of “variability,” “observational,” and “bias” were in the same ballpark in L18.

These three words do have technical meanings. These three words also have ClimateBall connotations. The fact that Nic uses them so much may indicate how to prepare for more ClimateBall rounds. Notice for instance FrankB’s comment at Judy’s:

    Yes, it’s ocams razor! All the estimates of higher TCR/ECS are deduced from models and the estimations of higher internal variability which are not justified from ANYTHING we observed. It’s pure speculation. It’s your choice to believe ( yes, believing is one of the most important requirements to follow this projections) in some possibilities of gloom and doom or to follow the trusted science.

    Variability. ANYTHING we observed. Pure speculation. Choice to believe. Gloom and doom.

    The dogwhistling is not that subtle.

  101. angech says:

    ” Andrew Dessler @AndrewDessler 14 Dec Terrific session on climate sensitivity today at #AGU17. Lots of discussion about climate sensitivity (ECS) estimates from the 20th century. The problem is that estimates of ECS from the 20th century obs. record are lower (1.5-2°C) than models (3°C).”
    There is a known disparity in applying the formula for ECS to the records we have.
    L and C18 fit into the observational range.
    They do not fit into the range obtained from models.
    So the models are right, something must have happened to the observations.
    Some internal variation has thrown the observations off the course they should have been on.
    Or the data being used is not the right data compilation.
    “We could improve [increase] the L and C 18 by using the Berkeley Earth land/ocean dataset with 12 % higher trend than Cowtan and Way.” [thanks Mosher]
Bias: “these observationally-based, energy-balance approaches tend to be biased low.” AD
Observations in a form not fit for purpose: “observations are blended SST/SAT whereas models are global SAT” Olof.
Observations are wrong: “Cowtan, Rohde and Hausfather (2018) highlight the uncertainty in SST estimates prior to 1970, hence the data is inadequate to confirm.” Chubbs.
Observations are not physics based and work differently to climate model observations: “take a suite of physics-based models, where you know the ECS in advance, and try to calculate it the way LC13 and LC18 do, you get the wrong answer.” DG

  102. “I never suggested otherwise. What I said was that the number of “variability,” “observational,” and “bias” were in the same ballpark in L18. ”

    you said it in response to

    “> I think part of the problem is there is too much focus on bias”

    Which was making the point that we should be focussing on variance as well as bias. Counting the number of times “variability” occurs in the document is entirely irrelevant to that point. If you were counting terms to make some other point, it isn’t clear why you quoted my comment.

    Personally, I don’t think this is a clear climateball usage since “internal variability” seems a pretty standard term in climatology and you see it in lots of papers by mainstream climatologists.

  103. Dave_Geologist says:

    Willard

    The dogwhistling is not that subtle.

    Wasn’t the original sense of the dog-whistle that the victims couldn’t hear it? Although it seems nowadays to be interchangeable with maintaining plausible deniability with third parties, while often with the express intent that the victims do hear it.

  104. Dave_Geologist says:

    angech

    So the models are right, something must have happened to the observations.

    No, the observations are fine. They’re just not measuring what LC18 think they’re measuring. The models show why that is the case.

  105. Willard says:

    > Counting the number of times “variability” occurs in the document is entirely irrelevant to that point.

That may explain why I did not quote that point, Dikran. My point was to reinforce the one according to which there’s too much focus on bias. In fact, it explains it. Since you simply assert that there’s not too much focus on bias, this explanation may come in handy.

It’s the number of occurrences of “bias” that matters to show Nic’s insistence on bias, BTW. Not “variability.” Variability serves another function altogether, since this time Nic minimizes it.

  106. I wrote: “> Counting the number of times “variability” occurs in the document is entirely irrelevant to that point.”

    Willard wrote “That may explain why I did not quote that point, Dikran.”

    You did quote that point Willard, you quoted “I think part of the problem is there is too much focus on bias” and that was the point for which the number of occurrences of “variability” is entirely irrelevant. That was the only point I made in that paragraph.

    “My point was to reinforce the one according to which there’s too much focus on bias.”

right, so you reinforced there being too much focus on bias by showing that there were an approximately equal number of occurrences of “variability” (a word that is entirely irrelevant to whether there is too much focus on bias)?

    “Variability serves another function altogether, since this time Nic minimizes it.”

How many times should Nic have mentioned it? What ratio of “bias” and “variability” could Nic have included that would have been satisfactory? Since “variance” (not “variability”) is the complementary property to “bias”, is the comparison meaningful?

  107. Chubbs says:

    Angech: the last 40-50 years of observations indicate that the models are providing better planning guidance than L&C 2018

  108. Willard says:

Perhaps you ought to try answering your many questions regarding your own claim that there’s too much focus on bias first, Dikran. How much focus should there satisfactorily be? That “bias,” “variability,” and “observational” have about the same number of occurrences is quite a strange coincidence, don’t you think?

While there is no relationship between “bias” and “variability” on technical grounds, there is one on ClimateBall grounds. It reinforces the contrarian framing. Here’s one reading of “bias,” courtesy of Very Tall and David Young from the Boeing Company:

    I viewed the video you linked and it is very disappointing. The last third of it is just self-serving tripe. It really does help me understand Dessler’s appearance here and his biases. It explains why he has to just ignore real critiques or lines of evidence that the doesn’t like. I must say as well it calls into question for me his honesty and directness in dealing with science generally

    https://scienceofdoom.com/2017/12/24/clouds-and-water-vapor-part-eleven-ceppi-et-al-zelinka-et-al/#comment-123671

    Try as you might to dissociate the technical meaning of bias from its ClimateBall connotation, you won’t succeed. Witness the exchange between Andrew and Nic above.

    Subtext matters even more when outright aggression is condemned.

  109. To avoid a Dikran vs Willard comment thread, I think Dikran’s point about the technical difference between bias and variance is correct. I hadn’t appreciated the Climateball context of the term “bias” that Willard points out, but it does seem worth bearing in mind.

  110. I asked Willard

How many times should Nic have mentioned it? What ratio of “bias” and “variability” could Nic have included that would have been satisfactory? Since “variance” (not “variability”) is the complementary property to “bias”, is the comparison meaningful?

    Willard avoids answering the questions by firing them back at me, which of course is much easier than actually answering them:

    Perhaps you ought to try answering your many questions regarding your own claim that there’s too much focus on bias first, Dikran. How much focus should there satisfactorily be?

There should be an appropriate balance of “bias” and “variance”, but “variance” is hardly mentioned. What is appropriate depends on context, but as we only have one realisation of the climate system that we can observe, we do need to worry about the variance of the estimator, not just the bias. There, I have answered your question, now please answer mine.

    That “bias,” “variability,” and “observational” has about the same number of occurences is quite a strange coincidence, don’t you think?

    As I have pointed out several times, “variance” and “variability” are not the same thing, so no it isn’t at all a strange coincidence. “variability” is a relevant property of the climate system, so of course it gets mentioned frequently. How many times is “variance” mentioned?

    Try as you might to dissociate the technical meaning of bias from its ClimateBall connotation, you won’t succeed.

    It is the misuse of “bias” that fosters this connotation. If we used “variance” a bit more, we might reduce it.

    BTW I am not saying Lewis focusses too much on bias, I am saying the discussion of the estimation of TCR/ECS in general focuses too much on bias (as we only have one run of the experiment).

  111. I think Willard’s point is that the term “bias” has a Climateball meaning that is distinct from its formal statistical meaning. I may, I will admit, be confused by the point Willard is trying to make 🙂

  112. To avoid a Willard-v-Dikran thread, I’ll stop commenting on it at this point, except to say we need more work of the sort that Andrew Dessler is doing (but not less of what Nic Lewis is working on).

  113. Willard says:

    > I am saying the discussion of the estimation of TCR/ECS in general focuses too much on bias (as we only have one run of the experiment).

    You didn’t support that claim the same way you require me to support that there’s too much focus on the word “bias,” Dikran. In contradistinction to you, I don’t even need to show that there’s too much “bias” in L18 for my point to stand. My point, to repeat and perhaps clarify, is that there’s so much talk of bias because bias is not a Good Thing. All I need to show is that (a) it’s a lot, i.e. it occurs more than once per page and a half; that (b) it’s in the same ballpark as other ClimateBall words like “variability” and “observational”; (c) it has currency in ClimateBall, e.g. see above.

    Or take Nic’s minimization of Andrew’s point earlier:

    Your ACP paper doesn’t identify any bias in the L&C energy budget method, other than a 7% lower the median ECS estimate for the MPI-ESM1.1 model relative to that for MPI-ESM1.2 estimated from 1000 years of abrupt2xCO2 simulation data. That is due to time variation of feedback strength in the model, an effect that is addressed in the L&C paper.

I don’t know about other commenters’ financial intuition, but if Nic can get me an investment vehicle that gives a guaranteed 7% annual return (even against the bankruptcy of the issuer), I may be able to find him enough clients for his lifetime. So I’m not sure I buy Nic’s “but only 7%.” In some contexts, 7% is a lot.

    Also note the “an effect that is addressed in the L&C paper” handwaving.

  114. Andrew E Dessler says:

I agree with several previous commenters that the terms people use are causing some confusion. We wrote our ACP paper carefully to be as clear as possible, but I have not been as clear in my tweets and in blog comments (because, to be honest, I don’t spend a lot of time thinking about them before I post). I said several times that the Lewis and Curry method is biased, but I should’ve said it is imprecise. Our ACP paper shows that the method would give us an accurate estimate if we had 100 different realizations of the 20th century. However, the imprecision of the method means that with only one realization (the historical record), it is possible that you could get an answer that is far from the true value. Several other papers that have come out recently have also suggested that the pattern of warming that we experienced during the late 20th century causes energy balance estimates of ECS to be lower than the climate system’s true value.
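A minimal sketch of that imprecision point, with made-up numbers standing in for the MPI-ESM ensemble (F_2x, the forcing and heat-uptake values, and the noise amplitudes are all assumptions for illustration): every member shares one forced response, yet the energy-balance formula returns a wide spread of inferred ECS.

```python
import numpy as np

rng = np.random.default_rng(0)

F_2x = 3.7                       # forcing from doubled CO2 (W/m^2), assumed
dF = 2.3                         # historical forcing change (W/m^2), assumed
forced_dT, forced_dQ = 1.0, 0.6  # shared forced responses, assumed

n_members = 100
# Internal variability nudges each member's observed Delta T and Delta Q.
dT = forced_dT + rng.normal(0.0, 0.08, n_members)
dQ = forced_dQ + rng.normal(0.0, 0.15, n_members)

ecs = F_2x * dT / (dF - dQ)      # energy-balance estimate, member by member

print(f"median inferred ECS: {np.median(ecs):.2f} K")
print(f"5-95% range: {np.percentile(ecs, 5):.2f} to {np.percentile(ecs, 95):.2f} K")
```

The historical record hands us a single draw from a distribution like this, with no way of knowing whether we drew from the middle or the tail.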

  115. Steven Mosher says:

“However, the imprecision of the method means that with only one realization (the historical record), it is possible that you could get an answer that is far from the true value. Several other papers that have come out recently have also suggested that the pattern of warming that we experienced during the late 20th century causes energy balance estimates of ECS to be lower than the climate system’s true value.”

    That seems fair.

what are the patterns of warming that would have led to the “true” value?

  116. “what are the patterns of warming that would have led to the “true” value”

    I suspect the answer would be one where the net effect of internal variability were exactly zero, so we directly observe the forced response of the climate system.

  117. Steven Mosher says:

“Here we test the method using a 100-member ensemble of the Max Planck Institute Earth System Model (MPI-ESM1.1)”

    First question would be “what does the same test done on all the other models show?”

    WRT the “pattern” of warming, a picture would be nice
    and comparisons of the patterns in the model versus the lucky pattern that reality dished up to us.

    Really interesting work.

  118. Steven Mosher says:

    “I suspect the answer would be one where the net effect of internal variability were exactly zero, so we directly observe the forced response of the climate system.”

    ya mathematically.

    I re read the paper looking for some images of the pattern…
    And with a 100 member ensemble I figure they would also include some examples of the world
    we could have seen.

As best I can see it, the argument goes that with the same exact forcing in a 100-member ensemble we get surfaces (where we live) that warm by wildly different amounts.. and we basically lucked out
and experienced an ECS that is even lower than the lowest ECS demonstrated by the 100-member ensemble.. wait..

    In any case they did publish the code and data so people could go look and see what warming pattern they found.

  119. HAS says:

“Our ACP paper shows that the method would give us an accurate estimate if we had 100 different realizations of the 20th century. However, the imprecision of the method means that with only one realization (the historical record), it is possible that you could get an answer that is far from the true value.”

    Help me here. This seems to be implying that climate sensitivity is an artifact of a GCM rather than of the actual climate we are experiencing. It seems to me the latter is the important measure, particularly when based on the instrumental period to produce useful results for near term policy.

    On this basis if using GCMs to help understand energy balance models one would limit the runs in the ensemble to those that accurately represented the instrumental period climate (averaged over some acceptable time periods etc) to within the limits of our ability to measure it.

    What might have happened in other realisations the model may have been able to reproduce seems to be of less interest.

    More interesting would be to look at the runs that reproduce the instrumental period and investigate the stability of the measure to start and finish periods, for example.

  120. JCH says:

    It seems to me the latter is the important measure, particularly when based on the instrumental period to produce useful results for near term policy.

    He gets Lewis and Curry: political.

  121. HAS,
Technically, the ECS is a climate model metric determined by increasing atmospheric CO2 at 1% per year for 70 years and then letting the system run to equilibrium (which would take hundreds of years). For a single model there is, in principle, one ECS. The idea is that for reality there is probably also one ECS from a given initial state. The problem is that if you try to infer this from something like the historical record, you could get a result that is not close to this true value, because of the impact of internal variability.

Is it possible that internal variability could ultimately mean that the ECS that we experience is actually different to the “true” value, and close to what these energy balance estimates suggest? I guess that this is possible. However, the longer the timescale, the more of the parameter space we are likely to sample, and the more likely it is that the mean of what we experience will be close to the “true” ECS. This is not guaranteed, of course, but the point is mainly that we should be careful of assuming that observationally-based, energy balance estimates of climate sensitivity are close to the “true” climate sensitivity values.
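As a quick check on the 70-year figure in that definition: 1% per year compounds to a doubling after about 70 years.

```python
import math

# 1% per year compounds to a doubling after roughly 70 years:
print(1.01 ** 70)                     # ~2.007
# Exact doubling time under 1%/yr growth (the "rule of 70"):
print(math.log(2) / math.log(1.01))  # ~69.66 years
```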

Only studying model runs that happen to be close to the realised warming to understand climate change is a bad idea. Just as bad as only studying dice that produced a three to understand dice games.

  123. HAS says:

    But the point is the ECS in the real world is more tightly constrained than just an initial state and how the models run from there. We have for example in this instance the record over the instrumental period as a basic constraint. The problem is that the internal variability from model runs that don’t reproduce our immediate history is being imported into short-term assessments of the current impact of various forcings and what the immediate future holds.

    We might have followed a different path from a common pre-industrial past with actual observed forcings, but we didn’t.

    It does rather expose a difference in attitude toward GCMs. The concept of “true” sensitivity as you describe it is ultimately unknowable in an empirical sense, and GCMs unfalsifiable (there is always the claim of “wait there’s more” available).

    If on the other hand one constrains the discussion to models that replicate the known climate (no matter how imperfect that might be) the discussion changes. We have two methods that (may then) produce different results. This is something to be explored.

    More generally opportunities to evaluate GCMs should be welcomed by the modelling community. What these two papers do, as I understand them, is help us understand the limitations of both approaches, not the superiority of one over the other. (It does perhaps show the superiority of GCMs in determining “true” sensitivity, but that is hardly surprising since without GCMs it wouldn’t exist).

  124. HAS says:

    Victor we are studying games that begin with a throw of 3.

  125. -1=e^iπ says:

The new Lewis and Curry paper represents a good improvement over the past papers. I am glad to see that Lewis and Curry have conceded that unobserved regions of the Earth’s temperature history (and the desirability of statistical infilling), differences in forcing efficiency, and differences between effective ECS and actual ECS due to the imperfectness of the energy balance model are important issues. It would have been nice if they had taken into account the uncertainty in the relationship between effective ECS and actual ECS inferred from climate models in their final estimates (or maybe they did; I have yet to go through their full paper).

    One important issue in the paper, which I haven’t seen discussed in this comment section, is the changes to methane forcing due to the inclusion of near-infrared bands combined with taking cloud cover into account. See paper https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2016GL071930

From my understanding, near-infrared bands just above 2.7 micrometres cause a cooling effect, but the bands just below 2.7 micrometres cause a warming effect. The reason for this is that clouds start to become unreflective above 2.7 micrometres. So while a 3.0 micrometre band might cause surface cooling by blocking shortwave radiation from reaching the surface, a 2.0 micrometre band causes warming as it absorbs some shortwave radiation reflected off clouds. Pretty cool.

    https://www.nohrsc.noaa.gov/technology/avhrr3a/avhrr3a.html

  126. Willard says:

    > Victor we are studying games that begin with a throw of 3.

    That would be “game” in singular, HAS. Only one (instance of the) game started with a 3. Many other values could have been possible. It’s hard to know if that 3 is precise when all you got is one 3.

If you want to study the kind of game it is, it might make more sense to focus on the dice than the three observed. Calling the study of the dice “model-based” and the study of the three “observation-based” is just gamesmanship – in both cases we can only get a model of the game. (Opposing “physics-based” to “observation-based” might be more even-handed.) Having to choose between studying the dice itself by simulating some throws and studying the dice just by looking at past throws leads to a false dilemma unless we could establish one approach as more accurate than the other.

  127. HAS says:

Willard, we are studying games that begin with 3. It’s what comes next that we are trying to adduce. That approach is necessarily more relevant (rather than “accurate”) because that’s the world our hypothetical player lives in. You are making the mistake of assuming s/he is estimating the behaviour of the whole class of dice systems. That’s been done by the support team sitting behind the one-way mirrors.

I suspect the analogy has run its course, so to be clear on the substance: there is a distinction between what modellers do in model world in the privacy of their own homes (and what Victor correctly states should cover all relevant angles within reason), and the application of models. L&C and Dessler et al are primarily about the application of their respective models to the real world.

  128. Dave_Geologist says:

    SM

First question would be “what does the same test done on all the other models show?”

    Oh look, it’s been done. Surprise, surprise, someone thought about it already. Not just one other model, but more than a dozen. Case closed, eh?

    Or was there a whiff of the Nirvana Fallacy about your question (Impossible Expectations: demanding ever more precision and ever more perfect information before conceding or taking action)?

    Plain Language Summary
    Even if we remove the uncertainty associated with human behavior, we still don’t know exactly how hot it is going to get. This is because warming associated with increased atmospheric carbon dioxide triggers climate changes that themselves can accelerate or decelerate the warming. The equilibrium climate sensitivity (ECS) is defined as the eventual warming in response to a doubling of atmospheric CO2, and it is tempting to estimate this quantity from recent observations. However, in climate models, the ECS inferred from recent decades is lower than the eventual warming for two reasons. First, it takes the climate many centuries to fully come to equilibrium, and models indicate that we should expect even more warming in the future. Second, the conditions experienced in the real world seem to have given rise to especially low estimates of ECS, perhaps purely by chance. Climate models indicate that not only are ECS estimates based on recent decades lower than the eventual warming, but they may not even be predictive of that warming. A climate model that shows strong warming in response to recent real‐world conditions does not necessarily have high long‐term sensitivity, and vice versa.

The Supplementary Information isn’t paywalled. Plenty of detail there.

  129. Dave_Geologist says:

    HAS

    …the actual climate we are experiencing. It seems to me the latter is the important measure, particularly when based on the instrumental period to produce useful results for near term policy.

    No. The important measure is how much future warming is locked in for a given amount of current anthropogenic forcing (CO2 etc). Because the lag time for the planet to re-equilibrate is decades to centuries. Your approach is equivalent to standing naked in the snow, taking your temperature, and saying “fine, I’m only 0.5° below normal”. Then “fine, I’m only 2° below normal”. Then “fine, I’m not hypothermic yet”. By the time you are hypothermic, it’s too late to take action as you’ve lost consciousness. As opposed to anticipating the coming hypothermia, even though you’ve barely started shivering. Our grandchildren will have to live with the consequences of our folly, and absent a time machine, won’t be able to undo it. Just curse our memory.

  130. angech says:

    AD Several other papers that have come out recently have also suggested that the pattern of warming that we experienced during the late 20th century causes energy balance estimates of ECS to be lower than the climate system’s true value.

    Steven Mosher says: April 28, 2018
    “That seems fair.
    what are the patterns of warming that would have led to the “true” value.”
    Was that a pause for thought, Steven?
Which leads to a problem: this answer conflicts badly with that of Gavin D G (April 29, 2018):
    “An emerging literature suggests that estimates of equilibrium climate sensitivity (ECS) derived from recent observations and energy balance models are biased low because models project more positive climate feedback in the far future.”
    Not the fault of the pattern of warming at all.

  131. JCH says:

    Not the fault of the pattern of warming at all.

    He walks into another plate glass door. It’s hard to watch.

  132. -1=e^iπ said
    “desirability of statistical infilling”

    Unwittingly, Mr. Imaginary # touches on the tip of the iceberg

  133. Willard says:

    > we are studying games that begin with 3

    You keep using that word, HAS. It might not mean what you make it mean. If what you say really was the case, you’d have a whole set full of games (i.e. similar planetary systems), each with its own dice rolls (i.e. historical data), all of them starting with a 3. And no, I’m not assuming your favorite ClimateBall player is estimating the behaviour of the whole class of dice systems. That’s what he’d do if he really was studying the game as properly understood. That’s not what he does.

    His usage of “bias” shows that he doesn’t. It assumes that his model provides a true reference point. This is a strong assumption, so strong as to undermine how we usually conceive climate systems. Something about initial conditions.

    As an idealization, it’s no big deal. As a propaganda tool for the GWPF and other contrarian megaphones, it’s more than underwhelming. Gamesmanship is involved more than game theory.

    Maybe it’s a vocabulary thing.

  134. izen says:

    I struggle to grasp the TCR/ECS metrics and how they are estimated, measured, and modelled.
    So the following question may be inane from the D-K effect.

    When Observational determinations generate lower TCR/ECS than modulz (and Paleo), does this mean;-

    1) Less energy, in Joules, has entered the system than modulz assume.

    2) More energy, in Joules, has escaped the system because feedbacks are smaller than the modulz model.

    3) The same amount of energy, in Joules, has entered the system (90%oceans, but also phase changes and regional redistribution), but the effective thermal capacity is higher so the temperature rise is smaller.

    If this is a hopelessly misguided question, based in a fundamental misunderstanding of the science, please indicate where I might correct the misconceptions I am working on!

  135. JCH says:

    Trenberth: it either went into the oceans or was reflected back to space before it arrived at the surface.

  136. Willard says:

    > Just as the dice thrower discards all those models that don’t start with 3 […]

    A Backgammon player won’t discard his dice model because he got a 3, HAS. He’d still expect to get between 1-1 and 6-6. If he studies a position, say a back game, he will look at many different rolls, and if he’s a professional, he’ll even run something like a Monte Carlo by hand.

The main differences between a Backgammon player and a climate scientist lie in the confidence the former has in the belief that he’s rolling dice and in the results of the rolls he got. The latter has neither luxury. All modulz are wrong, but there’s no adamant data either. If you think you got a 1 or a 13 on two dice, chances are it is your observation that needs to be revised.

    And once again, to hammer that point home – even if you root for energy balance approaches, they still are models, just like everything else and however you might try to portray them as “evidence-based,” whatever that means. (As if the other approaches weren’t based on observation.) The whole “evidence-based” thing minimizes that it’s still an inferential matter. Hence why D18’s abstract reads:

    Our climate is constrained by the balance between solar energy absorbed by the Earth and terrestrial energy radiated to space. This energy balance has been widely used to infer equilibrium climate sensitivity (ECS) from observations of 20th-century warming. Such estimates yield lower values than other methods, and these have been influential in pushing down the consensus ECS range in recent assessments. Here we test the method using a 100-member ensemble of the Max Planck Institute Earth System Model (MPIESM1.1) simulations of the period 1850–2005 with known forcing. We calculate ECS in each ensemble member using energy balance, yielding values ranging from 2.1 to 3.9 K. The spread in the ensemble is related to the central assumption in the energy budget framework: that global average surface temperature anomalies are indicative of anomalies in outgoing energy (either of terrestrial origin or reflected solar energy). We find that this assumption is not well supported over the historical temperature record in the model ensemble or more recent satellite observations. We find that framing energy balance in terms of 500 hPa tropical temperature better describes the planet’s energy balance.

    Source: https://www.atmos-chem-phys.net/18/5147/2018/acp-18-5147-2018.pdf

Just like throwing a bunch of dice can tell you if your dice are calibrated, Andrew’s approach can tell you that Nic’s approach lowballs sensitivity. It can also help you diagnose why, say because of some spatial effects. And even if you’d like to stick to energy balance approaches, Andrew provides a way to improve upon them, e.g. by using the tropical atmosphere data.

  137. HAS says:

Dave, I understand what ECS attempts to measure. What I’m suggesting, to combine our words, is we are trying to measure “how much future warming is locked in for a given amount of current anthropogenic forcing (CO2 etc)” in “the actual climate we are experiencing.”

    I think that is more important than measuring the properties of a GCM.

Willard, my comments weren’t directed at L&C; they relate to how Dessler et al apply their model. Their model has been developed to investigate what happens in the class of all earth-like planets (aka all dice games). However they then face the problem of applying it to reality, where we know something about what’s going on (the games that have started with the throw of a 3). Just as the dice thrower discards all those models that don’t start with 3, so they should put aside those model runs that don’t replicate what’s actually happened (within the constraints of our ability to measure this). The general learnings should be embodied in the model structure.

    As I read it they partially do this by trying to replicate the initial conditions, but after that they put aside information about the actual pathway the climate took. This I suspect is why they find much greater variation than is derived from empirical measures. They are including a whole lot of climates that never happened.

  138. HAS,

    What I’m suggesting, to combine our words, is we are trying to measure “how much future warming is locked in for a given amount of current anthropogenic forcing (CO2 etc)” in “the actual climate we are experiencing.”

    I think that is more important than measuring the properties of a GCM.

Except we don’t know that what we’ve experienced is necessarily a good indicator of what we will experience. Our understanding of the climate suggests otherwise. In other words, a historical period that suggests ECS might be low does not necessarily imply (as I understand it) that what we will experience on longer timescales will be consistent with this low ECS estimate. My view is that we can’t rule out that the ECS will be low, but that the results presented in Lewis & Curry do not provide strong evidence that it will be.

  139. HAS says:

aTTP, yes, understood. The point is that the fact that we’ve had this period changes the likelihoods of what we might experience in the future. This needs to be recognised. Dessler et al’s critique of empirical methods (not just of L&C) relies (I think) on importing variation we didn’t have into the calculation of TCR over the instrumental period. If we are forecasting we need to take account of the uncertainty, but constrained to those models that reproduce our past.

  140. HAS,

    The point is the fact that we’ve had this period changes the likelihoods of what we might experience in the future.

Okay, that’s a fair point. I don’t know if that is somehow incorporated into Dessler et al’s analysis, or not. I would certainly be interested in the long-term evolution of climate models that initially follow a pathway comparable to what we’ve actually experienced. Maybe this has been done?

  141. HAS says:

    Willard, as I said before I am only talking about the way Dessler et al apply their model to evaluate the energy balance model. In amongst all the philosophising I’ve really only posed two questions:

    1. Should they have replicated the selection of end points used by L&C rather than just the first and last decades in their available model runs?
2. Where they say “Here we test the method using a 100-member ensemble of the Max Planck Institute Earth System Model (MPIESM1.1) simulations of the period 1850–2005 with known forcing.” should they have used the full range of runs only initialised to the preindustrial level (as they do), or should they have constrained them to model runs that reasonably replicated the measured climate we experienced over the instrumental period?

    The first may explain the difference in the absolute value of the estimate, the latter the wider spread.

  142. HAS: “The point is the fact that we’ve had this period changes the likelihoods of what we might experience in the future.”

    ATTP: “Okay, that’s a fair point.”

    ATTP, you are being too generous here. Again.

    First of all, you can also get high estimates of the climate sensitivity using the same kind of simplified statistical climate models if you account for the biases these statistical models have. http://variable-variability.blogspot.com/2016/07/climate-sensitivity-energy-balance-models.html

Secondly, as Dessler et al. (2018) showed, you can get a very wide range of climate sensitivity estimates from these statistical models if you apply them to the output of a physical climate model (which has one well-defined climate sensitivity). Thus getting a high or a low value is not particularly informative.

    It would make no sense to take one of these model runs that happens to have had a very high or a very low sensitivity to then argue that the climate sensitivity of that model is wrong. As long as it is in the uncertainty range all is fine. Just like you do not claim a die is biased when you get one three.

In the end the evidence is based on all the evidence, including the energy balance models, but definitely not with a high weight because they are not particularly informative. If Lewis wants people to ignore all the other evidence, he still has a few papers to write to show that each and every one of these other estimates is wrong.

  143. izen says:

    @-W
    “The main differences between a Backgammon player and a climate scientist …. ”

Is this not a game with THREE dice?
The Instrumental record, which we are pretty sure is a 3.
The +/- Forcings/Aerosols, about which we are less certain, but think is a 3.
And the rate of energy uptake by the Oceans, which is much more uncertain, varies over time and may be altered by how far out of equilibrium it is at any point.
That throw is effectively subtracted from the other two.

    There is an element of Groundhog day in this. I made and first posted this in Oct 2016, it still seems apposite. (if not geometrically accurate!)
    https://izenmeme.wordpress.com/2016/10/19/climate-sensitivity-and-visual-rhetoric/ecs_pdf3b/

  144. Andrew E Dessler says:

    HAS: Here are a few answers to your questions:

    1. Should they have replicated the selection of end points used by L&C rather than just the first and last decades in their available model runs?

    That’s not really possible, since our ensemble members end in 2005. We have calculated ECS using 1869-1882 as the start period and the results don’t change much. In particular, there is still a very large spread in inferred ECS among the ensemble members.

    2. Where they say “Here we test the method using a 100-member ensemble of the Max Planck Institute Earth System Model (MPIESM1.1) simulations of the period 1850–2005 with known forcing.” should they have used the full range of runs only initialised to the preindustrial level (as they do), or should they have constrained them to models runs that reasonably replicated the measured climate we experience over the instrumental period?

    We obviously don’t address this, but it has been addressed. Marvel et al. 2018 (GRL) did investigate this by plugging observed SSTs into the model and calculating ECS. They get a very low ECS. There’s quite a bit of other literature that supports this (Zhou et al., Andrews and Webb, etc.). My interpretation, which I think is shared by most other experts, is that the energy budget analyses are pretty likely low biased, for reasons explained in these papers.

  145. Willard says:

    > Is this not a game with THREE dice.

Backgammon? Kinda. There are two pairs of dice to move the stones, and there is a doubling cube. This one is used for betting on the number of points (1, 2, 4, 8, 16, 32) the winner of the game will score. (A Backgammon match is a series of games.) Before it has been offered for the first time, either player can offer it to the opponent in exchange for doubling the stakes. In return, the opponent gets the cube. Later in the game, he alone can offer back the cube. Et cetera.

    Here would be one way to connect this cube with L18, courtesy of Ragnaar at Judy’s:

    > They are also aware of all the whiz kid models that fell flat on their face. Derivatives.

    You could not have said it better, Ragnaar. Perhaps unknowingly. Nic used to be an accountant. He may relate to it.

Now, in your story, are the derivatives freaks those who’d try to downplay risks based on observable success, or those who’d prefer to play it safe?

    https://judithcurry.com/2018/04/24/impact-of-recent-forcing-and-ocean-heat-uptake-data-on-estimates-of-climate-sensitivity/#comment-871251

  146. HAS says:

Victor, “the fact that we’ve had this period changes the likelihoods of what we might experience in the future” is a tautology for systems like this, where we are assuming the past tells us something about the future. If we’re not, then we may as well go back to throwing dice.

Andrew, I don’t see how the end points would in principle affect the spread, just the absolute value. It would be simple enough to try to redo the sums for different periods using (say) L&C’s decision rule for their selection, even if not optimised. It would show the sensitivity of the results to this limitation of the analysis (aka replicating energy balance model methods).

    On the question of constraining the model runs to those that replicate the instrumental period climates, intuitively that should constrain the range of the estimates. The subsequent critique that attributes the range to changes in internal states should be significantly diminished.

    I’ll look at the other papers you cite.

Izen and Willard, I’m sorry I responded on the dice. It was just meant as quick shorthand for the idea that if you are playing a game that progressively evolves, one’s future strategy takes account of what has happened before.

  147. Steven Mosher says:

    “Not the fault of the pattern of warming at all.”

    I would not know. I have yet to see a visual of this pattern of warming
    or ANY patterns of warming in the 100 member ensemble.

    I suppose the GCM point could be driven home more forcefully if more than 1 model were
    used to construct 100 member ensembles.

  148. Willard says:

    > I’m sorry I responded on the dice. It was just meant as quick short hand for the idea that if you are playing a game that progressively evolves ones future strategy takes account of what has happened before.

I’m not sure why, HAS. It came right after you editorialized about “attitudes” toward GCMs, and in response to VeeV’s Only studying model runs that happen to be close to the realised warming to understand climate change is a bad idea.

    Does that mean you’re willing to take back your more interesting would be to look at the runs that reproduce the instrumental period and investigate the stability of the measure to start and finish periods?

  149. HAS: “Victor, “the fact that we’ve had this period changes the likelihoods of what we might experience in the future” is a tautology for systems like this where we are assuming the past tells us something about the future. If we’re not then we may as well go back to throwing dice.

    If you throw a die many times you may see it is biased and use that to predict the future.

    If you throw it once and get a three that is not particularly informative about the future.

    Same system, different uncertainties. As long as you are within the confidence/uncertainty range all is good. Only if you get a statistically significant result you may want to update your predictions about the future (preferably after understanding the reason).
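A toy version of the die example (the loaded probabilities are arbitrary): a single throw is uninformative, while many throws expose the bias.

```python
import numpy as np

rng = np.random.default_rng(7)
faces = np.arange(1, 7)
loaded = [0.10, 0.10, 0.35, 0.15, 0.15, 0.15]  # die secretly favours 3

one_throw = rng.choice(faces, p=loaded)
many_throws = rng.choice(faces, p=loaded, size=10_000)

print(f"single throw: {one_throw}")  # tells you almost nothing about the die
print(f"frequency of 3 over 10,000 throws: {(many_throws == 3).mean():.3f}")
# ~0.35, clearly above the 1/6 expected of a fair die
```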

    The paradoxical worshippers of the Uncertainty Monster who accuse scientists of ignoring uncertainties should themselves stop ignoring uncertainties.

P.S. Model spread is not uncertainty. The uncertainty is typically about twice as large. So the unreliability of the Lewis and Curry statistical method is likely even larger than the friendly Dessler et al. study estimated. And we are not even discussing the uncertainty in the total historical warming estimates, which could well have been a few tenths of a degree Celsius larger. All is good.

  150. Willard says:

Having multiple views is a Good Thing.

  151. HAS says:

Victor, are you saying we should ignore information from the instrumental period when making inferences about the climate? I’m sure you aren’t, and me neither; that’s the (what should be uncontroversial) point I’m making.

Willard, to answer your question: the answer is “no”.

  152. Hyperactive Hydrologist says:

If you are concerned about internal variability, why not use a longer period for your historic and present comparison? Say 50 years. Why use really short periods that are more likely to be affected by internal variability, and then jump through hoops trying to justify those periods based on internal variability?

HAS wrote “we are studying games that begin with 3.” but the climate system is chaotic, so do we get the same result if we begin with 3.1, or 3.01, or 3.0000001? It seems to me that many don’t understand the use of Monte-Carlo simulation (which is what a GCM ensemble is). We begin by assuming that the observed climate is an additive combination of the response to the forcings (the “forced response”) and internal climate variability (the “unforced response”), and that the unforced response is essentially independent of the forcings (which is unlikely to be completely true). With a GCM we can’t predict the effects of internal climate variability, because it is chaotic (i.e. deterministic, but with high sensitivity to initial conditions – I’ll return to that later). We can however simulate internal variability because it is deterministic, just like the forced response. Thus what we do is run many simulations of the climate, each starting off from a different set of initial conditions, so that each will have essentially the same forced response (which doesn’t strongly depend on initial conditions), but different internal variability (e.g. the phases of ENSO won’t be coherent between runs). Taking the average of the model runs however means that the effects of internal variability cancel out, which leaves you with an estimate of the forced response on its own.

    This is an important point: the ensemble mean is not directly a projection of what we expect to observe (obviously it is much too smooth, for a start) as it is only the forced response, and the observed climate will also include the effects of one realisation of internal variability. However, the spread of the model runs gives an estimate of the variation around the forced response that we can plausibly expect to see due to the unforced response (or at least within the uncertainties included in the model – see Victor’s excellent blog post that he gave the URL for upthread).

    Right. So why shouldn’t we select the models that give the most similar results to the estimates of TCR/ECS based on observations? Well, the first reason is that TCR and (especially) ECS are determined by the forced response, and the best estimate of the forced response depends on all members of the ensemble, so by discarding the runs with higher estimates, the ensemble mean is no longer an estimate of the forced response that you need for estimating ECS, as you have made it dependent on the unforced response again. The other reason is that they may give low estimates of ECS, but not for the right reasons, e.g. it may be due to ENSO in one model, but some other factor in another, and internal variability being chaotic, the future may not be heavily dependent on the state of internal climate variability today. I can see why you might want to prune back the ensemble for e.g. decadal prediction, but AFAICS there seem to be reasons not to do it when estimating TCR/ECS.

    Caveat: Just a lowly computer scientist who uses GCM output now and again, but isn’t a climate modeller (though I do use Monte Carlo simulation a fair bit for other things).
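    To make the averaging argument concrete, here is a minimal Python sketch (all numbers invented, not output from any actual GCM): each synthetic “run” is the same forced trend plus an independent AR(1) realisation of internal variability, and the ensemble mean recovers the forced trend while any single run can wander well away from it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_runs = 150, 40
forced = 0.01 * np.arange(n_years)  # hypothetical forced warming trend (K)

def ar1_noise(n, phi=0.6, sigma=0.15):
    # one realisation of AR(1) "internal variability"
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

runs = np.array([forced + ar1_noise(n_years) for _ in range(n_runs)])
ensemble_mean = runs.mean(axis=0)

# internal variability is incoherent across runs, so it largely cancels
print(abs(ensemble_mean - forced).max())  # small residual, shrinks as n_runs grows
print(abs(runs[0] - forced).max())        # a single run can deviate much further
```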

  154. Steven Mosher says:

    excellent dk

  155. HAS says:

    dikranmarsupial, we have run our ensemble, as you have discussed, across the instrumental period, but we know that many members don’t match the measured climate, and we want to use the ensemble to replicate the TCR derived from that period by alternative means.

    What would you do with those members that are inconsistent with the known climate?

    Footnote: we aren’t suggesting selecting “the models that give the most similar results to the estimates of TCR/ECS based on observations”.

  156. Steven Mosher says:

    “Time-Varying Climate Sensitivity from Regional Feedbacks”

    ya read that.

    1. Pattern is not defined in any rigorous testable way, reminds me of the pattern hounds /sun nuts at WUWT.

    2. The variation in “patterns” over all members of the ensemble are not cataloged.

    3. I see no comparison in Dessler to the “patterns” in Armour.

    Basically, if you want to make a pattern argument, you need to actually make it and not just wave your arms.
    Questions: are all the “patterns” produced by GCMs realistic? Dunno.

    looking at a bunch of models…

    I like the models that have “speckles” of warming and cooling over 100 years

    http://berkeleyearth.org/graphics/model-performance-against-berkeley-earth-data-set/#section-2-33

    probably something amiss with the physics of a model that shows grid cells of intense cooling over a 100-year period adjacent to grid cells of warming. who knows

  157. HAS, read further down: the third paragraph tells you why we should use the whole ensemble. We want to estimate the forced response of the climate, and we can only do that using the whole ensemble. If some model run is inconsistent with the observations with regard to the unforced response, that is completely irrelevant, as it gets averaged out anyway if the ensemble is sufficiently large.

  158. HAS says:

    dikranmarsupial, yes I understand that. The issue here is that the ensemble runs are being individually used to calculate a statistic derived from the instrumental period and it is then being suggested that the derivation of that statistic is unreliable because the spread is so great.

    Humor me and try to answer the question I posed above. What would you do under those circumstances? I should add that constraining the output of GCMs to those that conform to the instrumental period’s climate in order to study other climate phenomena is hardly novel. It’s what the IPCC does to get their various scenarios.

  159. Hyperactive Hydrologist says:

    HAS,

    You are assuming the instrumental period is long enough to represent the full range of internal variability and hence the full statistical spread.

  160. HAS says:

    HH, I didn’t think I was assuming anything. I’m asking two questions of Dessler et al in their replication/testing of the energy balance models, and of L&C by implication (see above).

  161. HAS “Humor me and try and answer the question I posed above. What would you do under those circumstances? ”

    I have already answered that question and explained why. I would use the whole ensemble because I am interested in estimating the forced response of the climate system.

    “dikranmarsupial, yes I understand that. The issue here is that the ensemble runs are being individually used to calculate a statistic derived from the instrumental period and it is then being suggested that the derivation of that statistic is unreliable because the spread is so great.”

    No, I don’t think you do understand. The ECS estimates from individual runs are just that, estimates. The error of an estimator has two components: bias (the estimator is systematically wrong) and variance (the degree to which the answer varies depending on which run you look at). If you average the value of the estimator over multiple runs, it doesn’t reduce the bias component, but it does tend to reduce the variance component, because the effects of the variance are not generally coherent and so average out to zero. Thus we don’t trust any single model run to tell us ECS; we average over the ensemble. Likewise, the ECS we get from observations is like an estimate we get from looking at just one ensemble member. Now we may be lucky and pick an ensemble member that just happened to be near the mean, but we may not; we may pick an ensemble member that is in the tails of the distribution. How do we know whether the observed realisation of internal variability is one of these outliers? We don’t, except by comparing it with the distribution of values we get from the GCM ensemble.
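    The bias/variance point lends itself to a tiny simulation. A hedged sketch with made-up numbers (a stand-in estimator, not the actual L&C calculation): give an estimator a fixed bias plus run-to-run noise, and averaging over many runs shrinks the noise but leaves the bias untouched.

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 3.0   # the quantity we want (a stand-in for the "true" ECS)
bias = 0.4         # hypothetical systematic error of the estimator
noise_sd = 0.8     # run-to-run scatter from internal variability

estimates = true_value + bias + rng.normal(0.0, noise_sd, size=10_000)

print(estimates.std())                 # ~0.8: any one run alone is unreliable
print(estimates.mean() - true_value)   # ~0.4: averaging kills variance, not bias
```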

  162. HAS says:

    dikranmarsupial, to help, it seems one has two options. Either constrain the ensemble members to the measured climate, as per the IPCC, or acknowledge that the members individually don’t represent the actual climate and that they need to be averaged in some way before using them for that purpose.

  163. HAS says:

    dikranmarsupial commented before seeing yours.

    You are misunderstanding the issue. The model runs aren’t being used to calculate ECS as you know it; they are being used as input to test an energy balance model.

  164. “The issue here is that the ensemble runs are being individually used to calculate a statistic derived from the instrumental period and it is then being suggested that the derivation of that statistic is unreliable because the spread is so great.”

    Say I have a die with an unknown number of sides (perhaps it could be a d4, a d6, a d8, a d10, a d12 or a d20). Say ECS corresponds to the average value you get from rolling the die multiple times.

    I tell you we have observed one roll and got a 3, and we use that as an estimate of ECS – what else can we do?

    Well, we can simulate our die a large number of times, using our best estimate of the physics. Say physics tells us that it is most likely to be a d20. We simulate our d20 and find that, on average, our estimate of ECS is 10.5. But if we look at individual rolls, we get values from 1 to 20 (hopefully with a uniform distribution). Now if the real climate actually is a d20, then our model is unbiased: on average, it gives us the correct answer. However it has a high variance (due to the internal variability of dice rolling), so individual rolls are not very reliable.

    Now, say we decide to select only those rolls that are consistent with the observations, say within 2 of the observed 3. Sure, we have reduced the variance of our estimator, but it now gives us the answer 3 instead of 10.5, so it now has a whopping big bias. Using a non-random subset of the ensemble gives you the wrong answer!
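    The die example can be run directly. A minimal sketch, assuming the true die really is a d20: conditioning the simulated rolls on agreement with the single observed 3 slashes the variance but produces a badly biased mean.

```python
import numpy as np

rng = np.random.default_rng(2)
rolls = rng.integers(1, 21, size=100_000)   # simulated d20 "ensemble" (values 1..20)

print(rolls.mean())                          # ~10.5: the unbiased estimate

kept = rolls[np.abs(rolls - 3) <= 2]         # keep only rolls within 2 of the observed 3
print(kept.mean())                           # ~3.0: low variance, whopping bias
```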

  165. Forgot to say, if the model is right, the observations are statistically interchangeable with individual model runs, so if the individual model runs have a high variance, so does the estimate from the observations.

    AIUI Dessler’s study is just showing that the observational estimates of ECS are plausibly estimates from the bottom of the spread and that the difference between observational and ensemble based estimates is plausibly the effects of internal variability on the observational estimates.

  166. HAS says:

    dikranmarsupial as I said I understand. It is irrelevant. The estimations of TCR/ECS are being done in an energy balance model. The GCM is simply providing alternative scenarios of the climate over the instrumental period to test the performance of that approach along with model derived estimates of TCR/ECS for comparison. Have you read Dessler et al (2018)?

  167. HAS says:

    Crossed again, nothing really to add.

  168. “dikranmarsupial as I said I understand. It is irrelevant. ”

    sorry, if you are going to take the attitude that you are right and whatever I say is irrelevant, then there is no point discussing it with you further as you are not listening.

  169. JCH says:

    3. I see no comparison in Dessler to the “patterns” in Armour.

    Are patterns caused by geographic features, Armour, comparable to patterns caused by internal variability, Dessler?

  170. Steven Mosher says:

    “Dessler’s study is just showing that the observational estimates of ECS are plausibly estimates from the bottom of the spread and that the difference between observational and ensemble based estimates is plausibly the effects of internal variability on the observational estimates.”

    I like the plausibility language much better than other descriptions.

    Now if we did the Dessler study with a different model and found the observational estimate in
    the top of the spread, what would we conclude?

  171. That there is non-negligible “structural uncertainty”. But wait, there’s more, see Victor’s excellent blog post on why the model spread is an underestimate of the true uncertainty.

  172. JCH says:

    Now if we did the Dessler study with a different model …

    Stevens is one of the good guys. Maybe all that’s left are bad guys.

    I’m just a cowboy. Since early on in the series of articles on L&C I have expressed disbelief that a record that ends with a one-off wind event, anomalous and powerful, that cooled the surface like crazy could be relied upon to estimate ECS. So the talk has gone from the AMO assisting warming since 1970 to internal variability being zeroed out.

  173. paulski0 says:

    Steven Mosher,

    1. Pattern is not defined in any rigorous testable way, reminds me of the pattern hounds /sun nuts at WUWT.

    I really don’t know how you’re getting that from that paper. They show clearly the regional feedback strength split into different feedback processes (fig5), and how the proportional spatial warming pattern differs over time as the global average increases (fig3). The principle is then fairly simple – because the proportional spatial warming pattern is different over a 200-300 year timescale than for the first few decades of the forced response, the spatially variable feedback strength is activated differently. Specifically (at least in this model), warming is proportionally greater at higher latitudes over time, which is where the net feedback strength is less negative. Hence the global net feedback strength increases over time.

    Of course, that’s the forced response pattern effect, which shouldn’t be confused (e.g. see angech’s confusion) with the quite independent issue of temporary variance in effective sensitivity caused by unforced natural variability. In terms of spatial variability influence over the recent period, the main thing which has been picked out has been the pattern over the Tropical Pacific (e.g. Andrews and Webb 2017, Kosaka and Xie 2013). In my simplistic understanding, I think about it in terms of the West-East gradient. You can see on figure 5 from Armour et al. that there is a patch of very strong positive feedback in the Equatorial East Pacific in the CCSM4 model, whereas the West Pacific feedback is strongly negative. If you look at the spatial trend pattern over the past 30-40 years, particularly up to 2014, the West has been warming, which triggers those damping negative feedbacks, whereas the East has been cooling, which means those strong positive feedbacks are pushing the global net feedback towards a more negative value.

  174. paulski0 says:

    Steven Mosher,

    probably something amiss with the physics of a model that shows grids of intense cooling over 100 year period adjacent to a grids of warming. who knows

    Hmm… interesting. I’ve taken a look and the MIROC models show an abrupt large cooling in all realisations at 1960 in both the Mexican and Australian cold spots (I haven’t checked the others). No obvious explanation. Could be a bug or some sort of input discontinuity.

    That might actually have some relevance for L&C2018. Their aerosol forcing update derives from Myhre et al. 2017, which reports on model simulations based on emissions estimates since 1990 and suggests a positive forcing change of about 0.1W/m2 between 1990 and 2015. The model which produces comfortably the largest positive change is SPRINTARS, which is basically MIROC, and about half the increase in that model involves a step change in cloud albedo effect between 2000 and 2005 which doesn’t make any physical sense.

  175. Christian says:

    Steven,

    “Now if we did the Dessler study with a different model and found the observational estimate in
    the top of the spread, what would we conclude?”

    Nothing – that was never the point. If the spread is also huge, it’s the same as before: such methods can produce values that are altered by internal variability, so it’s likely that an observation-based ECS is too small or too great. As Dessler said here, his paper doesn’t argue directly for a “cold” bias in ECS (other papers do that); what he shows is that the values can be altered by internal variability.

  176. HAS: “Victor, are you saying we should ignore information from the instrumental period when making inferences about the climate? I’m sure you aren’t and me neither, that’s the (what should be uncontroversial) point I’m making.”

    No, I am just saying that it is perfectly reasonable to get a three when throwing a d6 die. Just because it is below the average does not mean all the methods we used to estimate the average are wrong. As long as you are within the uncertainty, all is good.

    Throwing a seven would be exciting new information that could not be ignored.

    Throwing the dice many times and finding a statistically significant deviation would be interesting.

    We only have one instrumental dataset for the last century. We only have one three, we would need many or a seven.

    Had this simple statistical method of Lewis been more accurate, the chance of getting a “seven” would have been higher. But this simple method is not particularly accurate, and Climate Etc & Co. should stop ignoring all the other evidence on climate sensitivity if they want to be taken seriously.

    Many people have now tried to explain this quite simple idea to you in many very clear ways. I do not think that continuing this discussion makes much sense.

  177. Andrew E Dessler says:

    Note to Mosher: no ensemble study can tell you where the historical climate trajectory would fall within the ensemble. that’s really an ill-posed question. rather, the ensemble tests tell us that the methodology produces imprecise answers.

    other papers (e.g., Marvel et al., Zhou et al.) show using different methods that the existing surface pattern is causing energy balance methods to yield too low of an ECS.

    combine all of these results and you arrive at a reasonably robust conclusion that L&C’s ECS estimate (and others derived the same way) is biased low.

  178. Christian says:

    Victor,

    “But this simple method is not particularly accurate and Climate Etc & Co. should stop ignoring all the other evidence on the climate sensitivity if they want to be taken seriously.”

    Thanks for saying that, but the truth is that the “lukewarming” idea is based (for many) on belief. If you believe it, you start getting a confirmation bias, because you only look for evidence of your beliefs and ignore the rest. That’s why I only need to know who wrote a paper to guess what they write in it…

  179. Willard says:

    > it seems one has two options. Either constrain the ensemble members to the measured climate as per IPCC, or acknowledge the members individually don’t represent the actual climate and that they need to be averaged in some way before using them for that purpose.

    Finding where the IPCC does what you say it does would be one way to justify your “no” above, HAS. It might be important since it’s your only justification so far. I’m not sure that’s the case. For instance, the legend of Figure 12.36 reads

    Simulated changes in (a) atmospheric CO2 concentration and (b) global averaged surface temperature (°C) as calculated by the CMIP5 Earth System Models (ESMs) for the RCP8.5 scenario when CO2 emissions are prescribed to the ESMs as external forcing (blue). Also shown (b, in red) is the simulated warming from the same ESMs when directly forced by atmospheric CO2 concentration (a, red white line). Panels (c) and (d) show the range of CO2 concentrations and global average surface temperature change simulated by the Model for the Assessment of Greenhouse Gas-Induced Climate Change 6 (MAGICC6) simple climate model when emulating the CMIP3 models climate sensitivity range and the Coupled Climate Carbon Cycle Model Intercomparison Project (C4MIP) models carbon cycle feedbacks. The default line in (c) is identical to the one in (a).

    Source: http://ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_Chapter12_FINAL.pdf

    What is being excluded from that range seems to be the same kind of values that are excluded from the GCMs in the first place, i.e. those that would make little physical sense. Since it’s a range, your “acknowledge the members individually don’t represent the actual climate” makes little sense to me.

    To paraphrase your question to Dikran, have you read the AR5?

  180. Willard says:

    While waiting for HAS to come back (i.e. I have a handful of quotes showing the extent of his wrongness), a manual pingback:

    Nothing in the new Dessler et al. paper indicates that the Lewis & Curry energy-budget climate sensitivity estimates are likely to be biased low.

    https://judithcurry.com/2018/04/30/why-dessler-et-al-s-critique-of-energy-budget-climate-sensitivity-estimation-is-mistaken/

    I’m not sure how Nic can say that Andrew’s critique is mistaken based on not finding anything. This kind of move usually indicates some shift of the burden of proof. Let’s blame the “plain language summary.”

    Another round of ClimateBall ™ is awaiting us.

  181. JCH says:

    A new rebuttal at Climate Etc.

  182. Dave_Geologist says:

    JCH

    Trenberth: it either went into the oceans or was reflected back to space before it arrived at the surface.

    Pretty clear now I think that it went into the oceans. Which have duly burped it out again.

  183. Dave_Geologist says:

    HAS

    they should put aside those model runs that don’t replicate what’s actually happened

    That’s been done IIRC. But in a different context. Using clouds and El Nino. The ones that match observations warm more, again IIRC.

    At the risk of flogging a dead horse, that’s not the right thing to do in this situation. I know my die is a cube, and I have sound reasoning from basic physics to say it has numbers 1 through 6 (let’s say the rules of dice specify that you have to start with a 1, and it’s customary with dice neither to repeat nor to miss out integers). I’ve thrown a three. It would be totally wrong on the basis of that one throw to conclude that my die is a non-standard one with a 3 on each face.

  184. Dave_Geologist says:

    HAS

    Footnote: we aren’t suggesting selecting “the models that give the most similar results to the estimates of TCR/ECS based on observations”.

    Just as well, given the results of Marvel et al. 2018, which I think is more relevant to the thrust of your argument. The final sentence of their abstract:

    A climate model that shows strong warming in response to recent real‐world conditions does not necessarily have high long‐term sensitivity, and vice versa.

    IOW the ECS estimated from an individual model or model run, using energy-balance methods like LC18’s, varies above and below the actual value (which is known exactly for each model) during the course of the model run.

    So if we pick the models which give a low ECS estimate using today’s data, that will include models with high actual ECS and models with low actual ECS. IOW the models, based on their internal physics, tell us that no, the recent past is not a good guide to the future, and that the simplistic energy-balance approach is just too, well, simplistic. Fancifying it with ever-updated forcing distributions and performing clever statistical manipulations is just putting lipstick on a pig.

    Sometimes Mother Nature doesn’t cooperate, there isn’t a quick-and-easy solution, and you just have to do the hard yards.

  185. HAS says:

    dikranmarsupial a bit abrupt, should have added “to the matter in hand”. Do give some thought to the difference between what goes on in GCMs and the application of their output in other experiments.

    Victor, I am discussing Dessler et al, where they study the energy balance models, not GCMs. The latter are used to provide input and comparators.

    Willard it seems it was a mistake to use an example from another domain to demonstrate a principle. If you didn’t understand it best forget it.

    Dave, I now have looked at Marvel, but it doesn’t seem to reflect best practice in energy balance models. Short time periods and no attention to end periods.

  186. Joshua says:

    HAS –

    If you didn’t understand it best forget it.

    Perhaps a sub-optimal phrasing.

  187. nobodysknowledge says:

    I wonder how one can explain that model systematic biases have no effect on the estimates of sensitivity.

  188. Willard says:

    > it seems it was a mistake to use an example from another domain to demonstrate a principle.

    I disagree, HAS. It illustrated why we should not expect econometrists to have a reliable intuition regarding games.

    But this response is intriguing – have you missed my last comment? Here’s the gist of it:

    Finding where the IPCC does what you say it does would be one way to justify your “no” above, HAS. It might be important since it’s your only justification so far.

    There was also this question: have you read the AR5?

  189. HAS says:

    Interesting to note that, according to Lewis’ post, using the range of GCM outputs produces less variation than using the uncertainty from the energy balance model.

    My surmise was wrong. Notwithstanding that, the in-principle issue remains.

    Also the impact of the particular choice of start and end periods is briefly addressed, and is the source of some of the differences.

  190. Steven Mosher says:

    “Note to Mosher: no ensemble study can tell you where the historical climate trajectory would fall within the ensemble. that’s really an ill-posed question. rather, the ensemble tests tell us that the methodology produces imprecise answers.”

    Note to Andrew: I never suggested an ensemble study WOULD tell you where.

    My main questions are.

    1. What do similar 100-member ensemble tests of GCMs show you?
    2. What types of patterns did you identify in the 100-member ensemble?
    3. Do you have a method for identifying the pattern in the historical record?
    That is, we have a set of rules for identifying patterns such as El Nino;
    what is the set of rules for the pattern you identified in the historical observational record?
    (hint: with a case of 1, I’ll suggest you don’t mean pattern)

  191. Steven Mosher says:

    “other papers (e.g., Marvel et al., Zhou et al.) show using different methods that the existing surface pattern is causing energy balance methods to yield too low of an ECS.”

    I keep seeing these references to pattern with absolutely zero definition of what the “pattern” objectively is.

    I also see “patterns” in GCM output that frankly look non-physical, i.e. isolated grid cells that show massive cooling over 100-year periods.

    what would be great is to show that a GCM can actually come close (run it as many times as you like) to the “pattern” in the historical record. For example, in all 100 runs of the model you used, what were the patterns you identified? Pictures? Images?

    or is pattern the wrong word?

  192. Andrew E Dessler says:

    Here are some comments about Lewis’ comments on my comments: https://twitter.com/AndrewDessler/status/991148670367264768

  193. Lewis applies the “unsound science” accusation. Then all the others curry the argument.

    Agree with Andrew — in practical terms, Lewis’ stuff is impenetrable.

  194. Willard says:

    > Notwithstanding the in-principle issue remains.

    I’m glad you return to that point, HAS.

    Here’s another example that undermines your surmise that the IPCC “constrain the ensemble members to the measured climate”:

    Future climate is partly determined by the magnitude of future emissions of greenhouse gases, aerosols and other natural and man-made forcings. These forcings are external to the climate system, but modify how it behaves. Future climate is shaped by the Earth’s response to those forcings, along with internal variability inherent in the climate system. A range of assumptions about the magnitude and pace of future emissions helps scientists develop different emission scenarios, upon which climate model projections are based. Different climate models, meanwhile, provide alternative representations of the Earth’s response to those forcings, and of natural climate variability. Together, ensembles of models, simulating the response to a range of different scenarios, map out a range of possible futures, and help us understand their uncertainties.

    Click to access WG1AR5_Chapter12_FINAL.pdf

    This comes from the FAQ 12.1 | Why Are So Many Models and Scenarios Used to Project Climate Change?

    ***

    I’m not sure where you got the idea that the IPCC doesn’t “acknowledge the members individually don’t represent the actual climate.”

  195. HAS says:

    Joshua, my phrasing looks increasingly optimal from my POV.

  196. Hyperactive Hydrologist says:

    If the L&C method is robust, why the careful selection of start and end points? Also, the inconsistency between historic and present interval lengths – again, why do this? Why not use 30 or even 50 year intervals? My guess is that it would give higher values of TCR.

  197. HAS says:

    HH Section 4 of L&C 2018 gives quite an easy read for the reasoning behind these issues.

  198. Dave_Geologist says:

    HAS
    Submit a comment to the Journal then. If it has merit it will be published. Or replicate the work with the right end-points. There are dozens of models so there must be some with fully accessible data and implementable code.

    Otherwise I’ll have to park your criticism in the “hand-waving” box.

  199. Dave_Geologist says:

    what would be great is to show that a GCM can actually come close ( run it as many times as you like) to the “pattern” in the historical record. For example, in all 100 runs of the model you used

    Do you seriously think that’s physically possible SM?

    (Puts on mock-denier hat). But chaos! But butterflies in Mexico! But Lorenz!

  200. Dave_Geologist says:

    I’m not sure where you got the idea that the IPCC doesn’t “acknowledge the members individually don’t represent the actual climate.”

    Willard, I would presume HAS read something dishonest or inaccurate on a blog, and was suckered because he hasn’t read AR5. Hence your hanging unanswered question.

  201. Hyperactive Hydrologist says:

    HAS,
    I read the paper. It just seems like they are trying to justify their choice of periods based on internal variability. Why not use longer periods? This would reduce the impact of internal variability. Use a 50-year period for the present and test it against rolling 50-year periods between 1850 and 1950.

    Like I said, if the methodology were robust, you should not need to justify your time periods.

  202. angech says:

    Steven Mosher says:
    “what would be great is to show that a GCM can actually come close (run it as many times as you like) to the “pattern” in the historical record. For example, in all 100 runs of the model you used, what were the patterns you identified?”

    I presume you mean one of the actual GCMs that exist and are being commented on here, not a GCM in general, as one could easily be made up to fit the historical record.
    If so, none of the models can come close. The best one would be one that has an illegal parameter, i.e. one that assumes we completely stop all possible anthropogenic CO2 right now. As ATTP said, because of residence times and inbuilt ongoing warming presumptions [realities], it would still warm faster than observations.
    What would be far easier to do would be to run one of the existing models, with CO2 production as normal but ECS feedback responses of 1.6.
    Which would do the trick.
    Obviously.

  203. HAS “Do give some thought to the difference between what goes on in GCMs and the application of their output in other experiments.”

    I did, but you don’t appear to see that. Yes, the model runs give the input to the energy balance estimator of TCR/ECS, but what do you think causes the variance of the estimator?

  204. I should point out that some of my work in climate has been on statistical downscaling, where one of my interests was in predictive uncertainty (i.e. estimating how uncertain predictions from a model are likely to be) and part of that uncertainty arises from propagating the uncertainty from the model runs through to the downscaled variables. That is very similar to the uncertainty in TCR/ECS from an energy balance model that is due to the unforced variability in the “observations” (or equivalently model runs).

  205. angech says:

    HAS says: May 1, 2018 at 1:02 am
    “Also the impact of the particular choice of start and end periods is briefly addressed, and is the source of some of the differences.”
    Proof:

    Andrew Dessler “Let’s match the periods chosen in our model ensemble analysis as closely as possible to L&C. We can calculate ECS using these base periods: 1869-1882, 1995-2005.
    The resulting distribution has median = 3.01 K, 5-95% confidence interval: 2.59-3.56 K. Looks like an important uncertainty to me.”

    This is the highest possible ECS that can be calculated from any 10-year periods of observation from 1869-2017.
    The highest. No hint of ECS being any greater. An upper limit using observations only.
    Achieved by taking the base period and comparing it to a final period whose midpoint is the biggest El Nino in that time.

  206. Dave_Geologist says:

    nobodysknowledge

    I wonder how one can explain that model systematic biases have no effect on the estimates of sensitivity.

    Not sure if that’s a rhetorical question but I’ll take it as one. I’ll frame the answer in a way that may also work if it’s a “throwing-hands-in-the-air” question following previous attempts at explanation, or a misunderstanding. Note, I am not an expert and “you” is the generic you, “one” in formal English.

    The model sensitivity is a precisely defined, known property of the model. If you use model outputs as an in silico Earth and estimate ECS using energy-balance methods, you get a range of estimates which scatter around the true value. IOW the energy-balance method is not a reliable way to extract ECS, at least for a single planetary realisation and a finite time series. If you average across lots of model outputs, you get a distribution with the mean in the right place. In principle, model systematic bias (wrong ECS in the GCM, apples-to-oranges temperature comparison in the energy-balance model) need have no impact on the variance of the ECS estimate, although it will of course impact the mean. It would increase the variance if parameters driving variance scale with ECS, but there’s no a priori reason why that should be the case. It could be that the thing which gives the GCM a low ECS (for the sake of argument but IANAE, a particularly strong hydrological cycle acting through some interaction with clouds and aerosols) gives rise to more variance in the simulation output.

    In the sense that the model ECS is an emergent property of the model’s physics, the singular, known model ECS is affected by model bias. But that’s not a sensitivity estimate in the energy-balance sense (refer to previous paragraph). It’s an inherent property of the model, and GCM-based sensitivity estimates are derived from the statistics of a number of GCMs, each with its own, singular sensitivity. In that sense it’s no different from plotting the temperature projections of multiple CMIP5 models. Interesting in itself, but irrelevant to LC18. If you want to convince a true sceptic that ECS estimates from GCMs are wrong because of model systematic biases, the onus is on you either to identify the bias in the particular model physics or observations, or to demonstrate that this iteration of Planet Earth lies outside some agreed (P5-P95) expectation range from the model. Unless you’re a total physics denier, in which case the real world can rightfully ignore you, the null hypothesis should be “150 years of physics is right”, and it’s your counter-assertion that has to exceed the 95% confidence level.
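    That in-silico logic is easy to demonstrate. A toy sketch (all numbers illustrative, not from L&C or any GCM): give a synthetic planet a known ECS, perturb the warming with unforced variability, and the standard energy-budget estimate ECS = F_2x ΔT / (ΔF − ΔQ) scatters around the true value even though it is unbiased here by construction.

```python
import numpy as np

rng = np.random.default_rng(3)
F_2x = 3.7          # W/m2, forcing from doubled CO2
true_ecs = 3.0      # K, the toy planet's known sensitivity
dF, dQ = 2.5, 0.6   # W/m2, assumed forcing change and heat-uptake change

dT_forced = true_ecs * (dF - dQ) / F_2x          # warming consistent with the true ECS
dT = dT_forced + rng.normal(0.0, 0.08, 10_000)   # add unforced variability per realisation

ecs_est = F_2x * dT / (dF - dQ)                  # standard energy-budget estimate
print(ecs_est.mean())                            # ~3.0 on average: unbiased by construction
print(np.percentile(ecs_est, [5, 95]))           # but individual realisations scatter widely
```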

  207. Dave_Geologist says:

    angech

    Not a GCM in general as one could easily be made up to fit to the historical record.
    If so none of the models can come close.

    The second half of that para is so untrue it doesn’t deserve any further response. Nigel Lawson Today Programme levels of untruth. If there was anything meaningful in the rest of your post, don’t expect a response because I didn’t read further.

    And it wasn’t even true in 2008 (which was presumably what the former Chancellor was re-living), if you chose the correct null hypothesis to test the assertion.

  208. Dave_Geologist says:

    Achieved by taking the base and comparing to a region whose midpoint is the biggest El Nino in that time.

    Gosh, I wonder if energy-balance ECS estimates are vulnerable to errors due to internal variability. Who’da thunk it!

    It is slightly hilarious how the “but chaos”, “but uncertainty” crowd can cheerlead for ECS estimates which claim to cleverly eliminate the impact of chaos and uncertainty and reduce everything to a simple deterministic division.

  209. angech wrote “Not a GCM in general as one could easily be made up to fit to the historical record.”

    Citation required. I don’t think this is true, unless you just mean the general form of the record as it follows the broad shape of the changes in forcings, in which case “duh!”

    It would be much easier to make a meaningless curve-fitting model that fits the historical record, which indeed is what Loehle and Scafetta did for example (see the Cawley [i.e. me] et al. paper mentioned above).

  210. “This is the highest possible ECS that can be calculated from any 10 year periods of observation from 1869 -2017.”

    3.56 K would be O.K. then?

  211. “The best one would be one that has an illegal parameter, ie one that assumes we completely stop all possible anthropogenic CO2 right now.”

    angech, anthropogenic emissions are an input to climate models, not a parameter. If you don’t understand the difference between a prediction and a projection (demonstrated in the above comment), perhaps you ought to listen more and make assertions less.

  212. SM wrote “what would be great is to show that a GCM can actually come close (run it as many times as you like) to the “pattern” in the historical record. For example, in all 100 runs of the model you used, what were the patterns you identified? Pictures? Images?”

    This seems impossible to answer without specifying what you mean by “pattern” and what you mean by “close”. If you mean getting a GCM that reproduces the broad responses to changes in the forcings on multi-decadal/centennial scales, then that might be reasonable. However, as soon as you look at shorter timescales or regional/sub-regional spatial scales, the dependence on internal variability increases, and the models can’t be expected to reproduce that, only simulate it, which is all that is required for a Monte-Carlo simulation.

    I always find the parallel Earths a useful analogy. Imagine we had 100 parallel Earths with the same forcings (there are an infinite number of parallel Earths, so there will be some with similar forcings), but different initial conditions. Unless we know how similar the “patterns” are on those parallel Earths, how do we know how “close” we should expect the “patterns” in the models to be to those seen in the historical record?

    Unfortunately, without access to the observations from the parallel Earths, the GCMs are the best estimate we have of what those observations would look like. Of course that is not an ideal situation, but at the end of the day, we have to go with the best estimate we can actually have (and think about the limitations).

  213. HAS says:

    Dave, L&C have already done it (published with better end periods). On your second comment it is one thing for Willard to write rubbish in an attempt to generate more air time, but do you really want to sign up for that brigade?

    HH, did you not pick up on the problems with the nature of the forcings that mean some periods need to be avoided? You no doubt read the second sentence of section 4 that sets out the issue: “Longer periods reduce the effects of interannual and decadal internal variability, but shorter periods make it feasible to avoid major volcanism and a short final period provides a higher signal.”

    Provided the decision rules for selecting the periods are robust, the method will be robust. It happens all the time in science, we avoid the night time if we want to observe the sun.

    dikranmarsupial “what causes the variance of the estimate?” In simple terms our ability to measure it and the quality of the abstractions we have to use in the process. We have here two competing approaches for the latter, energy balance models and GCMs. In this case both papers are working within the paradigm of the former. In that paradigm it is a subset of the instrumental period that is used and L&C catalogue a whole series of causes for the variance, predominantly grounded in the instrumental record. GCMs are largely irrelevant until the end when a conversion from observed estimates to GCM estimates is performed.

    Dessler et al seek to test the energy balance model by feeding it non-experience based parameters, based instead on what in GCMs causes variance. My view is that given the energy balance paradigm this is only acceptable if the GCMs are constrained to the instrumental climate.

    As it turns out it appears that unconstrained GCMs produce less variance than just working within the energy balance paradigm.

    It is hard to comment on your second comment with the information to hand, but it doesn’t sound as though you had an alternative model of the downscaled system with independent measures of the parameters in it.

  214. paulski0 says:

    Hyperactive Hydrologist,

    The paper does test longer periods, e.g. 1980-2016 against 1850-1900, with a similar result.

  215. HAS wrote “dikranmarsupial “what causes the variance of the estimate?” In simple terms our ability to measure it and the quality of the abstractions we have to use in the process. We have here two competing approaches for the latter, energy balance models and GCMs. In this case both papers are working within the paradigm of the former. In that paradigm it is a subset of the instrumental period that is used and L&C catalogue a whole series of causes for the variance, predominantly grounded in the instrumental record. GCMs are largely irrelevant until the end when a conversion from observed estimates to GCM estimates is performed.”

    O.K. you have just unequivocally demonstrated that you don’t understand what I have been saying as some of the variance in the estimates also comes from the particular sample of data (i.e. the model run/observations) on which it is calculated. We can’t estimate this variance from a single observation, and we certainly can’t assume that it is zero. The only way we can estimate this variance is by comparison with the model spread, which is effectively what Dessler has done AFAICS.

  216. HAS,

    My view is that given the energy balance paradigm this is only acceptable if the GCMs are constrained to the instrumental climate.

    Except, I think the point is that if you do this, you can get an energy-balance estimate that is still far from the “true” value, and that the “true” value is a better estimator of long-term warming. Hence, assuming that the energy balance estimate is somehow likely to be close to the “true” value might be an assumption that turns out to be wrong.

    Let me stress something. I think some of the phrasing (my own included) may not have been optimal. The argument is not that the energy balance estimate is definitely going to be wrong, it’s more that one should be cautious of assuming that the best estimate from an energy balance approach is somehow likely to be close to the “true” sensitivity. Bear in mind that it is not only climate models that suggest that the ECS is likely to be higher than the energy balance estimate suggests, there are also other estimates (paleo) that suggest this too.

  217. We can’t estimate this variance from a single observation, and we certainly can’t assume that it is zero.

    This is an important point. I don’t think it is possible to estimate the impact of internal variability from a single observation. You need some kind of model. The models suggest that the impact could be such that energy balance estimates will not necessarily produce results that are close to the “true” sensitivity. I don’t think that what Nic is trying to do in the most recent Climate Etc. post is sufficient (although I don’t entirely follow it, so am not sure).

  218. HAS says:

    dikranmarsupial the quality of parameter estimation etc within the models had been swept up in my “quality of the abstractions we have to use in the process”.

    The point under discussion is that energy balance models have their set and GCMs a different set. There is no reason to import the second into the first, and if you are running an energy balance model, you play by energy balance model rules. For example, the energy balance models rely on estimates of the surface temperature in some shape or form, and the variance in that is reasonably well measured. On the other hand, parameter estimation and uncertainty is quite problematic in other areas, as the discussion about forcings and start and end dates attests.

  219. HAS “dikranmarsupial the quality of parameter estimation etc within the models had been swept up in my “quality of the abstractions we have to use in the process”. ”

    so you deliberately chose to only mention the source I was specifically bringing up implicitly in “quality of the abstractions”? That seems to me like deliberately fostering misunderstanding (i.e. trolling). The effects of internal variability are not abstractions, at least in the observations, and are arguably not abstractions in the models either (as they are emergent properties).

    “The point under discussion is that energy balance models have theirs and GCMs a different set.”

    No, the component of the variance due to internal variability in the sample of data is common to both.

  220. HAS,

    There is no reason to import the second into the first, and if you are running a energy balance model, you play by energy balance model rules.

    I don’t quite follow what you’re suggesting, but the key thing is whether or not you can use energy balance estimates to project future warming (or rather, whether or not such an estimate will be a good predictor of future warming). The suggestion is that an energy balance estimate might not return a result that is close to the system’s “true” sensitivity. That it returns a result that is close to the “true” sensitivity of a system that behaves like the assumptions used in an energy balance approach doesn’t necessarily tell us what we would like to know about the real system.

  221. HAS says:

    aTTP I’m still unclear about whether “true” sensitivity is an artifact of the real world, or an artifact of model world. The fact that the energy balance model approach has to apply an adjustment factor derived from GCMs to convert their output to a GCM consistent estimate makes me think it’s a GCM artifact and likely to be unknowable by empirical means. Hence my earlier comment about focusing on the next century.

    Having got into this discussion I’ve had it in the back of my mind to look at the paleo stuff.

    On the model issue it is important to remind oneself that there are multiple models for how the climate or aspects of it might behave on a variety of different timescales. None of them are right, just some of them are more useful in some applications. The energy balance model has a lot of attractions compared with GCMs in some applications, and helping them to do a better job is a worthy activity.

  222. HAS says:

    dikranmarsupial, you were obviously brought up in a different school. In mine, everything to do with simplifying the real world was referred to as an abstraction.

  223. BBD says:

    Having got into this discussion I’ve had it in the back of my mind to look at the paleo stuff.

    Nothing there to make lukewarmers happy at all.

  224. “In mine everything to do with simplifying the real world was referred to as an abstraction.”

    We were obviously brought up differently. I wouldn’t try to evade my interlocutor’s key issue by only mentioning it implicitly and lumping it together with other “abstractions” that were irrelevant to the point being made.

    Both GCM and energy budget estimates of ECS have a component of variance that is due to the particular sample of data on which they were evaluated, due to unforced variability. If that were not the case, we would get the same value from each model run, but we don’t.

  225. HAS,

    I’m still unclear about whether “true” sensitivity is an artifact of the real world, or an artifact of model world. The fact that the energy balance model approach has to apply an adjustment factor derived from GCMs to convert their output to a GCM consistent estimate makes me think it’s a GCM artifact and likely to be unknowable by empirical means.

    All this, in my view, is largely beside the point. The question is whether or not current energy balance estimates can be used to accurately project future warming. The answer would appear to be that one should be cautious of assuming that this is the case.

  226. I think that the problem discussed here is of a rather fundamental character. The so-called observational determinations of ECS are based on energy balance models implying that the radiative imbalance of the planet can be described as a linear function of the global mean temperature. However, as discussed in the paper by Dessler, Mauritsen and Stevens, such an assumption may introduce large errors because the imbalance is also a function of the global temperature pattern.

    Thus, if the global temperature pattern changes from the initial state of the planet when the radiative forcing began to change, the ECS value calculated from the simple energy balance models will be in error. It seems to me that the only way to calculate ECS values considering the changing temperature patterns is by using advanced climate models because such models also describe the changed temperature pattern. One may even question if a single ECS value is the best measure of the sensitivity of the planet to radiative forcing.

    See also these comments in ACP by me and the authors:

    Click to access acp-2017-1236-SC3.pdf

    Click to access acp-2017-1236-AC3.pdf
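    The linearity point can be boiled down to two numbers. A minimal sketch with invented feedback values: if the effective feedback while the warming pattern is still evolving is stronger (more negative) than the feedback once the pattern equilibrates, a single lambda inferred from the historical period implies too low an ECS.

```python
F_2x = 3.7               # W/m2, forcing from doubled CO2
lam_transient = -1.5     # W/m2/K, effective feedback during the transient pattern (assumed)
lam_equilibrium = -1.0   # W/m2/K, feedback once the pattern has equilibrated (assumed)

ecs_from_budget = -F_2x / lam_transient   # what a historical-period budget would imply
ecs_actual = -F_2x / lam_equilibrium      # the planet's actual equilibrium sensitivity

print(round(ecs_from_budget, 2))   # ~2.47 K: biased low
print(round(ecs_actual, 2))        # 3.7 K
```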

  227. “aTTP I’m still unclear about whether “true” sensitivity is an artifact of the real world, or an artifact of model world. The fact that the energy balance model approach has to apply an adjustment factor derived from GCMs to convert their output to a GCM consistent estimate makes me think it’s a GCM artifact and likely to be unknowable by empirical means.”

    The definition of TCR from Wikipedia:

    … transient climate response (TCR) which is defined as the average temperature response over a twenty-year period centered at CO2 doubling in a transient simulation with CO2 increasing at 1% per year (compounded), i.e., 60 – 80 years following initiation of the increase in CO2

    Now this is something we can directly calculate in a model as we can set up the forcings to follow this scenario and estimate it directly by measuring the temperature change in that particular 20 year period (note it still has a non-zero variance as you would get a numerically different result from each run). However we can attenuate this variance by averaging over many model runs (as the internal variability is not coherent) to estimate the “true” (rather than estimated) TCR of the model. In this sense, TCR is a model metric, a number that tells us some property of the model.

    It is not, however, as clear cut for the real world as we can’t set up the forcings to follow the scenario in the definition. We can however try and work out what TCR would be if we followed that scenario (multiple times and took the average) by trying to take account of the changes in forcings that have actually occurred during the period of observation. Thus it isn’t a climate metric (as we can’t measure it), but it is still a property of the climate system, and gives us an indication of what we might expect to see as the result of increases in the forcings.

    Note that if you had multiple parallel Earths with the same forcings, but different initial conditions, then the estimate of TCR would be numerically different for each one, but the “true” TCR is the same for all, as the physics of the climate system is the same for them all. We can find the “true” TCR by averaging the results from the parallel Earths. Unfortunately we can’t do that, as we only have access to our reality, but the thought experiment explains why treating the estimate of TCR from the observations as the “true” TCR is essentially assuming this variance is zero, which is physically implausible.

    That is my view of it anyway, see my previous caveat.
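    That definition is easy to make concrete in a toy model. A hedged sketch (one-box model C dT/dt = F − λT, with parameters chosen purely for illustration, not any published tuning): ramp CO2 at 1% per year so that doubling occurs near year 70, then take the 20-year mean warming centred there.

```python
import numpy as np

F_2x = 3.7    # W/m2, forcing from doubled CO2
lam = 1.2     # W/m2/K, net feedback parameter (so ECS = F_2x/lam ~ 3.1 K)
C = 30.0      # W yr m-2 K-1, effective heat capacity, chosen for illustration

years = np.arange(140)
F = F_2x * years * np.log(1.01) / np.log(2.0)   # 1%/yr CO2: doubling near year 70

T = np.zeros_like(F)
for t in range(1, len(F)):
    T[t] = T[t - 1] + (F[t] - lam * T[t - 1]) / C   # annual Euler step of C dT/dt = F - lam*T

tcr = T[60:81].mean()   # 20-year mean centred on the doubling year
print(round(tcr, 2))    # roughly 2 K with these assumed numbers, below the ~3.1 K ECS
```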

  228. Chubbs says:

    Below is the decadal average temperature change from a 1860-79 base period from the Otto et al. (2013) EBM paper. Starting in the 1980s, the climate models have done a reasonable job of predicting this ramp. For projecting the near future there isn’t a big difference between TCRs estimated with the L&C EBM and the climate models. My main take-away from L&C2018 is that forcing is increasing rapidly.

    1970s – 0.22
    1980s – 0.39
    1990s – 0.57
    2000s – 0.75
    2017 – 0.98

  229. Hyperactive Hydrologist says:

    I am highly sceptical about using a final period that coincides with a prolonged period of rapid ocean energy uptake. This has the potential to suppress surface temperatures, which, if you are using short time periods, could cause your ΔT to be biased low. Correct me if I am wrong, but is there not a lag in the temperature response to GHG forcing? Could this also cause a low bias in ΔT due to recent emissions while maximising ΔF?

    Also, how can you provide a quantitative analysis of the internal variability by only looking at ENSO and the AMO? Surely there are numerous other factors that impact internal variability, and every El Nino/La Nina is different.

    If I get a chance I will have a go at recreating the results and testing different period lengths tonight. Does anyone know if the L&C18 data is available?

  230. paulski0 says:

    Hyperactive Hydrologist,

    There’s a lag in surface response due to the inertia of the Earth system, but this should be reflected in the TOA imbalance/heat uptake rate estimates.

    ATTP, I think I have a couple of duplicate comments in auto-moderation.

  231. Dave_Geologist says:

    HAS

    Provided the decision rules for selecting the periods are robust, the method will be robust.

    But are they? Kinda hard to tell, according to Dessler: “Lots of adjustments here and there, lots of places in the analysis where ‘choices’ are made.” And he is an expert, whose judgement I’m inclined to go with. In part because of the lukewarmer track record of obvious cherrypicking. What goes around comes around.

    It is well matched with the 1995−2016 and 2007−2016 final periods as regards mean volcanic forcing as well as AMO and ENSO state

    Really? That’s it? Robust? That kinda demands some numbers and a statistical test, doesn’t it? Given his dislike of vagueness in pattern matching, I’m surprised SM isn’t all over that like a rash. Tell you what, I’ll say that in my opinion, it’s not robust. The match is poor, the forcings and AMO/ENSO states are poorly constrained, and anyway there are 25 other factors which also need to match. Now, according to the doctrine of equal time, you have to assign as much weight to my claim as to LC18. Actually, I’ll say it’s my strong opinion. Now you have to give me more weight.

    My view is that given the energy balance paradigm this is only acceptable if the GCMs are constrained to the instrumental climate.

    Your view is wrong, but that particular dead horse has been flogged so often now it’s barely a stain on the ground.

    On your second comment…

    Unclear if that refers to the “read-the-paper/read-AR5” trope, in which case the issue is perhaps better represented by dikran’s “No, I don’t think you do understand” comment than by not having read them.

    If it’s the “submit a Comment then” one, sorry, those are the Rules of Science (TM). If there’s something wrong with a published paper, no amount of blogology will impact the view of scientists, or of the scientifically-minded on this forum. Hand-waving is fine for rhetorical or polemical purposes, but expect to be treated by the conventions of rhetoric or polemics, not those of science.

  232. zebra says:

    dikran m,

    Note that if you had multiple parallel Earths with the same forcings, but different initial conditions, then the estimate of TCR would be numerically different for each one, but the “true” TCR is the same for all, as the physics of the climate system is the same for them all. We can find the “true” TCR by averaging the results from the parallel Earths. Unfortunately we can’t do that, as we only have access to our reality, but the thought experiment explains why treating the estimate of TCR from the observations as the “true” TCR is essentially assuming this variance is zero, which is physically implausible.

    This seems to be devolving into yet another “definitions debate”.

    First, it would help if you could explain what you mean by “parallel Earths…[with] different initial conditions.” What parameters make them “parallel”?

    But here you are using a lot of words to say you are defining “true” TCR as the average value derived from some sample. Why should anyone care, without knowing the distribution of TCR in the set?

    Which brings us back to “different initial conditions of parallel Earths.” ??

  233. zebra, “This seems to be devolving into yet another “definitions debate”.”

    HAS asked about the meaning of TCR, and I gave an explanation. A bit too early to say the discussion is devolving into a “definitions debate” at this point, don’t you think?

    “First, it would help if you could explain what you mean by “parallel Earths…[with] different initial conditions.” What parameters make them “parallel”?”

    If you are going to troll, at least make it subtle. The use of this in science fiction is common enough that someone ought to have heard of it even if they don’t like science fiction. All you need to do is google it to find out. It is pretty clear who wants this discussion to devolve into yet another “definitions debate”, and it isn’t me.

    “But here you are using a lot of words to say you are defining “true” TCR as the average value derived from some sample.”

    No, that is not what I am saying. The average value from MULTIPLE samples is a low-variance estimate of the true value. The true value is a property of the climate system that we want to estimate.

  234. zebra,

    First, it would help if you could explain what you mean by “parallel Earths…[with] different initial conditions.” What parameters make them “parallel”?

    Really? This doesn’t seem such a difficult thing to understand. Imagine a hypothetical scenario in which you could evolve multiple Earths with the same physics and the same arrangement of the continents, but in which you vary the details of the initial climate (pattern of sea surface temperatures, for example).

  235. There is also the “many worlds” interpretation of quantum theory, which is often the basis for the science-fiction usage.

    Yes Mr Einstein, but what do you mean by “elevator”? ;o)

  236. paulski0 says:

    Chubbs,

    I plugged a quick approximation of the consequences of L&C2018’s forcing revisions into a simple 2-box model tuned to L&C’s ECS and TCR calculation (and temperature change + heat uptake rate observations per L&C2018), and found that it produced a surface warming of about 3.8K by 2100 relative to pre-industrial under RCP8.5, compared with about 4.5K for the CMIP5 mean. Only about a 15-20% difference.

    This surprisingly small difference is partly due to stronger future forcing increases under the RCP8.5 emissions/concentrations scenario implied by those revisions. Also partly due to the fact that, while forcing keeps increasing strongly (as it does under RCP8.5), the rate of warming is more determined by the transient response than the ECS, which has a smaller difference from the CMIP5 mean.

    The proportional difference should be greater beyond 2100, and in scenarios such as RCP4.5 where forcing stabilises.
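    Just to make the shape of that calculation concrete, here is a minimal two-box sketch in Python. Every number in it (the ECS, the exchange and heat-capacity parameters, and the linear forcing ramp) is an illustrative placeholder, not the tuning described above and not the real RCP8.5 forcing series:

    ```python
    import numpy as np

    # Illustrative parameters only; NOT the tuning described above.
    F2X = 3.7              # W/m^2 forcing for doubled CO2 (canonical approximation)
    ECS = 1.7              # K; an energy-balance-style ECS (assumed)
    lam = F2X / ECS        # W m^-2 K^-1 net feedback parameter
    gamma = 0.7            # W m^-2 K^-1 surface/deep-ocean exchange (assumed)
    C_s, C_d = 8.0, 100.0  # W yr m^-2 K^-1 effective heat capacities (assumed)

    def run_two_box(forcing, dt=1.0):
        """Euler-integrate the two-box model over an annual forcing series (W/m^2)."""
        Ts, Td = 0.0, 0.0
        surface = []
        for F in forcing:
            dTs = (F - lam * Ts - gamma * (Ts - Td)) / C_s
            dTd = gamma * (Ts - Td) / C_d
            Ts, Td = Ts + dTs * dt, Td + dTd * dt
            surface.append(Ts)
        return np.array(surface)

    # Crude linear stand-in for an RCP8.5-like forcing ramp (an assumption,
    # not the actual RCP8.5 series):
    years = np.arange(1850, 2101)
    forcing = 8.5 * (years - 1850) / (2100 - 1850)
    print(f"Warming by 2100: {run_two_box(forcing)[-1]:.1f} K")
    ```

    The sketch is only structural: while forcing is still rising strongly, the warming rate is governed mainly by the transient terms (the heat capacities and exchange coefficient) rather than by the ECS.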

  237. zebra says:

    ATTP,

    Well, when I use the Parallel Earth approach, it is to explain the effect of different forcings– in fact I think that is one of the better ways to communicate with/educate “the public” on CO2. Identical Earths, different energy transfer.

    But here, you have continents the same, physics the same, forcings the same, but SST patterns are different. How did that happen? How do we arrive at those different initial conditions?

  238. zebra,

    How did that happen? How do we arrive at those different initial conditions?

    It’s rather beside the point. The point is that there are timescales over which the initial conditions matter. If you try to estimate something like climate sensitivity over such a timescale, the result may not be a good representation of the system’s actual sensitivity (i.e., how it would respond over longer timescales, when internal variability does not have such a large impact).

  239. But here, you have continents the same, physics the same, forcings the same, but SST patterns are different. How did that happen? How do we arrive at those different initial conditions?

    perhaps on one parallel Earth someone trod on a butterfly and so the tornado never happened?

    Lorenz made many of his discoveries about chaos theory from work on weather modelling, where IIRC he noticed that he got different results when he re-ran a model with the initial conditions only slightly rounded to n decimal places, rather than m d.p. (or something like that).

  240. Willard says:

    > it is one thing for Willard to write rubbish

    You’re too kind, HAS. It’s one thing to say that I write rubbish and another thing to show that I do. One simple way to show that I write rubbish would be to quote where in AR5 the IPCC constrains “the ensemble members to the measured climate” as you claim.

    As for the acknowledgement you claim is missing:

    IPCC assessments often show model averages as best estimates, but such averages can underestimate spatial variability, and more in general they neither represent any of the actual model states (Knutti et al., 2010a) nor do they necessarily represent the joint best estimate in a multivariate sense.

    Click to access WG1AR5_Chapter12_FINAL.pdf

    That’d be a strange thing to say if single runs represented the climate.

    Please don’t turn this thread into another rope-a-dope of talking points. You can do that at Judy’s.

  241. zebra says:

    dikran,

    “But here you are using a lot of words to say you are defining “true” TCR as the average value derived from some sample.”

    No, that is not what I am saying. The average value from MULTIPLE samples is a low-variance estimate of the true value. The true value is a property of the climate system that we want to estimate.

    The average value derived from some sample: “The average height of Norwegian males is 6ft, based on a sample of 1,000 individuals.” You really don’t understand that?

    So, could you just answer the question? What is the utility of the number you come up with?

    Explain what you mean by “a low-variance estimate of ” the [value of the property of the climate system that we want to estimate].

    If you are saying that the average is close to the value for the individual member of the sample we are interested in, then, as I said, you would have to know the variance of the samples,
    and it would have to be low, correct?

  242. “The average value derived from some sample: ‘The average height of Norwegian males is 6ft, based on a sample of 1,000 individuals.’ You really don’t understand that?”

    Zebra, of course I understand that. The point is that there is a difference between the true value of a population statistic and an ESTIMATE of that statistic given by the average of some finite sample.

    “So, could you just answer the question? What is the utility of the number you come up with? “

    It is an estimate of TCR, it gives you information on how GMSTs are likely to change in response to a change in the forcings, however it is important to realise that it is only an ESTIMATE of the TCR of the actual climate, and to have some appreciation of the uncertainties.

    “Explain what you mean by “a low-variance estimate of ” the [value of the property of the climate system that we want to estimate].”

    If you don’t know what “a low-variance estimate of” means, perhaps you shouldn’t be asking me ( a statistician*) whether I understand what an average is. I have already defined variance at least once on this thread, so go read it.

    “If you are saying that the average is close to the value for the individual member of the sample we are interested in, then, as I said, you would have to know the variance of the samples,
    and it would have to be low, correct?”

    No, you appear not to understand the difference between the variance of a sample and the standard error of the mean. Again, best not to try to be condescending about averages if you then follow that by demonstrating a complete lack of familiarity with STATS101 basics.

    * my field is machine learning, which is essentially the interface between statistics and computer science.

  243. verytallguy says:

    Either constrain the ensemble members to the measured climate as per IPCC…

    HAS, I think you’ve misunderstood how models are tuned.

    As I understand it, models are usually tuned to the *current* climate, not the hindcast.

    ie seasonal change, TOA balance etc.

    have a read here

    http://www.realclimate.org/index.php/archives/2016/10/tuning-in-to-climate-models/

  244. Hyperactive Hydrologist says:

    The IPCC doesn’t do any climate modelling. 🙂

  245. zebra says:

    dikran,

    I see you still aren’t answering my last question.

    Variance: “Informally, it measures how far a set of (random) numbers are spread out from their average value”. Stats101, courtesy Wikipedia.

    So, we can only assume that the average is close to your “true” value if the variance is low, correct?

  246. zebra,
    There’s a difference between precision and accuracy. As I understand it, something can be precise (very little scatter about an apparent mean) while still not being accurate (the mean of the sample is not actually an accurate representation of the “true” value).

  247. Willard says:

    Seems that even Nic can make Chewbacca roar:

    Your arguments are complete nonsense.

    https://judithcurry.com/2018/04/30/why-dessler-et-al-s-critique-of-energy-budget-climate-sensitivity-estimation-is-mistaken/#comment-871466

    All this because Nic can’t even accept that his “Andrew Dessler initially claimed that the [L18] energy budget methodology caused bias” was imprecise.

    Why clarify anything when we can play endless ClimateBall ™?

  248. zebra, look up “standard error of the mean”. The standard error of the mean is the square root of the variance of the sampling distribution of the mean of a sample. So less of the “stats 101” hubris if you please, especially if you go on to show once again that you wouldn’t pass stats 101.

    “So, we can only assume that the average is close to your “true” value if the variance is low, correct?”

    Yes, but the variance of what? A. the variance of the estimator (or equivalently the standard error of the mean), not the variance of the sample. Hint: the mean is itself a random variable.
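    To make that distinction concrete, a toy simulation (all numbers arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    sigma, n, trials = 2.0, 100, 10_000   # arbitrary illustrative values

    # Draw many samples of size n and take the mean of each one:
    means = rng.normal(0.0, sigma, size=(trials, n)).mean(axis=1)

    print(f"std of individual values (the sample spread): {sigma}")
    print(f"observed std of the sample means:             {means.std():.3f}")
    print(f"theoretical standard error, sigma/sqrt(n):    {sigma / np.sqrt(n):.3f}")
    ```

    The spread of individual values stays at sigma; the spread of the mean shrinks with sample size. Those are the two different variances being conflated.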

  249. zebra writes “I see you still aren’t answering my last question.”

    Zebra had asked “If you are saying that the average is close to the value for the individual member of the sample we are interested in, then, as I said, you would have to know the variance of the samples,
    and it would have to be low, correct?”

    My answer to the question started with the word “No”, how much more directly could I have answered it?

  250. angech says:

    Chubbs says:
    Below is the decade average temperature change from a 1860-79 base period from the Otto et. al (2013) EBM paper. Starting in the 1980s the climate models have done a reasonable job of predicting this ramp starting. For projecting the near future there isn’t a big difference between TCRs estimated with L&C EBM and the climate models. My main take-away from L&C2018 is that forcing is increasing rapidly.
    1970s – 0.22, 1980s – 0.39, 1990s – 0.57, 2000s – 0.75, 2017 – 0.98

    Interesting set of figures.
    A 20-year base from 1860, right?
    So we have a 0.17 C increase in the 1980s, 0.18 in the 90s, 0.18 in the 2000s, and 0.23 up till 2017.
    That leaves 0.22 C to be distributed over the preceding 100 years, correct?
    We were talking hockey sticks last post, but this seems to take the cake, yet at the same time seems too low.
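    Making the arithmetic explicit (just recomputing the increments from the figures Chubbs quoted):

    ```python
    # Decadal anomalies from a 1860-79 base, as quoted above (deg C):
    anoms = {"1970s": 0.22, "1980s": 0.39, "1990s": 0.57, "2000s": 0.75, "2017": 0.98}
    periods = list(anoms)
    for a, b in zip(periods, periods[1:]):
        print(f"{a} -> {b}: +{anoms[b] - anoms[a]:.2f} C")
    ```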

  251. BBD says:

    So less of the “stats 101” hubris if you please, especially if you go on to show once again that you wouldn’t pass stats 101.

    But he’s the smartest guy in the room, dikran.

  252. zebra says:

    dikran,

    There are only a couple of “things” in this discussion, so why don’t you stop with the silly jargon evasion and let’s get to the answer:

    You want to get a number of values of TCR, by using different initial conditions in the model. Call them TCRn. The “true” value TCRr would be that of our “real” Earth.

    Then you claim that the average of those values will be close to TCRr.

    For that to be true, the values of TCR1…n must not be spread out very much.

    Very simple, yes or no?

  253. paulski0 says:

    (Try this again without the link)

    Andrew Dessler,

    I think the key issue is that Lewis’ paper does make allowances for internal variability in surface temperature and heat uptake: sd of 0.08K and 0.045W/m2 respectively (based on an old GCM unforced control run), for both start and end periods. If those allowances are removed and the variability implications of Dessler et al. substituted, it apparently makes no clear difference.

    Part of the reason for this is that there are about a thousand uncertainties contributing to the final ECS range, most of which are taken to follow a normal distribution. So one more normal distribution uncertainty doesn’t make much difference.

    One issue with Lewis’ setup in this respect is that internal variability in surface temperature and heat uptake appear to be treated as independent variables. In fact, the text of his latest post suggests he believes they are anti-correlated, which is not correct: per Brown et al. 2014, periods of faster (slower) surface warming actually tend to correlate with higher (lower) TOA imbalances in models. I’m not sure how much difference it makes to treat them as dependent.
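    A quick Monte Carlo sketch of why the assumed dependence matters. Only the 0.08K and 0.045W/m2 allowances come from the discussion above; the central values and correlation magnitudes are made up for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    F2X, dT, dF, dQ = 3.7, 0.8, 2.5, 0.5   # illustrative central values (assumed)
    sd_T, sd_Q = 0.08, 0.045               # the variability allowances quoted above

    def ecs_std(rho, n=100_000):
        """Spread of ECS = F2X*dT/(dF-dQ) when the noise on temperature change
        and on heat uptake has correlation rho."""
        cov = [[sd_T**2, rho * sd_T * sd_Q],
               [rho * sd_T * sd_Q, sd_Q**2]]
        eps = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        ecs = F2X * (dT + eps[:, 0]) / (dF - (dQ + eps[:, 1]))
        return ecs.std()

    for rho in (-0.5, 0.0, 0.5):
        print(f"rho = {rho:+.1f}: ECS std ~ {ecs_std(rho):.3f} K")
    ```

    Positive correlation (faster warming going with a higher TOA imbalance) widens the resulting ECS spread relative to independence; assuming anti-correlation narrows it.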

  254. For that to be true, the values of TCR1…n must not be spread out very much.

    Very simple, yes or no?

    No, I would think. As Dikran has already pointed out, if you have a large enough sample, the error on the mean can be very small, even if the standard deviation of the sample is large.

  255. paulski0 says:

    angech,

    Should note that L&C only allow internal variability of about 0.1K over multi-decadal periods, and their results indicate responses to solar and volcanic forcing will be very small. If you believe in the results of L&C you necessarily must believe that the hockey stick accurately represents climate history.

  256. zebra says:

    ATTP,

    So, TCRr cannot be an outlier?

  257. Paul,
    Something that struck me is that Nic Lewis’s analysis seems to assume that, even if internal variability is influencing energy balance estimates, the result is more likely to be close to the “true” value than far from it (based on Andrew Dessler’s tweet showing the distribution of ECS values resulting from an LC18 approach). However, the LC18 result is clearly somewhat lower than other estimates and is hard to reconcile with the physics, which suggests that the ECS is probably above 2K rather than below it. Hence, even if the chance of an energy balance approach producing a result far from the “true” value is small, the chance that this is what has actually happened may be high.

  258. zebra,

    So, TCRr cannot be an outlier?

    If you could run the perfect experiment many times, then the chance that the mean value is far from the “true” value would be small.

  259. “There are only a couple of “things” in this discussion, so why don’t you stop with the silly jargon evasion and let’s get to the answer:”

    Sorry, I have better things to do than respond to this sort of trolling. I’ve already told you the answer is “no” and why.

  260. JCH says:

    Oh my gawd, they’ve erased the “putative” MDP (Medieval Dumb Period) and the “putative” LIA.

  261. “So, TCRr cannot be an outlier?”

    TCRr is a physical property of the climate system so by definition it can’t be an outlier. When we estimate TCR from the observational data, that isn’t TCRr itself, it is just an estimate of TCRr.

  262. zebra says:

    ATTP,

    No idea what you mean by “the perfect experiment”, nor what “far” means, nor what “small” chance means.

    Could you give an example and some numbers?

  263. zebra says:

    dikran,

    Makes no sense, once again. You have TCR1…n. Why can’t TCRr be an outlier in that set of data?

  264. I also posted my comment on Climate Etc.:
    https://judithcurry.com/2018/04/30/why-dessler-et-al-s-critique-of-energy-budget-climate-sensitivity-estimation-is-mistaken/#comment-871495

    Here is my reply to Lewis:

    Nic Lewis, it is interesting that you claim that I have not understood the problem properly, while Dessler et al. do agree entirely with my comment in ACP.

    Click to access acp-2017-1236-AC3.pdf

    We agree entirely with this comment. Our revised energy balance framework (Eq. 4 in our paper) is a “proof of concept” that demonstrates that it is possible to do a better job describing Earth’s energy balance than the conventional approach does. However, we don’t expect it to be the final answer and agree with the commenter that a version using several regional temperatures may be superior.

    In my comment in ACP, I have supported my arguments with equations that suggest that the radiative imbalance is a function of the global temperature pattern.

    Click to access acp-2017-1236-SC3.pdf

    Thus, assuming that the radiative imbalance is only a function of the global mean temperature must result in a systematic error unless the global temperature pattern does not change in response to the radiative forcing. However, it seems to me unlikely that the temperature pattern would remain unchanged, because the warming of the planet can reasonably be expected to produce complex changes across the many various climate zones.

  265. “Why can’t TCRr be an outlier in that set of data?”

    what part of “When we estimate TCR from the observational data, that isn’t TCRr itself, it is just an estimate of TCRr.” did you not understand?

    Say we have a data generating process that generates random numbers from a standard normal (Gaussian) distribution. The population mean for this process, mu, is zero. If we then generate a sample of data from this process and compute its sample average, mu’, its value will not generally be zero. TCRr is analogous to mu, it is a property of the climate system. When we estimate TCR from the observations or from model runs, that is analogous to mu’, it is an estimate of a system/population parameter, not the parameter itself. If we are very unlucky, our sample might include all negative numbers. In that case mu’ might well be an outlier, just as, if we are unlucky with the internal variability on the real Earth, the estimate of TCR we get from the observations may be an outlier.
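    The analogy is easy to simulate; a minimal sketch (the sample size and threshold are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    mu = 0.0    # the "true" population mean (analogue of TCRr)
    n = 20      # one realisation's worth of data (analogue of our one Earth)

    sample = rng.normal(mu, 1.0, size=n)
    mu_prime = sample.mean()   # the estimate (analogue of an observed TCR)
    print(f"true mu = {mu}, estimate mu' = {mu_prime:+.3f}")

    # Repeat the 'experiment' many times: occasionally an unlucky realisation
    # puts the estimate well away from the true value.
    estimates = rng.normal(mu, 1.0, size=(100_000, n)).mean(axis=1)
    print(f"estimates further than 0.5 from mu: {(np.abs(estimates) > 0.5).mean():.2%}")
    ```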

  266. JCH says:

    Back during the heyday of the warming hiatus Nic Lewis posted an article at Climate Etc. about Knutson’s Prospects for a prolonged slowdown in global warming in the early 21st century

    Oh happy day, the warming hiatus was going to last for more decades.

    In Knutson’s paper is this paragraph:

    So in one model run they got an almost 1 ℃ increase in 15 years. I pointed this out at CargoCult Etc. several times: stone-cold silence. Since then there has been a lot of warming.

    So I don’t think there will be almost 1 ℃ of warming from 2015 to 2030, but say that actually happens. What would that do to the observations-based estimate of ECS?

  267. I have tried to post my reply to Nic Lewis at Climate Etc. but didn’t succeed, while the posting worked fine here at ATTP.

  268. The Very Reverend Jebediah Hypotenuse says:


    Why can’t TCRr be an outlier in that set of data?

    Because it ain’t data. No one ever measured the “true” or “real” TCR.

    If science were that easy scientists would all be out of a job.

    TCR is inferred from data, not part of a data set.
    The best anyone can do is make estimates of TCR – using as much applicable data and as many independent methods as possible. Some data and some methods may be better than others. No one knows an “ideal” data set and the “true” methodology.

    There is no “real” TCR that we can measure to arbitrary precision using a single set of data.

  269. Hyperactive Hydrologist says:

    I’ve not been able to find the L&C18 data set, so I have used the AR5 radiative forcing data and CRU land and ocean global data for the period 1850 to 2011. I just want to test the impact of period length on the estimate of TCR. I evaluated the most recent 10, 20, 30 and 50 years for the present, compared with rolling historic periods of the same length in the interval between 1850 and 1950.

    I found an interesting result – the longer the period the higher the value of TCR, with means of 1.41K, 1.54K, 1.61K and 1.7K for 10, 20, 30 and 50 year periods respectively. Not sure what to make of this…
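    Roughly, the calculation was along these lines (a sketch only: it assumes annual global-mean `temp` and `forcing` arrays with a matching `years` axis have already been loaded, and none of the actual data handling is shown):

    ```python
    import numpy as np

    F2X = 3.7  # W/m^2 per CO2 doubling

    def tcr_estimates(temp, forcing, years, length):
        """Energy-balance TCR (F2X * dT/dF) for the most recent `length`-year
        period relative to every rolling historic period of the same length
        ending by 1950."""
        recent = slice(len(years) - length, len(years))
        T_now, F_now = temp[recent].mean(), forcing[recent].mean()
        out = []
        last = np.searchsorted(years, 1951) - length + 1   # periods ending by 1950
        for start in range(last):
            hist = slice(start, start + length)
            dT = T_now - temp[hist].mean()
            dF = F_now - forcing[hist].mean()
            out.append(F2X * dT / dF)
        return np.array(out)

    # e.g. for length in (10, 20, 30, 50):
    #          print(length, tcr_estimates(temp, forcing, years, length).mean())
    ```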

  270. Verytallguy,

    Thanks, your link is to my original comment at Climate Etc. that I have also posted here. But Nic Lewis replied (and ATTP defended me, thanks!):
    https://judithcurry.com/2018/04/30/why-dessler-et-al-s-critique-of-energy-budget-climate-sensitivity-estimation-is-mistaken/#comment-871499

    My reply to Nic Lewis’ reply I couldn’t post at Climate Etc., but I succeeded in posting it here.

  271. zebra says:

    Reverend J,

    Take it up with dikran. He says there is a “true” TCR. You get it by averaging the numbers generated by models with different initial conditions.

    So the data is the set of numbers TCR1…n. When you average them, you get a number.

    I don’t know why you think there would be “measurement” in the physical sense?

  272. Zebra,

    No idea what you mean by “the perfect experiment”, nor what “far” means, nor what “small” chance means.

    Could you give an example and some numbers?

    Wow, it’s virtually impossible to have a discussion with you. I’ve also rather lost track of what is being discussed. All that I was suggesting is that if you had, for example, a model that properly represented a chaotic/complex system, a single model run may not produce a result that properly represents the typical value of some system parameter. You could then repeat this so as to produce a large sample, the mean of which would be expected to be close to the system parameter that you’re trying to determine.

  273. Willard says:

    > I can see your comment there

    If it was your first comment at Judy’s, Pehr, it automatically goes into the Pending comments. Judy would need to release it.

    ***

    Meanwhile, beyond the realms of ClimateBall:

  274. The Very Reverend Jebediah Hypotenuse says:


    Reverend J,
    Take it up with dikran.
    He says there is a “true” TCR.

    Thanks for dropping by, zebra. Next time, you might want to try reading for comprehension.

  275. Dave_Geologist says:

    HH

    The IPCC doesn’t do any climate modelling

    And yet HAS would have us believe he’s read AR5. Perhaps just not for comprehension? Although in fairness 😉 , he’s dodged the question and tried to cast it back rather than admit he hasn’t or directly claim he has.

  276. zebra wrote “Take it up with dikran. He says there is a “true” TCR. You get it by averaging the numbers generated by models with different initial conditions. ”

    That is just dishonest. I did not say that, I said that there is a “true” TCR (a property of the climate system) but you can get a low variance estimate of it by averaging the numbers generated by the models.

    What TVRJH wrote is completely consistent with what I was saying.

  277. Willard,

    My first comment today on Climate Etc. was not my first comment ever, and it was posted directly. But when I tried to post my reply to Nic Lewis’ reply, I couldn’t get the posting button to respond, so I posted it here. I will try again later to post it at Judy’s; perhaps the line from here to there was overloaded.

  278. Dave_Geologist says:

    We were talking hockey sticks last post but this seems to take the cake yet at the same time too low.

    Pretty much, angech. I thought everyone knew the big acceleration in temperatures was in the last few decades: partly due to faster-than-exponential CO2 emissions, partly to aerosols suppressing CO2-induced warming until the acid-rain cleanup measures of the 1970s and 1980s.

    Where have you been this century?

  279. Is the climate system chaotic at all scales? What’s exciting about recent climate science progress is the reappraisal of long time-scale phenomena that were previously thought to be chaotic. This will be huge in pinning down, and then compensating for, natural variability.

  280. zebra says:

    ATTP,

    We’re not discussing model runs, if by that you mean multiple runs of the identical model with identical initial conditions.

    We’re discussing results where the initial conditions are different.

  281. Dave_Geologist says:

    paulskio

    L&C only allow internal variability of about 0.1K

    I take it that’s another of those poorly documented decisions, defended I see with a hand-wave that changing it makes no difference. But no demonstration to that effect.

    Is it just me, or does anyone else think that setting internal variability to a small value, less than inter-annual variation, effectively makes everything dependent on forcing and eliminates the internal-variability confounders reported by Dessler et al. and Marvel et al? How very convenient.

    And as you say, it implies that the hockey stick is real and the uncertainty monster is dead. How ironic that the blogosphere is full of people who trumpet LC18 one week and shout “chaos” or “internal variability” the next. Talk about having your cake and eating it!

  282. zebra wrote “We’re not discussing model runs, if by that you mean multiple runs of the identical model with identical initial conditions.”

    That would be rather silly (as the model, at least in principle, would give exactly the same results each time), so common sense should tell you that is not what ATTP meant. You have just demonstrated that not only would you fail STATS 101, but CLIMATE MODELLING 101 as well.

  283. Dave_Geologist says:

    JCH

    So I don’t think there will be almost 1 ℃ of warming from 2015 to 2030, but say that actually happens. What would that do the observations-based estimate of ECS?

    The cynic in me says nothing because they’ll disappear from the literature. Mainstream science always knew it’s a crap way to assess global warming in-the-pipeline. Non-mainstream science will look for something else that gives them the answer they want for a decade or two, then move on again.

  284. zebra says:

    dikran,

    ” We can find the “true” TCR by averaging the results from the parallel Earths”

    Your words, not mine.

  285. zebra, O.K., I missed out the “infinite number of parallel Earths” bit. However, I did also say that it was a low-variance estimator (the variance being zero for an infinite sample*), and I have repeatedly said that TCRr is a property of the climate system and we can only estimate it, so there is no excuse for your misrepresentation by quote mining.

    * if we have parallel Earths in a thought experiment, we can have an infinite number of them as well. Of course we can do neither in the real world, which is why we make do with a finite number of GCMs instead, but the principle (Monte Carlo simulation) is the same for both.

  286. The Very Reverend Jebediah Hypotenuse says:

    So – If we only had a bunch of parallel Earths we could possibly determine the “true” TCR of our One True Earth, zebra.

    But we don’t.

    Maybe that’s why dikran’s “true” is in scare quotes.

    Like “true” is here:
    https://andthentheresphysics.wordpress.com/2018/04/27/lewis-and-curry-again/#comment-117795
    and here:
    https://andthentheresphysics.wordpress.com/2018/04/27/lewis-and-curry-again/#comment-117796

  287. Of course we can’t find the “true” TCR even with an infinite ensemble of GCMs, because unlike parallel Earths they don’t have infinite spatial and temporal resolution and the climate physics isn’t exactly correct. However, an infinite ensemble of parallel Earths is a good thought experiment for what a perfect climate model would tell us.

  288. verytallguy says:

    Zebra, you seem to be playing gotcha with experts in their field.

    Could I suggest you take the opportunity to learn, rather than being a [redacted]

  289. ordvic says:

    Willard, thanks again for the link. Now I know where Mosher went. Interesting discussion here; I’ve been here before once or twice, but I think I started to lose interest in Climate Science for a while. Seems to be no reply button here?

  290. ordvic says:

    Hmmm, just wondering, does this mean the IPCC models are wrong? Are they biased low? We must be in worse shape than we thought? Or imprecise? (on the low side lol)

  291. Willard says:

    > Seems to be no reply button here?

    No nested threading here, Ordvic. Everyone talks at the same time. Listening is facultative.

    Quote what you’re responding to.

    If you need edits, ask.

  292. ordvic says:

    Funny thing (my education, or lack thereof): the way ATTP explained the formula in his missive makes it much easier to understand. Thanks ATTP. Thanks for the newbie tips, W.

  293. JCH says:

    HH, don’t use html for images. Use an image’s web address that ends with jpg or png and just paste it bare naked.

    1st decade of the 21st century versus the unfolding 2nd decade. Little purple trend at the end is the trend of 2018 so far, which is almost straight up. Nick Stokes thinks the GISS anomaly for April could be the highest monthly anomaly in the last 12 months, ~.94 ℃. Heatwave: ongoing; an El Niño-free 2018 is a contender for 3rd warmest year; and, if an El Niño starts at the end of the 3rd 1/4 and the PDO assists, it will be a photo finish for 2nd warmest.

  294. izen says:

    I think the adoption of TCR/ECS, however estimated, as an input to policy is based more on the fact it is a determinable single metric, less on any genuine utility.

    Not to deny some value in a single figure for the global average rise in temperature, but I think it gets used, and abused, because it can be calculated rather than a strong indicator of impacts.

    SM pointed to one flaw in questioning what PATTERN of warming gives rise to different end states, and the emergent TCR/ECS those patterns in modulz, or observation, generate.
    I agree with Willard’s suggestion (I think?) that further disputing which method produces the most accurate estimate is at best an Angel:pin discussion, more often another bout of Climateball(TM).

    Consider a parallel Earth, or Alternate Universe, in which those who insist that back-radiation cannot exist, therefore CO2 is not a GHG, are right. In fact it blocks energy from reaching the surface, and speeds its exit. (No back-radiation; Maxwell’s demon ensures all the photons head out!)
    The Ice-Age scare of the 1970s would be real. Temperatures would have fallen in an inversion of the hokey stick. Perhaps there would be a dispute over how much was solar, how much increasing aerosols (including CO2). Estimates of how much cooling might be made from a specific increase in negative forcing, or quantity of Joules lost.

    But a result claiming that London might cool by 2.5 C, matching Stockholm(?) would not be much use if it omits that it will be under half a mile of ice in a century.
    The local impacts of falling below freezing are rather more profound than a temperature rise from 15 C to 17 C. Although the regions where the transition is from below zero are likely to show impacts that any estimate of TCR/ECS is incapable of capturing.

    I am unable to find words that can describe the experience of watching the exchange in which zebra contested statistics with dikran.
    https://izenmeme.files.wordpress.com/2018/05/dk_2.gif?w=720&zoom=2

  295. HAS says:

    dikranmarsupial, way back when, on abstractions: up until that comment you were preoccupied with the variance that emerged from GCM runs. I was simply saying that this source of variance is not relevant to energy balance models. They have their own sources of variance (from observations etc.). I had regarded the observation that there will be uncertainty, and thus variance, in any abstraction process as obvious, and therefore it didn’t occur to me that this was the point you were trying to make.

    Given that we both agree on the principle, and you are now distinguishing between the sources of variance in the two different models, it looks as though we are at one.

    aTTP “The question is whether or not current energy balance estimates can be used to accurately project future warming. The answer would appear to be that one should be cautious of assuming that this is the case.” And in the interest of balance one would be forced to say the same about GCMs.

    The reason the status of climate sensitivity measures is interesting is that if they are GCM artifacts then it is reasonable to assess them against GCMs; but if we are (as you say, and I agree) concerned about the next century’s temperature etc., then that is the test, and both line up at the starting gate together. The first test should be their performance out of sample.

    It seems however that there is a significant body of published literature that tests other approaches against GCMs as the gold standard. In this case Dessler et al. import variance from GCMs into an energy balance model, when the latter instead uses other sources of uncertainty. My point is that this is improper unless the models are being constrained to the known instrumental-period climate in some way.

    Pehr Björnbom “such an assumption may introduce great errors because the imbalance is also a function of the global temperature pattern.” The thought experiment to do is to ask what the energy balance model system would have produced by way of an estimate had another one of the model runs been reality, and all the observational estimates and uncertainty been done with that new reality. The L&C work and Lewis’ more recent comments are suggestive that the energy balance methods are reasonably robust to this possible eventuality (noting however that in one sense the thought experiment doesn’t matter, because it didn’t happen).

    dikranmarsupial in your comment re TCR in response to aTTP. An empiricist would say the “true” TCR is the one we experience. Modellers might consider that there is a different “true” TCR in all possible worlds, but as you note that is unknowable outside model world. Your inference that “treating the estimate of TCR from the observations as the ‘true’ TCR is essentially assuming this variance is zero, which is physically implausible.” only applies if you are trying to estimate model “true” TCR. As someone noted in the real world we’ve opened Schrödinger’s box and uncertainty has collapsed.

    This is not to say (as I’ve said above) that empirical TCR doesn’t have its own sources of uncertainty.

    Dave “Your view is wrong, but that particular dead horse has been flogged so often now it’s barely a stain on the ground.” Humour me: read the above and see if it helps you understand the issue.

    My subsequent comment referred to your AR5 comment. You quote something Willard made up and you now tell me I don’t understand.

    VTG “As I understand it, models are usually tuned to the *current* climate, not the hindcast.” Aspects are tuned to the performance of the climate over the instrumental period, as RealClimate says; the Mauritsen 2012 paper referenced there is more explicit. But note I was just making the point by way of analogy that constraining GCM output to observations isn’t novel, nor necessarily a violation of their use.

    HH the IPCC has models prepared for it to certain standards, see https://www.wcrp-climate.org/wgcm-cmip/wgcm-cmip6 for the current run up.

    paulski0 “the key issue is that Lewis’ paper does make allowances for internal variability in surface temperature and heat uptake” since you quote the source of these errors as model runs, I assume they are the variability in GCM world. In energy budget world the equivalent is derived from those models. Lewis says more generally if he substitutes model variability for energy budget world allowances he gets a more constrained result.

    dikranmarsupial “TCRr is a physical property of the climate system..” I think for the reasons stated it is a property of GCMs. It’s effectively unknowable from observations, apart from giving an instance that should be accommodated if GCMs are to be regarded as physical. Very weak test.

  296. izen says:

    fluffed the punchline, or messed up the html…

  297. The Very Reverend Jebediah Hypotenuse says:

    izen,
    That’s not right.
    Dunning-Krugers are always measured in banana equivalent dose per microfortnight.

  298. Willard says:

    > You quote something Willard made up

    I did not “make up” anything, HAS. It’s straight from AR5. Now, I suggest that your next comment substantiates this claim of yours:

    [T]o help it seems one has two options. Either constrain the ensemble members to the measured climate as per IPCC, or acknowledge the members individually don’t represent the actual climate and that they need to be averaged in some way before using them for that purpose.

    Everything else won’t cut it.

    This claim is a tad stronger than your “I was just making the point by way of analogy that constraining GCM output to observations isn’t novel and necessarily a violation of their use.”

    Meanwhile, please take a look at this quote from a paper on the page you just cited:

    Large ensembles of AMIP simulations are encouraged as they can help to improve the signal-to-noise ratio (Li et al., 2015).

    Click to access gmd-9-1937-2016.pdf

    Best of luck.

  299. HAS, there are ways to analytically solve a GCM for an important but restrictive set of boundary conditions.

  300. HAS says:

    Paul Pukite, thank-you, yes, I can imagine that. I don’t recall seeing any specific papers but haven’t had a look. I have a day job 🙂 A quick look at Google Scholar doesn’t immediately throw up any obvious candidates, so any suggestions not pay-walled and preferably a decent recent review paper or the like?

  301. HAS wrote ” Up until before this comment you were preoccupied with variance that emerged from GCMs runs. I was simply saying that this source of variance was not relevant to energy balance models. They have their own sources of variance (from observations etc).”

    The issue is exactly the same for both the GCMs and the observational estimates; indeed the variance affecting the GCMs is the best estimate we have of the magnitude of the variance affecting the observational estimates. There is no good reason to think that the observational estimate of TCR is any closer to the “true” TCR than the TCR estimate from any individual GCM run is to the true TCR of the model. That is the point.

  302. “dikranmarsupial in your comment re TCR in response to aTTP. An empiricist would say the “true” TCR is the one we experience.”

    Only an empiricist that didn’t understand statistics very well. There is a difference between a population parameter and a sample estimate. The future depends on the value of the population parameter, so the sample estimate is only useful if we have reason to believe it has low variance (and preferably is unbiased).

    “Modellers might consider that there is a different “true” TCR in all possible worlds, but as you note that is unknowable outside model world. Your inference that “treating the estimate of TCR from the observations as the ‘true’ TCR is essentially assuming this variance is zero, which is physically implausible.” only applies if you are trying to estimate model “true” TCR. As someone noted in the real world we’ve opened Schrödinger’s box and uncertainty has collapsed.”

    No, that is a bit like saying the average value you get from a six-sided die is 5 because you rolled a 5. Again you are failing to grasp the difference between a physical parameter of the climate system and an estimate of that parameter from a particular sample of data. They are not the same thing. This applies equally to models and observations.

    “dikranmarsupial “TCRr is a physical property of the climate system..” I think for the reasons stated it is a property of GCMs. It’s effectively unknowable from observations, apart from the giving an instance that should be accommodated if GCMs are to be regarded as physical. Very weak test.”

    Sorry, that is nonsense. The mass of a star is a physical property of that star. If the star is beyond the edge of the visible universe, does that mean it is no longer a physical property of that star just because we can’t observe it? No, of course it doesn’t!

  303. HAS says:

    dikranmarsupial I beg to differ. Observations are the best estimate we have of the magnitude of the variance affecting the observational estimates. GCMs corrupt that by introducing all their own limitations into the estimates. Perhaps think about it from an information flow perspective – what the GCM introduces by way of new information is going to be dominated by GCM stuff rather than observational stuff, and we usually discount the former because it isn’t what we are measuring.

    I agree with your penultimate sentence. In fact it is likely that the GCM run will do better because the information in it was party to the production of the “true” TCR (I suspect), whereas the observational estimate was just a casual bystander.

  304. “Observations are the best estimate we have of the magnitude of the variance affecting the observational estimates.”

    this is obvious nonsense, we only have one realization of the unforced response that gives rise to the variance, so it provides little indication of the variance. It is a bit like saying rolling a 5 gives us the best indication of the variance of future dice rolls.

    “what the GCM introduces by way of new information is going to be dominated by GCM stuff rather than observational stuff, and we usually discount the former because it isn’t what we are measuring.”

    That is what is known as an “opinion” and you have given no evidence whatsoever to support it.

  305. HAS,
    I think there are two somewhat different things here. One is how much variability there is in the actual observational record. We can try to estimate that and can try to take it into account (as LC18 does). The next thing is whether or not, having done so, the result is a record that reasonably represents the forced response. The answer to that appears to be “possibly not”. In other words, the internal variability that we’ve actually experienced may have produced an observational record that is not a good representation of what we would experience on much longer timescales. So, we can try to correct for the variability we’ve actually experienced, but that does not guarantee that the result is somehow representative of what we would experience on much longer timescales.

  306. angech says:

    ATTP, I don’t think it is good enough to just say Lewis and Curry, again and then disparage their result.

  307. HAS says:

    aTTP, yes. I think an alternative (more mainstream?) way to say it is that, as you move out of sample, your estimate deteriorates. This explains why robust out-of-sample testing is so critical: the utility of a lot of these models depends on their performance out of sample.

    But to take it right back to the beginning: if evaluating an energy balance model estimate within the instrumental period, inserting unconstrained GCM variances into the process is inappropriate.

  308. HAS says:

    dikranmarsupial, the point about observations being the best estimate of the observational variance is I think a tautology. As I said, think of it in information terms. If there is some source of information that is going to give a better estimate, where did it come from, if not observations? Assumptions and theoretical constructs can go into estimates, but if they aren’t grounded in observations they don’t add anything. We could dive off into a deep philosophical discussion at this point, but rest assured it isn’t a nonsense assertion.

    On your second point what I said is an assertion, but run your eye along the various absolute temperature estimates the GCMs produce (after tuning) for the instrumental period. Compare that with what you get from observations. What is the GCM adding?

  309. verytallguy says:

    Dikran nails it

    There is no good reason to think that the observational estimate of TCR is any closer to the “true” TCR than the TCR estimate from any individual GCM run is to the true TCR of the model. That is the point.

  310. “I think an alternative (more mainstream?) way to say it is that, as you move out of sample, your estimate deteriorates.”

    Unless, of course, your estimate happened to be the correct value of the true TCR, in which case it wouldn’t deteriorate (future estimates would vary from it only because of the variance due to the unforced response).

  311. HAS wrote “dikranmarsupial the point about observations being the best estimate of the observational variance is I think a tautology. “

    only if you don’t understand statistical estimation, but I can’t be bothered to explain it anymore to someone who is simply not listening.

  312. angech says:

    If observations give an ECS figure this is an important outcome.
    Firstly it is in the ballpark range.
    If we only have one set of observations and one rule for working on them then it is the correct result. It is the only one we have.
    If we have different ranges of observations we can repeat at different intervals and establish a range.
    When we have enough length of time [AD] we can eventually say this is the ECS for this sort of system.
    Maps are not the territory. When we do models, we make a much-simplified model of the territory.
    When we do multiple runs we are no longer determining properties of the territory; we are determining the properties of the model.
    When we achieve a result, ECS, we are merely obtaining the ECS that that model is programmed to have. Unlike a die with a choice of sides, the poor model is condemned to throw the one result for infinity. ECS of 3.0 in, ECS of 3.0 out.

  313. HAS says:

    dikranmarsupial, my responses above might be a bit garbled because I didn’t refresh my browser while working on other stuff, so I missed your comment at 6.44. I probably should declare that I do have graduate-level statistics: postgraduate studies in aspects of time series analysis, and I worked for 7 years in a field that involved the application of some of those skills. I note also that it isn’t my primary area of qualification or experience, but I think you can relax over trying to teach me the basics. They couldn’t do it when I was a youth, so I don’t like your chances now.

    I’m trying to frame the issue in a way that lets you see what I’m talking about. I’ll try a different way. In your own terms, the population is what actually occurred in nature over the instrumental period, nothing to do with what GCMs might suggest could have happened over the period. We want to measure the evolution of the temperature, and we sample it in a variety of unsatisfactory ways. But the estimate of the variance is based on the observations and how they were done, not on what might have been on another Earth, if it is an empirical estimate of what actually happened on Earth. If you want to estimate what might happen as we move out of sample, then a whole variety of other factors can be brought into play.

    In the case of your second comment, what I’m saying (simplifying and analogy alert, Willard) is that the actual value you got from a six-sided die is 5 because you rolled a 5. Note the change in tense.

    In the case of your third comment, the point is that there is a difference between a property that can in theory be verified empirically (the mass of a star) and a property that only exists in models and can’t be verified empirically (or not with the Earth’s history so far, as I understand it: i.e., a stable climate, whacked with a great big forcing, with no change until stability is established again). Only in model world does that exist.

  314. HAS “but I think you can relax over trying to teach me the basics. They couldn’t do it when I was a youth, so I don’t like your chances now.”

    Yes, well that much is evident, and says more about you than it does about me.

    ” In your own terms the population is what actually occurred in nature over the instrumental period, “

    no, it isn’t, as I have explained repeatedly, but you have made it perfectly clear that you are not listening, so I won’t bother explaining it again.

  315. HAS says:

    dikranmarsupial “Unless, of course, your estimate happened to be the correct value of the true TCR,” In which case your model validates out of sample and you have more confidence in it.

    dikranmarsupial “only if you don’t understand statistical estimation”. But statistical estimation doesn’t create empirical information, so where is it coming from to improve the measure of variance?

  316. HAS says:

    dikranmarsupial, got work to do, and I’m not sure that our squabbling is adding value, but in respect of your last comment: if you are interested in what happened to the temperature over the instrumental period, what did they teach you in statistics the population under study is?

  317. HAS,
    But the goal here is not so much to understand what happened over the instrumental period (we kind of know that); it’s to understand whether we can use this to aid projections of what could happen in the future. The answer appears to be that one should be cautious of assuming that the instrumental period is a good indicator of what could happen in future.

  318. ATTP, I don’t think it is good enough to just say Lewis and Curry, again and then disparage their result.

    A bit sensitive, maybe?

  319. HAS says:

    aTTP yes, but your last sentence is conditional (in this case). Only if you feel that GCMs are better at short-run projections than simpler/alternative models. That’s why I said, a long, long time ago up there, that greater effort needs to go into thinking about the range of models we have at our potential disposal. I’ve had enough experience of complex econometric models that attempt to build from deep granularity to forecast future GDP on a multi-decadal scale, for example, to know the weaknesses compared with models that identify the major influencers on the timescales in question and work from there.

    In my view it is quite possible that energy balance done well is part of the solution rather than part of the problem.

  320. HAS wrote “In my view it is quite possible that energy balance done well is part of the solution rather than part of the problem.”

    Nobody is saying it (done well) is part of the problem.

  321. angech wrote “If we only have one set of observations and one rule for working on them then it is the correct result. It is the only one we have.”

    If we have only one means of estimating the mass of an exoplanet, does that mean that our estimate is the correct result (i.e. the actual mass of the planet)? No, of course not. Angech falls into the same basic error as HAS.

  322. angech wrote “ATTP, I don’t think it is good enough to just say Lewis and Curry, again and then disparage their result. ”

    What matters is whether the criticism is valid. In science we (try to) learn not to take criticism of our theories/hypotheses personally, but to evaluate it dispassionately to avoid introducing our cognitive biases into our work. Good scientists welcome constructive criticism.

  323. HAS,
    No one is claiming that energy balance estimates are not worth considering. All that’s being suggested is that one should be cautious of assuming that they are somehow the best estimates, especially given all the other estimates that suggest that ECS could be higher than suggested by energy balance estimates.

  324. Dave_Geologist says:

    HAS

    And in the interest of balance one would be forced to say the same about GCMs

    No. Because physics. The GCMs have known unknowns (difficult parameterisations, chaotic variation between runs), and unknown unknowns. LC18-style energy-balance models have both of those (however deterministic they try to make it look) and known flaws – to wit, that they’re not measuring what they think they’re measuring. For example, the GCM runs show that even if all the other conditions are met, and we believe all you have to account for is volcanic forcing, AMO and ENSO, and that they’ve been done right, the conditions for the calculation are not satisfied if the warming distribution is different at the start and end of the interval. That it is different is a slam dunk, given greater Arctic, nighttime, Northern Hemisphere and land warming.

    In this particular case, it has been demonstrated that the spherical-cow model is inadequate, because you want to understand its movement, and for that it needs legs so it can walk rather than roll. A head, however, is optional.

    Humour me read the above and see if it helps you understand the issue.

    Yes, it helps me understand that you don’t understand the issue. But I knew that already, hence the dead-horse comment. There’s no need to add to dikran’s masterclass.

    (Willard answered the AR5 bit himself.)

  325. zebra says:

    izen,

    “I think the adoption of TCR/ECS, however estimated, as an input to policy is based more on the fact it is a determinable single metric, less on any genuine utility.”

    Once again, we are in agreement, much as you might decide to deny it– in fact, until the last sentence, I would have written much the same thing you did.

    But, I was not contesting statistics with dikran. I was questioning to what extent the method he described had utility in predicting what TCR we might measure in 100 years, here on Earth-R.

  326. izen says:

    @-HAS
    “…that energy balance done well is part of the solution rather than part of the problem”

    What problem is that ?
    And how would a well done energy balance result be part of the solution ??

    A well done energy balance calculation is an accurate measure of what has happened up till now. The amount of credible information about what may happen in the future is about the same as a linear extrapolation of the observational record.

  327. verytallguy says:

    what TCR we might measure in 100 years, here on Earth-R.

    There is no such thing as measuring TCR.

    Even if, on actual Earth, CO2 was raised in exactly the way prescribed in the TCR definition, the TCR could only be estimated at the end, using statistical and physical models.

    Even in these ideal circumstances, different methodologies would come up with different estimates.

  328. Zebra wrote “But, I was not contesting statistics with dikran. I was questioning to what extent the method he described had utility in predicting what TCR we might measure in 100 years, here on Earth-R.”

    yeah, right…

    Zebra earlier wrote “Makes no sense, once again. You have TCR1…n. Why can’t TCRr be an outlier in that set of data?”

  329. Just to try and elaborate a bit on the discussion between Dikran & Zebra: I think that if we had historical observations from multiple Earths (i.e., an idealised/hypothetical scenario in which we could rerun the Earth since the mid-1800s with everything the same apart from changing the initial conditions of the climate) then the mean TCR that we would get from using an energy balance approach on all these realisations would probably be closer to the “true” TCR than an estimate using a single realisation. There are potential non-linearities and time dependencies that mean that even this may not be exactly the “true” TCR. Even so, it would still probably be closer than an estimate from a single realisation.

  330. ATTP, exactly. The best predictor of the transient response to future forcings is given by the “true” TCR of the physical climate system, which is not directly measurable. If we had an infinite ensemble of parallel Earths that only varied in their initial conditions then we would know the “true” value of TCR, but I seem to have misplaced mine somewhere, has anyone seen it? In the absence of such a perfect climate model we have to rely on the best estimates which we can make, which come from the GCMs (which have more physics), from observational estimates (which have less physics, but direct observations) and from paleoclimate. We need to use all of them, but we should be aware of the limitations and strengths of each approach. We certainly shouldn’t fall into the trap of thinking that the observed estimate is the true value (c.f. exoplanet example I mentioned earlier).

  331. I should have made clear that I was referring to a case where we had multiple realisations up until now. Of course, if we wait until atmospheric CO2 doubles, then we would have a much better estimate of the TCR, but that would require waiting a few more decades.

  332. zebra says:

    very tall guy,

    How is measuring the temperature in 100 years different from what we do now? (Apart from more/better instrumentation, perhaps.)

  333. zebra says:

    ATTP,

    “we would have a much better estimate”

    Why estimate and not measurement? We measure now, we measure in the future, and we subtract.

  334. VTG said “There is no such thing as measuring TCR. “

    zebra said “How is measuring the temperature in 100 years different from what we do now?”

    I’m fairly sure those goal-posts aren’t where VTG left them! I’m not sure what benefit zebra et al. gain from this sort of behaviour.

  335. zebra wrote “Why estimate and not measurement? “

    because we can only estimate, not measure. If only VTG had pointed that out… oh, he did.

  336. verytallguy says:

    zebra,

    wot Dikran said; (1) measuring temperature and (2) using those measurements to estimate TCR are not synonymous. (2) requires a physical model

    To exemplify:
    (a) the initial conditions will affect the outcome
    (b) internal variability will affect the outcome
    (c) the forcing will not match the TCR definition, so model parameters need to be estimated from the measured temperature.

    And also bear in mind that temperature measurement has uncertainties, some of which are structural.

  337. zebra says:

    very tall guy,

    We measure the temperature now.
    We measure the temperature in 100 years.
    TCR is defined by IPCC as the difference in temperature at the time of CO2 doubling– assumed about 70 years, so we will have the data for several decades.

    Why are you talking about models?

  338. That is not how TCR is defined by the IPCC. If you had been paying attention to the discussion you would know that (or indeed if you had read the relevant section of the IPCC report). For a start it isn’t considering the changes in the other forcings that may occur.

  339. Steven Mosher says:

    I’m slowly getting clarity, thanks to dk.

    So we have one world.
    since 1850 forcings have been applied, F
    Temperature changed from around 14C to 15C.

    If we believe the results from the GCM, then does that mean
    if we could run the real earth over and apply the exact same F,
    That we could see temperature increases both greater than 1C and less than 1 C?
    max and min would be what, if we believe the GCM

  340. zebra,

    We measure the temperature now.
    We measure the temperature in 100 years.

    No, we don’t really. We have lots of temperature measurements that need to be combined in a way that allows us to estimate something like how much we’ve warmed. However, when I used estimate instead of measure it was simply intended to highlight that nothing we determine can ever really be absolute; it’s almost always an estimate, even if it is quite precise and quite accurate.

  341. zebra says:

    dikran,

    “That is not how IPCC defines TCR.”

    https://www.ipcc.ch/ipccreports/tar/wg1/345.htm

    “The ‘transient climate response’, TCR, is the temperature change at the time of CO2 doubling and the ‘equilibrium climate sensitivity’, T2x, is the temperature change after the system has reached a new equilibrium for doubled CO2,”

  342. The TCR is actually a model metric in which atmospheric CO2 is increased at 1% per year (while all other possible external factors are kept fixed). At 1% per year it doubles in about 70 years. The TCR is then the difference between the initial temperature and the average over 20 years centered on 70 years (i.e., 10 years either side of the time at which atmospheric CO2 has doubled).
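
    As a rough sketch of that recipe, here is a toy one-box energy-balance version with made-up parameter values (a real TCR calculation in a GCM is of course far more involved):

        import numpy as np

        F2X, LAM, C_HEAT = 3.7, 1.3, 8.0  # W/m2, W/m2/K, W yr/m2/K; all illustrative
        dt = 0.1                          # timestep in years
        t = np.arange(0.0, 90.0, dt)
        forcing = F2X * np.log2(1.01 ** t)  # 1%/yr CO2 rise; doubles at ~70 yr

        # Integrate C dT/dt = F - lambda*T with a simple Euler step.
        T = np.zeros_like(t)
        for i in range(1, len(t)):
            T[i] = T[i-1] + dt * (forcing[i-1] - LAM * T[i-1]) / C_HEAT

        # TCR: 20-year mean temperature centred on the time of doubling.
        mask = (t >= 60.0) & (t < 80.0)
        print("TCR estimate:", T[mask].mean())
        print("Equilibrium response (F2x/lambda):", F2X / LAM)

    Even in this noise-free toy, the transient number sits below the equilibrium one, because the heat-capacity term is still taking up energy at year 70.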

  343. zebra says:

    ATTP,

    “nothing we determine can ever really be absolute”

    Sure, if you want to be philosophical about it. But that is completely irrelevant to the discussion, and not how we normally talk about measurement in physics, as far as I know.

    We publish a value for GMST now, combining all the different measurements as you say. We will do the same in 100 years, perhaps with adjustments for more/better instruments, as I said.

    Still, we can then subtract and get a number.

  344. paulski0 says:

    ATTP,

    Nic Lewis’s analysis assumes that even if internal variability is influencing energy balance estimates, it’s more likely that the result will be close to the “true” value than far from it.

    Isn’t that just a basic consequence of combining normal distributions?

    There may be a case that the EffCS distribution from Dessler et al. isn’t quite normal, with a flatter top allowing more probability either side of the mean. Not sure it makes much difference allowing for that though.

    One important issue with Dessler et al. with regards to quantification of recent real-world variability influence on EffCS is that it’s entirely based on free-running GCMs, and it has been established in a number of papers that current-generation GCMs simply don’t produce something like the observed variability pattern of the past few decades at any sort of frequency, if at all. Therefore, applying the given general distribution of multiple runs from a single model (which has fairly average-to-weedy decadal variability compared to some others from what I can see) is extremely unlikely to quantitatively represent the true real-world uncertainty of attempting to find ECS using data from this period.

    I think looking at AMIP-type simulations where the SST pattern is prescribed is likely to bear more fruit quantitatively (e.g. Marvel et al. 2018, Andrews and Webb 2017).

  345. zebra,
    The point is that we’re not really simply measuring a temperature, we’re trying to determine a global temperature anomaly. We can then use these anomalies to do things, like determine warming trends, climate sensitivity (if we combine it with estimates for forcings, etc).

  346. Paul,

    Isn’t that just a basic consequence of combining normal distributions?

    Yes, and I think that is one of the issues with what he is doing. He’s then also summing the uncertainty in quadrature and claiming that the effect is small.
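
    For anyone following along, summing independent uncertainties in quadrature looks like the sketch below (the numbers are invented). The contested part is whether the contributing errors really are independent; quadrature understates the total if they are correlated.

        import numpy as np

        sigmas = np.array([0.3, 0.2, 0.15])  # hypothetical 1-sigma uncertainties
        print(np.sqrt(np.sum(sigmas ** 2)))  # ~0.39: combined in quadrature
        print(sigmas.sum())                  # 0.65: if instead perfectly correlated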

  347. zebra says:

    ATTP,

    Yes, that is the definition I’m using.

    If CO2 doesn’t double in 70 years, but over some other period, then of course some further calculation is involved.

    Either the period will be less or it will be more. Still, this is not relevant to the original question, which is how useful Dikran’s method is in determining the number.

  348. I think looking at AMIP-type simulations where the SST pattern is prescribed is likely to bear more fruit quantitatively (e.g. Marvel et al. 2018, Andrews and Webb 2017).

    I think this is actually quite key. My understanding is that some of this work is suggesting that we’ve experienced SST patterns that have led to less warming than otherwise might be the case. In other words, energy balance estimates are more likely to be producing results lower than expected, rather than higher.

  349. The Very Reverend Jebediah Hypotenuse says:


    Still, we can then subtract and get a number.

    Actually, we will have a bunch of initial estimates of GMST and a bunch of final estimates of GMST.
    So we will have a whole bunch of numbers.
    Which ones to use? All of them? What if they are not all the same? Are some numbers better than others?

    What is the “true” angle of repose?
    https://en.wikipedia.org/wiki/Angle_of_repose#Measurement
    There are numerous methods for measuring angle of repose and each produces slightly different results. Results are also sensitive to the exact methodology of the experimenter. As a result, data from different labs are not always comparable.

    Working this sort of estimation problem isn’t just philosophical, it’s natural-philosophical, aka ‘doing science’.
    And it’s exactly how we normally talk about measurement in physics.

  350. zebra says:

    Reverend J,

    Still not relevant to the question of how useful Dikran’s method is in determining the number.

  351. zebra,
    I don’t think Dikran is proposing a method of determining the TCR. I think he’s trying to explain why the energy balance approach (which essentially uses only a single realisation of our climate history) could produce a result that is not close to the actual climate sensitivity of the system.

  352. The Very Reverend Jebediah Hypotenuse says:

    OK, zebra. Whatever you say.
    I’m fairly sure your completely non-idiosyncratic, good-faith approach to this topic will eventually lead to something that you consider relevant to determining “the” number.
    Good luck!
    Do let us know what it is.
    With units and 5-sigma, if you please.

  353. izen says:

    @-SM
    “If we believe the results from the GCM, then does that mean
    if we could run the real earth over and apply the exact same F,
    That we could see temperature increases both greater than 1C and less than 1 C?”

    I would guess ‘Yes’, because although the value would be somewhere on the envelope of values generated by the same ‘attractor space’ (F), the position in that space of possible values is affected by the flapping butterfly wings.

  354. verytallguy says:

    We measure the temperature now.
    We measure the temperature in 100 years.
    TCR is defined by IPCC as the difference in temperature at the time of CO2 doubling– assumed about 70 years, so we will have the data for several decades.

    Why are you talking about models?

    OK, my final attempt to help understanding.

    TCR has a very specific definition. It’s not just the temperature at doubling: it requires the doubling to occur over a specific time period, and the CO2 rise to stop instantly at doubling. Neither of these things will actually be true.

    Additionally, there will be other forcings than CO2, notably volcanic, which are not predictable.

    All this means that the very best that can be hoped for is that the instrumental record will allow fitting of parameters in physical models. There will be uncertainty in the fit, both because the model is imperfect (it’s a model!) and because the inherent internal variability of the system is high relative to the measured temperature change, even on the 70-year timescale of the TCR definition.

    For instance, heat transfer to the ocean will affect how the actual temperature responds, and this will be different according to the rate of forcing change.

    Thus, a physical model is essential to any estimate of TCR (or ECS). Lewis and Curry depends on such a model.

    What the *best* model to use is remains a subject of debate.

  355. Willard says:

    When I see all these gentlemen respond to zebra on yet another question without an obvious point, I say: “Whoa, these gentlemen are way better at that than I am”. But, 99% of the population says: “Whoa, zebra is just as good at that as these gentlemen.”

  356. verytallguy says:

    Which is a subset of the catch 22:

    “Why are climate scientists afraid of debate?”

    “OK, let’s have a debate then”

    “See! We told you the science is debatable!”

  357. The Very Reverend Jebediah Hypotenuse says:


    But, 99% of the population says: “Whoa, zebra is just as good at that as these gentlemen.

    When I see Willard respond to all the responses to zebra with a meta-comment with an obvious point, I say, “Thanks, Willard. But how do you know that 99% is the true number?”

    I’ll see myself out.

  358. Willard says:

    > I’ve had enough experience of complex econometric models that attempt to build from deep granularity to forecast future GDP on a multi-decadal scale, for example, to know the weaknesses compared with models that identify the major influencers on the timescales in question and work from there.

    An example would be nice, HAS, and I’m not sure what would be the “major influencers” of an energy balance model, say compared to a general circulation model. Also note that the “timescales in question” may be outside our observational window.

    Time for another AR5 quote:

    Anthropogenic or natural perturbations to the climate system produce RFs that result in an imbalance in the global energy budget at the top of the atmosphere (TOA) and affect the global mean temperature (Section 12.3.3). The climate responds to a change in RF on multiple time scales and at multiyear time scales the energy imbalance (i.e., the energy heating or cooling the Earth) is very close to the ocean heat uptake due to the much lower thermal inertia of the atmosphere and the continental surfaces (Levitus et al., 2005; Knutti et al., 2008a; Murphy et al., 2009; Hansen et al., 2011). The radiative responses of the fluxes at TOA are generally analysed using the forcing-feedback framework and are presented in Section 9.7.2.

    CMIP5 models simulate a small increase of the energy imbalance at the TOA over the 20th century (see Box 3.1, Box 9.2 and Box 13.1). The future evolution of the imbalance is very different depending on the scenario (Figure 12.15a) […]

    Source: WG1AR5_Chapter12_FINAL.pdf

    That’s the start of section “12.4.3.4 Energy Budget.”

  359. Willard says:

    > how do you know that 99% is the true number?

    Because it is a rare contribution to our community by zebra himself.

  360. Willard says:

    > Which is a subset of the catch 22: […]

    By serendipity, how PaulM heard “debate” when Andrew said “debate on Twitter”:

  361. verytallguy says:

    I love it when my prejudices are so nicely proved right

  362. Joshua says:

    VTG:

    Which is a subset of the catch 22:

    “Why are climate scientists afraid of debate?”

    “OK, let’s have a debate then”

    “See! We told you the science is debatable!”

    While I think the dynamic you describe there is real, I think its impact can also be overstated.

    I would suggest that for a whole lot of folks, the existence or non-existence of debate is largely irrelevant in that, at most, it is only used to reinforce preexisting opinions. Not sure I’d say for 99%, but of course I don’t have the striped one’s grasp of statistics.

  363. BBD says:

    I would suggest that for a whole lot of folks, the existence or non-existence of debate is largely irrelevant in that, at most, it is only used to reinforce preexisting opinions. Not sure I’d say for 99%, but of course I don’t have the striped one’s grasp of statistics.

    It’s not black and white…

  364. zebra says:

    verytallguy,

    You are telling me stuff I know and have already acknowledged– I know the definition, and I said that we would have to do some calculations if the parameters change.

    But I suggest you and (ATTP) go back and read izen’s comment, 10:24 May 1, which I said was pretty much congruent with my view, (except for the obligatory zebra-bashing at the end.)

    Either it is important to predict what the numerical value of GMST will be at the doubling of CO2, or not. You say:

    the very best that can be hoped for is that the instrumental record will allow fitting of parameters in physical models

    Why would you want to, unless you wish to predict what is going to happen in the next 100 years?

    So, ATTP, you are also telling me stuff I understand just fine. I am not questioning the value of GCM, although I have always said as in the izen comment that it is getting higher resolution of specific effects on regional scales that will make them really useful. But you and dikran are the ones emphasizing the “true” value of TCR.

    Anyway, still the question is about a very simple physics and statistics process, and people want to talk about everything else but that. Perhaps I am misunderstanding what dikran said, and perhaps I am wrong about some very basic principle, but it appears we will never know.

  365. verytallguy says:

    zebra, you’ve lost me.

  366. verytallguy says:

    I would suggest that for a whole lot of folks, the existence or non-existence of debate is largely irrelevant in that, at most…

    Not sure about this. Creating the appearance of doubt, or accentuating it, has been a key strategy of those whose ideology or wealth is threatened by factual analysis, and it has a long and successful history.

    Merchants of Doubt and all.

  367. Joshua says:

    BBD –

    Sure. It isn’t black and white. But I’m suggesting that the effect is often overestimated.

    Important caveat – I’m speaking about the U. S.

    There is little doubt that some “skeptics” who monitor the issue argue that a “mainstream scientist” declining to debate is proof that we needn’t worry about ACO2 emissions, even as they argue that the debates that are undertaken (which they don’t have the skills to evaluate) are proof that we needn’t worry about ACO2 emissions.

    It’s like how they argue that “gatekeeping” delegitimizes climate science even as they argue that peer-reviewed publications legitimize their views (if they like the conclusions of the articles).

    Sameosameo

    But I don’t think that very many people (relatively speaking) actually have much of any idea what scientists actually say in their publications or have an evidence-based opinion as to whether or how much scientists engage in public debates about climate change.

    I suspect the fact of actually debating or not is largely irrelevant, for many.

  368. Zebra,
    If you already know all of this, why are you asking all these questions? If you have a point to make, make it. If you want to know something, ask a question. If you’re asking questions so as to make some kind of point, maybe you could stop doing this and simply make the point.

  369. Joshua says:

    VTG –

    Merchants of Doubt and all.

    I think we’d agree that the doubt they sell is neither fact-based nor reality-based.

    They don’t need scientists actually debating or not debating to sell doubt about whether scientists debate or don’t debate, and about what we can conclude from whether they debate or not.

  370. dikranmarsupial says:

    zebra wrote ““That is not how IPCC defines TCR.”

    https://www.ipcc.ch/ipccreports/tar/wg1/345.htm

    “The ‘transient climate response’, TCR, is the temperature change at the time of CO2 doubling and the ‘equilibrium climate sensitivity’, T2x, is the temperature change after the system has reached a new equilibrium for doubled CO2,””

    zebra didn’t read as far as:

  371. dikranmarsupial says:

    “The temperature change at any time during a climate change integration depends on the competing effects of all of the processes that affect energy input, output, and storage in the ocean. In particular, the global mean temperature change which occurs at the time of CO2 doubling for the specific case of a 1%/yr increase of CO2 is termed the ‘transient climate response’ (TCR) of the system.”

    Don’t know what happened there, this was supposed to be the end of the previous comment

  372. dikranmarsupial says:

    ATTP wrote “I don’t think Dikran is proposing a method of determining the TCR.”

    indeed. I am explaining why observational estimates of TCR have a source of error that we can’t usefully estimate from the observations, but the models give the best available indication of its plausible magnitude.

    Zebra wrote “So, ATTP, you are also telling me stuff I understand just fine.”

    If someone was demonstrating the Dunning-Kruger effect, that is probably what they would say as well.

  373. Willard says:

    > If you’re asking questions so as to make some kind of point, maybe you could stop doing this and simply make the point.

    It’s not like I’ve not warned zebra more than two weeks ago, AT:

    If you have a point, make it.

    https://andthentheresphysics.wordpress.com/2018/04/11/criticising-the-critics/#comment-115836

  374. dikranmarsupial says:

    Zebra writes “So, ATTP, you are also telling me stuff I understand just fine. … But you and dikran are the ones emphasizing the “true” value of TCR.”

    The bit you don’t understand is what is meant by the “true” value of TCR.

  375. zebra says:

    ATTP,

    I am not asking questions about “all of this”, just about dikran’s method. People keep ‘splaining things that have nothing to do with that, and I try to answer them.

    My “point” is that I don’t see how you can find the “true” value of TCR using the method proposed.

  376. zebra says:

    dikran,

    “what is meant by the “true” value”

    Yes, I keep asking and you don’t answer. How would I know if a value is the “true” value?

  377. dikranmarsupial says:

    SM wrote

    So we have one world.
    since 1850 forcings have been applied, F
    Temperature changed from around 14C to 15C.

    If we believe the results from the GCM, then does that mean
    if we could run the real earth over and apply the exact same F,
    That we could see temperature increases both greater than 1C and less than 1 C?

    Yes, exactly, provided the initial conditions were not exactly the same. This is essentially the “statistical interchangeability” interpretation of GCM ensembles used by the IPCC.

    max and min would be what, if we believe the GCM

    If the GCMs were perfect (i.e. “parallel Earth” perfect) then the values we get from rerunning the real earth would be within the spread of the GCMs. However, as we know, the GCMs are necessarily simplifications of real climate physics, and there are also other uncertainties (such as in the parameters of the parameterisations) that are not included, so the spread may not adequately represent the uncertainty (I’d strongly recommend reading Victor’s blog post on that subject that was mentioned earlier).

    This is one of the problems with assessing the variance of the observational estimates: we know the GCMs aren’t perfect, so the difference could be due to the influence of internal variability on the observational estimates, or it could be that the GCM range isn’t quite right, or more likely a bit of both. However, it is currently the best guide available AFAICS.

  378. Joshua says:

    zebra –

    People keep ‘splaining things that have nothing to do with…

    Some advice (worth exactly what you paid for it): try reflecting a bit on why that might happen. I tried exchanging views with you but found it unproductive and gave up.

    Perhaps it isn’t about you being a victim?

  379. dikranmarsupial says:

    zebra “Yes, I keep asking and you don’t answer.”

    I’ve told you repeatedly.

    How would I know if a value is the “true” value?”

    You can’t know, for the reasons that have been explained to you.

  380. dikranmarsupial says:

    zebra wrote “My “point” is that I don’t see how you can find the “true” value of TCR using the method proposed.”

    What do you think the “method proposed” is? As ATTP says, I haven’t proposed a practical method for finding the “true” value of TCR, just one you could only perform in a thought experiment, to illustrate what is meant by the “true” value.

  381. verytallguy says:

    Joshua, you’ve lost me too.

    Or perhaps I’ve just lost it.

  382. “and it has been established in a number of papers that current-generation GCMs simply don’t produce something like the observed variability pattern of the past few decades at any sort of frequency, if at all. “

    That’s true, and even beyond that the research papers are divided as to whether variability is red noise, stochastic resonance, a chaotic regime, or a complicated non-chaotic deterministic regime.

  383. zebra says:

    dikran,

    Is there some way to illustrate what is meant by the “true” value other than performing the thought experiment you describe?

  384. Joshua says:

    VTG –

    heh. I thought that might be indecipherable. Maybe there’s no way to clearly explain muddled thinking. Fwiw, I’ll try again later.

  385. Willard says:

    > Is there some way to illustrate what is meant by the “true” value other than performing the thought experiment you describe?

    Sure, it’s like hitting a bull’s eye:

    Source: http://climatica.org.uk/climate-science-information/uncertainty

    I rather like this mnemonic:

    ACcurate is Correct. (or Close to real value)
    PRecise is Repeating. (or Repeatable)

    https://www.thoughtco.com/difference-between-accuracy-and-precision-609328

    Everyone should agree that we do not have any idea what the one real true value is. Hence the usual ballpark, known since Charney. That being said, we have a fairly good idea when we’ll know it:

    A number of recent studies suggest that equilibrium climate sensitivities determined from AOGCMs and recent warming trends may significantly underestimate the true Earth system sensitivity (see Glossary) which is realized when equilibration is reached on millennial time scales (Hansen et al., 2008; Rohling et al., 2009; Lunt et al., 2010; Pagani et al., 2010; Rohling and Members, 2012). The argument is that slow feedbacks associated with vegetation changes and ice sheets have their own intrinsic long time scales and are not represented in most models (Jones et al., 2009). Additional feedbacks are mostly thought to be positive but negative feedbacks of smaller magnitude are also simulated (Swingedouw et al., 2008; Goelzer et al., 2011). The climate sensitivity of a model may therefore not reflect the sensitivity of the full Earth system because those feedback processes are not considered (see also Sections 10.8, 5.3.1 and 5.3.3.2; Box 5.1). Feedbacks determined in very different base state (e.g., the Last Glacial Maximum) differ from those in the current warm period (Rohling and Members, 2012), and relationships between observables and climate sensitivity are model dependent (Crucifix, 2006; Schneider von Deimling et al., 2006; Edwards et al., 2007; Hargreaves et al., 2007, 2012). Estimates of climate sensitivity based on paleoclimate archives (Hansen et al., 2008; Rohling et al., 2009; Lunt et al., 2010; Pagani et al., 2010; Schmittner et al., 2011; Rohling and Members, 2012), most but not all based on climate states colder than present, are therefore not necessarily representative for an estimate of climate sensitivity today (see also Sections 5.3.1, 5.3.3.2, Box 5.1). Also it is uncertain on which time scale some of those Earth system feedbacks would become significant.

    Source: WG1AR5_Chapter12_FINAL.pdf

    If we could build Earth twins or Earth holodecks before that, that would be great.

  386. Hyperactive Hydrologist says:

    Have I got a comment stuck in moderation?

  387. Chubbs says:

    FWIW, thought it might be interesting to estimate an effective climate sensitivity going all the way back to pre-industrial. The advantage vs. standard EBM is that uncertainty in Delta F- Delta Q is reduced. I don’t have access to the # from L+C2018, so I used Haustein et. al 2017 forcing and temperatures. For the period 2007-2016 vs 1750 I get the following:
    Delta F – 2.23
    Delta T – 0.9 (low) to 1.2 (high); this is based on the Hawkins et al. 2017 estimate of pre-industrial to 1986-2005 temps, 0.55 to 0.8, plus the change in Hadcrut (low) and Hadcrutcw (high) from 1986-2005 to 2007-2016
    Delta Q – 0.8, slope of 0-2000M OHC from 2005 to 2017 divided by 0.8
    2xCO2 – 3.7

    ECS = 2.1 (low) to 2.8 (high)

    Of course, using L&C2018 forcing will reduce ECS somewhat, and some will quibble with the numbers used. Still, my overall take is that standard EBMs are overstating the differences between models and observations.
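
    As a quick check, the standard energy-budget relation applied to those numbers looks like the sketch below. Plugging them in directly gives roughly 2.3 to 3.1, slightly different from the 2.1 to 2.8 quoted above, so the low/high cases were presumably paired or rounded differently; the exact combination isn’t stated.

        F2X = 3.7              # W/m2, forcing for doubled CO2
        dF = 2.23              # W/m2, change in forcing, 1750 to 2007-2016
        dQ = 0.8               # W/m2, change in system heat uptake rate
        for dT in (0.9, 1.2):  # K, low and high temperature change
            print(dT, "->", round(F2X * dT / (dF - dQ), 2))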

  388. I think the reason that some scientists have problems with the definition of TCR is that they are accustomed to doing controlled experiments. The way TCR is defined, by having a 1% increase in CO2 every year until it doubles after about 70 years (1.01^70 ≈ 2), implies that it is some sort of controlled experiment. Yet, this exact progression is very unlikely to happen and so the measure is really a hypothetical exercise. My suggestion is that if you can’t do a controlled experiment, don’t configure a measure that expects those parameters.

    I will provide an example of a controlled experiment. Since the way we understand CO2 response is as a sequestering diffusional process, I can describe how I would do a controlled experiment with a dopant diffusion in a semiconductor fab. I would calibrate the feedback control on the furnace to increase the partial pressure by 1% every minute until the partial pressure doubled. Then I would measure the dopant profile. I could compare that to another experiment where I ramp up the partial pressure immediately, wait the same amount of time, and then measure the dopant profile again. After that, compare the two and see what the difference is. Of course, this kind of controlled experiment is done all the time as part of the standard characterization process.

    Again, the problem with presenting these kinds of definitions to scientists who are accustomed to doing controlled experiments is that you will be met with quizzical looks. Perhaps that is what is causing zebra heartburn.

  389. Hyperactive Hydrologist says:

    I’ll try again for about the fourth time 🙂

    TCR estimates based on 10, 20, 30 and 50 year periods for the present, up to 2011, and rolling means of corresponding length for the 1850 to 1950 historic period. The radiative forcing data is taken from AR5 and the temperature data from CRU.

    My initial thoughts are that the 2000–2015 period has been dominated by negative PDO, not directly considered by L&C18, which has caused an increase in energy accumulating in the ocean, thereby suppressing surface temperature. By increasing the length of the present period we remove the influence of internal variability (PDO), which increases the estimate of TCR.

  390. Hyperactive Hydrologist says:

    Scratch that: I think volcanic forcing is screwing the results, even after applying a 60% reduction in volcanic forcing (Gregory et al. 2016).

  391. Christian says:

    Hyperactive Hydrologist,

    Go a step further and think about ocean heat uptake (OHU); then you get:

    1) The most important layer is the top 0-100m of the global oceans because, simplified, it is the layer (in reality it is not uniform around the planet) that is well mixed and therefore interacts with the atmosphere. That means it matters not only that there is OHU, but also into which layer of the ocean the energy goes. If we look at Levitus et al. 2010, we see that around 2/3 of the energy goes below 0-100m; if you think that through, the ocean below the mixed layer damps the increase in surface temperature. In other words, the same OHU can result in different TCR/ECS depending on whether the heat stays mainly near the surface or goes deep below it.

    So determining ECS/TCR the way Lewis & Curry do becomes too simple, because we don’t know whether OHU below the first 100m is constant over time or can vary on fast or slow timescales, though I think it should slow down as the lower layers come closer to balance in the long run.

    2) That the recent temperature increase from the last El Niño has not been “killed” by the La Niña (temperatures remain very warm compared to the La Niñas of a few years ago) could simply be a result of internal variability and human-caused change interacting, which is described in this paper: https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2017GL076500

    “A 0.24°C jump of record warm global mean surface temperature (GMST) over the past three consecutive record‐breaking years (2014–2016) was highly unusual and largely a consequence of an El Niño that released unusually large amounts of ocean heat from the subsurface layer of the northwestern tropical Pacific. This heat had built up since the 1990s mainly due to greenhouse‐gas (GHG) forcing and possible remote oceanic effects. ”

    Greets

  392. zebra says:

    paul pukite,

    Thanks for the input but that is not the problem for me; I have already said that I realize the hypothetical rate of increase may not be met.

    The problem for me is that dikran is saying that there is something called the “true” TCR. But, so far, he is not providing an explanation as to what that means. Now he says:

    I haven’t propose a practical method for finding the “true” value of TCR, just one you can only perform in a thought-experiment to illustrate what is meant by the “true” value.

    Maybe you can translate that?

    To me, it means that the “true” TCR is defined as “what you get when you use my method”, whether as an “estimate” using a limited number of different initial conditions, or as what you would hypothetically get if there were an infinite number of different initial conditions.

    But averaging the results from using different initial conditions just gives… the average.

  393. zebra,
    Good grief. All I mean by the term “true” TCR is the value that the real system probably has. What we’re trying to point out is that an energy balance approach may not return a result that is a good representation of the system’s actual climate sensitivity.

  394. zebra says:

    ATTP,

    “an energy balance approach may not return a result that is a good representation of the system’s actual climate sensitivity.”

    I agree. Have I ever questioned that?

    Sorry, ATTP, but I am frustrated myself when I agree with people but somehow they treat it as disagreement. Your definition of “true” TCR is exactly what my definition would be.

    I have been very clear: What I disagree with, unless I am really misunderstanding something, is the idea that averaging the results you get from different initial conditions will give the value you and I both think of as the “true” TCR.

  395. Zebra,

    What I disagree with, unless I am really misunderstanding something, is the idea that averaging the results you get from different initial conditions will give the value you and I both think of as the “true” TCR.

    Andrew Dessler posted the tweet below which shows the distribution of ECS results they get from determining the ECS in an ensemble of model runs using the same method as in Lewis & Curry. If I’ve read the paper correctly, the “true” ECS for the model being used is about 2.93K. Hence, it would seem that the ECS one would get from the mean of the distribution shown below would indeed be close to the “true” ECS. All that I think Dikran is pointing out is that if you could use the energy balance approach on an ensemble of historical warmings, the mean of that would probably be close to the “true” ECS. Since we can’t do that, we should be careful of assuming that the result from an energy balance approach using a single sample is close to this “true” ECS.

    “Let’s match the periods chosen in our model ensemble analysis as closely as possible to L&C. We can calculate ECS using these base periods: 1869-1882, 1995-2005. 3/” pic.twitter.com/uYuzn493NX (Andrew Dessler, @AndrewDessler, May 1, 2018)

  396. zebra, I can’t answer the question about the premise of averaging over a number of initial conditions because I am exploring a different take. From what I see, large-scale natural variability in climate is not an initial-condition problem as much as it’s a boundary-condition problem. All these standing-wave dipoles represented as climate indices are by definition boundary-condition driven in space, and likely in time as well. Just observing that there is an annual synchronization lends credence to this possibility.

    Someday we will get a resolution to this conflict via statistical tests (since we do not have the luxury of controlled experiments). Here is one that I just came across

    https://agupubs.onlinelibrary.wiley.com/doi/epdf/10.1002/2017GL076912
    “To shed more light on the issue of determinism versus stochasticity and nonstationarity versus stationarity, we develop a new statistical test. “

  397. Willard says:

    > I am frustrated myself when I agree with people but somehow they treat it as disagreement.

    Perhaps you should refrain from saying things like:

    The problem for me is that dikran is saying that there is something called the “true” TCR. But, so far, he is not providing an explanation as to what that means.

    right after I told you what it was, i.e. the sensitivity that we discover once it’s realized, and when it’s quite clear that your claim is false. Let me recap what Dikran said right from the beginning, in case you missed it.

    First, April 28, 2018 at 10:57 am, where he indicates that “trueness” and “bias” go hand in hand, and why:

    I think part of the problem is there is too much focus on bias, and that people mean different things by it. In a statistical sense, the energy balance model may be unbiased in the sense that if you were to use it many times on independent realisations of the climate (parallel Earths / research equipment from Magrathea) the average value you got from the estimator would be the true value. However that doesn’t mean that the value you get from one particular realisation won’t be substantially lower or higher than the true value. In statistics we would call that “variance” rather than “bias”.

    Second, he gets a bit more technical:

    Bias means the estimator is systematically wrong (i.e. it’s average value over multiple realisations of the experiment is not equal to the true value); variance is the error due to estimating some quantity on a particular finite sample of data (i.e. the difference you would see if you repeatedly perform the experiment on independent samples). The use of bias in the paragraph you quote seems likely to be bias in a statistical sense, whereas the “bias” due to internal variability is more likely to be “variance”.

    Third, there’s this long comment on TCR:

    Now this [TCR] is something we can directly calculate in a model as we can set up the forcings to follow this scenario and estimate it directly by measuring the temperature change in that particular 20 year period (note it still has a non-zero variance as you would get a numerically different result from each run). However we can attenuate this variance by averaging over many model runs (as the internal variability is not coherent) to estimate the “true” (rather than estimated) TCR of the model. In this sense, TCR is a model metric, a number that tells us some property of the model.

    It is not, however, as clear cut for the real world as we can’t set up the forcings to follow the scenario in the definition. We can however try and work out what TCR would be if we followed that scenario (multiple times and took the average) by trying to take account of the changes in forcings that have actually occurred during the period of observation. Thus it isn’t a climate metric (as we can’t measure it), but it is still a property of the climate system, and gives us an indication of what we might expect to see as the result of increases in the forcings.

    Note that if you had multiple parallel Earths with the same forcings, but different initial conditions, then the estimate of TCR would be numerically different for each one, but the “true” TCR is the same for all, as the physics of the climate system is the same for them all. We can find the “true” TCR by averaging the results from the parallel Earths. Unfortunately we can’t do that, as we only have access to our reality, but the thought experiment explains why treating the estimate of TCR from the observations as the “true” TCR is essentially assuming this variance is zero, which is physically implausible.

    That was May 1, 2018 at 11:12 am. You started to Just Ask Questions May 1, 2018 at 12:39 pm. I think that his first response to you was rather good:

    “The average value derived from some sample: “The average height of Norwegian males is 6ft, based on a sample of 1,000 individuals.” You really don’t understand that?”?

    Zebra, of course I understand that. The point is that there is a difference between the true value of a population statistic and an ESTIMATE of that statistic given by the average of some finite sample.

    “So, could you just answer the question? What is the utility of the number you come up with? “

    It is an estimate of TCR, it gives you information on how GMSTs are likely to change in response to a change in the forcings, however it is important to realise that it is only an ESTIMATE of the TCR of the actual climate, and to have some appreciation of the uncertainties.

    https://andthentheresphysics.wordpress.com/2018/04/27/lewis-and-curry-again/#comment-117774

    I think it’s safe to say that you’ve extended well beyond courtesy any requirement for more room service.
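
    Since the bias/variance distinction quoted above is doing a lot of work in this thread, here is a minimal statistical sketch of it using a familiar textbook case (the n versus n-1 divisor in the sample variance); nothing climate-specific is assumed.

        import numpy as np

        rng = np.random.default_rng(1)
        true_var = 4.0
        # 100000 repeated experiments, each a sample of only 10 points
        samples = rng.normal(0.0, np.sqrt(true_var), size=(100000, 10))

        biased = samples.var(axis=1, ddof=0)    # divide by n
        unbiased = samples.var(axis=1, ddof=1)  # divide by n-1

        print(biased.mean())    # ~3.6: systematically low (bias)
        print(unbiased.mean())  # ~4.0: right on average (unbiased)...
        print(unbiased.std())   # ...but any single estimate still scatters (variance)

    An unbiased estimator can still land a long way from the truth on any one realisation, which is the point being made about estimating TCR from a single Earth.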

  398. HAS says:

    Dikranmarsupial “If we have only one means of estimating the mass of an exoplanet, does that mean that our estimate is the correct result?” And if we have two ways of estimating the mass, we don’t import the estimates of uncertainty from one into the other. Each stands on its own.

    If we estimate the surface temperature from the instrumental period using observations we have one set of estimation errors; if we do it by way of climate model runs we have another; but we don’t use the climate model errors in the observational method, which has its own, much more direct, ways of getting there. (I’d note that the climate model method won’t be very good because it isn’t independent of what is being measured – tuning parameter estimation and the like.)

    aTTP “Noone is claiming that energy balance estimates are not worth considering ..” a brave man pulling that signal out of all this noise!

    Dave “LC18-style energy-balance models have …” Forget all the limitations in both types of models and move to evaluate the approaches on their ability to estimate global temperatures during the 21st century (arguably their most important use). Which class on average is doing better for the first two decades?

    Izen “The amount of credible information about what may happen in the future is about the same as a linear extrapolation of the observational record.” Don’t knock the simple assumption that the TCR observed in the instrumental period (with its pdf) will persist through the 21st century. That contains a lot of information; it is directly and easily testable, and early indications are that it could be doing better than GCMs. I also think I recall seeing work showing that the global temp output from GCMs over the instrumental period and the 21st century could be emulated by a relatively simple set of linear equations. After all, at these timescales the climate is a great big averaging machine with significant inertia.

    Dikranmarsupial “The best predictor of the transient response to future forcings is given by the “true” TCR of the physical climate system, which is not directly measurable” Whether or not it is the best predictor depends on assumptions about the “true” TCR, but, putting that aside, if you are using GCMs to estimate the parameters for decadal prediction/projection we would want to be using models that were constrained to the instrumental period.

    Mosher “I’m slowly getting clarity …” But if you believe the observations, the more usual way to think of this (as I’m sure you are aware) is how well do the models fit the instrumental period? The observations appear to be an outlier, and I suspect that continues through the first two decades of the 21st century. The question is what went wrong, or was the instrumental period just the luck of the draw? Also to note again, the GCM result isn’t truly independent of the observational one, given the process of the former’s development. That would, I think, suggest we would expect the observational results to tend to be closer to the GCM results than if the two methods were completely independent.

    aTTP “He’s then also summing the uncertainty in quadrature and claiming that the effect is small.” There is a further update at Climate etc where Lewis discusses the point I think you raise.

    aTTP “I think looking at AMIP-type simulations ….” I looked at Marvel et al and noted earlier it doesn’t seem to apply best practice balance model techniques (see end period selections and length of period). The risk is that it deals with a strawperson.

    Dikranmarsupial “we know the GCMs aren’t perfect, so the difference could be due to the influence of internal variability on the observational estimates, or it could be that the GCM range isn’t quite right, ..” Just by the by, internal variability is still a potential problem for the GCMs too, as is their handling of some forcings.

  399. RICKA says:

    In reading this discussion I once again wonder what is the use of TCR or ECS as a real world climate metric?

    As I understand it from reading this thread, and other reading online, each model will have its own TCR, based on its own simulation of a 1% increase over 70 years (the doubling). So there are an infinite number of TCRs, each trying to simulate the real world, but all doing it differently.

    Meanwhile, nobody seems to care about the temperature difference between GMST at 280 ppm and GMST at 560 ppm (when that occurs). THAT temperature difference isn’t TCR – it is something else, something which nobody cares about.

    All that matters is what the value is in the model simulation.

    How are we to test and verify whether the model simulations are accurate and correct?

    Beats me!

    I am with zebra on this issue.

    What matters to me is what is the temperature difference between:

    GMST at 280 ppm and GMST at 560 ppm
    GMST at 300 ppm and GMST at 600 ppm
    GMST at 320 ppm and GMST at 640 ppm
    and so on.

    Get enough data for enough CO2 doublings and we can get an average of whatever you call the real world TCR (maybe effective transient climate response).

    Who cares what the models say – they are all wrong anyway.

    I want to know what the real climate will do in the real world.

    All we have to do is wait for the real world to start hitting the doubling points over the period when we have instrument records of temperatures and we can start measuring effective transient climate response (eTCR).

  400. RICKA said:

    “Get enough data for enough CO2 doublings and we can get an average of whatever you call the real world TCR (maybe effective transient climate response).”

    I agree with this “effective” approach. I wonder if we actually get to a doubling of CO2 (280 to 560) and the measured change turns out to be ~2C (ocean) to 3C (land), will Lewis pragmatically acknowledge this in his math? I am sure he will still say that TCR is ~1.2C.

  401. Willard says:

    > Forget all the limitations in both types of models and move to evaluate the approaches on their ability to estimate global temperatures during the 21st century (arguably there most important use). Which class on average is doing better for the first two decades?

    More reading from the AR5:

    A few recent studies indicate that some of the models with the strongest transient climate response might overestimate the near term warming (Otto et al., 2013; Stott et al., 2013) (see Sections 10.8.1, 11.3.2.1.1), but there is little evidence of whether and how much that affects the long-term warming response. One perturbed physics ensemble combined with observations indicates warming that exceeds the AR4 at the top end but used a relatively short time period of warming (50 years) to constrain the models’ projections (Rowlands et al., 2012) (see Sections 11.3.2.1.1 and 11.3.6.3). GMSTs for 2081–2100 (relative to 1986–2005) for the CO2 concentration driven RCPs is therefore assessed to likely fall in the range 0.3°C to 1.7°C (RCP2.6), 1.1°C to 2.6°C (RCP4.5), 1.4°C to 3.1°C (RCP6.0), and 2.6°C to 4.8°C (RCP8.5) estimated from CMIP5 .

    Source: WG1AR5_Chapter12_FINAL.pdf

    Not sure where Nic acknowledges that circulation models are observation-based, yet he goes so far as to claim that:

    The energy budget framework provides an extremely simple physically-based climate model that, given the assumptions made, follows directly from energy conservation.

    This implies that circulation models might be physically-based too, an omission Nic will correct next ClimateBall blog post, no doubt.

  402. izen says:

    At the risk of over-simplifying, and over-doing the grafix…

    Suppose you want to find the thermal capacity of an unknown mixture, although you are pretty sure it is 90% water.
    Or more specifically you wanted to know how much and how fast it would warm up if a gradually ramping measured amount of energy was added.

    One method would be to put equal amounts into 4 saucepans and warm them all with identical amounts of energy via a thermal source. Measuring the rate and extent of the temperature rise in each pan would give a value for the thermal capacity under slightly different conditions. The experiment would give you an average and a measure of how much the different starting conditions could vary the outcome. Perhaps the wide ones warm up fast, but then the narrow ones catch up and overtake… but the intermediate ones warm fastest with a lot of variance.

    Doing the experiment with many different starting conditions, and repeating it with identical pans, would give you an even better (truer?) estimate of the rate of warming and the thermal capacity of the fluid, and of how certain you could be that any ONE measurement was truly accurate.

    But back on this planet, we have one set of observations of a temperature rise (actually several different versions…) and a good estimate of the amount of energy added so far (well, pretty good!).
    But we are less certain about the size of the sample, the shape of the saucepan, or how much all that matters.

    Thermal capacity in this context is climate sensitivity.

  403. Willard says:

    I like this, izen. Should be a post.

    ***

    To return to HAS’ laundry list, especially his “I’d note that the climate model method won’t be very good because it isn’t independent of what is being measured – tuning parameter estimation and the like”, I’d note that the simplicity of energy balance models has its own share of inconveniences. From another chapter of AR5:

    Simple energy balance calculations rely on a very limited representation of climate response time scales, and cannot account for nonlinearities in the climate system that may lead to changes in feedbacks for different forcings (see Chapter 9). Alternative approaches are estimates that use climate model ensembles with varying parameters that evaluate the ECS of individual models and then infer the probability density function (PDF) for the ECS from the model–data agreement or by using optimization methods (Tanaka et al., 2009).

    Source: WG1AR5_Chapter10_FINAL.pdf

    Seems that there’s life beyond EBMs and GCMs.

  404. Steven Mosher says:

    Stupid question: did Andrew use air temps or a combination of SST and SAT?

  405. Steven Mosher says:

    “would guess ‘Yes’, because although the value would be somewhere on the envelope of values generated by the same ‘attractor space’ (F), the position in that space of possible values is affected by the flapping butterfly wings”

    The reason I am pressing this is that, after looking at the spread of results in Andrew’s 100-run ensemble, I am wondering how that squares with attribution studies of the post-1950 period.

    If the same forcings can lead to wildly different temperatures over a 150 year period, then how does that result square with the attribution studies of the last 70 years? In those, the observed surface temp seems to be taken as some sort of truth.

  406. Andrew E Dessler says:

    Mosher: I used 2-m air temperature from the model. I’ve done the calculation with surface skin temperature and the results are basically the same.

  407. izen says:

    @-SM
    “..in those the observed surface temp seems to be taken as some sort of truth.”

    It IS some sort of truth, or at least a BEST estimate we have of it.

    AFAIK the attribution studies relied on the pattern, or fingerprint, of the available truth (the rise in troposphere and fall in stratosphere temperatures; polar and nocturnal amplification) more than on the precise observed magnitude matching specific modulz.

  408. izen says:

    @-HAS
    “I also think I recall seeing work showing that the global temp output from GCMs over the instrumental period and the 21st century could be emulated by a relatively simple set of linear equations. After all at these timescales the climate is a great big averaging machine with significant inertia.”

    While that is probably true for GMST, which could be curve-matched with simple equations without physical referents, it is less accurate for the distribution of temperature change and extreme events.

    Linearity is not an obvious feature of ice mass reduction or sea level rise.

  409. HAS says:

    izen “Linearity is not an obvious feature of ice mass reduction or sea level rise.” but within the context of the coming century? What I’d instinctively do in this context is use simpler modelling of the dominant short-term drivers/constraints of overall global temps, and use the more complex modelling to explore extremes and non-linearities. What will occur at the local level (which is where the policy issues lie) will be dominated by other impacts over this time period – GCMs need to be forced by pretty unrealistic assumptions to produce anything much by way of extreme events or non-linearities over this timescale. And in the end all that complexity doesn’t deliver a lot at the local level that crude measures of projected global temperature wouldn’t do.

    While I mentioned the linear fitting to GCMs more to make the point that complexity can be reduced to quite simple models without sacrificing much information, as I recall the work it was basically using just the forcings and the initial conditions. I should try and find it or replicate it rather than just surmising.

  410. zebra wrote “Is there some way to illustrate what is meant by the “true” value other than performing the thought experiment you describe?”

    I’ve given several other examples on this thread. Go read them.

  411. zebra wrote “The problem for me is that dikran is saying that there is something called the “true” TCR. But, so far, he is not providing an explanation as to what that means”

    That is just a lie, as your earlier comment shows

    zebra wrote “Is there some way to illustrate what is meant by the “true” value other than performing the thought experiment you describe?”

    How can I both have not explained what “true” TCR means and have given an illustration (I’ve actually given more than one)? You owe me an apology.

    HAS wrote “Dikranmarsupial “The best predictor of the transient response to future forcings is given by the “true” TCR of the physical climate system, which is not directly measurable” Whether or not it is the best predictor depends on assumptions about “true” TCR”

    No, it doesn’t. If you have an oracle and know the true value of the physical property, there are no assumptions about it, unlike an estimate.

  413. Nice analogy izen (can’t have too many graphics ;o)

  414. SM “If the same forcings can lead to wildly different temperatures over a 150 year period, then how does that result square with the attribution studies of the last 70 years? In those, the observed surface temp seems to be taken as some sort of truth.”

    I suspect the attribution studies try to separate the effects of internal climate variability from the effects of the forcings (for instance we have “measures” of ENSO that are independent of temperature). The simple model used in the Cawley et al paper mentioned earlier in discussion does that. I suspect you could do the same with the individual model runs for a like-for-like comparison and I suspect that would narrow down the spread of the ensemble.
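
    A rough sketch of that kind of separation on synthetic data (the trend size, the ENSO loading, and the noise level are all invented, and real attribution work is considerably more careful):

        import numpy as np

        rng = np.random.default_rng(2)
        years = np.arange(150)
        forced = 0.008 * years              # imposed forced trend (K/yr)
        enso = rng.normal(0.0, 1.0, 150)    # stand-in for an ENSO index
        temp = forced + 0.1 * enso + rng.normal(0.0, 0.08, 150)

        # Regress temperature on time and the ENSO index together.
        X = np.column_stack([np.ones(150), years, enso])
        coef, *_ = np.linalg.lstsq(X, temp, rcond=None)

        print("raw trend     :", np.polyfit(years, temp, 1)[0])
        print("ENSO-adjusted :", coef[1])

    Because the synthetic ENSO term is uncorrelated with time, the adjustment mainly tightens the trend estimate rather than shifting it; accounting for known modes of variability is what would narrow the spread in a like-for-like comparison with individual model runs.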

  415. HAS says:

    dikranmarsupial “If you have an oracle and know the true value of the physical property, there are no assumptions about it, unlike an estimate.” Even if you know the value you have to assume things about it to make it the best predictor.

    HAS, yawn. Sorry, you are just obfuscating the point being made with pedantry. I don’t know what you think you gain from that, but it doesn’t create a good impression.

  417. Besides if you have an oracle to tell you the “true” value of the parameter, it can probably give you the true model as well! ;o)

  418. … and to anticipate further pedantry, the true initial conditions.

  419. HAS says:

    dikranmarsupial, good to see you agree. The substantive point is that the difference between model worlds and the real world is not well respected. This was the basis of our earlier discussion, and this was just another example of model artifacts being given a status beyond their due.

    For example I’m still not sure if you accept the subsequent statement in my original comment, “if you are using GCMs to estimate the parameters for decadal prediction/projection we would want to be using models that were constrained to the instrumental period.”

  420. HAS wrote “The substantive point is that the difference between model worlds and the real world is not well respected.”

    no, that was not the central point. The central point is that estimates of observed TCR are affected by internal climate variability, and there is no viable means of estimating this variance without a model.

    “For example I’m still not sure if you accept the subsequent statement in my original comment, “if you are using GCMs to estimate the parameters for decadal prediction/projection we would want to be using models that were constrained to the instrumental period.”

    No, I explained earlier in the thread why that approach is invalid. If you do that, the ensemble mean is no longer a predictor of the forced response. This is because the observational estimates are affected by internal climate variability, and so don’t tell you the “true” TCR that “governs” the climate’s future course. That is why it is the central point.

  421. verytallguy says:

    the difference between model worlds and the real world is not well respected

    Assertion without evidence. Citation please – not respected by whom?

    Are you questioning whether Lewis and Curry should place such reliance on their EBM?

  422. verytallguy says:

    RickA

    we can start measuring effective transient climate response (eTCR).

    No, you can’t. You can better constrain the estimates with more data, but you cannot measure this parameter, as real world forcing will not match its definition.

    But we’ve been here before.

  423. Another point to make. The TCR, and ECS, are defined in terms of a change in temperature after a change in forcing *equivalent* to a doubling of atmospheric CO2 (either transient, or equilibrium). This is not necessarily going to be the same as when atmospheric CO2 has doubled in the real world because there are other external factors that influence the net change in external forcing (aerosols, short-lived GHGs, the Sun, volcanoes).

  424. HAS says:

    dikranmarsupial “there is no viable means of estimating this variance without a model” in one sense nothing gets estimated without a model, but I assume you are referring to GCMs. There are other viable ways to handle internal variance in TCR without a GCM, and they have their relative pros and cons.

    “This is because the observational estimates are affected by internal climate variability, and so don’t tell you the “true” TCR that “governs” the climate’s future course.” The problem with that line of argument is that the climate’s evolution is path dependent and once we know the recent climate it changes the best estimates of what comes next. There’s a whole set of your ensemble that can’t occur in the real world. You will recall Bayes had some thoughts on the subject.

    As I said, the difference between model worlds and the real world is not well respected.

  425. verytallguy says:

    Contemplating analogies.

    Consider performance of a racing car.

    We could think of a couple of metrics, say 0-60mph acceleration and top speed in a straight line. They are fundamental properties of the car.

    These are the equivalents of TCR (dynamic response) and ECS (final quasi steady state).

    However, timing a car around a track during a race (ie instrumental record, real world) can only give data allowing us to estimate these fundamental properties. In the same way the instrumental record can only allow us to estimate ECS and TCR.

    At no point in an actual race does the car do 0-60 in a straight line. In the same way, at no time in reality will CO2 changes exactly match 1% a year for 70 years as specified in the TCR definition.
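
    As an aside, the 70-year figure in that definition is just the doubling time of 1%-per-year compounding, which is easy to check:

```python
import math

# Time for CO2 to double under the idealised 1%/yr growth used in the
# TCR definition: 1.01**t = 2.
t_double = math.log(2) / math.log(1.01)
print(f"doubling time at 1%/yr: {t_double:.1f} years")   # ~69.7 years
```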

  426. HAS,
    I’m not quite sure what you mean by the end of your comment. It’s possible that you misunderstand what those who use models see as their value. It’s not that they think that models can perfectly represent the system they’re modelling (some models are better than others). Models are mostly used to answer questions – what happens if this changes, for example? Even if a model doesn’t represent a system perfectly, it’s still often the case that it can do a reasonable job of showing how a system responds to changes (with caveats relating to suitable timescales and spatial scales).

  427. verytallguy says:

    As I said, the difference between model worlds and the real world is not well respected.

    You’ve said it at least twice. Its meaning is opaque. The only relevance I can see to the discussion at hand is to say that L&C don’t “respect” the difference between their simple EBM and the complex real world. But I’d disagree with that – I think simple models can be very useful, although perhaps you do have a point in that Lewis seems to argue his methodology gives the One True Answer at times.

    Anyway, please clarify what you mean?

  428. “dikranmarsupial “there is no viable means of estimating this variance without a model” in one sense nothing gets estimated without a model, but I assume you are referring to GCMs. There are other viable ways to handle internal variance in TCR without a GCM, and they have their relative pros and cons.”

    O.K., how do you estimate the variance of a random process from a single (short) realization of that process? It is a bit like saying we can work out the variance of a die from a single roll. We can’t (a minimal numerical sketch follows at the end of this comment).

    “The problem with that line of argument is that the climate’s evolution is path dependent and once we know the recent climate it changes the best estimates of what comes next. “

    yes, but the “true” value of TCR is not path dependent, and that is what we actually want to know.
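
    To make the die-roll point concrete, here is a minimal sketch (synthetic numbers only, nothing climate-specific):

```python
import numpy as np

rng = np.random.default_rng(1)
die = rng.integers(1, 7, size=10_000)   # many rolls of a fair die

print(np.var(die[:1]))   # 0.0 -- a single realization carries no
                         # information about the spread
print(np.var(die))       # ~2.92, close to the true variance 35/12
```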

  429. As I understand it, climate modellers are just about able to make worthwhile decadal projections of climate. This is much more difficult than centennial scale as you need to predict the effects of internal variability as well as the forced response to get it right. So even if the climate’s evolution is path dependent we can’t usefully predict the path dependent component anyway beyond a decade, and so pruning the ensemble for this particular purpose does not make much sense.

  430. HAS says:

    HAS “the difference between model worlds and the real world is not well respected” is a bit of polemic really, to draw attention to the propensity to down weight empirical evidence in the face of conflicting output from models.

    The preoccupation with “true” ECS (a model construct) is a case in point, and the statement made in the conclusion of Dessler et al (and repeated here a number of times) was what got my attention: “Given that we only have a single ensemble of reality, one should recognize that estimates of ECS derived from the historical record may not be a good estimate of our climate system’s true value.” It rings alarm bells.

    And this is just a particular case of treating the observed climate as just one more GCM model run.

    I have no beef with what aTTP says about the utility of increasingly complex models, and our capability to experiment in this way. But what they produce ain’t on a par with reality, that’s something special in science.

    I blame video games with too flash VR and AR, we’ve lost a whole generation.

  431. HAS says:

    dikranmarsupial if you start with all model runs in the ensemble and then eliminate (or down weight) those according to their fit to what passes for reality, you can still have tons of runs in what remains.

    “but the “true” value of TCR is not path dependent”. Only because it got defined that way, and that definition depends on constraints on the initial conditions of the models to be included. After the observations, the Bayesian would say “true” TCR changed to reflect the more restricted universe we’re modelling in. That’s why I enjoyed the Schrodinger box reference.

  432. HAS wrote “HAS “the difference between model worlds and the real world is not well respected” is a bit of polemic really, to draw attention to propensity to down weight empirical evidence in the face of conflicting output from models. “

    Nobody is down weighting empirical evidence, just making sure a source of variance in the empirical evidence is properly considered. The best estimate we currently have of this variance is from the models. I pointed out earlier that we need to consider all sources of information, observational, model based and paleoclimate. In this case, the observational evidence does not conflict with the outputs from the models, as I said upthread, the observational estimates we have are plausibly samples from the distribution of model estimates.

  433. HAS wrote “dikranmarsupial if you start with all model runs in the ensemble and then eliminate (or down weight) those according to their fit to what passes for reality, you can still have tons of runs in what remains.”

    but it will give a biased estimate of the true TCR, so it would be a bad thing to do. I note you have not addressed the reasons I have given for why this is not a good thing to do.

    ““but the “true” value of TCR is not path dependent”. Only because it got defined that way, and that definition depends on constraints on the initial conditions of the models to be included”

    Rubbish, the “true” value of TCR is independent of initial conditions, as I have repeatedly pointed out. Yet again you are demonstrating that you don’t understand what is meant by the “true” value of TCR.

  434. “The preoccupation with “true” ECS (a model construct)”

    “true” ECS is not a model construct, that would be just “ECS” on its own.

  435. “And this is just a particular case of treating the observed climate as just one more GCM model run.”

    If you had a perfect climate model, made up of an ensemble of parallel Earths, that treatment would be exactly correct. It is reasonable to treat them as being exchangeable with GCM runs, provided you remember the limitations and uncertainties of the models. That is the basic principle of Monte-Carlo simulation, which was developed by some very smart people for the Manhattan project, and which has proved extremely useful in many fields of scientific research, including climate. If you don’t understand why this is a reasonable thing to do, you don’t understand how the models are used.

  436. ” After the observations the Bayesian would say “true” TCR changed to reflect the more restricted universe we’re modelling in. “

    O.K., we’ll add Bayesianism to the list of things that you don’t understand. A Bayesian would say that there is an unknown fixed “true” TCR, but that our state of knowledge about its value changes according to changes in our prior knowledge and the evidence. “True” TCR is not a random variable.

  437. zebra says:

    I’m going to follow up on izen’s analogy with one that is more congruent with dikran’s method, which stipulates identical forcings on identical Earths, but with different initial conditions.

    We wish to estimate the time (t real) it takes for a car to go a distance (s) given a particular initial velocity (v real).

    So, t real would be “true” TCR. (As ATTP (and I) define “true” TCR.)

    zebra,
    Good grief. All that I’m meaning by the term “true” TCR is the value that the real system probably has.

    Now, using dikran’s method, we create a model t = s/v, and we calculate t(n) for a number (N) of different initial velocities v(n). Then we find the average of the t(1….N).

    I would say that t average is not an estimate of t real, whether N is 100 or 1000. There is no reason to believe that t real is not an outlier distant from the average.

  438. of course different initial velocities rather than different initial distances is the key reason why that analogy is bogus. The differences in initial conditions of the climate system don’t give rise to long term warming (which would require a planetary energy imbalance), so they are not at all analogous to different initial velocities.

  439. zebra,

    I would say that t average is not an estimate of t real, whether N is 100 or 1000. There is no reason to believe that t real is not an outlier distant from the average.

    Why would you believe that? One can’t rule this out, but if you have multiple sets of data of some system and if you have some kind of method for estimating something about that system from that data, why would you expect the “true” value to somehow be very far from the average of your estimates? It’s possible, but why would that be your expectation?

  440. The Law of Large Numbers suggests this is a very reasonable thing to do.

    “I would say that t average is not an estimate of t real, whether N is 100 or 1000. ”

    the reason for this is there is no “t real” as it depends on the initial conditions, which “true” TCR doesn’t (only “observed” TCR, which is not the same thing).

  441. Scrap my previous comment.

    I would say that t average is not an estimate of t real, whether N is 100 or 1000. There is no reason to believe that t real is not an outlier distant from the average.

    This comment suggests that Zebra is still not making a distinction between “true” TCR (a physical property of the climate system) and “real” TCR (meaning the TCR we estimate from the observations). They are not the same thing. “true” TCR is what governs future climate, and so is the quantity we want to estimate. The whole problem with L&C is indeed that internal climate variability may mean that their “real” TCR estimate is not particularly close to the average of a perfect model ensemble, which would be the “true” TCR we want to know.

    The analogy still isn’t congruent with “dikran’s method” (how many times do I need to say that there isn’t a “dikran’s method”?) as true TCR is independent of the initial conditions, but zebra’s t variables are not, so there is no “true” t.
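
    For anyone who wants the distinction in runnable form, here is a toy version of the perfect-model-ensemble idea. A fixed linear trend stands in for the forced response governed by the “true” value, AR(1) noise stands in for internal variability, and none of the parameter values are meant to be realistic:

```python
# Toy "parallel Earths": one fixed 'true' forced trend, many realizations
# of internal variability. Illustrative only -- not a GCM, and not LC18.
import numpy as np

rng = np.random.default_rng(42)
n_years, n_earths = 150, 1000
true_trend = 0.01                  # K/yr, plays the role of the "true" value

def ar1_noise(n, phi=0.6, sigma=0.1):
    """Crude stand-in for internal variability (an AR(1) process)."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + sigma * rng.standard_normal()
    return x

t = np.arange(n_years)
estimates = np.empty(n_earths)
for i in range(n_earths):
    temp = true_trend * t + ar1_noise(n_years)
    estimates[i] = np.polyfit(t, temp, 1)[0]   # "observed" trend on one Earth

print(f"true trend    : {true_trend:.4f} K/yr")
print(f"ensemble mean : {estimates.mean():.4f} K/yr")
print(f"5-95% spread  : {np.percentile(estimates, 5):.4f} to "
      f"{np.percentile(estimates, 95):.4f} K/yr")
```

    The ensemble mean converges on the value that was built in; any single member, including the one realisation we actually live in, need not sit close to it.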

  442. Nic Lewis already addressed Dessler’s criticism here: https://judithcurry.com/2018/04/30/why-dessler-et-al-s-critique-of-energy-budget-climate-sensitivity-estimation-is-mistaken/
    At face value, it doesn’t seem that Dessler understands L&C, which is strange because it is not that hard to follow, and it is only the latest in a pretty substantial literature on the energy budget approach that is now at least 16 years old. Some of us have been skeptics for a lot longer than that, precisely because “back-of-the-envelope” EBM-type calculations have always suggested AOGCMs are too sensitive.

  443. zebra says:

    ATTP,

    You misread that sentence.

    “There is no reason to believe that t real is not an outlier” doesn’t mean that I believe it is an outlier.

    I’m talking about the validity of the method. Please read the whole comment again carefully.

  444. verytallguy says:

    propensity to down weight empirical evidence in the face of conflicting output from models

    I’m still confused.

    You don’t like Dessler’s work but Lewis and Curry also rely on a model.

    Are you saying their work should be down weighted?

  445. zebra said:

    “I would say that t average is not an estimate of t real, whether N is 100 or 1000. There is no reason to believe that t real is not an outlier distant from the average.”

    In trying to understand zebra’s issues, it’s important to realize that the “ensemble” used here is not the same as an ensemble used for example in the model for statistical mechanics. For stat mech, the ensemble is used to represent an enormous number of distinguishable (or indistinguishable) particles.

    Yet, for climate, it’s known that there is only one collective behavior that contributes to the “ensemble” and that’s the equatorial standing wave dipole. This cannot be thought of as a statistical ensemble since it’s a single collective behavior.

    As an analogy, think of it as a tidal behavior where we are measuring a local sea level height. If we did a statistical ensemble of tides with different forcing conditions, the ensemble average would come out to zero! Yet, we know that only one of the ensemble members reflects the “true” value of the tide. The rest are false. I think that is what zebra is getting at. By doing an ensemble average, we don’t know what the “true outlier” is.

    And this all revolves around the fact that research hasn’t come up with a good model for the climate dipole. We always resort to averaging it to zero.

  446. angech says:

    The atmosphere seems to have changed a little here since the publication in a journal of this reputable paper. Or was that the publication in a previously respectable journal of this paper?
    Good to see nobody arguing that it needs more peer review, or that the editor should be sacked for letting it through with pal review.
    And even better that 1.6 C now seems to have become an acceptable lower limit [not an accepted true value] instead of the cries that it must be above 2.0C.

    I missed this comment by DG though Mosher did pick it up.
    Kudos, Mosher.
    Dave_Geologist says: April 27, 2018 at 2:43 pm
    ” Dessler’s point AFAICS is that if you take a suite of physics-based models, where you know the ECS in advance, and try to calculate it the way LC13 and LC18 do, you get the wrong answer.”

    Did you really mean to say that Dave?

  447. Paul writes “In trying to understand zebra’s issues, it’s important to realize that the “ensemble” used here is not the same as an ensemble used for example in the model for statistical mechanics.”

    No, but the idea of a Monte Carlo simulation is exactly the same.

    “Yet, for climate, it’s known that there is only one collective behavior that contributes to the “ensemble” and that’s the equatorial standing wave dipole. “

    I’m not so sure of that, citation please.

  448. RickA says:

    Verytallguy says:

    “No, you can’t. You can better constrain the estimates with more data, but you cannot measure this parameter, as real world forcing will not match its definition.”

    No. I am measuring something different from TCR. I am calling it eTCR, but what I am measuring is the temperature difference over a doubling of CO2, which takes all forcings into account, including solar changes, volcanoes and everything else which impacts GMST.

    So what I am measuring is not the same as TCR from the simulated model.

    What I am measuring actually is relevant to the real world, because it is a real world measurement.

    My point is we should dump model TCR and model ECS as they have very little meaning.

    They vary from model to model and change as the models change.

    I expect that scientists will be measuring eTCR (or whatever they choose to call it), starting with the first doubling at 560 and taking measurements every doubling thereafter. I have no doubt scientific publications will look at these measurements, graph them and draw conclusions about them.

    Since these eTCR measurements will be from the real world, they will mean quite a bit more than model metrics, which frankly mean nothing (at least to me).

  449. Jonathan,

    On face value, it doesn’t seem that Dessler understands L&C

    I don’t think the issue is that Andrew Dessler doesn’t understand L&C. I think his point is that there are a lot of details, assumptions, and choices that end up being hard to follow. The basics of L&C are essentially trivial, so someone with Andrew Dessler’s background clearly understands them.

    pretty substantial literature on the energy budget approach that is now at least 16 years old.

    Yes, but many other people who have used this approach accept that it might not produce a result that best represents how sensitive our climate actually is. There’s also a growing body of work that illustrates potential issues with this approach.

  450. Rick,
    There’s nothing wrong with trying to understand how much we will probably warm at some point in the future in the real world. The problem, though, is that the real world is complicated and many factors can influence this result (internal variability, the Sun, volcanoes, aerosols, short-lived GHGs, long-lived GHGs). TCR/ECS estimates are an attempt to provide a result that is broadly independent of these factors (i.e., they illustrate how sensitive our climate is to external perturbations). We can then use these to try to estimate how much we will actually warm under different possible future pathways. Throwing them out and replacing them with something that is ultimately much more complicated doesn’t seem like much of an improvement.

  451. Here is a citation for my comment that there is “only one collective behavior that contributes to the “ensemble” “

    ENSO as an Integrating Concept in Earth Science, Science, 314(5806), 1740–1745, 2006.
    https://www.pmel.noaa.gov/pubs/outstand/mcph2969/mcph2969.shtml
    “The El Niño–Southern Oscillation (ENSO) cycle, a fluctuation between unusually warm (El Niño) and cold (La Niña) conditions in the tropical Pacific, is the most prominent year-to-year climate variation on Earth.”

    I think it’s the most prominent natural variation by far. Volcanic disturbances are second but that’s a different class of perturbation. There is also a multidecadal variation that aligns more with PDO. Variation in TSI is way down the scale.
    ENSO is the sore thumb of climate science. No citation for that comment.

  452. I don’t think that citation supports that claim, especially not in its original form:

    “Yet, for climate, it’s known that there is only one collective behavior that contributes to the “ensemble” and that’s the equatorial standing wave dipole. “

    Important, yes, “only”, no.

  453. BBD says:

    Throwing them out and replacing them with something that is ultimately much more complicated doesn’t seem like much of an improvement.

    Unless, like RickA, your aim is to obfuscate the issue for political motives. In case of raised eyebrows, this statement of fact is based on a number of years’ interaction with RickA.

  454. > I think his point is that there are a lot of details, assumptions, and choices that end up being hard to follow.
    They all seem very well explained and justified and I can’t see any radical assumptions. One of the reasons I never wrote a paper on this stuff, despite coming to a similar conclusion a long time ago, is that there are so many forcing, feedback, and heat sink sources, and so much literature on each one, and so many contradictory studies, that it is almost impossible for someone to write a serious academic paper as a hobby in this area (no matter how smart you are). IOW, the complexity is characteristic of the field, not of the approach L&C take (in my humble opinion). It’s not surprising to me that it took someone like Nic to do this work (not to discount Judith’s contribution).

  455. The Very Reverend Jebediah Hypotenuse says:


    …it is almost impossible for someone to write a serious academic paper as a hobby in this area (no matter how smart you are).

    And how is that different from every other field of scientific research?

  456. Jonathan,
    But another point to consider is that if you are aware of lots of other work that produces different results, one doesn’t always need to delve into the details in order to highlight a potential issue. It’s clear that these energy balance approaches are quite simple and rely on a number of assumptions (feedbacks constant, for example) and choices (temperature dataset, forcing timeseries, system heat uptake rates). If our understanding of the physics of the system suggests, for example, that the ECS is probably above 2K, and someone comes along and produces a detailed analysis using a simple model suggesting that it is probably below 2K, my first suspicion would be that the simple model is just too simple. If you’re going to produce a result that is largely at odds with our understanding of the system being studied, then it is also important to explain this difference.

  457. ENSO is the only recognized collective behavior that is compensated out of temperature time-series; search for “ENSO-corrected”.

    Forced responses that are compensated out of the time-series are volcanic and then solar.

    This paper DOI:10.1002/2013EF000216 discusses ENSO contributions, where they show the correlation of the 30°N–30°S temperature to NINO3.4, with the volcanic disturbances indicated by vertical hatches.

  458. BBD says:

    my first suspicion would be that the simple model is just too simple.

    “Everything should be made as simple as possible, but not simpler…”

    Unless of course the purpose of the exercise is not elucidation.

  459. [AT, please. -W], I think it is the other way around. The EBMs represent our best understanding of the physics of the system: you can’t get much more fundamental than energy conservation for the planet as a whole. IMO (and this has been my opinion for at least 20 years), the AOGCMs are not a good way to try and figure out bulk properties of the planet because they rely on far too many assumptions about physical processes that we still don’t understand well. In some cases we don’t even really know how uncertain we are about those processes (e.g. clouds). (let me preface that I was a theoretical physics/pure math undergrad and theoretical physics PhD before switching to machine learning/AI. So I know something about physics).

    So unless there are really good reasons why the EBM approach won’t work, it seems to me to be a better place to start. I agree that some of the assumptions may not be correct, such as the assumption that lambda is independent of the climate state. But when the evidence against the assumption is weak and comes only from AOGCM simulations, then I am more than a little skeptical. Especially given that in other approaches to estimating climate sensitivity (e.g. paleo-based estimates last time I checked), the dependence of sensitivity on climate state is pretty much ignored, even though it is much more likely to be an issue than in shorter-term EBM-based estimates (a cold, dry earth likely has a very different sensitivity than a warm, wet one for example).

  460. Jonathan,

    The EBMs represent our best understanding of the physics of the system: you can’t get much more fundamental than energy conservation for the planet as a whole.

    Except the other estimates also assume energy balance. The EBMs also typically assume that feedbacks will be constant and that internal variability has not impacted the observed warming (other than in ways that can be compensated for by choosing the starting and end points). Neither of these assumptions is likely to be correct. Hence, we almost know in advance that the EBM results will be “wrong” in some way. Of course, all models are wrong, so the question is how big an error this is likely to be.

    Clearly energy balance estimates are consistent with other estimates, so they’re not “wrong”. However, assuming that climate sensitivity will probably be low, based on EBM results, is an assumption that may well turn out to be wrong.

  461. John Hartz says:

    The Lewis & Curry paper has garnered some attention in the right-wing media. For example,

    Here’s One Global Warming Study Nobody Wants You To See, Editorial, Investor’s Business Daily, Apr 25, 2018

    To date, I have not come across any discussion of it in the MSM.

  462. The EBMs represent our best understanding of the physics of the system

    Yikes! EBMs might plausibly be our best method of estimating TCR/ECS, but they definitely don’t represent our best understanding of the physics of the system!

    IMO (and this has been my opinion for at least 20 years), the AOGCMs are not a good way to try and figure out bulk properties of the planet because they rely on far too many assumptions about physical processes that we still don’t understand well.

    EBMs deal with those processes by effectively assuming they don’t exist. Ignoring known unknowns is not generally a good idea. One of those unknowns is the variance of the EBM estimator due to internal climate variability. GCMs can give an estimate of that, which can’t be obtained from the observations.

  463. RICKA says:

    ATTP:

    I hear your point, but still don’t understand what the use is of model TCR and ECS.

    We simulate a 1% increase per year of CO2 and then run models to see what happens.

    However, any effort to compare the model metric to the real world results in pushback.

    I am told the real world cannot be compared to the model metric, because of volcanoes or other non-CO2 forcings changing over the period.

    Well, what is the point of a model metric which cannot be tested and verified using real world data?

    I see this in the Lewis and Curry paper, which is criticized and challenged, and yet is using real world data to constrain TCR and ECS.

    My question is how we will ever know what TCR or ECS are if they cannot be measured, but must always be based on a particular model, and therefore vary from model to model and change as the model is changed over time.

    On the other hand, science can easily measure GMST at 560 ppm or 580 ppm and so forth.

    Sure, keep model TCR and ECS – but real eTCR seems more real and more useful to me.

    There is an answer, and while there is some small debate over different GMST data sets, they are far more tightly constrained than the various model TCR and ECS estimates, which range from 1.5C to 4.5C (for ECS).

  464. BBD says:

    Especially given that in other approaches to estimating climate sensitivity (e.g. paleo-based estimates last time I checked), the dependence of sensitivity on climate state is pretty much ignored, even though it is much more likely to be an issue than in shorter-term EBM-based estimates

    No, that’s not correct. The state dependence of sensitivity is examined in the literature overview and methodological comparison carried out by the PALAEOSENS Project, which remains the most wide-ranging and detailed palaeoclimate sensitivity investigation to date AFAIK. And which found ECS to be >3C for 2 x CO2e.

  465. Rick,

    I hear your point, but still don’t understand what the use is of model TCR and ECS.

    1. It’s an indicator of sensitivity. The bigger it is, the more sensitive.

    2. Even though not exact, it would still give an indication of how much we would warm for a given change in external forcing.

    real eTCR seems more real and more useful to me.

    Except your real TCR is ill-defined. When we’ve doubled atmospheric CO2 (560ppm), what will be the aerosol forcing, the solar forcing, the short-lived GHG forcing, the volcanic forcing, etc.? Some of these will also depend on the emission pathway, which is not unique. We can’t predict volcanic eruptions. Some of these have longer lifetimes than others. So, your eTCR doesn’t really have a clear definition.

  466. BBD says:

    I see this in the Lewis and Curry paper, which is criticized and challenged, and yet is using real world data to constrain TCR and ECS.

    Please read the thread, RickA.

  467. But the AOGCMs are not constrained by energy balance in the way that EBMs are. Too much warming? Chuck in more aerosols. Evidence against aerosol forcing being that negative? Chuck in aerosol/cloud interactions with positive correlation (yay, seeding works! the model says so 🙂 ).

    The assumption that feedbacks will be constant for a small change in forcing (small relative to the changes in forcing on a paleoclimate time-scale) doesn’t seem unreasonable. And L&C address the uncertainty associated with internal variability. Unless you want to claim that some specific properties of the pattern of surface warming have conspired to mislead us and bias the EBM estimate significantly downwards. But then that kind of behavior must be happening all the time, with or without forcing changes, and so wouldn’t we expect far greater variability in the historical temperature record? You can’t have a flat hockeystick shaft *and* have climate sensitivity depend in weird and wonderful ways on the particular surface temperature field at any point in time.

  468. BBD says:

    In order that we do not waste too much time: RickA’s ‘argument’ – made many, many times elsewhere – is that we just have to wait until we hit 560ppm before we can get a handle on TCR. And this being the case, there is no good argument for emissions reductions etc. at this point.

  469. Willard says:

    AT,

    Please don’t misunderestimate the importance of DrJ’s point regarding Andrew’s understanding:

    His point can easily reach biblical proportions.

  470. BBD says:

    The assumption that feedbacks will be constant for a small change in forcing (small relative to the changes in forcing on a paleoclimate time-scale) doesn’t seem unreasonable.

    Out of curiosity, when in the Cenozoic did forcing last change so rapidly as it has done over the last 50 years and is projected to do over the next 50 years?

  471. Ignoring known unknowns is not generally a good idea.

    Modeling a physical system at the appropriate scale for the phenomena you are trying to predict *is* a good idea. We don’t run many-body quantum mechanical simulations of the fundamental particles in I-beams in order to figure out how big the beam should be.

  472. Joshua says:

    Jonathan –

    …not to discount Judith’s contribution

    I’m hoping you might describe Judith’s contribution. I asked Nic what criteria he used to determine co-authorship, but he has declined to answer. Should we use the same logic that Nic used, in reverse engineering to make determinations about co-authors’ contributions on other papers?

  473. Willard says:

    > Unless you want to claim that some specific properties of the pattern of surface warming have conspired to mislead us and bias the EBM estimate significantly downwards.

    The alternative being, of course, to recognize that the main selling point of oversimplistic modulz based on basic accounting is that it can easily be promoted by luckwarm megaphones, e.g.:

  474. Out of curiosity, when in the Cenozoic did forcing last change so rapidly as it has done over the last 50 years and is projected to do over the next 50 years?

    I don’t know, but that is not the point. I was referring to the absolute size of the change, not its derivative.

  475. Jonathan,
    A few points. In a GCM, climate sensitivity is emergent. In an EBM you can either do as LC18 does and try to estimate climate sensitivity, or you can assume a sensitivity and run the EBM to project future warming. In neither case, however, does climate sensitivity really emerge from the model.
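
    To illustrate the second use (assume a sensitivity, then project), here is a minimal one-box energy-balance sketch. The parameter values are illustrative round numbers, and this is not the LC18 method:

```python
# Minimal one-box energy-balance model run forward under an *assumed*
# sensitivity. A sketch only; all parameter values are illustrative.
import numpy as np

F2x = 3.7         # W/m^2, forcing from doubled CO2
ecs = 3.0         # K, assumed equilibrium sensitivity
lam = F2x / ecs   # W/m^2/K, feedback parameter implied by that ECS
C = 30.0          # W yr m^-2 K^-1, effective heat capacity (illustrative)

years = np.arange(140)
F = F2x * years * np.log(1.01) / np.log(2)   # idealised 1%/yr CO2 forcing

T = np.zeros(len(years))
for i in range(1, len(years)):
    T[i] = T[i - 1] + (F[i - 1] - lam * T[i - 1]) / C   # Euler step, dt = 1 yr

print(f"warming at year 70 (CO2 doubled): {T[70]:.2f} K")   # a TCR-like number
```

    The sensitivity here is an input, not an output, which is the point being made above.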

  476. “Modeling a physical system at the appropriate scale for the phenomena you are trying to predict *is* a good idea.”

    Indeed, and EBMs ignore physical processes that have impacts on this scale (that is what the discussion has been about)

  477. Jonathan,

    I don’t know, but that is not the point. I was referring to the absolute size of the change, not its derivative.

    Relative to the natural greenhouse effect, we can induce a change greater than 10%. This is probably not small. Also, bear in mind that some of the non-linearity could be real non-linearities in the feedbacks, but some could simply be different regional warming rates (polar amplification, for example).

  478. BBD says:

    @Jonathan

    ATTP beat me to it. I think you have no argument wrt the size of modern forcing change and nonlinear feedbacks vs palaeoclimate forcing change. That’s twice you’ve been wrong about palaeo in a short space of commenting. Maybe best leave it alone?

  479. Willard says:

    > Modeling a physical system at the appropriate scale for the phenomena you are trying to predict *is* a good idea.

    The best model of a cat is a cat, preferably the same cat. By the same token, here would be what the best model of the Earth would look like:

    This thing is 4.5 billion years old.

    Its surface area is more than 500 million square kilometer.

    It is inhabited by more than 7.5 billion human beings.

    So, what’s the appropriate scale, again?

    Alternatively, pray tell more about how I-beams relate to that thing we call the Earth.

  480. paulski0 says:

    Steven Mosher,

    The reason I am pressing this is that after looking at the spread of results in Andrew’s 100 run ensemble I am wondering how that squares with attribution studies of the post-1950 period.

    What the IPCC attribution statement ultimately says is that there is a less than 5% chance that natural factors alone could have caused more than 0.32K warming over 1951-2010. Given that there is no indication of a source for positive natural forced trend we can reasonably assume that the majority of that allowance (say, a bit more than 0.2K) is allotted to internal variability.

    The CMIP5 MPI-ESM models appear to have similar variability statistics to the version used in the 100-member ensemble, so I’ll use the picontrol runs from those to find statistics for 60-year trends. I found that the 95% upper bound is 0.12K, well within the allowance of the attribution statement.

    I think the only CMIP5 model with variability which could challenge the confidence of the attribution statement is GFDL-CM3, with a 95% upper bound of 0.33K. I did some EffCS testing with the 5-member historical ensemble for this model, using 1869-1882 and 1996-2005 periods, and found a full range of 2.3-4.6K based on a reasonable approximated forcing increase. More than double the 5-95% range of the 100-member MPI-ESM ensemble.
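
    For anyone wanting to reproduce the flavour of that calculation, here is a sketch of how one gets a distribution of unforced 60-year trends from a control run. The control series below is synthetic AR(1) noise standing in for real piControl output, so the numbers themselves mean nothing:

```python
import numpy as np

rng = np.random.default_rng(7)
n, phi, sigma = 2000, 0.7, 0.08   # series length and AR(1) parameters (made up)
ctrl = np.zeros(n)
for t in range(1, n):
    ctrl[t] = phi * ctrl[t - 1] + sigma * rng.standard_normal()

window = 60
yrs = np.arange(window)
# Overlapping windows, so this is only a rough sampling of the distribution.
trends = np.array([np.polyfit(yrs, ctrl[i:i + window], 1)[0] * window
                   for i in range(0, n - window, 10)])   # K per 60 years

print(f"95th percentile of unforced 60-yr trends: "
      f"{np.percentile(trends, 95):.3f} K")
```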

  481. Steven Mosher says:

    just think.. year after year we will be able to update nics estimate, and have the same discussion.

    on the other hand if his method had shown 3c, desslers paper and a few others would never have been written. sometimes errors drive us closer to understanding.

    i wish more gcm groups would do large ensembles..

  482. paulski0 says:

    Jonathan Baxter,

    But the AOGCMs are not constrained by energy balance in the way that EBMs are. Too much warming? Chuck in more aerosols…. Evidence against aerosol forcing being that negative? Chuck in aerosol/cloud interactions with positive correlation (yay, seeding works! the model says so

    It seems that you really don’t understand the L&C paper, or EBMs. It’s in EBMs where you can literally set aerosol forcing to whatever you want. In AOGCMs aerosol forcing is an emergent property.

    By the way, one cause of excessive negative aerosol forcing in some models has been found to be due to use of a simple empirical linear model to represent a key aerosol-cloud interaction process. It turns out that didn’t represent the complexity of the processes involved, causing significant biases.

  483. Willard says:

    > It seems that you really don’t understand the L&C paper, or EBMs.

    No need to worry, Paul. DrJ left the building.

  484. Dave_Geologist says:

    HAS

    Dave “LC18-style energy-balance models have …” Forget all the limitations in both types of models and move to evaluate the approaches on their ability to estimate global temperatures during the 21st century (arguably their most important use). Which class on average is doing better for the first two decades?

    Don’t know and don’t care. 17 years is too short to measure a static climate, let alone evaluate trends in a varying one. As has been known since the 1950s.

    The mainstream GCMs track nicely within the observed range, and have done since forever (at least since aerosol forcings were introduced). The myth that climate models run hot is so 2008. Where have you been this last decade? Same safe-space as Nigel Lawson? If ECS were really 1.5°C, GCMs would run way too hot, or we’d have the forcings very wrong. The same forcings LC18 used, so they’d be wrong too. They don’t run hot (if you claim that they do, that merely reveals that you’re in denial). Ergo, ECS can’t be 1.5°C. Dessler and Marvel have given sound physical reasons why that is the case. You can’t measure what they claim to measure from one realisation. End of.

    That contains a lot of information, it is directly and easily testable, and early indications are that it could be doing better than GCMs

    Only if you ignore every year since 2014. And even then, see above.

    I also think I recall seeing work showing that the global temp output from GCMs over the instrumental period and the 21st century could be emulated by a relatively simple set of linear equations

    So do I. It was either disingenuous to the point of dishonesty, or complete bullshit from someone whose understanding was below undergraduate level.

    The observations appear an outlier, and I suspect that continues through the first two decades of the 21st century

    I, on the other hand, know that it doesn’t continue much beyond 2008, and certainly not beyond 2013.

    I looked at Marvel et al and noted earlier it doesn’t seem to apply best practice balance model techniques

    Which you with your demonstrated expertise are ideally placed to adjudicate. Not.

  485. The Very Reverend Jebediah Hypotenuse says:

    Speaking of no need to worry…
    Rather predictably, L&C18 is being flogged all over the denialosphere as the latest final nail in the coffin of AGW:

    E.G.:
    By E. Calvin Beisner, Ph.D. of the (in)famous Cornwall Alliance…

    On November 10, 1942, after British and Commonwealth forces defeated the Germans and Italians at the Second Battle of El Alamein, Winston Churchill told the British Parliament, “Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.”
    In The Hinge of Fate, volume 3 of his marvelous 6-volume history of World War II, he reflected, “It may almost be said, ‘Before Alamein we never had a victory. After Alamein we never had a defeat’.”
    The publication of Nicholas Lewis and Judith Curry’s newest paper in The Journal of Climate reminds me of that. The two authors for years have focused much of their work on figuring out how much warming should come from adding carbon dioxide to the atmosphere. In this paper they conclude that it’s at least 30% and probably 50% less than climate alarmists have claimed for the last forty years.
    In fact, there are reasons to think the alarmists’ error is even greater than 50 percent. And if that is true, then all the reasons for drastic policies to cut carbon dioxide emissions—by replacing coal, oil and natural gas with wind and solar as dominant energy sources—simply disappear. Here’s another important point.
    For the last 15 years or more, at least until a year or two ago, it would have been inconceivable that The Journal of Climate would publish their article. That this staunch defender of climate alarmist “consensus science” does so now could mean the alarmist dam has cracked, the water’s pouring through, and the crack will spread until the whole dam collapses.
    Is this the beginning of the end of climate alarmists’ hold on climate science and policy, or the end of the beginning? Is it the Second Battle of El Alamein, or is it D-Day? I don’t know, but it is certainly significant. It may well be that henceforth the voices of reason and moderation will never suffer a defeat.

    That meme-salad is about as close to a Godwinning as you can get without actually mentioning Mr Hilter.

    And – Isn’t it spiffy that L&C18 has not only dealt a mortal blow to temperature alarmists, but also ocean acidification alarmists. Wind-farm alarmists and solar panel alarmists everywhere – rejoice!

  486. Dave_Geologist says:

    gcms need to be forced by pretty unrealistic assumptions to produce anything much by way of extreme events or non-linearities over this timescale.

    Just out of curiosity, HAS, have you ever looked at the output of a single model run (simulation), as opposed to the average of hundreds of runs which you see in the summary graphs? And not just the global average, but the geographical distribution. If you had, you’d know how silly that claim is.

    I think I know why you haven’t answered Willard’s question about reading AR5. I’m pretty sure there are some in there.

    And if you’d read the Methods section of a few papers, you’d also know how silly the forcing part of the claim is. Go on, provide us with a link to the literature where a GCM has been run with pretty unrealistic forcings.

  487. Willard says:

    > It was either disingenuous to the point of dishonesty, or complete bullshit from someone whose understanding was below undergraduate level.

    It could be neither, Geo. Here’s something from a friend of mine:

    https://judithcurry.com/2012/12/04/multidecadal-climate-to-within-a-millikelvin/

    As long as we all agree that this is exploratory work, we shouldn’t mind much.

    On a general note, I think this ClimateBall episode (in fairness, I think the same of all episodes) would be more profitable if it was taken as an opportunity to explore further. Editorial comments tend to reach a fixed point. When they dominate exchanges, all threads read the same.

    HAS’ contributions are so far quite minimal. A few pokes here and there, with some intimation that he really knows what he’s talking about, without offering anything tangible. There’s no need to respond blow by blow.

    Think of exchanging with him in terms of this ClimateBall energy balance model. If you make more effort than he does to bait people, he wins. If he needs to scream into his dogwhistle to make himself heard, he’ll lose interest. If more and more scientific tidbits get added on the table, his sideswipes and his concerns fizzle.

    Always consider that ClimateBall is more a race than a boxing match.

  488. GCMs are arguably the best approach as they are able to represent the natural variability that lies at the root of the uncertainty in climate sensitivity estimates.
    Variability and sensitivity go hand in hand.

  489. The Very Reverend Jebediah Hypotenuse says:


    Always consider that ClimateBall is more a race than a boxing match.

    Some days, perhaps a combination of both a race and a boxing match – i.e. this.

  490. Dave_Geologist says:

    Willard
    Re your general note, probably why ATTP had second and third thoughts! Although I think (or at least hope) that more than 1% will recognise the stream of evasions and unanswered questions, and the acceleration into a Gish Gallop rather pulls aside the curtain of “genuine inquiry”. The Gallop is sufficiently well known outside ClimateBall circles that it ought to be a giveaway for at least the low tens of percent in the wider audience.

    I must admit though, that some of my posts are selfish. I’m one of those people who speaks-to-think (or writes-to-think), i.e. I need the discipline of setting something out in words to organise my own thoughts and understanding. So I benefit even as I realise I have no hope of swaying HAS, even if he is sincere.

  491. zebra says:

    dikran,

    The analogy still isn’t congruent with “dikrans method” (how many times do I need to say that there isn’t a “dikrans method”) as true TCR is independent of the initial conditions, but Zebras t variables are not, so there is no “true” t.

    I tried to make it as simple as possible but you still seem confused. The analogy is really pretty tight:

    “Zebra’s t variables” are the equivalent of the TCR(n) that would be generated in your method.

    So we have your initial condition(1) (e.g. SST distribution), we run the model, and we get TCR(1).
    As we have my initial condition v(1), we run the model, and we get t(1).
    And so on.

    Your “true” TCR is then generated by averaging TCR(1….N). TCR(1…N) obviously are dependent on the initial condition. (Remember, you are the one who described this process originally.)

    But really, what you call “true” TCR is simply “the average”.

  492. Dave_Geologist says:

    So, on the “explore further” theme:

    Is this a ridiculously naive thought? Perhaps dikran can adjudicate.

    EBMs ultimately reduce to finding the gradient of a line, yes? Perhaps of multiple line segments. In a simple linear regression (OLS), the more scatter there is in the cross-plot, the lower the gradient you extract. If I start with x = y, and use a random number generator to perturb the y values, the regression will return a slope close to 1 (and an R² close to 1) for small perturbations. For larger perturbations, the R² will be smaller but the slope will also increasingly be less than 1. By the time the data are very scattered (noisy) and the regression is barely significant, the slope is getting close to zero.

    Could there be something similar going on with EBMs? If we define signal as the forced response, and noise as everything else (measurement and forcing uncertainty but also internal variability), the more noise, the more scatter in the cross-plot and the more the OLS slope will underestimate the slope of a noise-free dataset. So is an EBM inherently biased to return low values of ECS (relative to the “true” noise-free value)? Especially so in an analysis which assigns confident, narrow ranges to temperatures and forcings and downplays internal variability? IOW, by ignoring the confounding effects of internal variability, you not only get an ECS estimate which varies according to when and how you measure it, but one which will always be less than the “true” value, to an increasing degree as more noise is mixed into the putative signal.

    Just a thought, hopefully not too incoherent.

  493. BBD said:

    “In order that we do not waste too much time, RickA’s ‘argument’ – made many, many times elsewhere, is that we just have to wait until we hit 560ppm before we can get a handle on TCR. “

    I didn’t realize that this was his ultimate argument. I figured he knew that doubling can be appropriately scaled by applying a logarithm to the current CO2. I misread where he was going with that!
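
    For reference, the logarithmic scaling Paul is referring to looks like this; the numbers are illustrative round figures, not a careful estimate:

```python
import math

c0, c_now = 280.0, 410.0   # ppm, pre-industrial and roughly current CO2
dT_obs = 1.0               # K, rough observed warming to date (illustrative)

# CO2 forcing is logarithmic in concentration, so warming to date can be
# scaled to a full doubling without waiting for 560 ppm.
eff_tcr = dT_obs * math.log(2) / math.log(c_now / c0)
print(f"warming scaled to a doubling: {eff_tcr:.1f} K")   # ~1.8 K
```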

  494. Willard says:

    Nice example, Very Tall. It may illustrate when trueness meets realness.

    ***

    I hear ya, Dave. If you want to know why you need to think out loud, you might like this guy’s ideas:

    [W]e are all cyborgs, in the most natural way. Without the stimulus of the world, an infant could not learn to hear or see, and a brain develops and rewires itself in response to its environment throughout its life. Any human who uses language to think with has already incorporated an external device into his most intimate self, and the connections only proliferate from there.

    https://www.newyorker.com/magazine/2018/04/02/the-mind-expanding-ideas-of-andy-clark

    You might even like his shirts.

    ***

    In any event, I think ClimateBall players ought to try to reach out to some kind of audience beyond their own selves. Without forgetting one’s own communication objective, since well-ordered charity begins with oneself. My own cutoff is to balance idiosyncrasies with interesting stuff to read. See the link above, or the one below, which is more related to the topic of the thread:

    One way to extend my Energy Balance Model of ClimateBall would be this response:

    zebra wrote “Your “true” TCR is then generated by averaging TCR(1….N). TCR(1…N) obviously are dependent on the initial condition. (Remember, you are the one who described this process originally.)”

    That is only true for an infinite ensemble of parallel Earths, not GCMs; I have already pointed that out to you, so it is utterly dishonest to continue to misrepresent me in that way. I note you haven’t apologised for your earlier misrepresentation; it’s almost as if the misrepresentations were deliberate.

  496. D_G “Could there be something similar going on with EBMs?”

    I think it is a bit more a matter of “over-fitting”. If you have a problem with y = alpha*x + beta + noise, then if you are unlucky you could have a sample where the noise tended to be higher than average in the first half and lower than average in the second half, which will mean the OLS estimate of alpha will be too low, and if we use the residuals of the model to estimate the noise variance, we will underestimate that as well, because a spurious trend in the noise has been mis-attributed to alpha. If we want to know by how much the estimate might differ from the true value, we could do with having an unbiased estimate of the noise variance. In this case, the GCMs provide a method of estimating the total variance, not just the variability seen in the realisation of the physical process we actually observed.
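
    A quick numerical version of that point, using synthetic data: with autocorrelated noise, the OLS slope from any single realization can be well off, and the usual residual-based (white-noise) standard error understates the true spread.

```python
import numpy as np

rng = np.random.default_rng(3)
n, alpha, phi, sigma = 100, 0.02, 0.8, 0.1   # made-up parameters
x = np.arange(n)

def one_realization():
    """y = alpha*x + AR(1) noise."""
    noise = np.zeros(n)
    for t in range(1, n):
        noise[t] = phi * noise[t - 1] + sigma * rng.standard_normal()
    return alpha * x + noise

slopes = np.array([np.polyfit(x, one_realization(), 1)[0]
                   for _ in range(2000)])
print(f"true slope {alpha}, actual spread (std) of OLS slopes: {slopes.std():.5f}")

# Naive standard error from a single realization (assumes white noise):
y = one_realization()
b, a = np.polyfit(x, y, 1)
resid = y - (b * x + a)
se_naive = np.sqrt(resid.var(ddof=2) / np.sum((x - x.mean()) ** 2))
print(f"naive single-realization SE: {se_naive:.5f}  (understates the spread)")
```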

  497. Hyperactive Hydrologist says:


    The method seems very sensitive to minor temperature changes. The use of BEST temperature data vs CRU results in an increase in the estimate of TCR by approximately 0.2K.

  498. Interesting tweet:

    ISTR being told that high ECS is required for high internal climate variability, but I can’t quite remember why.

  499. RICKA says:

    ATTP says:

    “Except your real TCR is ill-defined. When we’ve doubled atmospheric CO2 (560ppm), what will be the aerosol forcing, the solar forcing, the short-lived GHG forcing, the volcanic forcing, etc.? Some of these will also depend on the emission pathway, which is not unique. We can’t predict volcanic eruptions. Some of these have longer lifetimes than others. So, your eTCR doesn’t really have a clear definition.”

    1. It doesn’t matter. Because they are real world measurements, the GMST at the CO2 doubling time has integrated all of these various forcings, so the eTCR takes every forcing into account.

    2. If it is important to identify each forcing, I am sure yearly values can be measured for each forcing (this is already being measured – isn’t it?). For example, we track volcanic eruptions as they occur and put numbers on the emissions from them – don’t we?

    3. My eTCR is actually pretty clearly defined as the temperature difference at CO2 doubling, and takes all forcings into account simply by measuring the GMST temperature at 280 ppm (already done) and the GMST temperature at 560 ppm (have to wait for that). Rinse and repeat at subsequent doublings, such as 580 ppm or 600 ppm and so on.

    4. Of course we can project to 560 today (as Paul Pukite suggested above) – but that is a linear projection and therefore an estimate, rather than an actual measurement. That drags in what ATTP was talking about – what if every year between now and when we hit 560 ppm we have a huge volcanic eruption – the linear projection to 560 ppm estimated today will be inaccurate.

    5. I am simply advocating that when we hit 560 ppm, we can measure eTCR and we can measure again at 570 and 580 and 590 and 600 and so on, and over time those eTCR measurements will be much more accurately able to tell us what TCR and ECS are than the models (in my opinion). Or rather much more able to tell us what effect a CO2 doubling has on the real world (rather than in a particular model). Not to say we cannot debate how many angels can dance on the head of a pin until we hit 560 ppm – we have been doing that for decades, and I don’t see it stopping anytime soon.

  500. Rick,
    Yes, I know what you’re advocating. If all we were interested in was knowing what the temperature will be when atmospheric CO2 hits 560ppm, then we could simply wait till this happens. However, there are reasons why we may want to have some idea of what this might be, so that we can make decisions as to whether or not we actually want to get there.

  501. RICKA says:

    ATTP:

    Even if the models were perfect – would that really tell us what the temperature will be when we hit 560 ppm? In the model maybe – but what about in the messy real world?

    The difference between 1.5C or 3.0C or 4.5C is so huge, and we are so far from actually having a handle on what we think the temperature will be when CO2 hits 560 ppm (in the real world) that we will probably have to wait until it actually happens.

    Saying the temperature could be anywhere from 1.5C to 4.5C higher when we hit 560 ppm is demonstrably not able to allow us to make any collective decision (as of yet), despite almost 30 years of trying.

    But carry on. It is not as if science will refuse to measure eTCR when we hit 560 ppm – so we can do both approaches. While I think the models have been totally useless so far – maybe I am wrong. Or maybe they will dramatically improve in the next few years.

    I guess we will see.

  502. BBD says:

    5. I am simply advocating

    [Panto voice:] Ooooh no you’re not!

  503. “Even if the models were perfect – would that really tell us what the temperature will be when we hit 560 ppm? In the model maybe – but what about in the messy real world?”

    If the model didn’t model the mess, it wouldn’t be a perfect model. We don’t go from knowing nothing directly to knowing perfectly; generally, science proceeds by bounding uncertainties, and the observational estimates, the models and paleoclimate are all tools that can be used for that task.

  504. BBD says:

    The difference between 1.5C or 3.0C or 4.5C is so huge, and we are so far from actually having a handle on what we think the temperature will be when CO2 hits 560 ppm (in the real world) that we will probably have to wait until it actually happens.

    It never ceases to amaze me that however many times it is pointed out that this is false it never stops you repeating it.

  505. Rick,

    Saying the temperature could be anywhere from 1.5C to 4.5C higher when we hit 560 ppm

    That’s the ECS.

  506. John Hartz says:

    Meanwhile, back in the real world…

    It’s time for your annual reminder humans have pushed the planet into a state unseen in millions of years.

    Carbon dioxide measurements at Mauna Loa Observatory in Hawaii averaged 410.31 parts per million (ppm) in April. That bests last May’s record of 409.65 ppm, is well above the pre-industrial value of 280 ppm, and means the atmosphere of April 2018 was unparalleled in human history. Its reign will be short-lived, as May will almost surely set another record.

    Right now, carbon dioxide levels in the atmosphere are higher than they’ve been in the past 2 million years (though it may be quite a bit longer). As long as we keep burning fossil fuels like they’re going out of style, this will keep happening.

    If current emissions trends continue, we’ll create an atmosphere that resembles the one from 50 million years ago. Back then, crocodiles patrolled the Arctic, and oceans were dramatically higher. Most importantly, the climate was wildly different from the one that’s allowed humans to thrive.

    Carbon Dioxide Has Never Been Higher in Humanity’s Existence by Brian Kahn, Science, Earther, May 2, 2018

    PS — Mother Nature always bats last!

  507. RICKA said:

    “4. Of course we can project to 560 today (as Paul Pukite suggested above) – but that is a linear projection and therefore an estimate, rather than an actual measurement. That drags in what ATTP was talking about – what if every year between now and when we hit 560 ppm we have a huge volcanic eruption – the linear projection to 560 ppm estimated today will be inaccurate.”

    Now I know you are off the rails. I am not projecting anything. Pro-rated (i.e. scaled) CO2 as of now will give an effective TCR around 2C and an effective ECS around 3C. At first I thought you were being pragmatic with your concerns; now I think you just have a mental block (or worse, some sort of agenda).
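
    A minimal sketch of the kind of pro-rating being described (my own illustration, with assumed round numbers rather than Paul’s actual inputs): since forcing is roughly logarithmic in CO2, the warming observed so far can be scaled up to a full doubling.

    ```python
    import math

    # Assumed illustrative inputs (not from the comment above):
    delta_T = 1.0          # K, warming observed to date
    C0, C = 280.0, 410.0   # ppm, pre-industrial and current CO2

    # Forcing is ~logarithmic in CO2, so pro-rate the observed warming
    # to a doubling: eTCR = delta_T * ln(2) / ln(C / C0).
    eTCR = delta_T * math.log(2) / math.log(C / C0)
    print(f"effective TCR ~ {eTCR:.2f} K per doubling")   # ~1.8 K
    ```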

  508. RICKA says:

    Paul Pukite:

    Projecting, scaled, pro-rated – you are using what has happened to estimate what will happen in the future. But that is quite different than measuring what has actually happened at the CO2 doubling point (560 ppm). That is all I am really saying. Sure we can estimate what we think the temperature will be at the future point of 560 ppm, but it is still not a measurement. Past performance is no guarantee of future performance (and all that).

  509. No, RICKA, I am using what has happened up to TODAY to estimate what it is TODAY. In 1979, the Charney report said that ECS was ~3C for a doubling of CO2. Here we are almost 40 years later and the scaled land temperature increase is right on track with this estimate. Yet, this does not make your brain explode because you seem to have a mental block.

    I thought this was what you were getting at with your first comment here, but I guessed wrong.

  510. HAS says:

    Dikranmarsupial’s various comments way back when about variance and “true” TCR and ECS.

    It all comes down to a definitional issue. Both terms apply to model experiments done in the various general circulation models that have been used over time to replicate the climate, and are defined by the IPCC in terms of these experiments. So they are, as I have noted a number of times, simply artefacts/metrics of the models currently in use by the IPCC. As they put it in the TAR: “The range of TCR values serves to illustrate and calibrate differences in model response to the same standardised forcing. Analogous TCR measures may be used, and compared among models, for other forcing scenarios.” and “The equilibrium climate sensitivity is a straightforward, although averaged, measure of how the system responds to a specific forcing change and may be used to compare model responses, calibrate simple climate models, and to scale temperature changes in other circumstances.”

    So to the extent there are “true” values, these change according to what set of models is under consideration. As climate models have become more sophisticated, the values have shifted, and this will continue. And imbuing the terms with a meaning that becomes universal within the climate system is an example of the difference between model worlds and the real world not being well respected. They are just metrics for comparing models and model responses.

    Further, these metrics (a better term, when I think about it) really only have meaning in models that can replicate the experiments on which they are based. This can be seen in the convoluted steps L&C go through to move from the output of their experiment with an energy balance model to get something comparable to these terms.

    So using the IPCC meaning of them, they are relative measures – relative to the model set we are using, and to the models that are also capable of doing the experiment. Note a model can be a good model of the actual climate and be unable to replicate the experiment. The actual climate is most unlikely ever to replicate the experiments (less so perhaps TCR), as some have noted here.

    Therefore, to the extent that there are some absolute measures of the climate, and more particularly of how it responds to forcings, these will be defined in terms of the actual climate we are living through. That’s the only one that can be specified in absolute terms. Such measures will be pathway dependent, and the primary condition any such measure will need to comply with is that it replicates what we have actually had. That’s the variance we need to understand.

    As we get more data and better models, they will be increasingly constrained by the past, and their various metrics will change accordingly, as will any “true” IPCC measures of sensitivity.

    Finally I’d also repeat that while these IPCC measures might be useful to discuss the performance of various models in model sets, and perhaps help explain differences (“that one runs hot because it has a high ECS”), there are better metrics to use to evaluate the complete range of models’ ability to forecast the future.

  511. HAS says:

    Dave: “provide us with a link to the literature where a GCM has been run with pretty unrealistic forcings.” Riahi et al (2011) and Riahi et al (2017) imply the RCP8.5 scenario over the 21st century would be classified by the IPCC as exceptionally unlikely (0–1% probability). A quick look at Google Scholar for “antarctica” AND “sea ice” (as an example of an extreme event) shows that, of those studies that mentioned only one RCP, 65% used RCP8.5, with RCP4.5 at 30%. The pattern of RCP8.5 dominance in the literature carries through the other combinations.

  512. Willard says:

    > Riahi et al (2017)

    One such reference has for title The Shared Socioeconomic Pathways and their energy, land use, and greenhouse gas emissions implications: An overview. Its plain-English highlights:

    We present an overview of the Shared Socioeconomic Pathways (SSPs), which were developed as a community effort over the last years.

    The SSPs comprise five narratives and a set of driving forces.

    Our SSP scenarios quantify energy and land-use developments and associated uncertainties for greenhouse gas and air pollutant emissions.

    We conduct an SSP mitigation analysis, and estimate mitigation costs. We find that very low climate targets might be out of reach in SSPs featuring high challenges.

    The SSPs are now ready for use by the climate change research community.

    https://www.sciencedirect.com/science/article/pii/S0959378016300681

    A quote:

    We use the baseline SSP scenarios as the starting point for a comprehensive mitigation analysis. To maximize the usefulness of our assessment for the community scenario process, we select the nominal RCP forcing levels of 2.6, 4.5, and 6.0 W/m2 in 2100 as the long-term climate targets for our mitigation scenarios. A key reason for selecting these forcing levels is to provide a link between the SSPs and the RCPs developed in the initial phase of the community scenario process. Establishing this link is important as it will enable the impacts, adaptation and vulnerability (IAV) community to use the information on the SSPs in conjunction with the RCP climate projections archived in the CMIP5 database (Taylor et al., 2012). We thus try to get as close as possible to the original RCP forcing pathways, which sometimes deviate slightly from the 2100 forcing level indicated by the RCP-label (see Section 2 and Section 5 of the Supplementary material). In addition, we explore mitigation runs for a target of 3.4 W/m2. This intermediate level of radiative forcing (approximately 550 ppm CO2-e) is located between very stringent efforts to reduce emissions given by RCP2.6 (approximately 450 ppm CO2-e) and less stringent mitigation efforts associated with RCP4.5 (approximately 650 ppm CO2-e). Exploring the level of 3.4 W/m2 is particularly policy-relevant, considering, for example, recent discussions about scenarios and the attainability of the 2 °C objective, which is broadly in line with scenarios aiming at 2.6 W/m2 (Kriegler et al., 2015, 2014b; Riahi et al., 2015; Victor and Kennel, 2014). On the other hand, recent developments in international climate policy (e.g., the newly adopted Paris Agreement under the United Nations Framework Convention on Climate Change) have renewed attention to the importance of exploring temperature levels even lower than 2 °C, in particular a long term limit of 1.5 °C. These developments were too recent to be taken up already, but are considered in forthcoming work.

    Asking for citations is often a good idea.

  513. HAS says:

    Willard, that got you a lot of real estate without quoting anything about RCP8.5.

  514. Willard says:

    Forty-five comments and you still have to quote one single paper, dear HAS. Until you pay me, I’m not your monkey. What I quoted is enough to put your implicit “But CAGW” meme into some kind of perspective. There are reasons why scenarios are built the way they are, and reading about them helps defuse conspiracy ideation such as this one, circa April 26, 2018 at 5:51 am:

    It must be a conscious shift because the authors clearly thought carefully about their claims as they crafted the paper. It is perhaps possible that they truly believe that models trump reality and only changed the paper to get it past review, but one has to say it’s much more likely that, with their other peer groups, they simply saw an opportunity to keep a cultural meme going through some exaggerated PR.

    Forgot to ask – any work on your GRANGER CAUSALITY ALL CLIMATE TIME SERIES yet?

    Vintage 2010. Time flies.

  515. Willard says:

    HAS’ suspicions seem to have evolved:

    In other words I suspect more sophisticated models will be required, but it’s a start.

    https://judithcurry.com/2010/12/27/scenarios-2010-2030-part-ii-2/#comment-26190

    Vaughan’s response, as always, is brilliant:

    Ah, the good ol’ days at Judy’s.

  516. HAS says:

    Willard: “I’m not your monkey”. I suspect you are. Your quote from Riahi et al (2017) (while not talking to the specific point I made) helpfully makes the wider point that it is the scenarios lower than RCP8.5 that are particularly policy relevant.

    People will talk.

  517. Willard says:

    > People will talk.

    That’s indeed what you do, HAS. Nothing much more.

    Perhaps I can shorten the quote so you can read it better:

    To maximize the usefulness of our assessment for the community scenario process, we select the nominal RCP forcing levels of 2.6, 4.5, and 6.0 W/m2 in 2100 as the long-term climate targets for our mitigation scenarios.

    It’d be tough to argue that they chose RCP 8.5 in their list of long-term climate targets for their mitigation scenarios, but you don’t need to.

    You just need to talk.

  518. HAS says:

    Willard. You’re making up what I said again.

  519. Ragnaar says:

    “The alternative being, of course, to recognize that the main selling point of oversimplistic modulz based on basic accounting is that it can easily be promoted by luckwarm megaphones…”

    It does sell. But we could add up who has sold the most. The world believes, as pointed out by the claim that we and Lower Slobbovia are the only two countries that recently threw the planet into an abyss.

    LC18 should sell better. It’s been teed up to be hit for a single yet only a few have swung so far. So we are stuck with the usual crowd promoting it and claiming this or that.

    The problem, it seems to me, is not the science. We aren’t happy with a bunch of wind turbines and solar panels. Not with hybrids, a few electric vehicles, or ethanol either. We want more. We aren’t happy with oil companies or even utilities. And hurling eggs at Trump and Pruitt isn’t going to solve anything.

    I guess the point was supposed to be about marketing. The polls say we want green energy, and we’d be okay if big oil was made an example of. I want a smart phone. I got one. I want a furnace; got one of those too. I got a job, a bed, a truck and a retirement account. Wanting something, as a mass of people, as a world, is nice. Miss America wants World Peace. And bitching about Trump gets us no less CO2 and no fewer wars. What we need is green energy. Trump is not standing in the way. It’s the STEM people who are not delivering green energy.

    So what are the reasons?
    A conspiracy of rich white men? No.
    Trump. No.
    Rednecks? No.
    The Soviets? No.
    China? No.
    Mexico? No.

    Where’s the value? The value will sell itself.

    Marketing or sales that deliver no value has a negative return.
    We are Miss America.

  520. Willard says:

    > You’re making up what I said again.

    I’m not sure where, HAS. That’s the problem with never quoting and citing.

    I suppose it depends on what your pattern of RCP8.5 dominance in the literature means, and perhaps also your “imply” in

    Riahi et al (2011) and Riahi et al (2017) imply the RCP8.5 scenario over the 21st century would be classified by the IPCC as exceptionally unlikely (0–1% probability).

    What’s quite clear is that you can’t argue that R17 selected RCP8.5 for their scenarios.

    Your underhanded claims are supposed to meet Dave’s challenge to quote something, anything that refers to RCP8.5. That challenge was in response to your:

    gcms need to be forced by pretty unrealistic assumptions to produce anything much by way of extreme events or non-linearities over this timescale.

    which is quite handwavy.

    Rest assured that since nothing you say is quite explicit and clearly stated, I don’t really rely on anything you say to make my own points.

    What you did not say which is more interesting is your work on “grangerizing” climate science.

    I predict we’ll soon switch to “it’s not science, but it’s important.”

  521. Marco says:

    RickA, your eTCR is, in a very roundabout way, just the response to the specific RCP we’ve followed over the period in question. It’s not a useful metric at all, because we would still need to determine which forcings have changed and in what way, in order to be able to predict what the next period will bring us, contingent on the pathway we will take.

  522. HAS says:

    Willard, I trust no one here really cares about the rubbish you spout and treats it as noise. Also, this is all a bit peripheral to the matter at hand. But you make a series of wild accusations, so against my better judgement I’ll step you through it real slow.

    At the end feel free to apologise, but I promise everyone else I’ll ignore any other attempt to respond.

    Dave says:“provide us with a link to the literature where a GCM has been run with pretty unrealistic forcings.”

    Now this requires two things: to establish what unrealistic forcings are, and then to show where they are commonly used in the literature.

    I start with RCP8.5 as an example of unrealistic forcings, and turn to the literature that actually developed that pathway, Riahi et al (2011), and that is looking at the scenarios for the next round, Riahi et al (2017).

    You show no sign that you are familiar with those papers so to help :

    Riahi et al (2011) describes RCP8.5 as a conservative business-as-usual scenario. Business as usual is defined as no attempt at mitigation, and conservative meant the assumptions made within it were at the 90th percentile. With your excellent statistical knowledge you’d know that, with quite weak assumptions about independence, this means the pathway is exceptionally unlikely in IPCC terms (0–1% probability) even with no mitigation.
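
    To spell out the arithmetic being gestured at here (a sketch under that stated independence assumption, nothing more): if each of n inputs independently sits at its 90th percentile, the joint probability is at most 0.1 to the power n.

    ```python
    # Under an (assumed) independence argument: n inputs each at their
    # 90th percentile gives a joint probability of at most 0.1**n,
    # which falls into "exceptionally unlikely" territory very quickly.
    for n in (1, 2, 3):
        print(n, 0.1 ** n)   # 0.1, 0.01, 0.001
    ```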

    Riahi et al (2017) mentions RCP8.5 three times, and I don’t know how you missed the first hit. With over 100 runs they report “we find that only one single SSP baseline scenario of the full set (SSP5) reaches radiative forcing levels as high as the one from RCP8.5.” Again RCP8.5 is in IPCC exceptionally unlikely country when it comes to the baseline scenarios. Now there is room to argue about “pretty unrealistic” and “extremely unlikely”, but only in 5th grade.

    So two foundation papers both providing evidence for what I said: “RCP8.5 scenario over the 21st century would be classified by the IPCC as exceptionally unlikely (0–1% probability)”.

    What I hadn’t bothered with, but my self-described monkey kindly added, was that Riahi et al (2017) regards RCP8.5 as not in the game when it comes to policy-relevant pathways – further, somewhat weaker, evidence that the authors regard it as unrealistic.

    We now turn to the second thing Dave needs. He needs evidence that RCP8.5 is used in the literature.

    I go further and turn to the body of literature in a significant area for climate change: papers that mention “antarctica” AND “sea ice” and at least one RCP. This bit should be easy for you to follow. RCP8.5 is used about twice as much as the pathways Riahi et al (2017) recommend. Not only is RCP8.5 used, it’s the main scenario.

    So even you should be able to understand my response to Dave by now.

    For penance, go back and read the comment of mine immediately preceding the one Dave asked about. It’s much more apposite to what this thread is about. It tries another way to explain why, in the end, reality is the one that counts.

    You won’t like it, most of all because I’m going to ignore whatever you reply (apart from an apology).

  523. HAS wrote

    Dikranmarsupial various comment way back when about variance and “true” TCR and ECS.

    It all comes down to a definitional issue. Both terms apply to model experiments done in the various general climate models that have been used over time to replicate the climate, and are defined by the IPCC in terms of these experiments

    No, I have repeatedly said that “true” TCR is a property of the physical climate system that we can only estimate using models or observations. You have just demonstrated that you are willfully evading the point being made, so there is little point discussing this with you any further.

  524. Hyperactive Hydrologist says:

    Does low TCR imply minimal climate impacts? And if so what is the evidence for this?

    I am asking as a hydrologist and hydraulic modeller designing infrastructure for the next 100+ years. For example HS2, major road upgrades, private and commercial developments and flood defense infrastructure.

  525. HAS says:

    Try just one more time. ” ‘true’ TCR is a property of the physical climate system that we can only estimate using models or observations.”

    First, that isn’t how the IPCC define it. I quoted their definitions from the TAR, and those define a property of models, based on experiments done in them. So just recap for me: how does your definition differ?

    Also, when you say “physical” climate system, is this the one we experience, or is that just one possible manifestation of some abstraction from it? If so, what is that abstraction and how is it formed?

  526. “First that isn’t how the IPCC define it”

    The IPCC don’t define “true” TCR, they define TCR.

  527. HAS,
    I think you’re getting overly pedantic. Yes, TCR and ECS are *formally* model metrics. However, they are intended to represent the climate sensitivity of the real system. We can’t really measure the *real* system’s equivalent sensitivities because there are either confounding factors (it’s not just CO2 and forcing estimates require models) or we can’t wait as long as would be required. However, the real system almost certainly has a climate sensitivity and we use the TCR/ECS estimates as an indicator of that.

  528. HAS says:

    aTTP one needs to be pedantic, because without it one ends up treating models like observations. I’m not sure you can say the “real” system has a climate sensitivity, or even what its utility would be apart from verifying GCMs, and there are more useful ways to do this.

    I should say I’m genuinely interested in this because I see it as an important epistemological question (and I know that’s not what this thread is about). My sense is that climate science has parted company with the other physical sciences in the way it deals with these issues.

    So could you and/or dikranmarsupial step through my questions, accepting that true TCR/ECS isn’t what the IPCC call TCR/ECS? I’m particularly interested in the steps where one excludes the observed world from what is regarded in the theory as the real climate system (if this is in fact what you do).

    I should say that I don’t have a problem with an abstraction that allows the development of tools that can be applied to the real world. But I’m concerned at the point where the tail wags the dog as per my quote from Dessler et al earlier.

  529. HAS says:

    sorry missed a word “excludes the observed world from constraining what is regarded”

    “I’m particularly interested in the steps where one excludes the observed world from what is regarded in the theory as the real climate system (if this is in fact what you do).”

    Trolling. Nobody has even hinted at excluding observations.

  531. verytallguy says:

    This quote is the one causing you to be concerned, HAS?

    “Given that we only have a single ensemble of reality, one should recognize that estimates of ECS derived from the historical record may not be a good estimate of our climate system’s true value.” It rings alarm bells.

    “I’m particularly interested in the steps where one excludes the observed world from what is regarded in the theory as the real climate system (if this is in fact what you do).”

    Nobody has even hinted at excluding the observed world. If I roll a die and get a 5, does that tell me everything about the physical properties of the die? No. If we observe the climate does that tell me everything about the physical properties of the climate system? No, as we only see one realization of the effects of internal variability, which can’t be unambiguously separated from the effects of the forced response, given only a single realisation.

  533. verytallguy says:

    If it is, then I think you are overcomplicating things.

    In essence, all Dessler’s quote means is that the past is not necessarily the best guide to the future, for a chaotic system (though it does inform it, of course).

    Understanding the fundamental properties of the system should be a better guide.

  534. HAS,

    I’m not sure you can say the “real” system has a climate sensitivity,

    Of course the real system has a climate sensitivity. I can’t see the point of pedantry if it leads you to conclude something like this.

  535. “I’m not sure you can say the “real” system has a climate sensitivity,”

    That is a bit like not being sure a “real” exoplanet has a mass other than the one you estimate from the observations.

  536. HAS says:

    dikranmarsupial, I know what a “real” exoplanet is – it’s a planet in another solar system – but what is a “real” climate system? Is it the one we are experiencing, or is it something else?

  537. HAS says:

    aTTP, you are resisting answering some simple questions, arguing instead that it’s pedantry and simply reasserting your POV. You may not get my point until you try to answer. I don’t think it’s trivial. I seriously don’t understand what you and dikranmarsupial are defining as the real climate system.

    HAS, don’t be obtuse. The climate system is the oceans, cryosphere, land, atmosphere etc. The physics of the climate system is chaotic (i.e. deterministic, but heavily dependent on initial conditions). What we experience is one realisation of that chaotic physics, just as rolling a five is one realisation of the chaotic physics of rolling a die. You really shouldn’t use words like “epistemological” if you can’t understand a distinction as fundamental and obvious as that.

  539. VTG puts it well:

    In essence, all Dessler’s quote means is that the past is not necessarily the best guide to the future, for a chaotic system (though it does inform it, of course).

    Understanding the fundamental properties of the system should be a better guide.

    We can’t predict the exact future trajectory of the climate system because of its chaotic component. So the best we can do is to estimate its fundamental properties (i.e. its statistical behaviour if we could observe multiple futures with different initial conditions) which give us a probability distribution in which the TCR we experience in the future would lie. The important thing to remember is that the observational TCR is not the fundamental property any more than observing a 5 tells you the fundamental property of a die.

  540. HAS,
    Why don’t you try to explain your point again? The *real* climate is the one we’re actually experiencing. Clearly the climate system is not insensitive to external perturbations. Therefore, it has a *real* climate sensitivity. The TCR/ECS are – in my view – simply approximations that give us some indication of the sensitivity of the real system.

  541. Dave_Geologist says:

    Thanks for the NY link Willard. Not my style of shirt, and is it just me, or is the radiating head stuff a tad messianic 😉

    Re thinking out loud, I’ve been told that speaks-to-think is a subset of that, as thinking-out-loud can just be stream-of-consciousness or bullshit, whereas speaks-to-think is purposeful (of course, Frankfurt would say bullshit is often purposeful too). But since I was told that in a pop-psychology business-training course, it’s probably bullshit too!

    Re the Marvel links, my attitude to low ECS comes from the same place as my attitude to special creationism or its Wizard-of-Oz-Curtain proxy, Intelligent Design – the clue is in the ’nym. To a geologist, evolution is obvious and had been for centuries before Darwin; the argument was about the mechanism. From the geological record, it’s obvious that ECS is more likely to be in the 3–6°C range than even the IPCC’s 1.5–4.5°C. And yes, the continents were different, but they weren’t different 10,000 years ago, and you can’t get in or out of Ice Ages with a low ECS unless you invoke unicorns or some other form of magic. And yes, you can play with whether E means Effective or Equilibrium, and quibble over where you draw the line between decades, centuries and millennia, but ultimately our descendants will have to live with the centuries-to-millennia warming and melting.

    The most plausible reason for today being different is that although the world wasn’t created 6,000 years ago, God took notice then and started actively intervening. When your strongest argument for out-of-the-mainstream science is religious not scientific, it’s time to pack away the test-tubes and put on a dog-collar or mitre. Of course, some of the usual suspects have explicitly said AGW can’t be real because-the-Bible, so they probably don’t see it my way.

  542. HAS says:

    aTTP, yes, your definition makes sense to me. It is different from dikranmarsupial’s, which seems to be suggesting it is an abstraction from this “reality”. I think that if we are in the applied domain, studying the set of all possible climates on an earth-like planet is only useful to the extent that the tools it develops validate.

    And yes, my comment about real climate sensitivity, using your definition, hadn’t been meant to suggest that there aren’t metrics that tell us what happens if you poke our climate. I meant it in the specific sense that TCR/ECS aren’t much use, and there are better metrics.

    I think if you follow your definition of what the real climate is you remain on safe ground, but one does end up constraining any tools developed in dikranmarsupial’s universe of all possible climates to what has actually happened in your real climate. And then your real climate doesn’t just have the status of being one of the possible representations in dikranmarsupial’s.

  543. verytallguy says:

    dikranmarsupial’s, which seems to be suggesting it is an abstraction from this “reality”.

    You might usefully reflect on the fact that the only person dikrans posts suggest this to is you.

    Nobody else.

    “aTTP, yes, your definition makes sense to me. It is different from dikranmarsupial’s, which seems to be suggesting it is an abstraction from this ‘reality’.”

    No, it isn’t different AFAICS. The problem appears to be that you are not making a distinction between the laws and properties of a physical system and the observed outcome of those physical laws and properties.

  545. paulski0 says:

    HAS,

    What you’ve cited with regard to RCP8.5 does not back up your claims about likelihood. Your leading of the quote…

    “we find that only one single SSP baseline scenario of the full set (SSP5) reaches radiative forcing levels as high as the one from RCP8.5”

    …from the paper with ‘With over 100 runs’ is not an accurate reading. The “only one” here refers to being one of the five SSP storylines, not 100 total runs. So that would mean a 20% chance in your terms. Though it’s not clear whether this result can really be interpreted as probability of outcome since it’s ultimately dependent on a narrow set of subjectively chosen storylines.

  546. Dave_Geologist says:

    HAS

    Dave “provide us with a link to the literature where a GCM has been run with pretty unrealistic forcings.”

    Sorry, my lack of clarity there. I meant historic runs. Obviously there are future runs with unrealistically high forcings (unrealistic, at least, unless you lot have your way), just as there are unrealistically low ones (meeting Paris stretch targets in the face of deeply entrenched and well-financed opposition). Which is perfectly proper: if you’re using the projections to set total CO2 limits, you need to have an understanding of what lies beyond those limits, and of the consequences of meeting them, in order to make an informed decision.

    I mean a CMIP5 model (or equivalent) whose range can only encompass historic observations by including an unrealistic forcing.

  547. BBD says:

    I meant it in the specific sense that TCR/ECS aren’t much use, and there are better metrics.

    It always boils down to this when something tells a contrarian something (s)he doesn’t want to hear.

  548. zebra says:

    dikran,

    zebra:

    Your “true” TCR is then generated by averaging TCR(1…N). TCR(1…N) obviously are dependent on the initial condition. (Remember, you are the one who described this process originally.)

    dikran: That is only true for an infinite ensemble of parallel Earths, not GCMs,

    Where did I mention GCM there? My analogy is with your parallel Earths, as you specified the process– hold everything constant but the initial conditions.

    Now, as to infinity:

    I would think that most people with a math/physics background would understand your use of “infinite” to mean that as we increase N, the estimate we get by averaging TCR(1…N) approaches the value we are seeking, which is TCR(real) or my t(real).

    And TCR(real) is what ATTP and I, and again probably most people, would think of as the “true” TCR – the estimate that is generated by using the initial conditions of the Earth we live on. Or, t(real) is the value you get using the initial condition v(real).

    So when I say that averaging all the t(1…N) does not give t(real), that applies as N approaches infinity as well.

    For someone with your expertise, this should be a pretty trivial thing to understand.

    Now, if “true” TCR isn’t the estimate everyone but you thinks it is, what is it?

  549. Zebra is being dishonest again:

    Where did I mention GCM there? My analogy is with your parallel Earths, as you specified the process– hold everything constant but the initial conditions.

    What he actually said was:

    So we have your initial condition(1) (e.g. SST distribution), we run the model, and we get TCR(1).
    As we have my initial condition v(1), we run the model, and we get t(1).
    And so on.

    Your “true” TCR is then generated by averaging TCR(1…N). TCR(1…N) obviously are dependent on the initial condition. (Remember, you are the one who described this process originally.)

    But really, what you call “true” TCR is simply “the average”.

    A parallel Earth is not a model that you can run, but a GCM is, and this is just evasion to avoid admitting you had misrepresented what I wrote.

  550. Dave_Geologist says:

    Ah, the perils of posting the next day. Now I’ve re-read HAS’s and Willard’s comments, I see the above was wrong. Whether due to faulty memory, or a misunderstanding yesterday, I don’t know.

    Anyway, for completeness (and following Willard by expanding on the strides in your Gish Gallop, rather than just trying to trip up the Galloper):

    1) Do the mainstream GCMs need unrealistic forcings to represent the last century or so of climate (my previous point)? No, as is obvious from looking at the ensemble means in thousands of publications.

    2) Do individual GCM runs display chaotic behaviour at global and local level which are comparable to year-on-year variations in the real world? Yes, as is obvious from looking at graphs and maps which display individual runs. Again in numerous publications, although sometimes in the Supplementary Material (which is usually freely accessible, even for a paywalled paper).

    3) Do projections require “unrealistic forcings” to produce “extreme events or non-linearities” over your conveniently unspecified timescale? No. I would argue that RCP8.5 is not an unlikely scenario. Isn’t it exactly what you and yours are fighting for? The IPCC deems it unlikely because it thinks the world has come or will come to its senses. I could interpret your treating it as unrealistic as an admission of defeat, which would be nice. On the other hand, a good tactic when your side is losing is to tell the others they can stop fighting now because they’ve won. Meanwhile, you put away the tanks but continue with guerrilla warfare. RCP6 is the only one which doesn’t require aggressive decarbonisation or sequestration, so until we see serious action out of Paris, I’ll regard that as a real-world credible outcome. And as I said in the previous post, the point of running RCP8.5 is to show that things will be really bad if we go there, so let’s not go there. Not going there requires action, not BAU. Until I see you campaigning for action, rather than belittling or downplaying mainstream science and promoting cherry-picked outliers, RCP8.5 will remain on the table in all my discussions with you.

    BTW, it’s a mistake (or a tactic) to claim implicitly (but that’s just more of your evasion/plausible deniability schtick) that only extreme “events or non-linearities” matter. Long before we get to the desertification of Spain or greening of the Sahel, we’ll have human-animal-and-plant non-linearities as we cross thresholds in a smoothly increasing system. For example, when the wet-bulb temperature exceeds 35°C, you’ll die without 24/7 aircon. As will all your domestic animals. People won’t wait until that happens every summer before becoming refugees or invaders. Under RCP 8.5, billions of people will have to face that. But some are already, just for short periods and every decade or so. They can survive that, bury their dead and move on. Every five years? Every second year? Ditto floods. Even if the maximum flood never gets bigger for storm-energetics reasons, people can live with once-a-generation events that wipe out their homes and livelihoods, but not once per decade or once every few years. I confidently predict that most climate-change casualties won’t be from heat prostration, drowning or burning, but from bullets and bombs.

    Oh, and I could have given references for these, but fair’s fair. As you’ve not backed up your claims with references, you can do your own homework.

  551. Dave_Geologist says:

    I think it is a bit more a matter of “over-fitting”

    Thanks dikran. I guess that’s related to the Dessler or Marvel argument. I did think of raising the point earlier, but then saw that the scatter-plots in LC18 lay very close to the regression line, so I thought it would be moot – although I guess that depends on how much manipulation there was before the final plot, e.g. how much variance was attributed to signal by behind-the-scenes tuning. Kinda like the Fourier transformers who make it a badge of honour to match the data to a millikelvin, which only proves that it’s over-fitted bunkum.

  552. Zebra, say I have a weighted die, so that the probabilities of rolling each number are not uniform. Say the true probability of a 6 is 3/12, the probability of a 1 is 1/12 and the probabilities for 2,3,4 and 5 are all 1/6. Now these probabilities describe physical properties of the die, due to the non-uniform distribution of mass within the cube.

    Say we now roll the die 64 times and observe the following outcomes:

    1: 5/64
    2: 8/64
    3: 12/64
    4: 6/64
    5: 9/64
    6: 24/64

    Now these are the “real” probabilities that we observe/experience in the real world. But they are not the physical properties of the die, just one realisation of a chaotic process governed by those properties. If we were to continue rolling the die in the future (as we are with the climate) the outcomes we will see are not governed by the “real” probabilities we observed/experienced in the past, but the “true” probabilities (1/12, 1/6, 1/6, 1/6, 1/6, 3/12). This is the same as the difference between “real” TCR and “true” TCR.

    Now you may say that the rolls of a die are independent, but warming isn’t, and that the future course of the climate is dependent on the initial conditions. However (a) the die rolls aren’t completely independent (chaotic processes are deterministic), if we knew the exact position and velocity of every particle involved at time t = 0, we could (in principle) predict future die rolls as far into the future as we wish and (b) in practice, we don’t know the initial conditions of the climate system with sufficient accuracy to project the precise trajectory of the climate system (including the chaotic component) into the future on a multi-decadal scale, which is why “true” TCR is more useful than “real” TCR in estimating future warming.

    D_G, indeed; in machine learning this is known as the garden of forking paths, or “researcher degrees of freedom”, or p-hacking, etc.

  554. frankclimate says:

    dikran: Your analogy to a loaded die is interesting. When you roll this die 64 times, you don’t have much of a clue about its properties. When you roll it 6400 times, you will – for sure with some uncertainty. Coming back to L&C 18: is the observed time span of about 160 years so short that you don’t know enough to constrain the uncertainty of the “climate die”?

  555. Chubbs says:

    Frankclimate: The problem with the past 160 years is that the first hundred or so are of limited value due to limited data and aerosols. The past 40–50 years tell you all you need to know about TCR.

  556. Regarding infinity, I forgot to mention, the law of large numbers means that if you sampled an infinite number of rolls of the die, the “real” probabilities would be asymptotically equal to the “true” probabilities. The LoLL would also apply if you did one die roll in an infinite number of parallel universes that differed only in their initial conditions.

    frankclimate: Yes, that is essentially the point. We can use the GCM ensemble to estimate the variability in the observational estimate we might plausibly expect to see as a result of only having a short timescale. The L&C result is plausibly within that range (hence the two are consistent), but also plausibly below the “true” value, however there are also other issues, such as the uncertainty in estimating GMSTs and forcings in the real world, and the uncertainties in the model that are not fully accounted for (see VV’s excellent blog post mentioned near the start of the thread).
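
    The law-of-large-numbers point is easy to check numerically (a minimal sketch, reusing the same assumed weighted-die probabilities as above):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    p_true = [1/12, 1/6, 1/6, 1/6, 1/6, 3/12]

    # The observed frequency of a six approaches the "true" probability
    # (3/12 = 0.25) only as the number of rolls grows.
    for n in (64, 6400, 640000):
        rolls = rng.choice(np.arange(1, 7), size=n, p=p_true)
        print(n, round((rolls == 6).mean(), 4))
    ```

    With only 64 rolls (or one ~160-year climate record) the sample can sit well away from the true value, which is frankclimate’s question in a nutshell.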

  557. frankclimate says:

    Chubbs: When you want the best available information about the climate die, it’s not clever to limit yourself in the number of rolls. And over the last 60 years the properties of the “climate die” are not very different (as Tab. 2 of the paper says), as one would expect from a die that has already revealed most of its properties.

  558. frankclimate says:

    Chubbs: I responded to the same proposal from you in the same way: https://andthentheresphysics.wordpress.com/2018/04/27/lewis-and-curry-again/#comment-117257
    Do you like Sonny and Cher’s song “I Got You Babe”, from the movie “Groundhog Day”, that much? 🙂

  559. Willard says:

    > Riahi et al (2011) describes RCP8.5 as a conservative business as usual scenario.

    A quote might be nice. Here would be one:

    RCP8.5 was developed to represent a high-end emissions scenario. “Compared to the scenario literature RCP8.5 depicts thus a relatively conservative business as usual case with low income, high population and high energy demand due to only modest improvements in energy intensity.” (Riahi et al. 2011) RCP8.5 comes in around the 90th percentile of published business-as-usual (or equivalently, baseline) scenarios, so it is higher than most business-as-usual scenarios. (van Vuuren et al. 2011a)

    http://climatechangenationalforum.org/what-is-business-as-usual/

    This quote does not substantiate whatever you might mean by pattern of RCP8.5 dominance in the literature. In fact, the emphasized bit does seem to contradict it. That RCP8.5 is considered a BAU scenario does not imply that it’s the only one in town.

    NG’s “RCP8.5 was developed to represent a high-end emissions scenario” might not correspond to what you make “conservative” imply. See how “imply” works?

    Also, notice R11’s title: RCP 8.5 — A scenario of comparatively high greenhouse gas emissions.

    ***

    > So two foundation papers both providing evidence for what I said: “RCP8.5 scenario over the 21st century would be classified by the IPCC as exceptionally unlikely (0–1% probability)”.

    I don’t think you can claim that R17 “imply” anything about the RCP8.5 scenario when you have a direct quote where they claim:

    We select the nominal RCP forcing levels of 2.6, 4.5, and 6.0 W/m2 in 2100 as the long-term climate targets for our mitigation scenarios.

    (I also note that you forgot to add your “imply” in your own words. To me, “imply” looks stronger than “mention three times.”)

    More importantly, this doesn’t substantiate that gcms need to be forced by pretty unrealistic assumptions to produce anything much by way of extreme events or non-linearities over this timescale, since for that you’d need to show how nothing but the RCP8.5 scenarios would produce, in your own words, “anything much by way of extreme events or non-linearities.” I await your demonstration that RCP6 would be a walk in the park.

    You also forgot to substantiate where you got your would-be IPCC classification. That’s, like, not a trivial omission. And if you read NG’s post, you’ll notice his own central estimate:

    My central value, 5.4 F over the century, is near the low end of the RCP8.5 range.

    You might like to have a word with the Texas State Meteorologist. I’m sure he’ll be pleased to learn that his estimates are too strong even for the IPCC or that he doesn’t grasp the meaning of the words “extremely unlikely.” And I’m sure you’ll be pleased to learn that NG already anticipated your act:

    Your own opinion of climate sensitivity, likely world economic development, etc., may differ from the middle-of-the-road consensus values, especially if you think you’re smarter than a typical climate scientist.

    Did R17 describe RCP8.5 as “conservative,” BTW? Enquiring minds might want to know.

    ***

    > But you make a series of wild accusations

    Which accusations, HAS?

    This is your last warning. You really ought to answer that one.

    And I mean, really.

  560. zebra says:

    dikran,

    “evasion”

    When I say model, I mean any model, not GCM. My little equation is a model as well. Indeed, the OP is about different models in the first place. My disagreement is about your process of averaging the results, as I said, not the model you are using.

    So I think it is you who is being evasive by not answering my question.

    You say your “true” TCR is different from real TCR (as I described real TCR and real v, which I think is how most people would understand it).

    That’s a tiny bit of progress. We know what it is not! It is not the estimate that we would get using the real initial conditions. But still, you have not explained what your “true” TCR is.

    If you can’t explain it even using my simple analogy, I am guessing that you had some moment of confusion when you originally wrote what you did, and you are stubbornly trying to justify it.

    If we average the t(1….N), where N is a very large number, what do we get? You say it is the “true” t, but what is that other than the average?

  561. We started with zebra saying

    “We can find the ‘true’ TCR by averaging the results from the parallel Earths”

    Your words, not mine.

    I then qualified that by saying that it only applied to an infinite ensemble of parallel Earths:

    O.K. I missed out the “infinite number of parallel earths” …

    Later zebra wrote

    So we have your initial condition(1) (e.g. SST distribution), we run the model, and we get TCR(1).
    As we have my initial condition v(1), we run the model, and we get t(1).
    And so on.

    Your “true” TCR is then generated by averaging TCR(1…N). TCR(1…N) obviously are dependent on the initial condition. (Remember, you are the one who described this process originally.)

    But really, what you call “true” TCR is simply “the average”.

    To which I objected, as this isn’t true for models, only for an infinite ensemble of parallel Earths. Zebra now tries to defend this by writing:

    “When I say model, I mean any model, not GCM. My little equation is a model as well. Indeed, the OP is about different models in the first place. My disagreement is about your process of averaging the results, as I said, not the model you are using.”

    [emphasis mine]

    Since the average being equal to the “true” value only applies for an infinite ensemble of parallel Earths, it clearly doesn’t apply to “any model”. So (i) you don’t understand the point being made and (ii) you misrepresented what I wrote and can’t admit it, but engage in evasion instead.

  562. zebra wrote “But still, you have not explained what your “true” TCR is. ”

    That is just bullshit. I have explained it repeatedly. The fact that it is what you would get by taking the average TCR of an infinite ensemble of parallel Earths that differ only in their initial conditions is a clear explanation of what it means.

  563. “I think you’re getting overly pedantic. Yes, TCR and ECS are *formally* model metrics. However, they are intended to represent the climate sensitivity of the real system. We can’t really measure the *real* system’s equivalent sensitivities because there are either confounding factors (it’s not just CO2 and forcing estimates require models) or we can’t wait as long as would be required. However, the real system almost certainly has a climate sensitivity and we use the TCR/ECS estimates as an indicator of that.

    To satisfy the pedants, I would half-heartedly recommend that the doubling climate sensitivity should have been defined in terms of decibels. That would prevent RICKA and HAS from misunderstanding what a logarithmic response means, at least for CO2.
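
    For what that could look like (a toy sketch of the decibel idea with assumed numbers, not a worked-out proposal): a doubling of CO2 is about 3 dB, so a per-doubling sensitivity converts directly into a per-dB one, and equal dB steps of CO2 give roughly equal forcing steps, which is the logarithmic response being referred to.

    ```python
    import math

    # Assumed illustrative sensitivity (K per CO2 doubling).
    TCR = 1.8

    # A doubling is 10*log10(2) ~ 3.01 dB, so convert to K per dB.
    K_per_dB = TCR / (10 * math.log10(2))

    # CO2 increase to date, expressed in dB (assumed concentrations).
    C0, C = 280.0, 410.0
    dB = 10 * math.log10(C / C0)   # ~1.66 dB

    print(f"{dB:.2f} dB of CO2 -> ~{K_per_dB * dB:.2f} K transient warming")
    ```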

  564. Chubbs says:

    Frank – The climate dice haven’t changed much in the past 60 years because man-made forcing has increased rapidly, overwhelming natural variability. I don’t expect the dice to change in the near future either. Do you?

  565. The Very Reverend Jebediah Hypotenuse says:

    >>> So I think it is you who is being evasive by not answering my question.

    >>> I am guessing that you had some moment of confusion when you originally wrote what you did, and you are stubbornly trying to justify it.

  566. verytallguy says:

    dikran,

    would it be fair to say that your “true” TCR is the TCR most likely to be observed on an Earth where the forcing varies as per the IPCC definition (1% CO2 increase per year for 70 years)?

    The *observed* TCR where forcing varied exactly as per the IPCC definition could be significantly different.

    The *estimated* TCR from real world earth using methods such as L&C could be significantly different and could also be biased (ie the most likely estimate will not be the true TCR as the methodology will be biased)

    ?
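
    (For reference, the “70 years” in that definition is just compound growth to a doubling; a one-liner shows it:)

    ```python
    # CO2 rising 1% per year compounds to a doubling after ~70 years,
    # which is when TCR is evaluated in the idealised experiment.
    print(1.01 ** 70)   # ~2.007
    ```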

  567. frankclimate says:

    Chubbs: I can’t say anything exact about the future (nobody in our universe can). From the observations of the past I would say that the TCR (enough to forecast the next 20 years, which is challenge enough) of our “real” “climate die” is about 1.3. The rest (emissions, internal variability, the rest of the forcings) nobody knows well enough.

  568. zebra says:

    dikran,

    ” The fact it is what you would get by taking the average TCR of an infinite ensemble of parallel Earths that differ only in their initial conditions is a clear explanation of what it means.”

    Yes, my original comment was about exactly that.

    You are telling us how to calculate it, not “what it means”.

    You are making a circular statement: “You can get it (the average) by taking the average.”

    VTG, yes, indeed, although I would say “expected” rather than “most likely” (mean vs mode of the distribution).

    Essentially the observed TCR is a random* variable, as it depends on initial conditions. The “true” TCR is the expectation (i.e. average) of the distribution of TCR and the TCR we observe is one sample from this distribution.

    Another way of looking at it is to say that the climate is comprised of an additive combination of a forced response (warming due to a change in the forcings, including feedbacks) and an unforced response (i.e. internal climate variability) superimposed on top of it. The “true” TCR is a property of the forced response, but we can only estimate it from the observed climate, which is a mixture of the forced response and one realisation of the unforced response, so we have to completely remove the effects of internal variability to determine the “true” TCR. It is unlikely that L&C have removed the effects completely, and that is what causes the “bias”. The future forced response is independent of initial conditions, and that is the bit we can actually predict into the future. The internal variability is chaotic, and we can’t (currently) predict its trajectory with any confidence on the required multi-decadal timescale. Thus we aim to predict the expected warming and the uncertainty around it due to the internal variability.

    * arguably nothing above the quantum level is truly random, but it is used as a way of talking about a state of knowledge or uncertainty.
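
    To make “one sample from the distribution” concrete, here is a minimal toy simulation (my own sketch; the linear forced trend, the AR(1) noise parameters, and the naive trend-scaling estimator are all assumptions, not L&C’s method):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Assumed toy setup: linear forced warming reaching the "true" TCR over
    # a 70-year record, plus AR(1) internal variability on top.
    true_tcr, n_years, phi, sigma = 1.8, 70, 0.7, 0.12
    forced = true_tcr * np.arange(n_years) / (n_years - 1)

    def estimate_tcr():
        noise = np.zeros(n_years)
        for t in range(1, n_years):
            noise[t] = phi * noise[t - 1] + rng.normal(0, sigma)
        temp = forced + noise
        # Naive observational estimate: fitted trend scaled to the record.
        slope = np.polyfit(np.arange(n_years), temp, 1)[0]
        return slope * (n_years - 1)

    estimates = np.array([estimate_tcr() for _ in range(2000)])
    print(f"true {true_tcr}, mean estimate {estimates.mean():.2f}, "
          f"spread (std) {estimates.std():.2f}")
    # Each single realisation (like the one climate history we have) can sit
    # well above or below the true value purely through internal variability.
    ```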

  570. My understanding is that the ensemble averaging of GCMs is a hybrid mix of Monte Carlo and non-MC simulations. Quite obviously, the random nature of volcanic disturbances is not included, as the known eruptions are fixed in time as temporal boundary-conditions. Yet, whatever goes into the dynamics of the coupled atmospheric-ocean oscillations is clearly randomized in terms of initial conditions. The other randomization is in terms of the parameter uncertainty.

    My research interest is to investigate whether the natural oscillations are stochastic, chaotic, or non-chaotic deterministic. This would have huge implications for how Monte Carlo-based ensembles would be run.

  571. zebra wrote “Yes, my original comment was about exactly that [an infinite ensemble of parallel Earths ].”

    Bullshit; you wrote “When I say model, I mean any model,” so it is clear that you weren’t talking about just an infinite ensemble of parallel Earths.

  572. Willard says:

    > You are making a circular statement: “You can get it (the average) by taking the average.”

    Mathematics is usually circular, zebra.

    What do you mean by “circular”?

    ***

    > You are telling us how to calculate it, not “what it means”.

    Unless how to calculate that quantity tells us what it means.

    What is the meaning of “what it means”?

  573. Willard says:

    > From the observations of the past I would say that the TCR (enough to forecast the next 20 years, which is challenge enough) of our “real” “climate die” is about 1.3.

    What makes you think that forecasting the next 20 years should be easier than forecasting the next 40 years, FrankB?

    Looks like the meteorological fallacy to me.

  574. Chubbs says:

    Frankclimate – Thanks for your honest reply. I prefer 1.8. I wouldn’t rule out 1.3, but I wouldn’t bet on it either. We are going to hit 1.5C increase in the next few decades in any case.

  575. zebra says:

    very tall guy,

    “would it be fair to say that your ‘true’ TCR is the TCR most likely to be observed on an Earth where the forcing varies as per the IPCC definition (1% CO2 increase per year for 70 years)?”

    Which you cannot determine by averaging the model results for parallel Earths (holding everything the same except the initial conditions).

  576. Willard says:

    > Which you cannot determine by averaging the model results for parallel Earths (holding everything the same except the initial conditions).

    It would be possible if you could find an accurate estimator, e.g. by running a very large number of Twin Earths using most of this universe’s resources, by developing quantum computing or whatever.

    Also, that Andrew got the result he got is a contingent matter. Had he discovered that EBMs are accurate, then he would have inferred from his results that they are. A very big advance in the history of Realness Trueness.

    Speaking of which, the Wiki provides two definitions of accuracy:

    Accuracy has two definitions:

    More commonly, it is a description of systematic errors, a measure of statistical bias; as these cause a difference between a result and a “true” value, ISO calls this trueness.

    Alternatively, ISO defines accuracy as describing a combination of both types of observational error above (random and systematic), so high accuracy requires both high precision and high trueness.

    https://en.wikipedia.org/wiki/Accuracy_and_precision

    The ISO definition seems to reinforce the intuition I have that it’s hard to speak of accuracy without precision.

  577. verytallguy says:

    Which you cannot determine by averaging the model results for parallel Earths (holding everything the same except the initial conditions).

    Sure. But you can estimate it. The question is how best to do that, given the sum of human knowledge.

    One way to estimate is the GCM model results you refer to.

    Nic Lewis, however, argues that his method gives the one true answer (or one true PDF, at least).

    Others argue that taking into account everything we know (paleoclimate, physics, volcano response etc etc etc etc) – all other methods – Nic’s result is likely biased low.

  578. Willard says:

    > Others argue that taking into account everything we know (paleoclimate, physics, volcano response etc etc etc etc) – all other methods – Nic’s result is likely biased low.

    Let’s bear in mind that Andrew claims that EBMs are imprecise, not biased. Bias relates to accuracy, not precision.

    As for trueness, here’s the official BIPM wording for “measurement trueness,” “trueness of measurement” and “trueness”:

    closeness of agreement between the average of an infinite number of replicate measured quantity values and a reference quantity value

    NOTE 1 Measurement trueness is not a quantity and thus cannot be expressed numerically, but measures for closeness of agreement are given in ISO 5725.

    NOTE 2 Measurement trueness is inversely related to systematic measurement error, but is not related to random measurement error.

    NOTE 3 Measurement accuracy should not be used for ‘measurement trueness’ and vice versa.

    See JCGM_200_2008.pdf (the BIPM International Vocabulary of Metrology).

    As for “true value”:

    quantity value consistent with the definition of a quantity

    NOTE 1 In the Error Approach to describing measurement, a true quantity value is considered unique and, in practice, unknowable. The Uncertainty Approach is to recognize that, owing to the inherently incomplete amount of detail in the definition of a quantity, there is not a single true quantity value but rather a set of true quantity values consistent with the definition. However, this set of values is, in principle and in practice, unknowable. Other approaches dispense altogether with the concept of true quantity value and rely on the concept of metrological compatibility of measurement results for assessing their validity.

    NOTE 2 In the special case of a fundamental constant, the quantity is considered to have a single true quantity value.

    NOTE 3 When the definitional uncertainty associated with the measurand is considered to be negligible compared to the other components of the measurement uncertainty, the measurand may be considered to have an “essentially unique” true quantity value. This is the approach taken by the GUM and associated documents, where the word “true” is considered to be redundant.

    The main difference between “trueness” and “true” seems to be that true is a value, while trueness is not. Also note that the metrological conception of “true” operationalizes it, therefore making it more “circular,” if I understand zebra’s concept of circularity correctly.

    This resource is referenced in the Wiki on accuracy and precision.

    I can’t believe zebra still hasn’t checked the Wiki.

  579. verytallguy says:

    Let’s bear in mind that Andrew claims that EBMs are imprecise, not biased. Bias relates to accuracy, not precision.

    Your claim about Andrew’s views is, I believe, biased, based as it is on too small a sample.

    Andrew claims that his paper shows EBMs are imprecise, but also that others’ work shows they are likely biased low.

  580. zebra says:

    very tall guy,

    “determine”, “estimate”, “how best to do that”

    Whether you call it determination or estimate,

    “averaging the model results for parallel Earths (holding everything the same) except different initial conditions”

    is not a valid way to do it.

    OK, are we now going to have a debate about “what does valid mean”? Should I say “not the best”?

    Look, instead of bringing in Nic Lewis or any other topic, how about just addressing the question as it has been stated? The point of me using the analogy is to separate the discussion from all the baggage of the climate game, and examine the method.

    Can we “estimate” the t(real) for a given v(real) by averaging the model results derived from the initial conditions v(1…N)? How is this better than just picking a number at random?

  581. verytallguy says:

    “How is this better than just picking a number at random?”

    It removes internal variability from the estimate
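    To make that concrete, a minimal sketch with toy numbers of my own (no GCM involved): the ensemble mean recovers the forced signal with an error shrinking like 1/sqrt(N), which a randomly picked number does not.

        import numpy as np

        rng = np.random.default_rng(2)
        forced_trend = 0.18  # toy "true" forced warming, C per decade
        internal_sd = 0.10   # toy internal-variability noise on the trend

        # One realisation (one "Earth") vs a 100-member
        # initial-condition ensemble of the same toy system
        one_earth = forced_trend + internal_sd * rng.standard_normal()
        ensemble = forced_trend + internal_sd * rng.standard_normal(100)

        print("single realisation:", round(one_earth, 3))
        print("ensemble mean:     ", round(float(ensemble.mean()), 3))
        print("expected error of mean ~", round(internal_sd / 100**0.5, 3))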

  582. paulski0 says:

    frankclimate,

    I think 160 years is a bit of a red herring. According to the median forcing history used by L&C2018, about 2/3 of historical net positive anthropogenic forcing has occurred in the past 50 years. If aerosol forcing is towards the more negative part of the uncertainty range, nearly all the positive anthropogenic forcing would have occurred over the past 50 years.

  583. Willard says:

    > Andrew claims that his paper shows EBMs are imprecise, but also that others’ work shows they are likely biased low.

    Indeed, but there’s a difference between claiming P and claiming that other claims Q. One does not simply claim Q by claiming that otters in Mordor claim Q.

    Also note that scientists usually don’t need to quote the BIPM to appease sea lions, Very Tall.

    Nic uses “bias” so much that it biases me against using “bias” too much.

    ***

    > “averaging the model results for parallel Earths (holding everything the same) except different initial conditions” is not a valid way to do it.

    Since the Bureau International des Poids et Mesures defines trueness as the “closeness of agreement between the average of an infinite number of replicate measured quantity values and a reference quantity value,” this claim seems to imply that the BIPM does not offer something valid.

    But what does “valid” mean exactly?

    ***

    > The point of me using the analogy is to separate the discussion from all the baggage of the climate game, and examine the method.

    It’s about time you state your freaking point, zebra.

    I strongly suggest that your next comments show you can examine the method by yourself instead of playing Socrates with Dikran or anyone else.

  584. RICKA says:

    Marco says:

    “RickA, your eTCR in a very roundabout way just is the response to the specific RCP we’ve followed over the period in question. It’s not a useful metric at all, because we would still need to determine which forcings have changed and in what way, in order to be able to predict what the next period will bring us, contingent on the pathway we will take.”

    How is that any different than taking the model metric TCR and trying to apply it to the real world? Won’t you have to determine which forcings have changed and in what way in order to square up what actually happened to the model?

  585. RICKA says:

    Paul says:

    “To satisfy the pedants, I would half-heartedly recommend that the doubling climate sensitivity should have been defined in terms of decibels. That would prevent RICKA and HAS from misunderstanding what a logarithmic response means, at least for CO2.”

    The temperature response to exponential growth of CO2 is linear. It takes more CO2 emissions for the second doubling, as compared to the first doubling – but the temperature response is the same (i.e. whatever ECS turns out to be).

    Here is an article which might help your understanding, Paul:

    https://skepticalscience.com/C02-emissions-vs-Temperature-growth.html

    This is why, when CO2 emissions are halfway to 560 ppm, people tend to simply double the temperature increase since CO2 was at 280 ppm; i.e. they are using a linear projection to estimate what the temperature will be at 560 ppm, based on what has happened so far.
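    To spell out the logarithmic point, using the standard simplified forcing expression F \approx 5.35 \ln(C/C_0) W m^{-2} (Myhre et al. 1998): if concentrations grow exponentially, C(t) = C_0 e^{rt}, then

    F(t) = 5.35 \ln(C(t)/C_0) = 5.35\, r t,

    which is linear in time, and each doubling adds the same increment, 5.35 \ln 2 \approx 3.7 W m^{-2}, whether it is 280 to 560 ppm or 560 to 1120 ppm.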

    The difficulty is teasing out how much of the temperature increase is caused by increased CO2 and how much is caused by other factors.

    Based on Lewis and Curry 2015 and 2018 it would seem that not 100% of the observed temperature increase can be attributed to CO2 emissions. It looks more like about 1/2 to me. The other 1/2 of warming is natural warming or changes in other forcings.

  586. zebra says:

    [Try to “examine the method” by yourself, zebra. No more Socrates. – W]

  587. verytallguy says:

    For a chaotic system they will not follow the same path.

    For a linear system, you would be correct.

  588. BBD says:

    It looks more like about 1/2 to me. The other 1/2 of warming is natural warming or changes in other forcings.

    This was wrong the last two dozen times you said it. The last two dozen explanations as to why should have sufficed to prevent a 25th repetition.

  589. Dave_Geologist says:

    indeed in machine learning this is known as the garden of forking paths or “researcher degrees of freedom”, or p-hacking etc.

    I was familiar with the latter two, dikran, but not the first. Like the Borges reference, although I’m not a great fan of magical realism myself.

    The cynic in me asks, when I see EBMs and other relatively simplistic analytical models which can be calculated fairly quickly and permit lots of researcher choices, how many higher-ECS outcomes were left on the cutting-room floor before the One True ECS emerged and was polished up for public view.

  590. verytallguy says:

    How is that any different than taking the model metric TCR and trying to apply it to the real world?

    Nobody does this. It would be stupid, as the forcings won’t match the TCR definition.

  591. BBD says:

    The cynic in me asks

    And the cynic in me responds. Does that make it a dog whistle?

    🙂

  592. Paul Pukite says:

    “Based on Lewis and Curry 2015 and 2018 it would seem that not 100% of the observed temperature increase can be attributed to CO2 emissions. It looks more like about 1/2 to me. The other 1/2 of warming is natural warming or changes in other forcings.”

    That’s why it’s called an effective response. The other half is primarily GHGs that get dragged along with the CO2 as they are proportionally emitted, as per the observational evidence over the years.

    You seem to have a fundamental problem with the concept of scaling and proportionality. Perhaps Nic Lewis has this same problem based on what I have read from JimD’s comments.

  593. RICKA says:

    Paul Pukite:

    No, I think I understand scaling and proportionality.

    I am simply pointing out that estimating the temperature at 560 ppm is not the same thing as measuring the temperature at 560 ppm.

    I am advocating that we (science) measure GMST when we hit 560 ppm and use that doubling point and the temperature difference at that doubling point to study the climate response of the real world to all of the various forcings, and their changes over the time period of the doubling. And to measure other future doubling points and so on. Eventually, in my opinion, this data could be used to obtain a much better understanding of the climate response to CO2 doubling than the models (at least so far).

    From what I can tell from this thread, the model metrics have nothing to say about the real world and can only be used to compare with other models.

    In fact, verytallguy just said that nobody would take the model metric TCR and try to use it in the real world, because that would be stupid.

    So, once again – what is the point of model metrics like TCR and ECS?

    Or put another way – how do we know the models are correct?

    How do we validate them?

    Say that various models converge on an identical TCR and ECS. How do we know that these model metrics, which have now been compared to other models and found to be the same (in my hypo), are in fact correct using real world data?

    From what I can tell – this cannot be done.

    So what is the point?

  594. In the exchange I had with Nic Lewis on Climate Etc., following my comment here
    https://judithcurry.com/2018/04/30/why-dessler-et-al-s-critique-of-energy-budget-climate-sensitivity-estimation-is-mistaken/#comment-871495

    I demonstrated for Nic Lewis by simple algebraic calculations how the simple energy budget model breaks down when the temperature pattern of a planet is changing under warming by an external radiative forcing.

    It also seems to me that Nic Lewis has overlooked that ECS according to the simple energy budget model theory is defined on the basis of the traditional linearised energy balance equation (1) given in Dessler et al.:

    See acp-18-5147-2018.pdf (Dessler, Mauritsen & Stevens, 2018).

    R = F + \lambda T_s.

    This equation describes how the TOA energy imbalance develops when the planet goes from one equilibrium state, where the anomalies R = T_s = 0, to a new equilibrium state where, under the influence of the forcing F, the temperature anomaly T_s has changed in such a way that R = 0 again.

    Because this is a linearised equation, \lambda is a constant and the equation 0 = F + \lambda T_s can be solved for the new equilibrium value T_s = -F/\lambda. ECS for this simple energy budget model is defined as the temperature anomaly calculated in that way for the forcing caused by a doubling of the carbon dioxide mixing ratio, for example from 400 ppmv to 800 ppmv.
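    For instance, with illustrative numbers of my own: taking F_{2 \times CO2} \approx 3.7 W m^{-2} and \lambda = -1.3 W m^{-2} K^{-1} gives ECS = -F_{2 \times CO2}/\lambda \approx 2.8 K.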

    Nic Lewis claimed in a reply to me, however, that \lambda could be temperature dependent. This is not compatible with the mentioned definition of ECS, as we can easily see by considering the following simple example:

    R = F + \lambda(T_s) T_s.

    Now the solution of 0 = F + \lambda(T_s) T_s is different: it is no longer T_s = -F/\lambda, and we have to take the dependence of \lambda on T_s into account when calculating the solution. Thus the definition of ECS according to the simple global energy budget model is no longer valid if \lambda is temperature dependent.
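    As a minimal numerical sketch of that point (the linear form of \lambda(T_s) below is a toy assumption of mine, not the form Nic Lewis proposed): with a state-dependent feedback, the equilibrium warming has to be found by root-finding, and it differs from the naive -F/\lambda evaluated at the initial state.

        from scipy.optimize import brentq

        F_2x = 3.7               # W m-2, forcing for doubled CO2
        lam0, dlam = -1.3, 0.05  # toy numbers: feedback weakens as Ts rises

        def lam(ts):
            # toy temperature-dependent feedback parameter, W m-2 K-1
            return lam0 + dlam * ts

        # Solve 0 = F + lam(Ts) * Ts for the equilibrium warming Ts
        ecs = brentq(lambda ts: F_2x + lam(ts) * ts, 0.0, 20.0)

        print("naive -F/lam(0):   ", round(-F_2x / lam0, 2), "K")  # ~2.85 K
        print("actual equilibrium:", round(ecs, 2), "K")           # ~3.25 K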

    If we have a planet with a changing temperature pattern, the calculation of ECS must be done in another way, with a modified energy budget model; I have also discussed this in my comment on Dessler et al. (2018) in ACP:

    See acp-2017-1236-SC3.pdf.

    I analysed an example of a planet with three regions, each of them satisfying the regional versions of the linearised flux equation according to equation (1) in Dessler et al. (2018):

    R_1 = F_1 + \lambda_1 T_{s,1}
    R_2 = F_2 + \lambda_2 T_{s,2}
    R_3 = F_3 + \lambda_3 T_{s,3}

    I have discussed how ECS can be calculated for this example as an average of the ECS values that can be calculated for each of the regions. Read more in my ACP comment.
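    In compressed form (my own condensation; the details are in the ACP comment): with area weights a_i summing to one, the global balance is

    R = \sum_i a_i (F_i + \lambda_i T_{s,i}) = F + \lambda_{eff} T_s, \qquad \lambda_{eff} = \dfrac{\sum_i a_i \lambda_i T_{s,i}}{\sum_i a_i T_{s,i}},

    so the effective global feedback \lambda_{eff} changes whenever the warming pattern changes, even though every regional \lambda_i is constant.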

    PS I posted this also at Climate Etc.

  595. BBD says:

    RickA

    So, once again – what is the point of model metrics like TCR and ECS?

    Read. The. Thread.

    If you can’t understand this stuff, don’t comment.

  596. frankclimate says:

    Willard: “What makes you think that forecasting the next 20 years should be easier than forecasting the next 40 years,”
    I think we calculate the future from models (a guess derived from observations is also a model), and the longer the forecast, the more uncertainties arise. The further a forecast reaches into the future, the more “unknown unknowns” can step in. Whoever thinks he can be confident about long runs can become a victim of overconfidence sooner than he estimates.

  597. paulski0 says:

    Dave_Geologist,

    …when I see EBMs and other relatively simplistic analytical models which can be calculated fairly quickly and permit lots of researcher choices, how many higher-ECS outcomes were left on the cutting-room floor

    There are a few elements of researcher freedom, and some questionable choices made, which have small effects, but the main factor in determining roughly the magnitude of sensitivity is the forcing estimate.