Climate Sensitivity on the Rise

There’s a Nature news & views article by Kyle Armour called Projection and prediction: Climate sensitivity on the rise. It discusses various analyses that indicate that simple energy balance methods tend to under-estimate climate sensitivity. Normally, I would write a lengthier post about this. However, Victor has already done better than I could have done, so I would suggest going and reading Victor’s post.

Update: Dana has also written about this.


100 Responses to Climate Sensitivity on the Rise

  1. I think you can download Kyle Armour’s article here.

  2. dana1981 says:

    Also my post with some input from Mark Richardson here.

  3. Dana,
    Thanks, of course. I even posted a couple of comments on your article.

  4. BBD says:

    As it becomes increasingly obvious that the rhetoric about low sensitivity is not based on robust results, the contrarian claque will have to find another angle. I wonder what it will be next? Because it is now clear that there will have to be a next.

  5. Windchaser says:

    As it becomes increasingly obvious that the rhetoric about low sensitivity is not based on robust results, the contrarian claque will have to find another angle. I wonder what it will be next?

    Satellites are still on the table.

    “What’s that? RSS is in pretty good agreement with the surface measurements, and the methodology and code for UAH haven’t been released? Shh shh shh, we don’t care. UAH will do us just fine, until scientists find the problem with that, and then we’ll move on to something else, just like we always do.”

    Heh. I suppose you don’t see many skeptics talking about the surface measurements these days, do you? When they were flat, they were A-ok. When they stopped being flat — well, you know that data is all faked to show warming, right?

  6. Thanks for the plug.

    I had hoped for your assessment; this is more your topic than mine.

  7. You’ve got MarkR and Kyle commenting on your post. It’s much more their topic than mine and they seem to like it.

  8. -1=e^iπ says:

    While I’m not the biggest fan of Energy balance models, isn’t Kyle Armour’s (as well as Victor’s) choice to use 2C as the best estimate of ECS according to climate models a bit questionable? Nic Lewis’ best estimate is 1.45 C when he uses the updated Bjorn Stevens’ estimate for aerosol forcing. If one accepts the multipliers used by Kyle Armour, then this gives a best estimate of 2.92 C, not the 4.6 C that Victor gets.

  9. -1,
    I don’t think Bjorn Stevens ever provided a proper distribution for the aerosol forcing and Nic Lewis didn’t ever publish an update. The best estimates for the ECS (IIRC) vary from around 1.7C to 2C. You could use 1.7C and bring it down a bit, but it doesn’t really change the basic point. They’re not actually arguing that the best estimate should now be 4.6C.

  10. -1=e^iπ says:

    Sorry for the error in my last comment. I meant ‘according to energy balance models’ instead of ‘according to climate models’.

  11. -1: “If one accepts the multipliers used by Kyle Armour, then this gives a best estimate of 2.92 C, not the 4.6 C that Victor gets.”

    Victor gets 3°C.

    Had the mitigation sceptical movement been a sceptical scientific movement, they would have gotten 3°C all the time as well. But the mitigation sceptical movement is clearly a political movement and thus pushed for the outlier result of Nic Lewis.

  12. -1=e^iπ says:

    Which basic point? There are many.

    Does the energy balance model have a downward bias due to the biases mentioned? Yes, I agree with that.

    Do these biases cause an underestimation by a factor of 2? That is a bit more questionable.
    – For example, coverage bias does mean that some temperature data sets such as HadCRUT4 are underestimating global temperature change. But the approach used by Richardson et al. 2016, of masking climate models to the observed regions, may be biased. Perhaps the CMIP5 models are systematically overestimating arctic warming, thus the bias due to the masking problem may not be as large as 15%. Statistical infilling may be a better way to estimate this bias.
    – As another example, if climate sensitivity is not as high as what the climate models are saying, then there is reason to believe that the ratio of TCR to ECS is closer to one, and similarly the ratio of ECS as measured by energy balance models to the true ECS value may be closer to one. This is due to the interactions of feedbacks. Less positive feedback means less interaction from positive feedbacks. So a bias of 30% due to the different definition of ECS used in energy balance models may be an overestimate of the bias.

    The claim “global climate models output also fit the “empirical” temperature change” and the treatment of climate models as providing unbiased estimates of climate sensitivity are things I disagree with. Even after you take issues like coverage bias into account for the warming over the instrumental period there is still a discrepancy between climate model predictions and observations. I see a lot of dancing around this issue by people claiming it looks about right, or it appears to be in rough agreement, but this isn’t very scientific. Compare the trends and perform a t-test.

  13. Which basic point? There are many.

    That these energy balance estimates don’t really change that the likely range is still about 2C to 4.5C with a best estimate of about 3C.

  14. -1=e^iπ says:

    By likely, do you mean the 66% confidence interval, or the 95% confidence interval?

  15. I think that is the 66% range. Well, AR5 had (IIRC) a likely range of 1.5C to 4.5C, but that was probably due to these energy balance estimates that appear to be biased low.

  16. -1=e^iπ says:

    I asked my last question because I think data over the instrumental period constrains ECS far more than the 66% CI being from 2 C to 4.5 C. Just take the Nic Lewis results with the Bjorn Stevens aerosol estimate and multiply it by the factor suggested by Armour. That 66% CI is now a 95% CI.

    I’ll leave you with this to think about: If climate models are overpredicting warming relative to observations, then does this not suggest a systematic bias in most climate models? Is it possible to use climate models + empirical observations to correct for this bias and produce unbiased (and more constrained) estimates of TCR and ECS? If so, what are these estimates?

  17. -1,
    Then I think you need to read the papers that Victor discusses in his post. There are aspects that the instrumental period cannot constrain, such as time dependence and efficacy.

  18. -1=e^iπ says:

    “There are aspects that the instrumental period cannot constrain, such as time dependence and efficacy.”

    Incorrect. These are aspects that the energy balance approach cannot constrain. But the energy balance approach is not the only approach that can use instrumental period data to estimate climate sensitivity. See Van Hateren 2012.

  19. These are aspects that the energy balance approach cannot constrain.

    Yes, that’s what I meant. Please try to think about what I’m saying, rather than being pedantically literal. Plus, unless you can see into the future, the time dependence issue is not resolvable without models, as is the efficacy. So, I do not believe that you can constrain the ECS as you claim you can. You might be able to do a calculation that appears to show this. I doubt you can do one that is definitively correct.

  20. BBD says:

    -1

    If climate models are overpredicting warming relative to observations

    Which ones? 🙂

  21. Steven Mosher says:

    “Perhaps the CMIP5 models are systematically overestimating arctic warming, thus the bias due to the masking problem may not be as large as 15%. Statistical infilling may be a better way to estimate this bias.”

    ya the models are crazy biased when it comes to amplification

    http://static.berkeleyearth.org/graphics/figure52.pdf

  22. -1=e^iπ says:

    @ BBD – CMIP5

    @ ATTP – The time dependence issue is resolvable because you estimate the impulse response function. Thus you can directly calculate ECS and TCR using the same definition as climate models; you don’t need to use an incorrect definition like the energy balance model. As for efficiency, you can add free parameters to the model to estimate the relative efficiency of different sources of forcing from the data. As for ‘definitively correct’, are climate models ‘definitively correct’?

  23. Steven Mosher says: “ya the models are crazy biased when it comes to amplification”

    Your Berkeley Earth estimate is missing those beautiful things called error bars.

    Berkeley Earth may produce values in the Arctic, but it is mostly extrapolating from stations with a lower trend at lower latitudes. I would not be as overconfident as you, especially given that climate models have heavily underpredicted the decline in Arctic sea ice. Maybe try such a comparison again, but only with data you trust, in regions with a sufficient station density.

  24. I believe that in the Arctic from 1950 to present (the time period in question):
    A) we are in substantial agreement with C&W, whom you’ve roundly attacked… oh wait
    B) we are in substantial agreement (a bit higher) with CRU, whom you’ve critiqued endlessly.. oops
    C) We don’t extrapolate. The temperature is modelled. GISS extrapolates.
    D) If you don’t like modelling the air temperature over ice, then just look at the SST under ice, which is relatively constant. The answer is roughly the same.
    E) We are in substantial agreement with reanalysis. oops.
    F) Yes, the models have underpredicted sea ice loss, which will make the real comparison even worse, since the air over open water should be warmer.. correct? Such that if the models got the sea ice loss correct, they would be even more out of whack.
    G) But yes, we could only focus on Antarctica where there are more observations.. Oops.. that looks no better.

    Next.

  25. Ah poor Mosher, if CRU or C&W made your overconfident claims I would also criticize them. If someone from CRU pretended that reanalysis data is a good source for trends, I would destroy them, especially for a region that has nearly no observations at the beginning and depends mostly on ever-changing satellites at the end.

    Your data points mainly come from regions with lower trends. Whether you formulate the problem as interpolation or extrapolation, you still have the problem of extrapolation: you are inferring something about an outlier region from data outside that region.

    So you do not have any error bars for your extrapolated values and you have to resort to sarcasm. Sad.

  26. SM writes: “D) If you don’t like modelling the air temperature over ice, then just look at the SST under ice, which is relatively constant. The answer is roughly the same.”

    How is either going to detect warming in an ocean filled with ice? Thermodynamically the area is constrained in summer between the ocean temp at approx. -1.8C and the temp of melting ice at 0C. Temperature graphs like DMI N80 tell us virtually nothing between day 120 and day 260.

    Only in winter months when ice has mostly covered the Arctic ocean can we see the effects of warming with temperature measurements. In summer, the decreasing ice volume *is* the thermometer – not temperatures. Summer energy mostly goes into the phase transition from snow and ice to water – not into raising air or ocean temperatures.

  27. -1,

    The time dependence issue is resolvable because you estimate the impulse response function.

    I don’t see how this follows. If the response overall is non-linear, then I don’t think you can compensate for this using a time interval over which the response has been approximately linear.

    As for efficiency, you can add free parameters to the model to estimate the relative efficiency of different sources of forcing from the data.

    Indeed, but this requires estimates of the efficacy from models. In other words, it’s not just empirical.

    As for ‘definitively correct’, are climate models ‘definitively correct’?

    Of course not, but that’s why one has to be careful of claiming a higher precision (as you seem to be doing). In fact, I think that formally 2C to 4.5C is closer to a 95% interval, but this is reduced to 66% to take into account aspects that are still uncertain. Doing a calculation that produces a more precise result does not mean that that precision is warranted.

  28. BBD says:

    -1

    @ BBD – CMIP5

    You didn’t read Victor’s post?

    Taking all three biases into account, the best estimate from the energy balance models of around 2°C becomes 4.6°C**; see Figure 1b of Armour (2016) reproduced below.

  29. -1=e^iπ says:

    @ ATTP –

    “I don’t see how this follows. If the response overall is non-linear, then I don’t think you can compensate for this using a time interval over which the response has been approximately linear.”

    Are you purposely being vague here?

    I’ll assume that by ‘response being non-linear’ you mean that the energy balance model definitions of ECS and TCR are non-constant over time. You get over this by using the climate model definitions of ECS and TCR and estimating those using the estimated impulse response function (or step response function if that’s easier to work with).

    … maybe you will understand better if I give you an example.

    Let’s say you estimate the step response function Sum(i = 1 to i = k; delta_f*c_i*(1 – exp(-1/b_i*delta_t))), where k is the number of exponential terms in the estimated step response function, c_i is the empirically estimated coefficient for the ith exponential term, b_i is the characteristic decay time of the ith exponential term, delta_t is the time since the step change in forcing of CO2, and delta_f is the change in forcing of CO2.

    From this, you can easily obtain TCR since TCR is defined as: “the average temperature response over a twenty-year period centered at CO2 doubling in a transient simulation with CO2 increasing at 1% per year.”.

    TCR is (assuming I made not mistakes in the integration):
    Sum(i = 1 to i = k ; c_i*3.71 W/m^2/(69.7 years)*(67.2 years + b_i^2/(20 years)*(exp(-1/b_i*79.7 years) – exp(-1/b_i*59.7 years)) – b_i))
    Here, TCR is just a linear combination of the estimated coefficients c_i for the step response function.
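
    (For concreteness, a rough numerical sketch of the same idea is below. The c_i and b_i values are illustrative placeholders, not fitted to any data, and the TCR is computed directly from its definition rather than from the closed-form expression above.)

    ```python
    # Sketch: given an assumed two-term step response, compute the TCR implied by
    # its definition (20-year mean centred on CO2 doubling under a 1%/yr increase).
    # The response amplitudes and time scales below are illustrative placeholders.
    import numpy as np

    F_2x = 3.71                            # W/m^2, forcing of doubled CO2
    c = [0.45, 0.35]                       # K per (W/m^2), assumed amplitudes
    b = [4.0, 200.0]                       # years, assumed time scales

    t_double = np.log(2) / np.log(1.01)    # ~69.7 yr to doubling at 1%/yr
    dt = 0.01
    t = np.arange(0, 80 + dt, dt)
    forcing_rate = F_2x / t_double         # forcing ramps ~linearly in time

    # Unit-step response s(t); for a linear forcing ramp the temperature is
    # forcing_rate times the running integral of s(t).
    s = sum(ci * (1 - np.exp(-t / bi)) for ci, bi in zip(c, b))
    T = forcing_rate * np.cumsum(s) * dt

    window = (t >= t_double - 10) & (t <= t_double + 10)
    print(f"TCR ~ {T[window].mean():.2f} K, ECS ~ {F_2x * sum(c):.2f} K")
    ```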

    “Indeed, but this requires estimates of the efficacy from models. In other words, it’s not just empirical.”

    No, it’s called adding extra parameters to the model and estimating models from data. Maybe I’ll dumb the explanation down a bit:

    It is possible to estimate the model: temperature_change = c_0*forcing_change + error. It is also possible to estimate the model: temperature_change = c_0*GHG_forcing_change + c_1*Solar_forcing_change + c_2*Aerosol_forcing_change + … + error. The difference between the two models is that the second allows for efficiency to be freely estimated by the data, whereas the first assumes forcing efficiencies of 1 for everything. Similarly, you can add free parameters to your model when estimating the step response function, in order to allow for the possibility that forcing efficiency is not 1.
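
    (A minimal sketch of what this might look like as an ordinary least-squares fit is below; the file and column names are hypothetical placeholders, and no lag or step-response structure is included.)

    ```python
    # Sketch: one free coefficient per forcing agent plus an intercept, so the
    # relative "efficiencies" are estimated from the data rather than fixed at 1.
    import numpy as np
    import pandas as pd

    # Hypothetical annual table with forcing columns (W/m^2) and a temperature
    # anomaly column (K); the file name is a placeholder.
    df = pd.read_csv("forcings_and_temperature.csv")
    X = df[["ghg_forcing", "solar_forcing", "aerosol_forcing"]].to_numpy()
    y = df["temperature_anomaly"].to_numpy()

    A = np.column_stack([X, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("fitted sensitivities (K per W/m^2):", coef[:3])
    print("implied efficiencies relative to GHG:", coef[:3] / coef[0])
    ```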

  30. -1=e^iπ says:

    “I made not mistakes” should read “I made no mistakes”. Sorry for the typo.

  31. Christian says:

    Mosher,

    It’s a bit of a cherry pick, what you have done. Starting in the 1950s isn’t ideal, because the models tend to fail in this period due to uncertainty around a possible WW2 issue. And again, I think you are using “TAS” in the models but Air+SST from the observations.

    Therefore it is best to look only at the air temperature over land, and if you do this you will also see that since the 1970s (in 70-90N) CMIP5 is running cool against Berkeley. In other words, for the period 1970-2015 the trends are:

    CMIP5: 0.43K/Decade
    Berkeley: 0.6K/Decade

    It looks even worse over the short period of fast sea ice decline, 2000-2015:

    CMIP5: 0.39K/Decade
    Berkeley: 0.73K/Decade

    In my opinion, the last one is the important one, because this is the period where sea ice in the Northern Hemisphere decreases very much, much more than in the models, and we see what we would expect to see.

  32. -1,

    Are you purposely being vague here?

    No, and maybe you could cut these kind of comments out?

    The point is that climate models suggest that we – on average – warm faster as we approach a doubling of CO2 than at the beginning of a period during which we are starting to double atmospheric CO2. We have not yet doubled atmospheric CO2, therefore since we haven’t experienced the period during which warming may be faster, I do not see how it is possible for the instrumental period alone to take this into account.

    Sure, you could try and incorporate this into an estimate that uses the instrumental data, but since this would rely on climate model results, it seems hard to then argue that one can produce an estimate that is more precise than we get from climate models themselves.

    No, it’s called adding extra parameters to the model and estimating models from data.

    If the data does not yet include the period during which warming accelerates, then there is no way the data can be used to constrain this acceleration.

    Maybe I’ll dumb the explanation down a bit:

    Humble, aren’t we? Our previous discussions are coming back to me now.

    You seem to (unintentionally?) be making my point for me. Developing some kind of model that produces a very precise result is not all that useful if that precision is unwarranted. For example, we haven’t yet doubled atmospheric CO2, so we can’t use the instrumental period to completely constrain how we will warm in future. There is also evidence that the pattern of the surface warming can influence the temperature response, so we can’t use the instrumental period to completely constrain how it will warm if the pattern of surface warming in future is different to what we actually experienced.

  33. -1,

    “I made not mistakes” should read “I made no mistakes”.

    Your humility really is showing, isn’t it?

  34. Dikran Marsupial says:

    “No, it’s called adding extra parameters to the model and estimating models from data”

    If we want to estimate the value of a parameter from a finite sample of data then there will inevitably be some uncertainty in the estimated value. The more data we have, then in general the smaller the uncertainty. However, the data needs to provide some constraint on the value of the parameter in order to reduce the uncertainty in the estimated value. For instance if we want to try and estimate the boiling point of water at 1 bar, then if we only ever collect data below 50 degrees C then we won’t have constrained the value of the parameter very much.

    “It is also possible to estimate the model: temperature_change = c_0*GHG_forcing_change + c_1*Solar_forcing_change + c_2*Aerosol_forcing_change + … + error. The difference between the two models is that the second allows for efficiency to be freely estimated by the data, whereas the first assumes forcing efficiencies of 1 for everything. Similarly, you can add free parameters to your model when estimating the step response function, in order to allow for the possibility that forcing efficiency is not 1.”

    The uncertainty in the estimates also depends on the number of parameters you are trying to estimate (this is known in statistics/machine learning as the “curse of dimensionality”). Just adding parameters to a model doesn’t mean you can necessarily estimate their values from the data you have (see also identifiability), and doesn’t mean it will give more reliable results or physical insight (c.f. overfitting).
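
    (A toy illustration of the overfitting point, on purely synthetic data: the model with more free parameters fits the sample better but typically predicts worse away from it.)

    ```python
    # Sketch: fit a low-degree and a high-degree polynomial to a few noisy points
    # drawn from a smooth "truth", then compare in-sample and out-of-sample errors.
    import numpy as np

    rng = np.random.default_rng(1)
    truth = lambda x: np.sin(np.pi * x)

    x_train = np.linspace(-1, 1, 10)
    y_train = truth(x_train) + rng.normal(0, 0.3, x_train.size)
    x_test = np.linspace(-1, 1, 200)

    for degree in (3, 9):
        coeffs = np.polyfit(x_train, y_train, degree)   # more parameters => better in-sample fit
        in_rmse = np.sqrt(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
        out_rmse = np.sqrt(np.mean((np.polyval(coeffs, x_test) - truth(x_test)) ** 2))
        print(f"degree {degree}: in-sample RMSE {in_rmse:.2f}, out-of-sample RMSE {out_rmse:.2f}")
    ```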

    “Ignorance more frequently begets confidence than does knowledge: it is those who know little, not those who know much, who so positively assert that this or that problem will never be solved by science.” ― Charles Darwin, The Descent of Man

    … or indeed that the answer is straightforward ;o)

  35. Dikran Marsupial says:

    ” Similarly, you can add free parameters to your model…”

    Danger Will Robinson!

    “A turning point in Freeman Dyson’s life occurred during a meeting in the Spring of 1953 when Enrico Fermi criticized the complexity of Dyson’s model by quoting Johnny von Neumann `With four parameters I can fit an elephant, and with five I can make him wiggle his trunk‘” Mayer et al. (doi:10.1119/1.3254017)

  36. BBD says:

    You’d expect the smart contrarians to be among the first to realise that the EBM gig is over, but life is full of surprises.

  37. -1=e^iπ says:

    “We have not yet doubled atmospheric CO2, therefore since we haven’t experienced the period during which warming may be faster, I do not see how it is possible for the instrumental period alone to take this into account.”

    We have yet to experience a world where the sun has a radius greater than 700,000 km. Does this mean we cannot use empirical observations of the sun to predict and constrain the radius of the sun 4 billion years in the future, when the radius will be much larger?

    “Sure, you could try and incorporate this into an estimate that uses the instrumental data, but since this would rely on climate model results”

    Putting extra parameters in a model and using data to estimate the parameters = relying on climate models????????????????????? That doesn’t make sense.

    “it seems hard to then argue that one can produce an estimate that is more precise than we get from climate models themselves.”

    Even if I were to accept your bizarre premise that using a method to estimate forcing efficiency that doesn’t use climate models is apparently using climate models, now you are arguing that it is hard to believe that using more data would help better constrain an estimate. Really? Ever heard of the law of large numbers?

    “If the data does not yet include the period during which warming accelerates, then there is no way the data can be used to constrain this acceleration.”

    1. There is acceleration in the instrumental period.
    2. There is variation in temperature and forcing for the instrumental period, which can be used to estimate the step response function.
    3. The method used by Van Hateren doesn’t necessarily have to be used with instrumental data. It can also be used with paleoclimate data, specifically over the Holocene.

    “we can’t use the instrumental period to completely constrain how we will warm in future.”

    If by completely you mean that the uncertainty is zero, no of course we can’t. That would require infinite data and an infinite amount of time to process that infinite data.

  38. -1,

    We have yet to experience a world where the sun has a radius greater than 700,000 km. Does this mean we cannot use empirical observations of the sun to predict and constrain the radius of the sun 4 billion years in the future, when the radius will be much larger?

    Seriously, if you think this is a question worth answering, stop wasting my time.

  39. -1=e^iπ says:

    @ Dikran Marsupial –

    If you want to argue that the estimate won’t be very well constrained due to lack of data, complexity of model, etc. that is fine. But at least you acknowledge that you can use empirical data to estimate the step response function, forcing efficiencies, etc. without using climate models or whatever it is ATTP is trying to argue.

    Could you please try to convince ATTP that you actually can estimate the various things I am referring to. Because this conversation isn’t going to go anywhere if ATTP keeps claiming you can’t infer the step response function, forcing efficiencies, etc. from empirical data and use this to estimate and constrain climate sensitivity.

  40. -1,
    Maybe Dikran should respond, but I don’t think that he has agreed with this

    But at least you acknowledge that you can use empirical data to estimate the step response function, forcing efficiencies, etc. without using climate models or whatever it is ATTP is trying to argue.

    Do you really think that from observations of the Sun alone, we can estimate its properties in 4 billion years time?

  41. MartinM says:

    Putting extra parameters in a model and using data to estimate the parameters = relying on climate models????????????????????? That doesn’t make sense.

    Yeah, why would you be relying on climate models, just because you’re constructing a model of the climate? That’s crazy talk.

  42. Dikran Marsupial says:

    -1 ” But at least you acknowledge that you can use empirical data to estimate the step response function, forcing efficiencies, etc. without using climate models or whatever it is ATTP is trying to argue.”

    Only if you had suitable data (including direct knowledge of the forcings) that provided information on the value of the parameter AND the model was identifiable; however, I don’t think either is the case. Note there are also issues such as missing variable bias that crop up as well.

    At the end of the day, the inference hierarchy is something along the lines of:

    physics > statistics >= chimps pulling numbers from a bucket

    Note I am a statistician (my specific area is machine learning, but it is essentially statistics when all is said and done), and I am much more swayed by physics than I am by statistics, which is why I find GCMs (with all their issues) more convincing than simple statistical models because the physics constrains the parameter values via “prior knowledge” even in the absence of the data. This helps to guide you away from the “chimps pulling numbers from a bucket” end of the spectrum. This is why climatologists tend to use both statistics and physics in their papers.

  43. verytallguy says:

    BBD,

    You’d expect the smart contrarians to be among the first to realise that the EBM gig is over

    these smart contrarians of which you speak. Do they live in the hot cold places? Or perhaps in the flat mountains?

  44. Dikran Marsupial says:

    “Could you please try to convince ATTP that you actually can estimate the various things I am referring to.”

    I think that ATTP is aware that you can estimate various things, but that is not the same as meaningfully/usefully estimate. In this case it seems that we don’t have observations that cast light on the value of a parameter of interest, so we can’t meaningfully estimate its value. Chimps pulling numbers from a bucket can estimate ECS, but it wouldn’t be an estimate I would trust.

  45. -1, maybe a more extreme example makes it possible to see why we disagree. Would you say we can estimate the Earth System Climate Sensitivity from instrumental data? I would say not, I would expect Physics to say the same.

    This sensitivity includes warming due to decreases in albedo when the ice caps disappear. However, up to now this term is very small and it drowns in the natural variability and measurement and sampling uncertainties.

    Thus I would expect that it is impossible to estimate this sensitivity based on instrumental data, you need a climate model or observations from the deep past to do so.

  46. -1=e^iπ says:

    “Do you really think that from observations of the Sun alone, we can estimate its properties in 4 billion years time?”

    We have so much information about the sun. Its radius, its spectrum, its colour, etc. Obviously, if you want to get a good estimate of its radius in 4 billion years, you would want to refer to knowledge of physics to interpret that information. With respect to using timeseries analysis to estimate climate sensitivity you can use a functional form that is consistent with physics. I’m not advocating the Keenan position where you use some nonsense ARIMA model that doesn’t make physical sense.

    Although if one wanted to be extremely pedantic, one could argue that the radius as a function of time is likely analytical. Thus one could fit a polynomial function of time to the ln of radius and use that to make predictions and constrain the future radius. Obviously it would be a terrible estimate, but it would still be an estimate with some non-infinite confidence interval.

    @ Martin – Yes, it’s technically a climate model. Though I thought by the context of the conversation, by climate models we were referring to things like GCMs.

  47. Victor,
    Exactly, if you don’t have enough data, then you can’t easily extract the signal from the noise. Another issue is related to what James Annan discusses here. We’ve only experienced one reality. It appears that the pattern of warming (in particular, SSTs) can influence the overall warming trend, and so we don’t even know if the signal we extract is actually a good representative of the forced response. It is much more likely some combination of the forced response and internal variability. Actually extracting the forced response would require data from multiple Earths, and we don’t have that.

  48. Dikran Marsupial says:

    -1 wrote “Because this conversation isn’t going to go anywhere if ATTP ”

    if you want the conversation with ATTP to go somewhere, then I would suggest you avoid saying things like “Are you purposely being vague here?” (which would be most unlike ATTP) and “Maybe I’ll dumb the explanation down a bit:”, which give the impression that you are more interested in having a bit of ClimateCraic (TM) and looking to wind ATTP up, rather than have a civil discussion. It takes effort not to respond to that sort of behaviour in kind, and it really isn’t conducive to seeing the other person’s point.

    As I understand it, the “observations” we have on the forcings are partially dependent on model based analysis, rather than being direct observations, so if you want to include the distinct forcings as separate variables in the statistical model, then it is no longer a purely statistical analysis as it is implicitly dependent on the (physical) models. However it is a while since I read the papers on this, so you would be better off asking a physicist/climatologist.

    We have so much information about the sun. Its radius, its spectrum, its colour, etc. Obviously, if you want to get a good estimate of its radius in 4 billion years, you would want to refer to knowledge of physics to interpret that information.

    What do you think most people would call some kind of calculation based on physics that allows us to determine the evolution of a system, like the Sun?

  50. -1=e^iπ says:

    @ Dirkan Marsupial – “Note there are also issues such as missing variable bias that crop up as well.”

    If you want to go down that route, GCMs also have missing variable bias. They aren’t taking into account the albedo of the fish in the ocean, or whether or not I decide to point a mirror at the sun tomorrow.

    “I think that ATTP is aware that you can estimate various things”

    Based on his responses, I doubt it. But maybe I’m wrong. Saying you will get a terrible estimate (your position) is different from saying you cannot get an estimate (ATTP’s position).

    @ Victor Venema –
    “-1, maybe a more extreme example makes it possible to see why we disagree. Would you say we can estimate the Earth System Climate Sensitivity from instrumental data?”

    Absolutely you can. It might not be a well constrained estimate, but you can still make an estimate.

    “However, up to now this term is very small”

    Very small, but non zero. Which is why you can estimate ESS.

  51. -1,
    If you’re going to be rude and if you’re going misrepresent me, you can go away. I do remember our previous discussions. I don’t remember them fondly.

    Saying you will get a terrible estimate (your position) is different from saying you cannot get an estimate (ATTP’s position).

    I didn’t say any such thing. All I’ve said is that simply because you can develop an analysis that produces an estimate that is more precise than other estimates, does not mean that your precision is warranted/justified.

  52. -1=e^iπ says:

    @ Dikran –

    “which give the impression that you are more interested in having a bit of ClimateCraic (TM) and looking to wind ATTP up”

    That isn’t my intention. I provided a reference to a paper that uses empirical data to infer climate sensitivity that doesn’t use the energy balance approach and I explained how one could modify it to take non-unity forcing efficiency into account. ATTP still didn’t get it, so I tried alternative explanations, tried simplifying things, tried analogies, etc.

    “As I understand it, the “observations” we have on the forcings are partially dependent on model based analysis, rather than being direct observations, so if you want to include the distinct forcings as separate variables in the statistical model, then it is no longer a purely statistical analysis as it is implicitly dependent on the (physical) models.”

    If you really wanted to, you could avoid this by using proxies for forcing instead of forcing itself. For example, using the logarithm of CO2 concentrations directly in the model, level of SO2 emissions directly in the model, etc. and just have free parameters to account for different forcing efficiencies. You could still estimate climate sensitivity from such a statistical model.

  53. -1,
    Possibly you should consider that there is a difference between someone disagreeing with you, and someone not getting it. It almost seems as though you haven’t considered the possibility that you might be wrong. oh, hold on.

    I’ll repeat my point. You seem to be claiming that you can use the instrumental record to produce an estimate that is more precise than the standard IPCC estimate. I don’t dispute that you can do this calculation. What I dispute is that one can be confident that such improved precision is warranted/justified. Do you at least get this, even if you disagree?

  54. Dikran Marsupial says:

    -1 “If you want to go down that route, GCMs also have missing variable bias”

    sorry you are just bullshitting (in the Harry Frankfurt sense) and transparently only latched onto this to avoid addressing the substantive point that the model you propose cannot be used to meaningfully estimate the quantities of interest. Sorry, I have better things to do than to indulge this sort of thing any further.

  55. -1=e^iπ says:

    “What do you think most people would call some kind of calculation based on physics that allows us to determine the evolution of a system, like the Sun?”

    A model.

    The context of the conversation usually involves people claiming things like ‘energy-balance estimates do not agree with climate model estimates’, yet the energy-balance model is still technically a climate model. Sorry for the ambiguity, but I did not create it.

    “I didn’t say any such thing. All I’ve said is that simply because you can develop an analysis that produces an estimate that is more precise than other estimates,”

    …. earlier ATTP said things like:
    ‘I do not believe that you can constrain the ECS as you claim you can.’
    ‘this requires estimates of the efficacy from models.’

    If you claim that I cannot estimate something I interpreted that literally, rather than interpreting it to mean that you can estimate it, but it won’t be well constrained.

  56. -1,
    What do you think I meant by the term “constrain”?

    If you claim that I cannot estimate something I interpreted that literally,

    No, you very obviously did not interpret it literally, as I didn’t say any such thing. Jesus!

  57. Dikran Marsupial says:

    FWIW I have just checked ATTP’s comments on this thread, and nowhere AFAICS did he say anything that could reasonably be construed as claiming “that I cannot estimate something”.

  58. Especially not if one interprets things “literally” 🙂

  59. -1=e^iπ says:

    “It almost seems as though you haven’t considered the possibility that you might be wrong. oh, hold on.”

    Please correct me if I am misinterpreting you. Are you trying to use a post where I corrected a typo in a previous post, where I acknowledged the possibility that I could have made a mistake in the integration to get
    Sum(i = 1 to i = k ; c_i*3.71 W/m^2/(69.7 years)*(67.2 years + b_i^2/(20 years)*(exp(-1/b_i*79.7 years) – exp(-1/b_i*59.7 years)) – b_i))
    as the definition of TCR from the step response function as evidence that I haven’t considered the possibility that I might be wrong?

    “I’ll repeat my point. You seem to be claiming that you can use the instrumental record to produce an estimate that is more precise than the standard IPCC estimate.”

    Yes and no. There are different ways of using the instrumental record that were discussed here.

    With respect to the Van Hateren approach, that does not necessarily produce a more constrained estimate of climate sensitivity. Whether it does or not depends on the data.

    Alternatively, you could compare model output with observations and use that to correct for systematic bias in models as well as possibly exclude extreme values of climate sensitivity and thus get a more constrained estimate of climate sensitivity. One way to do this involves treating the true model as a linear combination of climate models plus error, similar to what Annan and Hargreaves do in their paper where they obtained their estimate of warming since the LGM. This would necessarily get a more constrained estimate, but it could appear to be less constrained as the CIs that use GCMs don’t take error due to systematic bias into account.

    And of course, thirdly, you could use the energy balance approach and correct for its biases. This could get a more constrained estimate than climate models, it depends. But I think you could get a more constrained estimate than the 66% CI being from 2 C to 4.5 C.

  60. -1=e^iπ says:

    “What do you think I meant by the term “constrain”?”

    Create an estimate and a confidence interval for that estimate, or something along those lines.

    “No, you very obviously did not interpret it literally, as I didn’t say any such thing.”

    Do you think it is possible to estimate forcing efficiency using timeseries analysis of instrumental data without the usage of GCMs? Yes or No?

  61. Marco says:

    “All I’ve said is that simply because you can develop an analysis that produces an estimate that is more precise than other estimates, does not mean that your precision is warranted/justified.”

    I’d rather have an estimate that is accurate – the precision can come afterwards…

  62. Willard says:

    Speaking of “overprediction” relative to observations, here’s an oldie:

    “I MOURN OVER THE LOSS to England and to Cambridge of a discovery which ought to be theirs every inch of it, but I have said enough about it to get heartily abused in France, and I don’t want to get hated in England for saying more.” So wrote a disappointed Sir John Herschel in November 1846, two months after Neptune had been discovered by Heinrich d’Arrest and J. G. Galle of the Berlin Observatory as a direct consequence of the calculations based on the novel approach of inverse perturbations of the French mathematical astronomer U. J. J. Le Verrier.

    http://www.jstor.org/stable/234933

    So yes, Virginia, teh stoopid modulz are sometimes righter than the top observations of their times.

  63. BBD says:

    Roadies have gone. Lighting rig’s down. Tour bus left hours back. The venue is dark and empty but still there’s someone strutting and fretting on the stage, apparently unaware that it’s all over.

  64. -1,

    Create an estimate and a confidence interval for that estimate, or something along those lines.

    Exactly, hence my confusion as to how you have literally interpreted me as saying something different.

    Do you think it is possible to estimate forcing efficiency using timeseries analysis of instrumental data without the usage of GCMs? Yes or No?

    I’m not a politician, so you don’t get to play the “yes or no” game. Technically, forcings come from models, so the pedantic answer is probably no. Could you make some kind of estimate as you suggested? Sure, but that has underlying assumptions, such as – I think – the mean trend is the forced response. This may not be true. Also, how confident are you that the result isn’t degenerate? How do you account for internal variability? So, yes, you can probably develop a pure timeseries analysis that would produce a set of numbers that you could call the efficacies of the forcings. You might, however, want to compare this with model results to check that they actually make sense.

    Again, you seem to be claiming that you can use the instrumental record to produce a result that is more precise than the IPCC estimate. My argument is simply that even if you can, you can’t be sure that the precision that you get is justified. You would need to be confident that you had accounted for all uncertainties and biases. As I’ve already mentioned, I think (I may be wrong) that based on all the evidence, the IPCC likely range is actually a 95% range that is reduced to 66% because they regard there as being some uncertainties that are not being fully accounted for. Someone can correct me here, if this is wrong.

  65. Dikran Marsupial says:

    ““No, you very obviously did not interpret it literally, as I didn’t say any such thing.”

    Do you think it is possible to estimate forcing efficiency using timeseries analysis of instrumental data without the usage of GCMs? Yes or No?”

    It is a pity that -1 couldn’t just admit that (s)he had misrepresented ATTP, or better still apologize for having done so.

    Note also the question is rather ambiguous, as I pointed out chimps pulling numbers from a bucket could estimate forcing efficacy without usage of GCMs so “yes” is also a pedantically correct answer.

  66. -1=e^iπ says:

    The main problem with using GCM output to estimate climate sensitivity is that there can be systematic bias in the distribution of GCMs. Thus the estimates are not necessarily unbiased. If you compare GCM output using historical forcing with observations, it appears that GCMs are oversensitive on average.

    Sure, you could say there is systematic bias that isn’t being taken into account, and subjectively reduce the 95% CI to a 66% CI, and end up with a not very well constrained estimate of sensitivity. But that doesn’t seem preferable to instead using instrumental or paleoclimate data to correct for the bias and maybe even better constrain the estimate.

  67. -1,
    You appear to be responding to a comment that no one has made. This isn’t an argument in favour of using GCMs over all other estimates.

    But that doesn’t seem preferable to instead using instrumental or paleoclimate data to correct for the bias and maybe even better constrain the estimate.

    You should, of course, try to constrain the estimates using other methods. However (and this is really all I’ve suggested), doing a calculation that appears to produce a more constrained result does not mean that one should immediately accept that result. The better precision would only be justified if you really had accounted for all possible uncertainties and biases. The goal (as Marco has implied) is not to simply produce a more constrained result; this has no real value if the result is not also accurate.

    I may have suggested this before, but why don’t you try and publish this? You seem remarkably (over?) confident on blogs and clearly have the ability to do the calculation. Why not put it out there more formally and see how it is received?

  68. Windchaser says:

    -1,

    If you compare GCM output using historical forcing with observations, it appears that GCMs are oversensitive on average.

    Could you provide some evidence for this? It seems to be contradicted by the papers cited in the original post. For instance, see this comment from Victor’s blog post:

    “The equilibrium climate sensitivity from global climate models is about 3.5°C***, which is close to the best estimate from all lines of evidence of about 3°C. The “empirical” estimate of 4.6°C is now thus clearly larger than the ones of the global climate models.”

  69. Windchaser says:

    Do you think it is possible to estimate forcing efficiency using timeseries analysis of instrumental data without the usage of GCMs? Yes or No?

    100% yes, as we haven’t yet put any bounds on how accurate the estimate has to be. Just “can we produce an estimate”.

    As for getting an accurate result (i.e., say, 50% more accurate than the GCMs). Certainly that’s theoretically possible, with sufficient data. But I don’t think we have such data yet.

    Generally, when you lose useful information from one source, you have to make it up with more information elsewhere to get the same quality of results. So if we toss out the physics-based GCMs, you need a lot more statistical data to reduce the uncertainty back down. And we don’t have that.

  70. -1=e^iπ says:

    @ Windchaser –
    Lots of historical CMIP5 model output here: https://climexp.knmi.nl/getindices.cgi?WMO=CMIP5/Tglobal/global_tas_Amon_mod_historicalNat_%%%&STATION=CMIP5_historicalNat_models_Tglobal&TYPE=i&id=someone@somewhere
    Take it and compare it to some empirical observation time series, such as BEST or Cowtan and Way. Compare the trends and construct a t-test.
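
    (A sketch of that comparison is below, for what it’s worth. The file names are placeholders, the series are assumed to be annual means on a common 1970-2015 grid, and the simple t-test ignores autocorrelation, which will overstate significance.)

    ```python
    # Sketch: fit linear trends to a model series and an observational series and
    # compare them with a two-sample t-test on the trend estimates.
    import numpy as np
    from scipy import stats

    def trend_with_se(years, temps):
        """Return the OLS trend (K/yr) and its standard error."""
        res = stats.linregress(years, temps)
        return res.slope, res.stderr

    years = np.arange(1970, 2016)
    model = np.loadtxt("cmip5_mean_arctic.txt")    # placeholder file
    obs = np.loadtxt("best_arctic.txt")            # placeholder file

    b_mod, se_mod = trend_with_se(years, model)
    b_obs, se_obs = trend_with_se(years, obs)
    t_stat = (b_mod - b_obs) / np.hypot(se_mod, se_obs)
    dof = 2 * (len(years) - 2)                     # rough approximation
    p = 2 * stats.t.sf(abs(t_stat), dof)
    print(f"model {10*b_mod:.2f} K/dec, obs {10*b_obs:.2f} K/dec, p ~ {p:.2f}")
    ```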

    With respect to Victor’s claim of 4.6 C being the new estimate, I explained earlier in this comment section that applying those multipliers to the most recent energy balance results only gives you 2.9 C as the best estimate.

    @ ATTP – Maybe I will publish something eventually.

  71. Dikran Marsupial says:

    “The main problem with using GCM output to estimate climate sensitivity is that there can be systematic bias in the distribution of GCMs. ”

    Error comprises both bias and variance; replacing a (possibly) biased estimator with a high variance (but unbiased) estimator (because you are unable to estimate the parameters reliably from the data you actually have) does not necessarily give you a better estimate. Besides, energy balance estimates will be biased as well, if only because of missing variable bias, which has already been pointed out to you more than once (c.f. internal variability).

  72. -1,
    I guess you haven’t read the post and associated papers?

  73. -1=e^iπ says:

    I mentioned the 4.6 C estimate in my first comment. Why do you not think I read the post?

  74. BBD says:

    Funnily enough, I said exactly the same thing earlier.

  75. BBD says:

    2.92 C, gosh, canonical.

  76. -1=e^iπ says: “I mentioned the 4.6 C estimate in my first comment. Why do you not think I read the post?”

    This part of my post:

    [T]here are many different lines of evidence that support an equilibrium climate sensitivity around 3, with a likely range from around 2 to about 4.5. That the simple energy balance models might now suggest a best estimate of around 4.6°C does not really influence this overall assessment. It is just one line of evidence.

    The promotion of the cherry picked climate sensitivity of 2°C, or lower, was disingenuous. A similar promotion of a value of 4.6°C would be no better. (Someone promoting a climate sensitivity of 12.8°C deserves a place in statistical Purgatory.)

  77. -1=e^iπ says:

    You still claim that 4.6 C is the new best estimate and that the new best estimate using the energy balance approach is clearly larger than climate sensitivity from climate models. If one uses the recent estimates of Nic Lewis using the updated Aerosol estimates, this isn’t the case.

  78. -1,
    You really do need to read everything people write!

  79. BBD says:

    You still claim that 4.6 C is the new best estimate

    No, he didn’t.

  80. BBD says:

    3C is the best estimate. As you agree.

  81. Eli Rabett says:

    “For instance if we want to try and estimate the boiling point of water at 1 bar, then if we only ever collect data below 50 degrees C then we won’t have constrained the value of the parameter very much.”

    Mr. Clausius and Mr. Clapeyron beg to differ, but that is because they have many precise and accurate measurements and a proven model. Mr. -1, not so much.
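
    (A small illustration of the point: with a physical model such as Clausius-Clapeyron, even a vapour-pressure measurement taken well below 50 C extrapolates to roughly the right boiling point at 1 bar. The latent heat and the 25 C vapour pressure below are approximate textbook values, with the latent heat assumed constant.)

    ```python
    # Sketch: extrapolate the boiling point of water at ~1 bar from a single
    # low-temperature vapour-pressure value using Clausius-Clapeyron.
    import numpy as np

    R = 8.314        # J/mol/K
    L = 42.0e3       # J/mol, approximate latent heat of vaporisation
    T1 = 298.15      # K (25 C)
    P1 = 3.17e3      # Pa, approximate vapour pressure of water at 25 C
    P2 = 1.0e5       # Pa, ~1 bar

    # ln(P2/P1) = -(L/R) * (1/T2 - 1/T1), solved for T2
    inv_T2 = 1.0 / T1 - (R / L) * np.log(P2 / P1)
    print(f"estimated boiling point: {1.0 / inv_T2 - 273.15:.0f} C")   # close to 100 C
    ```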

  82. Ethan Allen says:

    2.3 News and Views
    News and Views articles inform readers about the latest advances in climate research, as reported in recently published papers (either in Nature Climate Change or elsewhere) or at scientific meetings. Most articles are commissioned, but proposals can be made to the editors in advance of publication of the paper or well before the meeting is held. News and Views articles are not peer-reviewed, but undergo editing in consultation with the author.
    http://www.nature.com/nclimate/authors/gta/content-type/index.html#toc-2.3

    ” … NOT peer-reviewed … ”

    I went looking for the provenance of Figure 1 …

    “Figure 1 | Probability distribution of climate response to forcings. a, Transient climate response estimated from observations [1] (black), and its revision following Richardson et al. [3] (blue) then following Marvel et al. [6] (green). b, As with a but for climate sensitivity, with an additional revision for climate sensitivity appearing smaller than its true value [7–11] (red). Histogram of climate model values shown in grey.”

    Didn’t find it in references 1-11. Go figure one, go. Go away. 😦

    Anyone drawing PDF cartoons ought to show their work for said PDF cartoons. IMHO

  83. Yes, News and Views articles are not peer reviewed; that is why I explicitly wrote that it was a News and Views article. These articles are written to put a new Nature article into a larger perspective and are written by scientists who are at the top of their speciality. The editors at Nature are as qualified as top scientists (but they prefer to have a normal job with decent labour conditions). If you do not know these things, you may want to wonder why you are in the business of telling scientists how to do their job.

    I would personally trust a News and Views article more than a peer reviewed article of an author I do not know.

    The numbers summarized in the figure come from the cited peer reviewed articles.

  84. Victor,
    Indeed, and I quite like the News and Views articles. Science does something similar. It is meant to put a recent paper into a broader context and I think that can be very useful.

  85. JCH says:

    I have to rely on intuition, hunches, and on figuring out whom to trust. Over at Climate Etc., I often write that the pause has made fools of a lot of very smart people. They readily believed in low climate sensitivity because there was a slowdown in warming. Because the AMO plateaued in the 21st century, they readily believed natural variability had caused up to half of the warming from 1975 to 2005. When I read Nic Lewis writing about Zhou and Tung’s AMO paper, I suspected L&C’s calculation of TCR and ECS could not be correct. IMO, if TCR is in the range of 1.5 ℃, then the cooling PDO and La Nina dominance in the Eastern Pacific in the period 1985 to 2014 would have dropped the global mean to 1970 levels. So my hunch is TCR is higher than the central estimate.

  86. Kyle Armour says:

    Hi Ethan,

    I wish I could have provided more detail about Figure 1, but there was simply no room given the strict News & Views word count. In any case, here’s what I did:
    1) I started with values for global temperature change, total system heat uptake, and radiative forcing taken directly from Otto et al (2013): the ‘2000’s’ values in their Table S1 (http://www.nature.com/ngeo/journal/v6/n6/extref/ngeo1836-s1-corrected.pdf). Using a Monte Carlo approach, these numbers give the black ‘observation-based’ PDFs, which match the published Otto et al TCR and ECS values.
    2) I next multiplied the global temperature change by 1.24, approximating the ~24% increase identified by Richardson et al, producing the blue ‘Richardson et al’ PDF. I could have included uncertainty in this number (9-40%, according to Richardson) but I wanted to keep things simple and conservative.
    3) I convolved the radiative forcing with the effective radiative forcing efficacy values (including uncertainties) reported by Marvel et al. These can be found in their ‘corrected Table S1’ at http://data.giss.nasa.gov/modelforce/Marvel_etal2016.html. This gives the green ‘Marvel et al’ PDFs. (I also checked that my Marvel calculation matches their reported TCR and ECS ranges when the Richardson revision is not applied.)
    4) I multiplied the ECS PDF by 1.25, approximating the ~25% that CMIP5 models (on average) suggest for how much ECS needs to be increased by to get an accurate estimate of the equilibrium value from transient observations — producing the red ‘time dependence’ PDF. There is a large intermodel spread for this revision that I neglected, which again would have made the PDF broader, with a higher central estimate.
    5) The CMIP5 ECS values for the histogram are taken from ref. 4, and TCR values from Forster et al (2013): Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models (which I see is missing from the reference list).

    From these steps, you should be able to reproduce all information shown in Figure 1. The figure was not intended to provide a new estimate of TCR and ECS (that will require further research, and consideration of this information in the context of other lines of evidence, e.g., paleoclimate and emergent constraints). Instead, I was hoping to (i) highlight the interesting new research being done on global energy budget estimates of TCR and ECS, and (ii) to point out that TCR and ECS are likely higher (potentially much higher) than reported in analyses such as Otto et al and Lewis and Curry that don’t take these factors into account.
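
    (For readers who want to play with this, a hedged sketch of the kind of Monte Carlo described in steps 1-4 is below. The input means and spreads are illustrative placeholders rather than the Otto et al. Table S1 values, and the efficacy is applied as a single multiplier on the total forcing, which is cruder than convolving the individual forcings as Kyle describes.)

    ```python
    # Sketch: sample warming, forcing and heat uptake from Gaussian stand-ins,
    # form energy-budget TCR and ECS, then apply the Richardson (~24% temperature),
    # Marvel (efficacy) and time-dependence (~25%) revisions.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000
    F_2x = 3.44                              # W/m^2 for doubled CO2 (placeholder)

    dT = rng.normal(0.75, 0.12, N)           # K, observed warming (illustrative)
    dF = rng.normal(1.95, 0.45, N)           # W/m^2, forcing change (illustrative)
    dQ = rng.normal(0.65, 0.17, N)           # W/m^2, heat uptake change (illustrative)

    TCR = F_2x * dT / dF
    ECS = F_2x * dT / (dF - dQ)

    dT_rev = 1.24 * dT                       # Richardson-style temperature revision
    eff = rng.normal(0.85, 0.15, N)          # illustrative forcing-efficacy spread
    ECS_rev = 1.25 * F_2x * dT_rev / (eff * dF - dQ)

    for name, x in [("TCR (energy budget)", TCR),
                    ("ECS (energy budget)", ECS),
                    ("ECS (all revisions)", ECS_rev)]:
        lo, med, hi = np.percentile(x, [5, 50, 95])
        print(f"{name}: {med:.1f} K (5-95%: {lo:.1f} to {hi:.1f})")
    ```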

  87. Kyle,
    Thanks for the comment and clarification.

  88. Ethan Allen says:

    Kyle Armour,

    First, thanks for the very timely reply.

    Second, you lost me right at the start with “a Monte Carlo approach”. Table S1 gives the mean and 5-95% confidence interval; one then needs to select PDF distro(s) and randomly sample from said distro(s). Given a large enough sample size, the MC-derived distro (TCR or ECS) should collapse to an underlying theoretical distro, shouldn’t it?

    So, for example, Marvel Table S3 shows negative values on the LHS and extremely high values on the RHS.

    At the very least this shouts for a one-sided PDF distro, say a one parameter Rayleigh distro or a two parameter Rice distro.

    I’ve been doing PDF/CDF distros for over 40 years now (hydrology, statistics and ocean waves (wind and long period harbor resonance/moored ship motion)), all at least at the graduate level.

    I see this all as a simple linear transformation, alpha for the y-axis and 1/alpha for the x-axis; it’s still a unit hydrograph. No real need to get any fancier than that (including the ‘squiggly’ fat tail; IMHO it shouldn’t be ‘wavy’ as it descends towards the zero asymptote).

    Everything after the initial unit hydrograph is rather very straight forward (including the application of the Monte Carlo method).

  89. -1=e^iπ says:

    “I multiplied the ECS PDF by 1.25, approximating the ~25% that CMIP5 models (on average) suggest for how much ECS needs to be increased by”

    Could you please elaborate on where you are getting this 25% value?
    In your article you reference 5 sources (7-11) so I looked through them.

    #7 suggests a mean ocean heat uptake efficiency of 1.34.
    #8 suggests approximately the same ocean heat uptake efficiency according to figure 7.

    According to Lewis and Curry, the change in radiative forcing and ocean heat uptake over the longest period considered were 1.98 W/m^2 and 0.36 W/m^2 respectively. If you apply the updated Bjorn Stevens aerosol estimate, 1.98 becomes 2.38. Increasing ocean heat uptake efficiency from 1 to 1.34 only increases sensitivity by 6.4%, not 25%.

    #9 Involves aqua planets, so I don’t think it can be used for ‘apples to apples’ comparisons. Not to mention I don’t see anything resembling this 25% value you use.

    #10 I see it comparing effective climate sensitivity measured from the first 20 years to the next 130 years. But the main estimates obtained by Lewis and Curry cover over 150 years. So the ratios obtained from this paper can’t really be applied to the main energy budget results, otherwise it is not an ‘apples to apples’ comparison.

    #11 estimates alpha from a linear regression of N vs T (figure 2). But performing a linear regression is not the same thing as comparing the first and last periods (which is what the energy budget approach does), so it isn’t an apples-to-apples comparison. If I have an increasing and accelerating time series (like temperature over the instrumental period), the slope of the line of best fit will be smaller than the final temperature minus the initial temperature divided by the length of the period. This means that the effective sensitivities calculated in this paper should be smaller than those calculated using an energy balance approach over the instrumental period, so using this paper’s ratios of effective to equilibrium climate sensitivity will overstate equilibrium climate sensitivity when they are applied to energy budget effective climate sensitivities.
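
    A quick numerical check of that point, using a made-up series that accelerates roughly exponentially (how large the gap is depends on how strongly the series accelerates):

    ```python
    import numpy as np

    # Made-up series whose growth rate increases with time, loosely like the
    # instrumental record under accelerating forcing.
    t = np.arange(150.0)
    T = 0.9 * (np.exp(t / 60.0) - 1) / (np.exp(149.0 / 60.0) - 1)   # 0 to 0.9 deg C

    ols_slope = np.polyfit(t, T, 1)[0]          # least-squares trend (deg C / yr)
    secant = (T[-1] - T[0]) / (t[-1] - t[0])    # endpoint-difference estimate

    print(ols_slope, secant)                    # the regression slope is smaller
    ```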

    Furthermore, the ECS values used by the two models in #11 are 3.1 and 4.0 C respectively. As I mentioned earlier, as climate sensitivity increases, the ratios of ECS/TCR and ECS/effective climate sensitivity should both increase. So if these two models overestimate sensitivity, then their ratio of ECS to effective climate sensitivity will also be biased upward.

    Maybe I overlooked something, but I don’t see how your references justify increasing the energy balance results by 25%. I see justification for 6.4% though.

    Using 6.4% instead of 25% gives an ECS of 2.49 C when applied to Lewis and Curry with the Bjorn Stevens aerosol estimate.

  90. -1=e^iπ says:

    Okay, to be fair, the Lewis and Curry results were updated here: https://judithcurry.com/2016/04/25/updated-climate-sensitivity-estimates/

    So using the more up-to-date data, an ocean heat uptake efficiency of 1.34 increases ECS by 10.0%; if I replace the 25% with 10% while applying the other modifiers, the best estimate becomes 2.73 C.

  91. -1=e^iπ says:

    Okay, I’m a bit confused by something, so maybe someone can help me.

    At Gavin Schmidt’s blog (http://www.realclimate.org/index.php/archives/2016/01/marvel-et-al-2015-part-1-reconciling-estimates-of-climate-sensitivity/) someone asked the following question about Marvel et al.:

    “For the ERF result the low net historical efficacy result appears to be dominated by low GHG efficacy. Would historical GHG have a substantially different spatial structure than CO2 alone, or are there other factors affecting efficacy here?” – Paul S.

    How does one get a GHG forcing efficiency of 0.85 when CO2 makes up 74% of the GHG radiative forcing change? Things like CH4 and N2O are well mixed, so the distribution of their radiative forcing is similar to that of CO2; shouldn’t the forcing efficiency then be roughly 1?

    Different forcing efficiencies for things like ozone, land use, solar, etc. make sense, since the geographic distribution of the forcing is different. But what is the physical reason for the lower GHG forcing efficiency? How do CH4 and N2O end up with ~40% efficiency despite having a geographic distribution similar to CO2’s?
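
    A back-of-envelope reading of where the ~40% comes from, assuming CO2 itself is assigned an efficacy of 1 and taking the 74% and 0.85 figures quoted above at face value:

    ```python
    # If the aggregate GHG efficacy is 0.85, CO2 supplies 74% of the GHG forcing,
    # and CO2 has an efficacy of 1 relative to itself, the remaining well-mixed
    # GHGs must make up the difference in a simple weighted average.
    aggregate_efficacy = 0.85
    co2_share = 0.74

    other_ghg_efficacy = (aggregate_efficacy - co2_share * 1.0) / (1.0 - co2_share)
    print(other_ghg_efficacy)   # ~0.42
    ```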

  92. BBD says:

    the best estimate becomes 2.73 C.

    GISS ModelE, almost exactly, IIRC.

  93. -1=e^iπ says:

    From Marvel et al. page 388:
    “The evolving pattern of temperature change may be incorporated into a global mean framework as an ocean heat uptake efficiency. Our methodology does not differentiate between these two physical mechanisms and we note that a substantial portion of what we call forcing efficiency may be due to differences between the ocean heat uptake induced by CO2 forcing and the heat uptake induced by the forcing in question.”

    Does this mean that the impact of ocean heat uptake efficiency is included in Marvel et al.? This would explain the non-unity GHG forcing efficiency. If this is the case, then using both the Marvel et al. forcing efficiencies and adjusting ocean heat uptake efficiency by 1.34 would be double counting. So either one should apply just the Marvel et al. forcing efficiencies and not adjust ocean heat uptake efficiency, or use the Marvel et al. forcing efficiencies but divide by 0.85 to get a unity GHG forcing efficiency and then apply a non-unity ocean heat uptake efficiency.

    If I do the first option then that earlier ECS calculation I did becomes 2.48 C. If I do the second option I get an even lower value.

    Would be nice if Kyle Armour could explain where the 25% adjustment is coming from.

  94. BBD says:

    Well, you’ve done something wrong again, no doubt.

    ECS remains about 3C, even according to models such as GISS Model E, despite your repeated claim that models are over-sensitive.

  95. -1=e^iπ says:

    @ BBD – If there is a flaw, then you should be able to find it. Where is the non-unity GHG forcing efficiency coming from, and if it is coming from ocean heat uptake efficiency, then is taking ocean heat uptake efficiency after applying the Marvel et al. forcing efficiencies double counting? Where is the 25% adjustment coming from?

    Non-unity GHG forcing efficiency raises a lot of bizarre questions. If CO2 is 15% less efficient than itself, does that mean we should pretend we are at 382 ppm instead of 400 ppm? For CO2e, should we multiply the difference from pre-industrial levels by 0.85, so that instead of 510 ppm CO2e we are at 475 ppm CO2e?
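
    For what it’s worth, those numbers look like the 0.85 applied linearly to the concentration increase above a 280 ppm pre-industrial baseline (just a reading of the arithmetic, not anything taken from Marvel et al.):

    ```python
    # Apply the 0.85 efficacy linearly to the increase over pre-industrial.
    preindustrial = 280.0   # ppm

    effective_co2  = preindustrial + 0.85 * (400.0 - preindustrial)   # ~382 ppm
    effective_co2e = preindustrial + 0.85 * (510.0 - preindustrial)   # ~475 ppm

    print(effective_co2, effective_co2e)
    ```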

  96. Dikran Marsupial says:

    -1 “If there is a flaw, then you should be able to find it. ”

    If you responded more constructively when flaws in your argument were pointed out, this challenge might have had some value.

  97. -1=e^iπ says:

    @ Dikran – My intent is to understand the implications and validity of Kyle Armour’s results. When communicating online through text, sometimes people may misinterpret the intent or tone of someone’s posts.

    From figure 1 of Armour’s article, it appears he is starting with the Otto et al. results that use 1970-2009 data. Since this is a relatively short interval of time, he justifies the 25% increase in sensitivity using reference #10 (which compares effective climate sensitivities over 20-year periods to those over 150-year periods), which is somewhat applicable. The main problem with this (other than that a 40-year period is twice as long as a 20-year period) is that natural variability is being ignored. Over the interval 1970-2009 the AMO and ENSO caused some warming, so the Otto et al. results that use 1970-2009 data are biased high due to natural variability.

    I’m not sure why it makes sense to use the Otto et al. results over the updated Lewis and Curry results. The main Lewis and Curry results cover a longer period of time (less bias from a short interval), compare start and end periods with similar levels of natural variability, and use more up-to-date forcing data.

  98. Kyle Armour says:

    -1,

    The ~25% was intended as a representative estimate, based on my assessment of our current state of knowledge — not as a conclusive quantification of the effect, which will take more work. But I’m happy to share my thoughts on why this value seems reasonable.

    Ref. 11 suggests that ECS may be over twice (!) the magnitude of effective climate sensitivity (from a mean of 19 CMIP5 models); while it’s nice to see how robust this effect is, it’s unclear how their regression-based analysis relates to the global energy budget approach, so I didn’t include their results in my estimate. Instead, I focused on the simulations in refs. 7, 8 and 10, which suggest that ECS is on average ~60% higher than the effective climate sensitivity calculated in the initial decades after an abrupt CO2 increase (CMIP5-mean). But this value isn’t quite what we need, since the observed warming has been driven by a comparatively slow increase in forcing. Convolving the CMIP5-mean 4xCO2 responses with a realistic forcing time series, or using the 1%/yr CO2 ramping simulations instead, brings this number down to ~35% for the CMIP5-mean (see [this], work hopefully to appear soon). Finally, as you noted, the effect is smaller for low sensitivity and larger for high sensitivity; I used the value appropriate for Otto updated by Richardson (a revision of ~25%, as implied by those models with an effective climate sensitivity around 2.5C). Those models with an effective sensitivity near that of Otto updated with Richardson+Marvel imply a ~45% revision, but I went with the lower value to keep things simple and conservative.

    For reference, the CMIP5 models with effective sensitivity near 2C (that is, near the updated Lewis and Curry estimate, revised by the Richardson result) imply a ~15% correction. Note that you can’t simply multiply the ocean heat uptake by a given efficacy factor, since ocean heat uptake efficacy is itself time-dependent. I suspect this is where you’ve gone wrong in your estimate.

    As I noted above, I don’t intend my calculations to be a definitive update to energy budget estimates of TCR and ECS. There are good questions yet to be answered, e.g., how have our observational estimates been biased by internal variability? And how independent are the Marvel results from the ‘time-dependent sensitivity’ findings? (I suspect they’re related, but I’m not sure to what degree.) But I hope my commentary has stimulated discussion around these interesting lines of research, and brought broader recognition of the fact that TCR and ECS are likely higher than suggested by traditional energy budget calculations.

  99. -1=e^iπ says:

    @ Kyle –

    Thanks for the reply and link. 15% for an effective climate sensitivity of 2 C seems reasonable, as does 25% for an effective climate sensitivity of 2.5 C. The 70-year TCR simulation is shorter than the 150-year instrumental period, but given that most of the forcing change occurs later in the instrumental period, it’s probably a decent comparison.

    Maybe a more conclusive way to get the ratio of effective climate sensitivity to equilibrium climate sensitivity is to start with forcings held constant at pre-industrial levels, then apply the historical forcing data, and then, once 2016 is reached, hold all forcings constant and wait for the model to reach equilibrium. From this one could estimate both the equilibrium climate sensitivity (from the final temperature change) and the effective climate sensitivity (estimated over the 150-year historical period). But unfortunately we don’t have those runs, so using the TCR runs is good enough for now.
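
    As a crude illustration of what such an experiment would show, here is a toy two-layer energy balance model with a heat-uptake efficacy greater than one, run under a ramp-then-constant forcing; all parameter values are made up, and this is of course far simpler than the GCM runs in question:

    ```python
    import numpy as np

    # Toy two-layer energy balance model with deep-ocean heat-uptake efficacy
    # eps > 1. Forcing ramps up for 150 "years" and is then held constant.
    # All parameter values are illustrative, not tuned to any GCM.
    F2x, lam, eps = 3.7, 1.3, 1.3      # W/m^2, feedback (W/m^2/K), efficacy
    C, Cd, gamma = 8.0, 100.0, 0.7     # heat capacities (W yr m^-2 K^-1), coupling

    dt, years = 0.1, 3000
    n = int(years / dt)
    t = np.arange(n) * dt
    F = np.where(t < 150, 2.4 * t / 150.0, 2.4)

    T = np.zeros(n)    # surface-layer temperature anomaly
    Td = np.zeros(n)   # deep-ocean temperature anomaly
    for i in range(n - 1):
        surface = F[i] - lam * T[i] - eps * gamma * (T[i] - Td[i])
        T[i + 1] = T[i] + dt * surface / C
        Td[i + 1] = Td[i] + dt * gamma * (T[i] - Td[i]) / Cd

    # Effective sensitivity diagnosed from the energy budget at the end of the
    # ramp; the TOA imbalance is N = F - lam*T - (eps - 1)*gamma*(T - Td).
    i_end = int(150 / dt) - 1
    N = F[i_end] - lam * T[i_end] - (eps - 1) * gamma * (T[i_end] - Td[i_end])
    S_eff = F2x * T[i_end] / (F[i_end] - N)

    # Sensitivity implied by the near-equilibrium warming long after the
    # forcing is held fixed, versus the model's true ECS of F2x/lam.
    S_eq = F2x * T[-1] / F[-1]

    print(S_eff, S_eq, F2x / lam)   # S_eff < S_eq ~ ECS whenever eps > 1
    ```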

    Do you have any thoughts on the use of a non-unity GHG efficiency from Marvel et al.? Given that well-mixed GHGs have similar geographic forcing distributions, what is the physical reason for the non-unity efficiency? Is it due to ocean heat uptake efficiency?

  100. Dikran Marsupial says:

    -1 “My intent is to understand the implications and validity of Kyle Armour’s results. ”

    That is irrelevant; the point is that you need to take criticism of your position seriously if you are going to ask others to point out flaws for you. Otherwise, how are we to tell when you are arguing rhetorically and when you are actually engaging in scientific discussion?

    “When communicating online through text, sometimes people may misinterpret the intent or tone of someone’s posts.”

    Indeed, which is why consistency is important. If you evade criticism as you did with mine, then you give a message about your intent that will affect the perception of your later comments.
