The time evolution of climate sensitivity

I wanted to just post this figure from a new paper by Philip Goodwin called On the time evolution of climate sensitivity and future warming. It uses a modified energy balance approach in which multiple climate feedbacks evolve independently over time in response to multiple sources of radiative forcing. As many other studies have shown, the ECS you infer depends on the timescale considered. This paper suggests that on decadal timescales the inferred ECS will be biased low, while on century timescales the ECS has a likely (66%) range running from just above 2K to around 3.5K, with a best estimate of about 3K.

Credit: Goodwin (2018)

228 Responses to The time evolution of climate sensitivity

  1. John Hartz says:

    ATTP: Out of curiosity, how many articles have you posted on this site with the term “climate sensitivity” in the title? IMHO, discussing climate sensitivity is akin to telling the “Never Ending Story”.

  2. One question that arises is what time frame we should use as the standard to discuss ECS. The science and calculations on ECS keep getting better and better, so I think it makes sense to estimate and discuss ECS based on a time frame of 100 years and to include the ECS estimates for 20 and 50 years. I don’t know if the general public will understand the issue of feedbacks and forcings, so I think the basic discussion should center around the 100 year mark with “footnotes” regarding the 20 and 50 year estimates. JH is right that ECS is a never ending story, but that is because it is so complicated and important. We have to keep discussing it because the ECS number continues to change depending on time frames, as noted here, and on the state and sophistication of the science used to generate an ECS estimate. I think we are starting to get pretty sophisticated when the discussion includes anticipated feedbacks and forcings with notation regarding how the estimated ECS number will change over a meaningful time frame when the feedbacks and forcings are factored in.
    I read in the Guardian that a lot of trumpster folks in the Carolinas are wading around in flooded neighborhoods and clinging to their belief that their situation and the recent large, slow-moving, big-rainfall hurricane are just normal weather fluctuations. Too bad that stupidity is not immediately painful. You know, hot stove learning?
    https://www.theguardian.com/world/2018/sep/19/hurricane-florence-climate-change-deniers-north-carolina?CMP=share_btn_fb

  3. angech says:

    A lot to consider.
    Why should one consider an ECS value going out to say 100,000 years?
    Yes, ATTP has gone over this ground very well.
    If one must consider this possibility why do we bother with shorter estimates?
    Here is the rub.
    Whatever test or measure you use is designed to give a true value at what would be considered to be final conditions.
    By definition.
    ECS is defined to be the final result for the doubling of CO2 in the system.
    Whatever the length of time concerned.

    In regard to this study, if I am allowed, I would like to make a series of comments about methods, method interpretations and outcomes. I hope that they will be taken on board in a constructive way.

  4. angech says:

    For all the compound interest people out there.
    “(1) the time-step is reduced from 1/12th of a year to 1/48th of a year (Appendix A)”
    This unfortunately creates an extra 0.1–0.3 C increase in the ECS as a purely mathematical/statistical occurrence.
    Changing the parameter values every week instead of every month (in essence) puts an artificial multiplier into the system.
    The final result will always tend to a slightly higher ECS.
    True, this should be and probably is reflected in the uncertainty range.
    Does anyone else feel this is worthy of discussion?

  5. John Hartz says:

    smallbluemike: My reaction to the Guardian article about what the folks in Fayetteville, NC had to say about man-made climate change:

    More examples of people who have been brainwashed over the years by the likes of the fossil fuel industry, Fox News and Rush Limbaugh. It really saddens me to read what they have to say.

    There is absolutely no need to label them “stupid.”

  6. Steven Mosher says:

    I will keep reminding skeptics that their best arguments are in the area of sensitivity
    and that they should, like Lewis, actually do science rather than merely doubting.
    And I will remind folks that challenges they might reject as faulty, often do lead to better
    understanding.

  7. Michael Hauber says:

    So what if we measure sensitivity over 10^3 years? Does that go up another couple of tenths of a degree? Obviously, at some n, sensitivity at 10^n years has to stop going up though.

  8. angech,

    “(1) the time-step is reduced from 1/12th of a year to 1/48th of a year (Appendix A)”
    This unfortunately creates an extra 0.1–0.3 C increase in the ECS as a purely mathematical/statistical occurrence.
    Changing the parameter values every week instead of every month (in essence) puts an artificial multiplier into the system.

    What are you talking about? Reducing the timestep does not introduce an artificial multiplier. The time step you can use normally depends on the fastest processes in your simulation. If some things change very rapidly, you need a time step small enough to capture these changes.
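
    To illustrate (a minimal Python sketch; the decay rate k and the step sizes are arbitrary illustrative values, not from any model discussed here): explicit Euler applied to a fast decay, dy/dt = -ky, is only stable for dt < 2/k, so too large a timestep doesn’t amplify the answer, it destroys it.

        import numpy as np

        # Illustrative sketch: explicit Euler on a fast decay dy/dt = -k*y.
        # Stability requires dt < 2/k (= 0.2 here); accuracy requires less.
        k, t_end = 10.0, 0.5

        for dt in [0.25, 0.05, 0.01, 0.001]:
            y = 1.0
            for _ in range(int(round(t_end / dt))):
                y += -k * y * dt  # one explicit Euler step
            print(f"dt={dt:5.3f}: y({t_end}) = {y: .6f}  (true {np.exp(-k * t_end):.6f})")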

  9. izen says:

    @-smallbluemike
    “One question that arises is what time frame should we use as the standard to discuss ECS.”

    In practice I suspect that many people discount any time frame that exceeds their current life expectancy.
    Or at best, that of their children.

  10. small,

    One question that arises is what time frame should we use as the standard to discuss ECS.

    Technically, the ECS is the temperature change when equilibrium has been re-established, so it’s time-frame independent. What I guess you’re getting at is what is the timeframe over which we should be considering the forced temperature change?

    A key point, though, that I think is quite hard to incorporate into these studies is that in the absence of negative emissions, the long-term temperature response is likely to be similar to the transient response at the time at which we get emissions to ~ zero. So, in my view, the relevant timescale is how long is it going to take for us to get emissions to zero and how much are we likely to emit before we do so.

  11. Chubbs says:

    Nice paper. Main benefit is building on other recent papers to continue the process of resolving differences between climate models and the energy balance method (EBM). It is clear now that the main sources of discrepancy are: 1) improper matching of model and observations, i.e. differences due to blended surface temperatures using SST, data-sparse regions, and sea ice; and 2) measuring different response timescales, with EBM only providing the short-term response.

    The timescale to use depends on the question. For transient natural forcings like volcanoes, ENSO, and the 11-year solar cycle, short timescales are appropriate. For GHGs, which accumulate slowly and persist for centuries, longer timescales are appropriate. If you want to look at timing effects this century from the combined impact of natural and man-made forcing, then climate models are the best way to go.

    Finally per Isaac Held, equilibration is a slow process with model-estimated climate sensitivity still increasing very slowly after 600 years.

    https://www.gfdl.noaa.gov/blog_held/time-dependent-climate-sensitivity/

  12. BBD says:

    A lot to consider.
    Why should one consider an ECS value going out to say 100,000 years?

    Because that would be Earth System Sensitivity (ESS), including slow feedbacks like ice sheet albedo, not ECS, which is the fast feedbacks sensitivity…

  13. BBD says:

    And I will remind folks that challenges they might reject as faulty, often do lead to better understanding.

    The thing is, although Goodwin’s study avoids the shortcomings of Lewis’ work, it only adds to the existing stack of evidence pointing at an ECS of ~3C. Lewis only managed to muddy the waters, not clear them.

  14. Steven Mosher says:

    “The thing is, although Goodwin’s study avoids the shortcomings of Lewis’ work, it only adds to the existing stack of evidence pointing at an ECS of ~3C. Lewis only managed to muddy the waters, not clear them.”

    you still don’t get it. that’s ok. I run into the same stupidity at WUWT.

  15. Steven Mosher says:

    “Nice paper. Main benefit is building on other recent papers to continue the process of resolving differences between climate models and the energy balance method (EBM). ”

    As long as the EBM approach gave the same answer, no one was curious enough to explore certain questions.

  16. BBD says:

    you still don’t get it. that’s ok. I run into the same stupidity at WUWT.

    When you learn how to make your point, I will be happy to give it further consideration.

  17. I think Steven’s point is that even studies that are wrong, or not right, can be useful. This is true even if there is reason to suspect that those studies are motivated by some kind of ideological bias. Of course, one could argue that it might be better if such studies were not promoted for ideological reasons, but it still doesn’t change that we can still learn from them.

  18. BBD says:

    it still doesn’t change that we can still learn from them.

    What did we learn from NL’s stuff? That ECS is probably ~3C? We knew that already. Sure, the detail as to why EBM-based estimates tend to be low has come out in the wash, but this is where I differ with Steven. Elucidating minor detail at the expense of major fake controversy and induced confusion in the public discourse is too high a price to pay, IMO. And it is a deliberate ‘sceptic’ tactic – as Steven will recall from his lengthy time spent trying to cast doubt on the validity of the Mannean Hockey Stick – and by extension, the integrity of climate scientists in general.

  19. BBD,
    I agree, but I still think that we can differentiate between what we learn from some studies and how they’re promoted by some. I think I’ve probably said this before, but I think in most fields if you did a fairly simple analysis and got a result that was broadly consistent with more complicated/detailed analyses it would probably provide some confidence in those other analyses. It’s only in this area (where there is controversy) that a slight difference between simple analyses and more detailed analyses is used to sow doubt about the more detailed analyses.

  20. BBD says:

    To be clear, I understand what you are saying, but I suggest that we learned much more from the work of Marvel, Richardson, Dessler, Armour, Cox, Goodwin and many others than we did from Lewis and Curry.

  21. The above implies acceleration of temperature trends.
    This does not appear to be borne out by observations so far, as thirty-year global mean surface temperature trends since 1990 have been fairly constant, within the range of 1.5 to 1.8 C/century.

  22. TE,
    And yet their model largely matches the observed warming. Maybe that’s because it’s only been decades, rather than centuries.

  23. Joshua says:

    BBD –

    You may well be right, at least in many cases, that the negative outcomes of some researchers outweigh the good, but you have no control over whether such activity takes place.

    The problem is made worse when someone applies double-standards when outlining the problematic mechanisms connecting individual scientists with public views on climate change…

    But then again, complaining about it can be like an old man yelling at clouds.

  24. angech says:

    “What are you talking about? Reducing the timestep does not introduce an artificial multiplier”

    It is like compound interest.
    Say you take an interest rate of 2% per annum and instead pay it quarterly, monthly or daily.
    Because the amount [principal] is increased sooner this way, the amount you actually receive at the end of the year is larger by a small but significant amount.
    By recalculating the temperature increases on a shorter time step, the ECS likely to be reached must be higher than that calculated on resets at a longer time step.
    Perhaps one of our quants can confirm this?

    “The Importance of Compounding Frequency and Number of Periods
    There are two key factors that can impact the amount of interest you receive on a compounding investment: the compounding frequency, and the number of periods the money is invested.
    Compounding frequency refers to the number of times interest is calculated on a loan or an investment in a given year (or other unit of time). For example, an investment could be compounded any number of times per year: annually, monthly, weekly, daily or even constantly. The more frequently your money is compounded, the more you will earn. “
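
    For concreteness, the standard result being quoted is the compounding limit (stated here for illustration): a principal P at nominal rate r, compounded n times per year for t years, grows to

    A = P (1 + r/n)^{nt} \rightarrow P e^{rt} \quad (n \rightarrow \infty),

    so more frequent compounding does raise the payout, but only up to the continuous limit e^{rt}, not without bound.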

  25. attp says: “It’s only in this area (where there is controversy) that a slight difference between simple analyses and more detailed analyses is used to sow doubt about the more detailed analyses.”

    I think this is where ideology takes over from science. It is also one way that ideology takes over and starts spinning the significance of the science. I think almost everyone processes information with some impact from their ideology, but things start to get out of control when ideology gets the upper hand. I think it doesn’t make much sense to work with folks who are more driven by their ideology than the science. And it really doesn’t matter which direction their ideology sends them, the science is just a pretext for ideological arguments.

    I think in these instances, it makes sense to simply observe that science has become a sideshow or casualty of ideology and end the discussion there. Continued wrangling is useless. It’s mud wrestling with pigs: you both get dirty and the pigs enjoy it. A person needs a shorthand means of calling out the other as someone who acts in bad faith with regard to the science and let it go at that. Use the shorthand phrase as needed so that others understand why you won’t take the bait offered by bad faith actors and otherwise ignore the ideologues.

    All of this exists on a spectrum and all of us can learn and change. Constant argument hardens positions and slows learning and change.

  26. angech,

    “What are you talking about? Reducing the timestep does not introduce an artificial multiplier”

    It is like compound interest.
    Say you take an interest rate of 2% per annum and instead pay it quarterly, monthly or daily.

    No, it isn’t. If you’re solving differential equations, there is a timestep condition (the Courant-Friedrichs-Lewy condition) which is essentially that the time step must be small enough that nothing propagates more than one grid spacing per timestep. If the timestep is larger than this, then your simulation will not be accurate. You can, however, make it smaller than this without significantly influencing the result. However, this can be inefficient, so typically one sets the CFL number to about 0.5. Setting a smaller timestep is *not* equivalent to changing the period over which you do a compound interest calculation.
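
    To make this concrete, here is a minimal sketch (illustrative rate and step sizes; this is not code from the paper): Euler integration of dP/dt = rP with monthly, weekly and daily steps converges on the continuous solution P e^{rt}, and going from 1/12th to 1/48th of a year changes the answer by a tiny, shrinking amount: convergence, not an open-ended multiplier.

        import numpy as np

        # Illustrative sketch: Euler integration of dP/dt = r*P over one year.
        # Shrinking the timestep changes the result by a rapidly vanishing
        # amount as it converges on the continuous solution P*exp(r*t).
        r, t_end = 0.02, 1.0

        for steps_per_year in [12, 48, 365, 10000]:
            dt = 1.0 / steps_per_year
            P = 1.0
            for _ in range(int(round(t_end * steps_per_year))):
                P += r * P * dt  # one Euler step
            print(f"dt = 1/{steps_per_year:5d} yr: P = {P:.8f}")

        print(f"continuous limit:      P = {np.exp(r * t_end):.8f}")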

  27. Joshua says:

    … yelling at clouds…

  28. angech says:

    This is one of two interesting points enabling the authors to favor a higher ECS for a longer time period.
    The more extra effects one can bring into play, the higher the sensitivity must be.
    So allowing people to make adjustments for long term effects that are claimed to be overlooked in studies working from short term effects has to cause a higher ECS result.
    A no brainer.
    But the real issue this then brings to the table is did the people doing the shorter studies actually miss the bus?
    The answer to this riddle by the people doing the shorter studies, not just NL but a couple of recent article writers here, would be a resounding no.
    One should really consider this paradox if wishing to come to a reasonable set of ways to discuss ECS.

  29. We all need to be careful not to distort the science or inflate the significance of slightly different results to grind an ideological axe. If a person does not feel the need to be careful about that, then they are a bad faith actor.

  30. angech,

    The more extra effects one can bring into play, the higher the sensitivity must be.

    No, this is not true. Before we move on to this, though, can we resolve your confusion about timestepping?

  31. angech says:

    “If you’re solving differential equations, there is a timestep condition (the Courant-Friedrichs-Lewy condition) which is essentially that the time step must be small enough that nothing propagates more than one grid spacing per timestep. If the timestep is larger than this, then your simulation will not be accurate.”

    “[In] the general case radiative forcing from each agent does not increase via a step function, but instead by pathways that can increase or decrease over time. This is achieved in WASP by using two time stepping equations: one equation adjusting the climate feedbacks to the existing radiative forcing to the source at the previous time step, and a second equation adjusting the climate feedback to the additional radiative forcing from the source since the previous time step. Other alterations to the WASP model include: (1) the time step is reduced from 1/12th of a year to 1/48th of a year.”

    I do not understand. The paper talks about reducing the time step for recalculating the radiative feedback by a factor of 4. Nowhere is there a mention of what an appropriate time spacing should be in terms of fitting to a grid spacing. They have just chosen simple sample time frames on which to do their adjustments: weekly instead of monthly, presumably because they have the tools to do so. Choosing a shorter time frame means a compounding increase in the ECS that would be calculated.
    Andrew Dessler, where are you?

  32. angech says:

    “No, this is not true. Before we move on to this, though, can we resolve your confusion about timestepping?”
    Cross posting. Your maths level is higher than mine.
    Nonetheless, I agree that the point as to whether adjusting your indices every week instead of every 4 weeks should cause no increase in the ECS, as you assert, is important.
    I would be happy if others could make this clearer as the compounding effect seems possible to me.

  33. angech says:

    The rate of increase in any compounded amount eventually reaches a limit
    ” Bernoulli discerned that this sequence eventually snowballed towards a limit, thus defined as e which describes the relationship between the plateau and the interest rate when compounding.”
    “Going from 730 periods to an infinite number of periods is only a 0.01% increase in interest, hardly noticeable.”
    Similarly, for an ECS which is being approached in a similar way, there is an end point or final ECS.
    Unlike money there are other alterations which prevent the final figure ever being truly evaluated.
    What is true though is that the extra temperature increases at a thousand, 10,000 and a million years are all very small and close to each other and just should not be that important.
    I see you wish to say that “The more extra effects one can bring into play, the higher the sensitivity must be.”” is not true.
    I would agree with you that this should, paradoxically, be the case.
    Yet the very paper you quote says that adding in these extra factors gives a higher sensitivity?
    I guess you mean that it merely gives the true sensitivity.
    Yet that is exactly what AD and NL do with their ideas.
    And they should all match exactly, barring input choice differences, unless they are the ones that have it wrong. A 20-year study or a 200-year study should give the same answer.

  34. dikranmarsupial says:

    “I would be happy if others could make this clearer as the compounding effect seems possible to me.”

    perhaps since “Your maths level is higher than mine.” you should be asking whether it is possible rather than asserting (as a fact) that it does? Of course it has been pointed out to you repeatedly that this behaviour is not acceptable in a scientific discussion (as it is BS – making arguments without apparently caring whether they are valid – and this is yet another occasion where it wasn’t).

  35. angech,
    The point is that if you’re solving a set of differential equations, reducing the timestep is not some kind of amplifier.

    Consider the following differential equations

    \dfrac{dx}{dt} = v,
    and
    \dfrac{dv}{dt} = a.

    where a is a constant. If the initial conditions are x = 0, v = 0, then I can solve the above equations using some kind of integrator, but essentially I’d be doing

    v_{i} = v_{i-1} + a dt,
    and
    x_{i} = x_{i-1} + v_{i} dt.

    (To be clear, you’d normally use a higher order scheme). As long as dt is small enough to capture the evolution, it should converge. In fact, I’ve written a little code to demonstrate this. It’s not quite equivalent to solving on a grid (since we’re simply solving for x) but it shows that as you make dt smaller, you start to see the result converging.
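
    A minimal sketch of that kind of demonstration (illustrative only; the actual code is not reproduced here):

        # Integrate dx/dt = v, dv/dt = a with simple Euler steps and compare
        # x at t = 1 against the analytic solution x = a*t^2/2 as dt shrinks.
        a, t_end = 1.0, 1.0

        for dt in [0.5, 0.1, 0.01, 0.001]:
            x = v = 0.0
            for _ in range(int(round(t_end / dt))):
                v += a * dt  # v_i = v_{i-1} + a*dt
                x += v * dt  # x_i = x_{i-1} + v_i*dt
            print(f"dt={dt:6.3f}: x = {x:.6f}  (analytic {0.5 * a * t_end**2:.6f})")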

  36. dikranmarsupial says:

    Of course, it could be that climatologists are mathematically inept and don’t understand how to solve D.E.s… ;o)

  37. John Hartz says:

    My two cents: Climate Sensitivity is a mathematical index created by humans to compare the outputs of various GCM runs using alternative sets of inputs. Climate sensitivity does not exist in nature in the same way that, say, gravity does. Climate sensitivity is a misnomer because the Earth’s total climate system consists of much more than the lower troposphere.

    Discussing Climate Sensitivity ad nauseam is akin to doing what the band did on the Titanic.

    PS – The Average Annual Mean Global Surface Temperature does not exist in nature. It is a mathematical construct created by humans.

  38. Joshua says:

    angech –

    So allowing people to make adjustments for long term effects that are claimed to be overlooked in studies working from short term effects has to cause a higher ECS result.

    Dude, that’s classic. I love the “allowing people to make adjustments.” How is that differentiated from “paying attention to people who are examining analytical improvements and adding greater precision in their analyses?”

    How is “claimed to be overlooked” differentiated from “accounting for factors not previously included.”

    How is “adjustments for long term effects” compared to “studies working from short term effects” differentiated from “working with a larger sample size?”

  39. dikranmarsupial says:

    JH if you want to consider more physics, then there are other metrics, e.g. ESS.

    We can only evaluate the ECS of a model, but that doesn’t mean that it is just a metric for comparing models and nothing more. The real Earth has an ECS, it is just that it is not feasible to set up the experiment to measure it directly. That is why ECS does give a useful single value that tells us something about the long term effects of climate change; it is just that we have to use some judgement to compensate for the differences between the experiment that would give a direct estimate and the grand climate experiment that we are actually performing.

    I am very happy to see discussion of ECS as it is one of the areas where there is genuine uncertainty, and where skeptics are most likely to have a point. Certainly much better than discussing the second law of thermodynamics or whether the rise in CO2 is natural!

  40. dikranmarsupial says:

    Actually GMST does exist, it is just that we can’t measure it, only estimate it.

  41. thanks, yes, I thought about the idea of equilibrium as the right time frame. But it’s definitely a moving target because our emissions are moving the needle all the time and if we start adding in a magic bullet, like bringing negative emissions online with increasing impact, then the calculation of an equilibrium time frame is really pretty theoretical and we are back in the realm of ideology.

    As I think more about this, I think it is useful to talk about projected warming increase at any particular concentration of CO2 and to project the increase at 20, 50, 100, 200 and 500 years. So, a discussion today would be that if we could zero emissions today at an annual average of say, 408 ppm (or whatever it actually is at this moment), then we would see global warming of X degrees in each of the time frames.

    I think that ECS is a bit of an intellectual construct, much like the global mean temp, it’s probably a pretty accurate calculation, but it’s probably not going to be an actual condition of the planet for any meaningful period of time because the planet has fluctuations. The time frames and conditions that are meaningful to a certain somewhat sentient species that walks erect on two legs is probably in the 20, 50, 100, 200 and 500 year ranges.

    Warm regards all

    Mike

  42. angech,
    I no longer have a clue what you’re getting at. The reason this paper gets a different result to Lewis & Curry is that it tries to account for the things that Lewis & Curry largely ignore (time dependencies, efficacies, etc). It’s got nothing to do with changing the integration timestep. That’s to ensure that it’s properly catching the fast responses (I assume).

  43. izen said “In practice I suspect that many people discount any time frame that exceeds their current life expectancy. Or at best, that of their children.”

    I think that is correct, but in discussing committed warming based on current conditions (not using the ECS equilibrium term), a discussion of committed future warming in longer time frames might help non-scientists grasp the situation a little better.

    Also, ATTP said something about how the committed warming would essentially stop when emissions reach zero. Maybe I misunderstand this, but I don’t spend much time thinking in these terms for many reasons:
    1. We are so far from reaching that point
    2. We are unlikely to actually stop at the point if we have deployed negative emission technology because
    3. If we have good negative emission technology, we would probably choose a certain CO2e accumulation number that we want to reach to stabilize climate change and reduce risks to the ecosystem (maybe that would be 350 to 380 ppm range).
    4. Until we stop the increase, the talk is just talk and there is good reason to wonder if our global political structures are capable of negotiating the changes required to change our CO2e issues in a time frame that is meaningful to us.

    I could be wrong about all that.

    Mike

  44. attp said: “angech,
    I no longer have a clue what you’re getting at. ”

    I think this is where a good faith discussion leads when one of the parties is a bad faith actor who is grinding the ideological axe rather than trying to learn from the discussion.

    Maybe I am wrong about that? Does anyone here believe that angech is a good faith actor who is not simply working on an ideological agenda? I think it’s fine to engage with the ideologues, but maybe only if you are regularly noting that ideology is getting in the way of understanding. For myself, I generally won’t bother. I would rather play with my grandchildren than let ideologues waste my time. But that’s just what makes sense to me. YMMV

  45. small,
    I think angech is engaging in good faith (and we should probably avoid discussing other commenters, so maybe we can stop after this). Would be nice if angech didn’t regularly make claims that were clearly wrong and – unfortunately – repeat the same wrong claims on multiple occasions.

  46. ecoquant says:

    As a kind of random footnote relating to Climate Sensitivity, but of the Transient kind, not the Equilibrium kind, and going back to something discussed in comments a bit back, Schurer, Hegerl, Ribes, Polson, Morice, and Tett have just hit the street with:

    “Estimating the Transient Climate Response from Observed Warming”, in Journal of Climate (2018). In their supplement, they offer a figure which illustrates some of the silliness of that old Lewis and Curry set of claims regarding Bayesian inference.

  47. verytallguy says:

    I’d like to confess to not understanding the concept at all.

    Is the paper doing a “virtual EBM” over different timescales using model runs?

    And the conclusion is that the methodology used in EBMs for ECS returns a higher ECS the longer the timescale from an initial doubling considered?

  48. vtg,
    I read it last night, but I haven’t digested it all myself. If I get a chance, maybe I’ll read it more carefully and write another post.

  49. John Hartz says:

    Dikran: Based on my recollection of the comment threads to prior articles about climate sensitivity, I can pretty much predict what the “regulars” will post on this thread. I’m extremely frustrated right now because, based on new research findings, the Earth’s climate system seems to be going to hell in a handbasket and the human race doesn’t appear capable of doing what is necessary on a global scale to mitigate man-made climate change.

  50. dikranmarsupial says:

    Getting a good idea of the distribution of ECS and of impacts given ECS is a good source of rational argument in favour of action on climate. I understand frustration about this, but the scientific discussion ought to be focussed on areas of genuine uncertainty, and that is science’s best hope of making a contribution to sane policy.

  51. Dave_Geologist says:

    I would be happy if others could make this clearer as the compounding effect seems possible to me.

    Did you get as far as differentiation and derivatives, angech? Another way of looking at it is as analogous to the Tangent Problem. If you freeze it at about 4 minutes, you’ll see a line joining (4,3) and (5,6). You’ll see that this secant line is not tangential to the curve. If you’re unfamiliar with the concepts I’d advise watching the whole video. You can’t meaningfully discuss differential equations without understanding derivatives (it’s a necessary but not sufficient condition). The lecture goes on to bring the two points closer and closer together, until at the limit where dx tends to zero, the line is tangential to the curve. In a D.E. solver, you don’t have the luxury of setting dx (dt in a time-series) to zero and have to use finite steps. Hence “finite-difference” solver. Imagine you’re trying to forward model the curve in small steps. At each step you know dy/dx (or dT/dt, or whatever) and project a line segment forward like the one beyond (5,6). That’s called a forward-predictor solver. You’ll notice that the line falls below the curve, as would a projection to x=6, because the curve is concave upwards. So at an x value of 6, you’re underestimating y. Depending on how dy/dx is determined, you may still get the gradient right at x=6. But when you project to x=7, you’ll fall even further below the curve because the two deficits add up. By x=100 you’ll be a long way off.

    You can get round that by making the x-steps smaller, but of course that requires more computing resources. The brute-force way to find out how small the step needs to be is to keep making it smaller until it stops making a significant difference, for your chosen value of “significant”. That’s essentially what ATTP’s graph above does. The orange line is two big tangents, the green line four smaller tangents, etc. There are other approaches such as using a Predictor–corrector method, but that makes the calculation of each individual time-step more costly, so may or may not be more efficient, depending on the specific problem.

    I have used an alternating forward-reverse predictor in the past, which probably counts as a sort of predictor-corrector. Going back to the video, once I’ve projected from (5,6) to get the y value at x=6, I calculate the gradient at x=6. I then step back and project forward from (5,6), using the gradient from x=6. This time, my point for x=6 falls above the curve. That’s called a reverse-predictor because you’re using the gradient extrapolated back from the next step to project forward from the previous step. If you alternate forward and reverse predictors, in odd and even steps, then instead of always falling below the curve, the joined-up tangents will zig-zag either side of the curve. That could be helpful with something like ECS, where you perhaps just care about the end point. You can tolerate bigger excursions either side of the curve as long as you’re not systematically drifting away from the curve. It relies on the system being fairly well behaved, with smooth first and second derivatives. Again it adds complexity, in this case to every second step. You have to calculate the gradient twice, and do the forward projection twice. IIRC I got a three- or four-fold improvement in run-time compared to dividing the step size by ten.
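
    A toy version of the two schemes (an illustrative sketch, not the code referred to above), using y = x^2 so that dy/dx = 2x and the curve is concave upwards:

        # Toy sketch: on the concave-up curve y = x^2 (dy/dx = 2x), a plain
        # forward predictor lands below the curve and the deficit accumulates,
        # while alternating forward and reverse predictors zig-zags around it.
        def grad(x):
            return 2.0 * x

        h, n_steps = 1.0, 6  # step size and number of steps (x from 0 to 6)

        # Plain forward predictor: always use the gradient at the current point.
        y = 0.0
        for i in range(n_steps):
            y += grad(i * h) * h
        print(f"forward only:        y(6) = {y:.1f}  (true 36.0)")

        # Alternating: even steps use the gradient at the current point,
        # odd steps use the gradient extrapolated back from the next point.
        y = 0.0
        for i in range(n_steps):
            x = i * h if i % 2 == 0 else (i + 1) * h
            y += grad(x) * h
        print(f"alternating fwd/rev: y(6) = {y:.1f}  (true 36.0)")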

  52. That’s generous, John, and I think you are right. I am also saddened by what the Carolina Trumpsters have to say about climate change. Calling people brainwashed or stupid is not a great idea. Maybe the politically correct and less offensive way to cover this is to say that it is not smart to get your news about the science of climate change from the fossil fuel industry, Fox News and Rush Limbaugh?

  53. Dave_Geologist says:

    Why should one consider an ECS value going out to say 100,000 years?

    angech, as I understand it ECS started out as a measure within particular GCMs, with “equilibrium” meaning “close enough to equilibrium”, i.e. dT/dt becomes almost flat. So “how long” depends on what feedbacks are in the model. For most GCMs, that doesn’t include the ESS stuff on thousands-of-year timescales (it must for people modelling glaciations, Snowball Earths etc.). But strictly speaking it is “how long is a piece of string” unless you specify the GCM. In practice, I presume the CMIP models are pretty mutually consistent, otherwise it would be hard to do an inter-comparison 🙂 . Since much discussion is focused on 2100, century-scale sounds about right. But we need to be very careful always to say this is not really equilibrium. ESS is consistently above ECS so warming will continue after we reach ECS “equilibrium”. Which is why I wish the word hadn’t been used. The Earth System processes will kick in gradually, so we shouldn’t just ignore them because we’ll be living on Mars by the time they become important. And some short-term feedbacks may occur on century timescales, but be omitted from GCMs because they’re too hard to model or the inputs are too poorly constrained.

  54. John Hartz says:

    Dikran: You wrote:

    Actually GMST does exist, it is just that we can’t measure it, only estimate it.

    OTH, gravity exists and we can measure it. 🙂

  55. dikranmarsupial says:

    Sorry, I find this type of discussion tedious. If you make an assertion that is incorrect, it is poor form to evade simply acknowledging it.

  56. John Hartz says:

    Dikran: I stand by my assertion that GMST does not exist in nature. It exists as a mathematical computation. Ditto for ECS.

  57. JH,
    The planet clearly has an average global surface temperature, even if we can’t actually measure it.

  58. dikranmarsupial says:

    No John. The mathematical computation is an estimate of GMST not GMST itself. If I have a ball of rock six inches in diameter does it have a mean temperature? Yes or no?

  59. John Hartz says:

    Dikran: Of course your rock has a “mean temperature” that is computed by taking a number of temperature readings. The “mean temperature” does not exist in the natural world. It is a mathematical computation by humans.

  60. John Hartz says:

    Dikran & ATTP: Perhaps the fact that I have a civil engineering degree explains why our perceptions of what exists in the natural world differ. Regardless, I strongly believe that all of the time and energy devoted to discussing, dissecting, and debating ECS has a high opportunity cost relative to our understanding of the myriad of other components of the Earth’s climate system. Furthermore, by definition, ECS is an output of the system, not an input.

  61. dikranmarsupial says:

    JH “Of course your rock has a “mean temperature” that is computed by taking a number of temperature readings.”

    No, that would be an ESTIMATE of the rock’s mean temperature. The rock does have a mean temperature, it is a property of matter (consider what is different about the matter at absolute zero and room temperature – that is a physical difference not a mathematical construct). You don’t appear to understand the difference between an estimate or measurement and the thing that is being estimated or measured. They are not the same in physics or engineering (I am an engineer by training as well).

    Discussing ECS is not a waste of time, as I have repeatedly pointed out it is a useful metric for understanding climate change, and one with genuine uncertainty. If we want evidence based policy, then it is absolutely what we should talk about, and it is ironic that you are expending much energy promulgating errors that I had only previously seen on blogs like WUWT.

  62. John Hartz says:

    Dikran: You state:

    You don’t appear to understand the difference between an estimate or measurement and the thing that is being estimated or measured. They are not the same in physics or engineering (I am an engineer by training as well).

    Hogwash!

    You are right, there is absolutely zero point in continuing our discussion.

  63. BBD says:

    @ John Hartz

    What ecoquant said, eloquently.

    Regardless, I strongly believe that all of the time and energy devoted to discussing, dissecting, and debating ECS has a high opportunity cost relative to our understanding of the myriad of other components of the Earth’s climate system.

    I know what you mean, but climate sensitivity is a Thing that Must Be Investigated. And therefore discussed, at least by those interested in the investigation. Even if every time somebody mentions the S-word…

  64. BBD says:

    To be clear, my sarky image quote refers to zombie arguments, not those who express them.

  65. ecoquant says:

    @John Hartz,

    I feel your pain. I’m sure many of us do. I think that’s a reason I hang out here, at Tamino‘s and Eli‘s when I can.

    On my most charitable days I think of the epitaph of humanity in a century, or perhaps, assuming it all goes to hell in a few centuries, written above the corridor entrance to the unmarked Hall of Extinction from Dr Tyson’s Cosmos and, perhaps my favorite, albeit emotion-laden episode, “The lost worlds of planet Earth”,

    We’re sorry, but we didn’t know.

    (Of course, see Thought #3 below.)

    Three thoughts might help.

    #1, continue to hang out and talk. It probably won’t do any good for the Big Picture, but it’ll help you — and us — and, who knows, it might even do some good for the Big Picture.

    #2, the late Buckminster Fuller offered a counterargument (in a video made late in life) to those who felt a federal or world government ought not take away the rights of individuals and communities `for the greater good’. These aren’t his words, they are mine, but they are due to and capture his argument.

    You don’t really mean that, he might say. Suppose I’m sitting here, and I see something coming in towards you quickly, looking deadly, and it’s within my power to knock it away. I could do that, but, then, taking your argument I might be depriving you of your rights. After all, maybe you want to die. How do I know?

    What he means, of course, is that there’s a line where everyone but the most extreme solipsist expects some social care.

    Continuing, And suppose you live in a flood-prone area, a flood is on the way, and the authorities tell you to evacuate. You don’t. So, if they are respecting your rights they should just abandon you. You’ve indicated you don’t care about your life and family by choosing to remain against their recommendations. But, in fact, most who make such choices do expect them to come after you.

    And, finally, Most everyone in a society expects it to provide some care. If they are grown up or wise enough, they realize there always exist people who know, in the long run, better than you do how to care for you and yours. (I know that’s controversial.) Your actual choice is whether or not you want these people in a position to take care of you when it is warranted. I would add, I think that’s your only choice.

    We can help people who care for others know what they need to know, and to do what they need to do.

    #3, Support Our Children’s Trust. Donate. Attend a rally around the nation on the 29th of October when the Oregon trial opens.

    For over fifty years, the United States of America has known that carbon dioxide (“CO2”) pollution from burning fossil fuels was causing global warming and dangerous climate change, and that continuing to burn fossil fuels would destabilize the climate system on which present and future generations of our nation depend for their well being and survival. Defendants also knew the harmful impacts of their actions would significantly endanger Plaintiffs, with the damage persisting for millennia. Despite this knowledge, Defendants continued their policies and practices of allowing the exploitation of fossil fuels. Specifically, Department of Energy has approved the export of liquefied natural gas (“LNG”) from the Jordan Cove LNG terminal in Coos Bay, Oregon. This export terminal will be the largest projected source of CO2 emissions in Oregon, and will significantly increase the harm that Defendants’ actions are causing to Plaintiffs. Defendants have long-standing knowledge of the cumulative danger that their aggregate actions are causing Plaintiffs. The Jordan Cove project enhances the cumulative danger caused by Defendants’ affirmative aggregate actions.

    In a 1965 White House Report on “Restoring the Quality of Our Environment,” for example, the President’s Science Advisory Committee stated: “The land, water, air and living things of the United States are a heritage of the whole nation. They need to be protected for the benefit of all Americans, both now and in the future. The continued strength and welfare of our nation depend on the quantity and quality of our resources and on the quality of the environment in which our people live.”

    The United States Environmental Protection Agency (“EPA”) in 1990 and the Congressional Office of Technology Assessment in 1991 prepared plans to significantly reduce our nation’s CO2 emissions, stop global warming, and stabilize the climate system for the benefit of present and future generations. Both the EPA’s 1990 Plan, “Policy Options for Stabilizing Global Climate,” and the OTA’s 1991 Plan, “Changing By Degrees: Steps to Reduce Greenhouse Gases,” were prepared at the request of, and submitted to, Congress. Despite the imminent dangers identified in both the EPA’s 1990 Plan and the OTA 1991 Plan, Defendants never implemented either plan.
    .
    .
    .

    Defendants have for decades ignored experts they commissioned to evaluate the danger to our Nation, as well as their own plans for stopping the dangerous destabilization of the climate system. Specifically, Defendants have known of the unusually dangerous risks of harm to human life, liberty, and property that would be caused by continued fossil fuel burning. Instead, Defendants have willfully ignored this impending harm. By their exercise of sovereign authority over our country’s atmosphere and fossil fuel resources, they permitted, encouraged, and otherwise enabled continued exploitation, production, and combustion of fossil fuels, and so, by and through their aggregate actions and omissions, Defendants deliberately allowed atmospheric CO2 concentrations to escalate to levels unprecedented in human history, resulting in a dangerous destabilizing climate system for our country and these Plaintiffs.

    The 1965 Report and the 1990 and 1991 Plans are only examples of the extensive knowledge Defendants have had about the dangers they caused to present and future generations, including Plaintiffs. Since 1965, numerous other studies and reports also have informed Defendants of the significant harms that would be caused if Defendants did not reduce reliance on carbon-intense energy from fossil fuels and rapidly transition to carbon-free energy. These studies and reports concluded that continued fossil fuel dependency would drive the atmospheric concentration of CO2 to dangerous levels that would destabilize the climate system.

    Yet, rather than implement a rational course of effective action to phase out carbon pollution, Defendants have continued to permit, authorize, and subsidize fossil fuel extraction, development, consumption and exportation – activities producing enormous quantities of CO2 emissions that have substantially caused or substantially contributed to the increase in the atmospheric concentration of CO2. Through its policies and practices, the Federal Government bears a higher degree of responsibility than any other individual, entity, or country for exposing Plaintiffs to the present dangerous atmospheric CO2 concentration. In fact, the United States is responsible for more than a quarter of global historic cumulative CO2 emissions.

    The present level of CO2 and its warming, both realized and latent, are already in the zone of danger. Defendants have acted with deliberate indifference to the peril they knowingly created. As a result, Defendants have infringed on Plaintiffs’ fundamental constitutional rights to life, liberty, and property. Defendants’ acts also discriminate against these young citizens, who will disproportionately experience the destabilized climate system in our country.

    That’s from the Complaint.

  66. John Hartz says:

    BBD: I agree with you. Climate Sensitivity is a Thing that Must Be Investigated and discussed. In hindsight, I should have resisted the temptation to offer my two cents opinion.

  67. ecoquant says:

    I think we need some humor here. Well, even if we don’t, I can’t resist inflicting the following upon you. It’s all I can think of when I read this conversation:

    LICHTMAN: Yeah. It needs a little setup, so Christy(ph) from Laramie, Wyoming, told this joke, and it’s about three statisticians that go deer hunting.

    CHRISTY (Caller): The first statistician crouches, aims and fires, but his shot goes off to the right. The second statistician steps up to take her shot, but it veers off to the left. Seeing this, the third statistician jumps up and cheers, we got him.

    (That’s from a Science Friday episode.)

  68. John Hartz says:

    ATTP: My apologies for implying that you post too many articles about Climate Sensitivity. It’s obviously a topic that you are keenly interested in. This is your venue and you are free to post whatever you want to.

  69. John Hartz says:

    ecoquant: I’m not about to throw in the towel and go into a state of depression. I am a member of the Skeptical Science team and spend 40 or more hours per week of my time maintaining the Skeptical Science webpage and publishing two documents each weekend.

  70. ecoquant says:

    @John Hartz,

    ecoquant: I’m not about to throw in the towel and go into a state of depression.

    Good, glad.

    But if, in fact, “frustration” interferes with scientific discussion, there is IMO an issue to be addressed.

  71. izen says:

    @-JH
    “Regardless, I strongly believe that all of the time and energy devoted to discussing, dissecting, and debating ECS has a high opportunity cost relative to our understanding of the myriad of other components of the Earth’s climate system.”

    The flooding, landslides, fires, algae blooms and all the other enhanced ‘Natural’ disasters that are ongoing may scale in intensity/incidence with rising temperatures, but I think the evidence they do so in any linear manner is weak. I would agree that ECS is less important than a better understanding of how the local and transient effects of adding gigajoules of energy to the biosphere, along with significant chemical changes (ocean acidification), are likely to impact the agricultural infrastructure and the global civilisation we currently enjoy.

    Because it can be estimated from models and observations, it is of scientific interest as a discoverable metric of climate change.
    But it is rather like the man who dropped his keys, looking for them under the street lamp, not because that is where they are likely to be, but because that is the only place he is able to see them.
    The ability to search there does not mean it is the most useful area to cover.
    Because it IS possible to estimate the value of ECS, but with error bars, it also becomes a target of opportunity for those wishing to doubt and dismiss all the other aspects of our knowledge of climate change.

    Fossil Fuel use is far too deeply embedded in the global system to be easily abandoned, or even reduced at any significant rate. That will require the gradual evolution of alternative sources of energy generation and/or a major collapse of consumption as a result of some combination of the four horsemen.

    ECS is particularly unconvincing to the inactivists if it refers to a timescale beyond their personal horizon. For ‘people’ who are corporations that is the next stockholder meeting and profit report.
    Or if they are Institutions, Nationalised/centrally planned systems, the end of the current five year plan.

    It may be more effective to set aside the exact magnitude of ECS and highlight the importance of making our civilisational infrastructure much more resilient in the face of the sorts of impacts already seen, on the basis that they will continue and increase while emissions do.
    Ignoring any dubious quibbles about the precise scale of the problem on the basis that we can already see the type of threats we face, and can have no doubt they WILL increase.

    That will certainly require a much greater degree of mutual cooperation and social care than is the current zeitgeist. As ecoquant observed,
    “We can help people who care for others know what they need to know, and to do what they need to do.”
    To adapt to the disruption a changing climate will cause for a system built on a ‘steady state’ will need a lot more cooperation and compromise by nations and individuals. But it is likely easier than getting significant emission cuts. It might even motivate more action on emissions if the cost of adaptation becomes more evident.
    I suspect most of this will be imposed, retroactively, by necessity.

    An aside: you are correct that both ECS and GMST are imaginary concepts that exist only as dynamic patterns in human brains that ‘understand’ them. To class them as ‘Real’ falls into the Platonic Dualism fallacy. (They may of course be ‘True’.)

  72. John Hartz says:

    izen: Thank you for your thoughtful and insightful comments.

  73. ecoquant says:

    @Dave_Geologist,

    Okay, so I’m no climate scientist, or geophysicist, or atmospheric physicist. However, I do know a bit, from personal study.

    I know that there is a definitive theoretical definition of ECS, and there is an \text{ECS}_{\text{2X}} thing, which is more operational. The theoretical definition is an asymptotic version of climate sensitivity, which is a pretty non-linear thing, described in detail, for example, in Prof Ray Pierrehumbert’s Principles of Planetary Climate, section 3.4.2. Indeed, his Chapter 3 is all about the radiation balance of planets, and delves into blackbody radiation and various feedbacks, like ice-albedo. (Indeed, ATTP has a post about non-linear feedbacks to which Prof Pierrehumbert, if I recall, contributed some material.) There is also Transient Climate Sensitivity, which Held and Winton have written about and explained at some length. Roughly, Transient is the response before deep oceans equilibrate. ATTP took a look at some of this in 2013. Prof Steve Easterbrook has written intelligently on this, too.

    In order to come up with a prediction of something like \text{ECS}_{\text{2X}}, in order to pin down the contributions, assumptions need to be made, such as constant relative humidity. But, even so, whatever the source of the ECS or TCS estimate, be it paleoclimatological or model, or a mix, there are important things to remember when interpreting, as I’ve written about. The significant one is that people should be really focussed upon ECS for land, not global ECS, in the same way that GMST for land is what should be considered. Oceans and land are quite different surfaces and there’s more ocean, so integrating over the entire surface biases the result low. This is enough to make desperate situations seem not so bad.

    It’s also a lesson in prediction. Predictions need to be specific to the case, so they are conditional expectations and conditional variances, whether limited to land or to a region. Depending upon the case and region, the expected conditional variance might be larger or smaller than a global counterpart. A bunch of this might well be prediction error, some of which is intrinsic. There is also likely to be some model specification error included as well.

  74. dikranmarsupial says:

    JH wrote “Hogwash!”

    well, if you put it like that, I’m convinced. I note you didn’t even attempt to address the substantive point. Typical of discussions about climate: hubris followed by evasion.

  75. dikranmarsupial says:

    “To class them as ‘Real’ falls into the Platonic Dualism fallacy. ”

    There is a real mean distance between the Earth and the Pleiades: as I understand it, we can’t directly measure it (my longest ruler is a measly yard long), but we can estimate it by a variety of different methods. Those estimates are not the mean distance, just an estimate of it. There is nothing Platonic about the mean distance between the Earth and the Pleiades, it is a stretch of space that really does exist, ask any photon that travels between the two.

  76. dikranmarsupial says:

    ECS is not a fundamental property of the Earth system, that is true (and I haven’t said otherwise), but that doesn’t mean it isn’t a useful tool in guiding/informing action on climate change. It basically gives us an idea what would happen in a hypothetical situation, but it does so in a mathematically tractable manner, which means it can be connected with theory and model simulations, from which it gains consilience. This means it is an alternative to approaches such as SRES, which does something similar (what would happen in this situation?) but in a way that is much less general (for instance it fixes a start-point) and is less easily related to theory (AFAICS you need a climate model to do so).

    Personally I think it is a mistake to tell other people how to go about achieving their aims without (i) understanding their aims, they may not be the same as yours and (ii) having good evidence that your approach works and theirs does not. It is a bit like all the papers on communication of climate change that also get discussed here, where it seems to me there is unlikely to be a “one size fits all”. It seems to me a rather better idea to understand there is diversity of goals in the discussion and different means for achieving these goals given the inhomogeneous audience that we face. I’m glad that JH works hard on action on climate change, I’m sure he has had a good deal of influence for the better. However his approach won’t work for everybody, and will alienate some.

  77. Dave_Geologist says:

    As long as the EBM approach gave the same answer, no one was curious enough to explore certain questions.

    Tell me Steven, what question was explored? Whether, if you use a physically unrealistic Bayesian prior in a problem where the choice of prior matters, you’ll get a physically unrealistic posterior distribution? I kinda think we knew that already. That if you choose to label your prior “objective” rather than “uninformative” (I would say “uninformed”), it will be meat and drink to those who can’t bring themselves to face inconvenient truths? That it will be turned into “Nic is objective (in the everyday sense of the word), and everyone else is subjective and can’t be trusted”? Ditto. That if you yourself elide the distinction between the Bayesian meaning and the everyday meaning in your social media interactions, it will make matters worse? Ditto. That if you pick a low-ball temperature record, you’ll get a low-ball ECS? Ditto.

    What precisely, outside of the rhetorical world, did Nic’s effort contribute? Maybe questions like “should I use a non-physical prior just for fun” had been asked and answered. And the answer was that while it may be an interesting intellectual exercise, it has zero value in the real world and my funders live in and are interested in the real world.

  78. Dave_Geologist says:

    OTH, gravity exists and we can measure it.

    I think you’ll find that Big G has a plus or minus attached John.

    If you mean the Earth’s gravity, small g, that’s an interesting one, because it varies across the surface of the Earth, like temperature, and it even varies with the seasons (snow and glacier melt/freezing; aquifer depletion/recharge), and from year to year as events like El Nino move water around within oceans. And pre-satellite, gravity meters, which are prone to drift like thermometers, took relative measurements after being calibrated at a base station with known gravity. And of course gravity satellites have many of the same orbital and timing issues as climate satellites. And to crown it all, geophysicists generally work with gravity anomalies, not absolute gravity.

    So gravity is actually a very good analogue to global temperature.

  79. Dave_Geologist says:

    ecoquant, I agree that insufficient attention is paid to land. We say “why surface, when it’s a crap measure of the heat gained? Because it’s where seven billion people live; the surface matters more to us than 3000 m down in the ocean”. But we also live on land. And as well as warming more, it has bigger day/night cycles, so high temperature extremes will bite first on land. In my favourite early Triassic example, where the tropics became too hot for reptiles and fish on ESS timescales, I bet the reptiles died or migrated first, and the fish followed later.

  80. JH,

    My apologies for implying that you post too many articles about Climate Sensitivity. It’s obviously a topic that you are keenly interested in.

    No problem, I’m used to people telling me what I should, or should not, do 🙂

  81. Dave said:

    “If you mean the Earth’s gravity, small g, that’s an interesting one because it varies across the surface of the Earth, like temperature, and it even varies with the seasons (snow and glacier melt/freezing; aquifer depletion/recharge), and from year to year as events like El Nino move water around within oceans. “

    The interaction of gravity and a spinning mass of fluid is one of the challenging problems that has engaged mathematical scientists such as Newton, Maclaurin, Jacobi, Meyer, Liouville, Dirichlet, Dedekind, Riemann, and Chandrasekhar. This is referred to as the theory of equilibrium figures, and as the “figure of the Earth” for the planet in particular.

    Click to access 1409.3858.pdf

    Gravity impacts the movement of water in the oceans, and then this redistribution of water impacts the gravitational forces. The first-order perturbation of this is the Mathieu equation, which is used by hydrologists to model the sloshing of water in a container. It is interesting that Moon and Wettlaufer (2017) are using this equation to effectively model ENSO, yet they don’t call out Mathieu. I think that once these guys get their ducks in a row, they will be able to figure out the dynamics of ENSO.
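    As a toy illustration of the machinery (parameter values are mine, chosen only for demonstration, not fitted to anything), the Mathieu equation y'' + (a - 2q cos 2t) y = 0 can be integrated directly:

        import numpy as np
        from scipy.integrate import solve_ivp

        a, q = 1.0, 0.3   # Mathieu parameters; illustrative values only

        def mathieu(t, y):
            # y'' + (a - 2 q cos 2t) y = 0, written as a first-order system
            return [y[1], -(a - 2.0 * q * np.cos(2.0 * t)) * y[0]]

        sol = solve_ivp(mathieu, (0.0, 50.0), [1.0, 0.0], max_step=0.01)
        print("y(t=50) =", sol.y[0, -1])

    Whether the response stays bounded or grows parametrically depends on where (a, q) sits in the Mathieu stability diagram, which is the connection to sloshing.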

    “And to crown it all, geophysicists generally work with gravity anomalies, not absolute gravity.

    So gravity is actually a very good analogue to global temperature.”

    I’ve noticed that the geophysicists are making much progress, and the fact that the main multidecadal natural variability in temperature is correlated with LOD points to the idea that gravity is perhaps even a direct effect rather than an analogue of global temperature.

  82. John Hartz says:

    Dikran: Re our interchanges on this thread, your comments are laced with ad hominem attacks, mine are not.

    When you state,

    I’m glad that JH works hard on action on climate change, I’m sure he had a good deal of influence for the better. However his approach won’t work for everybody, and will alienate some.

    I say,

    People who throw stones shouldn’t live in glass houses.

  83. dikranmarsupial says:

    JH, I am not throwing stones; if you read what I have written you will find out that I think different approaches are required for different audiences, and I don’t tend to criticise people for trying to do things their way (as you did: “Discussing Climate Sensitivity ad nauseam is akin to doing what the band did on the Titanic.”).

    It is bizarre that you can quote what I said and not realise it was a compliment, not an ad hominem!

  84. John Hartz says:

    Dikran: You stated,

    ECS is not a fundamental property of the Earth system, that is true (and I haven’t said otherwise), but that doesn’t mean it isn’t a useful tool in guiding/informing action on climate change.

    You and I are in complete agreement on this statement in its entirety. Case closed.

  85. dikranmarsupial says:

    Sorry, selective quoting to evade the point I was making is just trolling, and seems intended to wind me up, which is not at all appreciated.

  86. Maybe we could draw this discussion to a close.

  87. dikranmarsupial says:

    Certainly, my apologies. Perhaps time for another hiatus…

  88. John Hartz says:

    ATTP:

    Maybe we could draw this discussion to a close.

    My thoughts exactly!

  89. ecoquant says:

    @John Hartz, @dikranmarsupial,

    Regarding

    ECS is not a fundamental property of the Earth system …

    I would just suggest in analogy that Reynolds number is not a fundamental property of fluid flow or objects in it, yet it is very useful for understanding flow across many scales, even for highly complicated systems.
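    For readers who haven’t met it, the standard textbook definition (nothing here is specific to the climate discussion):

    Re = \dfrac{\rho u L}{\mu} = \dfrac{u L}{\nu},

    where u is a characteristic flow speed, L a characteristic length, and \nu = \mu / \rho the kinematic viscosity. The same dimensionless machinery carries across wildly different flows, which is the force of the analogy.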

  90. ecoquant says:

    @John Hartz,

    Thank you very much, John. That’s excellent.

  91. GERALD RATZER says:

    For those who are tired of the ECS debate (which I found interesting) – I would like your opinion on the recent presentation from Dr. Ed Berry on why the IPCC and its Bern Model for CO2 are wrong.
    His slide show is at

    Click to access EdwinBerryPortoSep7Final.pdf

    I think the first few slides, and the one on the C14-CO2 decline after the NTBT, are good evidence on a global basis.

    Are there any errors or omissions in this presentation?
    If this science is correct, there is less to worry about regarding the ECS.

    Gerald

  92. Gerald,
    The problem with Ed Berry’s ideas is that he is confusing the residence time of a molecule with the adjustment time of an enhancement in concentration. It is true that a single molecule will only remain in the atmosphere for a few years. However, when it leaves the atmosphere, it is typically replaced by a molecule from one of the other reservoirs. Therefore the timescale over which an enhancement of atmospheric CO2 will decay is much longer (centuries rather than years). Also, because we’re adding CO2 to the system, once full ocean invasion has been achieved, about 20-30% of our emissions will remain in the atmosphere for thousands of years. You could try reading this paper.
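    A toy sketch of this distinction (the numbers are mine and purely illustrative, not from any carbon-cycle model): exchange molecules between the atmosphere and ocean at a high gross rate, and watch what happens to the labelled molecules versus the enhancement itself.

        import random

        random.seed(0)
        atm = ["ours"] * 100 + ["background"] * 900    # enhanced atmosphere: 1000 units
        ocean = ["background"] * 10000

        for year in range(50):
            for _ in range(100):                       # gross exchange: 100 swaps per year each way
                i = random.randrange(len(atm))
                j = random.randrange(len(ocean))
                atm[i], ocean[j] = ocean[j], atm[i]    # one-for-one swap: totals never change

        print("atmospheric total:", len(atm))                  # still 1000: no decay of the enhancement
        print("labelled molecules left:", atm.count("ours"))   # far fewer than 100: ~10 yr residence time

    The labelled molecules leave with a residence time of roughly a decade, while the atmospheric total, which is what matters radiatively, does not decay at all in this toy; in the real system it decays only through the much slower net uptake.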

  93. izen says:

    The extra CO2 molecule that we put into the atmosphere has a 50:50 chance of getting absorbed by ocean or land plant in around a decade.
    But it is a cycle.
    The extra CO2 molecule that we put into the atmosphere that was absorbed by ocean or land plant around a decade ago has a 50:50 chance of getting released…

  94. Dave_Geologist says:

    Gerald

    D minus for Berry, I’m afraid. He has three or four decades of atmospheric physics and chemistry to catch up on. This stuff is in textbooks. No need to even search the literature. He should be humble and read a textbook or audit an online course.

  95. Gerald,
    Diffusion of species to sequestering sites does not follow a damped exponential (a single half-life). This is independent of the nature of the species, provided the species follow a random walk.

    izen said:

    “The extra CO2 molecule that we put into the atmosphere has a 50:50 chance of getting absorbed by ocean or land plant in around a decade.”

    This 50/50 chance is actually characteristic of a random walk. Half of the molecules move downwards into the ocean on average, while half move upwards. In other words, they have a hard time sequestering so are constantly moving about in a random walk fashion — this is called diffusion.

    Unfortunately this behavior of CO2 is often not adequately explained in the research literature. Diffusion always requires a multiple-box model — infinite boxes in theory — and a single-box model will not even come close to showing the true diffusional response.
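    A minimal random-walk sketch of that claim (geometry and numbers are mine, purely illustrative): walkers reflect at the surface and are absorbed (sequestered) at depth, and the surviving fraction is not described by any single half-life.

        import numpy as np

        rng = np.random.default_rng(0)
        n_walkers, depth = 10000, 40       # absorbing (sequestering) boundary 40 steps down
        pos = np.zeros(n_walkers, dtype=int)
        alive = np.ones(n_walkers, dtype=bool)

        for t in range(1, 4001):
            step = rng.choice((-1, 1), size=n_walkers)
            pos[alive] += step[alive]
            np.clip(pos, 0, None, out=pos)   # reflect at the surface
            alive &= pos < depth             # absorb at depth: sequestered for good
            if t in (100, 500, 2000, 4000):
                print(f"t={t:5d}  surviving fraction = {alive.mean():.3f}")

    The early decay is slow and strongly non-exponential; only the slowest diffusive mode eventually gives an exponential tail, which is why no one-box (single time constant) model can match the whole curve.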

  96. Marco says:

    People here would do well to google Gerald Ratzer (and add climate as a search term). Based on what I found, I doubt he will take in your responses.

  97. dikranmarsupial says:

    Here is a video where I explain the difference between residence time and adjustment time that was part of a MOOC on climate change, which explains why Berry is wrong.

    I had previously published a (hopefully accessible) journal paper on this misunderstanding; you can find a pre-print here

    It is hard to understand why this canard still gets promulgated, since the first IPCC WG1 report explicitly warns against making this very mistake. ECS is an area of genuine scientific uncertainty; however, whether the rise in CO2 is natural or anthropogenic is not. Any skeptic who promulgates this argument is marginalising themselves from the discussion by showing (ironically) a complete lack of self-skepticism. It does neither “side” of the “debate” any good.

  98. dikranmarsupial says:

    I note that Gerald has already added Berry’s argument to the summary document on his web page. If someone claims that a large group of scientists are fundamentally wrong in their understanding of a very basic issue, then common sense should suggest checking first before promulgating it further. Great claims require great evidence. While Galileos do exist, they are vanishingly rare, but crackpots remain plentiful (and the WWW allows them to spread their misinformation much more easily than was the case 20 years ago).

  99. Maybe I’ll ask Gerald my standard question in these situations. Has he heard of, and does he understand, the Revelle factor? It’s my view that if one is to dispute the basics of carbon cycle modelling, it’s an important thing to at least be aware of, and understand.
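    For anyone meeting it for the first time, the Revelle (buffer) factor is the textbook ratio

    R = \dfrac{\Delta pCO_2 / pCO_2}{\Delta DIC / DIC} \approx 10

    for the modern surface ocean: at equilibrium, a 10% rise in seawater pCO2 corresponds to only about a 1% rise in dissolved inorganic carbon, which is why the ocean takes up far less CO2 than a naive solubility argument would suggest.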

  100. dikranmarsupial says:

    PP “Unfortunately this behavior of CO2 is often not adequately explained in the research literature. Diffusion always requires a multiple-box model — infinite boxes in theory — and a single-box model will not even come close to showing the true diffusional response.”

    There have been several edited volumes on just this topic (about 20 years ago). The carbon cycle modellers know this perfectly well. Indeed, the actual Bern model is a multi-box model (what is often called the Bern model is just a computationally efficient approximation of its impulse response).
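    For concreteness, here is the commonly quoted impulse-response fit, with coefficients as given for the Bern2.5CC model in the footnotes to Table 2.14 of IPCC AR4 (this is the computationally efficient approximation, not the multi-box model itself):

        import numpy as np

        # Bern2.5CC impulse-response fit (IPCC AR4, Table 2.14 footnote):
        # fraction of a CO2 pulse remaining airborne after t years.
        a0, a, tau = 0.217, (0.259, 0.338, 0.186), (172.9, 18.51, 1.186)

        def airborne_fraction(t):
            return a0 + sum(ai * np.exp(-t / ti) for ai, ti in zip(a, tau))

        for t in (1, 10, 100, 1000):
            print(f"t = {t:4d} yr: airborne fraction = {airborne_fraction(t):.2f}")

    The a0 = 0.217 term is the roughly one-fifth of a pulse that, on these timescales, effectively never decays.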

  101. dikranmarsupial says:

    I think it is a bit like the difference between residence time and adjustment time. It is such a basic issue that it is taken for granted by those writing papers on the carbon cycle and doesn’t warrant a specific mention. The first IPCC WG1 report specifically warns against this misunderstanding. IIRC the second has a section explaining why the 14C observations don’t give the full story about the future of atmospheric CO2. Subsequent IPCC WG1 reports seem to relegate these issues to (slightly cryptic) footnotes and entries in the glossary. Given the ever-increasing size of the reports, it seems likely this is because there has been a lot of scientific progress and page-space is better spent on other topics.

    The first IPCC WG1 report is actually quite readable, and well worth perusing (very obviously Berry didn’t bother to check the IPCC reports very carefully).

  102. The actual diffusion model is for an infinitely divisible medium. For practical (and numerical computation) considerations, this is often referred to as a semi-infinite slab model. For whatever reason, which is perfectly fine, the terminology for CO2 is translated to a multiple-box model. The mathematical effect is the same: approximating the second-order partial differential equation for diffusion.
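    In equation form (the standard method-of-lines discretisation, not specific to any one carbon-cycle paper), a chain of boxes approximates the diffusion PDE:

    \dfrac{\partial c}{\partial t} = D \dfrac{\partial^2 c}{\partial x^2} \quad \rightarrow \quad \dfrac{dc_i}{dt} = \dfrac{D}{\Delta x^2} \left( c_{i+1} - 2c_i + c_{i-1} \right),

    and the semi-infinite slab behaviour is recovered in the limit of many boxes, which is the sense in which “infinite boxes in theory” is meant.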

  103. Steven Mosher says:

    Dave

    “Tell me Steven, what question was explored?”

    All the studies that came after Nic explaining why they thought he was wrong.

    It is not that hard guys

    Prior to Nic, EBM gave answers that comported with others,
    no questions no problems, no need to investigate warming patterns ( Dessler)
    after Nic’s results, of course, lots more interesting work.
    why?
    because the EBM approach started to diverge toward the low end.

    not that hard to get

  104. ecoquant says:

    @Gerald Ratzer,

    Per Berry’s core arguments, which founder on an improper characterization of carbon flows between soils, oceans, and atmosphere, confounding budget with flux, see a proper accounting at “The Global Carbon Budget” (2016).

    As far as Munshi goes, I have no respect for anything he writes, based upon previous work.

  105. ecoquant says:

    @Marco,

    Agreed. But such a comment, and challenge, deserves a parry.

  106. John Hartz says:

    Dikran: I’m pleased to see that your hiatus was short-lived. I regret that our prior interchanges devolved into a peeing match. I highly value and respect the knowledge that you bring to this and other venues.

  107. Marco says:

    Steven, I don’t think that is true. From what I have seen, and with only very few exceptions, EBMs always gave lower ECS values than various other approaches.

    ATTP: yes, that one.

  108. verytallguy says:

    That Ratzer website is perhaps the best example of wilful ignorance I’ve ever come across.

    But back on topic, anyone fancy explaining the methodology of this paper? ‘cos I’m lost by it.

    The time evolution of climate sensitivity

  109. vtg,
    I think it goes something like this. In the standard 1D energy balance model (EBM) you solve

    N(t) = R(t) - \lambda \Delta T(t),

    where N is the planetary energy imbalance, R is the change in forcing, and \lambda is the feedback response.

    You can then rewrite this as

    \Delta T(t) = \left( R(t) - N(t) \right) \dfrac{1}{\lambda} = \left(1 - \dfrac{N(t)}{R(t)} \right) \dfrac{R(t)}{\lambda}.

    Now consider that rather than a single forcing agent, there are i forcing agents R_i(t) and instead of a single feedback response, there are j feedbacks, \lambda_j, each with a timescale \tau_j.

    The feedback response to forcing agent i at time t is then

    \lambda_i (t) = \lambda_{Planck} + \sum_j \lambda_{i,j} (t).

    Since each feedback response has an independent timescale, what you actually have due to some incremental change in forcing agent i at time t_O is

    \lambda_i(t_O + \Delta t) = \sum_j \lambda^{equil}_{i,j}\left(1 - \exp(-\Delta t/\tau_j) \right).

    The second equation then becomes

    \Delta T(t) = \left( 1 - \dfrac{N(t)}{R_{total}(t)} \right) \sum_i \left[ \dfrac{R_i(t)}{\lambda_{Planck} + \sum_j \lambda_{i,j}(t)}\right].

    At least I think this is about right – there presumably has to be some way of incorporating the second-to-last equation into the final equation, but I haven’t quite worked that bit out. However, the bottom line is that this all means that the response can depend on the timescale considered.
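    A one-box numerical sketch of that bottom line (toy parameter values mine, not Goodwin’s): let the total feedback relax from a strong short-term value toward a weaker equilibrium value, and apply the standard energy-budget estimator at different times.

        import math

        F2x = 3.7       # 2xCO2 forcing, W m-2
        lam0 = 3.2      # short-term (Planck-dominated) feedback, W m-2 K-1
        lam_eq = 1.2    # long-term feedback once slow feedbacks act, W m-2 K-1
        tau = 50.0      # timescale on which the feedback weakens, yr
        C = 8.0         # effective heat capacity, W yr m-2 K-1

        dt, T, t = 0.05, 0.0, 0.0
        while t < 1000.0:
            lam = lam_eq + (lam0 - lam_eq) * math.exp(-t / tau)
            N = F2x - lam * T                     # top-of-atmosphere imbalance
            T += (N / C) * dt
            t = round(t + dt, 10)
            if t in (10.0, 100.0, 1000.0):
                ecs_eff = F2x * T / (F2x - N)     # energy-budget effective sensitivity
                print(f"t = {t:6.0f} yr   T = {T:.2f} K   inferred ECS = {ecs_eff:.2f} K")

    The inferred value climbs from near F2x/lam0 (about 1.2 K) at early times toward F2x/lam_eq (about 3.1 K) once the slow feedbacks have played out, which is the qualitative behaviour in the figure.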

  110. verytallguy says:

    OK, that makes sense.

    But

    (1) how does the paper actually calculate this? Does it simulate EBM calculations using model outputs?

    and

    (2) only fast feedbacks are included in the ECS definition – as I understand it: water vapour content, lapse rate, cloud cover, and snow and sea-ice albedo. It seems implausible that any of these are so significantly different between 10 and 100 year timescales as to cause the difference shown in the figure?

    I’m obviously missing something here.

  111. (1) I think the feedback responses and timescales come from the CMIP5 models.

    (2) I’m not sure about this, but there are also some cloud feedbacks that operate on multi-decade timescales.

  112. verytallguy says:

    there are also some cloud feedbacks that operate on multi-decade timescales.

    Interesting, thanks. Still seems unlikely to me that just one part of one feedback could cause the difference in the figure on the 10 and 100 year timescales?

  113. vtg,
    I’ll have to think about that a bit more. There may be some other factors that influence that.

  114. vtg,
    Maybe this figure will help. The top shows the different feedbacks and their timescales, and the bottom shows the resulting overall feedback response against time.

  115. JCH says:

    Gregory 2002:

    Does this comport with what others were saying?

  116. ecoquant says:

    @ATTP,

    The emailed version of your otherwise excellent comment showed `formula does not parse’ messages in place of the bottom two expressions. I’d just advise readers that if they go to the Web site version of that comment, the formulas show up just fine.

    That said, I cannot testify to the correctness of the sums of \lambda_{i} expressions, at least without dropping back and studying much more material. The question is, are these really additive?
    Yes, they no doubt can be expanded in a set of additive terms, but do these correspond to the individual feedbacks? See Bony et al. (2006) for one review, for instance. Cloud physics were always a deep mystery to me.

    See also, for instance, the discussion of ice-albedo feedback in Prof Ray Pierrehumbert’s Principles of Planetary Climate, and Stocker’s IPCC-related discussion, and, in particular, its Figure 7.5.

    A direct dive into the question of linearity is offered by Colose from NASA GISS. In addition to showing the couplings, Colose links to a lecture by none other than Professor RayP, which I intend to watch.

  117. angech says:

    Hit the enter key too early, sorry. This is re both VTG’s and ATTP’s comments above.
    “the constrained estimate of climate feedback quickly decreases to 1.9±0.3 W m-2 K-1 on a response timescale of 0.1 years, and then slowly decreases further to around 1.5±0.3 W m-2 K-1 and 1.3±0.3 W m-2 K-1 on response timescales of 10 years and 100 years”
    As this feedback is the denominator, the ECS derived from it blows out quickly over the 10- and 100-year timescales.

    ATTP “the ECS you infer depends on the timescale considered. It suggests that if you consider decadal timescales, the ECS will be biased low, and that on century timescales, the ECS has a likely (66%) range that goes from just above 2K to around 3.5K, with a best estimate of about 3K.

    “The equilibrium climate sensitivity (ECS) refers to the equilibrium change in global mean near-surface air temperature that would result from a sustained doubling of the atmospheric (equivalent) carbon dioxide concentration (ΔTx2). It is better characterized as a ‘near steady state.’”
    “A related quantity, the Earth system sensitivity (ESS), can be defined which includes the effects of slower feedbacks, such as the albedo change from melting the large ice sheets that covered much of the northern hemisphere during the last glacial maximum. These extra feedbacks make the ESS larger than the ECS — possibly twice as large.”
    Not sure re VTG’s points and your reply.

    The aim is to derive a modified energy balance equation

  118. dikranmarsupial says:

    @ecoquant IIRC Munshi regularly gets confused by looking at correlations in data that have been differenced or accumulated and not realizing the correlations are insensitive to the average values of the signals (which is what a linear trend ends up being after differencing). Basically the same error as Salby. Salby at least has the sense not to respond when people point out his error.

  119. angech,
    That’s not really the aim.

    dikran,
    I think “confused” is a polite way of putting it.

  120. dikranmarsupial says:

    I try, but sometimes fail, to be polite ;o)

  121. Dave_Geologist says:

    It is not that hard Steven

    “Prior to Nic, EBM gave answers that comported with others,”

    Because they did it right. Then Nic did it wrong and got a smaller number (than most). But still within the range; so why was there a range, if everyone else prior to Nic gave answers that comported with the others? Then more people did it right and showed why Nic was wrong.

    Now that all that new work has been done, do you agree that ECS is most likely 3C and that a value as low as 1.5C is no longer credible?

  122. niclewis says:

    ATTP wrote:

    “And yet their model largely matches the observed warming. Maybe that’s because it’s only been decades, rather than centuries.”

    The reason why their model matches the observed warming is very simple. They carried out 10^7 simulations using a range of values for all the key parameters in their model, with the spread in those relating to climate feedbacks corresponding to that of the CMIP5 model ensemble. Then they threw out all the simulations that didn’t match observations, of warming and other variables. So, the remaining simulations (which provide their posterior ECS estimates and model-simulated warming) are bound to largely match the observed warming.

    ATTP also wrote:

    “The reason this paper gets a different results to Lewis & Curry is that it tries to account for the things that Lewis & Curry largely ignore (time dependencies, efficacies, etc)”

    That is incorrect. Lewis & Curry (2018) allow for efficacies, where shown or stated in IPCC AR5 to differ from one. Lewis & Curry also allow for the possibility of longer-term time dependent feedbacks, using information from CMIP5 models. Short term (sub-annual) time dependency of feedbacks will have a negligible effect on energy budget ECS estimates such as those in Lewis & Curry.

    The main reason why Goodwin et al get different results from Lewis & Curry is very probably because their prior for effective climate sensitivity comes from separate feedbacks in the CMIP5 ensemble, differing in their assumed timescales and implying that effective climate sensitivity increases over time, eventually to well above 3 C. Their model has sufficient free parameters for some combinations to be able to match all their observational constraints while also preserving the increasing-over-time sensitivity built into their prior.

    There is just not enough information in the global historical record to provide useful estimates of sensitivity on timescales much longer than that of the historical period. And sensitivity estimated on a timescale of weeks or a year, even if valid, is irrelevant for practical purposes such as projecting future climate change.

  123. NL said:

    ” They carried out 10^7 simulations ……. Then they threw out all the simulations that didn’t match observations, of warming and other variables. “

    That’s obviously a case of making things up.

  124. RickA says:

    Paul Pukite:

    They threw out all of the 10,000,000 simulations except for 4600, which matched observations:

    “To check which of the 10 million simulations were most realistic, I checked each simulation against observations of warming in the atmosphere and ocean up to the present day. I kept only the simulations that agreed with the observations for the real world.

    This left 4600 simulations, where the values of the climate sensitivity (and changes in climate sensitivity over different timescales) agree with the atmosphere and ocean warming observed so far. It is from these final 4600 simulations that I evaluate how the climate sensitivity evolves over time.”

  125. ecoquant says:

    @RickA,

    Did they use an ABC framework? Or something more ad hoc?
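    For what it’s worth, the history-matching step as described reads like plain rejection ABC. A minimal sketch of that idea (the forward model and numbers are my toy stand-ins, not Goodwin’s):

        import numpy as np

        rng = np.random.default_rng(1)

        def forward_model(lam):
            # stand-in forward model: warming implied by feedback lam under fixed forcing
            return 2.5 / lam

        obs, tol = 1.0, 0.02                              # "observed" warming and tolerance
        prior = rng.uniform(0.5, 4.0, size=10_000_000)    # 10^7 prior draws, as in the paper
        kept = prior[np.abs(forward_model(prior) - obs) < tol]

        print(f"kept {kept.size} of 10,000,000 draws")
        print(f"posterior for lam: {kept.mean():.2f} +/- {kept.std():.2f}")

    The retained draws are samples from the prior conditioned on matching the observations, which is why “threw out the rest” is simply how the posterior gets computed, not a sleight of hand.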

  126. Everett F Sargent says:

    At least all the source code and data will be available (C++) in the final published version for anyone else to check the author’s work. Perhaps that will help NL to see where he has been wrong all these years. :/

    I would have picked the 4600 that were biased the LOWEST relative to the observational record. No make that the 4600 that were biased the HIGHEST relative to the observational record. No, on 3rd thought, I might pick those 4600 that satisfied some form of RMSE based on several observational criteria, oops that’s what the author actually already did.

    Similar to Goodwin (2018) …
    Pathways to 1.5 °C and 2 °C warming based on observational and geological constraints
    https://www.nature.com/articles/s41561-017-0054-8
    https://static-content.springer.com/esm/art%3A10.1038%2Fs41561-017-0054-8/MediaObjects/41561_2017_54_MOESM3_ESM.cpp
    https://static-content.springer.com/esm/art%3A10.1038%2Fs41561-017-0054-8/MediaObjects/41561_2017_54_MOESM4_ESM.cpp

  127. Everett F Sargent says:

    Similar to Goodwin (2018) …

    Click to access 2018-Goodwinetal-2018.pdf

    (actually not RMSE but something reasonably analogous, see Equation 3 in the Methods section)

  128. Chubbs says:

    Agree with Everett: like EBM, this is an observation-based method using a simple energy-balance model, but with time-varying feedback allowed. Per the paper, a “very large initial Monte Carlo model ensemble” is constructed. There is no negative connotation to selecting the 4600 that best match observations.

  129. Nic,

    The reason why their model matches the observed warming is very simple. They carried out 10^7 simulations using a range of values for all the key parameters in their model, with the spread in those relating to climate feedbacks corresponding to that of the CMIP5 model ensemble. Then they threw out all the simulations that didn’t match observations, of warming and other variables.

    Yes, I know what they did. My comment was a response to TE who seemed to be claiming that the results were not consistent with observations.

    That is incorrect. Lewis & Curry (2018) allow for efficacies, where shown or stated in IPCC AR5 to differ from one.

    The word “largely” was there for a reason.

    There is just not enough information in the global historical record to provide useful estimates of sensitivity on a longer timescale to that equating to the historical period.

    I’m not quite sure what you’re getting at here, but if you’re suggesting that we should be careful of using observationally-based estimates to make strong statements about warming over much longer timescales, then I would agree. I’m just surprised that you seem to be making this argument.

  130. Dave_Geologist says:

    Then they threw out all the simulations that didn’t match observations, of warming and other variables.

    Which is why all the simulations from all the CMIP runs plot on top of each other and make one thin black line. Oh hang on a minute, AFAICS they don’t. A puzzler, isn’t it?

    The main reason why Goodwin et al get different results from Lewis & Curry is very probably because their prior for effective climate sensitivity comes from separate feedbacks in the CMIP5 ensemble

    Or perhaps it’s because they eliminate physically unrealistic but rhetorically useful (“objective”) cases from their prior, because, well, Then There’s Physics. And because posteriors which invoke a physically unrealistic prior are likely to be, well, physically unrealistic. And even if valid, “irrelevant for practical purposes such as projecting future climate change”.

  131. RickA said:

    “They threw out all of the 10,000,000 simulations except for 4600, which matched observations:”

    Note the loaded phrase “threw out” that NL used. Consider a case of matching a model to observations using an iterative search algorithm. The search can test millions of combinations on its path to finding a best fit. That’s just a process of elimination.

  132. Dave_Geologist says:

    The last thing a lukewarmer wants is to match observations. They’re so…. inconvenient.

  133. Everett F Sargent says:

    I’m thinking it takes only one paper. Maybe that one paper has already been written?

    Assume things are constant in time (whatever things NL has assumed are constant in time). Now assume that those assumed constant things are not really constant in time. ECS is assumed to be constant given enough time, ~several hundred years to a new equilibrium (but technically forever, given an e-folding time, or half-life, as in an exponential decay).
    https://en.wikipedia.org/wiki/E-folding

    I’m thinking that Goodwin should have taken their experiment out to 10^3 (1E3) years (or 5E2 years). Perhaps Goodwin was limited by the temporal extent of the CMIP5 experiments (most stop at 2100, fewer go to 2300, and even fewer beyond that).

    Anyways, the basic argument, as I see it, is that multi-decadal to multi-centennial processes are not fully captured in the observational record, any observational record, if that record is shorter than, say, 10-100X the e-folding time of all ECS processes (or, technically, any process that is not instantaneous, unlike the Planck feedback, which is instantaneous per the current Goodwin paper).

    But I could be wrong in assuming that stochastic metrics, which extend to the time (or, conversely, the frequency) span of our current observational records, cannot be extrapolated beyond that record without including the well-known “bell-shaped” flare of statistical uncertainties.

    Or some such gibberish. 😉

  134. Chubbs says:

    Goodwin et al use 3 separate surface temperature periods for fitting:
    1850-1900 —> 1985-2006
    1950-1960—->2007-2016
    1970-1980—->2007-2016
    This gives heavier weight to more recent obs than standard EBM, and allows response timing and aerosol effects to be better accounted for. The rapid warming since 1970, well predicted by climate models (after accounting for SST blending and obs coverage), is inconsistent with low ECS.

  135. ecoquant says:

    To some degree, this is, to me, a tempest in a teacup. Let’s suppose there is some probability mass assigned to low ECS ranges. The question is how big it is, not whether it exists. It’s as if the existence of some mass there means the value has to be there. Many parties, from my perspective, on all sides, are still chasing the idea that ECS is a “true point value” and that the uncertainty envelopes draped about it in all these estimates are due to ignorance. That’s hardly the case.

    These are stochastic systems. If one could, and one can’t, run the “Earth experiment” from identical initial conditions (assuming these could be realized) many times, there’s every reason to believe they would each end up in different states, even if forced by identical conditions.

    Facts are, the overall probability mass associable to low ECS is small. There’s no loss associated with those future outcomes, apart from the investment in mitigation efforts which might not be needed, but which themselves have benefits beyond mitigation. (An EV is a superior vehicle. Period. 500,000 mile lifetime?) There is also a probability mass associated with +8C, +10C ECS. It is also small. But, if it were realized, at the least civilization would end. There might be worse consequences. Fast marching arguments suggest that’s where effort should be allocated to clarify.

    Note there’s an inherent confounding between measurement and calculation uncertainties, and specification errors, meaning that, to enable calculability, we collectively simplify deeply nonlinear systems and so miss mechanisms and features which, under peculiar conditions, might matter. That’s what’s out there at +8C, +10C, too. Prof RayP’s talk is one attempt to parry these away as unrealistic. But his is just one, albeit highly educated, view.

    In short, for me, no serious assessment of ECS can be done without considering the losses associated with the outcomes. We don’t need an integrated assessment model to do that, unless we want to provide welfare for economists. It’s clear that +8C is incompatible with anything we know. +5C might be as well. Estimates of what’s out there always depend upon how much the observer cares and where they care. But, hey, I’m a statistician and I would think that.

  136. angech says:

    Good to see Nic Lewis prepared to comment here and set the record straight.
    “The main reason why Goodwin et al get different results from Lewis & Curry is very probably because their prior for effective climate sensitivity comes from separate feedbacks in the CMIP5 ensemble, differing in their assumed timescales and implying that effective climate sensitivity increases over time, eventually to well above 3 C. Their model has sufficient free parameters for some combinations to be able to match all their observational constraints while also preserving the increasing-over-time sensitivity built into their prior”
    As someone said earlier
    “Nice paper. Main benefit is building on other recent papers to continue the process of resolving differences between climate models and the energy balance method (EBM).”
    “As long as EBM approach gave the same answer no one was curious enough to explore certain questions.”
    Selective auditing of which models to use in a study is a fraught field.
    All the models ran on similar ideas, some would obviously not match the observations, only luck of the draw that some did.
    The dynamics of the majority of the models did not change, did they?
    The best way to settle it would be to do a run of all models and get the true all model ECS, obviously a lot higher and quote that as a comparison to what this small subgroup of models did.
    Even so the fact that for a small subgroup the outputs were lower than expected on the short run does not make up for the fact that they all will go up higher than observations when run for any length of time.
    I would have liked Nic to make some comment on changing the 12 to 48 times a year input recalculations and if in his opinion this would cause an increase in the expected rate of warming and a higher ECS.

  137. dikranmarsupial says:

    angech wrote “The best way to settle it would be to do a run of all models and get the true all model ECS, obviously a lot higher and quote that as a comparison to what this small subgroup of models did.”

    Please stop pretending to be an expert in climate models angech (hint: ask why something isn’t done in such and such a way, rather than asserting that it should). There is no such thing as a “true all model ECS”. For a start, the models have parameterisations, so if you want to determine the “true all model ECS” you would first have to establish a prior belief over the parameter space and marginalise over the parameters (and indeed establish a prior over the models themselves). This of course is computationally infeasible (c.f. “perturbed physics” experiments), which is why the IPCC describe it as an “ensemble of opportunity”, to point out that these structural (and other) uncertainties exist.

    “Good to see Nic Lewis prepared to comment here and set the record straight.”

    So how do you know that Nic is setting the record straight (certainly straight in his opinion, but that doesn’t imply that it is objectively [sic] straight)?

  138. Dave_Geologist says:

    I would have liked Nic to make some comment on changing the 12 to 48 times a year input recalculations and if in his opinion this would cause an increase in the expected rate of warming and a higher ECS.

    No need angech. I’m pretty sure Nic Lewis is well aware of the difference between compound interest and solving ordinary differential equations, let alone the partial D.E.s you need in climatology.

    Did you read my comment on the Tangent Problem? Or watch the whole of the linked video if you’re unfamiliar with differentiation? I can’t think of a simpler place to start. It’s generally how calculus is introduced in high school maths classes, page 3 of chapter 1 in my old textbook. Generations of maths teachers have obviously concluded that it has good explanatory power, and it requires no more grounding than y = mx + c, plus the perhaps-abstruse concept of there being a limit to the gradient as the x-increment tends to zero.

  139. verytallguy says:

    Dikran,

    don’t waste pixels responding to bullshit!

    A suggestion: instead, comment on Nic’s focus on the use of priors, which I don’t understand at all.

    The main reason why Goodwin et al get different results from Lewis & Curry is very probably because their prior for effective climate sensitivity comes from separate feedbacks in the CMIP5 ensemble, differing in their assumed timescales and implying that effective climate sensitivity increases over time, eventually to well above 3 C. Their model has sufficient free parameters for some combinations to be able to match all their observational constraints while also preserving the increasing-over-time sensitivity built into their prior.

    Can you explain the difference between the prior used by Lewis and Curry and the prior used in this work?

  140. dikranmarsupial says:

    VTG I’d need to read it and think about it in more detail; however, it seems somewhat ironic to blame the prior after defending the Laplace prior, which obviously contradicts our state of prior knowledge. Unfortunately term has started, so I don’t have time at the moment (which, you are right, I shouldn’t waste responding to angech).

  141. verytallguy says:

    Understood DK, your comments here are appreciated.

    Angech is not merely wrong, he’s *determined* to be wrong. Your time is too precious for that.

  142. Chubbs says:

    vtg – I don’t understand NL’s prior comment either. Goodwin et al have a more complicated energy balance model than L&C, with more parameters to be fitted. Also, they are using more observations than L&C. To me that is the likely root cause of the differences. Seems strange to criticize both the prior and the fact that 10^7 ensemble members are generated prior to downselecting.

  143. angech says:

    Dave_Geologist says:
    “No need angech. I’m pretty sure Nic Lewis, a true expert in climate models, is well aware of the difference between compound interest and solving ordinary differential equations, let alone the partial D.E.s you need in climatology.”

    Good. The fact that he will make no further comment should prove all of you right. He does seem to imply that the CMIP5 ensemble has separate feedbacks differing in their assumed timescales implying that effective climate sensitivity increases over time, eventually to well above 3 C.
    He must be wrong because we all know that ECS is an emergent property not programmed into models.
    The fact that the emergent ECS increases over time until it matches what we expect is purely incidental and factual.
    “Did you read my comment on the Tangent Problem?” Yes. and Thanks.

  144. JCH says:

    It’s real simple. For ECS to be captured by observations to date, the freakin’ wind has to blow abnormally hard for most of the time. The second it stops blowing, we get a horrendous snapback warming that erases the stall and then some. In the mid-20th century the stall was an actual fall in the GMST: a decline. In the pause, during which there was a much higher level of atmospheric CO2, the stall was about 6 years of actual decline – 2006 thru 2013. Barely a flea on an elephant’s butt.

    angech – stare at 1945 to 1970 and then stare at its 2nd coming, 2006 thru 2013. You’re a religion. You’ll like the south seas, bones in their noses, witchdoctors who have built fake airports in the hopes that 1945 to 1970 is going to happen again. It happened again: 2006 thru 2013. It barely made a dent. The negative phase of the PDO is a pussy cat. Meow, purr purr. Skeptics are the 2nd coming of Feynman’s cargo cult. Tsonis: waiting for it to happen again. Curry: waiting for it to happen again. Pray:

    Or come to your senses and look at the aqua PDO from the 1980s to 2014. It’s sinking. That is the negative phase of the PDO, only with an ACO2 life jacket. It has three pronounced upward humps, all of which caused record highs in the GMST. But overall, it had a downward influence on the GMST, progressively less upward assist and ending with extreme downward pressure during the “PAWS”.

    Up next, assuming it is a cycle, which is in doubt, maybe the POSITIVE phase. A huge surge in global warming.

    During the period 1970 to 2012, models showed a decline in wind. In actuality wind increased. This increased EB upwelling in the Pacific, a vast area of the earth’s surface, which resulted in clouds that reflected a lot of SW. When that went away we got a boatload of SW warming, post 2013, almost instantly. Call it the positive phase of the PDO. Whatever, it’s big. It’s powerful. It’s fast.

    What happens if we get an abnormally long period of low winds along the equatorial Pacific? Where will angech et al be then? LMAO. Claiming they knew the whole time.

    And oh yeah, abrupt climate change means it can also go way up, abruptly. Especially if the system is quite sensitive to natural variability, which it appears that it is.

  145. JCH says:

    Speaking of the equatorial Pacific, Niño 3.4 is on a wild ride:

  146. Dave said:

    “I’m pretty sure Nic Lewis is well aware of the difference between compound interest and solving ordinary differential equations, let alone the partial D.E.s you need in climatology.”

    The fun is in attempting to separate the temporal and spatial parts of the DiffEqs so that standing-wave modes of various climate behaviors can be modeled. Solving Navier-Stokes for a constrained or reduced topological structure is what the condensed-matter faction of climatology (Marston, Wettlaufer, and others) is targeting.

    Wettlaufer: “There is a vast gulf, both conceptually and in terms of space and time scales, between simulations and idealized models. Attempts to reconcile them will have to focus on the problem of scales, a task well suited to physicists: The challenge of scale separation in both condensed matter and particle physics led to the development of the renormalization group, unifying concepts in previously disparate fields [5]. Renormalization group concepts and methods have been successfully applied to fluid dynamics problems [6,7], which are central to climate dynamics.”

  147. angech says:

    Re time steps
    “The time-stepping scheme that is widely used in weather and climate models is the second-order centred-difference scheme, which is affectionately known as the leapfrog scheme. It is widely used because it is easy to implement, computationally inexpensive, and has low run-time storage requirements. Unfortunately, in addition to the physical solution of the governing PDEs, the leapfrog scheme also admits a spurious computational mode that is manifest as a growing 2-delta-t oscillation.”

  148. angech says:

    JCH
    “Skeptics are the 2nd coming of Feynman’s cargo cult. Tsonis: waiting for it to happen again. Curry: waiting for it to happen again. abrupt climate change means it can also go way up, abruptly. Especially if the system is quite sensitive to natural variability, which it appears that it is.
    It’s real simple. For ECS to be captured by observations to date, the wind has to blow abnormally hard for most of the time. “

    Feynman, despite skeptics quoting him, is all right, isn’t he? AGW accepts his views as well.
    Yes, if skeptics are wrong they are in a cargo cult. People in a cargo cult are, by definition, presumably incapable of seeing this despite the best efforts of enlightened people.
    The term skeptic has been downgraded by applying it to people who are really just anti-AGW in many different ways.
    I am as skeptical of some of those claims as I am of those from the AGW camp.
    Nonetheless I would be more than happy to be included in the Curry camp theme.
    Thanks for your graphs, I get a better picture of what you are trying to educate people about with PDO from that, but still not enough to discuss /debate it with you.
    Will keep listening instead.
    And darn that 3.4 Niño rise.

  149. Everett F Sargent says:

    RE: time steps (context matters)
    http://www3.imperial.ac.uk/newsandeventspggrp/imperialcollege/naturalsciences/mathematics/mathsseminars/eventssummary/event_19-4-2016-11-13-32

    “Finding accurate numerical methods for stepping forward in time in weather and climate prediction models is challenging. We want atmospheric and oceanic waves to move at their correct speeds. We also want the growth and decay of weather patterns and ocean instabilities to be based on the physical processes rather than the numerical methods. The challenge is made more complicated by the broad spectrum of time-scales of atmospheric and oceanic processes, ranging from those associated with transport by the wind and currents to those of acoustic waves.

    The time-stepping scheme that is widely used in weather and climate models is the second-order centred-difference scheme, which is affectionately known as the leapfrog scheme. It is widely used because it is easy to implement, computationally inexpensive, and has low run-time storage requirements. Unfortunately, in addition to the physical solution of the governing PDEs, the leapfrog scheme also admits a spurious computational mode that is manifest as a growing 2-delta-t oscillation. For over 40 years, the solution has been to apply the Robert-Asselin filter. The filter successfully suppresses the computational mode, but it also weakly damps the physical mode and reduces the accuracy to first-order.

    This talk will review several simple modifications to the filtered leapfrog scheme. The modifications will be shown to increase the amplitude accuracy from first-order to third-order and even seventh-order, without sacrificing the phase accuracy, stability, or computational expense. The modified filter has become known as the RAW filter. The proposed new schemes have been tested in numerical integrations of simple nonlinear systems and comprehensive general circulation models. They are now used in many weather and climate models and have, for example, significantly increased the skill of medium-range weather forecasts. Therefore, they appear to be attractive alternatives to the conventional implementation of the leapfrog scheme.”
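    To make the computational-mode point concrete, here is a toy leapfrog integration of the oscillation equation dy/dt = i*omega*y with a Robert-Asselin filter (the standard test problem; parameter values are mine, not from any weather or climate model):

        import numpy as np

        omega, dt, nsteps = 1.0, 0.2, 500
        nu = 0.05                                   # Robert-Asselin filter coefficient

        y = np.zeros(nsteps, dtype=complex)
        y[0] = 1.0
        y[1] = np.exp(1j * omega * dt)              # exact first step

        for n in range(1, nsteps - 1):
            y[n + 1] = y[n - 1] + 2.0 * dt * 1j * omega * y[n]    # leapfrog step
            y[n] += nu * (y[n + 1] - 2.0 * y[n] + y[n - 1])       # filter damps the 2*dt mode

        print("final amplitude |y| (exact answer is 1):", abs(y[-1]))

    With nu = 0 the spurious 2-delta-t mode is undamped; with the filter it is suppressed, at the cost of weakly damping the physical mode and reducing the accuracy to first order, which is exactly the trade-off the RAW filter improves on.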

  150. Dave_Geologist says:

    angech, that’s basically what I described in my third paragraph, except instead of zig-zagging either side of the curve, you take the average of the two gradients and apply it at the central point. A concern would be that where you have a continuously increasing or decreasing gradient, you could still drift away from the curve, just not so fast. I believe the “second order” bit means that you also consider the second derivative to introduce a correction for the degree of curvature. Since I was involved in sedimentary basin modelling, where most parameters were continually increasing or decreasing, I was more concerned about systematic drift or bias than about wave-like instabilities, and about the end-point of a particular run than the intermediate points (or rather, I might have thousands of intermediate steps but only plot every tenth or whatever, so minor oscillations were unimportant at the multi-step plotting scale). Given that the best I had to work with was a PDP11/70 (actually an 11/45 with memory upgraded to 11/70 spec), my idea of “computational expense” was probably rather different from that of a modern programmer 🙂 .

    Anyway, it looks like climatologists are well aware of the issue and are on top of it. As you’d expect.

    BTW, do you accept that compound interest was a red herring?

  151. What you do is take an analytic closed-form solution to Navier-Stokes and use it as a check on the numerical scheme for solving the DiffEqs. If the numerical solution converges to the analytical solution, then you know the numerical iteration scheme is precise enough.
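    A minimal instance of that verification workflow, using the 1-D heat equation and its closed-form Gaussian solution as the stand-in benchmark (grid and numbers are mine):

        import numpy as np

        D, dx, dt = 1.0, 0.1, 0.004     # dt below the dx**2 / (2 D) stability limit
        x = np.arange(-5.0, 5.0, dx)
        t0, t1 = 0.5, 1.0

        def exact(t):
            # closed-form Gaussian solution of du/dt = D d2u/dx2
            return np.exp(-x**2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

        u = exact(t0)
        for _ in range(int(round((t1 - t0) / dt))):
            u[1:-1] += D * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])   # explicit step

        print("max error vs closed form:", np.abs(u - exact(t1)).max())

    If the printed error fails to shrink as dx and dt are refined, the numerical scheme, not the physics, is the suspect.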

  152. Dave_Geologist says:

    I’m sure that works for some situations Paul. It didn’t for my basin modelling one. I did check simple closed-form cases (with constant-density basin fill) and it converged closely enough. But for real-world cases with compacting sediment fill, for which there were no closed forms, it didn’t. Fortunately it exhibits time-reversal symmetry. Indeed reverse modelling with decompaction is more common than forward modelling with compaction, because you want to track the basin back through time and determine useful stuff like when hydrocarbons were expelled. I realised I had a problem when the backward runs didn’t overlie the forward runs. The drift was either side of the “true” curve.

  153. dikranmarsupial says:

    “Given that the best I had to work with was a PDP11/70 (actually an 11/45 with memory upgraded to 11/70 spec), my idea of “computational expense” was probably rather different from that of a modern programmer “

    And you try telling the young people of today that, … they won’t believe you!

    Started with a Jupiter Ace myself, but first “academic” computer was a Sun 3/50 ;o) There is a lot to be said for starting out with limited computing power, so you learn how to do things efficiently.

    “Anyway, it looks like climatologists are well aware of the issue and are on top of it. As you’d expect.”

    Indeed, Steve Easterbrook’s video discussed here is well worth watching. The people who write climate models know what they are doing and use the software engineering methods most appropriate for the task they are performing (and bringing in software engineers won’t help much because they won’t have the maths/physics background to understand what is being coded up). It never ceases to amaze me, though, the hubris of blog commenters who think they know better.

  154. Dave said:

    “I’m sure that works for some situations Paul. It didn’t for my basin modelling one. I did check simple closed-form cases (with constant-density basin fill) and it converged closely enough. But for real-world cases with compacting sediment fill, for which there were no closed forms, it didn’t. “

    I’m sure it won’t work for situations where, as you say, “there were no closed forms”, since a closed form is exactly what is required to apply the approach I suggested.

    This approach also works for stochastic models: inverting a CDF to generate a sampling distribution and then running a Monte Carlo long enough to check that it matches the analytical form.
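    A minimal sketch of that check (textbook inverse-transform sampling; the exponential distribution is my stand-in example):

        import numpy as np

        rng = np.random.default_rng(2)
        lam, n = 1.5, 1_000_000
        u = rng.uniform(size=n)
        x = -np.log(1.0 - u) / lam     # invert the exponential CDF F(x) = 1 - exp(-lam x)

        print("sample mean:", x.mean(), " analytic:", 1.0 / lam)
        print("sample var: ", x.var(), " analytic:", 1.0 / lam**2)

    If the Monte Carlo moments (or a histogram) fail to converge to the analytic values, the sampler, not the model, is at fault.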

  155. JCH says:

    I worked for accountants who started in the profession before there were adding machines.

  156. Dave_Geologist says:

    I should have been clearer Paul. What I meant was that the evolution of the non-closed-form-but-realistic cases might have (in my case, did have) second and third derivatives (or even singularities, although not in my case) which are sufficiently different from the closed-form cases that you can’t assume the minimum required step size will be the same. Sometimes you have to test the realistic case, by brute force if nothing else is available. My forwards-backwards test wouldn’t work for anything chaotic, for example, and in a case where natural variability is large compared to the forcing, you might be bedevilled by numerical oscillations vs. the “real” oscillations you’re trying to simulate. A better way of expressing why that didn’t matter for me (but obviously does for climate models) would be to say that my forcings vastly exceeded any natural variability. A gazillion tonnes of lithosphere moving faster than your fingernails grow will do that 🙂 .

    The other “trick” I used (no, not that kind of trick) was to specify extension in a dimensionless form which simplifies the maths (but everyone does that, so not my invention!). Even nowadays, it’s normal in structural geology and geomechanics to work in a rotated reference frame which diagonalises or symmetrises the tensors. So what if you’ve got a 64-bit mainframe that crunches any numbers in seconds? I want to run it on my desktop. Or my laptop. Or an iPad. Or a phone.

    None of which, of course, means you can appeal to “the model being wrong” because there are known solver issues which have to be confronted and worked around. Climate modellers know about this stuff, as per Everett’s full abstract quote, probably far more than the authors of the relevant Wikipedia article. The old saying “all models are wrong, but some are useful” applies. A spherical cow has a closed-form solution for the density, given mass and diameter. But a spherical cow can only roll, not walk. Horses (or cattle 🙂 ) for courses.

  157. Dave_Geologist says:

    And you try telling the young people of today that, … they won’t believe you!

    Interestingly, back in the day my boss didn’t believe me. He kept telling me to make more use of virtual memory (swap, which had to be hard-coded and was not accessed dynamically like on a modern PC), and I tried but it slowed things down. He kept insisting I was just unlucky with my runs and there were too many concurrent users. From the PDP Wiki page, I think I now understand why.

    PDP-11/70 – The 11/45 architecture expanded to allow 4 MB of physical memory segregated onto a private memory bus, 2 kB of cache memory, and much faster I/O devices connected via the Massbus.

    Because it was an upgraded 11/45, the physical memory would have been on the new, fast bus but the swap space on the older, slower bus. So doubly hampered compared to main memory.

  158. dikranmarsupial says:

    I once had vaguely similar problem, I wanted to buy more memory for my computer (essentially so I could cache more partial computations, which would be frequently accessed) and was told I should look for algorithmic improvements (the caching was the only algorithmic improvement available).

    There are good aspects and bad about modern computing. This year I have written some optional lab exercises for my first year programming course, for students that already had some programming experience, on (retro-)game programming (Pong, Asteroids, Space Invaders, that kind of thing). Asteroids was rather hard to implement in the late 1980s/early 90s, much easier now, but still difficult enough to be an entertaining/educational activity for those learning to program.

  159. JCH says:

    Top-of-Atmosphere Earth Radiation Budget Variability During and After the 2014-2016 El Niño Event

    Prior to 2014, the CERES record (March 2000 onwards) is dominated by a cool phase of the Pacific Decadal Oscillation (PDO) with three significant La Niña events (1998-2001, 2007-09 and 2010-12) and four relatively weak El Niño events (2002-03, 2004-05, 2006-07 and 2009-10) occurring. During 2013, near-neutral conditions persist over the Pacific Ocean throughout the year, followed by a gradual build-up of El Niño conditions in 2014, which reach maximum strength during late 2015—early 2016. By mid-2016, the ENSO index returns to neutral conditions. The 2014-2016 El Niño event is considered to be one of the three strongest El Niño events since 1950, and also coincides with a return to the warm phase of the PDO. Anomalies in CERES outgoing longwave radiation (OLR) and outgoing shortwave radiation (OSR) exhibit remarkable behavior during and after the 2014-2016 El Niño event. OLR anomalies steadily increase after 2013, reaching 1.8 Wm-2 in early 2016. In contrast, OSR anomalies show a steady decrease, and reach a minimum of -2 Wm-2 only in January 2017, one year after the peak of the El Niño. Interestingly, the magnitudes of OLR and OSR anomalies remain appreciable through July 2017, the latest CERES month processed at the time of this writing. However, because OLR and OSR anomalies are of opposite sign, the magnitude of anomalies in net TOA flux are smaller, but are generally positive during and following the 2014-2016 El Niño event. To confirm that these remarkable variations in TOA radiation are robust, this presentation compares TOA flux anomalies during and after the 2014-2016 El Niño event from CERES EBAF Ed4.0 and other satellite and reanalysis datasets. We examine how anomalies amongst individual CERES instruments processed independently (Terra, Aqua and SNPP) compare with one another and with EBAF Ed4.0 during this period. Global mean net TOA flux anomalies are also compared with Argo-based ocean heating rates for the top 700 m and 2000 m ocean layers. Finally, we decompose the global and regional TOA flux anomalies into individual components that contribute to TOA flux variability during the 2014-2016 El Niño event in order to better understand what are the main drivers of the SW and LW TOA flux variability.

  160. ecoquant says:

    @Dave_Geologist,

    Ah, yes, those days … Overlays, with variables shared between them in COMMON statements. My introduction to “high technology” was an IBM 1620 in 1964 (yes, I was 12), then an IBM 1401, then a few years on an IBM 1130, all in FORTRAN.

  161. John Hartz says:

    For the “I kid you not!” file…

    Last month, deep in a 500-page environmental impact statement, the Trump administration made a startling assumption: On its current course, the planet will warm a disastrous 7 degrees by the end of this century.

    A rise of 7 degrees Fahrenheit, or about 4 degrees Celsius, compared with preindustrial levels would be catastrophic, according to scientists. Many coral reefs would dissolve in increasingly acidic oceans. Parts of Manhattan and Miami would be underwater without costly coastal defenses. Extreme heat waves would routinely smother large parts of the globe.

    But the administration did not offer this dire forecast as part of an argument to combat climate change. Just the opposite: The analysis assumes the planet’s fate is already sealed.

    The draft statement, issued by the National Highway Traffic Safety Administration (NHTSA), was written to justify President Trump’s decision to freeze federal fuel efficiency standards for cars and light trucks built after 2020. While the proposal would increase greenhouse gas emissions, the impact statement says, that policy would add just a very small drop to a very big, hot bucket.

    Trump administration sees a 7-degree rise in global temperatures by 2100 by Juliet Eilperin, Brady Dennis & Chris Mooney, Health & Science, Washington Post, Sep 28, 2018

    There has to be a rather high ECS associated with the forecast embedded in the draft EIS discussed in the above article.

  162. ecoquant says:

    @John Hartz,

    Yes, I saw this, too, and was appropriately appalled.

  163. dikranmarsupial says:

    The car is certain to run into the brick wall, its fate is sealed, so there is no point in taking your foot off the gas. 😦

  164. izen says:

    Here is the draft report, it is loooong, still skimming thru it;-

    Click to access ld_cafe_my2021-26_deis_0.pdf

    But ‘shorter’ version,
    If we keep the Obama fuel efficiency rules the Earth warms 0.05C less and oceans rise 0.02mm less than if we delay the fuel efficiency cuts.
    Temperatures are going up by >2C, sea levels by >0.7m, and extreme storms, floods and droughts are going to get worse.

  165. izen says:

    Temperatures up >2C, oceans >0.7m.
    (and the delayed fuel efficiency only increases the warming by 0.002C)

  166. kribaez says:

    Nic Lewis wrote :-
    “The main reason why Goodwin et al get different results from Lewis & Curry is very probably because their prior for effective climate sensitivity comes from separate feedbacks in the CMIP5 ensemble, differing in their assumed timescales and implying that effective climate sensitivity increases over time, eventually to well above 3 C. Their model has sufficient free parameters for some combinations to be able to match all their observational constraints while also preserving the increasing-over-time sensitivity built into their prior.”
    I would support this argument. It is applicable whether the predictive model used by Goodwin is valid or not. In other words, an invalid model fitted to GCM results with over-determined data can still produce outcomes which are superficially plausible – even if their parameter values are not – and they would indeed in this instance forcibly retain the implied time-dependent increase in effective climate sensitivity.
    However, I would add to this that there appears to be a profound conceptual error in Goodwin’s paper, which makes the abstracted (mean equilibrium) feedback estimates disconnected from any physical interpretation, and hence impossible to interpret in the way that he does. Before I try to explain this error, I will note that there are two other less profound problems with the paper.
    (1) Goodwin uses Effective Radiative Forcings (ERFs) to tie feedback to ECSeff. Use of ERF or back-extrapolation of a Gregory plot are fine for certain types of prediction under an assumption of constant feedback, but make no sense at all in this application. Definitionally, the ERF has already discounted the fast temperature feedback and atmospheric feedbacks to net flux over a timeframe of several years to a decade. Given that Goodwin here is specifically using a non-constant feedback and moreover purporting to resolve feedbacks over a period of less than one year, the use of ERF is a major inconsistency. Instantaneous forcings are required. The use of forcings which are too low, as they are in this instance, results via the temperature match in estimates of longer-term feedback which are also too low in magnitude, and importantly, since the feedback values are abstracted in the form of (R-N)/DeltaT there is no compensating cancellation of the two underestimates when R(t)/lambda(t) is projected to ECSeff.
    (2) Equation A1 is nonsense. It is probably a typo, since its application would produce mush. However, since the code is not accessible, it is impossible to know exactly what Goodwin did in his updated routine.

    Now to return to the more profound problem, which concerns the governing equation that Goodwin uses.
    To simplify the explanation, let’s assume that there is only one source of radiative forcing, and let’s start by considering the assumption of a constant feedback. If an experiment is run with a single step forcing, F, then we see a straight line relationship between N and T via the relationship:
    N(t) = F – lambda * T(t) where N and T both signify changes from time zero.
    If we are dealing with an LTI system (and most of the AOGCMs do conform to this) and we now wish to model how this system responds under an arbitrary forcing series, R(t) in Goodwin’s nomenclature, we can use superposition or convolution to determine the evolution of N and T over time.
    We find that the relationship for an arbitrary forcing series becomes:-
    N(t) = R(t) – lambda* T(t)
    In other words a plot of (R(t) – N(t)) against T(t) yields a straight line of gradient lambda.
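
    For readers who want to see this concretely, here is a minimal numerical sketch in Python (a toy one-box system with invented parameter values, standing in for no particular AOGCM):

        import numpy as np

        # Toy one-box, constant-feedback system: N(t) = F - lam*T(t), C dT/dt = N(t).
        # All parameter values are illustrative.
        lam, C = 1.2, 8.0                  # W m-2 K-1 and W yr m-2 K-1
        dt = 0.01
        t = np.arange(0, 200, dt)

        # Analytic unit step responses for the one-box model
        UT = (1.0 / lam) * (1.0 - np.exp(-lam * t / C))   # K per W m-2
        UN = 1.0 - lam * UT                               # dimensionless

        # An arbitrary forcing series, e.g. a linear ramp
        R = 0.02 * t                                      # W m-2

        # Superposition: T(t) = integral of R'(s) UT(t-s) ds, similarly for N
        Rdot = np.gradient(R, dt)
        T = np.convolve(Rdot, UT)[:len(t)] * dt
        N = np.convolve(Rdot, UN)[:len(t)] * dt

        # (R - N)/T recovers the constant lambda (away from the noisy t = 0)
        print(((R - N)[1000::5000] / T[1000::5000]).round(3))   # all ~1.2

    The same gradient comes back whatever forcing history you feed in, which is the point: with a constant feedback the relationship is history-independent.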

    Now let’s consider an alternative, but still LTI, system where the system response to a fixed forcing is a CURVE on the net flux vs temperature plot i.e. the sort of system postulated by Goodwin.
    We can write from the step-forcing experiment:-
    N(s) = F – lambda(s)*T(s). The variable, s, is still a time variable, but now it relates to the length of time since the start of the forcing. I will explain in a moment why I have introduced a new time variable. We can calculate lambda(s) unambiguously as the secant gradient drawn from the point N = F at T = 0 to the point (N(s), T(s)) at time s.
    That is
    lambda(s) = (F – N(s))/T(s) = change in net flux after time s/change in temperature after time s.

    Once again, we can find out how this system behaves in response to an arbitrary input forcing series, R(t), using superposition or convolution to compute N(t) and T(t), where the time variable, t, in this instance relates to the start of the simulation or the start of the introduction of the input forcing series. We now find that the relationship
    N(t) = R(t) – lambda(t) * T(t) DOES NOT HOLD. It is not even a good approximation for the correct solution yielded by convolution. Yet this same equation is written by Goodwin as Equation 5 and forms the basis for the development of his model.
    Now there is nothing to stop Goodwin from defining a new time-dependent variable alpha(t), say, and DEFINING the variable as
    alpha(t) = [R(t) – N(t)]/T(t), but he then needs to recognise that alpha(t) is NOT the same function as lambda(s). One can forward model N and T from R using lambda(s) and then calculate alpha(t), but not the other way round. The importance of this point is that lambda(s) may be considered a model characteristic and used as input, whereas this alpha(t) functional relationship is a history-dependent output, and hence it cannot then be related directly to feedback estimates which derive from fixed step-forcing experiments and which are the most basic source of estimates of ECS from the AOGCMs. An additional problem is that the history-dependence includes a time-weighting effect which confounds the justification for the simple partitioning proposed by Goodwin.

  167. kribaez,

    Definitionally, the ERF has already discounted the fast temperature feedback and atmospheric feedbacks to net flux over a timeframe of several years to a decade.

    No, I don’t think this is correct. The ERF accounts for some very rapid changes that don’t produce a change in global mean temperature. It doesn’t include things like water vapour, lapse rate, clouds, which are the feedbacks that Goodwin is considering.

  168. Everett F Sargent says:

    “We can calculate lambda(s) unambiguously as the secant gradient drawn from the point N=F at T = 0 to the point (N(s), T(s)) at time s.”

    Unfortunately, for you, this does not work for large time as the temperature reaches a new equilibrium or asymptote. It is simply a trigonometric relationship for the secant as delta temperature is a CONSTANT for any large value of time (e. g. 10X-100X the longest e-folding time). :/

  169. Everett F Sargent says:

    LTI
    Linear time-invariant system
    https://en.wikipedia.org/wiki/Linear_time-invariant_system

    Sounds like something that an EE would say.

    BTW, the final paper formatted and all C++ code are now available …
    https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2018EF000889

  170. kribaez says:

    ATTP,

    You wrote “No, I don’t think this is correct. The ERF accounts for some very rapid changes that don’t produce a change in global mean temperature. It doesn’t include things like water vapour, lapse rate, clouds, which are the feedbacks that Goodwin is considering.”

    I suggest that you read the detailed definition of ERF which you can find in Section 8.1.1 of AR5 WG1 Chapter 8. SST and sea ice are held fixed but other variables are allowed to adjust, including land temperatures, before the top-of-model net flux change is recorded as the forcing value.

    Or try this from the Executive Summary of Chapter 8:-

    “Whereas in the RF concept all surface and
    tropospheric conditions are kept fixed, the ERF calculations presented
    here allow all physical variables to respond to perturbations except
    for those concerning the ocean and sea ice. The inclusion of these
    adjustments makes ERF a better indicator of the eventual temperature
    response. ERF and RF values are significantly different for anthropogenic
    aerosols owing to their influence on clouds and on snow cover.
    These changes to clouds are rapid adjustments and occur on a time
    scale much faster than responses of the ocean (even the upper layer) to
    forcing.”
    ERF does not discount ALL fast feedbacks, since SST is held fixed, but it discounts enough tropospheric response to confound what Goodwin is trying to do. Its effect is to straighten out the early curvature of net flux vs temperature on a Gregory plot, reducing the magnitude of the early feedback. Ironically, because of the difficulty and expense of calculating ERF, back-extrapolation of the Gregory plot is often used as a quick and dirty method of estimating its value, thus guaranteeing a straight line start to the net-flux vs temperature relationship.

  171. kribaez,
    I’ll have to think a bit more about that, but I don’t think it invalidates what Goodwin is doing. The main factor influencing Goodwin’s results is the feedbacks that operate on timescales of decades, not those that operate on the timescale of days.

  172. kribaez says:

    Everett F Sargent,
    “Unfortunately, for you, this does not work for large time as the temperature reaches a new equilibrium or asymptote. It is simply a trigonometric relationship for the secant as delta temperature is a CONSTANT for any large value of time (e. g. 10X-100X the longest e-folding time).”
    It certainly does work, and I don’t understand your objection. At large times, for a fixed forcing, F, the temperature asymptotes to its equilibrium value, Te, say. The net flux, N(s), goes to zero. The change in net flux, (F – N(s)), goes to F. The value of lambda(s) hence asymptotes to F/Te.
    Hence the equilibrium temperature is the limit as time, s, goes to infinity of the forcing divided by lambda(s). No problem.
    You are right that this is just a trigonometric relationship. It derives directly from rearranging
    N(s) = F – lambda(s) * T(s). In practice, the system with non-constant feedback can be solved without introducing the variable lambda(s) at all. It is sufficient to know the response curves N(s) and T(s), but given that we do introduce this variable, because we are interested in its asymptotic magnitude, then it translates into the secant gradient as I have described it. This is not controversial or new, but in the analogous expression for a constant feedback assumption, the lambda value is equal to the gradient of the line on a Gregory plot. Because of this, some authors make the lazy mistake of thinking that lambda(s) should also be equal to the gradient of the line, which is a straightforward mathematical error. If we are going to use a lambda(s) in the above equation, then basic mathematics tells us that it is equal to the secant gradient drawn from F as described.

  173. kribaez says:

    Everett F Sargent,
    “Sounds like something that an EE to would say.”
    I are an engineer, but I are not an EE.
    Convolution or superposition as a means of solving linear differential equations with time-varying boundary conditions was kicked off in the late 18th century, and was applied throughout the 19th century long before the electric light bulb was invented. It is very useful for solving second order linear PDEs, and I have used it professionally to solve a number of such systems, so it is by no means reserved to EEs.
    Good, Gregory, Lowe 2011 was one of the first papers to test AOGCMs for linearity of response. Since then, a number of authors have demonstrated that the aggregate behaviour of the majority of AOGCMs (after averaging over just 3-5 runs to eliminate stochastic noise) can be emulated accurately as linear systems – they show an “amazing additivity” as one author put it.

    The model proposed by Goodwin (his Equation 5 in particular) is not consistent with this, and yields parameter values (lambda(t)) which do not mean what he thinks they mean (lambda(s)).

  174. kribaez,
    Equation 5 is just a re-write of Equation 1. What is your problem with it?

  175. Chubbs says:

    Andrew Dessler and a student have a new paper on TCR, using a climate model with an ensemble of cases with different initial conditions to replicate the standard energy balance calculation (19th century baseline, CMIP5 forcing, HADCRUT coverage with blended SST). The mean TCR is 1.44 vs the true model TCR of 1.81 obtained with the standard 1% per year forcing increase.

    The low TCR obtained by energy balance methods is due to the forcing history, particularly aerosols, and HADCRUT missing some warming. The observational results obtained by L&C and others actually validate model results when a proper comparison is made. This really shouldn’t be surprising, since climate models generally match the past 150 years of warming.

  176. Chubbs says:

    If the link above doesn’t work, check Dessler’s Twitter for a paper link.

    kribaez’s comments don’t make any sense. NL criticized the paper’s prior, yet 10^7 cases are generated initially, which should provide a wide envelope before downselecting. The figures in the paper show that the final model tracks the 150-year temperature observation history quite well, much better than a simple energy balance model would.

  177. kribaez said:

    “Convolution or superposition as a means of solving linear differential equations with time-varying boundary conditions was kicked off in the late 18th century, and was applied throughout the 19th century long before the electric light bulb was invented. It is very useful for solving second order linear PDEs, and I have used it professionally to solve a number of such systems, so it is by no means reserved to EEs.”

    It is not technically accurate to say that convolution is used for “solving” differential equations. Finding the natural response or eigenvalues of a DiffEq does not involve a convolution. But the forced response can be calculated by the convolution of the impulse response function (i.e. a Green’s function to a physicist) with a forcing function. That’s where the concept of a superposition comes in, which is essentially an ordered summation of time-shifted impulse responses (convolution) to a finely iterated sequence of time-varying scaled impulses (the forcing).

  178. Everett F Sargent says:

    Something funny happened between 2011 …
    A step‐response simple climate model to reconstruct and interpret AOGCM projections
    https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2010GL045208
    ( e. g. Good, Gregory, Lowe 2011)

    and 2016 …
    nonlinMIP contribution to CMIP6: model intercomparison project for non-linear mechanisms: physical basis, experimental design and analysis principles (v1.0)
    (same 3 authors plus 5 other authors)
    https://www.geosci-model-dev.net/9/4019/2016/

    “3.1 Linear mechanisms: different timescales of response”

    “Even in a linear system, regional climate change per kelvin of global warming will evolve during a scenario simulation. This happens because different parts of the climate system have different timescales of response to forcing change.”

    “In a linear system, patterns of change per kelvin of global warming are sensitive to the forcing history.”

    “3.2 Non-linear responses”

    “Non-linear mechanisms arise for a variety of reasons. Often, however, it is useful to describe them as state-dependent feedbacks.”

    So LTI appears to have some utility for very simple modelling (e.g. IAMs), but it is a simplified modelling assumption nonetheless.

  179. kribaez says:

    ATTP,
    “Equation 5 is just a re-write of Equation 1. What is your problem with it?”

    No it isn’t. Look more closely. The whole purpose of my explanation was to explain the difference that arises between a system that has a constant feedback value and a system that has a time-varying feedback value.

  180. kribaez,
    No, I’m pretty sure Equation 5 is simply a re-write of Equation 1.

  181. kribaez says:

    Paul Pukite,
    “It is not technically accurate to say that convolution is used for “solving” differential equations.”
    What a very odd comment. If a hydrologist runs a flow test on a borehole drawing from an aquifer, he can readily find the expected pressure response for a constant rate flowtest from the radial diffusivity equation. If, as is common, the test was stopped and started a few times, and resulted in variable flowrates, then he will use superposition (of the constant rate solution) to solve for pressure response, and thereby extract the parameters of interest. I call it “solving the PDE for arbitrary boundary conditions”; what do you want to call it?

  182. kribaez says:

    ATTP,
    “No, I’m pretty sure Equation 5 is simply a re-write of Equation 1.”
    Please look at the lambda term. It is a constant in Equation 1. Equation 1 is perfectly valid under the assumption of constant feedback.

    It is a time-dependent variable in Equation 5. The lambda(t) which appears in this form is not the same as the lambda(s) which derives from a fixed step-forcing experiment AND CANNOT BE USED AS THOUGH IT IS.
    There are two distinct reasons why it cannot be used here, although I only explained one earlier.
    The second problem is as follows:-
    The ECS for an AOGCM is normally estimated as an extrapolation of the fixed forcing experiment. If lambda(s) is derived from the equation N(s) = F – lambda(s) *T(s) then it is (unambiguously) the secant gradient drawn from the point (T = 0, N = F) to the point (T(s), N(s)) on a Gregory plot.
    The ECS is then given by Lim as s-> infinity {F/lambda(s)}
    If alpha(s) is defined instead as the gradient, dN/dT, of the net flux-temperature response curve to a fixed forcing, then the ECS is instead given by Lim as s -> infinity {T(s) + N(s)/alpha(s)}.

    Goodwin builds, updates and applies his lambda values as dN/dT values, but see Equation 6. He wants to use them as though they are like the lambda(s) above.
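
    To see the difference numerically, here is a toy two-timescale step response in Python (every number is invented for illustration; the secant and point-gradient constructions are as defined above):

        import numpy as np

        # Toy curved step response: a fast and a slow component (illustrative values)
        F = 3.7                      # step forcing, W m-2
        a1, tau1 = 1.0, 4.0          # fast warming: amplitude (K), e-folding time (yr)
        a2, tau2 = 1.5, 250.0        # slow warming
        b1, b2 = 0.8 * F, 0.2 * F    # fast/slow partition of the net-flux decay
        ECS_true = a1 + a2           # equilibrium warming of this toy system, 2.5 K

        for s in [10.0, 50.0, 200.0, 1000.0, 5000.0]:
            T = a1 * (1 - np.exp(-s / tau1)) + a2 * (1 - np.exp(-s / tau2))
            N = b1 * np.exp(-s / tau1) + b2 * np.exp(-s / tau2)
            lam_secant = (F - N) / T                   # secant gradient from (T=0, N=F)
            dT = a1 / tau1 * np.exp(-s / tau1) + a2 / tau2 * np.exp(-s / tau2)
            dN = b1 / tau1 * np.exp(-s / tau1) + b2 / tau2 * np.exp(-s / tau2)
            alpha_point = dN / dT                      # magnitude of the point gradient dN/dT
            print(s, F / lam_secant, T + N / alpha_point)

    Both columns converge on 2.5 K, but F/lam_secant approaches it slowly from below (about 1.5 K at s = 50), while the point-gradient construction extrapolates to it almost immediately once the fast mode has died away. Treating one construction as the other is exactly the confusion I am describing.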

  183. kribaez,
    Okay, I missed the t. Let’s be clear, is your criticism that they’re not properly solving the system of equations that would describe a time-dependent feedback response? If so, I think that is what we would call a GCM. It’s quite reasonable – in my view – to make simplifying assumptions that allow one to use a simpler model to evolve a system with time-dependent feedbacks.

  184. kribaez said:

    “What a very odd comment. If a hydrologist runs a flow test on a borehole drawing from an aquifer, he can readily find the expected pressure response for a constant rate flowtest from the radial diffusivity equation. If, as is common, the test was stopped and started a few times, and resulted in variable flowrates, then he will use superposition (of the constant rate solution) to solve for pressure response, and thereby extract the parameters of interest. I call it “solving the PDE for arbitrary boundary conditions”; what do you want to call it?”

    You are mixing impulse responses and step responses here. The real issue with applying convolution to the climate change problem is dealing with the fat-tail impulse response functions of (1) thermal diffusivity and heat capacity and (2) the sequestration of CO2.

    For the latter, in terms of the Bern model this would be a superposition of the individual exponentially damped impulse response terms.
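
    Something like the following Python sketch. The coefficients are the commonly quoted Bern-TAR-style values; treat them as illustrative rather than authoritative:

        import numpy as np

        # Bern-style CO2 impulse response: a constant long-lived fraction plus a
        # superposition of exponentially damped terms (the fat tail).
        a = [0.217, 0.259, 0.338, 0.186]       # partition coefficients
        tau = [np.inf, 172.9, 18.51, 1.186]    # e-folding times, yr

        def airborne_fraction(t):
            """Fraction of a CO2 pulse still airborne after t years."""
            return sum(ai * np.exp(-t / ti) for ai, ti in zip(a, tau))

        print([round(airborne_fraction(t), 3) for t in (0, 10, 100, 1000)])

    Convolving that kernel with an emissions series gives the concentration history; the infinite-tau term is what makes the tail fat.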

  185. kribaez says:

    Paul Pukite,
    “You are mixing impulse responses and step responses here.”
    Not at all, but it is clear from your later comments that you are not considering the same problem to be solved.
    Let’s clear up the first issue. All of the GCMs have run CO2 quadrupling experiments – which are fixed step forcings. These data are publicly available. The data obtained yields N(s) and T(s). (I will continue to use s as a time variable to denote the time from the start of the imposition of the fixed forcing.)
    The net flux balance equation can be written as
    N(s) = F4x – lambda(s) * T(s) with N(0) = F4x
    As I have explained a couple of times on this thread, if N(s) is crossplotted against T(s) (i.e. a Gregory plot), then the lambda(s) is, for any value of s, a secant gradient drawn from F4x at T = 0 to a point (T(s), N(s)) on this Gregory plot.

    Let us divide the empirical functions N(s) and T(s) by F4x and call the resulting functions UN(s) and UT(s). UN and UT are now the responses to a unit step forcing. If we differentiate UN and UT we obtain the unit impulse response functions for the two functions.
    We can now forecast the temperature evolution for an arbitrary forcing series, R(t), USING SUPERPOSITION which gives us:-
    T(t) = The integral from 0 to t of R’(s) UT(t-s) ds
    Alternatively, if we integrate the above expression by parts, then we obtain (using Leibniz)
    T(t) = The integral from 0 to t R(s)UT‘(t-s) ds
    The second integral uses the impulse response function and is generally called CONVOLUTION, but it is only a mathematical re-arrangement of the superposition equation.

    We can do the same thing for the net flux and we obtain:-
    N(t) = The integral from 0 to t of R’(s) * UN(t-s) ds (superposition form)
    Alternatively we can write
    N(t) = R(t) + The integral from 0 to t of R(s)*UN‘(t-s) ds (convolution form)

    In other words, superposition and convolution yield identical mathematical results.
    Any fat-tailed, thin-tailed or long-tailed behaviour is built into the response functions.
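
    If anyone doubts the equivalence, it takes a few lines of Python to check (toy response and forcing, simple Riemann-sum discretisation; all numbers invented):

        import numpy as np

        dt = 0.05
        t = np.arange(0, 100, dt)

        # Toy unit-step temperature response and an arbitrary forcing series
        UT = 0.5 * (1 - np.exp(-t / 5.0)) + 0.4 * (1 - np.exp(-t / 60.0))
        R = 0.03 * t + 0.5 * np.sin(0.3 * t)

        Rdot = np.gradient(R, dt)
        UTdot = np.gradient(UT, dt)

        # Superposition form:  T(t) = integral of R'(s) UT(t-s) ds
        T_super = np.convolve(Rdot, UT)[:len(t)] * dt
        # Convolution form:    T(t) = integral of R(s) UT'(t-s) ds
        T_conv = np.convolve(R, UTdot)[:len(t)] * dt

        print(np.max(np.abs(T_super - T_conv)))   # small discretisation error only

    The two integrals agree to discretisation error, as they must, since one is just the other integrated by parts.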

    kribaez, That’s nothing different from what I said, it’s just that you need multiple paragraphs to say it. I essentially just wanted to clarify that solving for the impulse response function is independent of applying convolution and superposition to determine the forced response.

  187. kribaez says:

    ATTP,
    “Okay, I missed the t. Let’s be clear, is your criticism that they’re not properly solving the system of equations that would describe a time-dependent feedback response? If so, I think that is what we would call a GCM.”
    You do not need a GCM to model/use/accommodate a time-dependent feedback response. You can postulate a time-dependent response and use an EBM to model it, or you can take the step response from an AOGCM and use it DIRECTLY to emulate the response of that AOGCM to an arbitrary input forcing series. The only proviso is that the system behaves as a linear system. Solutions are additive. Nothing more. Fortunately, with a few exceptions, most of the GCMs satisfy the necessary conditions for this.

    However, leave that all aside, because it is fairly clear that there is not a lot of traction in my repeating basic engineering mathematics to people who are unfamiliar with the concepts. Here is perhaps a simpler way of explaining the problem.

    I am going to define three animals in the feedback stable – they all have the same dimensions, but they do different things.
    The first animal is Monkey(s)
    I created this animal by taking the ratio of net flux change to temperature change from a fixed step forcing run from the NONAME AOGCM. The monkey therefore satisfies
    N(s) = F – monkey(s) * T(s) where F – N(s) is the change in net flux over time s and T(s) is the change in temperature over time s.
    Secondly I create Camel(s). I do this by estimating the point derivative dN/dT at successive values of s from the fixed forcing run. Hence the camel satisfies
    Camel(s) = dN/dT evaluated at s.
    Thirdly, I create Mule(t). I do this by taking 5 historical runs of the NONAME AOGCM, averaging their aggregate values of N(t) and T(t) and then calculating the mule, using
    Mule(t) = (R(t) – N(t))/T(t) where R(t) is the forcing series as per Goodwin.
    Hence the mule satisfies:-
    T(t) = [1 – N(t)/R(t)] * [R(t)/Mule(t)], which by no coincidence corresponds to Equation 5 in Goodwin.

    You will note that I have created these animals without having had to resort to any emulation model as yet.
    Now suppose that I turn the monkey (or the camel) into a constant creature, then we find that the monkey and the camel magically become the same animal. The mule is a very noisy animal here, but its average value over time or its OLS gradient on a crossplot of (R(t) – N(t)) vs T(t) also turns out to be the same as the monkey and the camel. This is the constant feedback case.

    However, Goodwin is not dealing with a constant feedback. He forces a concave-upwards curvature onto his feedback term. In these circumstances the animals become very different creatures. You cannot send the camel up a tree to fetch bananas. You cannot send the mule out into the desert, and you can’t ask the monkey to carry a load. But Goodwin at different points in his analysis treats the three animals as though they have the same properties. He calculates his feedback as a camel, and partitions it as a camel; for tuning in his selection process, he updates it as a mule (Equation 5); and he applies it as a monkey (Equation 6). As I have explained somewhere above, it is ONLY the monkey that can be used in Equation 6. In a constant feedback case, this problem does not arise. He really needs to define and control three different variables. As it is, his fitting and filtering process is founded on an inconsistent use of a single variable in place of three different variables. Because of the fitting of his lambda value via optimisation of both the magnitude and the rate of addition of new gradient to his variable, he can find parameters which SEEM to make his model work despite its inconsistency. What he cannot do then is draw reliable inferences from his fitted parameters as though they had physical significance.

    Just for interest, I dug out an emulation of GISS-E2R and plotted the monkey, the camel and the mule on the same timeplot. The curve shapes are very different from each other. The monkey curve settles into a long monotonically decreasing value. The mule, being a weighted average of its history, settles into a more or less constant value, but with a lot of noise associated with crossing the axis to go from positive to negative anomaly. For RFi forcings, after 160 years (whether one uses the emulation data or the model data directly), the monkey gets to 2.18 (falling), the mule hovers around 2.4 and the camel reaches 1.33 (falling). For GISS-E2R the RFi value is 4.52, hence the ECSeff estimate from the monkey is 2.08 and climbing.
    Rescaling the forcing to ERF values produces similar curve shapes. It changes the monkey to 1.94 (falling) and the mule to 2.06, more or less static as before. The camel remains unchanged at 1.33. The ERF value for 2xCO2 is 4.1. The revised estimate of ECSeff from the monkey is then 2.14 and climbing.
    The reported ECS for this model is 2.3, all consistent with expectations, and highlighting the fact that the differences between these feedback animals are real and predictable.
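
    Anyone who wants to reproduce the qualitative behaviour can do so with a synthetic two-timescale system in a few lines of Python (all numbers invented; this stands in for the AOGCM step run and historical runs):

        import numpy as np

        dt = 1.0 / 12.0
        t = np.arange(dt, 160, dt)

        # Synthetic step-forcing run (stands in for an AOGCM 4xCO2 experiment)
        F = 4.5                                        # illustrative step forcing, W m-2
        T_step = 1.2 * (1 - np.exp(-t / 4.0)) + 1.0 * (1 - np.exp(-t / 250.0))
        N_step = F * (0.75 * np.exp(-t / 4.0) + 0.25 * np.exp(-t / 250.0))

        monkey = (F - N_step) / T_step                 # secant gradient from (T=0, N=F)
        camel = -np.gradient(N_step, dt) / np.gradient(T_step, dt)  # point gradient dN/dT

        # Historical-style run: ramp forcing fed through the same linear system
        R = np.minimum(0.02 * t, 3.0)
        Rdot = np.gradient(R, dt)
        T_hist = np.convolve(Rdot, T_step / F)[:len(t)] * dt
        N_hist = np.convolve(Rdot, N_step / F)[:len(t)] * dt
        mule = (R - N_hist) / T_hist                   # history-dependent construction

        print(monkey[-1], camel[-1], mule[-1])         # three different animals

    The three time series separate exactly as described: the monkey falls slowly toward F/ECS, the camel drops toward the slow-mode gradient, and the mule sits somewhere in between, depending on the forcing history.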

  188. kribaez,
    The main problem I’m having is that I’m not convinced that Goodwin is doing what you claim he is doing (although I’m not entirely sure that I get what you’re even claiming, since you’ve rather condescendingly moved into camels and monkeys). As the text says:

    In response to each of the i source of radiative forcing there are j independently time evolving climate feedback processes, \lambda_{i,j} (t) in W m−2 K−1, such that the total climate feedback due to radiative forcing agent i is written

    \lambda_i (t) = \lambda_{Planck} + \sum_j \lambda_{i,j} (t).

  189. kribaez says:

    ATTP,
    I apologise unreservedly if you found my use of animals condescending. It was not intended. It was meant to emphasise the differences, but also to avoid having to use extended definitions each time I made reference to “feedback”.
    The feedback term that you reference is what I called the “Camel”, i.e. it defines a changing point gradient dN/dT in time. Importantly, if you sum this up across the sources, then Lim t -> infinity {R(t)/Camel(t)} does NOT yield an equilibrium temperature, or anything close. (See for example the value which appears in my GISS-E2R example, ca 1.3 after 160 years, and compare it with the secant gradient value around 2.18. This is real AOGCM data.) The equilibrium temperature with this type of animal is given by Limit as s -> infinity {T(s) + N(s)/Camel(s)}.

    Yet, if you look at Equations 6, 7 and 8, he is using the same variable name in an equation that only works for a secant gradient – my monkey(s). Equally, there is an argument for partitioning the temperature contribution from the monkey(s) feedback or the camel(s) feedback, but not the mule(t) which he uses in Equation 5, where he starts his argument for partitioning. This mule(t) “feedback” already represents a history-dependent, forcing- and time-weighted average in Equation 5.
    There are three separate variables involved here with quite different values, but he is using the same variable name for all three. I don’t really know how to make the problem any clearer than that.

  190. kribaez,
    I’m really not following your critique. The model is clearly quite simple, but it’s essentially assuming that there are independent feedbacks that operate on different timescales.

  191. kribaez says:

    ATTP,
    There is a difference between “simple” and “simply wrong”.
    Do you at least recognise that the three definitions of feedback functions which I have used yield the same function IF AND ONLY IF the feedback is constant? As soon as you force a concave-upwards curve onto a Gregory plot, as Goodwin is doing, they CANNOT BE the same function.
    The GISS-E2R data which I summarised above provides a clear demonstration of this, and is perfectly consistent with theory. You cannot use the same named function lambda in three equations which are inconsistent with each other and hope to use the final built-up “lambda” value in any meaningful way.
    What is so hard to understand about this?
    It might be helpful if you explain to me what definition of “lambda” you think Goodwin is actually using; is it the one which conforms to Equation 5 [Mule(t)], the one which conforms to Equation 6 [Monkey(s)] or the one which conforms to Equation 4 [Camel(s)]? Or is it something different that I am not seeing? His final estimates of ECS use the property of Monkey(s) – the variation of a secant gradient – but the calculation is applied to a posterior median estimate of “lambda” (Figure 2) which is apparently built as a Camel(s) – a local dN/dT gradient. If so, then it is no exaggeration to say that this is an egregious high-school geometry failure. So please do explain to me what definition you think he is using, and more specifically, what definition you think he is using in his posterior median time-series as presented in Figure 2?

  192. Dave_Geologist says:

    Or is it something different that I am not seeing?

    Probably that kribaez. It’s a bit of a bad sign when you talk about “basic engineering mathematics”. I think it extraordinarily unlikely that our host, who IIRC is a Computational Astrophysicist, is unfamiliar with the mathematics taught in basic engineering courses. And extraordinarily likely that his mathematical skills extend far beyond those taught in an advanced engineering course. He’s just too polite and modest to point it out 🙂 . I may be wrong of course (I was wrong about the K/T impact*, but I was right about faster-than-light neutrinos and cold fusion 😉 ).
    * Although with some justification, because IIRC the earliest papers didn’t account for different deposition rates between the red clay and the underlying marls and limestones. Once you’ve killed off the carbonate-formers, and are left with only a pelagic rain of clay, 1 cm of formation represents a much longer time-span.

  193. kribaez says:

    “It’s a bit of a bad sign when you talk about basic engineering mathematics.”
    Did you mean to write “It’s a bit of a bad sign when you talk about basic engineering mathematics here.” ? If so, I am starting to think that I could agree with you on this.

  194. John Hartz says:

    kribaez : If you want to delve into the equations of climate science, you should visit the Science of Doom website.

    https://scienceofdoom.com/

  195. Dave_Geologist says:

    “It’s a bit of a bad sign when you talk about basic engineering mathematics here?” Well yes, but probably not in the way you mean.

    1) For the reason given above – the maths required for an astrophysics degree, let alone a PhD and practising professional, dwarf that required of an engineer. Chances are, if one of you doesn’t get something or has missed something, that someone is you.

    2) It suggests that you’re approaching something you don’t know about (climatology) by analogy with something you do know about (electrical engineering or whatever). The thing about analogies is that they’re only analogies. Far better to tackle the real thing. Those of us who’ve spent time on climate blogs have seen a succession of engineers making rookie errors and insisting that they’re right and the experts are wrong (I don’t claim to be an expert, but from his paper, Goodwin appears to be). You may well be the next Einstein, but that’s a high bar and I’d want more convincing.

    3) Fixating on your original claim and not responding to comments from the likes of Everett suggests a certain inflexibility. Which is a recipe for rookie errors. Or, perish the thought, an indication that you’re copy-pasting the argument and so can’t develop or vary it.

    4) This isn’t watsuppia. The audience won’t be flim-flammed by mathematic-y or science-y jargon but will dig down. Errors will be pointed out, perhaps uncharitably. Appearing arrogant or condescending invites that sort of response.

    5) Engineer chest-beating immediately raises the suspicion that you belong to the group that thinks climatologists are just innumerate geographers who can’t do maths or statistics. I see Goodwin has a Physics MSci (which is a fancy name for an Imperial College BSc). Betcha that requires higher math skills than an engineering degree. And he’s far from a climatology rookie. If someone has made a mistake or misunderstanding, once again, chances are it’s you.

    Maybe this discussion has run its course, but here’s my understanding of the basics of Goodwin’s model. Essentially, if there is a change in forcing at time t of dF(t), then the resulting change in planetary energy imbalance a time dt later is

    dN(t + dt) = dF(t) - \lambda_{Planck} \Delta T - \sum_j \lambda_j(dt) \Delta T,

    where

    \lambda_j(dt) = \lambda_j^{equil} \left(1 - \exp \left( -\dfrac{dt}{\tau_j} \right) \right),

    and \tau_j is the e-folding timescale for feedback j.

    So, if you know the forcing timeseries, and you know how the planetary energy imbalance varies with time (for which you need to know something about the heat capacity of the ocean’s mixed layer), one can then determine how the temperature changes with time if there are feedback parameters that have different timescales.
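
    As a concrete (and very much toy) illustration, here is a minimal Python discretisation of that idea for a single step forcing. The parameter values are invented, not Goodwin’s; for a single step applied at t = 0 the feedback clock and the simulation clock coincide, so a simple in-line lambda(t) is legitimate here:

        import numpy as np

        # Toy mixed-layer energy balance with feedbacks that switch on over
        # different e-folding timescales.  All values are illustrative.
        C = 8.0                        # mixed-layer heat capacity, W yr m-2 K-1
        lam_planck = 3.2               # Planck response, W m-2 K-1
        lam_eq = [-1.0, -0.8]          # equilibrium amplifying feedbacks, W m-2 K-1
        tau = [5.0, 200.0]             # their e-folding timescales, yr

        F = 3.7                        # step forcing applied at t = 0, W m-2
        dt = 1.0 / 48.0                # time step, yr
        T = 0.0

        for k in range(1, int(500 / dt) + 1):
            t_since = k * dt           # time since the forcing was applied
            lam = lam_planck + sum(l * (1.0 - np.exp(-t_since / tj))
                                   for l, tj in zip(lam_eq, tau))
            N = F - lam * T            # planetary energy imbalance
            T += N / C * dt            # forward-Euler warming increment

        print(T, F / lam)              # T tracks F/lambda(t), heading for ~2.6 K

    The effective sensitivity you would infer early in such a run is F over the early lambda (about 1.2 K here), even though the system is heading for roughly 2.6 K, which is the basic point of the post.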

  197. kribaez says:

    Thanks Dave,
    My errors are all clear to me now after your invitation to a pissing contest.
    Please point to just one, any one, argument against what I have shown.

  198. kribaez says:

    ATTP,
    Thank you for getting on the pitch.
    Before I identify the feedback animal in your first expression, can I ask you to check your second expression to confirm it is really what you want to say? Specifically, how do you want to differentiate this expression with respect to absolute time, t?

  199. kribaez,
    Sorry, not really interested in playing games. You want to make a point, make a point. If you can do it without being condescending, that would be great. If you can’t, fine. I don’t really care.

  200. JCH says:

    The thing is, they huff and they puff with just gigantic amounts of condescension and storms of disdain and then they never blow down any houses.

  201. kribaez says:

    ATTP,
    I am not playing games. I wanted to show that any choice you made about the definition of λ as described by Goodwin leads to a serious contradiction.

    Consider the application of your first equation to the simple case where forcing is increasing linearly with time, F = Beta* t
    From your second equation, for small dt all of your time-dependent lambda terms go to zero. Taking your first equation and allowing dt to become very small, yields
    dN/dt = Beta – λ(Planck) * d(ΔT)/dt (applicable for all t)
    or N(t) = F(t) – λ(Planck) * ΔT after integration.
    This is clearly nonsense, but I think it is also evident that this does not describe what Goodwin is doing.

  202. kribaez,
    Okay, I think my first equation may have confused things. The forced climate response should really be written as

    C \dfrac{dT}{dt} = F(t) - \lambda T,

    where C is the heat capacity of the relevant part of the climate system. See this Isaac Held post. So, in the above formalism F(t), C, and \lambda are known values and one can integrate the above to determine how the temperature changes with time. All that Goodwin has done is divide the feedbacks up into ones that operate on different timescales, with these timescales known in advance.
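
    For a constant forcing F this integrates to the standard result

    T(t) = \dfrac{F}{\lambda} \left( 1 - \exp \left( -\dfrac{\lambda t}{C} \right) \right),

    so the warming relaxes toward F/\lambda on a timescale C/\lambda.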

  203. kribaez says:

    John Hartz,
    Thank you. I agree that scienceofdoom is an excellent site.
    There was a guy who wrote quite a few articles on Lucia’s Blackboard some years back about linear and non-linear analysis of the curvilinear net flux-temperature response exhibited by AOGCMs, who used to cite SOD a lot.

    Here is an article that has particular relevance since it is an example of a very simple-to-understand application of superposition to climate model forecasting on a toy system with a non-constant feedback:-

    http://rankexploits.com/musings/2013/observation-vs-model-bringing-heavy-armour-into-the-war/

    Kyle subsequently backed off his theory – as indicated in the update in the article, but the cases tested on the toy model still provide a very useful illustration of what happens when you go from a constant feedback assumption to one where you layer in different feedbacks over varying timeframes, so that the step forcing response becomes a pronounced curve on an N vs T plot.

    I would particularly draw your and other readers’ attention to figure 8 in the article, which illustrates that the line ΔN(t) vs ΔT(t) (output) from the historical period is quite different from the line ΔN(t) vs ΔT(t) from the step-forcing data, and moreover that the secant gradient drawn from (0,0) to (ΔT, ΔN) and asymptotically illustrated in Figure 9 is the only feedback animal for which R(t)/lambda(t) {Goodwin’s Equation 6} will take you to an equilibrium estimate at large t. Three feedback animals with quite different characteristics.

    The author noted importantly:-
    “We can also see rather more clearly that the flux response in this system is multi-valued with respect to average surface temperature. The gradient of any line picked out by regression (or a secant gradient estimated between two temperature points) is dependent on the specific forcing history. This raises some serious questions about the validity of regression methods of the form (F – Net Flux) vs Temperature to estimate feedback from models or from long-term history with variable forcing.”

    Yes, that’s what I thought I wrote.

  204. kribaez,
    Hold on. That’s essentially (I think) what many people are arguing. If you try to estimate climate sensitivity via regression, the answer you get can depend on the forcing history, on how close you are to equilibrium, and might also depend on how internal variability has influenced both the warming and the system heat uptake. Hence one should be careful of assuming that some observationally-based estimate is a very robust estimate for climate sensitivity.

  205. kribaez,
    Just to be clear, are you the author of the rankexploits post that you highlighted above?

  206. Everett F Sargent says:

    “Just to be clear, are you the author of the rankexploits post that you highlighted above?”

    Appears so …

    Marvel et al.: Implications of forcing efficacies for climate sensitivity estimates – update

  207. Everett F Sargent says:

    As to dt approaching zero: that is a well-known problem, due to machine precision, for variables that approach one (e.g. 1 – variable, or one minus almost one). The CDF (0.9999999999999999 = 1 in double precision; quad precision only extends this agony another 15 nines, though there is a workaround 🙂 ) and the cosine of a very small angle (e.g. ~1e-4700 in quad precision = 0) are but two examples.

    I’m pretty sure Goodwin would have checked dt (after all he did change it from 1/12 to 1/48), say dt = 1/96 instead of dt = 1/48 (his method would have some numerical issues if dt < 1E-13 years starting with the largest tau value, so not an issue numerically speaking). After all, it is only one line of declarative code (const int t_step_per_year = 48; //Number of timsteps per year).

    We only ever run the dt down to a small number sufficient to produce essentially identical (and stable) results.

    Getting lost in the (numerical) limit as dt approaches zero is a fool's errand.
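
    FWIW, the standard dodge for the “one minus almost one” cancellation is expm1, available in most languages; a quick Python illustration with deliberately absurd values:

        import math

        tau = 1000.0          # longest e-folding time, yr
        dt = 1e-16            # absurdly small time step, yr

        naive = 1.0 - math.exp(-dt / tau)   # catastrophic cancellation: gives 0.0
        safe = -math.expm1(-dt / tau)       # accurate: ~1e-19

        print(naive, safe)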

  208. ecoquant says:

    @JCH,

    I mostly agree, not, I think, because of any notion of tribalism, but because the agreements and disagreements with Goodwin and company are based upon a highly esoteric and abstract version of climate processes. I, as an empiricist, am yet to be convinced that ab initio physics of this kind have sufficient predictive power to make @kribaez’s arguments worth the time and energy needed to understand them. I want data, and failures or successes of prediction, not mere equations.

    Sure, I’m willing to consider that Goodwin, et al may have had blemishes. But rather than simply rejecting their argument, @kribaez, what program of revision do you propose, in detail, to fix it, irrespective of what it implies?

  209. kribaez says:

    ATTP,
    Yes, I am the author. I haven’t written blog articles for many years now.
    WordPress forced me to change my handle some time ago – a fact I advertised widely to avoid any accusations of sock-puppetry – which is why people often call me “Paul” in response to kribaez comments.

    There is some irony in your referring to Isaac Held’s 2011 article.
    As I published in June 2011 a little after his article:-

    “In late January, Willis Eschenbach posted an article on WUWT, zero-point-three-times-the-forcing, where he showed that the global mean surface temperature time series output from a cornerstone GCM, the GISS-E model, could be reproduced reasonably well as a simple linear combination of the input forcings (used by GISS-E), with an adjustment factor applied to stratospheric aerosols, i.e. forcings from volcanoes.

    I showed in a comment in the same post, that – much to my own surprise – it was possible to get a near-perfect match of the same temperature output by applying the solution of the “linear feedback equation” to the GISS-E forcings, instead of using a regressed linear combination of those same forcings. This match was achieved using a low equilibrium climate sensitivity – about 1.3 deg C for a doubling of CO2, and a short response time for equilibration. I showed in a later comment in the same post that one could also simultaneously match the temperature and the Ocean Heat Content (OHC) data from GISS-E, using a similar model.

    The spectacular temperature match got several people excited; in particular, it got Willis sufficiently excited to make several more posts on the subject. A thoughtful response then appeared in Isaac Held’s blog…”

    The article went on to explain why the non-linearity in net flux-temperature response permitted a high feedback during history matching that could still lead to a high eventual ECS in the AOGCMs.

    This “thoughtful response” was the article by Professor Held to which you are referring. My match to GISS-E and Professor Held’s match to GFDL are both founded on a CONSTANT FEEDBACK assumption. They cannot “spontaneously combust” into a time-dependent feedback by any physics known to man. It does not describe what Goodwin is doing.

  210. kribaez says:

    Everett F Sargent,
    “Getting lost in the (numerical) limit as dt approaches zero is a fool’s errand.”

    The problem here has nothing, NOTHING, to do with machine precision.

  211. kribaez,

    It does not describe what Goodwin is doing.

    I didn’t say it described what Goodwin is doing. However, Goodwin is simply modifying the standard energy balance model to incorporate time-dependent feedbacks. Your claim now appears to be that you can match the observed warming with linear feedbacks. That isn’t really a strong argument against what Goodwin has done.

  212. kribaez says:

    ecoquant,
    “Sure, I’m willing to consider that Goodwin, et al may have had blemishes. But rather than simply rejecting their argument, @kribaez, what program of revision do you propose, in detail, to fix it, irrespective of what it implies?”

    Because Goodwin is carrying the same confusion between three different feedback functions into his selection/filtering process to define his posterior suite of “matched” runs, it is not a matter of fixing it, it is a matter of throwing it away and starting again.
    He can try another bootstrap approach with a consistent definition of his feedback variables, or alternatively, he can use something similar to that described by Proistosescu et al (which by pure coincidence is the same algorithm which I have used for many years to emulate AOGCM results). The serious problem with Proistosescu was his abuse of scaling relationships (swapping out forcing data in midstream, as it were) – not the algorithm he used for temperature and net flux prediction. He assumes LTI and applies convolution theory consistently.

    I find it dryly ironic that when ATTP put up the Proistosescu paper on this site (https://andthentheresphysics.wordpress.com/2017/07/11/reconciling-climate-sensitivity-estimates-part-iii-or-iv/), there were numerous ad hom attacks against Nic Lewis who correctly identified the problem of the abuse of forcing inputs, but not a single complaint against Proistosescu for assuming that the AOGCMs can be approximated as Linear Systems and could be forward-modeled using a conventional convolution algorithm.

  213. Marco says:

    “There was a guy…”
    “The author noted importantly…”

    followed a little later by
    “Yes, I am the author.”
    and
    “a fact I advertised widely to avoid any accusations of sock-puppetry”

    Gee, Paul. Any explanation as to why you suddenly stopped that advertising and decided to attempt some nice sock-puppetry here?

  214. kribaez,

    there were numerous ad hom attacks against Nic Lewis

    Care to highlight a few?

  215. kribaez says:

    ATTP,
    This is getting truly silly.

    ” Your claim now appears to be that you can match the observed warming with linear feedbacks. That isn’t really a strong argument against what Goodwin has done.”

    It was YOU who, in response to my question about what definition of lambda(t) you think Goodwin is using, at your second attempt, referred me to Professor Held’s post where HE SHOWS THAT YOU CAN MATCH THE GFDL WARMING WITH A CONSTANT FEEDBACK AND A LINEAR EQUATION.

    At this stage, I think I am out of here.

  216. kribaez says:

    “Gee, Paul. Any explanation as to why you suddenly stopped that advertising and decided to attempt some nice sock-puppetry here?”
    I referenced my old article and wrote immediately ” Yes, that’s what I thought I wrote.”
    You think this was an attempt at sock-puppetry?

  217. kribaez,

    It was YOU who, in response to my question about what definition of lambda(t) you think Goodwin is using, at your second attempt, referred me to Professor Held’s post where HE SHOWS THAT YOU CAN MATCH THE GFDL WARMING WITH A CONSTANT FEEDBACK AND A LINEAR EQUATION.

    All CAPS now? I was trying to highlight how you would include an expression for N(t). The point I was making is that everything else is known and constant, so the equation is integrable. I wasn’t arguing that the feedbacks are indeed linear, but that all Goodwin has done is separated the feedback term into a set of feedbacks that operate over different, but known, timescales.

    At this stage, I thnk I am out of here.

    Oh no, what will we do????

  218. JCH says:

    Throw it all away and start over. Nic is a gawd. Nic is a gawd. Nic is a gawd.

    Go to Google Scholar, nothing. They’re also out of there.

  219. Everett F Sargent says:

    “The problem here has nothing, NOTHING, to do with machine precision.”

    Good. Then don’t go around assuming that

    \lambda_j(dt) = \lambda_j^{equil} \left(1 - \exp \left( -\dfrac{dt}{\tau_j} \right) \right)

    … goes to zero when dt itself cannot go to zero. Thanks.

  221. Marco says:

    “You think this was an attempt at sock-puppetry?”

    Yes, as that last sentence could just as easily refer to your earlier comments about the supposed problems with Goodwin’s method, indicating there was ‘this other guy’, who used the same argument.

  222. ecoquant says:

    To toss some fresh meat into the ring (in the sense that SlashDot uses the term), here’s

    Paul A. O’Gorman and John G. Dwyer, “Using machine learning to parameterize moist convection: potential for modeling of climate, climate change and extreme events”, Journal of Advances in Modeling Earth Systems, first published 03 October 2018.

  223. ecoquant says:

    BTW, one of the co-authors of the JAMES paper above, Dwyer, now works at Dia&Co, which uses ML to personalize shopping choices. What’s notable is that they have a pretty interesting technical blog discussing these topics.

  224. John Hartz says:

    And this from Peter Sinclair and Yale Climate Connections…

  225. Pingback: An updated Bayesian climate sensitivity estimate | …and Then There's Physics
