Guest post: Do ‘propagation of error calculations’ invalidate climate model projections?

This is sort of a guest post by Patrick Brown. Patrick contacted me to ask if I’d be willing to highlight a video that he made to discuss a suggestion, by someone called Pat Frank, that ‘propagation of error calculations’ invalidate climate model projections. I first noticed this when Pat Frank had a guest post on Watts Up With That (WUWT) called Are climate modelers scientists (the irony of this title may become apparent). He also presented a poster at the 2013 AGU meeting, gave a talk at the Doctors for Disaster Preparedness meeting, and made a video that is linked to in Patrick Brown’s introduction below; his ideas were then discussed in a recent magazine article titled A fatal flaw in climate models. Just for background, what he is suggesting is that there is a large cloud forcing error that should be propagated through the calculation and that then produces such a large uncertainty that climate model projections are completely useless. I won’t say any more, as Patrick’s video (below) explains it all. It’s maybe a bit long, but it covers quite a lot of material, explains things very nicely, and I found it a very worthwhile watch. Patrick Brown’s post starts now.

Do ‘propagation of error calculations’ invalidate climate model projections?

As a climate scientist I am often asked to comment on videos and writings that challenge mainstream views of climate science. Recently, I was asked for my thoughts on some claims made by Patrick Frank regarding ‘propagation of error’ calculations and climate models. I took a look at Dr. Frank’s claims and considered his arguments with an open mind. As I reviewed Dr. Frank’s analysis, however, I began to feel that there were some serious problems with his methodology that end up totally undermining its usefulness. I outline the issues that I have with Dr. Frank’s analysis in the video below.

Links: The same video on Patrick Brown’s blog.


103 Responses to Guest post: Do ‘propagation of error calculations’ invalidate climate model projections?

  1. I’ll make a couple of quick comments. I think the latter part of Patrick Brown’s video, which discusses base state errors versus response errors, is pretty key. If you don’t know with certainty where to start a calculation, you don’t propagate that uncertainty through the calculation. Essentially not knowing where the first step starts does not mean that there is then a similar uncertainty in the second step, relative to the first; it just means that the final state will be a combination of the uncertainties in each step (which you do propagate) and the uncertainty in the base state (or where it started). As I understand it, this is essentially the difference between precision and accuracy.

    An uncertainty in the base state may be important, but if you’re more interested in how a system changes in response to some external influence, then it may not be that important. Your final state may not be accurate, but you may still be able to reasonably estimate the change between the initial and the final state. The uncertainty in cloud forcing is really a base state error, not an uncertainty between each step (i.e., you don’t expect the cloud forcing in a climate model to be uncertain by +-4 W/m^2 at each step).
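
    To make that concrete, here is a minimal numerical sketch (Python, toy numbers only, nothing taken from an actual model): an error that really is independent at each step grows roughly as the square root of the number of steps, whereas a base state offset of the same size never grows at all.

        import numpy as np

        rng = np.random.default_rng(0)
        n_runs, n_steps, sigma = 2000, 100, 4.0

        # Case 1: a fresh, independent +/-sigma error is added at every step
        # (the picture behind a propagation-of-error calculation):
        per_step = rng.normal(0.0, sigma, size=(n_runs, n_steps)).cumsum(axis=1)
        print(per_step[:, -1].std())   # ~ sigma * sqrt(n_steps) = 40

        # Case 2: a single base-state offset, drawn once and held fixed for the whole run:
        offsets = rng.normal(0.0, sigma, size=n_runs)
        print(offsets.std())           # ~ sigma = 4, no matter how many steps are taken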

    A second point is that the above seems pretty obvious, so it is surprising that someone who claims expertise would make this kind of mistake. Joshua would probably argue for motivated reasoning, and he may well be right. However, to make this argument you have to assume that a large number of experts have missed a pretty basic problem with climate models and I continually find it remarkable that supposedly bright people can think that they have noticed something pretty simple and crucial that many, many, many others who have at least as much expertise have somehow missed.

  2. Ens Josh says:

    Very good video. Thanks.

  3. Bernard J. says:

    Probably worth noting that Tamino debunked Pat Frank in 2011:

    Frankly, Not

  4. angech says:

    “Essentially not knowing where the first step starts does not mean that there is then a similar uncertainty in the second step, relative to the first; it just means that the final state will be a combination of the uncertainties in each step (which you do propagate) and the uncertainty in the base state (or where it started). As I understand it, this is essentially the difference between precision and accuracy.”
    The uncertainty in where the first step starts here is the base temperature, not the uncertainty in the calculating process.
    There are uncertainties included in the model calculations, there must be.
    One of these is cloud cover modelling.
    Each step in the calculation has to include these uncertainties and they do multiply thus decreasing the accuracy of the model with repetition.
    The question is the range of the uncertainty and thus the effect on the accuracy and precision.
    One might do better to question Dr Frank’s assertions of cloud uncertainty size than the maths, which simply states that large repeated errors included over time will quickly make noise outweigh signal.
    When we say this error is not part of the calculation, where is that error that must be part of the calculation?

  5. angech,

    There are uncertainties included in the model calculations, there must be.

    Of course, but there’s a difference between not being sure as to the accuracy of the calculations, and there being an uncertainty that must be propagated from step to step.

    One of these is cloud cover modelling.

    Indeed.

    Each step in the calculation has to include these uncertainties and they do multiply thus decreasing the accuracy of the model with repetition.

    In this case, I think this is not quite correct. The way this is normally handled is to either rerun the same model with different conditions, or to treat the model spread (i.e., all the different models) as representing this uncertainty. This is not strictly correct (as Victor’s post points out) but is a representation of the uncertainty.

    One might do better to question Dr Frank’s assertions of cloud uncertainty size than the maths, which simply states that large repeated errors included over time will quickly make noise outweigh signal.

    Except he’s wrong. The uncertainty he is talking about (a base state uncertainty) does not propagate in the manner he suggests.

    When we say this error is not part of the calculation, where is that error that must be part of the calculation?

    I don’t even quite know what you mean. The base state error does not propagate as Frank suggests and, if we’re more interested in changes than in absolutes, it plays virtually no role in the uncertainty calculation. It would only be important if it changed the base state so much that it changed how the system would respond to changes.

  6. Marco says:

    Rhetorical question: will Pat Frank retract his presentations, after being shown wrong at so many levels?

  7. Marco says:

    Angech, there’s a difference between precision and accuracy that may help in understanding ATTP’s last point (if I get that one correctly).

    An example: if my thermometer can read temperatures with a precision of 0.1 degrees, but an accuracy of only 1 degree, which of the two do you think is the relevant error statistic to use when I want to look at trends? Obviously, it is the precision. I don’t care when I look at trends whether my temperature at the start was 13, 14, or 15 degrees, because this *systematic error* does nothing to my trend, and I don’t care about the exact temperature at the end. What Pat Frank seems to have done is to claim a “precision” error which isn’t a precision error (and let’s forget the added problem with the time period).
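
    A quick sketch of that, with made-up numbers (a 0.02 degree/year trend, 0.1 degree noise, and a constant 1 degree accuracy offset):

        import numpy as np

        rng = np.random.default_rng(1)
        years = np.arange(30)
        true_temps = 14.0 + 0.02 * years + rng.normal(0.0, 0.1, years.size)  # noisy series with a 0.02 C/yr trend

        biased = true_temps + 1.0   # a thermometer that always reads 1 C too high (poor accuracy, same precision)

        print(np.polyfit(years, true_temps, 1)[0].round(4))
        print(np.polyfit(years, biased, 1)[0].round(4))   # same slope: the constant accuracy offset drops out of the trend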

  8. russellseitz says:

    Unfortunately, the propagation of errors in climate projections imparts a strange sort of levity to the results.
    Even as they are being pruned from the next generation of models, and promulgated in the executive summaries of the last, the lines of code that generate the least plausible outliers accumulate anti-heuristically, and rise to the top of the climateball tree.

    There, the products of some are cherrypicked, while those too absurd for scientists to touch with a ten foot pole float off to form a sort of Heaviside layer that sheds reflected glory on the denialosphere.

  9. Fergus Brown says:

    Good presentation – clear and to the point.
    One obvious point: a genuine hypothesis (i.e., the original question) would take the results achieved from the method applied and compare these with observational experience. The logical conclusion would be that either the hypothesis or the method fails, or both. (point 5).
    This is another example of trying to mess with people’s heads using bad maths which looks genuine.
    (cf Tamino). All sorts of games are going on with the original notion, which claims that uncertainty self-multiplies. Depends what uncertainty you are talking about, and whether or not you take either Bayes or even the Monte Carlo fallacy into account, or any number of other ways of playing with numbers.
    In NWP, initial condition errors matter. In Navier-Stokes equations, they matter. But these are not the same in any way as statistical uncertainties/error bars, though Frank seems to want them to be so.
    Gosh, there are so many things one could say about why this is dumb, it’s hard to know where to stop…

  10. This argument is quite bizarre. While watching the video I didn’t understand what that 4W/m² was referring to, but as soon as it was clear that he was talking about the difference in base state between models and observation, it also became clear that this was bulls**t.

    One must wonder if the uncertainty is indeed that large, why all ensemble members stay so much closer to the ensemble mean, right?

  11. One must wonder if the uncertainty is indeed that large, why all ensemble members stay so much closer to the ensemble mean, right?

    Indeed. If he were correct, they really should be all over the place.

  12. John Hartz says:

    It appears that the vanishing arctic sea ice didn’t get the memo that the climate model projections are invalid. (sarc)

  13. Dan Riley says:

    Isn’t this just another case of not appreciating the difference between initial state (weather) and boundary (climate) conditions?

  14. anoilman says:

    It was an interesting video. We face the same problems with drilling oil wells. When drilling we use a combination of dead reckoning and sensor readings to predict where we are. We have a fairly large initial error, and plenty of error in measurement. Drillers struggle to follow the plan (if any…) to hit the pay zone.

    And if you don’t watch what you’re doing like a hawk, you’ll drill off course. (This happens more often than companies will admit.)

  15. Dan,

    Isn’t this just another case of not appreciating the difference between initial state (weather) and boundary (climate) conditions?

    I don’t think it’s quite that. I think it is more that there are some conditions that models don’t necessarily represent accurately. However, this doesn’t mean that one should propagate those uncertainties through the model, because either this error is being compensated for elsewhere, or the model is not correctly representing the absolute state, but might still be useful for determining how the system responds to changes.

  16. Dan Riley says:

    You definitely wouldn’t just repeat the initial uncertainty at every step (regardless of the step); that’s clearly wrong. What’s called for (and what’s done, AIUI) is a proper analysis of the sensitivity to initial conditions. My point was that a model with boundary conditions is much less likely to show extreme sensitivity to initial conditions, so the difference between initial and boundary conditions is what tells us that how the system responds to changes is likely to be well enough behaved to be useful, and unlikely to blow up the way Frank imagines.

  17. Dan,

    My point was that a model with boundary conditions is much less likely to show extreme sensitivity to initial conditions, so the difference between initial and boundary conditions is what tells us that how the system responds to changes is likely to be well enough behaved to be useful, and unlikely to blow up the way Frank imagines.

    Okay, I see what you mean. Indeed, as you say, the boundary conditions tend to constrain how the system can respond to changes and largely prevent it blowing up as Frank seems to be suggesting it should.

  18. I got to the point where the video explained the ideas of Pat Frank. Then I noticed I would have to invest another 30 minutes and never got to the rebuttal. Most people likely do not understand either side and just see debate.

    A good example of how scientists debate. A less good example of informing the public on climate change. Maybe an email would have been better.

  19. brandonrgates says:

    Victor, I appreciate your objection to the video format as I tend to dislike it and prefer to read. However, I did watch the whole thing, and it is about the best rebuttal of Frank’s argument I have ever seen, most notably because Brown sticks to the scientific argument and does not editorialize or otherwise engage in polemics. It also contains some arguments which I’d not previously seen elsewhere, which I think were quite devastating to Frank’s notion of error propagation in climate modeling.

    On the other hand, I found some of Brown’s arguments not as convincing, perhaps even dubious, but I don’t have the expertise to judge them. If you could find the time to watch the entire video, I would actually appreciate it if you could offer a substantive critique on the content itself rather than the format — especially because I find it useful to see scientists debate, particularly when they’re more or less in agreement on the core principle but in disagreement about particulars.

  20. On the other hand, I found some of Brown’s arguments not as convincing, perhaps even dubious, but I don’t have the expertise to judge them.

    An example?

  21. Nick Stokes says:

    I’m sorry to see that people are still wasting time on Pat Frank’s stuff. It is just incorrigibly nutty. And when you think it can’t get nuttier, it does. And it has wasted so many people’s time for so long. I was amused to see, in a recent WUWT thread, that he cited in his defence a 2011 thread at the Air Vent. This was a thread that he initiated to promote something that had been published in E&E. The discussion went over two threads, and the skeptic scientific notables gathered. I collected some of their conclusions:

    Lucia: “Look Pat, I don’t take your paper seriously. I think it’s meaningless exercise in sophistry.”

    Jeff Id: “Everything you have written to date simply confirms my initial points. This is not an accurate representation of ‘uncertainty’ in anything.”

    Steve Fitzpatrick: “Stop wasting your time Pat. You still are confusing internal variability with uncertainty about the true state of the system.”

    RomanM: “I am currently traveling away from home and don’t have time right now to address all of your points, however I suggest that you go wrong right from the start.”

    DeWitt Payne: “I have read your papers, they are wrong and Jeff and Lucia have not been refuted at all, much less thoroughly.”

    Carrick: “Pat, I feel that i owe you a bit less of a cryptic explanation of my concerns about E&E. The comment you make about judging a paper regardless of its source is a good one, my comment was more about how E&E has done you a disservice by allowing the paper to be published without as thorough of a vetting as it deserves. As a result, I think you have a paper that has substantive flaws in it.”
    and more succinctly:
    “Pat, your error bars regardless of how you obtained them do not pass the smell test”

    We need to learn from this.

  22. Magma says:

    ^^ What Nick said. ^^

    I’m not saying I’m not guilty of this myself in some venues. (Guilty.)

    But most of us would acknowledge the truth of the observation that it requires at least an order of magnitude more work to refute a given amount of BS than it does to generate it.

    Time is precious. Choose your fights carefully, whether they be with drive-by economists, cranky professors emeriti, or Internet randos.

  23. Bernard J. says:

    I do have to echo Marco’s rhetorical question about whether Frank will retract his incorrect ‘methodology’ and apologise. It’s not like he hasn’t had time to come to appreciate his statistical inadequacies – Gavin pointed out a similar FUBAR back in 2008:

    http://www.realclimate.org/index.php/archives/2008/05/what-the-ipcc-models-really-say/#comment-86545

    I also wonder if Frank uses the same sloppy statistical approaches in his professional work, or if he leaves the subtleties to his colleagues…

  24. brandonrgates says:

    Anders,

    > An example?

    At around the 23:00 mark, “some models are essentially in perfect net energy balance (when averaged over long enough time periods) when they start historical simulations in 1861.”

    What I liked about that part of the discussion is that Brown calls out Frank for only looking at error in cloud radiative balance. What I think is dubious is the *implication* that those models will have very little uncertainty. Relative to Frank’s calcs, sure, but the “real” uncertainty, no. I understand Brown isn’t actually arguing that, but I can see how someone might mistake (or “mistake”) him for arguing that.

    I could overlook it, except for this slide at around the 26:00 mark …

    … to me somewhat reinforces that notion. He does it again with the analogous illustration of height vs. age. These are all excellent illustrations of the basic principle of the argument he’s making. I guess I would have liked to have seen a little more discussion about ensemble spread which is the most visible “uncertainty” we lay folks see in the GMST plots.

    It could be that I’m just being nitpicky here. It surely is also because I’m just quite curious about how these things are actually done, but don’t exactly know the right questions to ask.

  25. Brandon,

    What I think is dubious is the *implication* that those models will have very little uncertainty.

    Yes, but the point is (as I think you get) that there isn’t some massive uncertainty at each timestep. These models may be uncertain in the sense that they may not exactly match reality. However, they’re not uncertain in the sense that you don’t know if energy balance will be maintained because maybe cloud forcing will suddenly change by 4W/m^2.

    I guess I would have liked to have seen a little more discussion about ensemble spread which is the most visible “uncertainty” we lay folks see in the GMST plots.

    Yes, but that is somewhat independent of what Frank is suggesting. There is, of course, an uncertainty in projections and that’s because we can’t perfectly model the climate. One way to estimate this is to consider the model spread under an assumption that the differences in the models somehow represent the uncertainty (they don’t really, as has been pointed out). However, Frank is essentially arguing that an individual model should have a large uncertainty because of the uncertainty in the cloud forcing, but the cloud forcing in an individual model does not vary wildly from step to step; it may differ – in an absolute sense – from observations, but that difference doesn’t mean that the cloud forcing in that model will wildly vary relative to observations.

  26. Bernard,

    Gavin pointed out a similar FUBAR back in 2008:

    Wow, almost a decade and it’s still going. No wonder people refer to things like this as voodoo myths.

  27. brandonrgates says:

    Anders,

    Thanks for your detailed response. As I suspected, my doubts were rather trivial. Cue the seepage jokes, given where I hang out most of the time. 😉

    > However, Frank is essentially arguing that an individual model should have a large uncertainty because of the uncertainty in the cloud forcing, but the cloud forcing in an individual model does not vary wildly from step to step; it may differ – in an absolute sense – from observations, but that difference doesn’t mean that the cloud forcing in that model will wildly vary relative to observations.

    I’ve seen Frank challenged (and I’ve done it too, once was enough) on the interpretation of his uncertainty bounds, because they imply one of two things:

    1) That inter-annual variability can bounce around anywhere in the range
    2) That the long-term trend can approach either the higher or lower bound after a sufficient amount of elapsed time

    Neither are physically plausible, and the models don’t exhibit that behaviour over the historical runs (as Brown points out). Frank told me that I just don’t understand uncertainty propagation, at which point I gave up. This is his schtick, and not even Gavin Schmidt could get him to budge from it — as Bernard points out, this goes back a ways — it’s pretty much a voodoo myth or zombie argument in my view as well.

    Thanks and kudos to Dr. Brown for doing this video so patiently and to the point, I did learn from watching it.

    Cheers.

  28. > Frank is essentially arguing that an individual model should have a large uncertainty because of the uncertainty in the cloud forcing, but the cloud forcing in an individual model does not vary wildly from step to step; it may differ – in an absolute sense – from observations, but that difference doesn’t mean that the cloud forcing in that model will wildly vary relative to observations.

    https://en.m.wikipedia.org/wiki/Fallacy_of_composition

    A variation is the “meteorological fallacy” – the idea (eg Senior’s) that modulz can’t have much skill for the whole world because they fail on regional scales.

  29. Windchaser says:

    Fundamentally, you can look at this as a unit problem. W/m2/year is not the same as W/m2, so you can’t multiply your uncertainty (in W/m2) by time (in years) in order to increase it unendingly. That would give you W*year/m2.

    Frank and I argued about this on WUWT. He says there’s an “implicit time aspect” in the equation of uncertainty, which is just a bollocks way of saying that he’s changing the units with no justification.

    Are Climate Modelers Scientists?

    C’mon, guys, this is basic calculus.
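
    For what it’s worth, a units library makes the bookkeeping explicit. A sketch assuming the third-party pint package, with the ±4 reused purely as a placeholder number:

        import pint

        ureg = pint.UnitRegistry()

        rmse = 4 * ureg.watt / ureg.meter**2     # the statistic as published: W/m^2
        print(rmse * (20 * ureg.year))           # comes out with units of W*year/m^2, i.e. not a flux error at all

        rmse_rate = rmse / ureg.year             # what compounding over years would need: W/m^2 per year
        print(rmse_rate * (20 * ureg.year))      # comes out in W/m^2, but only if that per-year is really there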

  30. Rereading Pat’s comments at Jeff’s made me recall Mr. T:

    The Uncertainty Monster

    MarkT alluded to this classic thread:

    http://rankexploits.com/musings/2011/blah-blah-blah-mt-and-communicating-climate/

    Audits never end.

  31. Windchaser,

    An Internet to you, Sir. Very good comments at Tony’s.

    Grab SpeedoScience sportsmen by the units!

  32. Compare and contrast Windchaser’s ClimateBall style with VikingExplorer in that other Tony’s thread:

    Do Climate Projections Have Any Physical Meaning?

    VikingExplorer has the right of it, but the salt, O the salt!

    Sorry for the lack of quotes. Tablet obliges.

  33. Ken Fabian says:

    I’m another who went a round or two with Pat Frank – as I recall it I made some of the points made here, like pointing out the obvious, that in practice GCMs don’t show that widening fan of propagating uncertainty, because what Frank thinks GCMs are doing is very different to what GCMs actually are doing. A lot of anti-climate talking points work that way, starting with a plausible sounding misrepresentation and demonstrating with cherry picked points that reality isn’t conforming, QED, climate science is false. Was it Gavin Schmidt that called Frank’s misrepresentation of GCMs “toy models”?

    Off topic – another statement by Academies of Sciences on climate, this time 17 of them, including, I’m pleased to see as an Australian, the Australian Academy of Sciences, although not including the US NAS this time.

    http://science.sciencemag.org/content/292/5520/1261

    I approve but think it isn’t enough – or, rather, the medium restricts the reach. People in positions of trust and responsibility who ought to already be on top of this issue are unlikely to pay it any more mind than all the previous reports and statements. It needs the kind of public exposure only big budget, quality video productions, heavily promoted, can achieve. When those people claim they are representing the wishes of the public, whilst playing a big part in giving credibility and respectability to denying that the science is valid, there is a feedback mechanism at work that needs a clog jammed in its works. Perhaps a better informed public can be that clog in the works. Disturbing to realise the extent our leaders and power brokers rely on an uninformed and misinformed public.

  34. Steven Mosher says:

    “One must wonder if the uncertainty is indeed that large, why all ensemble members stay so much closer to the ensemble mean, right?

    Indeed. If he were correct, they really should be all over the place.”

    Yes. He’s also made similar claims about the temperature record. The point is, if he were correct about the uncertainty, we would see different answers when we run ensembles many times (there are examples of large ensembles) and we would see wild differences when we compile temperature series from different subsets of data.

    In some ways you can see a statement about uncertainty as a prediction. What Frank predicts doesn’t happen. For years he has been stuck on stupid. As others note there is a lesson there.


  35. russellseitz says:

    Nick :
    ““Pat, your error bars regardless of how you obtained them do not pass the smell test”

    We need to learn from this.”

    In cases such as this, much can often be learned by cutting out both the error bars and the data, and noting the slope of the hole that remains.

  36. January 22 – 28, 2017 406.48 ppm
    January 22 – 28, 2016 403.12 ppm

    Noisy number showing 3.36 ppm increase over last year. We have blown past Dr. Mann’s 405 warning number. Propagate the errors in this situation.

    Mike

  37. DMA says:

    As a surveyor, I have spent my last 40 years concerned with error propagation. Before GPS, positions of survey monuments were located using series of distance and angle measurements. Random errors tend to cancel out but systematic errors propagate and the resultant final error grows with each iteration. I think Dr. Frank has demonstrated that the errors in the models are not random within each model or between models. His analysis doesn’t show that the models’ resultant temperature is just as likely to be plus 12C as minus 12C as anything in between; it shows that the reliability of the model result is very low.

    If I had a stretchy tape and a loose transit so each measurement had more than random measurement error, after 10 legs I could not testify with any credibility that the distance and direction from point 1 to point 11 was what I calculated. I had to traverse back to point 1, compute the error and assign parts of it to each distance and angle. If I properly controlled the systematic errors, my expected closure followed the equation Frank and Brown used, but within the error ellipse I had no way of guessing the true coordinates of the measured point. The error ellipse had dimensions, but no point inside it was more likely to be the true position than any other.

    I think Frank’s work is analogous to mine, and the fact that the error bars are beyond reasonable dimensions only says you can’t be sure where inside them the true answer lies. The base state is of no importance to the propagation, just the uncertainty of the cloud forcing present at the start. The cloud error was used, but probably a better analysis would treat the sum of all the uncertainties; I can’t imagine that analysis would shorten the error bars.

  38. DMA,
    Are you suggesting that Pat Franks’s error propagation calculation is not wrong?

  39. Since I’m not sure if DMA will come back and respond, I’ll comment further.

    Random errors tend to cancel out but systematic errors propagate and the resultant final error grows with each iteration.

    This isn’t, however, a systematic error; it’s a base state error. If – as can be the case – a climate model has an error in the absolute cloud forcing and you ran such a model without any changes, it would probably settle into a quasi-steady state in which the mean temperature was slightly wrong (compared to observations). The mean state will be set by the amount of energy coming in being balanced by the amount going out. If the cloud forcing is wrong, then that will mean that the surface will either heat up more (or less) in order to reach energy balance. However, once it’s done so, it can – on average – sustain that balance. So, there is no continual propagation of an error.

    Now, if you consider changing the system (by adding GHGs, for example) then you will force it away from that quasi-steady state. That quasi-steady state may not match reality, but as long as it is close enough (as it almost certainly will be for any realistic climate model) how it changes with respect to that state will not depend strongly on that state. In other words, if we’re interested in how the system responds to some external perturbation, then the absolute value of the base state is probably not that important.
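
    A sketch of that behaviour with a toy zero-dimensional energy balance model (the Planck-like feedback and the mixed-layer heat capacity are round illustrative numbers, not anything taken from a GCM):

        import numpy as np

        lam = 3.2      # restoring (Planck-like) feedback, W m^-2 K^-1
        C = 4.0e8      # rough mixed-layer heat capacity, J m^-2 K^-1
        dt = 3.15e7    # one year, in seconds

        def run(forcing_error, years=200):
            """Integrate C dT/dt = F - lam*T with a constant error in the forcing F."""
            T, out = 0.0, []
            for _ in range(years):
                T += dt * (forcing_error - lam * T) / C
                out.append(T)
            return np.array(out)

        diff = run(4.0) - run(0.0)   # a model whose cloud (or solar) term is off by 4 W/m^2, minus a "correct" one
        print(diff[49], diff[-1])    # both ~1.25 K (= 4/3.2): a fixed offset, not an error that keeps growing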

  40. DMA says:

    My understanding of Dr. Frank’s lag-one analysis is that he used the uncertainty derived from the comparison of each model with the observations to compute the systematic error involved, resolved that into a one-year potential error, and then ran the propagation formula to get the error bars.
    If I have a tape that is 101 ft. long and measure a 1000 ft. baseline, I determine that it is 990 ft. long + whatever random error is in my work. If I do it again and am just as careful, my results will be within about 0.03 ft., but I will know nothing of the true length without comparing my tape to a standard. If there is systematic error in the model of the magnitude demonstrated by the lag-one analysis, it will propagate as Frank demonstrates with each iteration, and the result will be that any answer within the error envelope has as much chance of being correct as any other.

  41. DMA,
    Your analogy isn’t quite comparable here, for the reasons I tried to explain above and which are explained in the video. In the case of our climate, the system will tend towards energy balance – the amount of energy being received matching the amount of energy being radiated back into space. If something like the cloud forcing is “wrong” then that would mean that the response will probably also be “wrong”. In this case, it would probably mean that the temperature to which we would tend would also not match reality. However, this would simply end up being an offset; the error would not accumulate. Hence, this is more like an uncertainty in your initial position, rather than an uncertainty in how far you move during each step.

    However, since climate models are mainly being used to understand how the system responds to changes, such an offset does not necessarily mean that how we would respond to an external perturbation would somehow be wrong. Of course, all models are wrong, but some are useful.

  42. As a simple illustration, the Planck response is 3.2W/m^2/K. In other words every 1K of warming/cooling changes the outgoing flux by 3.2W/m^2. If the long-wavelength cloud forcing is wrong by 4W/m^2 and is the only “error” in the model, then energy balance would require a surface temperature that was in “error” by about 1.25K. To be clear, by “error” I mean a difference from what would be expected from observations.

  43. DMA says:

    I don’t think error analysis depends at all on the physical process being modeled. If one step in the process depends on the result of previous steps, systematic errors can propagate and expand the uncertainty of the outcome. If we accept Dr. Frank’s emulation of the model results with linear equations and his lag-one results for the determination of systematic error, the error analysis is just math and independent of the physical process being modeled.
    It would be good to get Chris Essex’s or Ross McKitrick’s take on this as they were among Dr. Frank’s reviewers and are far more qualified than an old land surveyor to understand error analysis.
    In response to Dr. Brown’s final question about how long we need to watch the model work to be able to accept it as accurate, I think the answer is forever. If it is an accurate model it will work all the time and be useful for prediction. If it doesn’t work at some time it doesn’t qualify and needs to be modified.

  44. DMA,

    I don’t think error analysis depends at all on the physical process being modeled.

    In a sense, yes, but understanding the system being modelled is still an important part of understanding how you would propagate errors.

    If one step in the process depends on the result of previous steps, systematic errors can propagate and expand the uncertainty of the outcome.

    Except, in this case, the state to which the system will tend depends on the various processes associated with the energy fluxes. If something like the cloud forcing does not exactly match observations (as seems to be the case) then the state to which the system will tend will be different, but the difference between what would be expected were the cloud forcing accurate, and the state to which it will tend if the cloud forcing is in error, does not grow with time. It’s simply an offset.

    Consider the following. Imagine we have a model that accurately represents all the processes in the system and produces an output that accurately matches observations. Imagine that we now run this model with the solar flux 4W/m^2 smaller (or bigger) than what it is observed to be. Would one propagate this 4W/m^2 error through the model and claim that the output was getting increasingly uncertain, or would you expect this model to simply settle to a state slightly different to that of the model that is accurate? (Hint: the latter.)

    If we accept Dr. Frank’s emulation of the model results with linear equations and his lag-one results for the determination of systematic error, the error analysis is just math and independent of the physical process being modeled.

    We shouldn’t accept it, because it is wrong.

    It would be good to get Chris Essex’s or Ross McKitrick’s take on this as they were among Dr. Frank’s reviewers and are far more qualified than an old land surveyor to understand error analysis.

    How about Gavin Schmidt, who clearly knows more than both Essex and McKitrick? Gavin Schmidt says:

    Frank confuses the error in an absolute value with the error in a trend. It is equivalent to assuming that if a clock is off by about a minute today, that tomorrow it will be off by two minutes, and in a year off by 365 minutes. In reality, the errors over a long time are completely unconnected with the offset today.

  45. We can build an infinite number of ways in which a calculation could be composed incorrectly. Using Gavin’s clock analogy, the clock could have a 1 minute offset and no linear error, or it could be gaining 1 minute per day, or be offset by 30 seconds and gaining 30 seconds per day. Again, we can build an infinite number of possible errors; assuming any one of these is true without proof is problematic. It appears that DMA can only envisage one possible form the error could take. It also appears he thinks Frank knows more about the true uncertainty than the experts. This too is problematic.

  46. Marco says:

    “It would be good to get Chris Essex’s or Ross McKitrick’s take on this as they were among Dr. Frank’s reviewers and are far more qualified than an old land surveyor to understand error analysis.”

    Well, yes and no. In principle they perhaps should both be. However, McKitrick has already been caught playing statistical trickery not that long ago (https://quantpalaeo.wordpress.com/2014/09/03/recipe-for-a-hiatus/), and many years earlier Essex and McKitrick managed to make a mistake that was so bad, you really wonder whether this was just plain stupidity, or outright malice (https://archive.is/M7Tx5).

    Oh, and McKitrick was also on a paper that mixed up degrees and radians (http://scienceblogs.com/deltoid/2004/08/26/mckitrick6/).

    Sorry, if you put your trust in these people to see the potential flaws in Frank’s analysis, you are setting yourself up to be fooled.

  47. Magma says:

    @ Marco
    Don’t forget McIntyre and McKitrick’s “debunking” of Mann, Bradley and Hughes’ 1998 and 1999 papers’ statistical methods, which blogger Deep Climate later showed (via reading McKitrick’s R code) used the cherry pick of all cherry picks to generate plots used in their 2005 GRL paper:

    “That’s “some” PC1, all right. It was carefully selected from the top 100 upward bending PC1s, a mere 1% of all the PC1s.”

    Replication and due diligence, Wegman style

  48. I notified Pat Frank of my video and he has replied on my website:

    Do ‘propagation of error’ calculations invalidate climate model projections of global warming?

    I will respond soon when I am able to find the time.

  49. Ken Fabian says:

    DMA –
    Frank said (A Climate of Belief) – “By 50 years, the uncertainty in projected temperature is ±55°. At 100 years, the accumulated physical cloud uncertainty in temperature is ±111 degrees. Recall that this huge uncertainty stems from a minimal estimate of GCM physical cloud error.”

    But multiple climate model runs don’t show the kind of widening band of results Frank – or you, agreeing with him – expect; they generally show less than 1C spread at the beginning of a run covering a century and less than 1C spread at the end. Reality – in this case the reality of the actual results of multiple climate model runs – should be enough to show that Frank is in serious error.

  50. Patrick,
    Wow, Pat Frank’s comments are almost a blog post in themselves. Seems that he still doesn’t get that he’s wrong.

  51. Marco says:

    I see Frank also does not understand what Hansen wrote in his 2005 response to Crichton. But that’s to be expected for someone who suffers from major Dunning-Kruger (this is very, very obvious right now).

  52. Marco says:

    Oh, and I forgot to say that the earlier link to Tamino’s earlier evisceration of Frank should have been a clear reminder that Frank will never get that he’s wrong.

  53. Frank will never get that he’s wrong.

    Yes, this seems fairly obvious.

  54. Patrick,
    Very good, but I think you’re right that you and Pat Frank are probably at an impasse.

  55. Susan Anderson says:

    Kind of a “last man standing” thing. I learned about this in the early days here:
    http://duoquartuncia.blogspot.de/2008/07/aps-and-global-warming-what-were-they.html

    (If you are interested enough to take a look, please don’t read through, just go to the bottom of the comments and look at how long and complex the fake rebuttals continue to be, and how thoroughly the premise of the fake arguments has been proven wrong.)

    Unfortunately, time itself is not on the side of a self-perpetuating argument. Climate doesn’t argue, it acts (don’t mean to anthropomorphize it, manner of speaking). Still, I hope aTTP’s unwavering courtesy and choices are benefiting some invisible lurkers who are noting the kindness and patience of his “tone” as well as the availability of vast realms of facts.

    I think we all need to note that the situation is urgent and hold in the self-satisfied insults and avoid undermining people’s basic need for self-respect and things to hold on to. Empathy and compassion include tolerance. The contrast is obvious, and insults undermine the evidence for that. (I make an exception for insiders like Dr. Curry who are fomenting falsehood for apparent gain and need to be called out on their for-profit advocacy. Profit includes fame as well as fortune in her case.)

  56. Dan Riley says:

    [apologies for cudgeling an expired equus]

    The cited Hooper and Henderson article uses an analogy to repeated stopwatch timing of the same athlete running a 400 m race, hoping to measure changes that are much smaller than the reaction time uncertainty triggering the stopwatch. I don’t think that’s a very good analogy, but maybe it can be turned into a better one.

    Consider an athlete running a 500 m race with lap times every 100 m, which (I claim) is more like what multi-year runs of climate models do. One way you could do this is with 5 people, each with a stopwatch, one for each lap. The uncertainties on the start and stop times would all be independent, so if you add all the lap times to get the total race time, all those uncertainties accumulate.

    Now try something more like a modern lap timer – have a free running clock, and record the times for start, stop, and completion of each lap via a mechanically actuated switch. Calculate the lap times by taking the difference of the times at the start and completion of the lap. Each lap time will have an uncertainty, but when you add all the lap times to get the race time, those uncertainties don’t add. That’s because we’ve added a constraint that the stop time of one lap and the start time of the subsequent lap are simultaneous, so the lap times have to sum to the difference between the start and stop time of the entire race. Then the only uncertainties that matter for the sum are the race start and stop times.

    Let’s further suppose that most of the uncertainty in the reaction time is systematic, such that the timer is always late by the same amount. When we take the differences that constant offset drops out.

    The claims are that (1) due to the boundary constraints the uncertainty in cloud cover in year n only weakly affects the cloud cover in year n+1, and (2) the uncertainties in modeled cloud cover are systematic offsets that cancel out in the difference. (1) is essentially proved by the stability of climate models under small perturbations. (2) is certainly testable, but instead of addressing it, Frank looks at the spatial variation in time averaged cloud cover, which is like using the variability in lap times to estimate the uncertainty in the length of the race track.
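
    A sketch of the lap-timer point with invented numbers (a constant 0.2 s trigger lag plus a little independent jitter):

        import numpy as np

        rng = np.random.default_rng(2)
        true_readings = np.array([0.0, 13.1, 26.3, 39.2, 52.4, 65.0])   # true clock readings at the start and each 100 m

        lag = 0.2                                                        # every trigger fires 0.2 s late (systematic)
        jitter = rng.normal(0.0, 0.05, true_readings.size)               # small independent trigger noise
        measured = true_readings + lag + jitter                          # what the free-running clock records

        lap_times = np.diff(measured)                                    # each lap is a difference of two readings
        print(np.isclose(lap_times.sum(), measured[-1] - measured[0]))   # True: intermediate errors cancel in the sum
        # The constant 0.2 s lag never appears in any lap time, and the summed race time
        # only feels the jitter on the very first and very last readings.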

  57. Pat Frank says:

    The valid questions on this web-site have probably been answered in my replies on Patrick Brown’s site. Dr. Brown has been very polite throughout, and I admire that and am grateful.

    Nick Stokes’ comment, though, is active dishonesty. Is lying by omission your new critical forte, Nick?

    Dan Riley, the correct analogy would be the known varying lap times of a runner are used to estimate the rms uncertainty in future lap times by that same runner. There’s no connection with the length of the race track.

  58. Pat,
    I’m not a huge fan of people making accusations of dishonesty, but since you’re the theme of the post, I’ll let it slide.

    Given that people have been trying to explain this to you for a decade, I doubt you will suddenly get this now, but I am amazed that you continue to promote this. The 4W/m^2 is not the uncertainty in the cloud forcing at every timestep, it is an offset. As I was trying to illustrate on Patrick Brown’s site, it’s not dissimilar to the solar forcing being off by 4W/m^2, which, as you seem to realise, you would not propagate in the way you suggest for this error.

  59. Marco says:

    “Active dishonesty” would be to claim that JeffId “grudgingly admitted” something, when in that same thread JeffId explicitly states that he has *not* retracted his claim and thus has not “grudgingly admitted” the point Pat Frank claims he grudgingly admitted, because it wasn’t the point he was trying to make and which he made once again in that blog post he wrote!

    I do have to say that that thread at the Air Vent, which Pat Frank uses as evidence of something it isn’t, was yet another excellent example of Pat Frank being unable (unwilling?) to see what he does wrong, even when four people try to explain it to him in excruciating detail.

    Nick Stokes is right – trying to explain to Pat Frank what he does wrong is futile, as he won’t admit to any mistakes and even doubles down on them.

  60. Pat Frank says:

    That was not an accusation. Inclusion of the link made it a demonstration.

    Nick Stokes twice presented that series of critical comments as though they were definitive, neglecting to even mention that every single one of them was effectively countered. This is to lie by omission, a standard of scholarly dishonesty.

    I carried the debate to which Nick referred. He’s never mentioned that, either. He may not even know it, which would indicate negligent scholarship, as well.

  61. Pat Frank says:

    Marco, Jeffid and Lucia wrote that I had confused weather noise for temperature measurement error, in a published paper (869.8 KB pdf).

    It took me quite a while to figure out that this mistaken perception was the source of their critical view. When I did figure that out, I challenged them to demonstrate the weather noise error in any Figure or Table in my error analysis. They couldn’t do it, because the error is not there.

    Jeffid grudgingly admitted having made that mistake. It reduced his claim to zero. Whatever he now says, he was objectively mistaken.

    I’ve provided a link to that paper. It’s open access. See for yourself whether there’s a weather noise error.

    The debate revealed that both Jeffid and Lucia had criticized my work without ever having read through the paper carefully; probably without having read through it at all. Rather analogous to your approach here, Marco, spouting off without knowing what you’re talking about.

  62. Willard says:

    Dear PatF,

    I read that exchange at Jeff’s and it bears very little resemblance to what you’re saying right now. You were the first to “go Meta”, so to speak, and in ClimateBall that is a bad omen.

    Unless and until you address AT’s simple point, whatever you may think happened at Jeff’s should stay there.

    Thank you for your concerns.

    W

  63. Pat,
    You still aren’t addressing the point that the error in the cloud forcing is an offset, not an error at every timestep. Hence it doesn’t propagate as you suggest. However, given that people have been pointing this out to you for about a decade, without success, I doubt I will be able to convince you now.

  64. Dr Frank, when you write “…every single one of them was effectively countered,” where did you get the idea that they were “effectively” countered? Yes, you wrote something in reply, but that doesn’t mean it was effective. Obviously it didn’t convince Nick Stokes, or Dr Brown, or ATTP, or – for that matter – myself.

    I have to deal with and write uncertainty budgets on a daily basis and I wasn’t swayed by anything you wrote. Your handling of units just had me confused. So I think for most here your statement would better reflect reality if it were restated “…every single one of them was ineffectively countered.”

  65. Dr Frank: “…the ±4 W/m² LCF root-mean-square-error (rmse) is the annual average CMIP5 thermal flux error…”
    Dr Brown: “Differencing two 20-year means does not produce an ‘annual error’.”

    Me: Of course differencing two 20-year means doesn’t produce an annual error. Is that what Dr Frank thinks? (Reads Dr Frank’s comment at February 5, 2017 at 8:28 pm.) It appears so. Hmmm… I won’t waste any more time on this.

  66. Pat Frank says:

    AT, I have addressed the point that the ±4 W/m² is not an offset. It’s the global root-mean-square LCF simulation error per year per grid-point, just as my dimensional analysis showed at Patrick Brown’s site.

    For each of 27 CMIP5 models, find each of 27 twenty-year mean errors as (model 20-year mean CF) minus (twenty-year mean observed CF) at each 1-degree x 1-degree grid point. A 20-year mean is (sum of magnitudes)/(20 years) = average magnitude/year.

    Model 20-year mean CF has dimension (simulated W/m² /year). Mean observed has dimensions (obs’d W/m² )/year.

    Calculate (mean simulated W/m²/year) minus (mean observed W/m²/year) for 27 CMIP5 models to get 27 sets of (model mean error W/m²/year). Calculate the rms of 27 model (mean error W/m²/year) values to get rms error of ±4 W/m² per year, representative of the 27 models. Is that really so hard?

    Willard, I was polite throughout that thread. Let’s see you show where I “went meta.”

    Oneill, you really can’t see that (annual mean modeled) minus (annual mean observed) = annual mean error?

    There’s no further need to argue something so obvious.

  67. Willard says:

    > I was polite throughout that thread. Let’s see you show where I “went meta.”

    Sure, PatF. Here’s the last sentence of the comment before your last:

    Rather analogous to your approach here, Marco, spouting off without knowing what you’re talking about.

    Both meta and the opposite of polite.

    And here’s your very first sentence:

    The valid questions on this web-site have probably been answered in my replies on Patrick Brown’s site.

    Meta. It doesn’t strike me as very polite either.

    I’m not sure how you connect not being meta with being polite, but here you go.

    Challenge met.

    It was easy to meet – you made four comments so far on this thread, PatF, and most of them are both meta and not very polite.

    From now on, please stick to your pet topic.

  68. [Wrong thread, Susan. – W]

  69. Hi Dr. Frank,

    I will be happy to reply more fully to your most recent comments, but before doing so, I think we really need to drill down on point number 1 because, as I see it, our entire disagreement rests critically on this issue.

    You say:

    “Lauer and Hamilton LCF rmse dimension is ±4 Wm⁻² year⁻¹ (grid-point)⁻¹, and is representative of CMIP5 climate models. The geospatial element is included.

    … Finally, the clear per-year dimension in the rmse LCF error shows that Dr. Brown’s conclusion, …“The point is that the ±4 W/m² root-mean-square error does not have an intrinsic annual timescale attached to it.”… is proven wrong. Likewise, Dr. Brown’s following, …“Its units are W/m² not W/m²/year. Thus, the choice to compound annually is arbitrary,”… is also wrong.”

    “The annual propagation step is obviously not arbitrary. The (year)⁻¹ bound emerges directly from the calculation of an annual mean error. ‘Per year’ is the dimension of the yearly mean of multiple-year values, and is unarguably attached to the rms LCF error.”

    I must insist that the unit is W/m² not W/m²/year.

    I tried to show this intuitively in the above Excel screenshot, which illustrates that the underlying temporal resolution of the data can be arbitrarily scaled up to any temporal resolution you want. Remember, the annual timescale is not the ‘native’ temporal resolution of either the climate model data or the observational data. As a thought experiment, imagine that we lived in a world where the convention was to archive data at the 5-year (60 month) temporal resolution instead of the annual temporal resolution. In this world, the data that underlie the Lauer and Hamilton ±4 W/m² value would be an average of 4 60-month-long segments rather than an average of 20 1-year-long segments. If you lived in this world, on what grounds would you argue that the unit for the ±4 value is W/m²/year rather than W/m²/60-months or W/m²/5-years? You wouldn’t have any grounds to argue that.

    That’s the intuition but what’s wrong with your unit accounting? Your mistake is that you are not applying the average formula correctly. For the time-mean that we are discussing, the unit ‘year’ actually appears in the numerator as well as the denominator of the average formula:

    sum( i = 1, n_years ;   year_i * value_i ) / sum (i = 1, n_years; year_i )

    Here is a more intuitive example using IQs instead of years (provided by my boss, Ken Caldeira):

    If it is average IQ of people you want to calculate, you would normally add the IQs and divide by number of people, but you really need to multiply each by 1 person in the numerator also.

    Imagine if you first clustered people by IQ, so you had 3 people with a 90 IQ , 5 with 100 IQ, and 4 with 110 IQ. The mean would be:

    ((3 people * 90 IQ) + (5 people * 100 IQ) + (4 people * 110 IQ)) / (12 people) = 100.83 IQ

    If you didn’t cluster people first, you still would need to write something like:

    ((1 people * 90 IQ) + (1 people * 90 IQ) + (1 people * 90 IQ) + (1 people * 100 IQ) + … ) / (12 people) = 100.83 IQ

    The unit for this average is actually just IQ not IQ/person.

    When it is just 1 value for each thing we are counting we normally get sloppy and don’t think about the unit multiplication in the numerator, but when we are formal, it is there.

    “… All of Dr. Brown’s argument in Point 1 is now vacated.”

    No, it is not.

  70. Pat Frank,

    I have addressed the point that the ±4 W/m² is not an offset. It’s the global root-mean-square LCF simulation error per year per grid-point, just as my dimensional analysis showed at Patrick Brown’s site.

    I realise that many have pointed this out to you before, but I’ll do it again anyway. If there were a +- 4W/m^2 uncertainty in cloud forcing every year, you’d expect some climate models to vary wildly (the Planck response is 3.2W/m^2/K). They don’t show this; therefore, your assertion is clearly flawed.
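
    To put a rough number on “vary wildly”, here is a sketch with a toy zero-dimensional energy balance (illustrative round numbers only): if the cloud term really did wander by ±4 W/m^2 from year to year, even a strongly damped model would show far more year-to-year temperature variation than GCM control runs actually do.

        import numpy as np

        rng = np.random.default_rng(3)
        lam, C, dt = 3.2, 4.0e8, 3.15e7            # Planck-like feedback, rough heat capacity, one year in seconds

        T, temps = 0.0, []
        for _ in range(200):
            cloud_error = rng.normal(0.0, 4.0)     # pretend the cloud term really is uncertain by +/-4 W/m^2 each year
            T += dt * (cloud_error - lam * T) / C
            temps.append(T)

        print(np.std(temps))   # roughly half a kelvin of year-to-year wander, nothing like the quiet control runs GCMs produce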

  71. Hi Dr. Frank,

    Here is one more scenario for you. This is the simplest situation that I can think of that gets at the crux of the W/m2 vs. W/m2/year issue. I would appreciate an answer to this question before moving on.

    Imagine you are driving down the highway for a total of 3 hours or 180 minutes. For the first hour (60 minutes) you are driving 60 mph and for hours 2 through 3 (minutes 60-180) you are driving 90 mph. What is your average speed? Is it:

    a) 80 mph/hour
    b) 80 mph/minute
    c) 80 mph

    If you choose (a), please explain why you chose (a) instead of (b). If you choose (b), please explain why you chose (b) instead of (a).

    ANSWER:

    The correct answer is (c), which you can get from using either temporal resolution:

    Avg speed = ((1 hour)*(60 mph) + (2 hours)*(90 mph))/(1 hour + 2 hours) = 80 mph

    Avg speed = ((60 minutes)*(60 mph) + (120 minutes)*(90 mph))/(60 minutes + 120 minutes) = 80 mph

    The average speed does not have /hour OR /minute as the denominator because those units are implicitly in the numerator as well.

    [crossposted at https://patricktbrown.org/2017/01/25/do-propagation-of-error-calculations-invalidate-climate-model-projections-of-global-warming/comment-page-1/#comment-1460]
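
    A sketch of the same bookkeeping in code, using the numbers from the example above: the time weights sit in both the numerator and the denominator, so the answer keeps the unit mph at any temporal resolution.

        # Hour resolution: one hour at 60 mph, two hours at 90 mph
        speeds_by_hour = [60] + [90] * 2
        avg_by_hour = sum(1 * s for s in speeds_by_hour) / (1 * len(speeds_by_hour))        # hours cancel top and bottom

        # Minute resolution: the very same trip sampled every minute
        speeds_by_minute = [60] * 60 + [90] * 120
        avg_by_minute = sum(1 * s for s in speeds_by_minute) / (1 * len(speeds_by_minute))  # minutes cancel top and bottom

        print(avg_by_hour, avg_by_minute)   # 80.0 80.0 -- same value, same unit (mph), regardless of the resolution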

  72. That reminds me of a question we ask our first years. Someone is rowing across – and then back – a lake that is 1 mile wide. When they first cross the lake, they row at 1mph. When they come back, they row at 0.5mph. What is their average speed?

  73. Eli Rabett says:

    Here is a better one to ask. To qualify for a race, cars have to average 180 kph over three laps on a 1 km track. Car A has trouble starting and only averages 60 kph over the first lap. How fast does Car A have to go to qualify? (BTW, no ethical scientist uses miles, it’s kind of like protomatter:)

  74. BTW, no ethical scientist uses miles, it’s kind of like protomatter

    I was just following Pat Brown’s example. I assumed it was an American thing 🙂

  75. Dr Frank: “Oneill, you really can’t see that (annual mean modeled) minus (annual mean observed) = annual mean error?”

    I don’t see it because it isn’t true. As Dr Brown has stated, here and on his website, the mean error is the mean error; it is not the annual mean error.

    Here is a simple example from synthetic data.

    Set L is a linear function increasing in value with each year. Set R is a random normal distribution, and Set S is a sine function. Do the RMSEs in and of themselves tell us anything about these functions? No. Only by looking at the yearly data, not the means, are we be able to see this.

    If we extend these datasets to 100 or 1000 years, Set R and Set S will not require the graph to be rescaled. The mean error is not accumulating with each timestep. Only Set L, where we have a known incremental error per year, has an accumulating error.

    So, if you want to hypothesize that the error accumulates, you have to actually *show* that it accumulates; the accumulation hypothesis that you have put forward has no supporting evidence.
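    Something along these lines makes the point (a Python sketch; the constants are arbitrary, chosen only so the three sets have comparable RMSEs):

    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1, 21)

    # Three synthetic "error" series with comparable RMSEs.
    set_L = 0.1 * years                              # linear drift: error grows each year
    set_R = rng.normal(0.0, 1.2, years.size)         # random scatter: no accumulation
    set_S = 1.6 * np.sin(2 * np.pi * years / 10.0)   # periodic: no accumulation

    for name, s in [("L", set_L), ("R", set_R), ("S", set_S)]:
        print(name, round(float(np.sqrt(np.mean(s ** 2))), 2))

    # The three RMSEs are of similar size, yet only Set L keeps growing if the
    # series is extended to 100 or 1000 years; the RMSE alone cannot tell you
    # which behaviour you have.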

  76. Pat Frank says:

    AT wrote, “I realise that many have pointed this out to you before, but I’ll do it again anyway. If there were a ±4 W/m^2 uncertainty in cloud forcing every year, you’d expect some climate models to vary wildly…”

    No you’d not. You’re confusing an uncertainty statistic with an energetic variation.

    Uncertainty bars condition the model expectation value. They have no influence on the model itself, nor any impact on model behavior.

    [No more meta, PatF. Please. -W]

  77. Pat,
    That doesn’t make any sense. Either the cloud longwave forcing can vary by ±4 W/m^2 (1 sigma?) at each step, or it can’t. If the latter (i.e., the model does not potentially diverge greatly), then your uncertainty calculation has no basis in reality.
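    To make the distinction concrete, here is a toy Python calculation (not a GCM; the only physics in it is the 3.2 W/m^2/K Planck response):

    import numpy as np

    rng = np.random.default_rng(1)
    planck = 3.2      # W/m^2/K
    years = 100
    n_runs = 20

    # Case 1: the cloud forcing really does vary randomly by +-4 W/m^2 each year.
    # The temperature then random-walks and the runs diverge by many degrees.
    random_runs = np.cumsum(rng.normal(0.0, 4.0, (n_runs, years)) / planck, axis=1)
    print("per-step error, spread after 100 yr:", round(float(random_runs[:, -1].std()), 1), "K")

    # Case 2: each run carries a fixed 4 W/m^2 offset (of either sign).
    # Every run just sits about 1.25 K above or below the correct state; nothing grows.
    offsets = rng.choice([-4.0, 4.0], n_runs) / planck
    offset_runs = np.repeat(offsets[:, None], years, axis=1)
    print("offset error, spread after 100 yr:", round(float(offset_runs[:, -1].std()), 1), "K")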

  78. Pat Frank says:

    AT, error is (simulated minus observed). How does a calculated (simulated minus observed) exert any influence on a model?

    The statistic doesn’t say that the model itself is being subject to a +/- energetic perturbation. It says the model is making an error.

    The model makes that error in every simulation step. As a consequence, the uncertainty in the expectation value increases with step-number.

    A general question: how can “longwave forcing … vary by ±4 W/m^2 … at each step”? The “+/-” interpreted (incorrectly) as a forcing implies simultaneously offsetting magnitudes of opposite sign. How can a net-zero influence, i.e., +4-4 = 0, have any impact on a model at all?

    [Playing the ref. – W]

  79. Pat Frank says:

    [Playing the ref. -W]

  80. Steven Mosher says:

    Still wrong Pat

  81. Pat,

    AT, error is (simulated minus observed). How does a calculated (simulated minus observed) exert any influence on a model?

    If the simulated minus observed is an offset (as it is here), then the impact it has on the model is that it will settle to a slightly different state than the one it would settle to if there were no offset. You don’t propagate this difference at every step. If the solar insolation were wrong by 4 W/m^2, you wouldn’t propagate that.

  82. Pat Frank says:

    AT, it’s not an offset. The disproof of your offset claim is right there in Patrick Brown’s slide showing simulated cloud error. It exhibits positive and negative error regions. The 20-year averages also show large positive and negative errors.

    Cloud error is inherent in the models. The correlation of simulated cloud error among CMIP5 models establishes that. As an error inherent to the models, it shows up in every step of a simulation.

    I’m negatively impressed with you, AT, deleting out my response to false accusations of rudeness, as you did. You’ve taken unfair advantage of your position as blog owner.

    Steve Mosher, you’ve never given any indication that you understood iota one of the debate at Jeffid’s; not then, and not since. Your opinion is as worthless as ever.

  83. Pat,

    The disproof of your offset claim is right there in Patrick Brown’s slide showing simulated cloud error. It exhibits positive and negative error regions. The 20-year averages also show large positive and negative errors.

    The large positive and negative errors are for different models, not within a single model. There is nothing in the slides that indicates that the cloud forcing in a single model can vary by ±4 W/m^2 every timestep (or every year) as you suggest.

    I’m negatively impressed with you, AT, deleting out my response to false accusations of rudeness, as you did. You’ve taken unfair advantage of your position as blog owner.

    It wasn’t actually me. The W means Willard, who helps me moderate. That you can say “Your opinion is as worthless as ever” might illustrate why you sometimes get moderated.

    Are you at least capable of considering that you’ve blundered rather spectacularly? Given that many people have tried to explain this to you over many years, my guess would be no.

  84. Pat Frank says:

    AT, you wrote, “The large positive and negative errors are for different models, not within a single model. There is nothing in the slides that indicates that the cloud forcing in a single model can vary by ±4 W/m^2 every timestep (or every year) as you suggest.”

    The slide at minute 14:22 in my DDP talk shows exactly that individual models make positive and negative cloud errors.

    The slide at 16:23 shows the average rms fractional cloud errors (FCE) for individual models. They range over FCE = ±0.06 to ±0.16.

    If we take the net thermal cloud feedback to be about -27.6 W/m^2 (Stephens, 2005), then one can estimate the range of average rms global LWCF error those individual models produce as (FCE * -27.6 W/m^2) = ±1.7 W/m^2 to ±4.4 W/m^2.

    This is a crude estimate, and not the method that Lauer and Hamilton used. Nevertheless, it establishes the conclusion of individual model error.

    Incidentally, that ±1.7 W/m^2 model LCF rmse produces an uncertainty of about ±7 C after 100 years of temperature projection.

    Stephens, G.L., Cloud Feedbacks in the Climate System: A Critical Review. J. Climate, 2005. 18(2), 237-273.

    [Playing the ref. – W]

    As blog owner, AT, you don’t get to absolve yourself of responsibility for the unethical behavior of your staff.

    [More whining. – W]

    You assert I “blundered rather spectacularly” without having demonstrated any blunders.

    There is no weather-noise mistake in my paper about air temperature measurements. The cloud rms LWCF error is a systematic model error, not a single-sign, base-state, constant offset error.

    And, by the way, your view that I suggest, “the cloud forcing in a single model can vary by ±4 W/m^2 every timestep” is a rather spectacular blunder of your own. Further, I have never suggested any such thing.

    I pointed out to you already (February 21, 2017 at 12:17 am) that error isn’t an energetic perturbation on the model. And yet, here you are, representing it as such yet again.

    The very same mistake is obvious and evident in your (February 21, 2017 at 7:27 am), “If the simulated minus observed is an offset (as it is here), then the impact it has on the model is that it will settle to a slightly different state than the one it would settle to if there were no offset.”

    You have an uncertainty calculated after the simulation is completed turning into an energetic perturbation and then circling back in time to affect the model.

    Spectacular. Blunder.

  85. Pat,

    As blog owner, AT, you don’t get to absolve yourself of responsibility for the unethical behavior of your staff.

    They’re not my staff and neither they, nor I, benefit from this blog (other than through the interaction with interesting people). Complaining about moderation is, itself, a reason for being moderated (try reading the policies).

  86. Pat,
    Okay, I’ve looked at your few minutes of video, and you still seem to be missing the point. The surface temperature to which the system will tend depends on energy balance: how much energy is coming in balanced by how much is going out. The amount coming in depends on how much we receive from the Sun. The amount going out depends on the surface temperature and the various radiative effects within the atmosphere, including the influence of clouds. What happens if the cloud response (or cloud forcing) is wrong? Well, that would simply change the expected equilibrium temperature. Since the Planck response is 3.2 W/m^2/K, a 4 W/m^2 error in cloud response would mean an “error” in equilibrium temperature of just over 1 K. However, this does not accumulate; it is simply an offset. You can’t propagate this error, because it’s not an error at every step in the calculation, it’s simply an “error” in the magnitude of the cloud response.
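    To put a number on that last step: equilibrium offset ≈ (4 W/m^2) / (3.2 W/m^2/K) ≈ 1.25 K, however long the simulation runs.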

  87. Pat Frank says:

    [Playing the ref. – W]

    You looked at my video, and did not acknowledge that it refutes your February 22 at 7:36 am post. Individual models do indeed show positive and negative cloud errors, persisting in a 20-year mean.

    You say I’m missing the point, and then go on to treat cloud error as an energetic perturbation yet again!

    Jeffrey Kiehl pointed out in 2007 that models have offsetting parameter errors that allow the models to reproduce recent global mean air temperatures despite having very different climate sensitivities. That is, despite their internal errors, climate models nevertheless maintain global energy balance and reproduce known air temperatures. This alone is sufficient to set aside your objection.

    The models always are in overall energy balance. The cloud errors arise because the models partition the available energy incorrectly into the various climate sub-states. That and the significant parameter uncertainties mean the projected air temperatures are not unique solutions to the climate energy state.

    The uncertainty in future air temperatures arises from that; from the multiple available solutions, none of which are known to be correct, and the fact that the underlying physics is wrong or incomplete.

    Over projection time, the incorrectly partitioned energy causes the simulated climate to wander in unknown ways away from the physically correct future climate. But no one knows what the physically correct future climate looks like. Therefore simulation error cannot be calculated. Only an uncertainty estimate is available.

    This unknowable wandering of the simulated climate plus the lack of any way to calculate a physically real error, means less and less certainty can be retained by each simulation time-step. Uncertainty bars are an ignorance width, and large ones tell us that we do not know the phase-space position of the simulated future climate relative to the physically correct future climate.

    You wrote that the cloud error is “not an error at every step in the calculation.” Except that it is.

    The cloud error arises from model error. The physical theory deployed within the model is wrong or incomplete. The model mis-calculates the clouds in every single step, and each new simulation error is produced on top of the error in the prior state. The uncertainty necessarily propagates and the magnitude of the error itself is unknowable.

    The annual average ±4 W/m^2 is a representative uncertainty, and just allows us to estimate the consequences of continuous error on the extent of our knowledge about the state of the future climate.

  88. Pat,
    One useful aspect of moderating people is that you can tell something about them by their response to being moderated.

    You wrote that the cloud error is “not an error at every step in the calculation.” Except that it is.

    There’s a difference between the cloud forcing being in error by some offset at every timestep, and the cloud forcing being able to vary by that amount at every timestep.

    If you were correct, you could run a single model many times with slightly different initial conditions and you would expect to see a large range of results (varying by many degrees over a timescale of decades). We don’t see this. Therefore you are wrong.

  89. craigmccoll says:

    Pat Frank, I might have missed this, but apart from the word “year” in “20-year mean”, what else makes you think the 4 W/m^2 is a per-year error? What is special about the 12-month time frame?

  90. Pingback: A skeptic attempts to break the ‘pal review’ glass ceiling in climate modeling | Watts Up With That?

  91. Pingback: Watt about breaking the ‘pal review’ glass ceiling | …and Then There's Physics

  92. Pingback: 2017: A year in review | …and Then There's Physics

  93. MMM says:

    Frontiers in Earth Science: Atmospheric Science has embarrassed itself by accepting Frank’s “paper”. Sad.

  94. MMM,
    Thanks for letting me know, I guess 🙂

  95. Pingback: Propagation of nonsense | …and Then There's Physics

  96. Pingback: Climate Intelligence [sic] Foundation - Ocasapiens - Blog - Repubblica.it

  97. Naoise says:

    Quote:
    “Except he’s wrong. The uncertainty he is talking about (a base state uncertainty) does not propagate in the manner he suggests.

    When we say this error is not part of the calculation where is that error that must be part of the calculation?

    I don’t even quite know what you mean. The base state error does not propagate as Franks suggests and if we’re more interested in changes, than in absolutes, it plays virtually no role in the uncertainty calculation. It would only be important if it changed the base state so much that it changed how the system would respond to changes” – end quote

    In the context of the claim that a ‘base state’ does not propagate in a super complex system such as the climate, I would like to refer ‘…and then there’s Physics’ to the science of complexity/chaos, which was one of the breakthrough sciences of the 20th century. The key insight of this science is that highly complex systems are highly sensitive to initial conditions, such that if one has even a slight gap in data, or put another way a degree of uncertainty in the base state, then it is impossible to predict the future state of that system with precision, only within broad parameters, the system’s point of attraction.

    Now the science of complexity appears to be highly significant for this debate, as it has to do with propagation of error through a linear calculation. Complexity explains why weather cannot be predicted beyond a few days, essentially because there is no way to have all the relevant data, and a tiny gap in knowledge of initial conditions leads to significant errors in predictions of later states of that system, also known as the butterfly effect.

    Given that weather is a property of climate, and that climate calculations are equally complex as weather calculations, it seems to me to be highly unlikely that (1) base error does not propagate, as Frank claims; (2) that this propagation does not change the system state. Well-proven complexity science would affirm that base error is crucial to the ability to predict all complex systems, and that sensitivity to initial conditions in complex systems means base errors in a model would invalidate any predictive power.

    But of course the ultimate test of a model is not whether it affirms another similar model (precision), but rather whether it affirms physical reality (accuracy). In the case of climate models over 90% have considerably overshot in their warming predictions for the past thirty or so years. That is to say, 90% of climate models have poor accuracy relative to physical reality. That last point cannot be talked away using abstruse language. Models are only models, physical reality is physical reality. The former attempt to explain the latter, while the latter is the sole measure by which we judge the performance of the former.

  98. Naoise,

    I don’t even quite know what you mean. The base state error does not propagate as Franks suggests and if we’re more interested in changes, than in absolutes, it plays virtually no role in the uncertainty calculation.

    No, the base state error shouldn’t propagate as Frank suggests. Consider the following. Imagine we don’t know the solar flux precisely; let’s say it’s uncertain by about 4 W/m^2. We can select a value from within this range and run a simulation. If there are no additional external changes, then it should settle to some kind of quasi-equilibrium, with the variability simply coming from the internal dynamics. However, this quasi-equilibrium state may not be quite correct because we didn’t know if our initial solar flux was correct. So, we can select the solar flux again from within the range and repeat the simulation. It will settle to a slightly different quasi-equilibrium state. We can repeat this over and over again to get a range of final states that is set by the range of possible solar fluxes. This range won’t grow with time; it will simply depend on how well we know the solar flux.

    The error that Pat Frank is using is essentially equivalent; it’s an uncertainty in the cloud forcing, not an uncertainty in the cloud response (feedback). Hence, this will influence the state to which the system will settle, but won’t lead to an ever increasing uncertainty range. Also, as Patrick Brown’s video indicates, the actual uncertainty in the state to which the system will settle should depend on the uncertainties in all the conditions that determine this state, not just one condition.
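    If it helps, here is a toy zero-dimensional energy-balance sketch of the solar-flux thought experiment above, in Python (deliberately crude, with a made-up heat capacity and an annual timestep; it is not a GCM):

    import numpy as np

    rng = np.random.default_rng(2)
    planck = 3.2     # W/m^2/K restoring feedback
    heat_cap = 8.0   # W yr m^-2 K^-1, toy mixed-layer heat capacity
    years = 200
    n_runs = 30

    # Each run gets a fixed forcing offset drawn from +-4 W/m^2 (an uncertain
    # solar flux, say), applied identically at every timestep.
    offsets = rng.uniform(-4.0, 4.0, n_runs)

    T = np.zeros((n_runs, years + 1))   # temperature anomaly relative to the true state
    for t in range(years):
        T[:, t + 1] = T[:, t] + (offsets - planck * T[:, t]) / heat_cap

    # The spread across runs settles to a value fixed by the +-1.25 K range of
    # equilibrium offsets (4/3.2); it does not keep growing with simulation time.
    print("spread after  20 yr:", round(float(T[:, 20].std()), 2), "K")
    print("spread after 200 yr:", round(float(T[:, 200].std()), 2), "K")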

  99. Pingback: They made me do it! | …and Then There's Physics
