Propagation of nonsense – part II

I thought I would look again at Pat Frank’s paper that we discussed in the previous post. Essentially Pat Frank argues that the surface temperature evolution under a change in forcing can be described as

\Delta T(K) = f_{CO2} \times 33K \times \left[ \left( F_o + \sum_i \Delta F_i\right) / F_o \right] + a,

where f_{CO2} = 0.42 is an enhancement factor that amplifies the GHG-driven warming, F_o = 33.946 W m^{-2} is the total greenhouse gas forcing, \Delta F_i is the incremental change in forcing, and a is the unperturbed temperature (which I’ve taken to be 0).

Pat Frank then assumes that there is an uncertainty, \pm u_i, that can be propagated in the following way

u_i = f_{CO2} \times 33K \times 4 Wm^{-2}/F_o,

which assumes an uncertainty in each time step of 4 Wm^{-2} and leads to an overall uncertainty that grows with time, reaching very large values within a few decades.
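If the per-year uncertainties are summed in quadrature (as they are in the paper), the envelope after N years grows as the square root of the number of years; in my notation, not the paper's,

u(N) = \sqrt{ \sum_{i=1}^{N} u_i^2 } = u_1 \sqrt{N},

and since u_1 = 0.42 \times 33K \times 4/33.946 \approx 1.6 K, this is already roughly \pm 16 K after a century.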

Since I’m just a simple computational physicist (who clearly has nothing better to do than work through silly papers) I thought I would code this up. That way I can simply run the simulation many times to try to determine the uncertainty. Since it’s not quite clear which term the uncertainty applies to, I thought I would start by assuming that it applies to F_o. However, F_o is constant in each simulation, so I simply randomly varied F_o by \pm 4 Wm^{-2}, assuming that this variation was normally distributed. I also assumed that the change in forcing at every step was \Delta F_i = 0.04 Wm^{-2}.
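For anyone who wants to reproduce this, here is a minimal sketch of that first experiment (Python; the variable names, the seed, and the choice to plot the change relative to the unperturbed state are mine, not Pat Frank's):

    import numpy as np

    # Parameters as described above
    f_co2 = 0.42       # enhancement factor
    F0 = 33.946        # total greenhouse-gas forcing (W m^-2)
    dF = 0.04          # assumed annual increase in forcing (W m^-2)
    years, n_runs = 100, 300

    rng = np.random.default_rng(1)
    cumulative_forcing = dF * np.arange(1, years + 1)   # sum of the dF_i after each year

    # Vary F0 once per simulation by a normally distributed +/- 4 W m^-2
    runs = np.empty((n_runs, years))
    for k in range(n_runs):
        F0_k = F0 + rng.normal(0.0, 4.0)
        # temperature change relative to the unperturbed state (a = 0)
        runs[k] = f_co2 * 33.0 * (F0_k + cumulative_forcing) / F0_k - f_co2 * 33.0

    # spread across simulations after 100 years
    print(runs[:, -1].min(), runs[:, -1].max())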

The result is shown in the figure on the upper right. I ran a total of 300 simulations, and there is clearly a range that increases with time, but it’s nothing like what is presented in Pat Frank’s paper. This range is also really a consequence of the variation in F_o ultimately being a variation in climate sensitivity.

The next thing I can do is assume that the \pm 4 Wm^{-2} applies to \Delta F_i. So, I repeated the simulations, but added an uncertainty to \Delta F_i at every step by randomly drawing from a normal distribution with a standard deviation of 4 Wm^{-2}. The result is shown on the left and is much more like what Pat Frank presented: an ever-growing envelope of uncertainty that produces a spread with a range of \sim 40 K after 100 years.
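The corresponding sketch for this second experiment (again, the implementation details are my own):

    import numpy as np

    f_co2, F0, dF = 0.42, 33.946, 0.04
    years, n_runs = 100, 300
    rng = np.random.default_rng(2)

    # The +/- 4 W m^-2 is now applied to each annual increment dF_i, so the
    # errors accumulate from step to step (a random walk in the summed forcing).
    runs = np.empty((n_runs, years))
    for k in range(n_runs):
        dF_i = dF + rng.normal(0.0, 4.0, size=years)     # perturbed increments
        runs[k] = f_co2 * 33.0 * (F0 + np.cumsum(dF_i)) / F0 - f_co2 * 33.0

    # the spread across runs now grows roughly as the square root of time
    print(runs[:, -1].std())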

Given that in any realistic scenario the annual change in radiative forcing is going to be much less than 1 Wm^{-2}, Pat Frank is essentially assuming that the uncertainty in this term is much larger than the term itself. I also extracted 3 of the simulation results, which I plot on the right. Remember that in each of these simulations the radiative forcing is increasing by 0.04 Wm^{-2} per year. However, according to Pat Frank’s analysis, the uncertainty is large enough that even if the radiative forcing increases by 4 Wm^{-2} in a century, the surface temperature could go down substantially.

Pat Frank’s analysis essentially suggests that adding energy to the system could lead to cooling. I’m pretty sure that this is physically impossible. Anyway, I think we all probably know that Pat Frank’s analysis is nonsense. Hopefully this makes that a little more obvious.


55 Responses to Propagation of nonsense – part II

  1. Could you rerun this simulation for a few billion years? Or, for the creationists, for 6 thousand years? Would be interesting to see in which fraction of the runs life on Earth is possible, must be pretty close to zero.

  2. Nick Stokes says:

    “Pat Frank’s analysis essentially suggests that adding energy to the system could lead to cooling.”
    Yes, that is the problem with these random walk things. They pay no attention to conservation principles, and so give unphysical results.

    But propagation by random walk has nothing to do with what happens in differential equations. I gave a description of how error actually propagates in de’s here. The key thing is that you don’t get any simple kind of accumulation; error just shifts from one possible de solution to another, and it then depends on the later trajectories of those two paths. Since the GCM solution does observe conservation of energy at each step, the paths do converge. If the clouds created excess heat at one stage, it would increase TOA losses, bringing the new path back toward where it would have been without the excess.

  3. Nick says:

    “But propagation by random walk has nothing to do with what happens in differential equations.”

    Therein lies the rub. ENSO is so clearly not a random walk and so obviously the solution of a DiffEq. The connective tissue is a stochastic DiffEq such as Ornstein-Uhlenbeck which can be considered as a random walk inside an energy well. That has a definite bound given by the depth of the well.

    p.s.
    Is Carl Wunsch a friend of Patrick Frank’s?
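    To illustrate the distinction with a toy example (nothing here is fitted to anything physical): a plain random walk wanders without bound, while an Ornstein-Uhlenbeck walk with the same step size stays inside its well.

        import numpy as np

        rng = np.random.default_rng(0)
        n_steps, sigma, theta = 1000, 1.0, 0.1   # toy values only

        walk = np.cumsum(rng.normal(0.0, sigma, n_steps))   # unconstrained random walk

        ou = np.zeros(n_steps)                               # discrete Ornstein-Uhlenbeck (AR(1))
        for t in range(1, n_steps):
            ou[t] = (1.0 - theta) * ou[t - 1] + rng.normal(0.0, sigma)

        # the plain walk's spread keeps growing like sqrt(t); the O-U series settles
        # to a stationary spread of about sigma / sqrt(2*theta - theta**2) ~ 2.3 here
        print(abs(walk).max(), abs(ou).max())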

  4. angech says:

    Nick Stokes says: “Pat Frank’s analysis essentially suggests that adding energy to the system could lead to cooling.”
    “Yes, that is the problem with these random walk things. They pay no attention to conservation principles, and so give unphysical results.”
    ATTP: “I’m pretty sure that this is physically impossible.”

    The comments above seem somewhat misplaced.
    Adding energy to a system does cause warming of the overall system
    We know it is physically impossible to cause overall cooling
    So does Pat Frank.
    Therefore the suggestion that he claims that adding energy to the system could cause cooling (to the system) is wrong.
    You should at least treat his argument in the proper spirit it is put forward rather than misquoting it.
    Two points.
    The argument is that the amount of energy received has an error range, which means that the system could receive less rather than more energy, hence it could become increasingly colder.
    Otherwise there could be no random walk cooler.
    Second, but not important, is that when heat is added to a system it is not added or emitted equally, so of course one part of a system can get temporarily colder if the rest overwarms.
    Play fair.

  5. dikranmarsupial says:

    It is reasonable to use a linearised approximation (e.g. Taylor series) to analyse the behaviour of a physical model, which is basically what Frank has done here. However, to assume that statistical uncertainties in the approximation apply to the physical model is clearly unreasonable, as the physical model may contain constraints or feedbacks that quickly limit the effect of the uncertainties. In this case I am thinking of the Stefan-Boltzmann law, which means that if the Earth warms by 20C or so, there needs to be a big input of energy to compensate for the extra (fourth power) energy radiated into space.

    It also ought to be obvious that a local linearisation can’t be extrapolated out a long distance, at least without some form of analysis of the stability of the approximation.
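    A back-of-the-envelope check (treating the surface as a simple black body at about 288 K, which is only illustrative):

        sigma_sb = 5.670374419e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
        T = 288.0                   # rough global-mean surface temperature (K)

        extra_emission = sigma_sb * ((T + 20.0) ** 4 - T ** 4)
        print(extra_emission)       # roughly 120 W m^-2 of extra emission for ~20 C of warming

    Even allowing for the fact that the real surface is not a black body, that is an enormous amount of energy to find.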

  6. Everett F Sargent says:

    ATTP,

    It is a discrete 1D random walk. PF’s plots only show the one sigma curve; for that one sigma I get:

    Error (degrees C) = 1.578*SQRT(time), it is a conic (parabola) section. For reasonable N (say anything above ~20) the distro is gaussian with zero mean.

    In the limit (as dt approaches zero) it is a form of diffusion equation, the diffusivity coefficient would have to be units of Kelvin^2/second.

    You can do the random walk long hand for any N, but it is much faster to just use a gaussian distribution (I use the polar method) for any specific N.
    https://en.wikipedia.org/wiki/Marsaglia_polar_method

    The LLN/CLT dictates that the gaussian distro is exact.

    I’ve been meaning to ask if anyone here uses the Ziggurat algorithm, as in: is it exact (I have some code but I need to run it a few billion times)? I already know that the polar method is exact.
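    For anyone who wants to play along, here is a bare-bones version of the polar method (plain Python, nothing optimised, variable names my own):

        import math
        import random

        def gauss_polar(mu=0.0, sigma=1.0):
            """One N(mu, sigma^2) draw via the Marsaglia polar method."""
            while True:
                u = 2.0 * random.random() - 1.0
                v = 2.0 * random.random() - 1.0
                s = u * u + v * v
                if 0.0 < s < 1.0:
                    break
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return mu + sigma * u * factor   # v * factor would give a second, independent draw

        print([round(gauss_polar(), 3) for _ in range(5)])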

  7. Victor,
    Well, if you run it for thousands of years, then you’re right that the Earth itself becomes unlikely and there is a non-negligible chance of the temperature dropping below absolute zero.

  8. Nick,

    error just shifts from one possible de solution to another, and it then depends on the later trajectories of those two paths.

    Yes, a good point.

  9. angech,

    The argument is that the amount of energy received has an error range, which means that the system could receive less rather than more energy, hence it could become increasingly colder.

    This doesn’t make any sense. He’s essentially applying an uncertainty to the change in forcing, but his reasoning is that this is a consequence of an uncertainty in the cloud forcing (I’ll ignore here that he’s confused the cloud forcing and the cloud feedback). Hence he is indeed suggesting that an increase in the external forcing could lead to cooling due to the uncertainty in how the clouds will respond. This is pretty nonsensical.

  10. angech says:

    ATTP
    It is a published paper.
    With reviewers that I think you respect.
    He can do maths.
    We are discussing it.
    In this light the claims of it being nonsensical have to address the substance of the maths and the substance of the claim, here the uncertainties in the cloud forcing.
    (ATTP “I’ll ignore here that he’s confused the cloud forcing and the cloud feedback”)
    * Could we agree that cloud forcing has an element of being a negative feedback or, failing that, that Pat Frank is certainly describing a forcing which can be negative as well as positive?
    If so, then cloud cover can cause a decrease in temperature (cooling) just as much as an increase (warming).
    Which is not nonsensical.
    If your argument is that cloud forcing can only ever be positive then your opinion on it being nonsensical holds, not because of his maths but because you have denied the validity of his claim on cloud feedbacks and forcings.
    So * ?

  11. Nick Stokes says:

    I’ve put up a post here on error propagation in differential equations, expanding on my comment above. Error propagation via de’s that are constrained by conservation laws bears no relation to propagation by a simple model which comes down to random walk, not subject to conservation of mass, momentum and energy.

  12. angech,
    The cloud response is a feedback, not a forcing. It is logically inconsistent for a feedback to lead to long term cooling when the initial change in forcing led to warming. If clouds could counteract the change in forcing, then the response should eventually return to 0 and the cloud feedback should then turn off. In this case, you’d still have the change in forcing, which was positive. Hence, the idea that the cloud response could lead to long term cooling even if the initial change led to warming, does not make any sense.

  13. Nick,
    Very nice post. Thanks.

  14. dikranmarsupial says:

    “ATTP
    It is a published paper.”

    angech, it is a paper that was reviewed and rejected 12 times before it was accepted by a journal. Failures of peer review happen, submitting it 13 times makes that very, very likely.

    Say there is a 0.95 probability that a journal will reject a bad paper. Thus the probability of it not being rejected by a journal is (1 – 0.95). The probability that *all* of the thirteen journals will reject it (assuming independent reviews) is (1 – 0.95)^13 = 1.2207e-17. Thus the probability that one or more would accept it, 1 – 1.2207e-17, is too close to 1 for it to be represented as anything other than 1 in double precision floating point arithmetic.

    Less hubris please. ATTP also has a good command of maths, but also a good grasp of the physics, which is conspicuous by its absence in Frank’s work. Stefan-Boltzmann means that cloud feedback is not going to cause 20C of warming in 100 years.

  15. dikranmarsupial says:

    Sorry, too early in the morning. It should of course be 1 – 0.95^13, which is about a 50:50 chance of a paper being accepted by one of the journals. Mea culpa. The point is that a paper being published doesn’t make it valid or correct or reliable. If you submit it thirteen times, it becomes a coin flip as to whether it gets published or not, no matter how wrong it is, and that is what Frank has done.
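    In code, with the same assumed 0.95 rejection probability:

        p_reject = 0.95                  # assumed chance a single journal rejects a bad paper
        p_published = 1.0 - p_reject ** 13
        print(p_published)               # ~0.49, i.e. roughly a coin flip across 13 submissions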

  16. Everett F Sargent says:

    dm,

    PF also had to pay-to-play; I think those 13 other failed attempts were to ‘so-called’ real journals (no author publishing fees).

    p = 1 for a paper that claims that 1+1=3, as long as you are willing to pay for it. Check it out in The International Journal of Incorrect Addition! 😉

  17. dikranmarsupial says:

    Yes, from what I have seen, it seems that he was getting to influence the choice of reviewers as well, which alters things somewhat. Journals should *never* ask the authors to nominate reviewers, it is a recipe for pal-review and if competent editors cannot select reviewers by themselves (because they don’t know the sub-field well enough), then the paper is likely to be out of the scope of the journal.

  18. Nick says:

    “I’ve put up a post here on error propagation in differential equations, expanding on my comment above. Error propagation via de’s that are constrained by conservation laws bears no relation to propagation by a simple model which comes down to random walk, not subject to conservation of mass, momentum and energy.”

    The title of your post is misleading, as in addition to Ornstein-Uhlenbeck there are many other stochastic DiffEqs that have elements of random walk (i.e. diffusion) and higher-order natural responses to them. One in particular is Fokker-Planck, which is the ubiquitous transport equation.

    The uncertainty can be in the forcing or the transport coefficients (diffusivity, mobility) and this will have a differing impact on the result depending on which terms have the error. The bottom-line is that you can’t in general get away from the random walk component.

    I really don’t care what Frank is trying to compute because his initial premise is wrong, and I tend not to correct the detailed work when the premise is wrong. In the real-world, there’s no extra credit for work past this point.

  19. Nick Stokes says:

    Paul
    “there are many other stochastic DiffEq”
    I’m sure that is true. I am talking about error propagation in the Navier-Stokes equations as implemented in GCMs. I have decades of experience in dealing with them. It is supposed to also be the topic of Pat Frank’s paper.

  20. Chubbs says:

    Good posts here and by Nick. Conservation laws and other underlying physics are driving GCM results, not the state of motion. Saw the paper below referenced recently by Peter Thorne. Water vapor feedback is captured well in “good” and “poor” performing GCMs. In other words, the physics is robust to uncertainty/errors in fluid motion.

    https://www.pnas.org/content/106/35/14778

  21. Nick, the point is that whether it is Fokker-Planck or Navier-Stokes, there is an Einstein relation that connects diffusion (i.e. random walk) to mobility (or inverse viscosity w/ N-S). But random walk in these cases is not the same as random walk in other situations since the ensemble statistics of the particles smooth out the excursions so all one sees is the average.

    The analogy that the misguided effort is trying to portray is that of individual trajectories, which would be like the Black-Scholes equivalent of Fokker-Planck. That is, if one follows an individual trajectory, one can try to isolate a trend or drift from the random walk of the object. But this is misguided, as the main variability is due to ENSO anyways and this is a deterministic path with the expected ensemble averaging of the diffusional component smoothing out the excursions.

  22. Everett F Sargent says:

    At what scales are your governing equations applicable (geometric, kinematic and dynamic)? We already know that tides are global. Although people often use a partial domain (say North Atlantic and GOM along the eastern seaboard) with the correct deep water open ocean boundary conditions specified.

    According to PF there is about a 32% chance that air temperature will vary by more than ~1.6K in any given year. Again, I ask what does a random walk have to do with the conservation laws? Nothing!

    I did a 2D random walk and consumed a lot of energy because I was really TokedOutDude while the door was only ever one step away the whole time. See that? The start of an energy conservation law.

  23. Everett F Sargent says:

    Consider an equidistant diagonal lattice …

    where t is time and x is temperature. All possible paths of N diagonal steps have the exact same path length!

    If we associate the same energy consumed for a single diagonal move then all possible paths taken consume the exact same energy.

    That is PF’s energy conservation law, all paths consume the exact same amount of energy. I would add that this is not a very useful energy conservation law.
    https://www.researchgate.net/profile/Francesca_Colaiori/publication/1950133/figure/fig1/AS:670714738786312@1536922263731/Typical-configuration-of-m-4-friendly-walkers-on-the-diagonal-square-lattice.pbm

  24. Everett said:

    “At what scales are your governing equations applicable to (geometric, kinematic and dynamic)? We already know that tides are global.”

    I’m not certain which discussion this is directed to, but here is Scientific Reports hot off the press.
    Switch Between El Nino and La Nina is Caused by Subsurface Ocean Waves Likely Driven by Lunar Tidal Forcing

    They asked the obvious question and looked at the data, which is the time-series of the predominant subsurface wave associated with ENSO. They essentially demonstrate that it’s not the Kelvin wave or wind forcing that is responsible — the consensus reasons — which leaves only one possibility, which is Carl Wunsch’s favorite forcing for ocean transport.

    “For each time step of simulation, the actual positions of the Sun and Moon are calculated using the semi-analytic planetary theory Variations Seculaires des Orbites Planetaires (VSOP87), and the associated gravitational forcing is determined. This tidal forcing option has not been used in the MPI model’s climate predictions or IPCC runs. Nevertheless, the MPI model has demonstrated that it is possible to add to GCMs explicit time-varying gravitational forcing from the Sun and Moon. The VSOP87 source code is available online (http://neoprogrammics.com/vsop87/), and we hope that the climate modelling community could install it to the climate models and conduct long-term coupled ocean-atmosphere experiments, which may provide insights on the relationship between tidal forcing and ENSO as suggested by the our observational study. If the model experiments confirm that lunar tidal forcing drives the observed subsurface ocean waves leading to the switch between El Nino and La Nina, this new physics will provide valuable long-range predictability, and help to improve the ENSO forecasts and decadal to multi-decadal predictions of global climate change.”

    Pinning this down will do more than anything else to eliminate the variable/random component, which allows the Franks of the world to contribute to the FUD.

  25. Holger says:

    @Nick Stokes Good post at your blog regarding basic numerical analysis.

  26. anoilman says:

    angech says: “It is a published paper.”
    You’re going too far with that claim.
    https://en.wikipedia.org/wiki/Frontiers_Media

    angech says: “With reviewers that I think you respect..”
    “Frontiers has used an in-house journals management software that does not give reviewers the option to recommend the rejection of manuscripts” and that the “system is setup to make it almost impossible to reject papers.”
    https://books.google.ca/books?id=dwFKDwAAQBAJ&pg=PA304&lpg=PA304&dq=journal+frontiers+predatory&redir_esc=y&hl=en#v=onepage&q=Frontiers&f=false

    angech says: “He can do maths.”
    “In July 2016 Beall recommended that academics not publish their work in Frontiers journals, stating “the fringe science published in Frontiers journals stigmatizes the honest research submitted and published there”, and in October of that year Beall reported that reviewers have called the review process “merely for show”.”
    https://web.archive.org/web/20161127152107/https://scholarlyoa.com/2016/10/27/reviewer-to-frontiers-your-review-process-is-merely-for-show-i-quit/

    angech says: “We are discussing it.”
    It’s a junk journal, angech, pure and simple. And dude. THIS IS NOT WHERE SCIENCE IS USED. You got that? Scientists publish to communicate to other scientists. Do you understand that very very basic principle behind publishing scientific papers? The only way his paper will prove itself is if you see a slew of scientists applying his new and novel theories to their own work. Judging by the actual public response from real scientists… His paper was born on the fringes, and is still born.

  27. Windchaser says:

    I note at this point that Roy Spencer has also written a blog post on this paper, basically agreeing with the major argument of Stokes:

    While this error propagation model might apply to some issues, there is no way that it applies to a climate model integration over time. If a model actually had a +4 W/m2 imbalance in the TOA energy fluxes, that bias would remain relatively constant over time. It doesn’t somehow accumulate (as the blue curves indicate in Fig. 1) as the square root of the summed squares of the error over time (his Eq. 6).

    https://www.drroyspencer.com/2019/09/critique-of-propagation-of-error-and-the-reliability-of-global-air-temperature-predictions/

  28. Windchaser,
    Yes, I also noticed that even Roy Spencer doesn’t agree with Pat Frank’s analysis. I wonder what names Pat is going to call him?

  29. MMM says:

    Another way to think about this problem: pretend one of the GCMs is actually the “real world”, and the other 40 are all emulations of that GCM. There will be almost exactly the same average error in radiative forcing between the “real” GCM and the “emulations” as there is between the GCMs and the real Earth. So why don’t all the GCMs diverge in a Frankian manner from the “real” one? Because Frank is dead wrong. And no scientist worth his salt should take more than 30 seconds to realize that the paper isn’t worth the paper it isn’t printed on. It is truly embarrassing for the journal, and for any reviewers who didn’t recommend immediately throwing it into the circular file.

  30. Richard Arrett says:

    I read Dr. Roy Spencer’s post with great interest. I also read Dr. Pat Frank’s comment in response. I am leaning toward Spencer – but confused about the +- 20 unit issue. Pat says the +- 20 isn’t temperature, which Roy took it to be. Didn’t read hard enough to figure out what the units are for the +-20 (or is it unitless?). Anyway – this is interesting (to me at least).

  31. Richard,

    Pat says the +- 20 isn’t temperature, which Roy took it to be. Didn’t read hard enough to figure out what the units are for the +-20 (or is it unitless?).

    Pat’s just talking more and more nonsense. If it isn’t temperature what is it? His equation for temperature evolution is:

    \Delta T_i (K) \pm u_i = 0.42 \times 33K \times \left[ \left( F_o + \Delta F_i \right) / F_o \right] \pm \left[ 0.42 \times 33K \times 4 Wm^{-2}/F_o \right].

    Clearly the u_i term has units of K and is clearly the term he then propagates. To claim that it’s some uncertainty statistic, and not a temperature, is just nonsense. Uncertainties represent something real (i.e., they represent the range of possible results). They’re not some kind of statistic that doesn’t represent something real.
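    (For what it’s worth, plugging the paper’s own numbers in: 0.42 \times 33K \times 4/33.946 \approx 1.6 K per year, which is a quantity in Kelvin, not a dimensionless statistic.)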

  32. I knew Frontiers were a bit of a shady multi-level marketing scheme, but that it is not possible to recommend rejecting a paper is amazing. And after that they publish the article with your name as reviewer on it and in doing so destroy your scientific reputation. Thanks for the warning.

  33. verytallguy says:

    Richard,

    “Anyway – this is interesting (to me at least).”

    Which aspect is interesting? (Genuine question)

  34. Nick Stokes says:

    VV,
    “And after that they publish the article with your name as reviewer on it”
    No, I think the options are submit a recommendation to publish, or don’t submit a recommendation at all. In this case Carl Wunsch and Zanchettin submitted recommendations to publish, and their names are listed. The other two referees did not.

  35. Nick, good. I still will not review for them.

    So that is another two reviewers for the computation of dikranmarsupial. In a normal journal, two rejections and you are dead.

  36. Everett F Sargent says:

    PF: “I’m sorry Roy. You’ve made one very fundamental mistake after another. Your criticism has no force.”

    Translation: You all are just steaming barking mad that I discovered the real error first…

  37. dikranmarsupial says:

    I wouldn’t want to submit articles for a journal where the reviewers cannot unequivocally reject a paper. Any reviewer that rejects my paper when I am wrong (which does happen, no matter how hard you try – we are only human) is a good friend to me and has my best interests at heart (if they didn’t they’d be happy to see me destroy my academic reputation).

    Not going to review or submit to Frontiers.

  38. Mark B says:

    Richard Arrett says: September 11, 2019 at 9:26 pm – I read Dr. Roy Spencer’s post with great interest. . .

    What I found most interesting about Dr Spencer’s post is that it wasn’t until the 24th paragraph* that Spencer gently said Frank was talking nonsense, after having aired his own grievances about the barriers to skeptics getting published** and climate modeling in general.

    This rather speaks to the problem of the “red team” review in that there seems to be little coherency in what the red team believes except that AGW isn’t a problem.

    * – 24th paragraph in the first version I saw, it’s been revised at least twice since

    ** – difficulties of skeptics getting published – a bit weird to tie this in with a review of a deeply flawed paper

  39. anoilman says:

    ATTP: Aren’t models ensemble means of simulations? The reason I ask is that averaging different runs is done to remove random error. It’s how we clean up data.

    You pointed to VVs bit in 2015;
    http://variable-variability.blogspot.com/2015/09/model-spread-is-not-uncertainty-nwp.html

  40. AoM,
    I think the problem is that what’s presented isn’t really an ensemble of one model with a range of initial conditions and with a range of possible input parameters. It’s a whole mixture of models. Hence the range is a representation of uncertainty, but isn’t really a formal uncertainty (it doesn’t consider all possible sources of uncertainty). At least, I think that is roughly right.

  41. anoilman says:

    ATTP: So… if you had error in such a system, then it would be reduced. Just humor me and average all the runs of your forcing graph. (Yeah I get this isn’t kosher, but then neither is Pat Frank pretending simulations are single runs with massive wide error.)

  42. AoM,
    I’m not sure I follow what you mean by “it would be reduced”? If I average all the runs of my top graph, it would fall right in the middle.

  43. anoilman says:

    yup.. In 100 years it would be right in the middle…

  44. dikranmarsupial says:

    I think it is important to bear in mind that the ensemble mean is not directly a (conditional) prediction of the observed evolution of the real climate, but only of the forced component of the real climate (because the chaotic internal variability that depends on the initial conditions will be averaged out as well, but it will still be present in the observed climate).

  45. anoilman says:

    dikran… (not that I want to propagate nonsense…)

    I know.. I just mean that a lot of folks look on simulation output as though it’s some sort of single factual run. But it really isn’t. Although that is how current temperatures are compared against simulation runs.

    The reason for executing multiple simulations (an ensemble), is in part to average out error. It also gives us information about standard deviation, etc.

    (I guess I can just drop this…)

  46. lerpo says:

    Seems Pat Frank has been at this for over a decade. Some good responses from the realclimate crowd at the time: http://www.realclimate.org/index.php/archives/2008/05/what-the-ipcc-models-really-say/comment-page-9/#comment-95545

  47. Everett F Sargent says:

    lerpo,

    Yup, but back in 2008, it was a linear-with-time propagation error …
    https://www.skeptic.com/wordpress/wp-content/uploads/v14n01resources/climate_of_belief.pdf

  48. Marco says:

    Good find, lerpo.

    I especially like this inline comment from Gavin:
    “Response: Ummm… a little math: log(C/C0)=log(C)-log(C0). Now try calculating log(0.0). I think you’ll find it a little hard. indeed log(x) as x->0 is undefined. What this means is that the smaller the base CO2 you use, the larger the apparent forcing – for instance, you use CO2_0 = 1 ppm – but there is no reason for that at all. Why not 2 ppm, or 0.1 ppm, or 0.001 ppm?”
    http://www.realclimate.org/index.php/archives/2008/05/what-the-ipcc-models-really-say/comment-page-9/#comment-95789

    Remind anyone of the arbitrary 1-year period Frank uses in his paper?

  49. dikranmarsupial says:

    @anoilman There is a good discussion of the meaning of the ensemble here

    http://julesandjames.blogspot.com/2010/01/reliability-of-ipcc-ar4-cmip3-ensemble.html

    I find it hard to understand how the IPCC continued with the “truth centred” interpretation of the ensemble rather than the “statistical exchangeability” interpretation, as the former is obviously wrong (consider an ensemble of “perfect” earths, e.g. Earths from parallel realities with similar forcings).

  50. In statistical mechanics, an ensemble is used because that’s the way we understand the physics.

    In climate science, an ensemble is used as a crutch because we can’t isolate the true physics.

    Having said that, I seriously think we can do better. If I didn’t, I wouldn’t be working on the challenge.

    So James Annan says the following view is implausible, because we can never know the “truth”:

    For comparison, I played a game of darts with respect to this grand-scale geophysics topic earlier this year: https://geoenergymath.com/2019/02/13/length-of-day/

    How close to the bulls-eye can we get this? At some point, we realize that we are hitting the bulls-eye directly and then it’s a matter of piling on 2nd-order effects to finish it off and claim victory.

    Why can’t this be done for climate variability? Nothing says it can’t.

  51. Paul,
    The problem is that the intrinsic variability means that the range could represent the range of actually physically plausible pathways, rather than simply representing an uncertainty around the actual pathway. If we have a perfect model, and chaos was not a problem, then maybe we could determine the actual pathway, but we probably can’t. Hence, we shouldn’t necessarily expect the ensemble mean to represent the best estimate of reality.

  52. I find it hard to understand how the IPCC continued with the “truth centred” interpretation of the ensemble rather than the “statistical exchangeability” interpretation

    If you formulate it like this, the IPCC position makes perfect sense communication-wise. 😉

  53. Paul said:

    ” If we have a perfect model, and chaos was not a problem, then maybe we could determine the actual pathway, but we probably can’t.”

    Chaos is the other crutch. If you look at the analysis of the geophysics problem that can be accurately modeled, that of LOD, you will realize that it is a simple rigid-body response. That’s of course straightforward to represent as a direct forcing. But climate variability is a non-rigid-body or fluid response to a forcing. This only has the complication that the response is non-linear, and that non-linearity can be accommodated by solving Laplace’s Tidal Equations (aka the GCM’s) properly.

    I seriously think we can do this, and from my earlier comment above #comment-162350, I am not alone in this opinion.

  54. angech says:

    …and Then There’s Physics
    “The problem is that the intrinsic variability means that the range could represent the range of actually physically plausible pathways, rather than simply representing an uncertainty around the actual pathway. If we have a perfect model, and chaos was not a problem, then maybe we could determine the actual pathway, but we probably can’t. Hence, we shouldn’t necessarily expect the ensemble mean to represent the best estimate of reality.”
    Thanks for this refreshing bit of reality.
    Also I see.
    “James Annan says the following view is implausible, because we can never know the “truth”:

    This sums up the Pat Frank controversy.
    He is pointing out that the way we do things still has lots of potential errors in it.
    This means that there is a small chance that the arguments for AGW may be wrong.
    Shooting the messenger is not the right response.
    Improving the understanding as Paul suggests is the correct way to go.
    People should be thanking him for raising the issue and addressing his concerns on their merits. Then doing the work to address the concerns.

    My worry is that if he is correct the models have a lot more self-regulation in them, addressed to TOA, than they should, which in turn makes them run warm.

  55. angech,
    Pat Frank is wrong. He is definitely wrong. You really don’t need to worry. What Pat Frank has presented has little bearing on reality, let alone on how climate models actually work.
