Guest post: Some thoughts on Climate Modeling

This is a guest post by Michael Tobis, partly motivated by some discussions we’ve had about modelling; in particular, the difference between modelling in the physical sciences and modelling in other research areas. My personal – and not entirely ill-informed – view is that physically-motivated models (like climate models) have an advantage over other models, in that they are constrained by the fundamental laws of physics. That doesn’t, however, mean that they don’t have flaws and can’t be improved. Michael’s post is an attempt to start some kind of discussion about this general issue.

SOME THOUGHTS ON CLIMATE MODELING

Some recent discussion here on climate models, in the thread “It’s more difficult with physical models”, raised several interesting issues.

In particular I’d like to expend some effort on Richard Tol’s claim that

“GCMs have a remote relationship with physics. The models are full of fudge and tuning factors. The fact that they roughly represent observations may, for all we know, reflect skill in model calibration rather than skill in forecasting”

John von Neumann famously said, “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” Understanding this point is important when evaluating models.

(For an example of a four-parameter elephant, see:

http://www.johndcook.com/blog/2011/06/21/how-to-fit-an-elephant/ )
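
As a concrete illustration of von Neumann’s quip, here is a minimal Python sketch of the four-complex-parameter elephant described in that post (the parameterization comes from Mayer, Khairy & Howard, “Drawing an elephant with four complex parameters”, Am. J. Phys. 2010; the coefficient wiring below follows the widely circulated script, so treat the details as illustrative rather than authoritative):

```python
import numpy as np
import matplotlib.pyplot as plt

# Four complex parameters encode the whole outline; a fifth wiggles the
# trunk and places the eye.
p1, p2, p3, p4 = 50 - 30j, 18 + 8j, 12 - 10j, -14 - 60j
p5 = 40 + 20j  # the "wiggle" parameter

def fourier(t, C):
    """Evaluate a truncated Fourier series whose complex coefficients C
    carry cosine amplitudes in their real parts and sine amplitudes in
    their imaginary parts."""
    f = np.zeros_like(t)
    for k, c in enumerate(C):
        f += c.real * np.cos(k * t) + c.imag * np.sin(k * t)
    return f

def elephant(t):
    Cx = np.zeros(6, dtype=complex)
    Cy = np.zeros(6, dtype=complex)
    Cx[1], Cx[2], Cx[3], Cx[5] = p1.real * 1j, p2.real * 1j, p3.real, p4.real
    Cy[1], Cy[2], Cy[3] = p4.imag + p1.imag * 1j, p2.imag * 1j, p3.imag * 1j
    x = np.append(fourier(t, Cx), [-p5.imag])  # last point is the eye
    y = np.append(fourier(t, Cy), [p5.imag])
    return x, y

x, y = elephant(np.linspace(0, 2 * np.pi, 1000))
plt.plot(y, -x, '.')  # rotate so the elephant stands upright
plt.show()
```

The point is not the elephant; it is that a handful of free parameters can reproduce almost any single curve, which is exactly why a model that only matched one curve would prove very little.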

Tol’s critique seems to imply that exactly this sort of curve-fitting is going on.

It is the case that there are a significant number of parameters in a climate model that are not strongly constrained by first principles. But these parameters are not just magic numbers – they have an actual physical meaning, so they are constrained by observations. Here’s an example of how tightly coupled modeling and observational studies can be in investigating the best values for these parameters.

But effort is not enough. It’s important to see results. In this regard, a succinct summary is provided by Steve Easterbrook here.

It’s important to realize that full-fledged climate models (GCMs) do not just output a single number (global temperature): they output a three-dimensional realization of the climate system advancing through time. Their output contains immense amounts of data about a system with complex but repeatable behaviors. That immensity and complexity cannot be captured with a few numbers the way the “elephant” can.

On the referenced thread here, I responded to Tol in a different way:

“Tol’s suggestion that GCMs can be tuned to give any desired result is a testable hypothesis.

The oil companies have enormous technical talent at their disposal. Presumably if there were anything to this hypothesis they would have tried to test it at some time in the last quarter century.

The motivation to create an alternative model which can comparably well replicate observed and paleo climate with very low sensitivity is surely enormous. Where is their result?”

This got far less attention than I had hoped. Well, none, actually. But I think it’s a suggestion worthy of consideration.

Maybe we should drop our mutual animosity, roll up our sleeves, and try the experiment I suggested.

I don’t mean to suggest that such an effort would be trivial.

Nevertheless I’d like to begin to discuss climate models and their role in climate science in earnest, and to consider in detail the extent to which Tol’s suggestion is right or wrong.

Despite the points I made in defense of climate models above, I am someone who has not been entirely uncritical of the climate modeling enterprise.

I recently came across a printout of an essay I wrote in 2004 or so and OCR’ed it back to life. My essay, with very minor edits, is here.

The main points I raised are 1) that the software methodology of these models is antiquated, 2) that the career paths for people interested in applying both software and climate expertise are woefully unrewarded, 3) that the attachment to hard-won code bases is ill-advised, 4) that the attachment to pushing complexity bounds distracts from important problems, and 5) that cumulatively these problems are impeding scientific progress. Alas, I think these points are hardly less true than when I wrote them. Arguably, it’s worse.

Various people have responded to this essay by agreeing that there are similar issues in other physical sciences.

The question of whether climate models can be dramatically rather than incrementally improved, using new optimization methods emerging in computational sciences, remains open in my opinion. If we corner ourselves into needing climate geoengineering, we will need much better modeling tools than are now available. And testing the naysayers’ claim that the models are somehow tuned to give the gloomy result that we are presumed to prefer has value in itself. Building another impenetrable pile of Fortran will not move us toward either goal.

Is something different and better possible? I think so.

Could something different and better actually help us better constrain the climate sensitivity in an objective way? I think that’s sufficiently plausible that we ought to consider it.


176 Responses to Guest post: Some thoughts on Climate Modeling

  1. jsam says:

    Economists are not batting from strength when discussing models.

  2. Joshua says:

    Off topic, but..

    ==> “…and not entirely ill-informed”

    I’m certainly willing to accept that your view is not un-informed, and I assume that’s what you meant… but from a semantic standpoint, seeing someone describe their own view as not being ill-informed looks like something I’d read from Judith or many other “skeptics.”

  3. Hmm, is there a difference between not being ill-informed, and not being un-informed? I meant the latter, if there is. I’m not entirely getting your point, though.

  4. Joshua says:

    ==> “The motivation to create an alternative model which can comparably well replicate observed and paleo climate with very low sensitivity is surely enormous. Where is their result?”

    It would certainly be interesting to see results from such an effort (assuming it hasn’t already been undertaken).

    It reminds me of the question as to why, with as much focus as we see among “skeptics” generally regarding the magnitude of the “consensus” (despite their claims that it’s irrelevant and/or not instructive in any way to consider that magnitude), there aren’t more efforts from “skeptics” to quantify that magnitude.

    Not that only “skeptics” should be motivated to conduct that sort of experiment. Seems to me that such an approach should be a fundamental component of trying to quantify the probabilities associated with different sensitivity estimates.

  5. anoilman says:

    My understanding is that they are going back and forth between the models and the source science. If there’s a discrepancy in testing, they can go back and correct the models… but also the physics. I.e., they may suspect that something else is going on, and investigate.

    Perhaps ‘curiosity driven research’ has a different meaning to economists like Richard Tol?

  6. Joshua says:

    It’s not really worth wasting any more electrons on… just a semantic nitpick… maybe you should just delete my 3:15 (as well as this comment).

  7. It’s fine. I guess I should use un-informed in future 🙂

  8. mt says:

    Joshua: Not that only “skeptics” should be motivated to conduct that sort of experiment. Seems to me that such an approach should be a fundamental component of trying to quantify the probabilities associated with different sensitivity estimates.

    Indeed. I agree.

    The sorts of computation that interest me in this regard are sometimes known collectively as “uncertainty quantification” (UQ). My last formal position in science involved applying some of those techniques to an extant GCM (NCAR’s CCSM). I believe that the cutting edge CGCMs are too complicated (and in some ways mathematically problematic) to perform UQ and related optimization techniques effectively. This is part of why I think some group should build a relatively minimal but maximally tunable CGCM.
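
    To make that concrete without a CGCM: the logic of UQ can be shown with a toy zero-dimensional energy balance model. The sketch below uses made-up feedback ranges (all the numbers are illustrative, not from any published study): sample the uncertain parameters, evaluate the cheap model for each draw, and look at the spread of the emergent sensitivity. Real UQ on a GCM is the same idea with a vastly more expensive model, which is exactly why mt wants a minimal, tunable one.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    F2x = 3.7  # W/m^2, standard forcing for doubled CO2

    # Illustrative (made-up) feedback terms, in W/m^2/K:
    planck = -3.2                        # Planck response, well constrained
    wv_lapse = rng.uniform(0.8, 1.4, n)  # water vapour + lapse rate, sampled
    cloud = rng.uniform(-0.2, 1.0, n)    # cloud feedback, poorly constrained

    lam = planck + wv_lapse + cloud      # net feedback parameter (negative)
    ecs = -F2x / lam                     # emergent equilibrium sensitivity, K

    print(f"median ECS {np.median(ecs):.2f} K, 5-95% range "
          f"{np.percentile(ecs, 5):.2f}-{np.percentile(ecs, 95):.2f} K")
    ```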

  9. To add a more serious comment, I largely agree with what I think MT is saying. Scientists are often very keen to add more and more complexity to their models so that they can try and understand a system in as much detail as possible. The problem with this is that it can lead to some element of tuning (although as MT points out, this type of tuning is often constrained by observations, or some kind of physical understanding) and it can often mean that it’s very difficult to understand why the system might behave in a particular way. Often, it’s valuable to keep things simpler and to try and probe the different processes in more detail/depth. That way you can more easily gain some actual understanding of what might influence a system and how it would do so. Running a very complex model and saying “here’s what happened and I think it’s because of this” is not always as instructive as running a simpler model where you can probe the processes in more depth.

  10. Joshua says:

    ==> “Running a very complex model and saying “here’s what happened and I think it’s because of this” is not always as instructive as running a simpler model where you can probe the processes in more depth.”

    This does seem very close to a theme I’ve seen on “skeptic” sites…and I think it’s also close to a theme that I’ve seen among some modelers where they describe the downside of investigating sensitivity by developing ever more complex models to incorporate more (and also uncertain) parameters. Of course, I’m only speaking here as someone who has very limited ability to understand all of this.

    And considering that, I have a more basic question: My understanding is that sensitivity is an outcome, not an input parameter. How would one go about (if it’s possible to explain in very basic terms 🙂 ) comparing models where sensitivity is an outcome to models where sensitivity is an input? Is that even what is being suggested?

  11. anoilman says:

    Michael Tobis: I don’t really disagree with your opinions on what might be wrong with Climate Model development, but I don’t see any benefit in attempting to replace it.

    I’m not clear that science is being held up by model code. I’m not clear what ‘faster’ or ‘better’ will really win.

    I pointed out that old code often has fixes for oddities (self-inflicted?) that just aren’t in pure math models… tossing old code is very, very costly, as you rediscover those fixes the hard way.

    I’d agree that adding more Fortran is incredibly unwise. I’m under the (misguided) opinion that this would be cruft in a box by now. I’ve found software abstraction of any form has just been pure folly compared to knowing what the problem is that you are solving and directly solving it. I’ve been brought in many times to clean up after a team has done exactly that.

    I liked the essay. One point, though, is that computer programming and being a scientist are often very, very different; Easterbrook mentions this. I get that a good team is the way to go, but the sheer difficulty of putting together a kick-ass team of ninja scientists is ‘hard’. You will go through a lot of warm bodies trying to do that. You may fail.

    Enough of being negative…

    I’m an implementer/optimizer by nature, and I like to get to the heart of a problem as directly as possible. I get the impression that you’d like to see an optimized climate model designed and built from the ground up to be ‘way fast’, and cruft free.

    The oil industry has long since adopted CUDA cores for processing, and I can only assume they wish to utilize the same technology.
    http://http.developer.nvidia.com/GPUGems3/gpugems3_ch38.html
    And the latest clusters can be built with these;
    http://www.nvidia.ca/object/visual-computing-appliance.html
    (FYI: These guys are about to replace a PC cluster with just 2 Video cards; http://biglazyrobot.com/)

    I doubt the transfer over could be incremental.

    (FYI: I am under the impression that seismic is easy by comparison to climate modeling. I don’t think there’s much new science there, and they’ve been building custom clusters for as long as I can remember.)

  12. Joshua,
    Yes, sensitivity is an outcome (or an emergent property, I think) of climate models. However, I don’t think the suggestion here is that one moves to models where sensitivity is an input. I think the suggestion might be that one could develop models that ignore many complex processes and simply focus on some aspects. In that way one might be able to – for example – better understand clouds, or the hydrological cycle. It’s true, I guess, that some of the effects that are seen in simpler models may change when you add complexity, but you could still gain a lot of understanding from the simpler models, even if making detailed projections isn’t possible.
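
    A toy illustration of “outcome, not input” (a zero-dimensional energy balance model with purely illustrative numbers, nothing like a real GCM): the inputs are physical quantities – a heat capacity, a feedback parameter, a forcing – and the warming is diagnosed from the run rather than prescribed anywhere.

    ```python
    # Toy zero-dimensional energy balance model: C dT/dt = F - lam * T.
    # No line of this code specifies a sensitivity; we read it off the run.
    C = 8.0     # heat capacity, W yr m^-2 K^-1 (illustrative mixed-layer value)
    lam = 1.3   # net feedback parameter, W m^-2 K^-1 (illustrative)
    F = 3.7     # abrupt doubled-CO2 forcing, W m^-2

    dt, T = 0.05, 0.0  # time step in years, initial temperature anomaly in K
    for _ in range(int(500 / dt)):   # integrate until near equilibrium
        T += dt * (F - lam * T) / C

    print(f"emergent equilibrium warming: {T:.2f} K")  # approaches F/lam = 2.85 K
    ```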

  13. Arthur Smith says:

    It might be helpful to indicate what the main physical constraints are. The inputs are: incoming light from the Sun, the constituents of the atmosphere and oceans, the physical properties of Earth (e.g. gravitational constant, geography (land/ocean locations etc), rotation rate, orbit, etc), the physical properties as far as we know them of the molecules in the atmosphere etc. And then physical laws are applied – conservation of mass, conservation of energy and momentum, Newton’s laws, laws of radiation and thermodynamics etc. The result is a computational model of the planet over time from which properties like temperature and radiative exchange can be extracted. Those are the outputs. The size of radiative forcing is not an input, it’s an output. Temperature change, and therefore sensitivity to forcing, is an output. Various components of the energy balance are all outputs of this sort of model. In principle there is an exact mathematical transformation from the inputs to the outputs, of which any model is just an approximation. If you think the system will really do something different from what the models show, then you should be able to find out where the model approximation is failing. I think that’s the challenge Michael presents above – why haven’t those with vested interests in minimizing the implications of climate change tried? Perhaps that’s good evidence that the physical constraints on models don’t leave much room for alternate answers.

  14. Arthur,
    Indeed, this is one of Michael’s suggestions

    I think that’s the challenge Michael presents above – why haven’t those with vested interests in minimizing the implications of climate change tried?

    which I do think is an interesting point.

    I’ve just realised that my earlier comment was more responding to what Michael said in his longer essay, especially where he talks about whether to Supercompute or to just Pretty-Big-Compute.

  15. mt says:

    “Perhaps that’s good evidence that the physical constraints on models don’t leave much room for alternate answers.”

    I absolutely think that’s true. But I think the fact that for decades we have been stuck with a roughly twofold disparity among production climate models on sensitivity is an indication that it’s possible to do better.

    I’m not really asking for models of individual subprocesses as the goal of the enterprise, though I think more formal and contemporary software development techniques will force those to happen as a side effect. I’m asking for a more complete and formal exploration of models of complexity comparable to that of the first successful atmosphere/ocean/sea-ice CGCMs, which are now two decades old.

    What counted as pretty-big computers when I wrote the essay are now comparable to off-the-shelf workstations. So one sub-goal would be a model that could be used for experiments by amateurs. But the main scientific output would come from very large ensembles of such models, probably run on facility clusters, but conceivably distributed in a climateprediction.net sort of way.

    It’s fair to point out, by the way, that climateprediction.net is largely an effort to do UQ on climate models. Some of the limitations of that project tie into the nature of the models being tested.

  16. bill shockley says:

    Hansen puts modeling at the very bottom of the knowledge food chain:
    1) paleoclimate data
    2) modern observations
    3) modeling

    And yet we’ve seen empirical scientists savaged by modelers recently. Schmidt vs Wadhams, Archer vs Shakhova.

    Michael, I’ve even seen you fire a volley or two at Shakhova.

  17. mt says:

    Re CUDA and other efficiency tricks, I leave that to the cutting edge and supercomputers. There is much to be learned from very high resolution models.

    But the work that can be done with full fluid dynamics at coarse resolutions (we called them R-15 and T-42 for reasons you can look up) has not been completed. I am suggesting that re-examining this space allows for several advantages.

    Unfortunately I bungled my career too badly for me to have a chance to appeal directly to funding agencies, but I suspect the real reasons this isn’t happening are institutional, not really scientific.

    So I’m hoping to get someone else interested, because even if I get nothing out of it but a tip of someone’s hat, I believe it’s still worth someone doing.

  18. mt says:

    bill shockley – Modeling is hardly the “bottom of the food chain” in any sense, and much of Hansen’s most valuable work is model based. I am not sure what quote you are going from but I think you misunderstood.

    If you are looking for verification of climate theory insofar as it affects policy, which the public seems to think is what climatologists mostly do, that ordering makes sense.

    But people who actually understand climate somewhat want to understand it better. We would do this whether or not there were policy issues. And modeling would be crucial to the effort, much as it is in other physical sciences.

    Identifying whom to pay attention to in a science in which one is inexpert is a challenge, especially once public controversy emerges.

    Wadhams and Shakhova are among those who have negligible impact within science but a large impact outside science. There are others frequently quoted by footdraggers and naysayers. Some of those (Legates and Christy spring to mind) are also observationally oriented. The best scientists are not just “stamp collectors” in Rutherford’s sense.

  19. Willard says:

    Speaking of curve-fitting:

    The implied model of Tol’s meta-analysis is that the published studies represent the true curve plus independent random errors with mean 0. I think it would make more sense to consider the different published studies as each defining a curve, and then to go from there. In particular, I’m guessing that the +2.3 and the -11.5 we keep talking about are not evidence for strong non-monotonicity in the curve but rather represent entirely different models of the underlying process.

    In short, I don’t think the analysis can be fixed by just playing with the functional form; I think it needs to be re-thought. You just can’t think of these as representing 14 or 21 different data points as if they represent observations of economic impact at different temperatures. The data being used by Tol come from some number of forecasting models, each of which implies its own curve.

    http://andrewgelman.com/2014/05/27/whole-fleet-gremlins-looking-carefully-richard-tols-twice-corrected-paper-economic-effects-climate-change/

  20. anoilman says:

    mt: CUDA isn’t cutting edge supercomputing any more. I use my PC at home for rendering, and I get 9.2 TFlops out of a pair of GTX 980s which cost $1100 US (for both). The beast I linked to before costs $50k US, and is murderously more powerful.
    https://en.wikipedia.org/wiki/GeForce_900_series

    My mechanical engineer has pointed out, however, that fluid dynamics has resisted GPU-based computation. I spec’ed his computer for fewer cores but higher clock speed. (We do a lot of simulation in down-hole engineering, for things like cavitation and erosion.)

    Perhaps I’ll look into all this…

  21. bill shockley says:

    mt,

    Hansen frequently waxes philosophical on how he perceives the knowledge hierarchy in his particular corner of science. He seems to base his views on how fruitful the various avenues have been, hence the ranking. That is not to say that modeling has not contributed some crucial results – probably, primarily, the pattern of regional impacts around the globe. His comments seem geared to counter an over-reliance on, and misplaced confidence in, climate models.

    Here’s an excerpt that reflects this opinion:

    Paleoclimate, changes of climate over Earth’s history, provide valuable insights about the effects of human perturbations to climate, even though there is no close paleoclimate analog of the strong, rapid forcing that humans are applying to the climate system. International discussions of human-made climate change (e.g., IPCC) rely heavily on global climate models, with less emphasis on inferences from the paleo record. A proper thing to say is that paleoclimate data and global modeling need to go hand in hand to develop best understanding — almost everyone will agree with that. However, it seems to me that paleo is still getting short-shrifted and underutilized. In contrast, there is a tendency in the literature to treat an ensemble of model runs as if its distribution function is a distribution function for the truth, i.e., for the real world. Wow. What a terrible misunderstanding. Today’s models have many assumptions and likely many flaws in common, so varying the parameters in them does not give a probability distribution for the real world, yet that is often implicitly assumed to be the case.

    You can also find such comments in his lectures available on YouTube, where he has a slide ranking the three contributing fields. Give me a day and I’ll hook you up.

    As far as impacts in science, I don’t see how you can deny the influence of the empirical data of sea ice loss in the Arctic, for which Wadhams is the poster child and on which models have fallen flat. Same with Antarctic ice sheets.

    As for Shakhova, maybe she’ll be remembered at a later date, as are so many of the great scientists throughout history, who were merely connecting dots. I would like to know what the particular beef is with her work. Seems like some are willing to actually challenge the veracity of her data.

  22. Andy Skuce says:

    I have never run anything close to a climate model, but I have written (an age ago) code for geophysical models (stress, gravity, magnetics, seismic). I have also run Monte-Carlo resource estimation models and “economic” models (these were really just discounted cash-flow models).

    Indeed, you could always get a match with observations with just a few parameters, but that never was the point although, admittedly, some mediocre practitioners sometimes thought it was. For me, the point of modelling was to bracket reality by being able to exclude cases that just couldn’t fit. Usually, I learned much more at the stage of figuring out the right range of input parameters, before hitting “run”.

    There are two pitfalls with numerical modelling, in my view: A) The audience is only interested in the output—and often only the central values of the outputs rather than their distribution—rather than the more interesting inputs and parameterizations. B) As models become more complex, the modellers themselves risk becoming lost in a virtual reality and are increasingly unable to communicate the complexity of what they are doing to outsiders.

    I suppose this is just a plea to keep things as simple as possible, and for diversity. Climate science, at least, benefits from having lots of independent models that can be compared against each other. Let’s hope that diversity is maintained and that any attempt to build the one model to rule them all is resisted.
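
    Andy’s “bracket reality by exclusion” point is easy to show in miniature. A hypothetical sketch (the model, the observation and the ranges below are all invented for illustration): draw parameters from their prior ranges, discard the draws inconsistent with an observation, and see what the survivors imply elsewhere.

    ```python
    import numpy as np

    # Invented two-parameter model y = a*x + b, with a hypothetical
    # observation y_obs at x = 1 and prior ranges for (a, b).
    rng = np.random.default_rng(1)
    n = 200_000
    a = rng.uniform(0.0, 2.0, n)
    b = rng.uniform(-1.0, 1.0, n)

    y_obs, sigma = 1.5, 0.1                          # made-up measurement at x = 1
    keep = np.abs(a * 1.0 + b - y_obs) < 2 * sigma   # crude consistency test

    pred = a[keep] * 3.0 + b[keep]                   # what survivors predict at x = 3
    print(f"{keep.mean():.1%} of draws survive; prediction at x=3 "
          f"spans {pred.min():.2f} to {pred.max():.2f}")
    ```

    Most of the learning happens in choosing the prior ranges and the consistency test, which is Andy’s point about the inputs often being more interesting than the outputs.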

  23. bill shockley says:

    Andy Skuce,

    nicely said. I’ve done some rudimentary modeling on my computer at home and come away with similar conclusions regarding the bracketing/narrowing of possibilities, and how “lost” you can get if you lose sight of your assumptions or overlook them entirely.

  24. bill shockley says:

    This is the typical “basis of understanding” slide and commentary:

    [10:20]
    Our understanding of this is based not mainly on climate models as contrarians will say, it’s based on understanding the Earth’s history and how the Earth has responded to forcings, to changes in the boundary conditions in the past. Changes in atmospheric composition and surface properties. And also ongoing observations—global observations—and models help, but they’re not the primary source of our understanding.

  25. mt says:

    The word “this” is important in “our understanding of this”; that doesn’t remotely justify “the very bottom of the knowledge food chain” regarding models.

    “This” means risky anthropogenic climate change. Contrarians only care about “this”, but I think that responding with more “this” is part of our problem.

    If people knew how deep “that” and “the other” were, they might have a different understanding of “this”. We are constantly talking about climate change without explaining how much we know about climate.

    To get back to ATTP’s original point, that seems to be a hell of a lot more than economists know about economics.

    Still, I suspect that climate models can at least conceivably be a much better contributor on this front as well as others, and that existing efforts are reaching diminishing returns. Could this be for reasons having to do more with software engineering than with science? I suspect so.

  26. Andy Skuce says:

    I have posted this before on ATTP, so my apologies if you have seen it before. But anyone interested in economic modelling, and modelling generally (presumably people following this thread), really should read this article. It’s by John Kay, an FT columnist and economist. It’s a long and entertaining rant. It has not gone uncontested.

    http://www.johnkay.com/2011/10/04/the-map-is-not-the-territory-an-essay-on-the-state-of-economics

  27. bill shockley says:

    mt,

    Maybe modelers should switch their focus to the problems of sea and land ice where critical processes are yet to be coded in. I don’t know if that would be a practical thing from a familiarity/background standpoint.

    From a recent Huffington Post blog post by Hansen:

    IPCC conclusions about sea level rise rely substantially on models. Ice sheet models are very sluggish in response to forcings. It is important to recognize a great difference in the status of (atmosphere-ocean) climate models and ice sheet models. Climate models are based on general circulation models that have a long pedigree. The fundamental equations they solve do a good job of simulating atmosphere and ocean circulations. Uncertainties remain in climate models, such as how well they handle the effect of clouds on climate sensitivity. However, the climate models are extensively tested, and paleoclimate changes confirm their approximate sensitivities.

    In contrast, we show in a prior paper and our new paper that ice sheet models are far too sluggish compared with the magnitude and speed of sea level changes in the paleoclimate record. This is not surprising, given the primitive state of ice sheet modeling. For example, a recent ice sheet model sensitivity study finds that incorporating the physical processes of hydrofracturing of ice and ice cliff failure increases their calculated sea level rise from 2 meters to 17 meters and reduces the potential time for West Antarctic collapse to decadal time scales. Other researchers [7,8] show that part of the East Antarctic ice sheet sits on bedrock well below sea level. Thus, West Antarctica is not the only potential source of rapid change; part of the East Antarctic ice sheet is also susceptible to rapid retreat because of its direct contact with the ocean and because the bed beneath the ice slopes landward (Fig. 1), which makes it less stable.

  28. niclewis says:

    bill shockley,

    Re Hansen’s statement that

    “climate models are extensively tested, paleoclimate changes confirm their approximate sensitivities”

    it is interesting to note what was said in the recent Annan & Hargreaves QSR paper “A perspective on model-data surface temperature comparison at the Last Glacial Maximum”. It stated (omitting references):

    “Simple calculations in which the global temperature anomaly at the LGM is divided by the total estimated forcing relative to the preindustrial state have long been used to generate estimates of the equilibrium climate sensitivity. These estimates have remained close to 3 C throughout changes in estimates of both components”

    which is consistent with Hansen’s views. However, it went on to state:

    “The most modern estimates for the (negative) forcing of 8W/m2 and temperature anomaly of 4 C would suggest a figure of just under 2 C” – actually about 1.85 C, based on the usual 3.7 W/m2 forcing value for doubled CO2 concentration.
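
    (Spelling that arithmetic out: scaling the LGM response gives ECS ≈ F2x × ΔT/ΔF = 3.7 W/m2 × 4 C / 8 W/m2 ≈ 1.85 C.)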

    1.85 C is well below the climate sensitivity of even the least sensitive CMIP5 model, and not much more than half the mean sensitivity of CMIP5 models used for the RCP8.5 projections.

    So the main claim in Hansen’s statement appears to be contradicted by the most modern evidence.

  29. Nic,

    So the main claim in Hansen’s statement appears to be contradicted by the most modern evidence.

    No, I really don’t think it does. You do need to include the rest of that section to get some actual context.

    However, there are substantial uncertainties and perhaps biases associated with this approach. It is not expected that the response of the climate system to large negative and positive forcings will be perfectly linear, even at the global scale. In fact, model simulations show significant (and model-dependent) nonlinearity (Hargreaves et al., 2007). Moreover, the response to different forcings is not linearly additive. Thus, the climatic effect of large ice sheets, when combined with a reduction in greenhouse gas concentrations, is not equal to that of the same forcing when produced by changes in GHGs alone (Yoshimori et al., 2011). […] When constrained with the proxy-based observation of LGM cooling, this implies an equilibrium sensitivity of around 2.5 °C with a 90% confidence interval of about 0.5–4 °C. However, this result must be considered somewhat provisional, due to the small ensemble size and the previously mentioned uncertainties in forcings and proxy data.

    So, yes, if you do the very simple calculation based on change in forcing and change in temperature you do (based on Annan & Hargreaves) get a value of 1.85 °C. However, even they go on to suggest that the ECS is more likely around 2.5 °C, with quite a large uncertainty range.

  30. bill shockley says:

    niclewis,

    What is model-data surface temperature at the Last Glacial Maximum? Is it different from surface temperature derived from paleo-data? Sounds like a modeled paleo climate.

    Hansen says in the new SLR paper that the derivation of ECS from paleo data is extremely reliable at 3.0C to within 0.5C. They did a special paper on it within the last 2 years to support their paper, “Assessing Dangerous Climate Change”, published Dec., 2013.

  31. niclewis says:

    ATTP,
    You’ve omitted the sentence preceding the one giving the 2.5 C sensitivity estimate, which explains where it came from:
    “When a range of GCMs was examined, the relationship between past and future was much weaker. Crucifix (2006) found no significant relationship at all across a small ensemble of PMIP2 models, but, in a larger ensemble, Hargreaves et al. (2012) did find a significant relationship between tropical SST at the LGM, and climate sensitivity.”

    Unfortunately, the significant relationship between tropical SST at the LGM and climate sensitivity found by Hargreaves et al (2012) in PMIP2 models is not found in more recent models. The recent Hopcroft and Valdes GRL paper “How well do simulated last glacial maximum tropical temperatures constrain equilibrium climate sensitivity?” states:

    “We analyze results from new simulations of the LGM performed as part of Coupled Model Intercomparison Project (CMIP5) and PMIP phase 3. These results show no consistent relationship between the LGM tropical cooling and ECS”.

    So sensitivity estimates based on scaling the LGM responses of climate models look dubious.

    It is also worth pointing out that the regression analysis employed in Hargreaves et al (2012) gave a mean sensitivity estimate of 2.3 C (90% range 0.5-4.0 C), not 2.5 C. The higher figure came from a Bayesian analysis, and reflected use of a PMIP2 model-based prior distribution for sensitivity with a mean close to 4 C.

  32. BBD says:

    And this affects policy how, Nic?

  33. Nic,
    Yes, I omitted some sentences, but I’m really not sure what you’re trying to get at. I don’t think those sentences really explain where it comes from; I think they explain that there are various issues with this analysis. In case it wasn’t obvious, my comment was simply to highlight that even the source of your claim – that Hansen’s suggestion of an ECS around 3C is contradicted by the most modern evidence – appears to rest on a partial reading of that evidence, given that it goes on to suggest that the ECS is probably around 2.5C, but with a range from 0.5 – 4.

  34. And this affects policy how, Nic?

    I think I may have asked Nic something similar before. I don’t think it went well. It’s certainly my view that the possibility that climate sensitivity might be low is not a reason to ignore that it might not be low. Nic seems to think otherwise.

  35. bill shockley says:

    Here’s the climate sensitivity paper they did to support “Assessing Dangerous Climate Change”:
    Climate sensitivity, sea level and atmospheric carbon dioxide

    Actually, it might be different from the ECS paper they were talking about in the SLR paper. I’ll need to check on that.

    Abstract

    Cenozoic temperature, sea level and CO2 covariations provide insights into climate sensitivity to external forcings and sea-level sensitivity to climate change. Climate sensitivity depends on the initial climate state, but potentially can be accurately inferred from precise palaeoclimate data. Pleistocene climate oscillations yield a fast-feedback climate sensitivity of 3±1°C for a 4 W m−2 CO2 forcing if Holocene warming relative to the Last Glacial Maximum (LGM) is used as calibration, but the error (uncertainty) is substantial and partly subjective because of poorly defined LGM global temperature and possible human influences in the Holocene. Glacial-to-interglacial climate change leading to the prior (Eemian) interglacial is less ambiguous and implies a sensitivity in the upper part of the above range, i.e. 3–4°C for a 4 W m−2 CO2 forcing. Slow feedbacks, especially change of ice sheet size and atmospheric CO2, amplify the total Earth system sensitivity by an amount that depends on the time scale considered. Ice sheet response time is poorly defined, but we show that the slow response and hysteresis in prevailing ice sheet models are exaggerated. We use a global model, simplified to essential processes, to investigate state dependence of climate sensitivity, finding an increased sensitivity towards warmer climates, as low cloud cover is diminished and increased water vapour elevates the tropopause. Burning all fossil fuels, we conclude, would make most of the planet uninhabitable by humans, thus calling into question strategies that emphasize adaptation to climate change

  36. BBD says:

    Bill, you may be looking for Hansen & Sato (2012).

  37. bill shockley says:

    BBD, thanks. That’s probably it.

    They were extremely happy with the certainty they were able to achieve as per a comment that I think was in the new SLR paper.

  38. BBD says:

    Bill

    Close, and my apologies for omitting a link. This is what I meant.

  39. bill shockley says:

    Cripes—Make up your mind! LOL

    I see about 3 different dates of publication for that title:

  40. Eli Rabett says:

    A discussion many years ago with ANSYS Fluent types led to the idea that GPUs were not well set up for solving fluid flow, esp. multiphase. There appears to have been some progress, but the acceleration is not huge.

  41. bill shockley says:

    From BBD’s link:

    Climate models, based on physical laws that describe the structure and dynamics of the atmosphere and ocean, as well as on land, have been developed to simulate climate. Models help us understand climate sensitivity, because we can change processes in the model one-by-one and study their interactions. But if models were our only tool, climate sensitivity would always have large uncertainty. Models are imperfect and we will never be sure that they include all important processes. Fortunately, Earth’s history provides a remarkably rich record of how our planet responded to climate forcings in the past. Paleoclimate records yield, by far, our most accurate assessment of climate sensitivity and climate feedbacks.

    I don’t pretend to have a mastery of this stuff, but I’ve found Hansen’s word to be reliable.

  42. Richard says:

    I find this all very odd. I just saw a Horizon programme on the early universe, which showed how the use of computer models (numerical calculations based on underlying physics and initial conditions) was essential to our modern understanding.

    There is barely a field of science – early universe, ecology, disease, economics, the LHC, … – where such models are not only essential but remarkably successful in advancing our understanding of systems of all types. I wrote a blog post, “In Praise of Computer Models” …

    http://essaysconcerning.com/2015/05/24/in-praise-of-computer-models/

    And in it I quoted from David Potter (whose book from 1973 was gathering dust in my library) …

    “Given this new mathematical medium wherein we may solve mathematical propositions which we could not resolve before, more complete physical theories may possibly be developed. The imagination of the physicist can work in a considerably broader framework to provide new and perhaps more valuable physical formulations.” David Potter, “Computational Physics”, Wiley, 1973, page 3.

    The Earth system is broken down into subsystems, I understand, and each will require parameters based on physics and physical conditions; these different sub-systems are then coupled. So we have a lot of parameters. To tar this with the von Neumann quote is surely not right. Each sub-system has no more and no fewer parameters than are needed to represent, e.g., the exchange of carbon between atmosphere and top of ocean – can anyone show that the modellers are using 5 here when, say, 3 will do?

    The sum of all these sub-systems gives rise to emergent properties on a global scale for the Earth system (global temperature, ECS, etc.). If we didn’t already have the cryosphere included, is there a good reason for not adding it? If we want to run experiments on sub-systems, what is stopping us (and I would be surprised if this isn’t happening)? If there are irreducible uncertainties, we need to resolve these. Etc. Etc.

    We need these models to do sensitivity analysis (not in the sense of ECS, but in the sense of ‘what if’, e.g. ‘what if … ice melt mechanisms are wrong by …’) and see what happens.

    The recent ice ages / interglacials saw CO2 move between 180 and 300 ppm, and we are now at 400 ppm and rising. Since we can measure the energy imbalance accurately, it doesn’t take much to create a relatively simple model indicating that warming is inevitable (this is old news), and even by how much (and that, I thought, was well established).

    But we need more detail: we need to know how fast, how much by 2100, the impact of different scenarios, and at more granular scales (like attribution studies for heatwaves in Europe) … otherwise the policy people are flying blind! However much the impacts reveal themselves to us through measurement, the models will never be redundant (“Oh, they were right all along, oops!”), because wherever we are (“I wouldn’t have started from here”), we need to shine a light on where we are going.

  43. Gator says:

    1) The “four parameter” elephant is a cheat. 🙂
    2) This post is a response to Tol? His claim is that GCMs are too full of arbitrary parameters to mean anything. I would think it would be his responsibility to back up that claim. Why give him further electrons? Supply a list of arbitrary parameters, and show that tweaking these (within some reasonable set of ranges) can produce so many different climates that the results in the IPCC reports are meaningless.
    I’d go one step farther than MT. Why wait for the oil companies to do this — wouldn’t any grad student worth their salt be happy to show that everyone before them has been deluding themselves? It should be easy to publish a study showing the GCMs are worthless, so why hasn’t this been done? I’m guessing because you can’t do it. Having been a grad student myself, I don’t buy into the idea that grad students simply seek to confirm the work of their elders. That’s not how you get ahead.
    3) Want better code? Get the money. Code is just a tool. It’s the physics behind the modeling that justifies even starting the effort. You can always hire someone to code, but you can’t hire someone to just do a new model. That’s where the science is.

  44. Willard says:

    > Hansen’s statement appears to be contradicted by the most modern evidence.

    I thought James and Jules were talking about modern estimates.

    “The motivation to create an alternative model which can comparably well replicate observed and paleo climate with very low sensitivity is surely enormous. Where is their result?”

    “It would certainly be interesting to see results from such an effort (assuming it hasn’t already been undertaken).”

    A while back a very well-known skeptic contacted Dan Hughes and me to study what kind of platform he should buy to run some simple experiments with a GCM.

    He couldn’t get the money. No skeptic or FF company is interested in doing this.

    1. Their result would never be accepted.
    2. It’s way more complicated than folks imagine.

    that said, very few of them will do the simplest work.

  46. Steven,

    1. Their result would never be accepted.

    I think it depends what you mean here. If someone simply tuned a GCM to give a low ECS it might not be accepted unless they could show that the tuning didn’t result in unphysical parameters. On the other hand, if someone could run a plausible simulation and show that ECS is low, that would be more interesting, even if it turned out to be wrong.

    2. its way more complicated than folks imagine.

    Indeed it is, and it’s kind of meant to be.

    that said, very few of them will do the simplest work.

    Well, yes.

    Dan Hughes himself could do with spending a little time considering how relevant the chaotic nature of our climate really is. It’s not irrelevant, but I don’t think it’s nearly as relevant as he appears to think it is.

  47. bill shockley says:

    Richard, an interesting question would be, what would we know regarding climate change if we didn’t have computer models? Would climate change seem any less urgent? I’m thinking we’d still have good, or even the same, certainty with respect to the carbon budget but I’m not sure how much we’d know regarding impacts.

  48. mwgrant says:

    Good points, MT. What defines the goals of the effort? Climate research? Policy? A mix? This influences the formulation of any approach: How transparent should the new efforts be? How does one approach QA for new codes if a potential use is to inform policy?

  49. Joshua says:

    m-dub –

    Obviously off topic, but since Judith wouldn’t allow me to point it out over at her crib, I’ll let you know here (contingent on Anders’ tolerance) that I consider it unfortunate that in our latest exchange over at Judith’s, you personalized and diverted a discussion of Hans’ rhetoric and analysis. It isn’t the first time that you’ve done that, btw, nor the first time that, after you’ve done that, Judith stepped in and moderated out a comment that pointed out how you’d done that.

  50. Joshua says:

    ==> “that said, very few of them will do the simplest work.”

    How much work would it be to do a “consensus” study? It seems weird that, with so many “skeptics” so focused on the topic, none of them has conducted any empirical analysis.

    BTW, has anyone heard anything about how the GWPF analysis of the surface temp records is coming along?

  51. Joshua, maybe they learned their lesson from BEST: be careful doing something that looks like real science, as you might not like the answer it gives you.

  52. mwgrant says:

    Joshua,

    The intent was not to personalize, but to note that your comment requires a similar reduction. Surely you noticed only one word of yours was changed in my response to that comment. Maybe the following would have been better for you:

    The problem whereby J-person’s comments reduces complex phenomena to identify causal mechanisms that fall in line with his partisan orientation is problematic, I’m sure you’ll agree. [Note: 2nd response]

    Of course you could have responded with an ‘M-dub’ substitution and so on.

  53. anoilman says:

    rick/Joshua: Pfffft! Pseudo Skeptics aren’t concerned about trivial things like ‘facts’.

    (On the internet no one can see you roll your eyes.)

  54. Willard says:

    While reducing complex phenomena to fall in line with partisan orientation is problematic, it seems more problematic when done (a) by a scientist (b) who argues that partisan orientation diminishes the returns of science’s capital. Hence Vaughan’s reaction:

    His polarization of society into the scientists and the policy makers is a simplistic caricature of reality. Although he does not mention the IPCC by name, he [Herr von Storch] would have you believe that their periodically released reports constitute scientists trying to impose policy where they’re not wanted. This misrepresents three things.

    http://judithcurry.com/2015/09/03/ins-and-outs-of-the-ivory-tower/#comment-729337

    Since this leads to a topic that makes mwg “not too comfortable” according to him, it might be better to return to teh modulz.

    ***

    > How does one approach QA for new codes if a potential use is to inform policy?

    You might be better placed to answer that question than MT, mwg.

  55. verytallguy says:

    Oh, great, another attempt to run post-match analysis of Curry ClimateBall here.

    Go team!

  56. mwgrant says:

    VTG

    “Oh, great, another attempt to run post match analysis of curry climateball here.”

    Ask your teammate.

  57. Joshua says:

    m-dub –

    Since it makes VTG cranky, I’ll write one more response and then let it go.

    Here’s my response that Judith thought violated her moderation policies:

    ———

    ==> “The problem whereby J-person reduces complex phenomena to identify causal mechanisms that fall in line with his partisan orientation is problematic, I’m sure you’ll agree.”

    Well, not much of a problem. (1) Whether I do that has not much of an impact, and (2) whether I do that has nothing to do with whether Hans does and, (3) I’m not a scientist…

    It might be interesting if you elaborated on where I’ve done that here…I might learn something.

    But rather than turn it back around on me – which I’m sure you’ll agree is a non-sequitur…. how about if you weigh in on whether Hans has any actual evidence to substantiate his speculation about his general theory of scientific capital and about whether there’s any evidence to substantiate the application of the theory to the specific context of climategate, climate science, environmentalism, etc.?

    He tried to substantiate his theoretical framework by pointing to a change over time. Where is the evidence of that change? Has there been a change? If there is evidence of that change, what evidence supports an attribution?

    ———-

  58. Joshua says:

    Something that I think might be of interest to some folks here… couldn’t really find a better thread to post it… so I’ll take the risk of going even further into the doghouse for thread-jacking:

    “Cities are engines of economic growth and social change. About 85% of global GDP in 2015 was generated in cities. By 2050, two-thirds of the global population will live in urban areas. Compact, connected and efficient cities can generate stronger growth and job creation, alleviate poverty and reduce investment costs, as well as improve quality of life through lower air pollution and traffic congestion. Better, more resilient models of urban development are particularly critical for rapidly urbanizing cities in the developing world. International city networks, such as the C40 Cities Climate Leadership Group, Local Governments for Sustainability (ICLEI) and United Cities and Local Governments (UCLG), are scaling up the sharing of best practices and developing initiatives to facilitate new flows of finance, enabling more ambitious action on climate change. Altogether, low-carbon urban actions available today could generate a stream of savings in the period to 2050 with a current value of US$16.6 trillion.”

    Click to access NCE2015_workingpaper_cities_final_web.pdf

  59. Eli Rabett says:

    The ecological footprint of compact cities is most of the earth. In short those clowns are not even clowns.

  60. Joshua says:

    hmmm.

    Given that urbanization is a trend not likely to reverse, I’m wondering if you might elaborate as to why it’s clownish to advocate for low-carbon urban development in compact cities? Something more concrete than name-calling would be much appreciated.

  61. verytallguy says:

    Tobis is interesting.

    I wonder if a follow-up, or even a riposte, might be persuaded from someone in the GCM community.

    Isaac Held writes very well.

  62. izen says:

    @-Joshua
    “Given that urbanization is a trend not likely to reverse, I’m wondering if you might elaborate as to why it’s clownish to advocate for low-carbon urban development in compact cities?”

    To interrupt…
    Compact cities (low-carbon especially) require very high levels of social organisation/control and a massive agricultural system integrated with effective transport.
    Then there are the water and power requirements, and space-heating if you have frost days… The ecological impact is in the calorie and Joule requirements; much of this may imply authoritarian political control.

    The best extant examples of low carbon urban development in compact cities are also known as favelas.

  63. Joshua says:

    izen –

    Thanks for the interruption.

    Seems to me that as with much related to climate change, you need to compare costs and benefits of different alternatives. In this case, you have to compare low carbon compact urbanization to higher carbon non-compact urban development, or compare to suburban, exurban, or rural development. Or compare to non-development.

    Within that context, seems to me that there’s something to be said for low carbon, urban development in compact cities:

    Click to access G02648.pdf

    Click to access 0956247810392270.full.pdf

    Click to access carbonfootprint_brief.PDF

    I’m not feeling your more “authoritarian political control” thesis. There is a fair amount of urban planning going on these days that comes out of participatory planning and stakeholder dialogue.

    If you read the article I linked above, do you see advocacy for more “authoritarian political control?”

  64. Joshua says:

    I’m mostly interested in something to substantiate this statement:

    ==> “The ecological footprint of compact cities is most of the earth. “

  65. BBD says:

    Joshua

    It’s possible that Eli has ecological overshoot in mind; e.g. see Wiki.

  66. anoilman says:

    BBD: That’s what I read into Eli’s comment. Although arguably, cities could be more efficient.

  67. David Young says:

    These very general observations by Tobis are probably correct. The focus on including more and more “physics” may or may not improve skill. I have given some obvious examples where adding more physics can simply place us further from the data, for example Reynolds stress turbulence models vs. eddy-viscosity models or even simpler boundary-layer models. This is well known in more rigorous fields.

    The statements about very old code based on outdated methods are indeed true, for example, of the NCAR model. In case people have forgotten, it uses the leapfrog scheme, which was well known as early as the 1970s to suffer from nonlinear instability and decoupling in time.
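
    For readers unfamiliar with the leapfrog issue, here is a minimal sketch of one well-known symptom (a textbook illustration, not a claim about any particular GCM’s configuration): applied to a pure decay equation, the scheme’s spurious computational mode grows, which is why leapfrog models typically pair it with something like a Robert–Asselin time filter.

    ```python
    import numpy as np

    # Leapfrog (u[n+1] = u[n-1] + 2*dt*f(u[n])) applied to du/dt = -k*u.
    # The true solution decays; the scheme's computational mode oscillates
    # in sign and grows without bound.
    k, dt, nsteps = 1.0, 0.1, 200
    u = np.empty(nsteps)
    u[0] = 1.0
    u[1] = u[0] * np.exp(-k * dt)  # start the two-level scheme from the exact value
    for n in range(1, nsteps - 1):
        u[n + 1] = u[n - 1] - 2.0 * dt * k * u[n]
    print(u[-5:])  # large alternating values instead of decay toward zero
    ```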

    But there are more fundamental problems, which experts in other fields are at least more open to discussing. In any time-accurate simulation, errors accumulate over time, as 60 years of experience and theory clearly show. The only escape from this and the butterfly effect is a vague reference to the “attractor,” a favorite of Schmidt for example. The problem here is that the dimension can be very large with a complex manifold, and there is no guarantee of anything in this setting.

    I find Browning’s aversion to the hyperviscosity employed in GCMs, combined with higher-order spatial discretization methods, to be correct. There has been some experimentation with this recently in CFD, with decidedly mixed results. For one thing, there is no convincing theory for why this should work in any real sense. One simply says “the pattern” shows some real features – the same justification given by Schmidt: “every time I run the model, I get a reasonable looking climate.” I would call such statements qualitative, and dangerously close to pseudo-science, especially since GCM output, to be useful, must carry quantitative information that is not too badly wrong.

  68. bill shockley says:

    Joshua, nice job handling the flak. I’m taking notes.

  69. DY,
    I’m not quite sure why you’ve come back here. I thought you’d decided to give up. Also, given some of the things you’ve chosen to say about me elsewhere, I’m not sure why you think you’d be welcome. No shame or common decency?

  70. Michael is correct to argue that, from a software engineering perspective, general circulation models are less than perfect.

    I would add that the same is true from a mathematical perspective: These models are too complex to understand.

  71. Richard,
    I suspect everything is less than perfect.

    I would add that the same is true from a mathematical perspective: These models are too complex to understand.

    Well this is probably roughly bollocks. It’s probably true that to really understand how different things can influence a system, a complex model like a GCM is not the ideal tool. Hence one might want (as Michael also suggests) to focus some effort on low-resolution models that can probe the system in a way that allows us to gain better understanding of the different processes. That doesn’t mean, however, that GCMs are too complex to understand. Or, rather, it doesn’t mean that we’re incapable of understanding the output from a GCM or incapable of using a GCM to understand how different pathways might influence our climate.

    By the way, did you leak these emails? If so, why would you do that? They’re bizarre. A great deal of what you claim in those emails is simply not true. Why would you want people to read them?

  72. Joshua says:

    Compare and contrast:

    Rick Santorum

    “There was a survey done of 1,800 scientists, and 57 percent said they don’t buy off on the idea that CO2 is the knob that’s turning the climate.”

    Richard:

    “I think you were unfair on Santorum…Santorum had the spirit right but the letter wrong.”

    Richard:

    “Published papers that seek to test what caused the climate change over the last century and half, almost unanimously find that humans played a dominant role.”

    Yes, so the spirit of what Rick said is that it’s almost unanimous that humans have played a dominant role in climate change over the last 150 years, but he just got the letter wrong when he said that 57% don’t think that CO2 is the knob turning the climate.

    I LOVE RICHARD TOL!

  73. I do particularly like the bit where Richard tries to disown the 91% value that he obtained in his paper about Cook et al. (2013). I can understand why. It’s a pretty embarrassing calculation, given that it would suggest that there should have been a stage in the analysis when the level of consensus was greater than 100%. Also, by disowning it, he can avoid looking for the missing 300 abstracts that would have to be there if the level of consensus is actually around 91%. #FreeTheTol300

    There’s also this bit

    Cook found 64 papers (out of some 12,000) that support the consensus. It is a long story why Cook thinks that 64 is 97% of 12,000.

    Of course, Cook did not think 64 is 97% of 12,000, so that bit’s wrong (in fact, completely dishonest might be a more appropriate term to use). Also, this would seem to indicate that – at best – Richard does not have a clue what the term consensus means, and the only other people to make this argument (that I’m aware of) include Christopher Monckton. That speaks volumes, in my opinion.

    I should probably avoid making this comment, given that I don’t want this thread to degenerate into another consensus discussion. I’ll moderate quite heavily and delete whatever comments I feel like deleting. Richard, if I delete one of your comments, feel free to tweet my university to whine about it if you wish. Like you did last time.

  74. Joshua says:

    ==> “…I don’t want this thread to degenerate into another consensus discussion. ”

    IMO, the long discussions of the consensus are pretty boring, but Richard’s public display of logic is kind of interesting…as a kind of object lesson for understanding motivated reasoning.

  75. Joshua,
    I think you may be being a little generous there.

  76. Andrew Dodds says:

    @Tol

    Too complex for you to understand, perhaps. Try doing a decent set of science A levels, then come back.

  77. Becoming more like WUWT and more like Climate Etc. is not that hard, ATTP.

    you’re heading that way with Joshua’s help.

  78. you’re heading that way with Joshua’s help.

    I actually think that Joshua is under-appreciated at Climate Etc. I find what he highlights quite illuminating.

  79. anoilman says:

    Since when did Richard Tol become an expert in Software Engineering? And obviously, now physics.

    Pfft! He produces papers buggier than an ant farm.
    http://desmogblog.com/2015/08/03/richard-tol-s-gremlins-continue-undermine-his-work

    In software his paper has rolled out at Capability Maturity Level 1… Chaotic.
    https://en.wikipedia.org/wiki/Capability_Maturity_Model

    Sign me up for more of that!

  80. Joseph says:

    Sometimes conflicts can get personal. I am not sure why Tol comes here other than to antagonize the host.

  81. Given Nic’s comment, this post is worth a read.

  82. Marlowe Johnson says:

    FWIW I’m flabbergasted that so much attention in 2015 on blogs is spent on questions about the efficacy of WGI models. The concept of diminishing returns can sometimes help focus the mind…

    It’s long past time to move on to the models that are used for WG II & III. Doesn’t it seem a little odd that trillion-dollar decisions are being made at least in part on the basis of the work of a handful of individuals (e.g. Tol, Nordhaus, Hope)? Think about it. We have FUND, PAGE, and DICE. That’s pretty much it.

    If you want to get real bang for your buck, that’s where money should be spent, in my not so humble opinion. I find it depressing how little attention is given to how these models are constructed, what their weaknesses are, and how they can be improved, because that, my friends, is where the rubber (policy) hits the road.

  83. Joshua says:

    Steven –

    Rarely does someone hit so many notes of unintended irony in such a short comment.

  84. @Not Marlowe Johnson
    Hear, hear.

  85. BBD says:

    ATTP

    Thanks for the link to TB – very interesting.

  86. David Young says:

    Tol is of course right. GCMs are theoretically really pretty bad. I mentioned some obvious things above, but am not expecting much of a detailed response here based on past experience. This issue, ATTP, is so much bigger than you or me personally. I was surprised, but gratified, that you would post Tobis’ observations. They are indeed well founded in the theoretical details and should give more credence to what Lewis and Annan, for example, are doing.

  87. DY,
    You still here?

    Tol is of course right.

    No he’s not. Why would someone with your expertise agree with something so stupid? That you would agree with Tol is – of course – no great surprise.

    but am not expecting much of a detailed response here based on past experience.

    Why would anyone give you a detailed response? You whine about me and the site elsewhere. Your bias is obvious. You seem to think that “it’s not perfect”, or “there are problems”, is the same as “it’s useless”. Your understanding of the underlying physics seems woefully poor. What you really seem to want is for everyone to simply agree with you and bow down to your self-professed expertise.

    You could try behaving more like someone who wants to have an actual discussion, rather than someone who’s simply pushing an agenda (which is all Tol is obviously doing) and maybe you would get a better response. While you behave as you do, that you don’t get a decent response is no great surprise. Of course, maybe this is what you’re acknowledging, but I get the impression that that is not the case.

    Richard,

    Hear, hear.

    I actually agree with Marlowe Johnson too. It’s of course extremely difficult to see how we can move on to focusing on WGII and WGIII research when there are still people doing their utmost to undermine WGI. Why do you do that?

  88. Willard says:

    > Sometimes conflicts can get personal.

    Sometimes that they get personal is the very point of the exchange. Think of it as a way to tackle a fellow ClimateBall player. With experience and discipline, such a tackle is easy to dodge.

  89. @wotts
    There is an easy way to prove me wrong: Just point to the papers that develop a rigorous (in the mathematical sense of the word) understanding of GCMs.

    Just an anecdote. I was at a finance conference earlier this week, the sort of event frequented by the people who control the money, and this exact question was asked: What is the point of a numerical model the mathematical properties of which are not fully understood? The question referred to climate models, by the way.

  90. Richard,

    There is an easy way to prove me wrong: Just point to the papers that develop a rigorous (in the mathematical sense of the word) understanding of GCMs.

    Depends whether you mean “prove you wrong to your satisfaction” or “prove you wrong to the satisfaction of others”. For a long time I’ve regarded you as fundamentally dishonest. The recent emails on Climate Depot – if they are indeed from you – simply strengthen that view. The idea that I would waste my time trying to get you to acknowledge anything is utterly bizarre. Why would I possibly do so? I also think that your reputation is so poor now, that there’s little point in my trying to prove you wrong to the satisfaction of others. I suspect that no one with any sense takes you seriously anymore.

  91. I was at a finance conference earlier this week, the sort of event frequented by the people who control the money, and this exact question was asked: What is the point of a numerical model the mathematical properties of which are not fully understood? The question referred to climate models, by the way.

    A finance conference? So what? A group of people who probably confuse “I don’t understand” with “no one understands”.

  92. Willard says:

    Speaking of mathematics:

    When considering the correctness of a conventional informal proof, it’s a partly subjective question what is to be considered an oversight rather than a permissible neglect of degenerate cases, or a gap rather than an exposition taking widely understood background for granted. Proofs depend on their power to persuade individual mathematicians, and there is no objective standard for what is considered acceptable, merely a vague community consensus.

    Click to access tx081101395p.pdf

    Consensus in maths. Fancy that.

  93. David Young says:

    Repeating again, Tol is right. There is a vast literature showing why. I gave you references before and even explained it in terms of a simple example from fluid dynamics. Just asserting that something is stupid is an example of exactly the partisan and personally spiteful behavior you seem to dislike in others. Do I need to go back and explain numerical analysis of PDEs again? Happy to do it if you are interested, in the interests of science, you know. Or you could just read the references.

  94. MMM says:

    1) I think it is amazing (and oddly underappreciated) just how well the simulated climate in climate models produces emergent behaviors that replicate observations. The problems that climate models do have (such as the double ITCZ) are counterexamples – emergent processes that go wrong – which should make people think twice about just how remarkable it is that climate models get so much right.

    2) On low climate sensitivity models, and modeling elephants: why not look at the climateprediction.net experiments? They deliberately allowed all the parameters in their model to vary, and after pruning off unphysical outcomes and ones which diverged too greatly from the historical record, were left with all the plausible combinations. Presumably that could be mined for a low-sensitivity model (a toy sketch of that prune-and-mine idea appears at the end of this comment).

    3) In my opinion, better-engineered climate code would be unlikely to change the results much, but would make the process of doing research far more efficient and pleasant (as someone who worked with some climate code as a grad student, I found the poorly commented Fortran and lack of standard testing suites frustrating).

    4) Regarding using the LGM to derive sensitivity: I’d be surprised if the Annan & Hargreaves 4-degree difference between the LGM and the present day turns out to be correct. That just seems too small. But having said that, if it is correct, then while that would be Bayesian evidence for a lower climate sensitivity, it would also be evidence in the direction that the impact per degree of change is larger than we previously thought.

    -MMM
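
    On point 2, a toy sketch of that prune-and-mine approach (all numbers and the one-box "model" invented for illustration; nothing like a real perturbed-physics ensemble in scale):

        import numpy as np

        rng = np.random.default_rng(0)

        def toy_model(lam, kappa, forcing):
            """Hypothetical one-box energy balance standing in for a full model."""
            T, out = 0.0, []
            for f in forcing:
                T += (f - lam * T) / kappa  # feedback lam (W/m2/K), heat capacity kappa
                out.append(T)
            return np.array(out)

        forcing = np.linspace(0.0, 2.5, 100)                            # idealized forcing ramp
        obs = toy_model(1.2, 8.0, forcing) + rng.normal(0, 0.05, 100)   # synthetic "record"

        # Vary the parameters broadly, then prune runs that stray from the record
        candidates = rng.uniform([0.5, 4.0], [2.5, 20.0], size=(5000, 2))
        kept = [(lam, kap) for lam, kap in candidates
                if np.sqrt(np.mean((toy_model(lam, kap, forcing) - obs) ** 2)) < 0.1]

        sens = [3.7 / lam for lam, _ in kept]  # sensitivity ~ F_2x / lambda in this toy
        print(f"kept {len(kept)} of {len(candidates)}; "
              f"sensitivity range {min(sens):.1f}-{max(sens):.1f} K")

    The mining step is then just a matter of looking at the low end of whatever sensitivity range survives the pruning.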

  95. DY,

    Repeating again, Tol is right.

    Repeating this again is not convincing.

    I will explain something to you again. I’m hoping you can get it this time. I understand these concepts. There are issues related to viscosity, resolution, and so on. None of that means that GCMs have no value. Do you get this basic concept? It is possible to agree about the facts, but disagree about the implications. I’m not disagreeing with your facts. I’m disagreeing with your interpretation. Do you at least get this?

    Do I need to go back and explain numerical analysis of PDEs again?

    No, you don’t need to do this again. What you need to do is stop behaving with such absolute arrogance. Your hubris is remarkable. The people who work on these things are not idiots. That you seem to think that they are does you no favours. It just makes you appear ludicrous. You’re not this clever, however much you want that to be the case.

    [Mod : redacted] could you please go and do it somewhere else. There are plenty of sites that will treat you with the respect you so badly want. There’s only so many times that I can point out how little I value your views, even if not everything you say is wrong. If you want to stay here, you need to at least understand that statements of fact are not convincing. An actual argument is convincing. I don’t think anyone here believes that GCMs are perfect or that they don’t have problems. You pointing this out, over and over again, is simply boring.

    Is that clear enough for you? Now feel free to go and whine about me elsewhere, as you typically do – illustrating that you haven’t bothered to give this a moment’s thought. It’s hard to take someone seriously when they seem incapable of getting even the most basic of concepts.

  96. izen says:

    Paleoclimate takes precedence over current observations and climate modeling because it is direct evidence of what can, and has, happened to the climate under changing forcings.

    Current observations are less informative because the changes in forcings and effects are over a shorter timescale and of smaller magnitude (so far), but they can be understood with physics (radiative transfer equations, basic thermodynamics).

    Climate modeling is constrained more by paleoclimate than by the modeled physics.

    The new work on the LGM and climate sensitivity shows why. The ECS can be derived from the glacial-interstadial transitions, real historical events, by knowing with sufficient accuracy the change in forcing and the change in temperature.
    The forcings are well constrained by direct calculation and measurement (Milankovitch insolation changes and ice core CO2 levels).
    The temperature change is much more uncertain, as it has to be derived from local proxy indicators combined with modeling.

    The recent claims that ECS may be low, less than 2 degC, from paleoclimate calculations reveal the egregious mathematical formalism that has perhaps seeped in from the econometricians. The method of calculation and the specific magnitude of a derived metric (ECS, GDP/debt) are considered to be much more important and significant than the real local, regional and global changes that the metric represents. So you get the ‘theological’ dogmatism of statements like “What is the point of a numerical model the mathematical properties of which are not fully understood?”
    Mathematical formalism trumps utility; that is the cause of the failure of the econometricians to predict the crash, or to provide any useful information about real-world economic policy. But at least THEIR models conform to a set of mutually imposed formal principles! (sarc/off)

    Suppose, just for the sake of a hypothetical, that it was possible to constrain from paleoclimate data that the ECS is 1.7degC. The global temperature change was less than previously thought. I suspect that some would declare this indicated that AGW was grossly exaggerated, as climate sensitivity was only half what the ‘consensus’ had previously declared.

    But the real implication is worse. If the majority of the land area we now inhabit was lifeless ice-cap given such a small global temperature change and such a low climate sensitivity, how dramatic are the local and regional changes going to be for a similar forcing, even if the change in the reified metric of global average temperature, and the ECS, can be brandished as much smaller than previously calculated?

  97. Willard says:

    > Or you could just read the references.

    You could not give proper citations with workable links even if your life depended on it, DY.

  98. Marlowe Johnson says:

    Careful what you wish for, Richard. A horde of gremlins may follow.

  99. David Young says:

    [Mod : redacted] Paraphrasing your response: “DY’s facts are right and perhaps denied by the community (Tobis seems to agree that big change is needed) but this bores me (because I don’t like the implications some might draw).”

    This is very odd as I have avoided getting into policy issues but have stayed pretty much with science.

  100. DY,

    [Mod : redacted]

    Paraphrasing your response: “DY’s facts are right and perhaps denied by the community (Tobis seems to agree that big change is needed) but this bores me (because I don’t like the implications some might draw).”

    That’s not what I said. Misrepresenting what I said just makes you appear dishonest, stupid, or both. I doubt the community denies the facts. I suspect they understand these issues at least as well as you do, if not better. I’m all for change and improvement. That, however, is not the same as “everything we’ve done before is wrong and was a waste of time”.

    That issues exist doesn’t immediately invalidate what they’re doing or what’s been done. You need to at least illustrate that you understand the difference between computational modelling used in the physical sciences (what happens if this changes?) and computational modelling used in a more corporate context (how can we design a plane that can carry 250 passengers, survive various levels of turbulence, etc.?). These are two different issues. Assuming that all uses of computational modelling have to satisfy the conditions that you might apply in your context just makes you appear narrow-minded and ignorant.

    Try actually thinking about this a little. On the other hand, if you want to keep doing what you’re currently doing, feel free to do so, but just do it somewhere else. Seriously; make a comment worth posting, or I’ll just delete it. Given what you said about me elsewhere, I don’t have a great deal of time for you. If you want to keep commenting here, make comments that are worth posting. If you can’t do that, go somewhere else.

  101. lerpo says:

    Speaking of mathematics and consensus, here is an interesting article on the two in relation to economics: http://paulromer.net/mathiness/

    Some snippets:

    “The style that I am calling mathiness lets academic politics masquerade as science. Like mathematical theory, mathiness uses a mixture of words and symbols, but instead of making tight links, it leaves ample room for slippage between statements in natural versus formal language and between statements with theoretical as opposed to empirical content.”

    “The goal in starting this discussion is to ensure that economics is a science that makes progress toward truth. A necessary condition for making this kind of progress is a capacity for reaching consensus that is grounded in logic and evidence.”

    “Science is the most important human accomplishment. An investment in science can offer a higher social rate of return than any other a person can make. It would be tragic if economists did not stay current on the periodic maintenance needed to protect our shared norms of science from infection by the norms of politics.”

  102. Paraphrasing DY’s response, “lalala I can’t hear you, AT, so I’m just gonna keep repeating over and over what I call scientific facts while omitting that their relevance to the grand scheme of things remains to be seen, because George Box.”

    Sometimes, it’s fascinating what a 30-second search can get you:

    There are procedures and methods for verification of coding algebra and for validations of models and calculations that are in use in the aerospace computational fluid dynamics (CFD) community. These methods would be efficacious if used by the glacier dynamics modeling community. This paper is a presentation of some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modeling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modeling community, and establishes a context for these within an overall solution quality assessment. Finally, a vision of a new information architecture and interactive scientific interface is introduced and advocated. By example, this Integrated Science Exploration Environment is proposed for exploring and managing sources of uncertainty in glacier modeling codes and methods, and for supporting scientific numerical exploration and verification. The details, use, and envisioned functionality of this Environment are described. Such an architecture, that manages scientific numerical experiments and data analysis, would promote the exploration and publishing of error definition and evolution as an integral part of the computational flow physics results. Then, those results could ideally be presented concurrently in the scientific literature, facilitating confidence in modeling results.

    http://ti.arc.nasa.gov/m/pub-archive/968h/0968%20(Thompson).pdf

    NB. You may need to add “.pdf” at the end of that link. WP seems to have a bug in their URL parser.

  103. This may be an off-topic question, but have there been attempts to model lapse rate and associated water vapor feedback responses subsequent to a “negative pulse” of anthropogenic aerosols? To my knowledge these feedbacks are not considered in the calculation of negative aerosol forcings. There are indications that aerosols have a significant effect on lapse rate due to the inherent cooling that is targeted at the upper troposphere; see “Global indirect aerosol effects: a review”, U. Lohmann (2005). This indicates that the lapse rate and water vapor feedbacks from aerosol reductions would be larger than the response to an identical carbon dioxide pulse.

    Does anyone know if this has been adequately modeled? It may prove to be extremely significant in coming years!

  104. Joshua says:

    Speaking of conflict getting personal…

    Anders says to David…

    ==> “If you want to keep commenting here, make comments that are worth posting. If you can’t do that, go somewhere else.”

    Keep in mind….

    “David Young | June 8, 2014 at 5:01 pm |

    What I found is that its impossible to really discuss any possible problems with climate science or even Mann [at ATTP]”

    So over a year ago he determined that it is impossible to really discuss possible problems with climate science here. Yet subsequently he comes here to leave comments.

    Interesting.

  105. problems with climate science or even Mann

    That pretty much says it all.

  106. David Young says:

    [Mod : Seriously, not interested.]

  107. mt says:

    Joshua: ” My understanding is that sensitivity is an outcome, not an input parameter. How would one go about (if it’s possible to explain in very basic terms 🙂 ) comparing models where sensitivity is an outcome to models where sensitivity is an input? Is that even what is being suggested?”

    Thanks. Good question.

    Sensitivity is an output of a particular model with a particular parameter set.

    The way models are currently developed is to tune the parameters (within ranges considered physically reasonable) to match some observationally based metric of well-observed contemporary climate (treated as stable).

    (Essentially this can be considered an optimization of a cost function over an N-dimensional parameter space, if that’s clear enough to you.)

    If model runs were cheaper, i.e., if we ran ca. 1995 vintage models, we could do a great number of runs. Then we could do more systematic explorations of the parameters; perhaps there are multiple optima; perhaps the shape of the cost space can be dimensionally reduced, etc.

    What I’m proposing we do comes in four steps.

    The first is to establish a metric of performance for a dynamic CGCM of modest resolution and complexity – that is, how well does it reproduce the well-observed climate. There’s nothing unusual about that – it’s standard practice, though I have some suggestions as to how this might be improved.

    The second is to formally optimize a relatively simple dynamic model to find the subspace or subspaces (subsets of the parameter ranges) where it performs best according to some metric. Because the model is simple and coarse, it won’t perform as well as the current generation, but it should perform better than the 1995 vintage model to which it is physically equivalent or nearly so, as we have relatively enormous resources to apply to the parameter selection, compared to 1995.

    Third, we identify a range where the metric is “pretty darn good”. This could be informed by uncertainties of various sorts – it would be a range where model performance is essentially statistically indistinguishable from the optimum we have found. This constitutes a constraint on the optimization of the particular model.

    Fourth, we optimize subject to that derived constraint for some other purpose. In a particular example of general interest, I am suggesting we explore the space of “pretty good” models to find the ones which are most and least terrifying insofar as climate sensitivity is concerned.

    This all depends on running not handfuls but very large ensembles of these models. The field of computational science has developed a whole stable full of tricks for this and related problems, but extant climate models are not practicable for such metamodeling.
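
    To make the four steps concrete, here is a toy sketch (my illustration of the procedure, with an invented two-parameter stand-in where a real coarse CGCM would go, and a crude random search standing in for a real optimizer):

        import numpy as np

        rng = np.random.default_rng(1)

        def toy_model(lam, kappa, forcing):
            """Invented two-parameter stand-in for a coarse dynamic model."""
            T, out = 0.0, []
            for f in forcing:
                T += (f - lam * T) / kappa
                out.append(T)
            return np.array(out)

        forcing = np.linspace(0.0, 2.0, 80)
        obs = toy_model(1.0, 10.0, forcing) + rng.normal(0, 0.05, 80)

        def cost(lam, kappa):                      # step 1: performance metric
            return np.mean((toy_model(lam, kappa, forcing) - obs) ** 2)

        samples = rng.uniform([0.3, 3.0], [3.0, 25.0], size=(5000, 2))
        costs = np.array([cost(l, k) for l, k in samples])
        best = costs.min()                         # step 2: optimize over the space

        good = samples[costs <= 2.0 * best]        # step 3: the "pretty darn good" set

        sens = 3.7 / good[:, 0]                    # step 4: extremize sensitivity there
        print(f"{len(good)} acceptable parameter sets; sensitivity spans "
              f"{sens.min():.1f} to {sens.max():.1f} K")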

  108. mt says:

    Andy: “I suppose this is just a plea to keep things as simple as possible and for diversity. Climate science, at least, benefits from having lots of independent models that can be compares against each other. Let’s hope that diversity is maintained and and that any attempt to build the one model to rule them all is resisted.”

    If we had more agile programming and V & V techniques this tendency would be reduced in my opinion.

  109. mt says:

    Mosh: “A while back a very well know skeptic contacted Dan Hughes and me to study what kind of platform he should buy to run some simple experiments with GCM. He couldnt get the money. no skeptic or FF company is interested in doing this.”

    Indeed that is the case. My question is why.

    The answer Mosh proposes is twofold:

    1. Their result would never be accepted.
    2. It’s way more complicated than folks imagine.

    The first excuse, I argue, is untrue. This is because a real CGCM has a climate, not just a sensitivity. One can demonstrate that its climate is as good as or better than that of other CGCMs. If Tol’s hypothesis is true, FF interests could tautologically build a model which performed comparably to the state of the art on the metric on which current models are tuned and yet yielded a low sensitivity.

    If there is even a possibility of a low sensitivity, showing this would be of enormous value to the fossil fuel interests. The big oil companies in particular have the requisite skills and resources in house. I know of at least one oil company that actually runs paleo-GCMs as part of their oil exploration endeavors. This is not a matter of scrounging up volunteers. It’s a matter of devoting a very small fraction of their disposable income to a line of self-defense that would be very useful to them.

    I infer that the real reason for their failure to announce such a model is either

    3a. They have indeed tried to do this and failed
    or
    3b. They have not bothered to do this because they understand it would fail

    I offer this argument in refutation of Richard Tol’s blithe suggestion that CGCMs can be tuned to give any desired sensitivity (and of its implicit corollary, that climate science is a den of troublemakers whose purpose is to disrupt the world economic order).

    I am not asking why the likes of Watts or Hughes don’t do this. I am asking why Exxon or BP don’t. If it’s possible, it’s inexplicable to me that they haven’t done so already.

  110. mt says:

    ” I am not sure why Tol comes here other than to antagonize the host.”

    Admittedly I trolled Tol. He raised a question that I think I have a compelling answer to. It’s not surprising that he showed up.

    What surprises me is that rather than engaging my suggestion, he threw some red herrings on the table. I expected a counterargument of some sort.

  111. mt says:

    Marlowe: “I’m flabbergasted that so much attention in 2015 on blogs is spent on questions about the efficacy of WGI models. The concept of diminishing returns can sometimes help focus the mind…

    It’s long past time to move on to the models that are used for WG II & III. Doesn’t it seem a little odd that trillion-dollar decisions are being made at least in part on the basis of the work of a handful of individuals (e.g. Tol, Nordhaus, Hope)? Think about it. We have FUND, PAGE, and DICE. That’s pretty much it.”

    Hear! Hear! from me as well. I am convinced that the IAM school of economic analysis is totally broken and unsuited for purpose, but that’s for another discussion. Certainly there is nothing remotely comparable in utility to a climate model.

    Marlowe again: “If you want to get real bang for your buck that’s where money should be spent”

    I’m not at all convinced that we have anything remotely resembling a long-term economic theory that is sufficient to justify significant expense on this. I think we should take Tol’s word for it: those models can be tweaked to give any answer you like. He doesn’t really know of another kind. I don’t either. Century-scale economics is too important to be left to economists of the present generation. I am not even sure this can be repaired. There is no a priori guarantee that a problem is suitable for formal analysis.

  112. mt says:

    VTG: “Tobis is interesting.”

    Thanks, I always thought so 🙂

    “I wonder if a follow up or riposte even might be persuaded from someone in the GCM community.

    Isaac Held writes very well.”

    I cannot hold a candle to Held as a dynamicist or a user of the current generation of GCMs – I’m very much a dilettante compared to him. But in compensation I may have been able to spend more time on the state of computational science as an independent discipline and of software engineering as a practice.

    I don’t know if he has any disagreement with me on these matters. I don’t have any disagreement with him that I know of.

  113. mt says:

    JJM: “This may be an off-topic question, but have there been attempts to model lapse rate and associated water vapor feedback responses subsequent to a “negative pulse” of anthropogenic aerosols?”

    This misunderstands the nature of the GCM. Both the lapse rate and the feedback are emergent properties – they are not explicitly represented in the model design but emerge from it.

    This is a common sort of misunderstanding, and not an unintelligent one. People familiar with models in other fields where the constraints are weaker (this includes economic models and also ecological models) expect all the key features to be part of the model specification. But they are not.

    The success of physical models (in climate, astrophysics, geology) depends on ATTP’s dictum that started the discussion: it’s more difficult to mess with physical models. I would say they are models of a fundamentally different sort than economic or ecological models. I prefer to call them simulations rather than models.

    A simulation is a sort of model whose disagreement with observations is as likely to point out something wrong with the observations as something wrong with the model specification.

    I don’t know where biomedical models fall on this spectrum; I’d be interested to know.

    Anyway, in answer to your question, the Clausius-Clapeyron water vapor feedback to cooling forcings is indistinguishable from the feedback to warming forcings in a GCM.

    Clouds are a more complicated feedback, as aerosol distribution directly impacts cloud physics. So indeed, representing that correctly is a key direction of ongoing research.

  114. mt says:

    MMM: heartily agree on all 4 points

  115. mt says:

    Lastly for today:

    Tol: “What is the point of a numerical model the mathematical properties of which are not fully understood?”

    This is so mind-bogglingly upside-down that I am at a loss for a quick answer.

  116. Marlowe Johnson says:

    Michael,

    Like it or not, IAMs are used to inform policy (e.g. US efficiency regs) and as such I’d argue that it would indeed be money well spent to improve them.

    The main problem as I see it is how the output is interpreted rather than the models themselves. Too often decision makers latch on to the central estimate for the SCC as if that were the most relevant piece of information. If addressing climate change were simply an optimization exercise this might be sensible. However, as many here would agree (I think), AGW is really a risk management problem, as Gernot Wagner and Martin Weitzman recently argued. As such, it makes much more sense to use the 95th-percentile value when evaluating different policies. If that were the case then far more aggressive mitigation policies would be on the table than what we see today.
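
    To see why the choice of percentile matters, a short illustration with an invented right-skewed SCC distribution (the numbers mean nothing; only the skew does):

        import numpy as np

        scc = np.random.default_rng(3).lognormal(np.log(40.0), 0.8, 100_000)  # $/tCO2
        print(f"median ${np.median(scc):.0f}, mean ${scc.mean():.0f}, "
              f"95th percentile ${np.percentile(scc, 95):.0f}")

    With a heavy upper tail, the 95th-percentile value comes out several times the median, which is the whole point of treating this as risk management rather than optimization.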

  117. anoilman says:

    mt says:
    September 12, 2015 at 12:31 am

    Lastly for today:

    Tol: “What is the point of a numerical model the mathematical properties of which are not fully understood?”

    This is so mind-bogglingly upside-down that I am at a loss for a quick answer.

    Perhaps we should adjust Richard Tol’s statement to say, “What is the point of a numerical model the mathematical properties of which are not fully understood by Richard Tol?”

    It’s much easier to answer that way. 🙂

  118. Eli Rabett says:

    That Richard does not believe in Feynman integrals because he cannot mathematically understand them is perhaps no huge surprise, but it does undress his pompous blather.

  119. Richard says:

    Eli, perhaps we have discovered a new addition to the scientific method, that Kuhn, Popper, etc. overlooked …

    Tol’s Law: “If I don’t understand, it cannot be true” (Oh, and it sure saves me a whole lot of effort bothering to try).

  120. Richard says:

    Oh, and on the point about producing some formal proof of climate models … Helloooo … the whole point of using numerical methods … even (in the case of classical dynamics) for the >3-body problem … is that closed-form analytical solutions are not accessible. The whole of modern science would come crumbling down if we set the bar there. The underlying physical models are very well understood, but apparently not to everyone (what a surprise).

    Interestingly, it is claimed that Thomas Young (died 1829) was the last man to know everything. So since then, we all have to rely on using stuff that others develop. If RT’s ambition is to become a latter-day Young, or if that is a requirement for action on global warming, it is a doomed project.

  121. For what it’s worth, the sentiment expressed in my previous intervention was not mine — I appreciate the value of numerical modelling, if well done, and I count Myles Allen and Carl Wunsch among my heroes. It was a French mathematician, steeped in Debreu and Bourbaki, who said what I wrote.

  122. Richard,
    So, a French mathematician that you won’t name, who was influenced by a French economist/mathematician who has been dead for more than 10 years and by a group who go under the pseudonym of Bourbaki, is your influence? Why?

  123. guthrie says:

    Maybe Richard could do something useful, by arranging a conference/pub meeting between climate modellers and finance modellers, with the aim of increasing understanding of each other’s work?

  124. Eli Rabett says:

    Richard tries the double back flip:

    the sentiment expressed in my previous intervention was not mine — I appreciate the value of numerical modelling,

    It’s those nasty finance types.

  125. Pingback: Constraining model ECS | …and Then There's Physics

  126. Joshua says:

    “intervention.”

    Richard’s comments aren’t comments, they’re “interventions?”

    Interesting.

  127. Joshua says:

    There’s a logical question I have that maybe some smart person here can dumb down enough to explain the answer to me.

    So some models are useful even though all models are wrong.

    If someone is sitting in a tree directly over me with a 20 lb weight, and they drop it, I’m going to say that – despite the fact that a couple of hundred years hence people will say my model of gravity was wrong, that even contemporary brilliant physicists’ models of gravity were wrong, and that the weather patterns and wind currents that affect the fall of the weight are extremely complex and involve many variables with highly uncertain and perhaps chaotic influences – I should step to the side.

    Can someone give me a useful framework for understanding how to think about the, IMO legitimate, criticism of climate modeling – that there does seem to be a problem with trying to model complex and highly uncertain phenomena with chaotic elements, by adding complexity to the model by trying to model even more complex, highly uncertain, and somewhat chaotic elements?

  128. Eli Rabett says:

    Models in general (not just climate models) are much better at answering differential than absolute questions (e.g. how does the climate change if [CO2] goes from 400 to 450 ppm, as opposed to staying at 400?).

    The reason is that even if the model does not have absolutely all the physics, not all the physics is needed in most cases (a point well captured by MT’s appeal for simpler models).
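
    A toy illustration of that point (mine, not Eli’s; the numbers are invented): a model with a constant bias gets the absolute question wrong while getting the differential question almost exactly right.

        import numpy as np

        def true_temp(co2):    # pretend "truth": logarithmic response to CO2
            return 14.0 + 3.0 * np.log2(co2 / 280.0)

        def model_temp(co2):   # same physics plus a constant 1.5 K absolute bias
            return true_temp(co2) + 1.5

        print("absolute error at 400 ppm:",
              model_temp(400.0) - true_temp(400.0))      # 1.5 K
        print("error in the 400 -> 450 ppm change:",
              (model_temp(450.0) - model_temp(400.0))
              - (true_temp(450.0) - true_temp(400.0)))   # ~0.0 K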

  129. Mal Adapted says:

    WRT parameterization constrained by observations: the 2013 Nature paper by Kosaka and Xie, Recent global-warming hiatus tied to equatorial Pacific surface cooling, is highly cited. From the abstract:

    We present a novel method of uncovering mechanisms for global temperature change by prescribing, in addition to radiative forcing, the observed history of sea surface temperature over the central to eastern tropical Pacific in a climate model. Although the surface temperature prescription is limited to only 8.2% of the global surface, our model reproduces the annual-mean global temperature remarkably well with correlation coefficient r = 0.97 for 1970–2012 (which includes the current hiatus and a period of accelerated global warming)

    Stefan Rahmstorf said at the time:

    They show this with an elegant experiment, in which they “force” their global climate model to follow the observed history of sea surface temperatures in the eastern tropical Pacific. With this trick the model is made to replay the actual sequence of El Niño and La Niña events found in the real world, rather than producing its own events by chance. The result is that the model then also reproduces the observed global average temperature history with great accuracy.
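
    In schematic terms, the prescription is just a restoring (nudging) term applied inside the chosen region. A minimal sketch of the idea (my toy illustration, not Kosaka and Xie’s actual configuration):

        import numpy as np

        rng = np.random.default_rng(2)

        nx, nt, dt, tau = 100, 500, 1.0, 5.0
        pacemaker = slice(40, 48)  # ~8% of the toy domain (cf. 8.2% in the paper)
        obs = np.sin(np.linspace(0, 8 * np.pi, nt))  # stand-in for observed SSTs

        T = np.zeros(nx)
        for n in range(nt):
            T += dt * (-0.1 * T + rng.normal(0.0, 0.05, nx))    # toy free dynamics
            T[pacemaker] += dt * (obs[n] - T[pacemaker]) / tau  # nudge toward "obs"

        print(f"pacemaker region mean {T[pacemaker].mean():+.2f} "
              f"vs observed {obs[-1]:+.2f}")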

    I suppose Tol would consider the “trick” (I rather wish Stefan hadn’t used that word) a fudge?

  130. Mal Adapted says:

    Richard Tol:

    What is the point of a numerical model the mathematical properties of which are not fully understood?

    The other Richard:

    the whole point of using numerical methods … is that analytical closed proofs are not accessible.

    Even if a mathematical model is both formally proved and computationally tractable, it may not prove (heh) to be very useful:

    “Beware of bugs in the above code; I have only proved it correct, not tried it.” – Donald Knuth

  131. mwgrant says:

    @Willard

    > How does one approach QA for new codes if a potential use is to inform policy?

    You might be better placed to answer that question than MT, mwg

    That and the other questions are rhetorical. There are a lot of factors to be considered when developing a model. MT’s ideas have merits. I take his post as an attempt at initiating open-ended discussions.

  132. Richard says:

    Joshua – most if not all non-linear systems (even very simple ones, like x[n] = r * (1 – x[n-1]) ) can behave chaotically in some regions (e.g. for some values of the parameter “r”) and not in others. Take the laminar flow around an aerofoil that also exhibits turbulence along the upper trailing edge: I still fly even though some aspects of the system may have limits to their predictability.

    As I understand it, one technique used in weather forecasting is to vary key parameters and to see how sensitive the predicted outcomes are. Complexity and chaos do not equate to an inability to predict things.

    Climate is not weather. Consider a pot of water being brought to the boil: predicting roughly when it will reach boiling point is easy, but predicting how many bubbles will be present at a specific point in time might be well nigh impossible (but maybe predicting the size distribution of bubbles is possible). So we may struggle to predict the weather more than a week in the future, but predicting global or emergent properties of the planetary system decades in the future is possible.

  133. Willard says:

    > That and the other questions are rhetorical. There are a lot of factors to be considered when developing a model. MT’s ideas have merits. I take his post as an attempt at initiating open-ended discussions.

    Rhetorical questions are ways to freeride while otters are doing all the discussin’.

    If you have something to share about QA or anything else, mwg, there are more direct ways to discuss openly than asking rhetorical questions.

  134. Willard says:

    Next time you see that guy, RichardT, ask him about the Cournot principle.

  135. Richard says:

    Joshua – typo, sorry, should have read … x[n] = r * x[n-1] * ( 1-x[n-1] ) of course. Go to …

    http://tuvalu.santafe.edu/~jgarland/LogisticTools.html

    … and have a play. Set iterations to 50, x0 to 0.2, and vary r from 2 to 4 (e.g. 2, 2.5, 3, 3.5, 3.7), restarting the simulation each time. See how the system behaves differently for varying r. Just to reiterate, chaotic behaviour can arise from a non-linear system, even a simple one like this, but that does not mean a lack of predictable outcomes.
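
    If you’d rather not use the applet, the same experiment takes a few lines (matching its defaults: 50 iterations, x0 = 0.2):

        def orbit(r, x0=0.2, iterations=50):
            xs = [x0]
            for _ in range(iterations):
                xs.append(r * xs[-1] * (1.0 - xs[-1]))
            return xs

        for r in (2.0, 2.5, 3.0, 3.5, 3.7):
            tail = [round(x, 3) for x in orbit(r)[-4:]]
            print(f"r = {r}: final iterates {tail}")
        # r < 3 settles to a fixed point, r = 3.5 to a 4-cycle, r = 3.7 never settles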

    Conversely, order can appear from complex inter-linked non-linear systems. We don’t even need a computer to demonstrate it because nature provides interesting examples …

    https://en.wikipedia.org/wiki/Belousov–Zhabotinsky_reaction

  136. Magma says:

    Michael Tobis: I infer that the real reason for [oil companies’] failure to announce [a GCM demonstrating low sensitivity] is either
    3a. They have indeed tried to do this and failed
    or
    3b. They have not bothered to do this because they understand it would fail

    Yes. Given the stakes involved this seems to be an obvious point. In fact the same could be said about many other research areas pertaining to AGW and CC in which relatively minor funding efforts could return significant benefits… if they thought the results would differ from those that have helped form the current consensus. That groups with such large pools of geoscientific expertise and capital have not done so speaks for itself.

  137. mwgrant says:

    If you have something to share about QA or anything else, mwg, there are more direct ways to discuss openly than asking rhetorical questions.

    Perhaps I was interested in what others might say about those topics, Willard. Perhaps I was interested in what MT might say. That seems consistent with the tenor of MT’s post.

  138. Richard says:

    The European oil & gas companies at least acknowledge that AGW is real (see their recent letter to FT).

    “Widespread carbon pricing is vital to tackling climate change”, Financial Times, 1st June 2015, Signed by: Helge Lund, BG Group plc; Bob Dudley, BP plc; Claudio Descalzi, Eni S.p.A.; Ben van Beurden, Royal Dutch Shell plc; Eldar Sætre, Statoil ASA; Patrick Pouyanné, Total S.A.

    But they are in a state of cognitive dissonance (e.g. Shell’s exploration of the Arctic). Why would they try to resolve that mental state by building a climate model, to prove what they already know?

  139. David Young says:

    This is an extract from The Lukewarmer’s Way blog where ATTP and I have been exchanging views in a more civil way. For the record, it should be posted here too.

    However, ATTP, you are misrepresenting my views on GCMs and in fact the science and theory that back up those views. You deleted the part about GCMs being tremendously valuable as weather forecasting tools. The part about the Met Office changing their GCM ONLY if the change improves weather forecast skill. Climate scientists get what they get and it may not be too great.

    GCMs are a giant mess, as I said and as Tobis more or less said at your blog in more words, even though he would probably not choose those exact words.

    My views are similar to Tobis’ but based in actual rigorous knowledge and experience. GCMs are not useless, but they are sucking the money and talent out of more promising avenues of research. GCMs also often use very old and outdated methods. I pointed out the hyperviscosity combined with higher-order spatial methods. These are just bad methods.

    But the main problem here, which is not numerical, is just the idea that the “attractor” will suck in all trajectories in the long-time limit. There is no science to back that up; it’s just colorful fluid dynamics. “Every time I run the model I get a reasonable climate.” That’s not science.

    So what should we be working on instead of GCMs?

    1. Simpler models are often more accurate and easier to constrain with data. We have seen that in many, many instances. Yet there is a consistent and dangerous dogma that “more physics” must be the answer in many fields of modeling.
    2. I personally believe the “theory” of climate needs improvements. Tropical convection is one such very hard problem, where we need much more than we have at the moment, especially as the theory seems not to agree very well with the data.
    3. We need much better data. As I’ve explained at your blog many times, in climate we care about very small deltas to much larger absolute quantities. That means our models and our data are probably quite inaccurate for what we care about. Let’s get busy.

    Finally, my comment here [Lukewarmer’s] about you is accurate, targeted, and dispassionate. Your behavior has gotten much more partisan and nasty. That is your issue and not mine. It is not whining as anyone can go and read for themselves.

    There is something of a revolution going on in fluid dynamics modeling too. As I said in the comment you so kindly deleted, there is a new review paper on CFD being written by two very big names in the field, to which I was privileged to have some input, and it will be a very big step in the right direction in terms of honestly dealing with the successes, and also the strong limits, of numerical modeling of chaotic flows. This is an important step forward.

    If you are even remotely interested in this science, there are a couple of references you could read that would be useful.

    1. AIAA Journal, August 2014, Young et al. on simpler models of viscous compressible flows. Contains some interesting comparisons of methods.
    2. AIAA Journal, August 2014, Venkatakrishnan, Kamenetskiy et al. A really big paper about multiple solutions for the steady-state Navier-Stokes equations and also “near solutions” that are often found by inadequate numerical methods.
    3. AIAA Journal, July 2015, LeDoux et al. Showing how bad the results can be using single point optimization of airfoils and also some comparisons between methods.
    4. To appear, Booker et al. Analyzing uncertainty in CFD.

    Only if you are really serious about this subject, because it’s very technical.

  140. BBD says:

    Yes, David. Models are a work in progress. Palaeoclimate behaviour is a thing.

  141. dhogaza says:

    David Young:

    “My views are similar to Tobis’ but based in actual rigorous knowledge…” I was unaware that MT lacks actual rigorous knowledge …

  142. bill shockley says:


    Because of a large unmeasured forcing, we have a large uncertainty about what the net human forcing is.

    Doubling Down on Our Faustian Bargain
    The tragedy of this science story is that the great uncertainty in interpretations of the climate forcings did not have to be. Global aerosol properties should be monitored to high precision, similar to the way CO2 is monitored. The capability of measuring detailed aerosol properties has long existed, as demonstrated by observations of Venus. The requirement is measurement of the polarization of reflected sunlight to an accuracy of 0.1 percent, with measurements covering the spectral range from near ultraviolet to the near-infrared at a range of scattering angles, as is possible from an orbiting satellite. Unfortunately, the satellite mission designed for that purpose failed to achieve orbit, suffering precisely the same launch failure as the Orbiting Carbon Observatory (OCO). Although a replacement OCO mission is in preparation, no replacement aerosol mission is scheduled.

    Earth’s Energy Imbalance and Implications
    No practical way to determine the aerosol direct and indirect climate forcings has been proposed other than simultaneous measurement of the reflected solar and emitted thermal radiation fields as described above. The two instruments must be looking at the same area at essentially the same time. Such a mission concept has been well-defined (Hansen et al., 1992) and if carried out by the private sector without a requirement for undue government review panels it could be achieved within a cost of about $100M.

    Earth’s Energy Budget Remained Out of Balance Despite Unusually Low Solar Activity
    The updated energy imbalance calculation has important implications for climate modeling. Its value, which is slightly lower than previous estimates, suggests that most climate models overestimate how readily heat mixes deeply into the ocean and significantly underestimates the cooling effect of small airborne particles called aerosols, which along with greenhouse gases and solar irradiance are critical factors in energy imbalance calculations.

    Global Temperature Update Through 2013
    The approximate stand-still of global temperature during 1940-1975 is generally attributed to an approximate balance of aerosol cooling and greenhouse gas warming during a period of rapid growth of fossil fuel use with little control on particulate air pollution, but satisfactory quantitative interpretation has been impossible because of the absence of adequate aerosol measurements.

  143. bill shockley says:

    BBD,

    I was way off. The Hansen claim of having nailed ECS at 3.0C ± 0.5C goes all the way back (at least) to the 2008 AGU meeting.
    https://11e9d3af15b78cc736c057bc0a44a3493a1015ed.googledrive.com/host/0B6KqW0UlivnVMHNVcjQwSEtQU1E

  144. The maths are beyond my skills, but rough understanding, I hope, is not. Eli, special thanks for the bouquet (thinking of expressive language as a kind of gourmet treat):

    “undress his pompous blather”, which I take in an emperor’s-new-clothes kind of way.

    But the main reason I showed up here, aside from appreciation of MT’s work, is that the New York Times, my regular rag, has come up with some summaries of this new paper:
    http://www.nytimes.com/2015/09/12/science/climate-study-predicts-huge-sea-level-rise-if-all-fossil-fuels-are-burned.html

    Original here, open access:
    http://advances.sciencemag.org/content/1/8/e1500589

    I don’t know if, since this is a modeling effort, it is at all relevant, but worth noting in any case.

  145. David Young said September 13, 2015 at 12:13 am,

    “GCM’s are a giant mess…”

    Assuming that the most important thing is what happens in the long run, Marotzke and Forster (2015) with their 62-year runs and Steinman, Mann, and Miller (2015) with their NMO show well enough that the models are accurate enough with respect to the *underlying* behavior over the long run. They take into account oscillations up to roughly 60 years in length to address the underlying behavior. Look at the three graphs, and most especially the graph of the 60-year running mean I give in this comment
    https://andthentheresphysics.wordpress.com/2015/05/30/hmmm-entering-a-cooling-phase/#comment-57068
    on May 30, 2015 at 2:22 pm under the thread “Hmmm, entering a cooling phase?” and note that it clearly tracks the graph of a positively accelerated function. The models are consistent with this. It is not accurate to call the models a big mess when they are accurate enough in this most important sense.
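
    The smoothing step being described is easy to reproduce on synthetic data (my illustration: an invented accelerating trend plus a 60-year oscillation, not the actual temperature series):

        import numpy as np

        years = np.arange(1880, 2015)
        trend = 0.00005 * (years - 1880) ** 2  # positively accelerated underlying trend
        cycle = 0.1 * np.sin(2 * np.pi * (years - 1880) / 60.0)  # ~60-year oscillation

        window = 60
        running = np.convolve(trend + cycle, np.ones(window) / window, mode="valid")
        # The 60-year mean removes the oscillation; the remainder still accelerates:
        print("smallest second difference:", np.diff(running, 2).min())  # positive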

  146. Richard says:

    @KeefeAndAmanda – any chance you could write this and your linked contribution up as a fully fledged blog piece?

  147. bill shockley says:

    K&A, doesn’t this imply that the mid-century aerosol burden was not a significant forcing? Because removing the long oscillations would not undo the aerosol forcings.

  148. bill shockley says:

    Susan Anderson,

    The Caldeira et al. paper and the NY Times coverage of it come off as spin control on the Hansen et al. paper: 1000 years for complete submersion vs. 50 years for effective dysfunction of coastal cities.

  149. MT,
    Do you agree with what DY said here?

    GCMs are a giant mess, as I said and as Tobis more or less said at your blog

    As I’ve been trying to explain to David, my issue with his views is partly that he seems to think that the various computational issues related to climate modelling warrant calling GCMs a giant mess. I disagree, I suspect MT disagrees (although correct me if wrong), and I think this illustrates that DY does not understand how computational models are used in a scientific context. I also find DY’s apparent sense that he knows better than a host of actual experts somewhat irritating. There are plenty of clever and informed people that I can talk with, without talking with those who feel the need to tell me how clever and informed they are. This comment, where DY says:

    Without appearing immodest, 40 years of experience gives me a little bit of an edge on Schmidt

    seems to illustrate this issue.

    Since I have little time for either of these positions, I can’t see much point in posting many more of DY’s comments. I think I understand his position and can’t see any value in discussing it further.

  150. Joshua says:

    Anders –

    ==> ” I can’t see much point in posting many more of DY’s comments.”

    Consider again, why David posts comment here. From his own words, we have evidence supporting a conclusion that it isn’t to discuss problems with climate models.

    FWIW, IMO, your mistake is in trying to engage him in such a discussion (and good god, why did you try actually having a fruitful discussion over at Lukewarmer’s Way?). In that sense, I think that your annoyance is kinda self-inflicted. Not that it matters meaningfully either way, but there is an alternative to refusing to post any of his further comments: You might try thanking him for his concerns about models and move on.

  151. Joshua,
    Yes, yes, it was foolish. I should have known better by now, but sometimes I just can’t quite resist, however hard I try 🙂

  152. Kevin O'Neill says:

    Without appearing immodest, 40 years of experience gives me a little bit of an edge on Schmidt

    Yes, exactly. Why David Young wasn’t named to head GISS I’ll never know. Obviously Gavin is out of his league and we should have given the job to one of the real experts. (/snark)

  153. David Young says:

    Joshua, the reason I paid attention to this post was that I thought MT was onto something important that some of us have been saying for a long while. I was not expecting much detailed response, because Tobis seemed to get no detailed response either. There is nothing contradictory or unusual about that.

    Yes, ATTP, there are plenty of clever and smart people. There is nothing negative, however, about being proud of your group’s work. I quite frankly don’t care much about the climate blogosphere either. It sometimes helps me learn things, but it is in general abysmally ignorant and politicized. I devote time to it to learn from the occasional science content or links to papers.

  154. Joshua says:

    David –

    ==> “There is nothing contradictory or unusual about that.”

    I wasn’t suggesting that you did something contradictory. Quite the opposite. It seems to me that your comments here are/were very consistent with your earlier conclusion that it is impossible for you to discuss problems with GCMs here: Despite a surface gleam suggesting otherwise, beneath the surface your comments and approach didn’t seem conducive for (or solicitous of) that type of discussion – which is what Anders kept saying. IMO, your comments seemed to be very much consistent with a belief that such an exchange would not be possible for you here.

    Nor do I think such an approach is even remotely unusual. It is more the norm when people are as tribalistically oriented, and even more importantly, resistant to accepting and controlling for their own tribal orientation.

  155. Bill Shockley, as an occasional commenter here with specific limitations on evaluating actual science, I try to tread carefully despite my “fools rush in where angels fear to tread” lapses. Sometimes I even get it right.

    I doubt very much that Winkelmann, Levermann, Ridgwell and Caldeira embarked on an AAAS (Science Advances) article to do “spin control”. Since the NYTimes failed to cover the Hansen material at all except in Revkin’s biased DotEarth blog (coverage addressed by Tamino and Rabett, and infuriating to people like me, who did what we could to point out that Dr. Hansen was making his point for a reason and with considerable scientific backing and ability), I doubt spin control was even necessary; NYT editors ensured that Hansen’s discussion article did not receive unbiased coverage. I read both with interest (inasmuch as time and ability permitted) and found them both useful and not contradictory, as they both indicated possibilities for thought. Caldeira and his colleagues did not sound unconcerned. There was another article, about not dismissing the long tail, that I saw yesterday and that also seemed relevant:
    http://www.skepticalscience.com/long-hot-tail-eocene.html

    I would suggest that the scientifically inclined read the actual material, not that I would be critical of the Gillis coverage. Andy Revkin did a mostly reasonable job of his interview, but as usual he came in with a bias and a point to make, so if there was spin it was mostly his strong dislike of Dr. Hansen’s material, with a spice of self-promotion. Here’s the paper:
    http://advances.sciencemag.org/content/1/8/e1500589.full

    I prefer Elizabeth Kolbert’s coverage. Here’s Kolbert:
    http://www.newyorker.com/news/daily-comment/if-we-burned-all-the-fossil-fuel-in-the-world

  156. Susan,
    Thanks. I’m actually reading Elizabeth Kolbert’s book (The Sixth Extinction) at the moment. Finding it really very good, but not the easiest of bedtime books 🙂

  157. David Young makes two claims that his own actions directly contradict: he says he doesn’t much care about the climate blogosphere, yet (1) he spends a lot of time and effort on it; and he calls it politicized, yet (2) his own material is highly politicized and appears to generate a lot of correction for its bias.

    A case of believe what I say, not what I do? I agree with those who suggest he will not change; he’s been doing this for a long time, like many others who arrive and take up time and energy until their inflexibility becomes too obvious to ignore.

  158. Well thanks! Perhaps I should have refrained from the most recent (feel free to remove), but it is annoying at my level to see deceptive tactics deployed so regularly.

    The final chapter is the best. She is an excellent writer and researcher, not just in climate. Very honest!

  159. Joshua says:

    Susan –

    As much as I think that David doesn’t accept accountability for his own approach, I don’t think that you can hold him accountable for the decisions others make about how they spend their time and energy.

    This is kind of a pet peeve of mine about internet discourse.

  160. > I thought MT was onto something important that some of us have been saying for a long while.

    Here’s what that “some of us” have been saying for a long while:

    Please beware that it’s a technical comment.

  161. Willard,
    That comment seems moderately sensible. I hadn’t appreciated that it originated here.

  162. bill shockley says:

    Susan,

    I’m not going to insist that I’m right or even try to prove it. Let me just tell you how I sprang (LOL) to my opinion.

    Susan said:
    I doubt very much that Winkelmann, Levermann, Ridgwell and Caldeira embarked on a AAAS (Science Advances) article to do “spin control”

    Mainstream scientists are conservative. There are reasons for this; James Hansen even wrote a formal paper on the subject, and it is not a really radical assertion. But more than that, I came across a Caldeira paper a few months ago that was “sold” to the public in a similar manner (“never been done before”), was really nothing new, and, IMO, was a bunch of bull. So I did not start out neutral with (due) respect to this group.

    On to the science:

    From the Caldeira paper:
    If the 2°C target, corresponding to about 600 GtC of additional carbon release compared to year 2010, were attained, the millennial sea-level rise from Antarctica could likely be restricted to 2 m.

    From a recent Hansen lecture:
    And we know from the Earth’s history that the last time the planet was warmer than it is today was during the Eemian, when it was less than one degree warmer than it is now, and sea level was 6 to 8 meters higher

    I don’t think Hansen’s 6–8 m was even a mild bone of contention in his recent SLR and Storms paper. For an apples-to-apples comparison, you would have to add the other global sources of ice melt to the Caldeira number, but these are small in comparison to the melt volume from Antarctica, so there is a huge disagreement here.
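
    To make the rough arithmetic explicit, here is a back-of-the-envelope sketch in Python; the non-Antarctic terms are placeholders I am assuming purely for illustration, not figures from either paper:

        # Rough apples-to-apples check of the two quotes above. Caldeira et
        # al.'s ~2 m is Antarctica only; Hansen's Eemian 6-8 m is total sea
        # level. The non-Antarctic terms are assumed placeholders, not
        # numbers from either paper.
        antarctica_m = 2.0  # Caldeira et al., Antarctica only, 2C scenario
        greenland_m = 0.5   # placeholder: partial Greenland contribution
        glaciers_m = 0.4    # placeholder: mountain glaciers
        thermal_m = 0.4     # placeholder: ocean thermal expansion

        total_m = antarctica_m + greenland_m + glaciers_m + thermal_m
        print("Caldeira-style total: ~%.1f m" % total_m)  # ~3.3 m
        print("Hansen's Eemian benchmark: 6-8 m")
        # However you shade the placeholders, ~3 m vs 6-8 m: the gap stands.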

    From the Caldeira paper:
    If the 2°C target, corresponding to about 600 GtC of additional carbon release compared to year 2010, were attained, the millennial sea-level rise from Antarctica could likely be restricted to 2 m.

    From a recent Hansen blog post:
    IPCC conclusions about sea level rise rely substantially on models. Ice sheet models are very sluggish in response to forcings. It is important to recognize a great difference in the status of (atmosphere-ocean) climate models and ice sheet models. Climate models are based on general circulation models that have a long pedigree. The fundamental equations they solve do a good job of simulating atmosphere and ocean circulations. Uncertainties remain in climate models, such as how well they handle the effect of clouds on climate sensitivity. However, the climate models are extensively tested, and paleoclimate changes confirm their approximate sensitivities.

    In contrast, we show in a prior paper and our new paper that ice sheet models are far too sluggish compared with the magnitude and speed of sea level changes in the paleoclimate record. This is not surprising, given the primitive state of ice sheet modeling. For example, a recent ice sheet model sensitivity study finds that incorporating the physical processes of hydrofracturing of ice and ice cliff failure increases their calculated sea level rise from 2 meters to 17 meters and reduces the potential time for West Antarctic collapse to decadal time scales. Other researchers [7, 8] show that part of the East Antarctic ice sheet sits on bedrock well below sea level. Thus, West Antarctica is not the only potential source of rapid change; part of the East Antarctic ice sheet is also susceptible to rapid retreat because of its direct contact with the ocean and because the bed beneath the ice slopes landward (Fig. 1), which makes it less stable.

    Here Hansen is pointing out order-of-magnitude differences between the rates of ice loss that ice sheet models currently produce and the rates the paleoclimate record suggests they should produce.

    And beyond this he has gone to the trouble of studying the paleo literature, and employed scientists who make it their specialty, to document times in recent earth history (the Eemian period) when sea level rose at high rates. Work on his paper began 9 years ago. Hansen believes Earth’s history provides the best basis for understanding climate change and that modeling should be done in conjunction with paleo studies to keep the models honest.

    Conservative scientists get media coverage in conservative media and likewise for progressive scientists/progressive media. Thus Hansen/HuffPost, Caldeira/NYTimes.

    These were my thoughts when I saw your post.

    FWIW.

    Any details or links you would like, please ask.

    Regards,
    Bill

  163. Joshua, you’re right, I should have resisted the temptation to state the obvious. We can only change ourselves.

    As a result, my original intent, which was to ask whether the modeling involved in the Caldeira et al. study was relevant to mt’s point, got lost in the fog.

    I do get irked at the one-sided requirement for perfection. The unskeptical skeptics can carry on forever, but if we try to show where the loose connections have been manipulated, our behavior comes into question.

  164. Bill, I have been passionately involved in all this (which is a bit over my head) for a while now. I thought you were referring to the only NYTimes coverage of the Hansen discussion paper, which was an attack by Andrew Revkin at the DotEarth blog; Revkin’s bias sticks in my craw, especially since he complains about sensationalism and yet was quite sensationalistic about it. I probably transferred my anger about that to your comment.

    In the main, I follow a variety of materials that seem to indicate that reality is getting ahead of model projections as represented by proper science, such as the IPCC summaries, and coming in at the high end.

    In addition, I have a direct interest as my home in Boston is about 5 feet above high tide, perhaps less than 3 feet above storm surge tide. I am planning to move, and expect this to become noticeably problematic somewhere around 15 years from now.

    In fact, I don’t find the Hansen and Caldeira efforts contradictory; they both want us to think about the range of possibilities. Revkin not so much: he has an agenda and appears to have abandoned progress on renewables, like Cameron (led by Osborne imho) and a good few others.

  165. Joshua says:

    Susan –

    Cheers. It’s sometimes easy to get distracted by the personality politics.

  166. bill shockley says:

    Susan, thanks.

    Caldeira is my “Revkin”.

    Aaaarrrrgh!

    PS, Kolbert did a nice piece on Hansen in 2009.

    I was a religious NYer reader for a long time. Then I found Chomsky.

    If you trust Caldeira, your home in Boston should be fine…. 🙂

  167. MMM says:

    “But the main problem here, which is not numerical, is just the idea that the “attractor” will suck in all trajectories in the long time limit. There is no science to back that up, its just colorful fluid dynamics. “Every time I run the model I get a reasonable climate.” That’s not science.”

    This. Exactly. And the clear takeaway is that the Universe itself is clearly an inferior product, because look at the relative stability of the Earth’s climate: it makes no sense. In such a complex, coupled system as the Earth, there should be a lot more chaotic behavior. By now we should have both frozen solid and boiled. I demand that our Creator stop using her clearly outdated methodologies.
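
    Snark aside, the attractor intuition is easy to demo on a toy system. Here is a minimal sketch, mine and purely illustrative, using Lorenz-63 (a toy, obviously nothing like a GCM): trajectories from different starting points diverge exponentially, yet their long-run statistics agree.

        # Minimal sketch of the attractor idea, using the Lorenz-63 toy
        # system with its standard parameters (nothing GCM-like).
        # Individual trajectories diverge, but long-run statistics converge
        # regardless of the initial condition.
        import numpy as np

        def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            # One forward-Euler step of the Lorenz-63 equations.
            x, y, z = state
            deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
            return state + dt * deriv

        def long_run_mean_z(initial_state, n_steps=200000, spinup=20000):
            # Time-mean of z after discarding a spin-up transient.
            state = np.array(initial_state, dtype=float)
            total, count = 0.0, 0
            for i in range(n_steps):
                state = lorenz_step(state)
                if i >= spinup:
                    total += state[2]
                    count += 1
            return total / count

        # Two very different starting points, near-identical climatology:
        print(long_run_mean_z([1.0, 1.0, 1.0]))    # roughly 23.5
        print(long_run_mean_z([-8.0, 7.0, 27.0]))  # roughly the same

    That, in miniature, is the testable content of “every time I run the model I get a reasonable climate”: the statistics, not the individual trajectories, are the prediction.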

  168. Bill, thanks. That’s good info; I did wonder about Caldeira. Keep up the good work! Susan

  169. bill shockley says:

    Susan, thank you. You are very kind!

  170. In reply to my comment
    https://andthentheresphysics.wordpress.com/2015/09/09/guest-post-some-thoughts-on-scientific-software-in-general-and-climate-modeling-in-particular/#comment-62704
    of September 13, 2015 at 6:27 am, Richard said immediately afterwards, on September 13, 2015 at 6:45 am,

    “@KeefeAndAmanda – any chance you could write this and your linked contribution up as a fully fledged blog piece?”

    Thank you, but in general I prefer to continue what I’m doing: mostly, every so often, trying to compose some meaningful comments in reply to something said that compelled me to respond. That being said, I welcome you or anyone with a blog to write a blog piece that comments on and links to any set of comments I write, and to reproduce as much of their text or embedded figures as one wants.

    bill shockley said on September 13, 2015 at 7:01 am,

    “K&A, doesn’t this imply that the mid-century aerosol burden was not a significant forcing? Because removing the long oscillations would not undo the aerosol forcings.”

    I don’t know that it necessarily implies it, but in conjunction with some recent studies discussed at this blog, which suggest that albedo does not and perhaps cannot change much (at least not as much as was thought prior to these studies), it seems to (perhaps even strongly) suggest it. It depends on what happened to albedo not just during the mid-century aerosol burden but before and after it.

    On this point that albedo does not and perhaps cannot change much, see ATTP’s post
    “New albedo paper?”
    https://andthentheresphysics.wordpress.com/2015/03/11/new-albedo-paper/
    and the comment thread underneath. I commented on this paper and on some of ATTP’s comments about it several times throughout the comments thread. Here are two of these comments by ATTP:

    “[In reply to a David Blake comment, “The reflected energy from Earth is highly regulated & this regulation by clouds.”] Yes, but this simply implies that the albedo doesn’t change much, not that it responds to balance changes in external forcings.”

    And:

    “Something that struck me about this whole cloud feedback thing, is that if you consider Soden & Held (2006) they suggest cloud feedbacks of between about 0 and 1.2 Wm-2. More recent numbers are maybe 0.2–0.7 Wm-2. As I understand it, this is both albedo and long-wavelength effect, while what Stephens is looking at is albedo only. So, it’s possible that this work implies that the net change in albedo (surface + clouds) is always going to be small, but that says little about the role of clouds in reducing the outgoing long-wavelength flux.”
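
    Reading those quoted feedback numbers as W m-2 per kelvin of warming (the usual units for feedbacks), here is a back-of-the-envelope sketch of why a 0.2–0.7 cloud term matters; the constants below are textbook-ish values I am assuming for illustration, not numbers from Stephens or Soden & Held:

        # Back-of-the-envelope: how a cloud feedback in the quoted 0.2-0.7
        # W m^-2 K^-1 range shifts equilibrium climate sensitivity (ECS).
        # All constants are illustrative assumed values, not taken from the
        # papers discussed above.
        F_2X = 3.7    # W m^-2, forcing from doubled CO2 (assumed)
        PLANCK = 3.2  # W m^-2 K^-1, Planck restoring response (assumed)
        OTHER = 1.1   # W m^-2 K^-1, combined water vapour, lapse rate and
                      # surface albedo feedbacks (assumed)

        def ecs(cloud_feedback):
            # ECS = forcing / (Planck response minus total positive feedbacks)
            return F_2X / (PLANCK - OTHER - cloud_feedback)

        for cf in (0.0, 0.2, 0.7):
            print("cloud feedback %.1f -> ECS ~ %.1f K" % (cf, ecs(cf)))
        # 0.0 -> ~1.8 K, 0.2 -> ~1.9 K, 0.7 -> ~2.6 K

    The only point of the sketch is that the cloud term sits in the denominator, so a few tenths of a W m-2 K-1 move the answer noticeably.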

  171. bill shockley says:

    K&A,

    Thanks for the reference to that other post + comments… will have a look.

    Have to say… that’s quite a nice-looking curve (with the long oscillations removed). My immediate reaction was that this can’t be valid, but now I’m beginning to wonder… If it is valid, I think Hansen would be quite interested!

  172. Richard says:

    BREAKING NEWS (to me) –

    Question (asked in this thread): Why won’t the fossil fuel companies create their own climate models, to demonstrate that it can be done better and that there is not a problem?

    Answer: They already did that! But decided that they didn’t like the answer.

    http://insideclimatenews.org/news/15092015/Exxons-own-research-confirmed-fossil-fuels-role-in-global-warming

    In the late 1980s they flipped from openly financing and doing climate science to funding doubt and obfuscation.

    Do take the trouble to see the clips from ex-Exxon scientists. Fascinating. Scandalous.

  174. bill shockley says:

    Really nice piece of investigative reporting, thanks. There’s a message here about ego, competitiveness, goal orientation, and especially the influence of the group. The human condition. I can empathize.

    A similar pattern of investigation followed by obfuscation took place in the US military around the time of the Reagan administration. I don’t recall many of the details, but the story is covered in the 3-part documentary “Climate Wars” by Dr Iain Stewart. It used to be available on YouTube; a shame, as I really liked that flick. BBC link.

  175. sidd says:

    J. Pipitone and S. Easterbrook, “Assessing climate model software quality: a defect density analysis of three models”, Geoscientific Model Development, vol. 5, no. 4, pp. 1009–1022, 2012.
