Can we trust climate models?

I got into another slightly silly discussion yesterday about climate models, which I tried to resolve by directing the other person to a new paper by Julia Hargreaves and James Annan called Can we trust climate models?. This is a paper I quite like because it presents both the aspects of climate models that we might regard as robust and the aspects that are less certain.

For example, the paper says,

It seems that genuinely useful climate forecasting on the multiannual to decadal timescale may be still some way away at this time. Thus it is clear that the models can currently only be relied upon for a broad picture of future climate changes.

Seems reasonable to me. Individual models may be able to represent decadal variability, but collectively models are still unable to reliably predict what might be expected in the coming years or decade. I would actually argue that this is one reason why claiming that climate models have failed because they didn’t predict the “hiatus” is wrong. They were never really capable of predicting such a hiatus and so suggesting that they’re wrong because they didn’t do something they were not capable of doing seems a little silly.

The paper also says,

On the regional scale there are, however, substantial disagreements in magnitude and pattern of temperature anomalies both between models and data, and also within the model ensemble. Therefore, we cannot expect precise predictions from current climate models.

In fact, models are very far from being perfect. They struggle to generate robust simulations of recent climate changes on regional scales, even when run at the highest resolutions available.

So, models don't perform particularly well at the regional scale. Not that surprising. Even the highest resolutions are probably still unable to properly resolve these smaller scales and I assume that some of the relevant physics is parametrised, rather than self-consistently evolved by the models. I would add, however, that it's unlikely that models have no value at these regional scales. I can go back 20 years and read papers in my field that present results from simulations that used resolutions that we would never consider using today. Today, we might understand such systems in much more detail than we did 20 years ago, but the results from these early simulations were not valueless. It's a process of evolution; we don't go from "wrong, ignore" to "right, accept".

The paper does, however, say

Probably the most iconic and influential result arising from climate models is the prediction that, dependent on the rate of increase of CO2 emissions, global and annual mean temperature will rise by around 2–4°C over the 21st century. We argue that this result is indeed credible, as are the supplementary predictions that the land will on average warm by around 50% more than the oceans, high latitudes more than the tropics, and that the hydrological cycle will generally intensify.

So, the overall warming, the variation between the land and oceans, and changes to the hydrological cycle are well represented by the models. This is probably because these largely represent the bits of physics (radiative transfer, heat contents of the different components of the climate system, water cycle) that we understand well.

So, some aspects of climate models are robust and can probably be trusted (overall warming, water cycle), while others are less robust and should be treated with caution (regional and decadal predictions). What does this really mean? As far as I'm concerned it just means we have to be aware of these issues, take them into account, and work with what we have. As the paper itself says,

Far more problematic, is that we are unwilling to wait 100 years before learning about climate models, and cannot wait before making today’s decisions.

I should add, however, that climate models are not the only evidence for climate change and so just because we can't trust all aspects of today's climate models doesn't really mean that we have no reliable evidence for climate change or that we should simply decide to wait until we have climate models that are better at representing the regional and decadal scales.

146 Responses to Can we trust climate models?

  1. BBD says:

    ATTP

    What does this really mean?

    It means, as you illustrate here, that there is a great deal of strawman argumentation about models by contrarians.

  2. uknowispeaksense says:

    Reblogged this on uknowispeaksense and commented:
    In my discussions with those you call “sceptics” and I call something else, I am yet to find one who can adequately explain the range shifts of tens of thousands of animal, plant, fungal and bacterial species that are consistent with accelerated and unprecedented warming. Perhaps this is why they like to focus on models because explaining away real world evidence is too difficult and more than a little inconvenient?

  3. Florence nee Fedup says:

    Models are only as good as the data put in. Sometimes that data does not act as predicted. Yes, then one goes back and looks at the model. I believe the scientists have done this. Yes, and where there has been disparity, more research identifies where the problem is. Yes, the evidence still comes down on the side of the science. Yes, carbon emissions lead to man-made climate change.

    I am sure that there are many models. Many different scientists working on the problem.

    Even if the science proved wrong, which is highly unlikely, the globe and economy would benefit from moving to renewables for power generation. Yes, expensive to transfer to, but cheaper in the long run.

    This government is closing down two technologies that take us well into the future. Technology where future jobs lie.

    Yes, moving to CEF and fibre to the home. One could call this mob Luddites, looking backwards, and not to the future. Are a few coal mines worth saving, to keep us in last-century technology?

  4. Michael 2 says:

    “we are unwilling to wait 100 years before learning about climate models, and cannot wait before making today’s decisions.”

    Well then, don’t wait. You make today’s decisions for you, and I will make today’s decisions for me. Science has neither schedule, deadline or urgency.

    The number of people that would like to decide for me what I am going to do today is legion and starts with my own family, my boss, my church, the friendly neighborhood used car dealer, movie theaters and so on ad infinitum.

    Quite frankly running out of gasoline is vastly more certain than turning Earth into Venus (which, if it were possible, suggests turning Mars into Earth).

  5. Michael 2 says:

    uknowispeaksense says: “adequately explain the range shifts of tens of thousands of animal, plant, fungal and bacterial species that are consistent with accelerated and unprecedented warming.”

    I suppose it starts with being convinced of range shifts of tens of thousands … at all, for any reason. In the unlikely event you succeed there, I’ll just say, “End of LIA”. Next…

  6. Michael 2 says:

    “that there is a great deal of strawman argumentation about models by contrarians.”

    True. If I do not hold a model in my hands (i.e., have access to it), then I can only speculate what it actually does, and then I can challenge my speculation. Sort of like playing solitaire.

    But since YOU don’t have the model either, what shall I call your faith in those models that you have not personally evaluated?

  7. Michael 2 says:

    I find it amusing, maybe even slightly annoying, when advocates of models backpedal saying the models were never intended to be able to predict regional phenomena or decadal phenomena.

    I don’t believe you. Of course the models were intended to predict such things. In fact, the models are designed to calculate exactly what happens in every grid cell on Earth, every minute of the day and night, for there is no other way to process physical interaction.

    The models SHOULD be able to zero in on any cell on any date in the future and tell you exactly what will be happening THAT DAY.

    If you cannot do that then you have not modeled the climate.

    And yes, it will take a great computer, (*) Deep Think, or at least NOAA’s new Gaea computer.

    * Douglas Adams' supercomputer in Hitchhiker's Guide to the Galaxy.

  8. Michael 2,
    Although I do disagree with quite a bit of what you've said, you do make some interesting comments. I have to finish cooking dinner so can't respond further now. Maybe you could do me a favour though. Could you avoid making 4 or 5 somewhat unrelated comments in quick succession? It's hard to have a coherent discussion – or know where to start – and it has a tendency to come across as thread-bombing.

  9. Patrick says:

    I find it interesting that this paper is claiming the most credible predictions made by climate models are the ones furthest in the future. This defies common sense. The further in the future you try to predict, the more uncertainty should increase. All manner of things could change in the climate system in 100 years time.

  10. jsam says:

    Common sense is not science.

  11. Patrick says:

    Okay, common science sense then. I am an electrical engineer familiar with complex dynamical systems, so I have a different common sense to most.

  12. Michael 2 says:

    Patrick – I think the point ATTP is trying to make is that the long term "smoothed result" is predictable while the daily changes are not. This is true when you treat the entire Earth as a system and apply thermodynamic principles to the system AS A WHOLE. Such a model isn't even trying to inspect periodic phenomena or regional phenomena.

    The other approach is to calculate every single transaction of heat. That’s like making a computer program using assembly language. You understand those discrete transactions perfectly and you let the calculations reveal the future. It cannot fail IF you are thorough in your calculations and starting points.

    But that takes too much computation. A compromise is grid cells. Not a very good compromise as it lacks the perfection of computing everything or the simplicity of treating the Earth as a single system.

  13. BBD says:

    Michael 2

    Quite frankly running out of gasoline is vastly more certain than turning Earth into Venus

    Strawman. Nobody is arguing that.

    In the unlikely event you succeed there, I’ll just say, “End of LIA”. Next…

    Climate isn’t a bouncing ball. There’s no “recovery” from the LIA in the C20th. No physical mechanism.

    I find it amusing, maybe even slightly annoying, when advocates of models backpedal saying the models were never intended to be able to predict regional phenomena or decadal phenomena.

    Strawman. The regional projections are not as robust as the major global trends in temperature, hydrological cycle etc. The models are not perfect representations of the actual Earth climate system; they are useful approximations of it, sufficient to investigate the broad changes in response to centennial forcing by GHGs. Please read the head post.

  14. BBD says:

    Patrick

    All manner of things could change in the climate system in 100 years time.

    Such as?

  15. BBD says:

    I find it interesting that this paper is claiming the most credible predictions made by climate models are the ones furthest in the future. This defies common sense.

    Strawman. H&A wrote this:

    Thus it is clear that the models can currently only be relied upon for a broad picture of future climate changes.

  16. Eli Rabett says:

    The impressive thing about GCMs is that they get the global circulation right. Steve Easterbrook had a nice visualization of that. Show that to DY and the thousand clowns.

  17. Michael 2 writes: I find it interesting that this paper is claiming the most credible predictions made by climate models are the ones furthest in the future. This defies common sense.

    OK, by Michael's common sense logic we should be more accurate predicting 1 coin flip than, say, 1 million. More accurate predicting 10 than 100 or 1000. Me, I figure I'll be more accurate predicting the overall percentage after 1 million.

    Does that mean I’m defying common sense? I don’t think so. It’s not an initial value problem versus a boundary value problem, but it has similarities. Michael needs to rethink *how* to apply common sense — or perhaps the skill set necessary to analyze the problem properly isn’t all that common.
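    
    Here's a minimal Python sketch of that point (purely illustrative; the fair coin and the numbers are my own assumptions, nothing to do with any actual climate model):
    
        import random
    
        random.seed(42)
    
        def fraction_heads(n_flips):
            """Fraction of heads in n_flips of a fair coin."""
            return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips
    
        # Any single flip is a 50/50 guess, but the aggregate fraction
        # becomes very predictable as the number of flips grows.
        for n in (1, 10, 1000, 1_000_000):
            print(n, fraction_heads(n))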

  18. Michael 2,
    Given your response to Patrick, I think you kind of get this, but I’ll elaborate a little.

    Of course the models were intended to predict such things. In fact, the models are designed to calculate exactly what happens in every grid cell on Earth, every minute of the day and night, for there is no other way to process physical interaction.

    If you look at individual models then they produce decadal variability, but each model does not produce the same temporal variations. What gets presented (for example in IPCC reports) are ensemble averages which therefore tend to smooth out such variability. So if you ask the question "do individual models produce variability on decadal scales?", the answer is "yes". If, however, you ask the question "can you take an ensemble of models and reliably predict the warming rate in the coming decade?", the answer is "no".

    Eli,
    Yes, that’s exactly the kind of thing that they do get right.

  19. johnrussell40 says:

    Patrick. You write… “The further in the future you try to predict, the more uncertainty should increase.”

    You appear to be confusing weather and climate. When models predict weather, what you say holds true. When models are projecting climate, the opposite is true; though if projecting too far ahead other variables come into play which means the projection becomes more difficult. Note the difference between ‘projections’ which are created on the basis of specific scenarios at a given moment and can thus be modified by events—and consequently cannot turn out to be ‘right’ or ‘wrong’—and ‘predictions’ which are set in stone at the moment they’re disseminated.

    Within given bands of uncertainty we can have a high confidence in specific outcomes say by 2050 or 2100. However, if you want projections for 10 years hence there is too much natural variability to be able to extract the signal from the noise, which makes them of little use. For instance a large volcano could cool the Earth rapidly for 2 years, or an El Niño could have the opposite effect. The projection would therefore change wildly. But if we look, say, 30 years ahead, such random natural events tend to cancel each other out leaving a clear signal. Beyond 2100 other things start to come into play which makes projections increasingly problematic. But these ‘other things’ tend not to be natural, because natural, long-cycle variations, such as changing orbit and tilt, are well understood and built into the models. The largest unknown variables of course are the changing population, land use and human response to climate impacts. If we take action we could reduce CO2 and the worst impacts might not happen. If we don’t…, well, do I need to explain?

  20. hvw says:

    "However, the credibility of model outputs is clearly limited when we focus on the finer scales at which knowledge is desired by stakeholders."

    This, put more bluntly, should be read, in my opinion, as "(Regional) climate models are mostly useless to guide adaptation measures". This is because, in general, climate change adaptation has to be planned and implemented on local to regional scales to deal with impacts that are specific to those spatial scales.

    It is quite telling that the "We accept the science, but we don't want to forfeit the profits from emitting GHG" crowd very much pushes the "adaptation instead of mitigation" line of argument, but ignores the show-stopping problems of model imperfections here. The same people have no problem invoking "uncertainty monsters" when it comes to putting end-of-century global temperature predictions into doubt.

  21. hvw,
    That’s an interesting interpretation. I hadn’t thought of that, but you make a valid point.

  22. hvw says:

    ATTP,
    a particularly instructive study for me, when it comes to the dependence of predictability (of extremes in this case) on the scale of aggregation, was presented by Fischer et al., 2013, "Robust spatially aggregated projections of climate extremes".
    http://dx.doi.org/10.1038/NCLIMATE2051

    They provide a nice outreach-compatible analogy:

    “To provide an analogy, it is impossible to predict the time and location of the next traffic accident in a city. But there will be one somewhere, so it makes sense to have an ambulance ready. Higher speed limits will result in more accidents and will require more ambulances, even if it remains impossible to predict the locations of future accidents. Thereby some aggregated aspects are predictable even if the single events are not.”

  23. Bob Campbell says:

    Michael 2
    The computer in Douglas Adams' Hitchhiker's Guide to the Galaxy was Deep Thought.

  24. Michael 2 says:

    Kevin O’Neill says: Michael 2 writes: “… This defies common sense.”

    I did not write those words. As others have written, common sense is almost irrelevant in science — although I suggest that “common to scientists” actually has value.

    “or perhaps the skill set necessary to analyze the problem properly isn’t all that common.”

    Bingo. I suspect you meant to insult but you hit the nail on the head for me and many. It would be interesting to explore what you mean by “proper” but that’s for another day.

    “OK, by Michael’s common sense logic we should be more accurate predicting 1 coin flip than say 1 million.”

    It isn’t my logic but I’ll try to rewrite what I understand. No one can predict the next — or the millionth — coin flip since each flip is independent of all previous flips. But if you propose that coin flips are dependent, that what the millionth coin flip turns up depends on the first (and every intervening), then yes, you can predict with precision not only the millionth, but the 10th or any other.

    Let us propose a simple rule for coin flipping: Each flip must be the opposite face of the preceding flip. Then the entire list of flips suddenly becomes predictable and predicated solely on the very first, the only flip permitted to be random.

    Since physics is supposed to be predictable, in theory you could develop a model that mimics the Earth precisely. I suspect however that such a thing would BE the Earth as anything less would require approximations and shortcuts (thanks to Douglas Adams for brilliantly and comically illuminating this problem).

    It can be done and nuclear physics has been modeling particle interactions for decades. The purpose of experiments is largely to validate the models, not really to discover anything. The Higgs boson was theorized — the LHC merely exists to discover one and prove the model (the math and its application) to be correct. What then? I don't know, but if the Higgs boson is validated, then a substantial portion of thinking and math is also validated and it's probably good for something not yet obvious.

  25. Michael 2 says:

    “To provide an analogy, it is impossible to predict the time and location of the next traffic accident in a city. But there will be one somewhere, so it makes sense to have an ambulance ready. ”

    But what if “traffic accidents” are themselves a theoretical event that has never been observed and it is the makers of ambulances that instill fear in the public that they need a thing never before needed?

    I’m just playing devil’s advocate here to put some sensible bounds on these analogies. It is actually a pretty good analogy.

  26. Michael 2 says:

    Florence nee Fedup asked “Are a few coal mines worth saving, to keep us in last century technology.”

    I believe coal mines are intended to keep us in *coal* (for its various purposes; steel making and electricity generation come to mind). Eventually this nation will figure out a better way to make electricity but coal is likely to be needed for steel making for as long as anyone makes steel.

  27. JasonB says:

    Michael 2:

    This is true when you treat the entire Earth as a system and apply thermodynamic principles to the system AS A WHOLE.

    True. Known as simple 1-dimensional Energy Budget Models, these already give us the upper and lower bounds for climate sensitivity, and have done for decades. It was on this basis that the 1979 Charney Report concluded “if carbon dioxide continues to increase, we find no reason to doubt that climate changes will result and no reason to believe that these changes will be negligible.” “Skeptics” lose sight of the fact that all the work since then has been about trying to understand the climate in more detail to see if there is such a reason; if the models can’t be trusted, then we still have no reason to believe that the changes will be negligible.

    Such a model isn't even trying to inspect periodic phenomena or regional phenomena.

    Correct. Although this directly contradicts your earlier comment that “Of course the models were intended to predict such things.”

    True. If I do not hold a model in my hands (ie, have access to it), then I can only speculate what it actually does, and then I can challenge my speculation. Sort of like playing solitaire.

    But since YOU don’t have the model either, what shall I call your faith in those models that you have not personally evaluated?

    Just because you haven’t bothered personally downloading the source code and input data for the GCMs that are freely available online, why do you assume that nobody else has?

    I look forward to your personal evaluation of GISS GCM ModelE to start with.

    But that takes too much computation. A compromise is grid cells. Not a very good compromise as it lacks the perfection of computing everything or the simplicity of treating the Earth as a single system.

    Err… Just how do you think a physical simulation of the Earth would be implemented without breaking up space into discrete chunks? Even if every cell was a Planck length cubed, it would still be the same thing, just on a different scale. (Hell, for all we know, the "real world" is just such a simulation.)

  28. JasonB says:

    Patrick:

    I find it interesting that this paper is claiming the most credible predictions made by climate models are the ones furthest in the future. This defies common sense. The further in the future you try to predict, the more uncertainty should increase. All manner of things could change in the climate system in 100 years time. […] Okay, common science sense then. I am an electrical engineer familiar with complex dynamical systems, so I have a different common sense to most.

    OK, so what you’re saying is that if you have a noisy electric circuit with feedback, the voltage that may be present on that circuit at some point in the future cannot be bounded?

    I quite like the man walking the dog analogy. When I take my dog for a walk, he gets very excited and darts back and forth sniffing everything that catches his fancy. But his range of movement is bounded by the length of the lead, plus my arm, plus a bit extra when he catches me off guard and pulls me off-balance.

    So while I'm walking along the footpath, I cannot predict where he will be 20 metres along the path. But I know where I will be, and I know his range of movement, so I can certainly bound his range of motion.

    My dog’s like weather, I’m like climate. An observer, watching my dog but somehow unable to see me, could quite easily see the path I’m taking and even take a good stab at predicting where I will be (and hence the range of motion for my dog) at some point in the future simply by taking a moving average of my dog’s location and projecting that into the future. My location minutes into the future is far more predictable than my dog’s location mere seconds into the future.

    Weather modelling is like trying to predict the dog’s location a short period of time into the future. It’s hard because you need to know exactly where he is right now and predict what will catch his attention.

    Climate modelling is like trying to predict my location a much longer time into the future. It’s difficult to do it at very short timescales because you can’t see me — you can only see my dog, and you need to average a lot of samples of his position to work out where I am. But if you can figure out why I’m taking the path I’m taking by looking at the path I’ve taken so far and working out the kinds of things that would cause me to change course, then it’s actually not hard to have a rough idea (cf. Energy Balance Models) and, with a lot more effort, narrow down the range some more (cf. GCMs).

    While the range of possible outcomes is still pretty large, it’s important to realise that no matter which model you use, that range does not include “just like today”, or even “only a little bit worse than today”. You don’t need a model to accurately predict exactly what injuries you will sustain by driving into a brick wall to have enough information to decide that braking would be a good idea.
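    
    To make the analogy concrete, here's a rough numerical sketch of it (a bounded wander around a steady trend; every number is made up for illustration and none of this comes from a real model):
    
        import random
    
        random.seed(0)
    
        steps = 200
        owner = [0.02 * t for t in range(steps)]              # steady path ("climate")
        dog = [x + random.uniform(-1.0, 1.0) for x in owner]  # bounded wandering ("weather")
    
        # A moving average of the dog's positions recovers the owner's path,
        # even though any single position is essentially unpredictable.
        window = 30
        smoothed = [sum(dog[max(0, i - window + 1):i + 1]) /
                    len(dog[max(0, i - window + 1):i + 1]) for i in range(steps)]
    
        print("dog now:", round(dog[-1], 2),
              "owner now:", round(owner[-1], 2),
              "smoothed estimate:", round(smoothed[-1], 2))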

  29. Eli Rabett says:

    ATTP: Yes, that’s exactly the kind of thing that they do get right.

    Your reply is the problem. That GCMs get the circulation right is a VERY big thing that provides confidence in their usefulness, something to be celebrated as a great achievement. As pointed out above you can get climate sensitivity from one dimensional models, you get circulation from GCMs, and you get sea level from basic equilibrium thermo. Those things are good enough to base policy on.

    The issue with regional modeling has always been the join to global scales at the boundaries, and it is not clear that that problem will be solved soon although the best places to look for progress are areas bounding the sea that are isolated by circulation from their continents (Did somebunny say California), where the topography can be simplified (yep).

    As to progress in computational climatography and weather forecasting Eli has great hopes for GPUs, which are as substantial a step forward as the Beowulfs.

  30. Eli,
    Yes, my response was a little glib (that’ll teach me to try and respond to most comments 🙂 ). You make a good point; if all that climate models did was give us estimates of climate sensitivity, then they would simply be an extra test for something we get via other means. That they can get something like the global circulation right (and I remember being impressed by this when I watched Steve Easterbrook’s TEDx talk) does indicate that they’re more than simply providing a more complicated way of determining something that we essentially already know.

    As pointed out above you can get climate sensitivity from one dimensional models, you get circulation from GCMs, and you get sea level from basic equilibrium thermo. Those things are good enough to base policy on.

    Yes, I agree.

  31. Andrew Dodds says:

    Michael 2 –

    As a thought experiment, imagine that we are able to duplicate the entire solar system down to the quantum level. Furthermore, imagine we do this 10 times, so we have 10 'model runs'. Each one of which starts out as a duplicate right down past the subatomic level. So our 'model' has the same resolution as reality.

    Even so, given a couple of decades maximum, you would expect significant divergence, because of the way such systems work; tiny differences resulting from basic quantum randomness will progressively escalate (the 'butterfly effect', as it is known). To take an example: tiny differences in the timings of solar particles interacting with the atmosphere will cause small changes in cloud formation; this will cause very slightly different circulation within the cloud, which will then slightly change the track of the cloud, and so on, until your hurricane has a track differing by hundreds of miles.

    So even this modelling approach – which surely fulfills all your criteria – will not give accurate predictions decades out. You will NOT be able to measure one of these planets 20 years hence and know that the same measurement on one of the others will give the same result. Of course, the average of the ensemble will be a very good estimate of the average of the ‘real’ Earth.

  32. Andrew,
    Yes, I like the Solar System analogy (I may have used it myself in the past). As you say, it's virtually impossible to run a simulation that would reproduce the Solar System and yet we would never regard such a situation as suggesting that something had been falsified. It's really just an illustration that even if the underlying physics is well-understood, the complexities of certain systems mean that we really can't reproduce them exactly, but we can still use simulations to understand such systems.

  34. I find it interesting that this paper is claiming the most credible predictions made by climate models are the ones furthest in the future. This defies common sense. The further in the future you try to predict, the more uncertainty should increase. All manner of things could change in the climate system in 100 years time.

    Indeed it does. And indeed further into the future uncertainty should increase.

    Actually, model predictions for the near future are less uncertain than for the far future. The problem is that these predictions for the near future just aren’t useful. For example, using models, I can predict that 2015 will be between -0.5 and +0.5 degrees of 2014. But anyone can do that, a quick look at temperature records will tell you that this is highly likely.

    Now look at the further future: under certain assumptions, models predict that 2095 will be between +2.5 and +4.5 degrees of 2014. Although the uncertainty is larger than for the near future, this prediction is actually (potentially) useful: there is no way to get this result by looking at just the temperature records of the past.

    When evaluating a model, you only want to look at the results that cannot be obtained by some simple procedure. So only the statement about the far future counts as an actual prediction. For a statement about the near future to count as a prediction, it must be more specific. The current models cannot provide this specificity, so these statements are less credible.

    Note: I made up the specific numbers in this post, but I don’t think using the actual numbers would change much.

  35. Raymond Arritt says:

    Interesting discussion.

    Perhaps a better example than a coin flip is loaded dice.

    With fair dice, we know the odds: the most common roll will be a 7, the least common will be 2 and 12, and so on. We don't know what the next roll will be, or even the average of a few rolls, but we know what the averages should be if we roll the dice a number of times. We'll take this as the current climate.

    Now let’s load the dice. The normal way of loading a die doesn’t guarantee what every roll will be because that would be too obvious; instead, it changes the odds. (Don’t ask how I know these things…) If we were to keep track of the rolls we’d eventually figure out that something has changed about the dice because the odds are off.

    Conversely, if we know that the dice are loaded, we can get a good idea of how the odds of the various rolls (7, 2, 12 and so on) should change. We could write a mathematical model of this by knowing how the physical characteristics of the dice had been altered. Our model wouldn't tell us the next roll, or even the average of a small number of rolls. But it will give us a pretty good idea how the averages will change over a fairly large number of rolls.

    What we’re doing when we add CO2 to the atmosphere is loading the dice. We can observe the climate over a long time (large number of rolls) and get an idea of how things are changing. We can also model what should happen, because we have a good idea of what loading the dice will do based on physical principles of the greenhouse effect developed over the past two centuries. Our model doesn’t tell us what the next year (roll of the dice) will be. It doesn’t even tell us much for certain about how things will change over a short period like the so-called “hiatus” that some are interested in (i.e., a small number of rolls). But we have a good general idea of how the statistics are changing.

    Like all analogies this is imperfect but maybe it will be helpful for understanding.
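    
    For anyone who wants to play with the idea, here's a minimal sketch (the "loading" weights are invented purely for illustration):
    
        import random
    
        random.seed(1)
    
        FACES = [1, 2, 3, 4, 5, 6]
    
        def roll_pair(loaded=False):
            """Sum of two dice; the loaded dice are biased towards sixes."""
            weights = [1, 1, 1, 1, 1, 3] if loaded else [1] * 6
            return sum(random.choices(FACES, weights, k=2))
    
        # A handful of rolls can't tell the fair and loaded dice apart;
        # a large number of rolls makes the shift in the average obvious.
        for n in (10, 100, 100_000):
            fair = sum(roll_pair() for _ in range(n)) / n
            loaded = sum(roll_pair(loaded=True) for _ in range(n)) / n
            print(n, round(fair, 2), round(loaded, 2))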

  36. Bouke,
    Your comment here is interesting,

    Now look at the further future: under certain assumptions, models predict that 2095 will be between +2.5 and +4.5 degrees of 2014. Although the uncertainty is larger than for the near future, this prediction is actually (potentially) useful: there is no way to get this result by looking at just the temperature records of the past.

    Something that I noticed in the Hargreaves & Annan paper was a comment that for decadal predictions, empirical models do better than full climate models. Fine, but part of me just goes “so what”. An empirical model may well be useful if all you want to do is make some kind of decision about the near future (and I would argue that the near future isn’t really the issue) but it tells you nothing about what’s actually happening. You also don’t know if it will continue to do better. We can show that empirical models outperform GCMs for decadal predictions using past data, but the next decade may be completely different. That’s not to say the empirical models have no value, simply that them doing better at something than GCMs isn’t – in itself – all that relevant.

    Raymond,
    Yes, that’s a good analogy that I’ve seen used before.

  37. Raymond Arritt says:

    “Since physics is supposed to be predictable, in theory you could develop a model that mimics the Earth precisely.”

    Your premise is faulty (ref Heisenberg, Poincare, Lorenz, and many others).

  38. Florence nee Fedup says:

    Have not noticed steel mills on every street corner. How many in the whole country? Not sure there are not cleaner technologies available. I know there have been great gains in the technology used in the production of aluminium. Factories cheaper to build. Cleaner and cheaper technology. I see your comment as little more than a diversion.

  39. hvw says:

    ATTP,
    “That’s not to say the empirical models have no value, simply that them doing better at something than GCMs isn’t – in itself – all that relevant.”

    It appears that for some people it is a special point in the development of our knowledge when a dynamical model starts performing better than an empirical baseline, that is, when it starts to have skill; in other words, when our understanding of the physics starts actually improving predictions.

    I can relate to that but don’t know exactly why. Perhaps because at that point science becomes useful, if you regard the construction of an empirical baseline prediction as pre-scientific statistics exercise.
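    
    In case it helps, the usual way this gets quantified is a skill score against the baseline; here's a minimal sketch with made-up numbers, just to show the bookkeeping:
    
        def skill_score(obs, model, baseline):
            """Mean-squared-error skill score: 1 - MSE(model) / MSE(baseline).
            Positive means the dynamical model improves on the empirical baseline."""
            def mse(pred):
                return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)
            return 1.0 - mse(model) / mse(baseline)
    
        # Made-up illustrative numbers:
        obs = [0.1, 0.3, 0.2, 0.5]
        model = [0.2, 0.2, 0.3, 0.4]
        baseline = [0.3, 0.3, 0.3, 0.3]  # e.g. a climatological average
        print(skill_score(obs, model, baseline))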

  40. John Hartz says:

    ATTP: Given their statements, certain of the critics of GCMs posting comments on this thread seem to lack a basic understanding of what the Earth’s climate system is and is not. To those who fall into this category, I say, go back and do your homework. Read the latest IPCC technical reports for starters and be sure to review the glossaries of terms and acronyms.

  41. John Hartz says:

    ATTP: I encourage you to buff up your OP and publish it on Skeptical Science as a guest post. Your "new and improved" version could address the issues raised by commenters on this thread and incorporate more references to basic information about GCMs. I am willing to help you out on this.

  42. dhogaza says:

    Michael 2:

    “I suppose it starts with being convinced of range shifts of tens of thousands … at all, for any reason. In the unlikely event you succeed there, I’ll just say, “End of LIA”. Next…”

    It is rare to run across someone who so publicly boasts about one's own ignorance. Just as the source for at least one GCM is available (so your claim that you can't evaluate models because they're not available for inspection is false), so too is the data publicly available regarding the shift towards higher latitudes of a wide range of organisms. So are papers which soundly establish that the earth won't enter a runaway greenhouse state a la Venus, so there's no excuse for your ignorance in regard to this, either.

    So, your motivation for continued ignorance?

    “Well then, don’t wait. You make today’s decisions for you, and I will make today’s decisions for me. Science has neither schedule, deadline or urgency.

    The number of people that would like to decide for me what I am going to do today is legion and starts with my own family, my boss, my church, the friendly neighborhood used car dealer, movie theaters and so on ad infinitum.”

    Ideology.

    The worshipping of continued ignorance in order to avoid confronting the limitations of one’s ideology has a long and not particularly honorable history. Galileo, for instance. “Jewish science” in Germany in the 1930s, for another.

  43. John Hartz says:

    I believe that the discussion of climate and ocean circulation models contained in the following article is particularly informative and directly relevant to the issues raised in the OP.

    Apparent pause in global warming blamed on ‘lousy’ data by Stuart Clark, The Guardian, June 13, 2014

  44. AnOilMan says:

    I think this presentation covers the subject rather well:

    Modelers are all independent of one another; they all share and review each other's code, and competitively try to produce more accurate results.

    No software is done this way in the commercial world. This would be like Apple, Microsoft, and Adobe sharing source code as a means of producing a better product.

    The other thing is that details and refinements from scientific research are fed into the development teams. This is useful in that it helps prove or disprove current research.

    The only comparable product on earth was the space shuttle, and NASA had three different flight control systems built using utterly different design methodologies. The idea being that the same bug would not be in each different program. The flight system then uses a majority vote on all decisions.

    Some years ago I was looking at a hardware code-testing system. It measured whether lines of code/branches were actually executed. Using military-grade testing, the Canadian air traffic control system had 20% code coverage. After using the hardware testing system, and adjusting test procedures, they hit a healthy 60% code coverage. What they really needed to do was share code with other air traffic systems and review each other's code.

    Michael 2: All you had to do was look. Here’s where you can find Climate Model Source Code for you to hold in your hands;
    http://www.easterbrook.ca/steve/2009/06/getting-the-source-code-for-climate-models/

  45. John H.,
    I believe Victor Venema was not particularly complimentary about that article. I think (although he could be wrong) he saw it as one group trying to claim that their data was better and more reliable than other available data. That's not to say that the basics of what it was presenting don't have merit, though.

  46. John Hartz says:

    Until reading the Guardian article that I cited above, I had never come across anything about Essential Climate Variables (ECVs) that have been identified by climate modelers. Does anyone know where a comprehensive listing of ECVs can be found?

  47. Nobodyknows says:

    "Despite all climate models producing similar magnitudes of water vapor feedback [Randall et al., 2007], the simulated water vapor variabilities have large discrepancies with observations [e.g. Pierce et al., 2006], and large spreads in the relation of water vapor with sea surface temperature (SST) and/or clouds [Su et al., 2006a]. The uncertainties in convective parameterizations and cloud microphysics in climate models lead to uncertainties in the accuracies of simulations of water vapor and clouds and corresponding uncertainties in climate predictions."

  48. For those who are interested, I think Nobodyknows’ quote is from this paper.

  49. dhogaza says:

    Nobodyknows seems to have stumbled across one reason why there’s such a large uncertainty range (2x factor) in the computation of the equilibrium response to a doubling of CO2.

    Why he or she thinks this is important is a mystery …

  50. dhogaza says:

    The paper Nobodyknows selectively cites says, among much more:

    “The uncertainties in convective parameterizations and cloud microphysics in climate models lead to uncertainties in the accuracies of simulations of water vapor and clouds and corresponding uncertainties in climate predictions. Chapter 8 of the IPCC 2007 report [Randall et al., 2007] concludes that, “cloud feedbacks remain the largest source of uncertainty in climate sensitivity estimates.” Improving the accuracy of cloud and water vapor simulations by climate models is thus of critical importance [e.g. Cess et al., 1996; Soden and Held, 2006; Bony et al., 2006; Waliser et al., 2009].”

    Yes. This is known. The paper is mostly asking “how much have models improved in this regard since 2007?” and the answer is “not much”, which is unsurprising given that the range given for equilibrium sensitivity is essentially the same in the latest IPCC report and the previous one.

    Nobodyknows: did you have a point? I'm guessing everyone here knows that accurately modeling cloud feedbacks is the largest challenge facing modelers' attempts to compute the equilibrium climate response to increased CO2.

  51. John Hartz says:

    My professional career was spent in the transportation sector. Although I am by no means a “model-head”, I do/did have a basic understanding of how transportation forecasting models were constructed and applied. In those days, I would state:

    “Models cannot replicate the real world. Rather, they can only simulate it.”

    Is this statement valid and, if so, can it be applied to GCMs?

  52. John Hartz says:

    ATTP: Given the topic, I predict that this thread will set a new record for the number of comments posted on a given article published to date on this website.

  53. AnOilMan says:

    John Hartz: And many industries depend on the quality of simulation, although they tend to be short-term predictors. Near-term wind and solar supply prediction is more accurate than electrical grid demand prediction.

    Years ago there was a competition between the climate modelers about the regional warming effects from Saddam Hussein lighting the Kuwait oil fields. This was an interesting event in that it's effectively the first man-made climate change experiment. If I recall correctly, a year later the results were about 1/3 of the models spot on, 1/3 a bit squiffy, and 1/3 bat-ass wrong.

    Does anyone else remember this?

  54. BBD says:

    Pretend the models are the *only* source of our scientific understanding of climate.

    Attack the models.

    Claim that “climate science” is ill-founded and uncertain.

    BAU profits.

    * * *

    It’s a false framing. Intellectual dishonesty. Unscientific. Not the way things really are.

    A tired, tedious, worn thin contrarian meme.

    #yesbutpaleoclimate

  55. Michael 2 says:

    Thanks to AnOilMan for the link to model source codes. It appears that links to the three freely available models are broken and the site is from 2009, ancient history in this realm, but I am and was delighted to have a starting point.

    Evidently most of the models are written in FORTRAN which I suspect (and hope) has evolved since I last used it in the 1960's. Back then it wasn't even remotely modularized, making it very difficult to read code and understand what it is doing. The French code is freely viewable but, of course, largely in French.

  56. BBD says:

    Michael 2

    See above.

    Your concerns about the models are as valuable as those raised by David Young. They do not address paleoclimate variability.

  57. Michael 2 says:

    dhogaza says: “It is rare to run across someone who so publicly boasts about one’s ignorance.”

    Thank you. I am reminded of a line from Avatar: “It is hard to fill a cup that is already full”. I would much rather be thought of as a person whose cup can still be filled. There is no shame in it.

  58. Windchaser says:

    Evidently most of the models are written in FORTRAN which I suspect (and hope) has evolved since I last used it in the 1960′s.

    Considerably. I've never even seen any code older than F77 (Fortran 1977), which was still considerably worse than F90 (Fortran 1990), which finally included modules and derived types. And Fortran 2003 utilizes type-bound procedures, so the polymorphic abilities of the language have been greatly improved, while still abstracting most of the memory issues away from the user, making it easier and faster for scientists' use than, say, C++.

    It’s still a very good coding language, particularly for scientific use. Comparable to Matlab, I’d say.

  59. Good grief, Fortran’s better than Matlab 🙂

  60. BBD says:

    Incidentally, Michael 2, a second commenter apart from me has now remarked on your evidently weak grasp of physical climatology.

    How are you going to ‘evaluate’ a climate model if you don’t understand the basics of physical climatology?

    From where I sit, you are making big, but utterly implausible claims about your skill set and abilities.

  61. AnOilMan says:

    Michael 2: Language doesn't make code bad. Not by a long shot. Code is good if and only if it's reviewed and tested. I suspect that FORTRAN would be popular because the code is as old as the science, and any programmer worth his salt would rather use code that is good than write new code and hope they didn't add bugs.

  62. Nobodyknows says:

    Atmospheric feedback is more than water vapor feedback. Water vapor feedback is more than cloud feedback. What is of interest is whether models can simulate temperatures in the higher atmosphere, and then say something about changes in the TOA imbalance. How can we understand the earth energy budget better? Can we know more by using over 20 different models to answer the same question?

  63. Nobodyknows,
    I think what you’re highlighting is the reason why what is presented is ensemble averages. What they illustrate is the range of uncertainty. It’s not quite the same as a statistical uncertainty (i.e., running the same model many times with small random changes) but the range that they present is an illustration of how the uncertainty about things like feedbacks influences the projections (and, yes, they’re projections, not predictions).

  64. BBD says:

    To be honest, Nobodyknows, I think you are confused. Why not watch that Easterbrook video clip AOM linked above?

  65. AnOilMan says:

    BBD: 'cause in the games these guys play, it's all about spreading FUD: fear, uncertainty, doubt. Often it's by people who do not understand what they are talking about.

  66. John Hartz says:

    Nobodyknows: Given your need to know absolutely everything before you proceed to take action, I assume that you never drive a car because you cannot know with 100% certainty that you will not get into a crash that causes you serious bodily harm and/or death.

  67. John Hartz says:

    The reason I predict that this comment thread will set a record is that 97% of the folk inhabiting Deniersville believe that they know something about climate models that no one else has ever thought of.

  68. AnOilMan says:

    John Hartz: A few weeks ago I was watching Dirty Harry, when the neighbor's car spontaneously exploded. (true story)

    I’m sure there’s a stat for that, but I’m equally sure that the science on this couldn’t possibly be resolved. After all new cars are coming out all the time, and technology for cars is changing all the time.

    Interestingly, cars are full of computers, for which you are not allowed to see the source code, and which we must assume are safe.
    http://www.nytimes.com/2010/02/05/technology/05electronics.html?_r=0

  69. Michael 2 says:

    BBD says: “How are you going to ‘evaluate’ a climate model if you don’t understand the basics of physical climatology?”

    You have it backwards. I will use the model to build my understanding of physical climatology. I can read code and understand what it is doing more easily than I can "grok" a bunch of integrals.

    I have absolutely no illusion or delusion that I would spot an error that no one else has ever seen, in a language I haven’t used since the 1960’s.

    But I think more importantly it is a test of character. Your willingness to share the program reveals your pride in your creation and faith that it has been done well.

  70. Michael 2 says:

    Too funny!

    John Hartz says: “…believe that they know something about climate models that no one else has ever thought of.”

    Me: “I have absolutely no illusion or delusion that I would spot an error”

    Evidently I am not a proper denier. My cup is not full.

  71. Michael 2,
    I think, though, that what BBD is pointing out is that climate models are built on the basis of physical climatology. You may learn something by working through a code (although I would doubt it) but you would learn more if you learned the physics associated with our climate first, before delving into the complexities of a climate model.

  72. AnOilMan says:

    You need to grok the integrals.

    There is no substitute for knowing the integrals. One does not write software, then guess the formula. Period.

  73. Michael 2 says:

    Excellent analogy, Raymond Arritt, who wrote “Perhaps a better example than a coin flip is loaded dice.” I will keep your comment in my library. A chaotic system is highly susceptible to “loading” the dice.

  74. John Hartz says:

    Michael 2: Your refusal to research the physics and the maths of the Earth’s climate system tells me that you are not to be taken seriously.

  75. John Hartz says:

    If someone wants to learn the physics, chemistry, and maths of the Earth's climate system, The Science of Doom website is a good place to visit.

    http://scienceofdoom.com/.

  76. Michael 2 says:

    All — sorry for so many comments but there have been some really good ones today, including a few questions to me that I want to answer. I've saved half a dozen explanations of climate vs weather, my favorite being the wandering dog analogy, although even it assumes that the dog's owner is himself not also wandering and is thus predictable.

    Jason writes: “Just because you haven’t bothered personally downloading the source code and input data for the GCMs that are freely available online, why do you assume that nobody else has?”

    GCMs are not freely available online (I'll download one the moment I can actually find one that is indeed freely downloadable), and it might be in a language I can "grok", but my point is more to the initiating cell data. How important that is I do not know so it is more of a point of argumentation.

  77. dhogaza says:

    Michael 2:

    “Evidently most of the models are written in FORTRAN which I suspect (and hope) has evolved since I last used it in the 1960′s. Back then it wasn’t even remotely modularized”

    Subroutines and functions were introduced into FORTRAN in 1958. Perhaps you should’ve kept your compiler up-to-date …

  78. dhogaza says:

    Michael 2:

    ” I would much rather be thought of as a person whose cup can still be filled. There is no shame in it.”

    The shame comes from your unwillingness to fill it.

    For instance, you complain that the links in the reference given to you regarding GCM sources are broken, as the page is old.

    It took me 5 seconds in google to find the GISS Model E home page:

    http://www.giss.nasa.gov/tools/modelE/

    Finding the source from that page will be left as an exercise …

  79. dhogaza says:

    Apparently only the AR4 version of Model E is available in the source browser thus far, though the AR5 version is forthcoming.

    Michael 2:

    “my point is more to the initiating cell data”

    Model E home page:

    "Boundary and initial conditions for the AR4 version can be downloaded from fixed.tar.gz (191 MB). This is a large amount of data due to things like transient 3-D aerosol concentrations etc. A wider selection of input data (encompassing many different configurations, but mainly for a more up-to-date codebase) are available here. There are more variants of this data available internally, so if you do not find the configuration you'd like, let us know and we may be able to help you."

    You were saying ???

  80. Eli Rabett says:

    Really amusing. It would please M2 no end to hear that COBOL is still being used by banks.

  81. dhogaza says:

    Michael 2:

    Actually snapshots of the code have been made available (tarball form) for Model E Version 2 as used to calculate results for AR5. They simply haven’t frozen a version and set it up with the source browser …

  82. John Mashey says:

    1) As always, George Box: A;l models are wrong, some are useful.

    2) Some people have little or no experience with computer-based models, but a subset of them, often without the science or programming background (what's this f90 stuff? why isn't it Java?), seem sure that climate models are useless.

    3) More subtle is the behavior of folks who have a lot of experience with some class of models and/or software, but overgeneralize to others for some reason or other. As an example, see this discussion of common ways that people in some disciplines fall into this. Also see Gavin Schmidt’s FAQ on climate models, as a complement to Steve Easterbrook’s nice talk.

  83. John Mashey says:

    Argh, that was “All models are wrong..”

  84. BBD says:

    Michael 2

    Your attitude gives the lie to your claim of merely seeking after knowledge. Seekers after knowledge don’t say things like this:

    In the unlikely event you succeed there [demonstrating poleward range shifts in numerous species in response to C20th warming], I’ll just say, “End of LIA”. Next…

    And when the wrongness of this statement was brought to your attention, you did not ask for an explanation as to why it was incorrect.

    You are not seeking after knowledge. So enough of this “my cup isn’t full” fake humility stuff. It’s an insult to the intelligence and good will of other commenters.

  85. John Hartz says:

    Another excellent resource for anyone who wants to learn about GCMs is the NCAR/UCAR website. I recommend starting with the webpage, About CESM.

    CESM = Community Earth System Model

  86. Windchaser says:

    Subroutines and functions were introduced into FORTRAN in 1958. Perhaps you should’ve kept your compiler up-to-date …

    Sure, but modules weren’t introduced until Fortran 90. Modules let you package variables, parameters, and procedures together, making the code significantly easier to understand. And it lets you set privacy options for variables, and provides type checking for functions, and forms the basis of Fortran OOP.

    Anyways. /threadhijack.

  87. John Hartz says:

    Another website that is loaded with quality information about GCMs is that of NOAA's Geophysical Fluid Dynamics Laboratory (GFDL). I recommend starting with the webpage, Earth System Models

    BTW, I cannot help but wonder if David Young has ever visited the GFDL website. I'll pose this question to him on the comment thread to "Climate Cultists"

  88. Michael 2 says:

    Jason B wrote “Just how do you think a physical simulation of the Earth would be implemented without breaking up space into discrete chunks?”

    Just brainstorming here — object oriented. Different kinds of objects would have different physical dimensions and could overlap other objects spatially. One such object could be a convection cell or heat plume rising. The earth would be gridded somewhat, but instead of little trapezoids it would flow with biome boundaries and you would specify a density of certain heat transporting phenomenon (dust devils for instance) and not worry so much about these things crossing boundaries.

    In this sense it would resemble somewhat CGI earth-creating visualizations and animations, with autonomous behavior of these objects.

  89. dhogaza says:

    Windchaser:

    “Sure, but modules weren’t introduced until Fortran 90. Modules let you package variables, parameters, and procedures together, making the code significantly easier to understand.”

    Note that Michael 2 said “remotely modularized”. Subroutines and functions allow for modularization into subprograms. The rest boils down to syntactic sugar. Important, but semantically equivalent to the simpler subroutine form of modularization (which is all that the instruction sets of the vast majority of processor designs have ever made available to the compiler writer – which would be me, BTW). Separate compilation made possible ad-hoc packaging of logical pieces of code that gave many of the benefits of sweeter (syntactic *sugar*, remember) solutions to the problem of the organization of very large programs.

    With no subroutines or an equivalent mechanism for code reuse, modularization isn’t possible, and an important semantic feature of (most) processor designs is locked away, unavailable to the programmer using the language.

    So it is entirely correct to point out that FORTRAN II in 1958 introduced modularization into the language.

    Michael 2 is wrong about everything (TM).

    Don’t forget it 🙂 🙂

  90. AnOilMan says:

    John Mashey: Java? Crazy talk! Given that they need a super computer to execute code written in FORTRAN, using Java would imply they need 20 super computers. (100 for Python.) Of course rewriting the code would also add a lot of bugs, and remove desired features.

    BBD: Without using the dreaded ‘T’ word, the “nasty effect” is all about tone, and it’s intentional;
    http://www.desmog.ca/2013/03/05/incivility-trolls-and-nasty-effect

  91. Michael 2 says:

    Many thanks to John Hartz. I asked and received. Code and data and Fortran, oh my!
    http://www.mom-ocean.org/web/docs/project/user_guide

  92. AnOilMan says:

    Michael 2: OO design comes with a horrendous performance hit. That stuff about garbage collection is a serious issue. On top of that, rewriting code makes it unproven and adds a lot of bugs. The only upside is that the number of new bugs you’d write is about 1/5 the number created when using a lower-level language like C. OO concepts were exceedingly new when the first models were put into use.

  93. Michael 2 says:

    BBD says: “Seekers after knowledge don’t say things like this…”

    Ah, the No Truth Seeker fallacy 🙂

    I write what needs to be written to elicit desired responses, and it is calibrated somewhat for the person to whom I am responding and to the question.

  94. John Hartz says:

    Michael 2: I presume you know that “MOM” stands for Modular Ocean Model. By itself, MOM is not a GCM.

  95. BBD says:

    Michael 2

    That’s just an ocean model you’ve got there. You’ll be needing rather more than that for an AOGCM (the clue is in the name).

  96. BBD says:

    Michael 2

    [Mod: Yep, you guessed right] This comment will be moderated, but not, I hope, before you read it.

  97. BBD says:

    John

    We crossed there.

  98. BBD says:

    Michael 2

    I write what needs to be written to elicit desired responses, and it is calibrated somewhat for the person to whom I am responding and to the question.

    You pretend to knowledge you do not possess and you are demonstrably not trying to improve your understanding of the basics of physical climatology.

  99. Michael 2 says:

    BBD says: “you did not ask for an explanation as to why it was incorrect.”

    I am trying to be sensitive to ATTP’s desire to stay on topic and not dominate this or any other thread.

    John Hartz: I cited that page to show my appreciation to your work here by demonstrating that I had followed your link.

  100. Steve Bloom says:

    Michael 2 is using a version of the “god of the gaps” argumentation made famous by anti-evolution ideologues. Don’t like those results/projections? Find the largest area of uncertainty, focus on that and claim that it refutes the result/projection itself. Easy!

    Speaking of model uncertainty, something I’ve thought for a while is that it would be much more effective with the public (including policy makers) to state it in terms of temp +/- time rather than the more usual time +/- temp. The latter has the drawback of making it easy to imagine that maybe we only need to worry about the bottom of the range, whereas a better focus for concern would be the inevitability of the upper part of the range. What I’m suggesting is already done somewhat, and is typical with specific consequences, e.g. Arctic sea ice loss, but mostly what’s heard is time +/- temp.

    I think there’s also an implication of the usual time +/- temp formulation that the stated time (commonly 2100) represents the end of our proper concern. Something like “on our present course, we will reach +3C sometime between 2070 and 2130” presents less room for that.
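
    Just to make the arithmetic of that framing concrete, here is a toy sketch (the warming rates and dates below are made-up illustrative numbers, not from any assessment, chosen only so the output lands near the 2070 to 2130 bracket above):

        ! Convert "temperature at a fixed date" thinking into
        ! "date at which a fixed temperature is reached".
        program when_do_we_hit_3c
          implicit none
          real, parameter    :: warming_now = 1.0    ! assumed warming to date (C)
          real, parameter    :: target_c    = 3.0    ! threshold of interest (C)
          real, parameter    :: rate_fast   = 0.036  ! assumed fast rate (C per year)
          real, parameter    :: rate_slow   = 0.017  ! assumed slow rate (C per year)
          integer, parameter :: this_year   = 2014

          print *, 'earliest year:', this_year + nint((target_c - warming_now) / rate_fast)
          print *, 'latest year:  ', this_year + nint((target_c - warming_now) / rate_slow)
        end program when_do_we_hit_3c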

    But maybe I’m missing some obvious reason for sticking with time +/- temp.

  101. BBD says:

    Michael 2

    I am trying to be sensitive to ATTP’s desire to stay on topic and not dominate this or any other thread.

    You fool nobody here.

  102. Steve Bloom wrote:

    But maybe I’m missing some obvious reason for sticking with time +/- temp.

    I don’t really see any gain in scientific information from transforming the statistics in the way you propose. It would only serve to make things easier for the part of the public, or the politicians, who are too lazy to use their brains. Not doing so is also one less step of post-processing the output from the model simulations, ergo, less work for overworked climate modelers who already deal with very large amounts of data.

  103. Kdk33 says:

    Too funny.

    Can’t predict over any timescale we could use for validation. Can’t predict anything spatially. All it can say is “warmer” and “at about the rate we’ve always said”, which is what it was told to say in the first place.

    In other words, a complete boondoggle. No predictive skill whatsoever, but perfectly non-falsifiable.

    It’s worse than I thought.

  104. Michael 2 says:

    dhogaza says: “It took me 5 seconds in google to find the GISS Model E home page”

    Chuckle. You illustrate my point with BBD. Until your boast, I didn’t know such a thing existed. Now I too can find it. Thank you.

  105. diessoli says:

    Here’s a ‘simple’ GCM written specifically to help students understand how models are built.
    It’s fairly simple to compile and run, written in F90, open source, and fairly well documented:
    http://www.mi.uni-hamburg.de/Planet-Simul.216.0.html?&L=3

    D.

  106. mrdodge says:

    @Michael 2,

    Unless you are really up on the maths and the physics then reading GISS model E, CESM and other publicly available models is likely to be quite challenging as you are not likely to understand what the intent of a particular piece of code is or understand the various parameterisations etc etc.

    I tried this myself and gave up pretty quickly. A better way to approach the goal is to get a grounding in the field – Pierrehumbert’s “Principles of Planetary Climate” is a great start, with a companion website with lots of simplified python models for various aspects. You can also try Henderson-Sellers’ Climate Modelling Primer, which, while somewhat dated, lays out the basics of climate modelling and comes with a pretty complete set of samples in VB. You might also want to consider EdGCM http://edgcm.columbia.edu/download-edgcm/, which is an old GCM adapted for educational purposes and packages up the data handling aspects for you so you don’t drown in techno-crap issues. There is a 30 day free trial.

    I would also second the suggestion to go to science of doom. Check out the series on Visualising Atmospheric Radiation for instance, which gives a simplified matlab model but more importantly a detailed discussion of what it is doing and why.

    Hope that helps!

  107. Michael 2 says:

    mrdodge “Unless you are really up on the maths and the physics then reading GISS model E, CESM and other publicly available models is likely to be quite challenging”

    I have no doubt. It’s been 10 years since my most recent class in calculus so I could easily be overwhelmed. Thank you for the book recommendations.

  108. Michael 2 says:

    BBD: “This comment will be moderated, but not, I hope, before you read it.”

    Interestingly, I have never known for sure how this works but yes, evidently I receive all messages prior to moderation. I have received a ton of useful links and book suggestions today so I bid y’all farewell until we meet again.

  109. mrdodge says:

    @diessoli

    Hey – plasim looks great. thanks for the link!

  110. Michael 2 says:

    John Hartz says: “Your refusal to research the physics and the maths of the Earth’s climate system tells me that you are not to be taken seriously.”

    I am not here to be taken seriously. YOU are here to be taken seriously. Between work, public service and family raising I have little time for this research. Besides which, what is it for? I am a citizen and I wish to be informed on topics of considerable importance to this nation and world. It is just good citizenship. Whether my math is up to the task remains to be seen. If not, hallelujah, a reason to increase my skills.

  111. Kdk33,

    which is what it was told to say in the first place.

    Ooohhh, a conspiracy, is it?

  112. JasonB says:

    Michael 2:

    I’ve saved half a dozen explanations of climate vs weather, my favorite being the wandering dog analogy although even it assumes that the dog’s owner is himself not also wandering and is thus predictable.

    Yes, to be a good analogy, we have to assume the dog’s owner only changes course for a reason, just like the climate (which is why your earlier “recovery from the LIA” explanation for rising temperatures is really just a fancy way of saying “magic”).

    GCM’s are not freely available online (I’ll download one the moment I can actually find one that is indeed freely downloadable), and it might be in a language I can “grok”, but my point is more to the initiating cell data. How important that is I do not know so it is more of a point of argumentation.

    I’d like to point out that when you wrote that response, I had already provided you with the link to GISS’s ModelE which, as dhogaza later pointed out, freely provides both the source code (both the version used for AR5 and the latest version under development — follow the “snapshots” link) and the initialisation data.

    I suppose your next point will be that they need to provide you with a programmer to explain it all for you and a computer to run it on.

    Here’s a tip: you might find it easier to understand the workings of the GCMs if, in addition to learning the background physics, you actually read the papers that the various groups publish explaining how they work. That is likely to be far more illuminating than staring at source code.

    Jason B wrote “Just how do you think a physical simulation of the Earth would be implemented without breaking up space into discrete chunks?”

    Just brainstorming here — object oriented. Different kinds of objects would have different physical dimensions and could overlap other objects spatially. One such object could be a convection cell or heat plume rising. The earth would be gridded somewhat, but instead of little trapezoids it would flow with biome boundaries and you would specify a density of certain heat transporting phenomenon (dust devils for instance) and not worry so much about these things crossing boundaries.

    Just what difference do you think it would make if you were to break the world up into cells and, for each cell, record what’s in it and how it interacts with its neighbours, vs break the world up into entities of some sort and, for each entity, store which cell(s) it is in and how it interacts with other entities?

    (BTW, even in your case space will still be broken up into discrete chunks, because precision is always finite; additionally, computational limitations will mean defining entities/objects at a scale many orders of magnitude larger than atoms, which means cells will potentially contain many entities…)
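
    To put it in code terms, a toy sketch of my own (invented names, nothing to do with how any actual GCM is written):

        ! Even an "object"-oriented view ends up tied to discrete chunks of space:
        ! each entity (say, a heat plume) has to record which cell it occupies
        ! before it can interact with anything else.
        program entities_on_a_grid
          implicit none

          type :: plume
            integer :: cell    ! which discrete cell the plume sits in
            real    :: heat    ! how much heat it carries
          end type plume

          integer, parameter :: ncells = 100
          real :: column_heat(ncells)
          type(plume) :: plumes(3)
          integer :: i

          column_heat = 0.0
          plumes = (/ plume(10, 1.5), plume(42, 0.7), plume(42, 0.3) /)  ! two share a cell

          do i = 1, size(plumes)
            ! to exchange heat with neighbours, map each entity back onto the grid
            column_heat(plumes(i)%cell) = column_heat(plumes(i)%cell) + plumes(i)%heat
          end do

          print *, 'heat in cell 42:', column_heat(42)
        end program entities_on_a_grid

    Same bookkeeping, different clothes.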

    There’s no such thing as a free lunch.

    You might also want to ask how fluid dynamics problems are tackled in general, and not just in the climate modelling world. It’s unlikely that your brainstorming is suddenly going to overturn decades of experience in people who’ve dedicated their lives to tackling this problem.

  113. JasonB says:

    I wrote:

    I’d like to point out that when you wrote that response, I had already provided you with the link to GISS’s ModelE which, as dhogaza later pointed out, freely provides both the source code (both the version used for AR5 and the latest version under development — follow the “snapshots” link) and the initialisation data.

    I’d also like to add that I provided the link in the very next sentence, directly below the bit you quoted in your reply telling me that “GCM’s are not freely available online”.

    Michael 2:

    I am not here to be taken seriously. YOU are here to be taken seriously. Between work, public service and family raising I have little time for this research.

    Then how can you conclude that you can safely ignore what others are telling you? You certainly seem to have a lot of time to share your ill-informed views with others.

  114. BBD says:

    Michael 2

    I have received a ton of useful links and book suggestions today so I bid y’all farewell until we meet again.

    And you are demonstrably not trying to improve your understanding of the basics of physical climatology. When we meet again, why don’t you ask why your claim that C20th warming was “recovery” from the LIA is nonsense?

    You might learn something useful. But you don’t want to learn anything useful because the process would puncture your denialist bubble. Hence the notable lack of genuine intellectual curiosity. As I said, you fool nobody here.

  115. Andrew Dodds says:

    Michael 2 –

    Regarding your approach to modelling, I think you have it backwards. If you have a good model, then large scale features like heat plumes should emerge without specific coding. Having discrete overlapping objects may give you just a few issues with mass conservation…

    The holy grail in any modelling of physical systems is purity: you start with just the starting parameters (i.e. the current state of the planet), apply only the laws of physics to this starting state, and therefore reach a new state that has all the dynamic features of reality.

    This is hard. Really quite hard.
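
    In toy form, the “pure” approach is just: state in, physics applied, new state out, plus a check that the scheme conserves what it should (which is exactly the sort of thing free-floating overlapping objects make awkward). A sketch of my own with made-up numbers; a real GCM is this idea times a few hundred thousand lines:

        program pure_stepping
          implicit none
          integer, parameter :: n = 50
          real :: q(n)          ! the "state": heat content of each cell
          real :: flux(0:n)     ! flux across each cell boundary (closed ends)
          integer :: i, step

          q = 1.0
          q(10:20) = 2.0        ! initial condition: a warm anomaly
          print *, 'initial total:', sum(q)

          do step = 1, 200
            flux = 0.0
            do i = 1, n - 1
              flux(i) = 0.1 * (q(i) - q(i+1))      ! downgradient transport
            end do
            do i = 1, n
              q(i) = q(i) - (flux(i) - flux(i-1))  ! what leaves one cell enters the next
            end do
          end do

          print *, 'final total:  ', sum(q)        ! unchanged, by construction
        end program pure_stepping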

    And the more you try to simplify things by adding features that you ‘know should be there’, the more you devalue the model. For example, if I force my model to have an average of 10 hurricanes per year in the Atlantic basin, then I can’t say anything about hurricanes by looking at my model.

    Re Eli above – so if you can get a model to circulate the atmosphere in the same general pattern as reality, then you’ve already done a tremendous job.

    But to be honest, if you want a picture of the future under global warming and you don’t trust computer models, you are better off looking at paleoclimatology. Specifically, look up the Pliocene climate.

  116. BBD says:

    Andrew Dodds

    But to be honest, if you want a picture of the future under global warming and you don’t trust computer models, you are better off looking at paleoclimatology. Specifically, look up the Pliocene climate.

    Quite. I’ve been trying to point this out rather a lot recently. Oddly, those desperately concerned about “the models” totally – and I have to conclude deliberately – blank me.

    Known paleoclimate variability demonstrates that the climate system is moderately sensitive to radiative perturbation. “The models” are not the primary source for this knowledge. And we don’t need “the models” to show us that the last time GAT was ~2C warmer it was a different world (MPWP).

  117. John Hartz says:

    All things considered, Michael 2’s visit did generate lots of quality information about GCMs and related matters. I have personally benefited from it. Thanks to everyone who contributed.

  118. John H.,
    Indeed, we should always bear that in mind. Also, I think Michael should give some thought to BBD’s point about the LIA. Understanding why implying that “we’re just recovering from the LIA” doesn’t make physical sense would be of benefit.

  119. John Hartz says:

    ATTP: Looks like you just conceived your next OP.

  120. John Hartz says:

    Michael 2’s visit prompts me to ask…

    Is Dodgeball a subset of Climateball, or is it the other way around?

  121. I think playing Dodgeball is allowed under the rules of Climateball.

  122. dhogaza says:

    Michael 2:

    “Chuckle. You illustrate my point with BBD. Until your boast, I didn’t know such a thing existed.”

    No, you claimed it did NOT exist. That’s very different.

    Changing your story, as you so often do here, adds to the general notion that nothing you say can be taken seriously.

  123. dhogaza says:

    Michael 2:

    Specifially, you claimed “GCM’s are not freely available online”.

    Which is very different than your modified statement, “I didn’t know such a thing existed”.

  124. dhogaza says:

    Kdk33:

    “All it can say is “warmer” and “at about the rate we’ve always said” … No predictive skill whatsoever”

    I doubt you see the contradiction in your words, but I do and I bet others do, too …

    Thanks for agreeing, though, that what the models appear to robustly tell us is that denialist arguments that the equilibrium sensitivity to a doubling of CO2 is <= 1C are very, very wrong.

    (not that we need models to tell us that, as BBD frequently reminds us.)

  125. dhogaza says:

    JasonB:

    “Here’s a tip: you might find it easier to understand the workings of the GCMs if, in addition to learning the background physics, you actually read the papers that the various groups publish explaining how they work. That is likely to be far more illuminating than staring at source code.”

    Yes, which is one reason I always point to the Model E home page before providing a direct link to source snapshots. The Model E home page includes references to some background papers, and a bunch of documentation, all of which should probably be read before attempting to understand the source (not to mention boning up on how modeling of this sort, in general, works, which one doesn’t expect to be documented by the GISS team).

  126. AnOilMan says:

    John Hartz: Thank goodness you’re finally getting in the spirit of things. I’ve recently learned a few new argument techniques…

    Nit Galloping: not even making a useful statement, but filling the air with nit-picking to make it look like there is something wrong.

    Teflon Galloping: nothing substantial seems to stick, and new subjects are brought up, often combined with Nit Galloping.

    IMO M2 is pretty much pretending to be interested while simultaneously demonstrating an inability to use Google, or even a rudimentary grasp of the material. The result is a full thread of ‘discussion’ with nothing actually being discussed. The other version of this technique is to show up, claim to be concerned… offer to look at information provided… show up a day later and claim, “I looked at it, but (repeat original concern)”.

    Note that at no point does this require any knowledge to tie up a thread like this. I point to the fact that there is never any particular piece of evidence or document offered. It’s like they just don’t like ‘the whole vibe of it’…

  127. AnOilMan says:

    dhogaza: Polymorphic Teflon Gallop?

  128. John Hartz says:

    An OilMan: Another tactic employed by Michael 2 and his ilk is to pick out one or two opponents and ‘butter them up’, so to speak. This gives the appearance of being reasonable and responsive.
    Smoke and mirrors. Smoke and mirrors.

  129. BBD says:

    John

    As I said, he fools nobody. Least of all me. And among the insincerities I like least is the attempt to cloak denialist argument in fake reasonableness.

  130. AnOilMan says:

    I don’t think you two understand.

    They are not trying to talk to us. They are trying to fill comment forums to make it appear as though there is some sort of controversial discussion going on. There isn’t. Either that or they are incredibly incredibly dim.

    http://thechive.com/2011/05/04/next-time-you-listen-to-a-debate-keep-these-words-in-mind-video/

    “I’m not after you, I’m after them.”
    –> Nick Naylor

  131. BBD says:

    It’s risky to speculate about motivation, OilMan. No evidence, no case, remember?

  132. AnOilMan says:

    That leaves “incredibly incredibly dim”… 🙂 And I see no evidence of that. They are almost always articulate.

  133. BBD says:

    It’s possible to be articulate without being a paid shill or even having any agenda beyond pushing one’s personal ideology at the expense of science.

    Like you, I know there are paid shills out there, but I lack the means to identify them, so I try not to speculate much on this topic.

  134. nobodyknows says:

    Another quiz: Who wrote this? “even areas of substantial agreement among models may not imply more confidence that projections are correct, as common errors or deficiencies in model parameterizations may provide false confidence in the robustness of future projections.”

  135. BBD says:

    Yes.

    But.

    Paleoclimate.

    It’s really not hard to understand. Paleoclimate behaviour demonstrates that the climate system is moderately sensitive to radiative perturbation.

    We do not need models to inform us of this.

    So these incessant concerns about “the models” are redundant, beside the point, irrelevant.

    I do hope this is now clear, nobodyknows. I’d hate to have to repeat it yet again.

    People might get irritated and impatient.

  136. BBD says:

    And I really don’t appreciate the spamming of stuff from the Idso misinformation machine.

    Please recollect the head post: regional projections are not robust compared to global projections.

    Maloney et al. (2014) North American climate in CMIP5 experiments: Part III: Assessment of Twenty-First-Century Projections concerns itself with about 2% of global surface area, IIRC.

    Perhaps you haven’t bothered to read the headpost? It’s never too late. Well, almost never.


  137. Michael 2 says:
    June 15, 2014 at 4:40 pm

    The models SHOULD be able to zero in on any cell on any date in the future and tell you exactly what will be happening THAT DAY.

    If you cannot do that then you have not modeled the climate.

    The climate phenomena that have quasi-oscillatory behavior such as ENSO and QBO are likely more deterministic than most realize. We are getting there in understanding how to isolate the determinism.

    Take QBO, the quasi-biennial oscillation of stratospheric winds. What is responsible for the ~28 month period? Is this discerned by running a GCM variant, or can we simply calculate it based on lunar forcing cycles:
    http://contextearth.com/2014/06/17/the-qbom/

  138. John Mashey says:

    It is worth checking AR5, Ch. 12, in which much attention is paid to categorizing regions where models mostly agree on significant effects.
    The statement “regional model projections are not reliable, in general” is not the same as “models show various levels of agreement on various metrics in various regions.”

    For instance, although I haven’t checked with great care, I think it is fairly robust that some regions will get more rainfall and some less, with the uncertainty in the middle.
    For instance, I think most models say the US SouthWest will get drier, i.e., including Los Angeles.
    I think most models say the Pacific NorthWest gets wetter.
    Obviously, going from South to North, there’s an area in the middle where it is uncertain, e.g., in the SF Bay Area, models disagree.
    I expect the models all agree that in a warming climate, whatever precipitation falls in the Sierras will have a higher percentage of rain than snow, compared to historical values.
    I think such regional projections are already good enough to be useful.

    As another example, although I don’t know this one as well, TX is big enough to have quite differing conditions. I think the Western part is expected to get drier, but that becomes unclear in East Texas, which I understand tends to be meteorologically complex.

    On some of the graphs, I noticed that the North Atlantic seems not well predicted, unsurprisingly.

  139. John Hartz says:

    John Mashey: Are you sure that GCMs are capable of distinguishing between rain and snow?

  140. Steve Bloom says:

    They wouldn’t need to for the Sierras, John. As temps go up, so does the snow line.
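
    For what it’s worth, the simplest way a land/hydrology scheme turns that into rain vs snow (as far as I understand it; the 1 C threshold below is just an illustrative number, not any particular model’s value) is a near-surface temperature test, which is why a warming climate automatically raises the snow line:

        program rain_or_snow
          implicit none
          real, parameter :: t_crit = 1.0            ! illustrative threshold (deg C)
          real :: t_air(3), precip, rain, snow
          integer :: i

          t_air  = (/ -5.0, 0.5, 4.0 /)              ! sample near-surface temperatures
          precip = 10.0                              ! mm falling in some interval

          do i = 1, 3
            if (t_air(i) > t_crit) then
              rain = precip
              snow = 0.0
            else
              rain = 0.0
              snow = precip
            end if
            print *, 'T =', t_air(i), ' rain =', rain, ' snow =', snow
          end do
        end program rain_or_snow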

  141. AnOilMan says:

    John Hartz: I think they track precipitation, and snow is considered precipitation. So an increased snow pack is a sign of climate change.

    Here’s Canada.
    http://www.ec.gc.ca/adsc-cmda/default.asp?lang=En&n=8C7AB86B-1

    Interesting… Western Canada had a 100 year flood last year, and it was a low precipitation year for Canada.

  142. hvw says:

    John Mashey,

    I think it is quite challenging to get a good idea about what the models say, and with what certainty, even if you have AR5 at hand, which is pretty much the best imaginable effort at communicating this info. “Is getting wetter” mostly depends on “when?”, “under which scenario?”, and “in which season?”.

    I agree that GCMs might be able to say something robust on the spatial scales you refer to. With regard to the regions you mention one could look at the “Atlas-Annex”: http://www.climatechange2013.org/images/report/WG1AR5_AnnexI_FINAL.pdf
    pages 20-27

    In my interpretation, that looks less clear cut than what you write …

    “I think such regional projections are already good enough to be useful.”
    It would be lovely to hear about concrete adaptation measures that are informed by such projections.

  143. Steve Bloom says:

    “It would be lovely to hear about concrete adaptation measures that are informed by such projections.”

    In California? Well, it’s hard to tell since right now we’re dealing with a drought that’s likely caused by anthropogenic and natural factors, and where paleoclimatic history shows that purely natural droughts can get very bad indeed. What can be said is that the consensus view that we will have more frequent and worse droughts moving into the future gets essentially no pushback, and this seems to have translated into a much broader acceptance of the general scientific consensus on climate change (and support for policy) than exists in most other places in the U.S. (Which fact BTW is probably very instructive relative to the discussion about the process by which the public accepts the scientific consensus and the need for action.) It’s probably still a little early to see how that awareness translates into long-term planning. One thing that’s getting a lot of attention right now is increased water use by the recent (prior to this drought) shift to more water-intensive (and profitable) tree crops in the Central Valley, and the attendant heightened impact on groundwater. The outcome of that focus will be instructive.

  144. John Mashey says:

    Water management in CA is a huge effort, with myriads of programs, see CA SWRB.
    By coincidence I’m sitting in a Silicon Valley Energy Summit @ Stanford, in a session on drought management. The SWRB speaker just finished.
    Expected warming and lessening snowpack, as per the models, is assumed in plans, efficiency measures, where reservoirs might be built, canals, etc.

    Likewise, expected sea level rise is baked into town planning around the SF Bay Area. I went to an all-day symposium for local governments ~2008.

  145. verytallguy says:

    JC under a post of the same title:

    For equilibrium climate sensitivity, at a very likely level, I would put the range between 0.5-4C. Very likely confidence level implies 10% chance that the true sensitivity lies outside this range (I make no assumptions about shape of the distribution). Note my very likely range is shifted 0.5C below the IPCC’s likely range. Note, this has evolved since my previous statement on this, I think i previously said 0-10C at the very likely level. That said, I am not sure how useful the concept of equilibrium climate sensitivity is.

    In terms of a prediction for 2100: I would put the range 0-2C, likely confidence level

    Notes:
    1) She has the IPCC range wrong
    2) the 2100 figure and her sensitivity are inconsistent
    3) 0.5 is batshit crazy

  146. Pingback: Another Week in the Ecological Crisis, June 22, 2014 – A Few Things Ill Considered
