Engineering the software for understanding climate change

Since Judith Curry has a guest post about global climate models and the laws of physics, I thought it would be worth posting this recently released video of a talk about climate modelling (see below). It’s by Steve Easterbrook, who is a Professor in the Department of Computer Science at the University of Toronto.

One problem with blogosphere critiques of climate models is that they often come from those who might have some relevant expertise, but who have never actually run a climate model, or spent any significant amount of time talking with those who have. They also tend to base their critiques on things they’ve read on the internet and – as a result – assume climate modellers do not understand some pretty fundamental things, ignoring that online material about climate models, aimed at the general public, will clearly not include the kind of detail that you will find in the scientific literature. Additionally, they assume that climate modelling should be conducted in a manner similar to what they themselves have experienced, ignoring that what works in one field might not work in another.

What Steve Easterbrook did was spend a lot of time at the UK’s Met Office, interacting with – and learning from – those who do run, and develop, climate models. He found that there are reasons why climate modellers might behave differently to software engineers, or those doing computational modelling in industry. Climate models are scientific instruments. The goal is not to design a new climate, or produce some specific product; the goal is to understand our climate, and how it might respond to changes. He also discussed how they continually test the models, and how there are very few defects per thousand lines of code. His argument is that, in such a code, it can be fairly obvious when there is a problem, and therefore you can find, and fix, defects more easily than might be the case in other codes.

I don’t think I need to say much more. The talk is a little long, but it’s certainly worth watching. Steve Easterbrook’s blog is also very good, and has a number of posts about climate models.

References:
The talk is – I think – from 2011, but has only just been released. As Victor Venema points out in the comments on YouTube, the paper on which it is based is here.

This entry was posted in ClimateBall, Global warming, Judith Curry, Research, Science.

49 Responses to Engineering the software for understanding climate change

  1. Willard says:

    Vintage 2016:

    Large, complex codes such as earth system models are in a constant state of development, requiring frequent software quality assurance. The recently developed Community Earth System Model (CESM) Ensemble Consistency Test (CESM-ECT) provides an objective measure of statistical consistency for new CESM simulation runs, which has greatly facilitated error detection and rapid feedback for model users and developers. CESM-ECT determines consistency based on an ensemble of simulations that represent the same earth system model. Its statistical distribution embodies the natural variability of the model. Clearly the composition of the employed ensemble is critical to CESM-ECT’s effectiveness. In this work we examine whether the composition of the CESM-ECT ensemble is adequate for characterizing the variability of a consistent climate. To this end, we introduce minimal code changes into CESM that should pass the CESM-ECT, and we evaluate the composition of the CESM-ECT ensemble in this context. We suggest an improved ensemble composition that better captures the accepted variability induced by code changes, compiler changes, and optimizations, thus more precisely facilitating the detection of errors in the CESM hardware or software stack as well as enabling more in-depth code optimization and the adoption of new technologies.

    http://www.sciencedirect.com/science/article/pii/S1877050916309759
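
    For anyone wondering what such a consistency test involves, here is a minimal Python sketch of the general idea. It is emphatically not the CESM-ECT itself (which uses a PCA-based statistical test over many output variables); the function, the threshold and the synthetic numbers are all illustrative:

        import numpy as np

        def ensemble_consistency(ensemble, new_run, n_sigma=3.0):
            """Crude consistency check: flag any summary variable in a
            new run that falls outside n_sigma of the ensemble spread.
            ensemble: (n_members, n_variables) summary statistics
            new_run:  (n_variables,) statistics for the run under test"""
            mu = ensemble.mean(axis=0)
            sigma = ensemble.std(axis=0, ddof=1)
            return np.abs(new_run - mu) / sigma > n_sigma

        # Hypothetical usage: 30 control runs, 5 summary variables each
        rng = np.random.default_rng(0)
        ensemble = rng.normal(size=(30, 5))
        new_run = ensemble.mean(axis=0) + np.array([0.1, 0.2, -0.1, 5.0, 0.0])
        print(ensemble_consistency(ensemble, new_run))
        # -> [False False False  True False]: only the perturbed variable is flagged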

  2. A more detailed look into the astonishingly low bug density of climate models revealed that the bug density was higher, but still much lower than normal:
    J. Pipitone and S. Easterbrook
    Assessing climate model software quality: a defect density analysis of three models
    http://www.geosci-model-dev.net/5/1009/2012/

  3. Magma says:

    It’s a good talk if you have an hour to spare, but it dates from early 2009 and is based on a study Easterbrook conducted in the summer of 2008 at the Met Office.

    That said, it’s instructive how few (i.e., none) of the skeptics, including the engineers, ever try to examine the nuts and bolts of GCMs, even though several have source code that is either freely accessible or available via a simple agreement. There’s probably a reason Spencer and Christy are so reluctant to release their MSU/AMSU microwave emission –> atmospheric temperature code, namely that it’s probably buggy, poorly written crap. (Personally I suspect Curry couldn’t write code to calculate a running average.)

    Note that Victor Venema has elsewhere mentioned that he’s been contacted by skeptics about data and software for temperature record homogenization, and when he informs them that the code and data are available they go away without actually bothering to ask for them.

  4. Magma says:

    The “astonishingly low bug density” of climate model software is — naturally — just a small cog in the Vast Scientific Conspiracy machine.

  5. ATTP – I found this very interesting and made some notes of points that made an impact on me:

    1. The Earth System models (this goes well beyond only GCMs) are large code bases that have grown linearly over decades, which is to be contrasted with commercial software (e.g. ERP systems like SAP), which tends to grow asymptotically. SE’s hypothesis is intriguing: that because the ‘coders’ are domain experts, they can continue to grow complexity at a linear rate over time.

    2. There is a substantial element of shared code base between the Weather people and the Climate people, and they have contrasting (even conflicting) goals: the former want predictions on a relatively short timeframe, whereas the latter aren’t interested in predictions (shock horror) but in refining their understanding of the science at multiple spatial and temporal scales; they want to enhance the ‘skill’ of the models to enable different ‘experiments’ to be performed (what if this, what if that).

    3. The low error rate is in part due to the complexity of the models, which are inherently conservative. Significant errors will tend to ‘blow up’ some aspect of the model predictions (which are often reviewed via a 2×2 graphical view of the results that modellers can peruse with great skill).

    4. The inherently complex, tightly coupled sub-models (GCM, ocean model, etc.) make it problematic to adopt the customary approach in commercial software of ‘plug and play’. There might even be a rule (hypothesis) that this is fundamentally impossible to achieve. Yet they switched their ocean sub-model to one from a French team who were trying to make their model ‘interoperable’ (a synonym for plug and play, maybe), and there is a tension there with the belief of the Met Office team that this is unrealistic, if not impossible.

    5. There is a final point. The fact that there are so many institutions carrying out similar modelling allows for comparisons between models, and this enables ‘model skill’ and errors to be identified.

    Fascinating stuff.

    I asked on Twitter if SE had made a similar study of IAMs, and apparently not yet – it’s on the to-do list. Yet despite, no doubt, the diligence of the IAM modellers, it is a concern that they are so few in number, without the pressures to deliver commercial outcomes (i.e. the weather predictions of the Met Office, and risk assessments for other clients such as the insurance industry), and without the multiplicity of models for cross-comparison. Yet these IAMs are what policy makers use to decide on the relative risks / costs of action and inaction, be it mitigation or adaptation.

    Strange then that the climate modellers get 99% of the heat, when it appears they are doing such a brilliant job! (by any standards)

  6. Nick Stokes says:

    ” Significant errors will tend to ‘blow up’ some aspect of the model predictions”
    Yes, that is the reason for low bug density. You can’t afford errors – they blow up the whole thing. People sometimes claim that GCMs are rigged to produce some favoured outcome. My response is that you can’t rig CFD. It’s very complex, and the only thing that keeps it on track is correct physics. If you lose that, you’re gone.
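
    To make that concrete, here is a toy Python sketch (my own illustration, not code from any GCM). A first-order upwind advection scheme is stable only while the Courant number c·dt/dx stays at or below 1; an error that pushes it past that limit does not stay hidden – it destroys the solution within a few hundred steps:

        import numpy as np

        def advect_upwind(u0, courant, steps):
            """First-order upwind advection on a periodic domain.
            Stable only for Courant number <= 1; beyond that, errors
            grow exponentially and swamp the solution."""
            u = u0.copy()
            for _ in range(steps):
                u = u - courant * (u - np.roll(u, 1))
            return u

        x = np.linspace(0.0, 1.0, 100, endpoint=False)
        u0 = np.exp(-200.0 * (x - 0.5) ** 2)  # a smooth bump

        print(np.abs(advect_upwind(u0, courant=0.5, steps=400)).max())  # stays bounded
        print(np.abs(advect_upwind(u0, courant=2.0, steps=400)).max())  # astronomically large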

  7. Steven Mosher says:

    “There’s probably a reason Spencer and Christy are so reluctant to release their code on MSU/AMSU microwave emission –> atmospheric temperature code, namely it’s probably buggy, poorly written crap.”

    [Mod: unnecessary] The Code is released as part of the CDR.

    It’s not very pretty (less pretty than RSS) but please, you have no clue about their code and you spoke without having any facts.

    If you want to look at the code, it is posted, [Mod: unnecessary]

  8. Steven, people have asked Spencer and Christy for the code and never got an answer. No wonder many people thought it was not published. Do you know since when the code has been available, and where it was announced that it was?
    http://scienceblogs.com/stoat/2015/04/26/now-we-know-why-uah-v6-is-so-late/

    Did you manage to get it running? It does not look like any quality code I have ever seen, and I thought I had seen some things. Gotta love spaghetti. To quote myself: “Even if they find the same trends as the surface temperatures, this does not sound like code I would base policy on.”

    Is the code of the UAHv6 dataset available? It is used a lot at mitigation sceptical blogs and is quite different from version 5. With no code and no publication, its prominence in a community that claims to find transparency important is intriguing.

  9. Magma says:

    [Chill, please. V&V for climate modulz, please. No more “but Spencer.” Bud Spencer would be OK. -W]

  10. anoilman says:

    I like the bit about the modelers going back and forth with the physicists, even generating new understanding in physics. This is of course one purpose of modeling.

    Models are a tool to understanding, not the basis for it.

    All the best engineers model their stuff before they implement it of course. I bet if you check way way back someone built a model for UAH satellite data processing before spending millions building the satellite, tossing a satellite up there and hoping it works.

  11. Richard E.,
    Thanks, you’ve highlighted some of the other things that I too found interesting.

    AoM,

    Models are a tool to understanding, not the basis for it.

    Indeed.

    Steven,
    I must admit that I hadn’t realised that they had released their code.

  12. Andrew Dodds says:

    Having done both physics based modelling and commercial software..

    The obvious thing is that they are drastically different in terms of inputs; a model will have a sharply restricted set of inputs, generally restricted to a starting set. A commercial application will have continuous inputs and outputs, giving rise to a whole class of defects (‘When I type X, computer goes bloop’) that you can’t get in a model.

    And for a model, there is the nice bit that the outputs are constrained by physics, whereas in more commercial applications the outputs are effectively arbitrary (defined by specifications, often coming in at 2nd or 3rd hand).

    So modelers have it easy; inputs are simple and desired outputs well constrained; there’s just a bit of complexity in the middle (ahem).

  13. I thought the video was well worth watching as well. One of the things that I found interesting was the point that the best people to write the software are the scientists themselves. It is hard to write good quality code if you don’t understand what the code is actually going to do, so if you brought in specialist programmers then the bottleneck would become the discussions where the scientists have to explain the physics required for the particular job at hand to the programmers (I am quite familiar with that problem ;o). Programmers that are (really) good at maths and physics are few and far between, so I think having the scientists write the code and teaching them about software engineering (which appears to be what has happened) is probably the best approach.

    I liked the point about the common misperception about what the modellers actually want to do with their models as well.

  14. Steven Mosher / Willard thanks for the pointer to the code (there is also an algorithmic description, which is of more interest to me than the code itself) and the URL.

  15. anoilman says: “Models are a tool to understanding, not the basis for it.

    Exactly.

    anoilman says: “All the best engineers model their stuff before they implement it of course. I bet if you check way way back someone built a model for UAH satellite data processing before spending millions building the satellite, tossing a satellite up there and hoping it works.

    These microwave satellites were launched to get global patterns of humidity for use in weather prediction. Not for temperature and certainly not for temperature changes to study climate change. Had they wanted to use them for climatology they would at least have made sure the orbits were stable and had taken better care with calibration.

  16. Magma says:

    William Braswell’s contribution to the UAH code base (public release 5.4) is clean, clearly commented Fortran. The same cannot be said about the rest… calling it spaghetti is being kind.

  17. BBD says:

    An objective approach to the validity of the UAH model might be to compare it to RSS, and then to surface temperature products. Should the UAH model diverge somewhat from everything else, that might suggest that it remains a work in progress.

  18. Chubbs says:

    anoilman says: “Models are a tool to understanding, not the basis for it.”

    Yes, the main benefit is answering what-if questions which has allowed steady advancement in climate science despite limited field observations. There are still major limitations in climate models however, perhaps the two biggest are coupling with ice sheets and carbon stores. [Snip. -W]

  19. hvwaldow says:

    I did some review and porting of little env. sci. models. One-scientist desktop stuff, totally different scale. Yet I found a similar phenomenon. Even though written by inexperienced programmers, with disregard for established industry practices thought to be required for good quality code, and with genuinely shitty code quality in terms of readability, style, (anti-)patterns used, etc., that code was remarkably bug-free.

    My hypothesis is that scientists, in the course of using the model for what they are actually paid for, do an extremely thorough code validation as a side-effect. It’s an experimentation device. It’s run through the largest part of the input parameter space. Every facet of the output is looked at very closely. Every part of the output that is not expected or somehow stands out is scrutinized – and as domain experts the scientists have the best-trained eye for that imaginable. And the output you expect, and want to report as a result, is equally triple-checked to make sure you don’t base your next paper on an artifact.

  20. It is important to remember that most scientific code was written purely for the person who wrote it and was not intended to be used by anybody else. Most of it was also intended to be used only for the current project before moving on to something else. There is less need to write understandable, maintainable code that nobody else is likely to read or use after the end of the project. Of course this is changing (if for no other reason than that giving away code is a good way of getting those in your research field to take up your ideas), but until someone invents a time machine, we are stuck with a lot of unreadable, unmaintain{ed,able} code. However, the code needs to work as intended, so it will have been thoroughly tested. All software is built to a budget, and that includes scientific software; if academics are pressured to be productive (papers + grants), what is the incentive for them to produce beautiful code, rather than merely correct code? If society wants pretty code, it needs to pay for it and ensure that the system doesn’t discourage researchers from producing it.

  21. Willard says:

    > It’s an experimentation device.

    I surmise that the community’s quasi-experimental runs validate the code as a side effect of their main objectives of testing some hypothesis.

  22. Magma says:

    @dikranmarsupial The software in question was written as part of a NASA-funded program run by UAH ESSC. Standards should be higher than for ad hoc personal code.

    @hvwaldow The ever-shifting output results of the UAH code between poorly documented versions strongly suggest that “extremely thorough code validation” procedures were not in place.

  23. David Hodge says:

    From curiosity I actually had a look at both the UAH and RSS code bases. I certainly don’t think that the UAH code is a rat’s nest of spaghetti (I can’t comment on the correctness of the code, of course) and I have seen much, much worse Fortran which still managed to produce correct results. I could, with minimal effort, understand what each section of code was meant to be doing (reading files, validating data, normalising observations, computing anomalies, etc.). I think some of the criticism above is a bit too harsh, particularly in light of dikran’s comment above.

  24. BBD says:

    which still managed to produce correct results

    …which brings us to the various problems with UAH, even before the latest beta of the UAH LT product reduced the warming trend still further relative to all other data sets, satellite or terrestrial.

  25. lerpo says:

    Curiously, the latest UAH 6.x has quicker warming prior to 1998 relative to 5.6 and RSS. It’s only after the peak in 1998 that the warming slows in UAH 6.

  26. Angech says:

    Magma
    Code writing is an art or science in itself.
    People who study science do not necessarily study computer programming, nor should they have to.
    That is not to say that one precludes the other.
    Dikran, ATTP, Brandon, VV sound as though they know a fair bit about computer coding.
    Nonetheless I would suspect they normally use programmes designed and coded for their purposes by proper computer programmers to whom they give instructions on what needs to be done to their data.
    Some of whom would know little about the subject matter of their programmes but understand enough of the numerics to make the input work.
    Re UAH, Christy et al. may be computer whizzes or not. Computing skills would certainly have helped in the advancement of their careers.
    They would surely have used programmes developed for the satellites by NASA and the space programme. If relying on their own skills, they must have been amongst the best in the world at what they do, purely to be doing it.
    It is amusing to watch people criticise the use of computer programmes on the one hand when dealing with complex but well-understood problems like satellite drift, and on the other praise climate model programmes, with their self-proclaimed minimal errors, when dealing with far more complex variables than a little orbital drift.
    A contrarian point of view is a great way to freshen up one’s mind, provided it does not set.

  27. BBD says:

    It is amusing to watch people criticise the use of computer programmes on the one hand when dealing with complex but well-understood problems like satellite drift,

    There appear to be problems with the UAH LT product, created and curated by contrarians. It’s amusing to watch contrarians deny this.

  28. Bob Loblaw says:

    Nonetheless I would suspect they normally use programmes designed and coded for their purposes by proper computer programmers to whom they give instructions on what needs to be done to their data.

    As someone who has a PhD in an area of atmospheric science, and many years of coding experience (but little formal training), I would argue that the amount of time it takes to tell a “programmer” what needs to be done is typically far in excess of the amount of time it takes me to get working code myself. Often, the issues that crop up as you test the first snippets of code are not programming issues, but physics issues. The coding issues that need to be solved are usually much simpler problems than the science issues.

    How many professional programmers are going to know the right way(s) to deal with the request “please write some code to give me hourly average wind direction from this sequence of 3600 1-second readings”? I’ll give you the first hint: a reading of 1 degree (1 degree east of north) and a reading of 359 degrees (1 degree west of north) do not average out to south winds (180 degrees). That is a mistake I have personally seen made by a “professional programmer” – and one that I would consider to be an excellent programmer. “Just following specifications.”
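
    For what it’s worth, the standard fix is to average unit vectors rather than the angles themselves. A minimal Python sketch (illustrative only; operational met code would typically also weight by wind speed):

        import numpy as np

        def mean_wind_direction(degrees):
            """Average directions by averaging unit vectors, which
            handles the 0/360 wrap-around correctly."""
            rad = np.deg2rad(np.asarray(degrees, dtype=float))
            return np.rad2deg(np.arctan2(np.sin(rad).mean(),
                                         np.cos(rad).mean())) % 360.0

        print(np.mean([1.0, 359.0]))              # 180.0 -- the naive 'south' answer
        print(mean_wind_direction([1.0, 359.0]))  # ~0 (or ~360) -- correctly north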

  29. angech,

    Not that it really matters for this discussion (in which you’ll note I’m not participating heavily), coding is a hobby that I turned into a marketable skill because of weak or non-existent IT support. Business didn’t know how blind and inefficient they were until I came along. [thumps chest proudly]

    Though I write code in a professional capacity, I don’t consider myself a professional programmer because I lack the formal training. This becomes painfully obvious on the (fortunately) rare occasions that my code matters enough for Real Programmers to take interest in my spaghetti. It’s accurate to the penny (accountants are picky about that sort of thing), fast (I don’t like waiting) and pretty (I live and die by code beautifiers), but it’s … organized … like my brain is, and I’m often the only person who understands how it’s structured. It turns out that my best use case is writing non-production code for one-time data conversion processes. Weird niche, but it pays well.

    Short story long, I have somewhat of a soft-spot for non-professional programmer scientists writing c0dez for Teh Modulz and the like. Their strength is knowing best what the code is supposed to do.

  30. Angech says:

    Bob, no one is perfect. People make mistakes.
    The example you give is of one person making a claimed oversight.
    Now if you had said you had seen many professionals making this sort of oversight you would have a leg to stand on.
    BBD, when did they become contrarians? Before or after they wrote the code?
    Are you suggesting they wrote Contrarian cod?
    My point stands, we both judge the codes by how the outcomes appeal to us. Too funny I hear someone in the background saying.
    Brandon, you write professional code and it works so you are a professional coder. As you say you are the only one who knows the intricacies of your product.
    Climate models programmes are a bit like the computer chip.
    We all use computers. We all think we know how they work but the guys who put it together originally and sequentially are the only ones who truly know.

  31. Angech,

    Too funny I hear someone in the background saying.

    I was just thinking this morning, “Where’s Mosher?” and then I remembered he’s having some surgery.

    you write professional code and it works so you are a professional coder.

    Only in the sense that it executes as required and I get paid for it. Real hackers have been known to say that a kluge is a crock that works. I know of what (or whom) they speak.

    As you say you are the only one who knows the intricacies of your product.

    Which is a process more than anything, and it only (hopefully) has to work once. So in that respect (and many many others) I don’t have that in common with climate model scientist programmers — they’re writing “production” code in a sense that I typically don’t.

  32. Andrew Dodds says:

    Angech –

    Technically, since computers themselves do a lot of the detailed design work on modern CPUs, you could say that no one ‘knows’ exactly how computers work from electron-to-screen. And that’s before we start to use machine learning/AI as a matter of routine.

    But that doesn’t really matter; what matters is that they work in a well characterized way. In a similar vein, I don’t need to know the exact ways in which a climate model works at the code level; but I am very interested to see if it can reproduce the detailed features of the climate. If it can do that then I may place some weight on any predictions it might make. Although I’d be dubious about ever using one as primary evidence.

  33. angech wrote “Brandon, you write professional code and it works so you are a professional coder. ”

    No, if the best you can say of a programmer is that their code works, they are an unprofessional coder. (there is a bit more to it than that! ;o)

  34. Angech wrote “Nonetheless I would suspect they normally use programmes designed and coded for their purposes by proper computer programmers to whom they give instructions on what needs to be done to their data.”

    Well I didn’t write linux, vi or MATLAB, but if you mean the code I use to perform my research (mostly written in MATLAB) then no, I write it myself (I am a “proper computer programmer” – I even teach programming). My experience is similar to Bob’s, it is generally much quicker for me to write the code than it is for me to explain to somebody else how to do it, so Easterbrook’s talk had some resonance for me.

    Perhaps it would be better to stop making assumptions, given that your hit rate is so low.

  35. JonA says:

    >…Programmers that are (really) good at maths and physics are few and far between…

    I’d strongly dispute that. They’re working on computer game engines (or malware), which is to the detriment of climate science if you ask me – perhaps you should consider paying more? :-).

  36. Jon,
    Oh, I can just imagine what would be said if there was a suggestion that we should consider paying climate scientists much more money.

  37. “perhaps you should consider paying more?”

    that would require academia to be better funded, I’m all for that! ;o)

  38. Willard says:

    > I can just imagine what would be said if there was a suggestion that we should consider paying climate scientists much more money.

    [W]: The atmosphere is not the same size as a nuclear power plant, Dan. Teh stoopid modulz are not mission critical. Besides, V&V costs money.

    [Dan Hughes, sidestepping the money issue]: The size of the physical domain does not introduce any limitations relative to fundamental quality.

    [W]: Of course size matters in V&V. Maybe it’s a vocabulary thing.

    [DH]: I do not see that the size of the physical domain is addressed in that paper.

    [W]: It’s right next to where the author admits he’s beating his dead horse with a stick, Dan. The first sentence ought to be enough:

    It is becoming increasingly difficult to deliver quality hardware under the constraints of resource and time to market.

    The bit where IBM admits using verification to find bugs more than to check for the correctness of their hardware may also be of interest. Logic gates are a bit less complex than watery processes and all that jazz.

    That said, it’s not as if modulz were never V&Ved. It still has a cost. It still is quite coarse compared to nuclear stations or motherboards.

    [DH]: [Crickets.]

  39. BBD says:

    BBD, when did they become contrarians? Before or after they wrote the code?

    Yes. You can read up on the history yourself, easily enough. Pay particular attention to the litany of errors with the UAH TLT product – always introducing a spurious cool bias and never corrected until other people pointed out the errors in the methodology.

    Are you suggesting they wrote Contrarian cod?

    I think it is possible that something fishy is going on, yes.

  40. BBD says:

    ‘Yes [before they wrote the code]. ‘

  41. Gavin Schmidt’s TED talk “The emergent patterns of climate change” is imho very fine.

    You can’t understand climate change in pieces, says climate scientist Gavin Schmidt. It’s the whole, or it’s nothing. In this illuminating talk, he explains how he studies the big picture of climate change with mesmerizing models that illustrate the endlessly complex interactions of small-scale environmental events.

    For me, this is the best answer to those who wish to use what they do or don’t know about science to carp at the people who are busy doing and learning at the highest level.

  42. Willard says:

    Did someone mention UAH?

    A bet was lost because of it almost two years ago:

    Late 2009, in the run-up to the international climate conference in Copenhagen, PBL climate researcher Bart Strengers had an online discussion with climate sceptic Hans Labohm on the website of the Dutch news station NOS (in Dutch). This discussion, which was later also published as a PBL report, ended in a wager. Strengers wagered that the mean global temperature over the 2010–2014 period would be higher than the mean over 2000 to 2009. Hans Labohm believed there would be no warming and perhaps even a cooling; for example due to reduced solar activity.

    At the request of Labohm, it was decided to use the UAH satellite temperature data set on the lower troposphere (TLT) (roughly the lowest 5 km of the atmosphere). These data sets are compiled by the University of Alabama in Huntsville. Satellites are used to measure radiation in the atmosphere, after which the temperature of the various layers of the atmosphere is derived using a complex algorithm.

    According to the UAH today, temperatures appear to have been an average 0.1 °C warmer over the past five years than over the 10 years before that. Thus, Strengers has won the wager. The stakes: a good bottle of wine.

    https://ourchangingclimate.wordpress.com/2015/01/23/climate-researcher-bart-strengers-wins-wager-with-climate-sceptic-hans-labohm/
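
    The arithmetic of settling such a wager is trivial, of course. A Python sketch with synthetic numbers (NOT the real UAH record; a made-up trend plus noise, purely to show the comparison):

        import numpy as np

        # Synthetic monthly TLT-like anomalies for 2000-2014 (made up!)
        rng = np.random.default_rng(42)
        months = np.arange(15 * 12)
        anoms = 0.013 * months / 12 + rng.normal(0.0, 0.15, months.size)

        mean_2000s = anoms[:120].mean()   # 2000-2009
        mean_2010s = anoms[120:].mean()   # 2010-2014
        print(f"2010-2014 minus 2000-2009: {mean_2010s - mean_2000s:+.2f} degC")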

  43. Bob Loblaw says:

    Angech wrote:

    Bob, no one is perfect. People make mistakes.
    The example you give is of one person making a claimed oversight.
    Now if you had said you had seen many professionals making this sort of oversight you would have a leg to stand on.

    You either missed the point, or are trying really hard to avoid it. The programmer did not make a mistake: the programmer implemented the software according to specifications (“average wind direction”). To the programmer, the data is just a string of numbers with no physical meaning, and every string of numbers gets averaged exactly the same way. The programmer does not have the background to know that “wind direction” is a special case.

    The difficulty in much scientific software is not the coding. It is developing the specifications. It is not implementing the algorithm, it is knowing what algorithm to choose (or developing a new one). It’s not finding an efficient way to code a solution, it’s understanding the physics well enough to be able to know what problem needs solving. A “professional programmer” in many cases is only going to slow the process down (read “inefficient”) because the person that knows what questions to ask spends more time writing specifications than it takes to write code. I already know exactly what I need the program to do, and I can code it myself. Why spend more time writing pseudo-code or “specifications” for someone else to do it?

    It’s like the old joke about the expert being called in to figure out why the industrial plant has shut itself down. After jotting a few notes on a pad, he tells the manager to adjust something, and everything works again. He sends in an invoice for $10,000. The manager is irate, and demands an itemized invoice. The itemized invoice comes back:

    1. Pencil, $0.25
    2. Pad of paper, $1.75
    3. Knowing what to do with pad and pencil, $9,998.00

    Hiring a stenographer to write on the pad of paper is not an improvement, because it’s a very small part of the problem.

  44. assman says:

    What is amazing to me about Global warming believers is how credulous you are. First, GCMs manage to make progress on an essentially unsolved problem… the solution to the Navier-Stokes equations. Which supposedly you can’t even numerically approximate. But somehow they did it.

    Then magically they manage to produce code with fewer defects per thousand lines than any code we know of and, get this, without using any normal tools from software engineering: no code audits, no professional programmers, no version control, no style guidelines, no testing, no unit testing.

    And you believe this?

  45. Dikran Marsupial says:

    Assman says “And you believe this?” I believe (not without evidence) that someone didn’t watch the video all the way through ;o)

  46. The Very Reverend Jebediah Hypotenuse says:

    What is amazing to me about “assman” is how he still thinks it’s all about teh modulz…

    Let’s see:
    CO2 is now at 400 ppm and increasing.
    CO2 is a greenhouse gas.
    Ice sheets are melting.
    Sea-level is rising.
    Glaciers are melting.
    Global temperature records are being broken.
    Species are retreating to the poles or going extinct.

    But, no.
    That’s not right.
    It’s really all about the output of some magic computer programs that no one has ever checked for content.

  47. Which supposedly you can’t even numerically approximate.

    Given that one can clearly numerically evolve the Navier-Stokes equations, this seems wrong.

    Then magically they manage to produce code with fewer defects per thousand lines than any code we know of and, get this, without using any normal tools from software engineering: no code audits, no professional programmers, no version control, no style guidelines, no testing, no unit testing.

    It’s not magic. As Steve Easterbrook explained, this kind of modelling involves using equations that are based on fundamental conservation laws. Hence, there are tests that can be done to determine if there are problems with the code, which may lead to these codes having fewer defects per thousand lines than other codes.

    And you believe this?

    I’ve seen nothing that makes me doubt it in some major way. This doesn’t, of course, mean that I think climate models are correct, don’t have any problems, and couldn’t be substantially improved.
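
    To give a flavour of the conservation-law point: in a flux-form (finite-volume) scheme, whatever leaves one grid cell enters its neighbour, so the domain total is conserved to round-off, and a one-line invariant check catches a whole class of bugs. A toy Python sketch (my own illustration, not code from any actual climate model):

        import numpy as np

        def flux_step(u, courant):
            """One conservative (flux-form) upwind step on a periodic
            domain: each flux leaves one cell and enters the next, so
            the domain total cannot change except by round-off."""
            flux = courant * u
            return u - flux + np.roll(flux, 1)

        rng = np.random.default_rng(1)
        u = rng.random(64)
        total0 = u.sum()
        for _ in range(1000):
            u = flux_step(u, courant=0.4)

        # The kind of cheap invariant test a modeller can run after any change:
        assert abs(u.sum() - total0) < 1e-9 * total0, "mass not conserved -- bug!"
        print("total conserved to round-off:", abs(u.sum() - total0))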
