Came across a very interesting paper on the art and science of climate model tuning. Based on the comments here, it appears that some are interpreting this as confirming their claims that climate models are tuned to give preferred results. However, I think it is a good deal subtler than that. As the abstract says
Tuning is an essential aspect of climate modeling with its own scientific issues, which is probably not advertised enough outside the community of model developers.
and the paper
concludes with a series of recommendations to make the process of climate model tuning more transparent.
Basically, model tuning is crucial and inevitable, but it would be much improved if the process were more transparent. I don’t want to go into too much detail, because the paper is actually quite readable and I’d encourage those who are interested to read it for themselves.
What I will say, however, is that tuning is a key part of climate modelling; the system is too complex to model all aspects from first principles. The fundamental physics is well understood, but some processes have to be represented by sub-grid models, or parametrisations. The parameters in these are typically constrained in some way (for example, by physical calculations, or by observations), but some are much better constrained than others. The goal of tuning is then to adjust the less-well-constrained parameters so as to minimise some measure of the difference between the model output and selected observations or theoretical expectations (there is a toy sketch of this kind of minimisation after the quote below). Although there are a number of different observations/theories that could be used for tuning, something I had not realised is that there is a
dominant shared target for coupled climate models: the climate system should reach a mean equilibrium temperature close to observations when energy received from the sun is close to its real value (340 W/m2).
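To make the idea of tuning-as-minimisation concrete, here is a minimal sketch using a toy zero-dimensional energy-balance model with a one-layer grey atmosphere. None of this is from the paper; the model, the choice of emissivity as the free parameter, and all of the numbers are illustrative assumptions. The “tuning” is simply a one-dimensional minimisation of the mismatch between the modelled and observed global-mean temperature, given roughly 340 W/m² of incoming solar radiation.

```python
# Toy sketch: "tuning" one poorly constrained parameter (the atmospheric
# emissivity of a one-layer grey atmosphere) so that a zero-dimensional
# energy-balance model reproduces the observed global-mean temperature for
# ~340 W/m^2 of incoming solar radiation. All numbers and the model itself
# are illustrative assumptions, not the paper's method.
from scipy.optimize import minimize_scalar

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S_IN = 340.0       # incoming solar radiation per unit area, W m^-2
ALBEDO = 0.3       # planetary albedo (treated as known here)
T_OBS = 288.0      # observed global-mean surface temperature, K

def equilibrium_temperature(emissivity):
    """Surface temperature of a one-layer grey-atmosphere energy balance."""
    absorbed = (1.0 - ALBEDO) * S_IN
    return (absorbed / ((1.0 - emissivity / 2.0) * SIGMA)) ** 0.25

def cost(emissivity):
    """Squared mismatch between the modelled and observed temperature."""
    return (equilibrium_temperature(emissivity) - T_OBS) ** 2

# The "tuning" step: choose the emissivity, within a physically plausible
# range, that minimises the model-observation mismatch.
result = minimize_scalar(cost, bounds=(0.0, 1.0), method="bounded")
print(f"tuned emissivity ~ {result.x:.2f}, "
      f"model temperature = {equilibrium_temperature(result.x):.1f} K")
```

In a real GCM there are, of course, many parameters adjusted against multiple targets at once, and much of the adjustment is guided by expert judgement rather than a formal optimiser, but the basic logic is the same.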
The bit of the paper that I found most interesting was the section on “Tuning to 20th century warming”. The suggestion is that, even though ECS is an emergent property of the models and the match to the 20th century is typically used to evaluate them, there are indications that some tuning to fit the 20th century is probable. This is largely because it has been noted that high-sensitivity models tend to have smaller total forcing, while low-sensitivity models tend to have larger forcing. Since the simulated warming scales roughly with the product of sensitivity and forcing, this anti-correlation means there is less spread in the simulated historical warming than might otherwise be expected.
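To see why such an anti-correlation narrows the spread, here is a toy ensemble calculation (my own, not from the paper). Warming is crudely approximated as sensitivity × forcing / F_2xCO2, ignoring ocean heat uptake, and the parameter ranges and the strength of the anti-correlation are purely illustrative.

```python
# Toy ensemble (illustrative numbers only): warming approximated as
# sensitivity * forcing / F_2xCO2. Compare the spread of simulated
# 20th-century warming when forcing is independent of sensitivity with the
# case where high-sensitivity models are assigned lower total forcing.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
F_2X = 3.7                                  # forcing for doubled CO2, W m^-2

sens = rng.uniform(2.0, 4.5, n)             # equilibrium sensitivity, K
forcing_indep = rng.uniform(1.5, 2.5, n)    # 20th-century forcing, W m^-2

# Anti-correlated case: higher-sensitivity models get smaller total forcing
# (e.g. via stronger aerosol cooling), lower-sensitivity models get larger.
forcing_anti = 2.0 - 0.3 * (sens - sens.mean()) + rng.normal(0.0, 0.1, n)

warming_indep = sens * forcing_indep / F_2X
warming_anti = sens * forcing_anti / F_2X

print(f"spread (std) with independent forcing:     {warming_indep.std():.2f} K")
print(f"spread (std) with anti-correlated forcing: {warming_anti.std():.2f} K")
```

With these made-up numbers the spread of simulated warming is roughly halved in the anti-correlated case, which is the qualitative point being made.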
The other comment I found interesting was that internal variability alone could produce a variation of about ±0.1 K on centennial timescales. Since we only have observations of a single realisation, the models do not need to match observations to better than ±0.1 K in order to represent our climate well; matching much more closely than that might, in fact, suggest over-tuning. I also think this relates to something I discussed in an earlier post.
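As a rough illustration of that point (again, my own sketch, not a calculation from the paper), one can model unforced annual variability as a red-noise (AR(1)) process and look at how much the 100-year mean differs between realisations; with plausible-looking, but purely illustrative, parameters the spread comes out at around 0.1 K.

```python
# Toy red-noise sketch (illustrative parameters only): how much can the
# 100-year mean temperature vary between realisations purely because of
# internal variability? Annual anomalies follow an AR(1) process.
import numpy as np

rng = np.random.default_rng(0)
n_realisations = 5000
n_years = 100
phi = 0.9     # year-to-year persistence (illustrative)
sigma = 0.25  # standard deviation of annual anomalies, K (illustrative)

anomalies = np.zeros((n_realisations, n_years))
anomalies[:, 0] = rng.normal(0.0, sigma, n_realisations)  # stationary start
innovations = rng.normal(0.0, sigma * np.sqrt(1.0 - phi**2),
                         size=(n_realisations, n_years))
for t in range(1, n_years):
    anomalies[:, t] = phi * anomalies[:, t - 1] + innovations[:, t]

centennial_means = anomalies.mean(axis=1)
# With these (made-up) parameters the spread is roughly 0.1 K, so a model and
# the single observed realisation could differ by that much by chance alone.
print(f"std of 100-year mean anomalies: {centennial_means.std():.2f} K")
```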
Given that there are some indications of tuning to match the historical record, the suggestion is that one could deliberately construct outlier low- and high-sensitivity models and then run these for past climates to see whether some of the more extreme sensitivity values can be ruled out. This seems like a particularly interesting possibility.
Anyway, I’ve ended up saying more than I intended. I think the basic idea of the paper is very good; being more transparent about how models are tuned would be very valuable, as it would not only make clear what is being done, but would also make it clearer what role parameter tuning is playing in the model results. I would, however, certainly be interested in other people’s views on this; in particular, it would be good to get Michael Tobis’s views, as he has commented before on how we could work to improve climate models.