I wrote a post about model tuning that discussed a paper arguing for more transparency in how climate models are tuned. Gavin Schmidt and colleagues have now published a paper that discusses the Practice and philosophy of climate model tuning across six US modeling centers. The paper is a bit long, but it's well written and easy to read, so I'd encourage you to read it (if interested) and I'll try not to say too much here.
Probably a key point is why you need to tune these models in the first place. They're certainly based on basic physics, but they're sufficiently complex that you can't model everything from anything close to first principles. This means that some processes are parametrised and, in some cases, the parameters are not well constrained. You therefore tune these parameters so that the model matches some pre-defined emergent constraints.
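To make this concrete, here's a minimal, purely illustrative sketch of what tuning a single parameter against a single emergent target could look like. The toy_model function, the cloud_param name, and all the numbers in it are invented for illustration; real tuning involves many parameters, many diagnostics, and expensive model runs, not a single optimiser call.

```python
# Illustrative sketch only: a toy "model" with one uncertain parameter
# is adjusted so that a chosen emergent diagnostic matches a target.
import numpy as np
from scipy.optimize import minimize_scalar

# Target top-of-atmosphere imbalance (W m^-2), e.g. a balanced
# pre-industrial control state.
TARGET_IMBALANCE = 0.0

def toy_model(cloud_param: float) -> float:
    """Stand-in for an expensive GCM run: maps one uncertain
    parametrisation constant to a simulated TOA imbalance (W m^-2).
    The relationship below is invented, not a real parametrisation."""
    return 2.5 * (cloud_param - 0.6) + 0.1 * np.sin(5.0 * cloud_param)

def mismatch(cloud_param: float) -> float:
    """Squared distance between the model diagnostic and the target."""
    return (toy_model(cloud_param) - TARGET_IMBALANCE) ** 2

# Tune the parameter within a physically plausible range.
result = minimize_scalar(mismatch, bounds=(0.0, 1.0), method="bounded")
print(f"tuned parameter: {result.x:.3f}, "
      f"resulting imbalance: {toy_model(result.x):+.3f} W m^-2")
```

The point is just the structure: the parameter isn't set from first principles, it's adjusted until a chosen diagnostic hits its target.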
A common claim, however, is that they're then tuned to either match the 20th century warming or to produce specific climate sensitivities. Neither of these, though, is amongst the emergent constraints used for model tuning. As the paper says:
None of the models described here use the temperature trend over the historical period directly as a tuning target, nor are any of the models tuned to set climate sensitivity to some preexisting assumption.
Most of them do, however, tune for a radiative imbalance, either during pre-industrial (PI) times or at the present day (PD), or for the aerosol forcing or the aerosol indirect effect. A summary of the tuning criteria in the six different US models is shown in the table below.
However, analysis of the CMIP3 ensemble (Kiehl, 2007; Knutti, 2008) suggested that there may have been some kind of implicit tuning related to aerosol forcing and climate sensitivity among a subset of models, with higher-sensitivity models tending to have stronger (more negative) aerosol forcing.
The correlation is, however, rather weak, and the relationship is less evident in CMIP5.
Since starting this post, I've also noticed that James has a post in which he suggests that even though groups don't re-run their models and tune parameters until they get a good fit to the 20th century, some have certainly made adjustments or updates when they knew the fit was poor.
I guess the basic message is that this is complicated: although there certainly isn't any explicit tuning to the 20th century trend or to some specific climate sensitivity, subjective choices and expert judgement can still have an impact on these emergent properties. Having said that, what they explicitly tune to (in many cases, a radiative imbalance) seems quite reasonable to me, since this is a key factor indicating the net amount of energy being accrued by the system.
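To see why the imbalance is such a natural target, it may help to write down the standard linearised global energy budget (this is textbook energy-balance bookkeeping, not something taken from the paper):

$$ N = F - \lambda \, \Delta T $$

where \(N\) is the top-of-atmosphere radiative imbalance (the net rate at which the system gains energy), \(F\) is the radiative forcing, \(\lambda\) is the climate feedback parameter, and \(\Delta T\) is the global mean surface warming. In a pre-industrial control run, \(F \approx 0\) and \(\Delta T \approx 0\), so tuning for a small \(N\) is essentially asking the model to conserve energy in its equilibrium state.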
The paper ends with what seems like quite a sensible suggestion:
we recommend that all future model description papers … include a list of tuned-for targets and monitored diagnostics and describe clearly their use of historical trends and imbalances in the development process.
As I said at the beginning, if you want to know any more, it’s probably best to read the paper (another link below).
Tuning to the global mean temperature record, by Isaac Held.
Practice and philosophy of climate model tuning across six US modeling centers, by James Annan (blog post discussing the paper).
Practice and philosophy of climate model tuning across six US modeling centers, by Schmidt et al.