Pat Frank has a guest post on WUWT about breaking the ‘pal review’ glass ceiling in climate modelling. It’s essentially about a paper of his that he has been trying to get published and that has now been rejected 6 times. As you can imagine, this means that there is some kind of massive conspiracy preventing him from publishing his groundbreaking work that would fundamentally damage the underpinnings of climate modelling.
I’m going to briefly try to explain it again (mainly based on a part of Patrick Brown’s video, which I will include again at the end of this post). You could consider a simple climate model as being a combination of incoming and outgoing fluxes. The key ones would be the incoming short-wavelength flux, the outgoing short-wavelength flux (both clear-sky and cloud), the outgoing long-wavelength flux (also both clear-sky and cloud), and a flux into the deep ocean. How the temperature changes will then depend on the net flux and the heat capacity (i.e., how much energy it takes to increase the temperature by some amount). This is illustrated in the equation below.

$$ C \frac{dT}{dt} = F_{\rm SW,in} - F_{\rm SW,clear} - F_{\rm SW,cloud} - F_{\rm LW,clear} - F_{\rm LW,cloud} - F_{\rm ocean} $$
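The balance described above can be sketched as a zero-dimensional energy budget. The function and flux values below are my own illustrative placeholders, not the post’s actual model or numbers:

```python
# Minimal zero-dimensional energy-balance sketch: the temperature tendency
# is the net flux divided by the heat capacity. All values used here are
# made-up placeholders for illustration.

def dT_dt(fluxes, C):
    """Temperature tendency (K s^-1) from the fluxes (W m^-2) and the
    heat capacity C (J m^-2 K^-1).

    fluxes = (F_sw_in, F_sw_clear, F_sw_cloud, F_lw_clear, F_lw_cloud, F_ocean)
    """
    F_sw_in, F_sw_clear, F_sw_cloud, F_lw_clear, F_lw_cloud, F_ocean = fluxes
    net = F_sw_in - F_sw_clear - F_sw_cloud - F_lw_clear - F_lw_cloud - F_ocean
    return net / C

# When the outgoing fluxes exactly balance the incoming flux, the net flux
# is zero and the temperature is constant.
```

The model is in equilibrium precisely when this tendency is zero, which is the situation the later paragraphs rely on.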
So, what has Pat Frank done? He’s considered one of the terms in the above equation (the long-wavelength cloud forcing, $F_{\rm LW,cloud}$) and found that there is a discrepancy between what climate models suggest it should be and what is observed, with some models having quite a large discrepancy (although the multi-model mean is actually quite close to the observations). It turns out that the root-mean-square error between models and observations is about ±4 Wm$^{-2}$. Pat Frank assumes that this error should then be propagated at every time step so as to determine the uncertainty in the temperature projection. This then produces an uncertainty that grows with time, becoming very large within only a few decades.
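To see why that assumption produces growing uncertainty, here is a simplified sketch of step-by-step propagation, treating the ±4 Wm⁻² discrepancy as an independent random error injected afresh at every annual step (a stand-in for the paper’s actual propagation formula, not a reproduction of it):

```python
import math

# If a fixed per-step error sigma_step is treated as independent at every
# step, the accumulated uncertainty grows like a random walk:
#   sigma_N = sigma_step * sqrt(N)
# so the uncertainty keeps growing without bound as the simulation runs.
def accumulated_sigma(sigma_step, n_steps):
    return sigma_step * math.sqrt(n_steps)

# e.g. after 100 annual steps, a ±4 W m^-2 per-step error would have
# accumulated to ±4 * sqrt(100) = ±40 W m^-2.
```

The post’s point, developed below, is that this is the wrong way to treat a discrepancy that does not actually recur independently at every timestep.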
There are a number of ways to explain why this is wrong. One is simply that you should really consider the uncertainties on all of the terms, not just one. A more crucial one, though, is that the error in the long-wavelength cloud forcing is really a base-state error, not a response error. We don’t expect it to vary randomly at every timestep with a standard deviation of 4 Wm$^{-2}$; it is simply that some models have estimates for the long-wavelength cloud forcing that are quite a bit different from what is observed.
So, what is the impact of this potential discrepancy? Consider the equation above, and imagine that all the terms are close to what we would expect from observations. Consider running the model from some initial state and assume that the incoming short-wavelength flux, and the atmospheric composition, are constant. Also, bear in mind that some of the fluxes depend on temperature, $T$. If we run the simulation long enough, we’d expect the system to settle to an equilibrium state in which all the fluxes balance, and in which the temperature is constant (i.e., $dT/dt = 0$).
Now, consider rerunning the simulation, but with a slightly different long-wavelength cloud forcing. Again, if we run it long enough, it will settle to an equilibrium state, in which the fluxes balance, and the temperature is constant. However, since the long-wavelength cloud forcing is different, some of the other fluxes will also be different, and the equilibrium temperature will, consequently, also be different. There will be an offset, compared to the first simulation, but it won’t grow with time simply because one simulation had a different long-wavelength cloud forcing compared to the other.
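This offset behaviour can be illustrated with a toy linear-feedback model, $C\,dT/dt = F - \lambda T$, where $T$ is a temperature anomaly. All parameter values below are made-up placeholders, not taken from any actual climate model:

```python
# Toy linear-feedback model: C dT/dt = F - lam * T. Two runs that differ
# only in the forcing term F settle to different equilibria (T = F / lam),
# but the difference between them is a constant offset, not a growing error.

def run(F, lam=1.2, C=8.0, dt=0.05, n=2000):
    """Integrate to (near) equilibrium and return the final temperature."""
    T = 0.0
    for _ in range(n):
        T += dt * (F - lam * T) / C
    return T

T1 = run(F=1.0)  # baseline forcing
T2 = run(F=1.5)  # same model, with the (cloud) forcing term offset by 0.5
# T2 - T1 approaches the constant 0.5 / lam; it does not grow with time.
```

Run it for twice as long and the two trajectories end up in exactly the same place: the discrepancy shifts the equilibrium, it doesn’t accumulate.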
So, the fact that there is a discrepancy between the modelled long-wavelength cloud forcing and observations does not imply an error that should be propagated at every timestep (as Pat Frank claims). It mainly implies an offset, in the sense that the magnitude of this discrepancy will impact the equilibrium state to which the models tend. Anyway, I’ve said more than I intended. Patrick Brown’s video – which addresses Pat Frank’s error propagation suggestion – is below, and goes into this in much more detail than I’ve done here.