To the surprise of no one, Nic Lewis has found many serious problems with the recently published Marvel et al. paper (also discussed here and here). Even less of a surprise is Andrew Montford lapping up Nic’s claims with glee. As an aside, I had intended to have an “Andrew Montford is a ….” post today. Not because I specifically want to encourage name calling, but because Andrew, and his regular commenters, seem to so value their right to say whatever they like on his site, that I thought I might return the favour. It might have to wait for another time, or maybe I won’t bother (the latter, I expect).
So, back to Nic Lewis’s critique of Marvel et al. Let’s be clear, critiquing other studies is an entirely reasonable thing to do. It’s probable that no single study is completely correct, or completely wrong. On the other hand, scientific research is really about gaining understanding, not simply finding things to criticise in other people’s work. Even though some of what Nic Lewis says may be valid, overall it simply comes across as the rather standard pedantic nitpicks that are the hallmark of blog science. Auditing isn’t really part of the standard scientific method.
I think, however, that I’ve got a little ahead of myself, so let’s go back a step. The key issue is that we have a number of methods for estimating climate sensitivity, one of which tends to suggest that climate sensitivity may be lower than most other methods suggest. This method is the observationally-based method that Nic Lewis seems to favour. Marvel et al. was really an attempt to explain this discrepancy, rather than being some kind of independent climate sensitivity estimate. What Marvel et al. show is that the response to a change in forcing is not the same for all forcings. Observationally-based estimates typically assume that the responses are the same. If the efficacy differs for different forcings, then this will influence estimates from methods that assume that it is the same for all forcings. Marvel et al. suggest that ignoring forcing efficacy means that observationally-based estimates tend to under-estimate climate sensitivity, and that this may explain some of the discrepancy between this method and the other methods.
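To make the efficacy point concrete, here is a minimal sketch of the standard global energy-budget estimate of ECS, with and without per-forcing efficacies. This is not Marvel et al.’s actual code or their numbers; the forcing breakdown and efficacy values below are made up purely for illustration. The point is only the mechanism: if a negative forcing (like aerosols) has an efficacy above one, the effective net forcing is smaller than the raw sum, and an estimate that ignores this will come out low.

```python
# Illustrative sketch of an energy-budget ECS estimate.
# ECS ~= F2x * dT / (dF - dN), where dF is the net forcing change and
# dN the top-of-atmosphere energy imbalance. All numbers are illustrative.

F2X = 3.7   # forcing from doubled CO2 (W/m^2), standard value
dT = 0.85   # observed surface warming (K), illustrative
dN = 0.6    # energy imbalance (W/m^2), illustrative

# Hypothetical forcing breakdown (W/m^2) and hypothetical efficacies
forcings = {"ghg": 2.8, "aerosol": -0.9, "other": 0.3}
efficacy = {"ghg": 1.0, "aerosol": 1.5, "other": 1.0}

dF_raw = sum(forcings.values())                            # assumes efficacy = 1
dF_eff = sum(efficacy[k] * f for k, f in forcings.items()) # efficacy-weighted

ecs_raw = F2X * dT / (dF_raw - dN)
ecs_eff = F2X * dT / (dF_eff - dN)

print(f"ECS ignoring efficacy: {ecs_raw:.2f} K")
print(f"ECS with efficacies:   {ecs_eff:.2f} K")
```

With these made-up numbers the efficacy-weighted estimate comes out higher than the naive one, which is the direction of the bias Marvel et al. suggest.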
Does this mean that there aren’t problems with Marvel et al.? Of course not, but that there might be does not change that forcing efficacy is something that should be considered when estimating climate sensitivity. Let’s bear in mind that there are also other potential issues with observationally-based methods too. Something I’ve tried to point out to Nic Lewis before is that if he really thinks that equilibrium climate sensitivity (ECS) is probably below 2°C, someone is eventually going to have to explain the associated physical processes. Our current understanding is that the ECS is probably greater than 2°C. Appealing to statistical technicalities is not really good enough. The goal should be to understand reality, not rigidly apply some kind of statistical method.
In a similar sense, the ratio of Nic Lewis’s best estimate for the transient climate response (TCR) to his best estimate for the ECS is about 0.85, considerably greater than what more complex models suggest. Such a large ratio would suggest that the system is almost always close to equilibrium, largely at odds with the large heat content of the oceans (unless the oceans can equilibrate very slowly for a very long time). Essentially, all of the methods have potential issues, and understanding their strengths and weaknesses, and why there are discrepancies between methods, is an important part of advancing scientific knowledge. Also, this discussion of discrepancies misses another key point: there is still quite a large overlap between the different methods. Some methods suggest that very low values are extremely unlikely, others suggest very high values are extremely unlikely, but none of them suggest that an ECS between 2°C and 3°C is very unlikely.
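The TCR/ECS ratio argument can be sketched with the same energy-budget relations. In that framework TCR ≈ F₂ₓΔT/ΔF and ECS ≈ F₂ₓΔT/(ΔF − ΔN), so the ratio reduces to 1 − ΔN/ΔF: a ratio near 0.85 requires the ocean heat uptake ΔN to be small relative to the forcing ΔF, which is the tension with ocean heat content noted above. The numbers here are illustrative, not Nic Lewis’s actual inputs.

```python
# Illustrative sketch: in the energy-budget framework,
#   TCR = F2x * dT / dF            (transient response)
#   ECS = F2x * dT / (dF - dN)     (equilibrium response)
# so TCR/ECS = (dF - dN) / dF = 1 - dN/dF, i.e. the ratio is set
# entirely by how large the ocean heat uptake dN is relative to dF.

F2X = 3.7   # forcing from doubled CO2 (W/m^2)
dT = 0.85   # observed warming (K), illustrative
dF = 2.2    # net forcing change (W/m^2), illustrative
dN = 0.6    # energy imbalance / ocean heat uptake (W/m^2), illustrative

tcr = F2X * dT / dF
ecs = F2X * dT / (dF - dN)

print(f"TCR/ECS = {tcr / ecs:.2f} (equals 1 - dN/dF = {1 - dN/dF:.2f})")
```

With these numbers the ratio is well below 0.85; pushing it up to 0.85 would require ΔN to be only about 15% of ΔF, which is hard to square with substantial ongoing ocean heat uptake.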