Naomi Oreskes has a recent article in the New York Times called "Playing Dumb on Climate Change". The article discusses statistical significance and, in particular, Type I (false positive) and Type II (false negative) errors. Her basic point is that scientists typically aim to avoid Type I errors (don't make a scientific claim unless you're pretty confident that you're correct), while in a risk assessment scenario one might be more concerned about making a Type II error (don't claim there isn't a chance of something severe happening when it's possible that there is).
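To make the tradeoff concrete, here's a toy simulation (my own illustrative numbers, not from the article). We "detect" an effect whenever a measurement exceeds a decision threshold; raising the threshold makes us more cautious, which trades Type I errors for Type II errors:

```python
import random

random.seed(0)

def run_trials(threshold, n=10_000):
    """Estimate Type I and Type II error rates for a simple threshold test.

    Null world: measurements are Gaussian around 0.
    Effect world: measurements are Gaussian around a true shift of 1.
    """
    # Type I: the null is true, but we claim an effect anyway.
    type1 = sum(random.gauss(0, 1) > threshold for _ in range(n)) / n
    # Type II: the effect is real, but we fail to claim it.
    type2 = sum(random.gauss(1, 1) <= threshold for _ in range(n)) / n
    return type1, type2

for t in (0.5, 1.0, 1.645):
    t1, t2 = run_trials(t)
    print(f"threshold={t}: Type I ~ {t1:.2f}, Type II ~ {t2:.2f}")
```

At the conventional one-sided 5% threshold (1.645 standard deviations), the Type I rate is low but the Type II rate is large: being very sure before making a claim means frequently failing to flag a real effect, which is exactly the tension the article is pointing at.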
So, the basic argument that Naomi Oreskes seems to be making is that scientists should be willing to speak out about the risks of climate change even if they aren't confident (in a statistical sense) about what will actually happen. Personally, I thought the argument seemed perfectly reasonable. It does, however, seem to have some others rather up in arms. Most of the criticism seems to amount to claims that Naomi Oreskes doesn't understand statistics, but it's hard not to interpret it as being motivated by a desire to discourage scientists from speaking out about the risks associated with climate change.
However, I thought I might make some additional comments that are based on the possible – but maybe unlikely – chance that much of the disagreement is cultural (in an academic/research sense, rather than a societal sense). To me, and possibly to most physical scientists, statistics is simply a tool. It allows you to extract information from datasets and, as discussed in Naomi Oreskes's article, to gain some idea of how confident one can be in one's analysis.
However, statistics is not the be-all and end-all of data analysis and, in some cases, isn't even really used. I don't (and this may seem obvious 😉 ) use formal statistical analysis very often. Much of what I do is to consider a particular physical system and to try to understand how it will evolve under certain conditions. If I want to understand which conditions are most likely to match reality, then statistics becomes an important part of the analysis, but I don't need formal statistics if all I want to know is what happens when something changes.
However, even when one is using formal statistics, one still has to be careful about how one interprets the results. If a statistical analysis suggests that a model is consistent with observations, but you know that the model violates one of the fundamental laws of physics, you would reject that model despite the statistical analysis. On the other hand, if you're considering a simple system where you're confident that you understand the underlying physics well, you wouldn't reject your model if the statistical analysis suggested that the model was inconsistent with the data – you'd probably check, or improve, the data.
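This "physics overrides fit" reasoning has a natural Bayesian reading, which can be sketched with toy numbers of my own (the 0 and 1 priors are deliberately extreme for illustration): a model that violates a law of physics gets prior probability zero, so no amount of statistical goodness-of-fit can rescue it.

```python
def posterior(prior, likelihood, evidence):
    """Bayes' rule: P(model | data) = P(data | model) * P(model) / P(data)."""
    return likelihood * prior / evidence

# Model A fits the data well (likelihood 0.9) but violates, say, energy
# conservation, so it gets prior 0. Model B fits less well (likelihood 0.6)
# but is physically sound, so it gets the remaining prior mass.
p_data = 0.9 * 0.0 + 0.6 * 1.0          # total probability of the data

print(posterior(0.0, 0.9, p_data))      # model A: stays at 0.0
print(posterior(1.0, 0.6, p_data))      # model B: 1.0
```

A zero prior is of course an idealisation – in practice one might assign a model a very small but nonzero prior – but the point stands: prior physical knowledge legitimately enters the analysis alongside the data.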
I guess what I'm getting at is that in the physical sciences you have much more than just statistics; you also have the laws of physics, which you can use together with, and in the absence of, statistics. Of course, I'm not trying to suggest that statistics isn't important or useful, simply that we can be confident about our understanding of a physical system without necessarily needing to resort to, or rely on, statistical tests.
I had a brief discussion about this with Michael Tobis in the comments on his post about Naomi Oreskes’s article. I thought I’d end with something that Michael said that I’ve pondered myself and that may be relevant to this whole discussion.
The obsession with “the attribution question” has been driven by (political) denialism using (statistical) frequentism as a weapon. Reason is fundamentally Bayesian, and frequentism should be considered just a weird corner of Bayesian thought.