I came across an interesting paper about the replication crisis that I thought I would briefly discuss (H/T Neuroskeptic). The paper in question is Reproducibility research: a minority opinion. It’s not open access, but I have found what I think is an early draft copy.
The background is basically that there have been a number of cases in which people have been unable to replicate, or reproduce, some earlier scientific/research study. A suggested solution is that researchers should make everything available, so that others can check their results. Some have even suggested that this is a key aspect of the scientific method/process. The new paper takes a rather dissenting position and, in my view, makes some interesting, and valid, arguments.
For starters, a key aspect of science is basically to test hypotheses. Our confidence in a result increases as more and more research groups produce consistent/convergent results, ideally doing so using different methods and, in some cases, different data sets. We don’t really gain confidence if we get the same result by exactly repeating what others have done, using what they’ve provided. There’s nothing wrong with this, and there may be scenarios under which this would be important (for example, if a single study is likely to play a dominant role in determining some decision), but this isn’t really a key aspect of science.
Similarly, we often talk about the scientific method, but it’s not really a well-defined process. There are certainly aspects that we’d probably all agree on, and there are certainly philosophical descriptions of a scientific method, but there isn’t some kind of rigid set of rules. There are always likely to be exceptions to any set of rules, and I do think we should be careful of thinking in terms of some kind of checklist. We shouldn’t really trust something simply because it ticked all the boxes, and we shouldn’t simply dismiss something because it didn’t. Again, we gain confidence when results from different groups, using different methods, converge on a consistent interpretation.
The paper also discussed the issue of misconduct. It suggested that misconduct is not really new, that it’s not responsible for this reproducibility crisis (which I would agree with), and that what’s proposed may not really be a solution. This isn’t to suggest that we shouldn’t take misconduct seriously, and deal with it suitably when we become aware of it, but simply that it isn’t really new and that it impacts science less than one might expect; it is typically uncovered, especially if what is being suggested is of particular interest.
When it comes to public confidence in science, the paper says
it would seem that any crisis of confidence would best be addressed by better informing the general public of the way science works
which I think is an important point. The idea is that people’s confidence is more impacted by apparent failures than by explicit misconduct. It’s important, therefore, to make clear that science isn’t some kind of perfect process in which each step incrementally adds to our understanding in some kind of linear fashion. We get things wrong, we go down dead ends, we try things that end up not working. We can even spend some time accepting something that later turns out to be wrong. In a sense, we learn from our mistakes, as well as from our successes. However, over time we still expect to converge towards a reasonable understanding of whatever it is that is being studied.
As usual, I’ve said more than I intended, and there is still more that could be said. I certainly have no real problem with people making everything associated with their research available. There may well be some issues with doing so (wasted resources, and some simply searching for errors) but I don’t think any are sufficient to strongly argue against this. However, I don’t really think that it is necessarily required, and I don’t really see it as some key part of the scientific method. What’s key is that there is enough information to allow others to test the same basic hypothesis; this does not necessarily require providing every single thing associated with some research result. There may well be cases where doing so is more important than in others, but I’m not convinced that it should become the norm. Others may well disagree.