Stephan Lewandowsky and Dorothy Bishop (whose blog I used to read quite a lot, but haven’t for a while) have published a comment in Nature about research integrity, arguing that we shouldn’t let transparency damage science. It’s a complex issue, but I think they make some interesting points, and a number of the usual suspects kindly turned up in the comments to illustrate some of what they were trying to suggest. The same usual suspects were most put out when some of their comments were later deleted.
The key issue, in my view, is that everything necessary for the results of a study to be evaluated and reproduced should be made available. However, that is not the same as making every single thing associated with a particular study available to anyone who asks; it should simply be possible for someone else to check and reproduce what’s been done before.
I can’t speak for other fields, but in my own field most data is either publicly available, or will soon be publicly available. Most methods and techniques are well understood, and there are often resources available so that you don’t necessarily have to write your own analysis codes. Most computational models are also publicly available, or something equivalent is publicly available. So, if someone wants to check a published result, they simply have to get their hands dirty and do some work. That doesn’t mean that the authors of the original work shouldn’t answer questions, or clarify things; it simply means that they shouldn’t be expected to hand over everything that they’ve done just because someone asks for it. If anything, if someone is incapable of redoing the work themselves, then they probably aren’t in a position to critique it in the first place.
To be clear, I’m not suggesting that researchers should refuse to hand over more than is strictly necessary. There’s no good reason not to be accommodating, and in many cases the requests are, themselves, entirely reasonable. On the other hand, scientific understanding progresses via people actually doing research (whether it’s new work, or an attempt to check another result), not via people sifting through other people’s work looking for possible errors.
Of course, this is my view based on my own experiences and what is the norm in my own field. It may well be different in other fields, and may be different in other circumstances. Maybe when human subjects are involved, or when the results are particularly societally or politically relevant, we should expect more. On the other hand, if research is fundamentally about gaining understanding, maybe we should simply trust the scientific method. We shouldn’t trust scientific results simply because they’re published by people whom we trust and regard as experts in their field. We also shouldn’t distrust scientific results simply because we don’t trust those who did the work, or because we don’t like the result. We start to trust a scientific result when it has been replicated and reproduced sufficiently. That requires doing actual work, not simply combing through what others have done in the hope of finding mistakes.