My previous post on research integrity was motivated by Stephan Lewandowsky and Dorothy Bishop’s article on transparency in science. This appears to have ended up being a rather more controversial topic than I was expecting, so I thought I would add one more post about it. This isn’t really to try to make it less controversial, mind you; it’s just a few thoughts I’ve had since writing the last one. If anything, it’ll probably make it worse 😉
When I refer to science (or research, in general) I really mean something similar to what Eli was referring to as normal science. I’m thinking of the process by which we gain understanding of some system, be it a physical system – like the universe or our planet’s climate – or something more societal. It can be a rather messy process and, as Michael Tobis points out here, there shouldn’t really be an expectation that others get access to all the mistakes, background discussions, and dead ends that preceded what was ultimately published. It’s not only that this material is rarely relevant; it’s also that scientists must be free to do stupid things out of the public eye.
Transparency should really only apply to what is actually published. However, even here I think there is a subtlety. A key part of the scientific method is that we only start to trust a scientific result once it has been tested and checked by others; we don’t simply trust it because it looks reasonable, because we can’t find any obvious errors, or because those who did it appear trustworthy. In this context, transparency should be something that aids the scientific method, not – IMO – something we should see as a way of making results more trustworthy in themselves. There’s nothing fundamentally wrong with delving into the details of what others have done, but there’s no real substitute for doing something independent to see if the original result stands up to further scrutiny. This involves collecting more data, doing more analyses, running improved and updated models, and so on.
Our overall understanding of a topic is therefore very unlikely to be based on a single study; rather, it is built on a collection of research that has tended towards a consistent picture. There isn’t even some definitive rule as to when we should regard our understanding as robust and when we shouldn’t; it’s generally a slow process of acceptance by the community. Transparency is clearly an important part of this whole process, but it’s not some kind of panacea. We should be careful about assuming that we can trust a result simply because the authors have been completely transparent, or about dismissing something just because the authors have not released everything that others think they should.
I should stress, however, that I’m really talking here about normal science; the process of discovery. If a single piece of research is likely to heavily influence some political – or societal – decision, then the situation may be very different. We may then want to really delve into the details of that study to ensure that there are no obvious errors, or reasons why we should do more before making any decision. I’m also not suggesting that normal science shouldn’t be transparent; I’m just suggesting that we need to recognise that the overall scientific method is what matters, and that transparency is simply one important part of that standard process. It shouldn’t be some kind of blunt instrument for bashing some and lauding others.