Michael Tobis has posted a recent article asking who decides what is true? He addresses an interesting issue: when you work within a discipline, you typically know what is regarded as credible and what isn’t. Explaining this to those on the outside, though, can be very difficult. Given that alternative ideas are rarely simply dismissed, it can be quite easy for some to promote views that seem plausible to those on the outside, but that are regarded as probably flawed by those who work in the field. How to address this, in a world where it is important to know what is true and what isn’t, is a complex and difficult issue.
I’ll let you read Michael’s post to find out more, but this gives me a segue into discussing another related topic. There’s a new book called the rightful place of science: science on the verge, which Judith Curry discusses here. My immediate reaction was rather negative, but I went through this presentation and it makes a lot of good points.
There clearly is a publish-or-perish mentality; people who don’t publish enough will not be able to build long-term careers. We incentivise behaviour that is not ideal in a scientific environment; researchers are rewarded for results that appear to have high impact, even if the results are over-hyped. Research has also become very complex, so it is easy to make mistakes and tempting to over-simplify results when communicating with the public. In some fields it turns out that many previous studies are not reproducible. These are clearly genuine issues that would be worth addressing but, as the presentation says, “no single party is solely responsible, and no single solution will suffice”.
However, even though I think there are a number of genuine issues that we could be addressing, is there really a crisis? I do think there is a tendency to reward the wrong kind of behaviour, but that doesn’t mean there isn’t good science/research being done. The solution is also very complex; academics have a responsibility not to over-hype their research results, but employers and funders also need to find other ways to judge the value and quality of research and researchers. However, we also live in a world where we want value for money, and so want to be able to quantify the value of the research that is being funded.
There may also be issues with replication in some fields but, as this article argues, science does progress through failures, and we should be careful not to assume that this issue is indicative of some kind of major crisis. Some of these research studies are very complex, and the lack of reproducibility may reflect the complexity of the system being studied, rather than indicate research malpractice.
So, although I think they do highlight some genuine issues, I’m not convinced that what they present is really indicative of some kind of crisis. The authors of this book are also – as far as I’m aware – not outside/independent observers; they’re researchers and academics themselves. What they’re presenting here is not some kind of independent report; it’s their own research. They themselves are susceptible to the same biases and incentives as all other researchers. That they discuss how many research results are exaggerated, and yet seem unaware of the irony of publishing a book called “science on the verge”, may suggest that they haven’t quite recognised this.
Update: Something I hadn’t realised is that the authors of one of the chapters in this book co-organised the 2011 Lisbon Workshop on Reconciliation in the Climate Change Debate. It was an interesting group of attendees and was covered on blogs and in New Scientist. It also included an episode in which Gavin Schmidt’s decision not to attend caused quite a stir.