Eric Steig linked, yesterday, to a couple of old articles – by Steve Easterbrook – about the Climategate saga, one in Think Progress and the other a debate with George Monbiot. I agreed with much (possibly all) of what Steve Easterbrook was saying. The whole saga largely illustrates that even bright people can misunderstand how fundamental science works. To illustrate this further, Matt Ridley published – yesterday – another one of his polemics (reproduced here) in which he manages to insult most climate scientists and then whines when they criticise him on Twitter in response. I can’t face going through his article in detail, but it finishes with
In all the millions of scientific careers in Britain over the past few decades, outside medical science there has never been a case of a scientist convicted of malpractice. Not one. Maybe that is because — unlike the police, the church and politics — scientists are all pure as the driven snow. Or maybe it is because science as an institution, like so many other institutions, does not police itself properly.
which just illustrates that his understanding of how science works is woefully poor.
So, what’s the problem with policing science? Firstly, there are already rules in place. If you’re developing something for use, you have to ensure that it is safe. There are rules about ethics that need to be followed, and committing scientific fraud is a serious offence. However, I don’t think that this is what people are referring to when they mention policing science. What I think they mean is that scientists should be punished if they get something wrong, use a method that turns out to be unsuitable, or make some kind of fundamental mistake; if so, this is just ridiculous.
Why? Well, there are a number of reasons. Firstly, we want scientists to take risks. We want them to tackle difficult problems and try to understand things that we don’t yet understand. We want them to use data that might not be perfect, use techniques that are not fully developed, and use models that are not validated or verified. Otherwise, they’re essentially studying things that we already understand, rather than things we don’t yet understand. Why would they possibly tackle a difficult, as yet unsolved, problem if they then ran the risk of punishment for making a mistake or getting it wrong?
There’s also another fundamental reason why this is unworkable: who would do it? I think there are probably only a handful of people in the world who understand the techniques I use well enough to critique them in detail. I suspect this is true in many cases where people are doing fundamental research. They’re also, mainly, scientific competitors. How can people who aren’t independent be involved in policing other scientists? You might think that you could have some group of independent people who could police scientists, but it’s not clear how this makes any sense. If they had the ability and skill to understand cutting-edge scientific techniques, we’d be wasting an awful lot of money paying them not to use those skills to do research. We’d be better off simply having more scientists than paying a different group to acquire all the necessary skills but never actually use them.
Apart from things like ethics and fraud, there’s also a really good reason why we don’t need to actually police science: it’s largely self-correcting. We don’t trust a result simply because someone clever, who appears to be trustworthy, has done something interesting. We trust it when many people in different institutions, and different countries, produce consistent results using many different methods and techniques. Scientists may make many mistakes and follow many dead ends, but ultimately that’s all part of the scientific process, and demonising scientists for these errors would be extremely counterproductive. If anything, making mistakes and getting things wrong is how we learn. If we knew what to do in advance, it wouldn’t really be research.
In my view, there are – however – some real issues with science today, but these relate more to the system in which science operates than to the scientists themselves. Some of what is incentivised encourages bad behaviour and promotes hype over good science. If people wanted to improve science, they could police the system, rather than the scientists themselves. Of course, I suspect that most who criticise scientists would like to see more measures of success and more ways to quantify the value of science. In my view, it’s exactly this that is causing most of the problems we have today.