I’ve been reading a paper by Daniel Sarewitz that was highlighted by Jane Flegal on Twitter. The paper is called Of Cold Mice and Isotopes, or Should We Do Less Science? There’s quite a lot that could be said about the article, but since I’m trying to keep posts reasonably short, I thought I would comment on one thing.
The article says:
People who care about the quality and legitimacy of science could start insisting at every chance that science conducted and invoked in the post-normal context is not science. Post-normal science is easy to spot. When experts continue to disagree; when advocates continue to use science to advance value-based agendas and to accuse those they disagree with of misusing science; when decision makers don’t take action on urgent issues but call for more research; when action means that there will be winners and losers; when the quality of the science cannot be measured against any agreed-upon end-point—then, no matter how sophisticated the math or complex the scientific instruments, no matter how pure of motive and careful of method the scientists, it’s NOT science, and we should all say so.
Of course, the process of making decisions is not science. Also, the relationship between science and decision making is extremely complex; there isn’t a simple, linear process in which scientific information leads directly to an obvious outcome. However, I have a real problem with the idea that we should regard science conducted in the post-normal context as not science. The validity of some scientific research should not depend on its broader relevance.
The other problem is that any time science suggests something inconvenient, all you need to do is find some experts who disagree, highlight the value-based agendas, point out that decision makers aren’t taking action, claim there will be winners and losers, and fail to agree upon any end-point against which the quality of the science could be measured. If you can do all of this, then we’re apparently meant to say that this is NOT science.
This seems like a cop-out to me. What would be far more useful would be ways for us to assess the credibility of the underlying scientific information. We could develop methods for determining when expert disagreement is actually significant, rather than simply reflecting a small minority who refuse to accept the most recent scientific evidence. We could even try to determine whether value-based agendas have influenced the scientific research process in some substantive way. All of this would seem useful. Simplistic scenarios under which we’re meant to stress that some science is NOT science do not seem particularly useful.
Also, why should we judge science on the basis of whether or not decision makers are taking action, or of whether there will be winners and losers? Decision making is complex and shouldn’t really influence how we value the underlying information. Additionally, why should the quality depend on some agreed-upon end-point? There may be some truth to this when it comes to applied research, but a key aspect of fundamental research is that we can’t know the outcome in advance.
Of course, there may well be subtleties that I don’t understand, but if I am reading this right, then I disagree quite strongly with what is being suggested. Rather than helping society better understand how to utilise science in the decision making process, it seems to be providing a mechanism for avoiding making difficult decisions, or for validating information that could be severely lacking in credibility. I fail to see how this could be regarded as progress.