Doing science

This whole furore about Karl et al. has got me thinking more about how we actually conduct science/research. There is, I think, a perception that doing science/research involves following a fairly well-defined set of procedures about which there should be little disagreement. The reality is much more complex, with quite large differences between disciplines and sometimes even within disciplines. Much of the criticism of climate science seems to come from those who have some kind of relevant experience, and who then assume that everything should happen as they think it should, without considering that what works in their area may not work in another.

For example, I was at a meeting a few weeks ago at which one of the speakers pointed out that cosmology/astronomy is one of the few research areas that is primarily observational; climate science is one of the others. You can’t really do experiments. There is no control (we’re studying a single universe, or a single planet). We can’t go back and redo observations if they aren’t as good as we would like. Observations are often beset by problems that were not anticipated and that you can do little about. Understanding and interpreting observations requires models of various levels of complexity. All these factors mean that the details of how research is undertaken in these areas might be quite different to how it would take place in another. This doesn’t mean that the underlying scientific philosophy is different, just that the details of how the research is undertaken might differ.

In some cases, it is possible to develop a well-defined observational and analysis strategy, but in many cases it is not. Either you’re trying to use some data to do something that was not anticipated when the data was collected, or something unanticipated happens when the data is being collected that then requires some kind of correction. You might argue that in such circumstances there should be a process that is checked and authorised, but who should do this? Also, scientists ultimately want to do research, publish their findings, and let it be scrutinised by the scientific community. Following a well-defined procedure to the letter doesn’t somehow validate research results, and not doing so doesn’t somehow invalidate them. Our understanding increases when more and more studies (ideally by different groups of researchers) return results that are largely consistent; it isn’t really based on a sense that the research obeyed some well-defined procedure.

Something else to bear in mind is that research is carried out by humans, not by robots. Not only are they typically trying to solve problems that are perceived as interesting, they would also like others to be interested in what they have done. They try to write their papers in a way that highlights what might be of interest. There’s nothing wrong with this at all; we’re not funding research so that people can do things that are boring, and there’s no point in doing something interesting if people don’t notice.

However, there are certainly cases where researchers are regarded as having hyped their work too much (and some where they may not have hyped it enough). There are – in my view – even valid criticisms of the manner in which Karl et al. framed their results. However, precisely defining the correct framing is probably not possible, and that some might object does not necessarily mean that it was wrong. I’m, of course, not suggesting that nothing deserves criticism, or that there aren’t cases where criticism is obviously deserved. However, there are many where it’s not clear, and where the critic may simply not have sufficient understanding to make the claims that they’re making.

At the end of the day, research is never easy and rarely works as expected; if it did, the answer would probably be obvious. It can, of course, be perfectly reasonable to criticise how research is done, and how it’s presented. However, this would ideally be in the interests of improving our overall understanding, not undermining it.

This entry was posted in ethics, Research, Science, The philosophy of science, The scientific method. Bookmark the permalink.

9 Responses to Doing science

  1. Magma says:

    In my experience, those who espouse a rigid ‘cookie-cutter’ view of how scientific research must be carried out are usually not scientists, or if they are, not very good ones.

    Note that the above does NOT include those who hold themselves and their colleagues to rigorous standards of experimental design, statistical analysis, hypothesis testing, etc.

  2. Wise words on the diversity of methods. The phrase ‘scientific method’ does nevertheless describe a meta-process, true of all science: hypothesising based on prior knowledge and theoretical frames; designing experiments; conducting them and recording results in notebooks or elsewhere; analysing the results based on theoretical frames and new hypotheses; publishing papers summarising all of the above.

    The thing that I think many people do not realise is the extent to which computer models are now ubiquitous in gene research, astronomy, etc. So you cannot physically build a replica Venus, or Betelgeuse… but you sort of can! Andreas Wagner has said that “Computers are the microscopes of the 21st Century”, and there is no field of science where that isn’t to an extent true. The meta-process doesn’t really change, does it?

    On hyping a paper (and having a catchy title in order to make it into Nature, maybe!): is that so new? There must be a lot of pressure on scientists looking for ‘impact’ – not to change the science, but – to hype up the headlines. In such cases, it may be no surprise that journalists read the headline and not the paper, especially if they already have an angle they want to push.

  3. Why hasn’t there been a big stink about Spencer and Christy’s UAH tropospheric satellite dataset? It’s still at beta version 6.x and has been in use for almost 18 months now. Their paper explaining the underlying changes has still not been published.

    RSS updated their satellite dataset to Version 4 last year. They waited until after their paper had been published to release the dataset. The old Version 3 (which is closer to UAH 6.x beta) runs colder because of various issues outlined in their paper.

    Yet Judith Curry, Ted Cruz, etc. run around claiming “The satellite data is the best data we’ve got!” (referring to UAH), despite the satellite data having a much higher error range than land-based temperature data.

  4. Richard,

    On hyping a paper (and having a catchy title in order to make it into Nature, maybe!): is that so new? There must be a lot of pressure on scientists looking for ‘impact’ – not to change the science, but – to hype up the headlines.

    It’s not new and there is pressure. However, I do think researchers/academics have a responsibility to resist this, at least in the sense of avoiding hyping it simply to get the clicks. A classic is implying that you’ve found a habitable planet, when anyone working in the field will almost certainly realise that it is not. It can be difficult, though. I tried very hard on a recent press release to use the term “rocky planet” rather than “Earth-like” and had succeeded in doing that throughout the text of the press release. Only at the last minute did someone point out that there was still the term Earth-like in the title (which I duly changed, but could easily have missed).

  5. Ceist,

    Why hasn’t there been a big stink about Spencer and Christy’s UAH tropospheric satellite dataset? It’s still beta version 6.x and has been used for almost 18months now. Their paper explaining the underlying changes has still not been published.

    I’m guessing that it’s because most of the fuss about this kind of thing normally happens on blogs or in the tabloids, and serious scientists really can’t be bothered with that.

  6. paulski0 says:

    Ceist,

    I’m not sure there is a problem with Spencer and Christy’s behaviour with regards v6, though it’s possible I’ve missed some things. It can take a long time to finish and publish a dataset. Spencer is publishing the non peer-reviewed beta dataset on his personal blog, but I don’t see anything necessarily wrong with that.

    I don’t even really care about Curry presenting v6beta to Congress. RSS TLTv3.3 basically looks similar and the picture doesn’t really change significantly compared to the peer-reviewed v5.6. Though, Curry would have to disagree with that if she wanted to be consistent, given her statement that the small differences highlighted by Karl et al. have ‘major policy implications’.

    What is reprehensible is Curry calling for wide-ranging investigations and FOIA spamming based on accusations (the ones they’ve backpedalled to, anyway) which, even if they were true (and that doesn’t seem to be the case), would only point to behaviour far less problematic than her own, by the standards she has set out.

  7. Pingback: Bates scappa dal Serengeti - Ocasapiens - Blog - Repubblica.it

  8. I’m only a reader of this field. But it, like many, often has significant papers (and books) which never make it into the limelight at all. The public view of climate science seems to be that it is some arcane priesthood, rather than being, at least in parts, very approachable. I enjoy the work of David Archer and Ray Pierrehumbert in that regard, and, read as a history, their joint Warming Papers anthology is great. There is also Spencer Weart’s The Discovery of Global Warming, hosted online by the American Institute of Physics. I don’t know why people — and journalists — don’t pay more attention. Could it be they don’t want to do the work entailed in the homework problems of, e.g., Principles of Planetary Climate?

    Papers which seem to be missed in discussions are those regarding the atmospheric lifetime of fossil-fuel CO2 (with which Archer and Solomon are associated), and the enormous costs of reversing carbon contributions to the climate system, even if technology improves so that it is 1000x cheaper than estimated at present (and that assumes emissions are zeroed). To me, at least, these are major policy issues lost in discussions of whether or not the last 20 years have warmed. I mean, seriously, that’s a side show. To me, entertaining the possibility of really serious damage, and then realizing that, once the stuff is emitted, there is no turning around, would at least pique policy interest.

    One way, of course, is “shouting from the mountaintop” but, then, that can incur political wrath.

    From my perspective as a statistician, scientific fields tend to be conservative in their approaches, both in technique and in their sociology. Oftentimes, there are, for instance, completely legitimate statistical techniques used in one field which are unfamiliar in others, and the referees are uncomfortable approving results obtained using them in contrast to methods which have long been published, both because of concerns about validity and about comparability. Eventually, after a few methods papers make it into publication, the field changes. There’s nothing inherently wrong that this happens. It’s sometimes disappointing to see less effective methods used.

  9. Willard says:

    The future of ClimateBall – automating concern thankfulness:

    More time for science.
