Transparency

My previous post on research integrity was motivated by Stephan Lewandowsky and Dorothy Bishop’s article on transparency in science. This appears to have ended up being a rather more controversial topic than I was expecting, so I thought I would add one more post about this. This isn’t really to try and make it less controversial, mind you, it’s just a few thoughts I’ve had since writing the last one. If anything, it’ll probably make it worse 😉

When I refer to science (or research, in general) I really mean something similar to what Eli was referring to as normal science. I’m thinking of the process by which we gain understanding of some system, be it a physical system – like the universe or our planet’s climate – or something more societal. It can be a rather messy process and, as Michael Tobis points out here, there really shouldn’t be some expectation to have access to all the mistakes, background discussions, and dead ends that took place before doing what was ultimately published. It’s not only that this is not really relevant; scientists must also be free to do stupid things out of the public eye.

Transparency should only really apply to what is actually published. However, here’s where I think there is also a subtlety. A key part of the scientific method is that we only start to trust some scientific result when it’s been tested and checked by others; we don’t simply trust it because it looks reasonable, we can’t find any obvious errors, and because those who did it appear trustworthy. In this context, transparency should be something that aids the scientific method, not – IMO – something that we should see as a way of making results more trustworthy. There’s nothing fundamentally wrong with delving into the details of what others have done, but there’s no real substitute for actually doing something independent to see if the original result stands up to further scrutiny. This involves collecting more data, doing more analyses, running improved and updated models, and so on.

Our overall understanding of a topic is therefore very unlikely to be based on a single study, but on a collection of research that has tended towards a consistent picture. There isn’t even some definitive rule as to when we should regard our understanding as robust, and when not; it’s generally a slow process of acceptance by the community. Transparency is clearly an important part of this whole process, but it’s not some kind of panacea. We should be careful of assuming that we can trust a result simply because the authors have been completely transparent, or dismissing something just because the authors have not released all that others think they should.

I should stress, however, that I’m really talking here about normal science; the process of discovery. If, however, a single piece of research is likely to heavily influence some political – or societal – decision, then the position may be very different. We may then want to really delve into the details of that study to ensure that there are no obvious errors, or reasons why we should do more before making any decision. I’m also not suggesting that normal science shouldn’t be transparent; I’m just suggesting that we need to recognise that the overall scientific method is important and that transparency is simply an important part of the standard scientific process. It shouldn’t be some kind of blunt instrument for bashing some and lauding others.


84 Responses to Transparency

  1. Harry Twinotter says:

    In my opinion there is value in, say, another team replicating or validating the results of a study without following exactly the same method; that gives scope for validation using a different approach.

  2. It can be a rather messy process and, as Michael Tobis points out here, there really shouldn’t be some expectation to have access to all the mistakes, background discussions, and dead ends that took place before doing what was ultimately published. It’s not only that this is not really relevant; scientists must also be free to do stupid things out of the public eye.

    Scientists should be free to do stupid things because it is a creative process. If done right, there is not much routine; you are continually learning. You are working on the edges of what we know and you do not know in advance whether what you do will work; even doing everything perfectly the first time would be inefficient if it turns out to be a blind alley. It is the final paper that should be good science; how you come up with the ideas is irrelevant. Feyerabend would say: anything goes.

    Also, a paper should not be expected to be perfect the first time. The sentiment, which seems to come from politicians and political activists, that everything should be perfect the first time would seriously hamper scientific progress.

    As Physics mentioned above, there may be exceptions to that when a single paper/study is very important. I would see that mainly in large double-blind medical/biological studies, which are so expensive that there is almost a monopoly, and replicating them and building on them is very difficult. Such studies are also not constrained much by theory; in biology nearly every reaction is possible somehow. At the same time, such studies can have large regulatory consequences and the experimental set-up is pretty straightforward. That is something that may be better executed by engineers according to ISO norms, rather than by contrarian creative scientists who have a tendency to do things differently just for the fun of it.

  3. RickA says:

    I generally agree with you.

    However, in normal science replication is a huge problem.

    See this study (reference 1), which showed that only 36% of 100 studies in psychological science from three high-ranking psychology journals could be replicated.

    https://en.wikipedia.org/wiki/Replication_crisis

    While this study was about social psychology, I am sure the problem extends to other fields (perhaps climate science).

    What is happening here is that confirmation bias is tainting the results.

    Ten trials are run and (made-up example) say five comport with the hypothesis and five do not. Only the five which comport are reported.

    Someone else goes out and does the study over again (from scratch) and, guess what, they find that the effect either is not shown at all or is more than 33% lower than reported.

    Given this huge problem, it would be a good idea for future papers in every field to report all trials, warts and all, and not just the ones which agree with the hypothesis.
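
    A minimal sketch of that selection effect (purely hypothetical numbers; this assumes trial estimates scatter around a small true effect):

        import numpy as np

        rng = np.random.default_rng(0)
        true_effect = 0.2              # made-up "real" effect
        n_trials, n_per_trial = 10, 50

        # Run ten trials; each estimates the effect from noisy observations.
        estimates = np.array([
            rng.normal(true_effect, 1.0, n_per_trial).mean()
            for _ in range(n_trials)
        ])

        # Report only the trials that "comport" (estimate above the hoped-for effect).
        reported = estimates[estimates > true_effect]

        print(f"selectively reported effect: {reported.mean():.2f}")
        print(f"honest all-trials effect:    {estimates.mean():.2f}")
        # The selectively reported mean is biased upward, so a from-scratch
        # replication that keeps every trial will typically find a smaller effect.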

  4. RickA says:

    sorry – it was reference 27 in the wiki:

    Open Science Collaboration (2015). “Estimating the reproducibility of psychological science”. Science 349 (6251): aac4716. doi:10.1126/science.aac4716. ISSN 0036-8075. PMID 26315443.

  5. Rick A,
    There are clearly problems in all fields, but I don’t think that your example necessarily changes the point. If someone presents a study that, it turns out, cannot be replicated, you don’t actually need to replicate it using their own data to discover that; you simply need to try and do an equivalent study. That’s essentially my point. The scientific method is about testing and trying to replicate studies using more data/observations and different analysis techniques.

    Given this huge problem, it would be a good idea for future papers in every field to report all trials, warts and all, and not just the ones which agree with the hypothesis.

    Firstly, I’m not necessarily convinced it’s a huge problem, and what you’re suggesting is not quite as trivial as you seem to think. Let’s say I’m developing a code. I’m not going to publish all the testing while I’m implementing the various routines. I would publish any standard tests, though. Let’s say I run a simulation that crashes or does something that doesn’t make physical sense, and I find an obvious error. I’m not going to publish that.

    On the other hand, if I decide to run a suite of simulations to test some hypothesis, then I can’t simply drop some because they don’t match what I expected. Also, if I do find an obvious error halfway through running some suite of simulations, then I have to throw all of them away and start again. Consistency is crucial and you do need to define what is testing and what is production, but I don’t think your example from psychology is really suggesting that there are huge swathes of papers in all fields in which people are knowingly leaving out inconvenient results.

  6. Chris says:

    I like your summary a lot ATTP and agree with the overall sentiment. I like the fact that normal science can tolerate quite a bit of error, non-reproducibility, bad methodology and even fraud, since ultimately the underlying realities that we’re trying to uncover are revealed (a la Feyerabend, as Victor mentioned!), and that’s what the majority of guys and gals doing science are after. Your distinction between normal science and science with more immediate implications is well made – we absolutely shouldn’t tolerate bad methodology or fraud if the latter results in, for example, a false “link” between MMR vaccine and autism spectrum disorders… but we don’t care too much if poor science and credulous publishing results in a short-lived notion that some microorganisms can utilise arsenic in place of phosphorus in their DNA.

    Related discussions about scientific reproducibility can be confused by not recognising that the situation (with respect to replications) differs hugely depending on whether one is discussing psychological sciences, science with a high reliance on computation/coding, medicine, or the physical sciences, and I disagree with the inference by RickA above that the problems with reproducibility in psychology apply, e.g., to climate science. It seems to me that the major data sets and findings of climate science (surface temperature series, ice core data and analyses, atmospheric and ocean dissolved gas measurements, even paleoproxy reconstructions and so on) are eminently reproducible. Competent scientists recognise where there may be problems with data measures and analyses (e.g. tropospheric temperatures from satellite MSU or the uncertainties associated with particular paleoproxies) and these are discussed in the literature, so that IMHO it’s not so difficult for the interested non-expert to identify where the strongly-supported consensus positions lie.

  7. There is a converse problem to the one underlined by RickA: the possibility that a replication is a bit too perfect.

    Although Wegman had said that “We have been able to reproduce the results of McIntyre and McKitrick (2005b)”, the PC in Fig 4.1 was identical to one in MM05b. Since the noise is randomly generated, this could not have happened from a proper re-run of the code. Somehow, the graph was produced from MM05 computed results.

    http://www.moyhu.blogspot.com/2011/06/effect-of-selection-in-wegman-report.html

    One does not simply replicate someone else’s study by copy-pasting the results.
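
    A minimal sketch of why identical output is such a red flag (made-up red-noise parameters): two honest re-runs with fresh random noise essentially never produce identical leading principal components, so a pixel-identical figure points to reused results rather than replication.

        import numpy as np

        def leading_pc(seed, n_series=50, n_time=100, phi=0.9):
            """Leading principal component of a batch of AR(1) red-noise series."""
            rng = np.random.default_rng(seed)
            eps = rng.normal(size=(n_series, n_time))
            x = np.zeros((n_series, n_time))
            for t in range(1, n_time):
                x[:, t] = phi * x[:, t - 1] + eps[:, t]
            x -= x.mean(axis=1, keepdims=True)       # centre each series
            _, _, vt = np.linalg.svd(x, full_matrices=False)
            return vt[0]                             # PC1 over time

        pc_a = leading_pc(seed=1)
        pc_b = leading_pc(seed=2)       # an independent, honest re-run
        print(np.allclose(pc_a, pc_b))  # False: fresh noise gives a different PC1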

    ***

    The concept of simulation also places an important limitation on the ideal of replication.

  8. guthrie says:

    The helpful thing about a lot of physical science is that even if you fake your results, someone will attempt to take them further, and the universe will cut off any such attempts because the first results were faked.
    Obviously this doesn’t happen so much in some sorts of psychology, which can be a bit woollier, where you can set up the frame to suit yourself.

  9. Brandon Gates says:

    Anders,

    There’s nothing fundamentally wrong with delving into the details of what others have done, but there’s no real substitute for actually doing something independent to see if the original result stands up to further scrutiny. This involves collecting more data, doing more analyses, running improved and updated models, and so on.

    Since yesterday, Red Teaming has been topical again on my side of the pond:

    Christy’s written testimony (PDF): HHRG-114-SY-WState-JChristy-20160202.pdf

    We know from Climategate emails and many other sources that the IPCC has had problems with those who take different positions on climate change than what the IPCC promotes. There is another way to deal with this however. Since the IPCC activity and climate research in general is funded by U.S. taxpayers, then I propose that five to ten percent of the funds be allocated to a group of well-credentialed scientists to produce an assessment that expresses legitimate, alternative hypotheses that have been (in their view) marginalized, misrepresented or ignored in previous IPCC reports (and thus the EPA Endangerment Finding and National Climate Assessments).

    Such activities are often called “Red Team” reports and are widely used in government and industry. Decisions regarding funding for “Red Teams” should not be placed in the hands of the current “establishment” but in panels populated by credentialed scientists who have experience in examining these issues. Some efforts along this line have arisen from the private sector (i.e. The Non-governmental International Panel on Climate Change at http://nipccreport.org/ and Michaels (2012) ADDENDUM: Global Climate Change Impacts in the United States). I believe policymakers, with the public’s purse, should actively support the assembling of all of the information that is vital to addressing this murky and wicked science, since the public will ultimately pay the cost of any legislation alleged to deal with climate.

    On the one hand, Dr. Christy is spouting pure and unadulterated horse crap; 5-10 percent of the US climate budget to write an “assessment” expressing “alternative hypotheses” is a ludicrous amount of money for doing nothing more than essentially collating the “alternative hypotheses” that the people he’s thinking of have already written and published elsewhere. What he really should be asking is for those hypotheses to be put to the test in rigorous fashion on the US taxpayers’ dime.

    So, on the other hand, let’s get some serious research for our serious amount of money instead of paying them to “express” themselves … something which they’re already abundantly capable of doing without me having to foot the bill for it. I don’t want an “assessment” report, I want published papers in top-tier journals. And most importantly, I want a model which conforms to CMIP5 specifications but uses whatever real physics they formalize from some conglomeration of the bits and pieces already floating around out there. That model is the main benchmark used to determine future rounds of funding. They get a reasonable guaranteed initial period of funding, say 5 years. If their model shows comparable skill to the “establishment” CMIP5 ensemble, they get more funding. If their model shows significantly more skill, they get a group Nobel Prize and become the new “establishment”.

    It might turn out to be nothing more than an expensive form of political theatre. On the other hand, it would force them to stop flapping their lips and do real science, something which has been known to actually work: http://berkeleyearth.org/

    I’m confident “Team Blue” wins either way, especially since one of my other conditions is that some multiple of their 5 year funding (I would say >= 1) gets allocated to some combination of carbon taxes, renewables R&D, loan guarantees for renewables deployments, other like subsidies, etc.

  10. dikranmarsupial says:

    One thing that doesn’t seem to be appreciated by the general public is that publication in a peer reviewed journal is not the last step in acceptance by the research community; it is just the first. Scientists do not trust arguments because they appear in a peer reviewed journal; it just means that they have passed a basic sanity check (a check applied by human beings, and so itself not 100% reliable). Of course if it is a predatory open access journal then it doesn’t even have that, and if the paper is outside the scope of the journal in which it is published, then the sanity check is unlikely to be very robust (as the “peers” are unlikely to be “peers” of the appropriate field).

    Instead arguments are accepted after they have been incorporated in other research (not necessarily by direct replication) and found useful. This is often indicated by a paper having a good rate of citation (relative to the topic, citation rates are very variable across fields). However even then that doesn’t mean that the argument is sound (I have tried out an idea by a very eminent researcher, where the paper has several hundred citations, but it doesn’t actually work in practice).

    It isn’t that big a problem that some percentage of papers can’t be replicated, if the argument is correct but the work can’t be directly replicated, then it will be indirectly validated by work where the basic idea is used and found to work. If it isn’t correct, then it will be largely ignored by the scientific community. That doesn’t mean that we don’t need to make our work replicable, we should, but we shouldn’t be hyperbolic over the scale of the issue.

    The real problem is the general public thinking that peer-reviewed papers are guaranteed to be correct. They aren’t, but if you are not an expert in the field, then peer-reviewed papers at least have the benefit of having been sanity checked by someone who is, rather than by “some bloke/blokess of the WWW”.

  11. dikranmarsupial says:

    5-10% of the budget to support his 3% of the scientists – nice try! ;o)

    Raising the NIPCC isn’t the best move, given that they uncritically reviewed the work of Essenhigh, where the fundamental flaw is fairly obvious and has been widely discussed.

  12. Dikran,
    Indeed, what you say about peer-reviewed papers is quite right. Peer-review is simply a sanity check and we expect some fraction to be essentially wrong. That doesn’t really matter if we continue to probe these topics and improve understanding. It could be an issue if undue weight is given to single studies.

  13. snarkrates says:

    Rick A. cites a problematic study in psychology and then comes out with this whopper: “While this study was about social psychology, I am sure the problem extends to other fields (perhaps climate science).”

    I call bullshit. This is a complete non sequitur! Science is about measuring, controlling and bounding errors. Each field of science is susceptible to different errors. As long as you are doing that effectively, you are doing science.

    Dikran Marsupial’s point above is critical–the ultimate test of any work is whether your peers in the same and related (and even far flung) fields find it useful. That means it has to make understandable things that formerly were obscure. If it does that, it is probably an approximation of truth. Truth is not always beauty. It is always useful.

  14. “Transparency should only really apply to what is actually published.”

    Well, no. Someone may collect N+M observations, and publish results based on N observations.

  15. Well, no. Someone may collect N+M observations, and publish results based on N observations.

    That depends entirely on the setup. If you do survey astronomy, then the idea is that you survey – as deeply as you can – some part, or all, of the sky. Given that, people can then choose to extract data from that survey. What you’re suggesting – I think – is someone who chooses to extract some data from a survey and then later decides to simply ignore some of what they’ve extracted, without justifying it and without making this clear in the paper. That is clearly bad practice. However, most people assume that others are behaving honestly (it is hard sometimes with certain people, though). An assumption – without good reason – that they haven’t done so would really then qualify as a fishing expedition. The idea that one could resolve this by insisting that every single thing they did was available is bizarre. For example, can we have all the communications between you and others associated with the GWPF?

  16. dikranmarsupial says:

    Or they may collect data with N unique observations with non-contiguous unique IDs (from which it would be unwise to confidently infer the existence of another M observations, as the only requirement of unique IDs is that they are unique) and publish results based on N observations. A good way to show that the putative additional M observations actually existed would be to suggest some candidates. ;o)

  17. The presumption of honesty is clearly problematic, which is why the medical sciences are now moving towards pre-registration of experiments. Other experimental sciences tend to follow medicine’s lead.

  18. “For example, can we have all the communications between you and others associated with the GWPF?”

    I do not archive my email, but if you FOI U Sussex, you can see all the email I sent to anyone of your choosing.

  19. Marco says:

    “The presumption of honesty is clearly problematic, which is why the medical sciences are now moving towards pre-registration of experiments.”

    http://deevybee.blogspot.dk/2013/07/why-we-need-pre-registration.html
    Some may find it interesting that the word “honesty” does not appear.

  20. The presumption of honesty is clearly problematic

    Not for reasonable people, it isn’t.

    which is why the medical sciences are now moving towards pre-registration of experiments. Other experimental sciences tend to follow medicine’s lead.

    I think you’re probably using a rather nuanced meaning of the word “experiment” here. There are clearly scenarios where knowing the original protocol and knowing that it was followed is very important. Pre-registration may well be crucial in some cases. There are others where it is not. For example, if someone downloads a sample of abstracts from a public (or easily accessible) database, all you really need to know is the search term. If they’ve also provided a list of all the abstracts that were downloaded then you can easily check that what they claim to have downloaded is consistent with what those search terms return.

    There may of course be differences if the database is updated between the time of the original search and the newer search. However, it wouldn’t be hard to establish if there is some kind of discrepancy between what they claim to have downloaded and what the database search actually returns.
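
    A minimal sketch of that kind of consistency check (hypothetical IDs; run_search is a made-up stand-in for whatever query interface the database actually exposes):

        def run_search(term: str) -> set[str]:
            # Hypothetical: would query the public database for a search term
            # and return the IDs of the matching abstracts.
            return {"a1", "a2", "a3", "a4"}

        claimed = {"a1", "a2", "a4"}            # IDs the authors say they downloaded
        fresh = run_search("same search term")  # what the same search returns now

        missing = fresh - claimed   # returned by the search but absent from the paper
        extra = claimed - fresh     # claimed but no longer returned (database updates?)

        # Small, explainable discrepancies are expected if the database has been
        # updated since the original search; large unexplained gaps are not.
        print(f"missing: {sorted(missing)}, extra: {sorted(extra)}")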

    I do not archive my email, but if you FOI U Sussex, you can see all the email I sent to anyone of your choosing.

    I have absolutely no interest in doing so. I’ll leave fishing expeditions to others.

    Dikran,

    A good way to show that the putative additional M observations actually existed would be to suggest some candidates.

    Well, yes, but not if you’re pretty certain that they don’t actually exist.

  21. Richard,
    Do you just work in an environment where being dishonest is the norm?

  22. dikranmarsupial says:

    BTW Richard, you may find that while you don’t archive your email, that doesn’t necessarily mean that your institution doesn’t. FWIW I would not support any such fishing trip; what matters is whether the papers are sound.

    I think it is more important that there is transparency about things like the sensitivity of the conclusions to a single datapoint, especially when that was the datapoint contributed by the author. ;o)

  23. Michael says:

    Tol is clueless – the idea of pre-registration makes sense for certain types of research, i.e. clinical trials in medicine, and absolutely none for others.

  24. Michael says:

    dikran,

    That was a gremlin.

  25. Michael,
    Actually, I think that was one of the ones that wasn’t a Gremlin 😉

  26. I have personal experience of only a handful of cases of dishonesty, a tiny fraction of the many encounters I have had.

    That is not the issue, though. A properly designed system is robust against dishonesty. Preregistration prevents the suppression of unwelcome results, and it takes away the concerns of those who, rightly or wrongly, think that this is common practice.

  27. Indeed, Dikran, I do not archive my email because someone else does that for me.

  28. Richard,
    It almost seems as though you just don’t really understand the scientific method. If a particular result has been tested and checked by many researchers using independent, or semi-independent, methods and datasets, and the results all appear consistent, then we have confidence in that result. That’s the key point, in my view. We trust the overall method. There may well be cases when pre-registration would be very important, others where it would be inappropriate or an utter waste of time. I can just imagine the joy at the addition of another layer of bureaucracy to academic life.

    Preregistration prevents the suppression of unwelcome results, and it takes away the concerns of those who, rightly or wrongly, think that this is common practice.

    It might also help if people who know that it isn’t a common practice didn’t go around on blogs saying things that make it seem that it is.

  29. dikranmarsupial says:

    In many fields, the system is already sufficiently robust against dishonesty (e.g. replication exposes the dishonesty). There is a cost associated with additional measures such as pre-registration, which means that they are often of insufficient benefit to be worthwhile, given the resulting limitation in scientific progress that they would also impose. This is why they are suggested for some fields, but are obviously inappropriate for others. Michael is quite right. It also raises the problem that many interesting results are obtained via serendipity, which makes them rather hard to pre-register.

    Of course cunctators might want to push for pre-registration precisely to delay scientific progress that is going in a direction that they don’t like! ;o)

  30. verytallguy says:

    Richard Tol,

    I have personal experience of only a handful of cases of dishonesty

    I presume you’re referring to the compilation of material like this for the GWPF?

    Attempting, dishonestly, to imply that GWPF reports constitute peer-reviewed literature:
    http://www.thegwpf.org/how-gwpf-reports-are-peer-reviewed/

    Attempting to airbrush out the history of global warming in their logo:

    etc

  31. Andrew dodds says:

    @tol

    For the N/M case, this is exactly why replication beats repetition. The person trying to replicate uses the N+M set, finds problems, and the original research is discredited.

    Pre-registration is useful for clinical and psychological research, especially when a lot of money rides on the outcome, which is not generally the case in climate research.

  32. Mr Tol could make a small step towards more transparency by telling everyone who the dark funders behind his GW Policy Foundation are.

  33. RickA says:

    Andrew:

    More money rides on climate research than on clinical and psychological research.

    Climate research advocates are proposing things (like carbon taxes) which will make everything more expensive (food, fuel, energy).

    Last I heard we are talking 1 trillion per year for the next 80 years.

    That is pretty serious money.

    Plus people are trying to save the world.

    These two tensions lead to confirmation bias on both sides (one to exaggerate and one to minimize).

    It does not seem unreasonable in these charged circumstances to require that, when a climate science paper is published, all data be archived with the publication and all code used to process the data (or spreadsheets) be archived along with it.

    Frankly, this should be the rule in every field (in my opinion).

  34. RickA,

    Climate research advocates are proposing things (like carbon taxes) which will make everything more expensive (food, fuel, energy).

    Climate science research doesn’t really propose this. In a sense climate economics does, but mostly in the form of estimates for what it should be if it were to be implemented.

    It does not seem unreasonable in these charged circumstances to require that when a climate science paper is published, that all data be archived with publication and all code used to process the data (or spreadsheets) be archived along with it.

    A good deal of it is already available, you just need to actually look. Also, if someone can’t plot a graph, or do a basic calculation, without having the spreadsheet, or code, used by others, I’m not sure how giving it to them really helps. That’s not necessarily an argument against it all being available, but just a suggestion that it’s not clear how it would really make any difference.

  35. Marco says:

    “Preregistration prevents the suppression of unwelcome results, and it takes away the concerns of those who, rightly or wrongly, think that this is common practice.”

    Errr…no. The idea of preregistration is that you can get an ‘in principle acceptance’ of your paper summarizing the results, regardless of whether the results are positive or negative.

    There is also another type of pre-registration: that of clinical trials and the intended protocol. This differs a lot between countries, with the Netherlands having an optional registration. In the US it is mandatory to pre-register a clinical trial, but there is no hard requirement that the results of that trial are published unless the drug, biological product, or device studied in the trial was approved, licensed, or cleared by the FDA for some use. Thus, pre-registration does not prevent the suppression of results, whether unwelcome or not, even in the case of mandatory registration of clinical trials.

    There is also no one that can demand that the “in principle accepted” trial is completed and that the data are reported. It may not look good, but e.g. a company could still tell the authors that the paper cannot be published because the results are unwelcome, while giving other excuses.

    What it does hopefully do is to give a better chance to papers that ultimately come with a negative result, and reduce the attempts to find something positive to report, just to get the paper published.

  36. Marco says:

    “Pre reg is useful for clinical and psychological research, especially when a lot of money rides on the outcome.”

    Actually, no. One of the main complaints and reasons people started to look into pre-registration in relation to publishing is publication bias: it is easier to publish something with a positive result than something with a negative result. As a result papers sometimes involve modifications just to point out something positive, ‘hiding’ the overall negative result. This has nothing to do with financial reasons.

    There is a problem that companies may publish only the positive results, leaving the negative hidden from public scrutiny (but those *are* seen by the FDA and EMA, etc), and as a result metastudies can give an overly positive view of the treatment. But also companies run into the publication bias of journals. It is too easy to blame money.

  37. RickA says:

    Marco:

    I am sure publication bias has an impact here – so I agree with you.

    Publication bias, confirmation bias – bias of all kinds are a problem.

    I am sure some science isn’t even written up for submission to a publication.

    Say a group decides to replicate a study.

    They attempt it and are unsuccessful.

    They may just drop it and say – well maybe we didn’t understand the method properly.

    Even if they wrote it up it may not get published (due to publication bias) – but they might not even write it up.

    Yet – if 10 groups failed replication – that has implications for the original published study – but we may never know about that.

    I don’t know the solution – but it would be nice to get all those attempts to replicate (successful, failed or just showing a smaller effect) recorded somehow. Maybe just a database, not a paper?

  38. RickA,
    You seem to be ignoring impact. If someone publishes a study with a claim that is clearly going to have a high impact if true, there is an incentive to publish if you do a study that contradicts that original claim. The idea that there could be all sorts of studies out there that contradict core parts of AGW, but just haven’t been published because the authors couldn’t be bothered, is bizarre. The same is presumably true across many fields. Anyone who could publish something that contradicts a high-impact result from an earlier study would clearly want to do so.

  39. Marco says:

    RickA, as ATTP already pointed out, climate scientists are not proposing to spend huge amounts of money, and besides that, the vast majority will suffer equally from any actions taken.

    This is in stark contrast to the financial incentives common in clinical trials: a company wants to get its investments back, so anything that sells their drug or device is good, while anything that doesn’t, isn’t. And let’s not forget the many scientists who do clinical trials and own patents on the drug or device. They also have a direct financial incentive to ‘pimp up’ the results.
    There will be some climate science-related scientists who have similar CoIs, but the majority are much more likely to want to deny AGW, as they will possibly be financially affected by any actions taken to counteract AGW.

    And when it comes to that trillion dollars a year… I assume you got that number from Bjørn Lomborg? If so, Lomborg proposes various forms of adaptation, *which will also cost a lot of money*! He claims it is cheaper, but that relies solely on the idea that investments will lead to breakthroughs that make alternative energy sources cheaper, so that we then also reduce CO2 emissions, and (through some hopeful magic) by more than with the current plans. Maybe he is right, but for now he is gambling with our future.

  40. dikranmarsupial says:

    RickA the most important form of publication bias is that papers that have flaws are less likely to get published in a reputable journal. I don’t think there actually is a bias against “skeptic” papers in climate journals, good work by skeptics does get published and attracts interest (e.g. Nic Lewis’ work on climate sensitivity), but the sad truth is that much of the skeptic climate research simply isn’t very good (e.g. Salby’s carbon cycle arguments). If your paper is only good enough to convince those that agree with you, it isn’t ready for publication yet.

    Now, as to your other issue. I quite often re-implement machine learning algorithms that are given in papers that seem interesting, and sometimes they work and sometimes they don’t. However I have only written one comment paper pointing out the flaws, and it was a largely negative experience. I have found it not worth the bother, because science’s traditional method for dealing with this problem (everyone just ignores it and it doesn’t get cited much) is satisfactory. I have however written three other comment papers (two being climate related), and the reason it is worth doing in that case is not for the benefit of science, but because all three got some interest in the media.

    This is only a big deal if you think publication in a peer-reviewed journal is some guarantee that it is correct, but that is a misunderstanding of scientific publication. It is not the big deal you think it is.

  41. izen says:

    Total transparency is an unachievable utopian goal while any of the data, methods or results have an economic cost or benefit.

    Pre-registration in the Clinical field is still voluntary and partial. It is an attempt to limit the tendency for positive results to be published while neutral and negative results are relegated to the back of the desk drawer.

    When research results can have immediate and profound effects on the share price and existence/success of a company it is not surprising that bias can creep in. There are very few instances where basic climate research can have any comparable effect on the financial position of a business or institution. Perhaps this is why despite highly motivated efforts by some, very little bias derived from funding sources has been detected in mainstream climate research. The only example I can think of would be W Soon providing ‘product’ in return for funding he was required to fail to declare.

    Perhaps the danger of such biases impinging on the declared result is rather greater when the financial or economic implications of climate research are themselves the subject of a study. If the financial implications of research are significant, then greater transparency may avoid papers being published that still contain sign mistakes (even if they do not alter the conclusions) or undetected gremlins.

  42. RickA says:

    ATTP:

    I was actually thinking about medical research when I wrote my last comment – but it wasn’t very clear.

    Just saw this article:

    Do scientists need audits?

    This scientist says yes.

    Again – this proposal is for medical research (at this point).

    Science is important, and if we can change the culture to make published studies more reproducible, mitigate confirmation bias and so forth, that is all to the good.

    I know Steve McIntyre isn’t all that popular here – but knowing he (and others) will be reviewing climate papers has had a salutary effect on climate science papers (in my opinion).

    I am not even sure very many people disagree with his main point – that data used in published studies should be archived promptly with or shortly after publication. We shouldn’t have to wait for scientists to die to have data archived – agreed?

  43. RickA says:

    Marco:

    Yes – my number did come from Lomborg.

    But whether it is a trillion per year or less (or more) – my point is we are talking about serious money here.

    I am not saying we shouldn’t have a gas tax, or a carbon tax, or mitigate or adapt – I am merely pointing out that whatever the “plan” is (if one ever gets passed by Congress), real money will be at stake – and that makes people pay attention.

    More so than the Drake equation and speculation about aliens (for example).

  44. dikranmarsupial says:

    “but knowing he (and others) will be reviewing climate papers has had a salutary effect on climate science papers (in my opinion).”

    I suspect it (the auditing) actually has had almost zero effect, and if you think otherwise I suspect you fundamentally misunderstand the (completely correct) mindset of the vast majority of scientists. Most scientists are already trying to produce the best and most robust work they can, simply because they care about scientific truth and know that publishing things that are wrong is not in their long term interests. I suspect if Steve McIntyre had instead spent his time developing his own methodology for the same problem, he would have had a far greater impact.

    BTW, if you want climate data to be archived, then an excellent start would be for national met offices and other generators of data not to be required to maximise their revenue. Not all of the problems are due to the scientists. If you want archiving, it has a cost, who precisely is going to pay for it?

  45. Marco says:

    “I am merely pointing out that whatever the “plan” is (if one ever gets passed by Congress), real money will be at stake – and that makes people pay attention.”

    Well, that was actually *not* what you were “merely pointing out”. You said “These two tensions lead to confirmation bias on both sides (one to exaggerate and one to minimize)”, at the very least implying that climate scientists have a bias (to exaggerate) because of the large amount of money involved.

    Maybe that’s not what you meant, but I have a hard time finding a more benign explanation for your comment.

  46. RickA says:

    Everybody has confirmation bias – even climate scientists.

    The bias is to look for results which agree with the hypothesis (whatever it is).

    So, yes I was implying that climate scientists have confirmation bias and so do skeptics (and everyone else).

    Of course that does not mean that every paper is actually flawed due to confirmation bias – just that everybody has to be aware of their own tendency to look for results which agree with their hypothesis (and to watch out for discounting the ones which do not agree).

    The study I cited earlier (which is for medical research) was fascinating because even when the effect was shown (it did replicate), the effect found in the replication study was 33% less than reported in the original published study. It looks like the authors of the original studies (unconsciously) cherry-picked the best trials to report, which exaggerated the effect reported.

    Now it is entirely possible that the replication authors cherry-picked the worst trials to report, thereby exaggerating in the negative direction – I don’t know. But it is an interesting finding about human nature which all scientists, as well as the consumers of science, need to take into account.

  47. snarkrates says:

    What people seem to be ignoring is that bias has been part of the scientific method from the beginning. The thing is that biases compete. One might have a confirmation bias that data should follow a particular theory–but certainly it would be much more interesting, and the paper would be more important if the data didn’t conform. Now some researchers will be more conservative and will be more prone to confirmation bias. Some will jump the gun and report faster-than-light particles. Some will carefully confirm the discrepancy between theory and experiment.

    Science is not an individual, but rather a collective, activity. That’s why it works.

    Science doesn’t presume scientists are honest. It just makes it very clear that if you aren’t honest you will get caught, and the community will be merciless when you do.

    It is pointless to insist that scientists be perfect. So, instead, scientific methodology has to be sufficiently robust to produce reliable understanding despite human foibles–and sometimes even because of them.

  48. Hyperactive Hydrologist says:

    Academia is highly competitive, with (I believe) less than 5% of PhD students going on to have a successful career as an academic. For this reason any scientist who isn’t extremely rigorous will not last long as a scientist.

    I would also argue that scientists are often very conservative with their results. This has been shown to be the case with both Arctic sea ice predictions and sea level rise, and I predict it will also be the case for extreme rainfall and flooding, although it may take a few more decades to be certain. Remember, extraordinary claims require extraordinary evidence, and scientists will struggle to publish studies without a robust methodology and a reasonable degree of certainty in the results.

  49. dikranmarsupial says:

    Scientists generally don’t like being shown to be wrong*. The best way to avoid this is to try to be your own harshest critic, and to actively search out the weaknesses in your argument and acknowledge the evidence that argues against your position. Of course, being only human, we sometimes fail in this respect, but as snarkrates suggests, science has evolved mechanisms (such as peer review) and cultures to combat things like confirmation bias.

    *Of course the thing they like less than being shown to be wrong is being wrong and not being shown to be wrong as it means your research career has probably headed off into a dead end and you won’t get anywhere.

  50. dikranmarsupial says:

    HH: “I would also argue that scientists are often very conservative with their results.”

    I tell my students to try to publish the way a good chess player plays chess. A good chess player doesn’t play to maximise his maximum advantage, but to minimise his opponent’s maximum advantage (i.e. he expects “best play” from his opponent, and hence chooses the line of play for which his opponent has the weakest response). So when you write papers, the thing to do is to try to anticipate an “opponent’s” (e.g. the reviewers’) criticisms and address them before you submit the paper, and instead of making the strongest claim that the data can support, you make the strongest claim that can’t be refuted by an “opponent”. As I said, we are all only human and don’t always manage to anticipate criticism (or even spot basic errors), but scientists are trained to do this (or pick it up as they go along if they have to learn the hard way) as it is the best way to succeed in research.
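
    A minimal sketch of that minimax rule (made-up payoffs): pick the claim whose strongest rebuttal does the least damage, rather than the claim with the best best-case.

        # Made-up "damage" scores for an opponent's possible rebuttals to each claim
        # (lower is better for the author). Minimax: choose the claim whose worst
        # case is least bad, not the boldest claim.
        rebuttal_damage = {
            "boldest claim":  [9, 2, 1],   # spectacular if unchallenged, but refutable
            "moderate claim": [4, 3, 3],
            "cautious claim": [2, 2, 2],
        }

        best = min(rebuttal_damage, key=lambda claim: max(rebuttal_damage[claim]))
        print(best)  # "cautious claim": its strongest rebuttal does the least damage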

  51. Marco says:

    “So, yes I was implying that climate scientists have confirmation bias and so do skeptics (and everyone else).”

    Again, in my reading you implied that climate scientists have confirmation bias *because of financial incentives*. Did you mean to imply that, yes or no?

  52. Chris says:

    RickA – your persistent references to the medical trial literature (which has its own significant issues that others have discussed on this thread), and your insinuations of some sort of problem with confirmation bias in climate science (I assume you think there is a problem, else why do you keep insinuating it!), rather miss the point. If you think there is problematic confirmation bias in climate science then show us!

    I don’t believe there is problematic confirmation bias, since all of the main data sets (apart from tropospheric temperatures, with their unresolved problems, the difficult issue of climate sensitivity, although the latter is probably well-bounded, and well-discussed issues with paleoproxies) seem to be eminently reproducible (including the paleoproxy data, though the possibility is always there that these data are fooling us to some extent in a manner that we could discuss).

    Of course there is astonishing confirmation bias on the fringes of climate science. It’s difficult to explain the astonishing, repeated mis-analyses of tropospheric temperature data by Spencer and Christy between the early 1990s and 2005 (1/3 of their entire careers) without reference to confirmation bias. But this was sorted out by competent scientists with expertise in MSU analysis, and so, despite continuing uncertainties, “normal” science as ATTP defines it has addressed this. Likewise one can’t understand the remarkably wrong-headed analyses of Dr. Lindzen (on tropospheric water vapor in response to global warming, and on radiative feedbacks) without reference to confirmation bias. Again these mis-analyses were identified and clarified (normal science) by scientists with an interest in getting to the reality of the phenomena (you can find these corrective papers using Google Scholar, or ask me if you wish). Likewise the mis-analyses by Essenhigh on CO2 residence times (read dikranmarsupial’s nice corrective paper highlighting ludicrous flaws), the mis-analyses of forcings related to interglacial-glacial transitions by Chylek, the mis-analyses of climate response times by Schwartz, the silly paper on the influence of ENSO on temperatures by Carter (sadly deceased), MacLean and de Freitas (??? – can’t be bothered checking authors), and so on.

    So there is confirmation bias amongst the tedious fringe of climate science — but climate science itself? Give us some examples Rick.

  53. The best example is BEST, Chris:

    https://en.wikipedia.org/wiki/Berkeley_Earth

    When you get a Koch-funded research group who confirms what otters say, you know there’s a bias somewhere.

    ***

    Is bias bias a thing?

  54. snarkrates says:

    Willard: “Is bias bias a thing?”

    It is now.

  55. If bias bias is a thing, then bias bias bias is a thing too:

    [S]ince the fallacy fallacy is itself a fallacy, it cannot be used to label an argument’s conclusion as false without committing it in the process. “You have used the fallacy fallacy, therefore you are wrong” is as much an example as “you have used an ad hominem, therefore you are wrong” would be. In addition:

    A fallacy is an argument that doesn’t follow proper rules of logic.

    A fallacy fallacy happens because true statements can be defended through fallacious arguments. Merely proving that an argument is fallacious does not prove that the whole entire position that it defends is immediately false.

    A fallacy fallacy fallacy then, is the claim that disproving particular arguments or versions of a position is irrelevant to disproving the position itself. While fallacious reasoning in support of a position does not, in itself, provide absolute proof that the position is false, it does mean that the person making the argument has failed to present any case for it to be true.

    http://rationalwiki.org/wiki/Fallacy_fallacy

    Whoever reads about Cohen’s Law may lose less debates.

  56. anoilman says:

    snarkrates: Are you catching on to what Willard is concerned about yet?

  57. dikranmarsupial says:

    Looking up Cohen’s Law, I found Pommer’s Law: “A person’s mind can be changed by reading information on the internet. The nature of this change will be: From having no opinion to having a wrong opinion.” ;o)

    I just wish we could have a “discussion” rather than a “debate” occasionally.

  58. snarkrates says:

    An Oilman,
    Nope. I don’t speak obfuscation.

  59. RickA says:

    Marco asks “Again, in my reading you implied that climate scientists have confirmation bias *because of financial incentives*. Did you mean to imply that, yes or no?”

    No.

  60. John Mashey says:

    dikran:
    Once upon a time, I created a somewhat-whimsical scale for knowledge, in part with the idea of offering concise descriptions of the level of expertise required to read a specific book or article.

    I tried to extend this to “negative” knowledge, to cover “wrong” knowledge … but this is much harder to represent, since:
    a) There are people who actually know a lot, but promulgate numerous wrong ideas.
    b) There are people who don’t know much right or wrong, and fortunately, some know that.
    c) There are people who mostly believe and repeat wrong things, filling blogs endlessly. Some may diligently pursue details from good papers … but then misinterpret them.

    University course prerequisites give clues about the hierarchies of real knowledge, although even there it’s probably more like a DAG than a strict hierarchy.

    Negative knowledge is much harder to represent 🙂
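
    A minimal sketch of the DAG point (made-up course names): in a strict hierarchy each topic would have a single prerequisite, whereas in a DAG a topic can require several, and a topological sort still yields a valid learning order.

        from graphlib import TopologicalSorter

        # Made-up prerequisites: several topics feed into one, which is what
        # makes this a DAG rather than a strict hierarchy (a tree).
        prereqs = {
            "Calculus II": {"Calculus I"},
            "Linear Algebra": {"Calculus I"},
            "Classical Mechanics": {"Calculus II", "Linear Algebra"},
            "Climate Dynamics": {"Classical Mechanics", "Statistics"},
        }

        order = list(TopologicalSorter(prereqs).static_order())
        print(order)  # one valid order in which to acquire the knowledge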

  61. > I don’t speak obfuscation.

    Yet the ad hom mode might very well be the very first obfuscatory technique.

  62. RickA says:

    Chris:

    I tried – but my comment with my examples seems to be hung up in moderation.

  63. Brandon Gates says:

    dikranmarsupial,

    5-10% of the budget to support his 3% of the scientists – nice try! ;o)

    Heh, hadn’t thought of it that way. Even that small of a figure is out of proportion to what he (vaguely) proposes to do with it. It might be sufficient for a Red Team to do what I’m proposing.

    Raising the NIPCC isn’t the best move, given they uncritically review the work of Essenhigh, where the fundamental flaw is fairly obvious and has been widely discussed.

    For that and a slew of other examples, I definitely wouldn’t want to hand a Red Team hundreds of millions of dollars and let them run wild with it. I want them to put my money where their mouth is and do real science, by which I mean advancing knowledge, not writing “assessments” (read: doing audits), which is what the NIPCC are essentially already doing: attempting to “disprove” the IPCC without doing much of any original research. So they get a charter which is decided by committee involving “all” parties in advance. It contains specific research goals, a defined product in the form of a “full-fledged” climate model and a guaranteed but limited amount of time to get it done. And they must at least submit their main findings for peer review and publication in top-tier journals.

    Matching funds go toward mitigation/replacement programmes for the same set period of time. Whether they sink or swim, we get immediate forward progress on policy.

  64. Isn’t James Hansen already the leader of the Red Team?

  65. Brandon Gates says:

    Willard,

    When you get a Koch-funded research group who confirms what otters say, you know there’s a bias somewhere.

    “Know” may be too strong a word, especially if “where” is specified.

  66. Brandon Gates says:

    oneillsinwisconsin,

    Not by how I’m using the term Red Team in this context. That doesn’t preclude him from being the Red Team leader in some other context. Frex, his advocacy for fission power is arguably not a stereotypical Green Team position.

  67. Willard says:

    > I tried

    I’d rather say you tried to exploit Chris’ question to peddle in a laundry list of old hockey stick stories instead of bringing proof of confirmation bias, RickA.

    Now.

    Try again.

    Slowly.

    In your own words.

    An explanation.

    Something that shows understanding.

    Almost an engineer-level formal derivation.

    If you please.

    Also.

    If you could address the points sent your way.

    At least once.

    That’d be great.

  68. Brandon – James Hansen is the de facto Red Team leader because he is doing science and challenging the consensus.

    COP21
    Ice Sheet collapse
    AMOC
    Sea Level Rise

    I’m sure there are many more points on which he disagrees with the consensus. The assumption that the Red Team must be on the *other* side of the consensus seems both arbitrary and less likely to bear fruit.

  69. Brandon Gates says:

    oneillsinwisconsin,

    By the same token, the assumption that there can be only one Red Team seems overly limiting. Likelihood of bearing fruit, I cannot say. I’ll be interested to see where this goes:

    https://www.whitehouse.gov/the-press-office/2016/02/04/fact-sheet-president-obamas-21st-century-clean-transportation-system

    […] to meet our needs in the future, we have to make significant investments across all modes of transportation. And our transportation system is heavily dependent on oil. That is why we are proposing to fund these investments through a new $10 per barrel fee on oil paid by oil companies, which would be gradually phased in over five years.

    If needed, funding a Red Team to sweeten that deal seems like a good trade to me.

  70. Marco says:

    @RickA: thank you for the clarification

  71. @Victor
    No, sorry, I can’t tell you who funds the GWPF or Greenpeace because
    (1) I do not know; and
    (2) in England and Wales, the identity of charitable donors is protected by law.

  72. In England and Wales, the identity of charitable donors is protected by law.

    There are also laws as to what is, or is not, a charity. There are some who seem to think that the GWPF doesn’t really qualify. In fact, according to this, the Charity Commission ruled in 2014 that the GWPF had breached the rules on impartiality. Given the make-up of the Academic Advisory Council, this may not be all that surprising.

  73. dikranmarsupial says:

    “(2) in England and Wales, the identity of charitable donors is protected by law.”

    Is the Global Warming Policy Forum a charity?

  74. @dikran
    See the recent ruling on Friends of the Earth. It is a charity, and bound by the rules on charities. It donates money to Friends of the Earth Ltd, which is not a charity and not bound by those rules.

  75. Richard,
    I think the question was about the GWPF, not Friends of the Earth.

  76. Marco says:

    ATTP, they do the same thing: the charitable organisation is essentially bankrolling the political arm.

  77. The Very Reverend Jebediah Hypotenuse says:

    Willard:

    If bias bias is a thing, then bias bias bias is a thing too…

    Whoever reads about Cohen’s Law may lose less debates.

    And you doubted that you can be tossed around as the meta-guy.

    Level abstractions like that, and you’ll eventually find yourself deriving Wiio’s laws by accident.

    Moreover, observational evidence of Skitt’s Law leads to the following:
    It’s ‘fewer’, not less.

  78. Willard says:

    > It’s ‘fewer’, not less.

    Thanks. It could be “lesser” too. ClimateBall ™ is mostly lossless, if not lossmore.

  79. dikranmarsupial says:

    Richard, I know the GWP Foundation is a charity; I was asking if the GWP Forum is itself a charity. Are those who donate directly to the GWP Forum donating to a charity (and therefore have their privacy protected by law)?

    I’ll take your word for it that several other organisations have similar arrangements, but that doesn’t mean that the GWPF/GWPF arrangement is “transparent”.

    Having said which, the important thing is whether the arguments put forward by either of the GWPFs are sound.

  80. RickA says:

    [Mod: Sorry, but things get moderated here and it’s not just one side, despite what you might read elsewhere. I deleted your comment. There were too many standard “skeptic” talking points for it to have led to anything constructive. If you want to try again you can, but ideally by constructing an argument, not just parroting what has been repeated ad nauseam for years and years.]

  81. snarkrates says:

    Willard, you really need to learn what an ad hominem is.

  82. Willard says:

    Here you go:

    It isn’t and shouldn’t be our job to convince the “slow students” of the correctness of our research.

    Research Integrity

    Do you want me to reconstruct the ad hom for you as I would with a “slow student”?

  83. The Very Reverend Jebediah Hypotenuse says:

    Snarkrates, you really need to learn what an argumentum ad lapidem is.

    Only then will your feat be on solid grounding.

  84. dikranmarsupial says:

    As Richard has reminded me, I asked him a technical question about one of his papers:

    “…So, did you use a complexity penalty, such as AIC in comparing models?”

    He replied

    “Dikran, do you teach your grandma to suck eggs?”

    I asked him the question again (twice), but Richard conspicuously failed to give a direct answer to the question.
