On transparency

A while ago I wrote a post about a Nature commentary on Research Integrity and transparency, by Stephan Lewandowsky and Dorothy Bishop. Warren Pearce and colleagues wrote a brief response and have since expanded on this in a blog post.

Something that struck me when I read the Pearce et al. response is that I’d forgotten how antagonistic (okay, maybe too strong, but I can’t think of another term) debates can be in this context; I don’t think that the Pearce et al. response has anything positive to say about the Lewandowsky and Bishop article. This is a little surprising given that one of the Pearce et al. criticisms seems to be that one should be careful of over-simplifying what is a complex issue. My interpretation of the Lewandowsky and Bishop article is that we should indeed be careful of simplistically thinking that transparency in research is some kind of panacea. We shouldn’t think that more transparency will mean that science will somehow be free from attack, or that it will somehow make people more trusting of science in general. So Pearce et al. are correct that we should be careful of over-simplifying this issue, but it’s not clear in what way Lewandowsky and Bishop did so.

Pearce et al. then discuss the issue of experts, saying:

the fundamental question is who counts as an expert, and under what conditions?

I realise that there might not be some kind of clear boundary between expert and non-expert, but surely it’s pretty easy to establish whether or not someone qualifies as an expert. Of course, it is quite possible for someone who isn’t formally an expert to make a positive contribution to a field, either by bringing some kind of fresh perspective, or by engaging with scientists who are then forced to think more about what they’re presenting, how they’re doing so, and the significance of their research. None of this, however, suggests that making all of one’s data and codes available is likely to lead to some layperson noticing something obvious that experts have missed. I agree that this is not an argument against doing so, but – similarly – it doesn’t seem like a particularly good argument for doing so.

Pearce et al. then discuss the role of the public and suggest that [a] more fruitful approach to addressing public doubts about science was proposed by David Demeritt in 2001, who said:

“The proper response to public doubts is not to increase the public’s technical knowledge about and therefore belief in the scientific facts of global warming. Rather, it should be to increase public understanding of and therefore trust in the social process through which those facts are scientifically determined. Science does not offer the final word, and its public authority should not be based on the myth that it does, because such an understanding of science ignores the ongoing process of organized skepticism that is, in fact, the secret of its epistemic success. Instead scientific knowledge should be presented more conditionally as the best that we can do for the moment. Though perhaps less authoritative, such a reflexive understanding of science in the making provides an answer to the climate skeptics and their attempts to refute global warming as merely a social construction.”

I actually don’t find much to disagree with in the above quote; I think that a better understanding of the scientific process would be of benefit. However, it seems that my interpretation of the above is somewhat different to that of Pearce et al. One thing that concerns me about the drive for greater transparency is that it could lead to a greater trust in individual studies. I don’t think that this is necessarily a good thing. Typically we start to trust our understanding of a scientific topic when results from different studies by different people/groups start to converge in some way.

We don’t increase our trust in a particular study simply because we can’t find an error and – similarly – we shouldn’t necessarily dismiss a study because someone finds a mistake. Likewise, we shouldn’t trust something more because the authors have been completely transparent, and shouldn’t necessarily dismiss a study because the authors have not been as transparent as we might have liked; we trust the overall scientific method, not individual studies or individual researchers. That, to me, is the social process through which those facts are scientifically determined. So, I don’t necessarily see anything in the above quote that is at odds with what was being suggested by Lewandowsky and Bishop.

Pearce et al. finish their post by saying

What is noticeable is how little these social sciences critiques have cut through to those in the natural sciences.

Well, there may be some reasons for this. From what I’ve seen, some of the social science critiques appear to be coming from those who seem to think that they’re in some kind of special position where they can observe and critique the natural/physical sciences. Why? We’re all researchers. Most of us work in the same environments with the same pressures and incentives. We all potentially suffer from biases. We can all make mistakes. We’re all expected to engage with the public and, potentially, with policy makers. So, maybe some natural scientists just don’t really see why they should pay much attention to some social scientists who seem to think they’re in some kind of position to critique how natural/physical scientists undertake their research. It also doesn’t help when some social scientists make it fairly clear that they don’t really understand what’s being presented by natural/physical scientists.

In my view, if social scientists want their critiques to be taken seriously by natural/physical scientists, they should put more effort into engaging with them directly, rather than appearing to be standing back and simply observing. On the other hand, I do remember a post in which some in the comments suggested that I simply didn’t understand what was being presented by the social scientists. This may be true, but – if so – I was not the only physicist to misunderstand what was being presented, which might suggest that social scientists are not putting sufficient effort into making their critiques understandable.

I’ll end by quoting their final two sentences

To be clear, there is no excuse for ignoring the existing evidence base. However, we believe that social scientists must be more proactive in using that evidence base in order to lead the debate from a position of strength.

What evidence, and why should social scientists be aiming to lead the debate from a position of strength? I don’t really understand what is being suggested here, but maybe this is an opportune moment to re-highlight Michael Tobis’s post about swimming in your own lane.


44 Responses to On transparency

  1. I’ll make two quick comments, which I couldn’t work into the post. It’s always possible that I simply misunderstand what social scientists who study the science/policy, science/society interface are suggesting. However, given that it is they who are studying and presenting this information, and that I appear not to be alone in this, I think the onus is on them to be clearer, not on me to necessarily put effort into understanding what they’re presenting.

    Another thing is that the above is written in the context of normal science; the process by which we gain understanding of a natural/physical system. This is typically a slowish process in which multiple studies and multiple researchers work within a consistent framework, and in which the results ultimately converge towards a consistent picture; no single study dominates. However, if we were to consider a scenario where a single study might, by itself, determine some policy decision, or whether or not a drug – for example – should go to market, then I think how we approach that might be very different. Wanting to be convinced that there are no errors/mistakes in such a study may well be much more important than it is for any one study amongst the many that determine our understanding of a natural/physical system.

  2. John Hartz says:

    ATTP: I can’t see your OP. Please make it more opaque. Thanks.

    [Belated April Fool’s greetings.]

  3. That is belated, as it’s April 2 for you too 🙂

  4. John Hartz says:

    ATTP: Thank you. I have a cold and am a tad loopy this morning.

  5. Willard says:

  6. Kevin Boyce says:

    Hah! I was going to comment about how much this reminds me of Matt Nisbet’s “framing” arguments about creationism, much discussed over at Pharyngula a few years back. In which the framers were insistent that they were the experts in communication and the scientists just weren’t understanding what they said. The irony never made it through to them. And then I followed your link to Tobis’ article, and who should I find in the middle of this episode but Nisbet.

    I suppose that should give me some comfort.

  7. Kevin,
    There is a wonderful irony in a group who claim to be experts at communication being unable to get another group to understand what they’re suggesting.

  8. whimcycle says:

    (aTTP’s comment at Pearce’s blog is a model of restraint, but I’ve not seen anything that contradicts Teh Stoat’s earlier assessment of WP: [Mod: redacted])

  9. I suspect Warren is aware of Stoat’s assessment and can always go there to find out if he doesn’t already. Given that I’m commenting on something he’s written, and have myself published a response to something he’s published, I think I’ll aim to keep this comment thread polite in case Warren wishes to engage here.

  10. Willard says:

    Warren & Brigitte claim that Lew & Dorothy’s letter is “overly simplistic” and that it lacks the “nuanced evidence” their own field could provide. They say it’s over-simplistic because it omits to mention even more complex issues. This armwaving and handwaving looks suboptimal to me.

    Lew & Dorothy’s overarching point doesn’t require that we assume the issues are simple. On the contrary, they present a tension between openness (or “transparency,” a concept I dislike) and the researchers’ individual rights:

    The progress of research demands transparency. But as scientists work to boost rigour, they risk making science more vulnerable to attacks. Awareness of tactics is paramount. Here, we describe ways to distinguish scrutiny from harassment.

    http://www.nature.com/news/research-integrity-don-t-let-transparency-damage-science-1.19219

    The main distinction presented by Lew & Dorothy is between scrutiny and harassment. Unless they can show that this distinction is “complex” or “overly simplistic,” Warren & Brigitte should pay due diligence to the specific recommendations and be done with it.

    PS: It might be interesting to compare our Stoatness’ concerns regarding how people are mean to Junior and his own stance regarding Warren.

  11. I get the impression from Warren Pearce’s writings that he enjoys adopting a rather superior position of standing back and observing, but not questioning the accuracy of what either ‘side’ is saying. His 2013 Guardian article* gives a very scientifically-uncritical ride to ‘skeptics’ like Anthony Watts and Andrew Montford, almost suggesting their constant criticising is a noble undertaking.

    One would have to be scientifically naive to adopt the stance he does. If he had a little more scientific understanding he’d perhaps see where Lewandowsky is coming from.

    * https://www.theguardian.com/science/political-science/2013/jul/30/climate-sceptics-scientific-method

  12. angech says:

    “I realise that there might not be some kind of clear boundary between expert and non-expert, but surely it’s pretty easy to establish if someone qualifies as an expert, or not.”
    v
    “The debate must also include representatives from across the broad range of public viewpoints.”

    Worked on a medical advisory panel once which by law had to include a consumer representative.
    No medical knowledge.
    Projects we discussed all required expert medical knowledge.
    Having to explain in layman’s terms and getting her perspective on issues was a breath of fresh air, much as I was disdainful initially.
    So boundaries for experts only?
    That would be a little precious.
    Similarly transparency to one involved group is quite different to transparency as seen by the public.
    Not to mention that transparency should not be an excuse for “what we can legitimately hide”.
    The message in the title seems clear and unambiguous but the text did seem to stray into areas where it is alright to not be transparent [probably just my bias, sorry].

  13. angech,
    I’m not quite following what you’re getting at.

    Having to explain in layman’s terms and getting her perspective on issues was a breath of fresh air, much as I was disdainful initially.

    Yes, I think I said something similar.

    So boundaries for experts only?
    That would be a little precious.

    No, I don’t think I said any such thing.

    Similarly transparency to one involved group is quite different to transparency as seen by the public.

    Yes, isn’t this the point? Defining what would be suitable for different groups is non-trivial.

    The message in the title seems clear and unambiguous but the text did seem to stray into areas where it is alright to not be transparent [probably just my bias, sorry].

    No, not really. I think that everyone should aim to be open and honest. The complication is what is a reasonable expectation. In my field it is almost always possible to check what others have done simply by reading their papers. The codes are mostly public (or equivalent codes exist) and everyone understands the terminology and the definitions. I don’t need much else from them. It’s completely transparent. Similarly, most of the data is also public and there are publicly available codes for analysing the data.

    So, transparency doesn’t require that scientists dump every single possible bit of code or every single possible data file (processed, for example) on some site for others to download. But what do you do if someone with no formal expertise at all demands that you send them the exact data and code that you used to generate a specific figure? There’s no reason not to do it, but you might be slightly put out if it was the kind of thing that you’d expect them to be able to reproduce from what was already available.

    The bigger issue, though, is that science advances when we do independent checks of other people’s results, not when we check their data and codes for errors. There’s nothing necessarily wrong with the latter, but it’s not obviously cost effective. As I said above, though, it might be very important if we’re considering a study that will, by itself, have an enormous impact on some policy decision or that might, by itself, determine if a drug goes to market, or not.

    Possibly my view is biased by what is the norm in my field. As I said above, people typically redo things using their own code, or another, or they download the raw data and do their own analysis. The only occasion I’ve had when people have asked me for some raw data was when they wanted to do something different to what I had done. I always give it and whether or not I end up as an author of their paper depends on how much I have to do for them to do what they want to do. I’ve both simply been acknowledged in the paper (when they did all the extra work) and been an author (when I had to do quite a lot of extra analysis myself). I’ve never been in a situation where someone has apparently wanted my data or codes to carefully check to see if they could see some kind of problem in my analysis.

  14. Chris says:

    Haven’t Pearce et al. built their “critique” on a misrepresentation of Lewandowsky and Bishop’s (L&B) article? L&B’s theme is straightforward to understand and is stated explicitly in the preamble to their article:

    ”Stephan Lewandowsky and Dorothy Bishop explain how the research community should protect its members from harassment, while encouraging the openness that has become essential to science”

    That could hardly be clearer. We know that harassment of scientists occurs (see e.g. [1] and [2] at bottom of post). So L&B describe this problem as something we should be aware of in the context of the growing trend towards research transparency.

    In their section “The danger of simplified dichotomies”, Pearce et al. complain that L&B “do over-simplify the complex issues in play when thinking about transparency in science, portraying them as dichotomies that pit researchers against their critics.” But that’s a “dichotomy” constructed by Pearce et al.; it’s not inherent in L&B’s discussion. L&B indicate that good scientists are open to being confronted by alternative views but that this openness can be exploited by those with agendas (e.g. in attempting to stall inconvenient research). They suggest 10 things (“Red Flags”) that might be considered both in assessing whether requests for data or communications with scientists (or their institutes) may be vexatious or otherwise unreasonable, and in considering whether researcher(s) themselves might be less than honest in protecting their research against open disclosure. Despite the fact that Pearce et al. focus exclusively on “what is an expert” in relation to the “Red Flags”, L&B are pretty clear that it is the range of behaviours that might indicate critiques, demands for data etc. are unreasonable or vexatious: i.e. L&B say in relation to the “Red Flags”: “None by itself is conclusive, but a preponderance of troubling signs can help to steer the responses of scientists and their institutions to criticism.” So L&B isn’t really “dichotomous” at all.

    I get the feeling that a small number (I would hope!) of social scientists feel that they would like to be part of “the debate” but don’t actually have anything in particular to say. So they construct “arguments” involving misrepresentations of others’ work and write unnecessary critical commentaries which generate tedious pseudo-debates.

    ————————————————————————
    Last week the Climate Science Legal Defense Fund (CSLDF) filed a new brief in support of two scientists under attack from some legal institute demanding access to 13 years of emails and other documents because, according to the data-demanders, “…these two researchers somehow constitute a ‘scientific-technological elite’ that has ‘successfully corrupted public policy’ with respect to ‘climate alarmism’” [1] (quoting from the linked article from the CSLDF). Likewise, in relation to the very tricky and rather unfortunate Chronic Fatigue Syndrome (CFS) situation, harassment of CFS researchers seems to have had a negative impact both on the morale of some CFS researchers and may have limited the recruitment of good researchers into the field. [2]

    [1] http://climatesciencedefensefund.org/2016/03/28/csldf-files-new-brief-support-protecting-climate-scientists/
    [2] http://www.bmj.com/content/bmj/342/bmj.d3780.full.pdf

  15. Chris says:

    I might add that I don’t agree with everything Lewandowsky and Bishop (L&B) say. The title of their article “Research integrity: Don’t let transparency damage science” isn’t very descriptive of their article and is susceptible to misinterpretation (deliberate or otherwise) – I wonder whether L&B or Nature’s editors chose it.

    One might also quibble with L&B’s points on post-publication data availability. Is it acceptable that release of certain data could depend on the perception of the researchers about whether or not the data-requesters might misrepresent the data (e.g. “cherry pick” elements to pursue an agenda position)? L&B tend towards that conclusion. Ideally non-confidential data should be deposited in a state that is freely available to anyone independent of their motives. It’s obvious that some data (most obviously in relation to analysis of human subjects) may not be releasable. It might also be possible that some of the justifiably-non-released data might be utilised by others as part of a collaboration with the original authors, once the latter are comfortable with the motives/competence of data-requesters. The devil is in the detail…

    I guess that leaves the situation where there may be a real problem with the analyses and interpretations of a study that could only be properly addressed by an independent analysis of all the data in the study. That’s a tough one to address – the devil really is in the detail there.

    A third quibble might relate to the specific focus of L&B and whether this may be somewhat unrepresentative of science, transparency and researcher-non-researcher interactions in general. Dr Lewandowsky has chosen to work in a field in which highly antagonistic researcher-non-researcher interactions are abundant – indeed he seems to relish this and it’s the specific focus of some of his research. So he, at least, is talking from a rather untypical viewpoint. That’s not to say, though, that addressing researcher harassment in science isn’t important, as are attempts to better define the circumstances under which data-release and researcher-non-researcher interactions are acceptable and productive. That’s where L&B seems to be a decent contribution.

  16. Willard says:

    > Despite the fact that Pearce et al. focus exclusively on “what is an expert” in relation to the “Red Flags”, L&B are pretty clear that it is the range of behaviours that might indicate critiques, demands for data etc. are unreasonable or vexatious: […]

    Right on. I think Pearce et al. focus on that because in Warren & Brigitte’s letter, we can read:

    The authors present important topics such as expertise, disciplinary boundaries and communication as simple dichotomies. These divisions overlook extensive nuanced evidence from the social-science literature about who counts as an expert and under which conditions (see, for example, go.nature.com/xdfzrn).

    http://www.readcube.com/articles/10.1038%2F531035d

    Instead of clarifying how Lew & Dorothy’s so-called dichotomies can be dispelled by a more refined construction of the concept of expertise, the blog post begs the question and raises the usual concerns:

    Unfortunately, Lewandowsky and Bishop do over-simplify the complex issues in play when thinking about transparency in science, portraying them as dichotomies that pit researchers against their critics. What’s worse is that this over-simplification appears in the pages of Nature, science’s most high-profile forum. This dangerously inflames tensions in controversial areas of public science and stymies efforts to break deadlocks.

    http://blogs.nottingham.ac.uk/makingsciencepublic/2016/03/30/transparency-lewandowsky-bishop-socialscience/

    Since they provide no evidence about the dichotomies nor about the dreadful deadlocks, I think it’s safe to say that they don’t use any “evidence base in order to lead the debate from a position of strength.”

  17. Willard says:

    I now see what ticked Warren & Brigitte: the lines of the red flags list. Lew & Dorothy simply listed a few items that could help recognize if a request was genuine or vexatious. They readily admit that:

    None by itself is conclusive, but a preponderance of troubling signs can help to steer the responses of scientists and their institutions to criticism.

    http://www.nature.com/news/research-integrity-don-t-let-transparency-damage-science-1.19219

    These signs are more about credibility than expertise. This is why “expertise” is only the first sign.

    Warren & Brigitte’s letter was therefore unclear and their blog post (with Sarah) incorrect in pinpointing expertise “specifically.”

  18. Chris,

    Haven’t Pearce et al. built their “critique” on a misrepresentation of Lewandowsky and Bishop’s (L&B) article? L&B’s theme is straightforward to understand and is stated explicitly in the preamble to their article:

    That was my impression. Pearce et al. appear to be criticising something that Lewandowsky and Bishop haven’t actually said. As you say, the argument in Lewandowsky and Bishop appears to be that openness/transparency is important (crucial, in fact) but that we should be careful of it being mis-used in some circumstances.

  19. Willard,
    Indeed, this part of the Pearce et al. response was interesting, given that Lewandowsky & Bishop simply seem to be highlighting some possible signs, not trying to provide anything definitive.

    Lewandowsky and Bishop pinpoint some pitfalls in the societal process of science, but a cavalier attitude to evidence has inadvertently reinforced a caricatured image of this process.

    I was kind of hoping that Warren (or someone) might respond to my comment on MSP, because there appears to be an irony to their response. Part of their argument appears to be that this is all very complex and we should be careful of assuming that some group of supposed experts can define how things should operate; we should bear in mind that there are many factors to consider and should also include the views of others who might have a different perspective. However, the end of the Pearce et al. response appears to be suggesting that there is an incontrovertible evidence base that supports their position and that social scientists should be placing themselves in some kind of expert-like position where they can lead this debate. It’s possible I misunderstand what’s being suggested at the end of their post, but – if not – it does seem rather ironic given what they appear to be saying in the rest of their post.

  20. snarkrates says:

    In some ways, this problem is not new. Physicists have been confronted with cranks who wanted to disprove Einstein since Einstein wrote his Special Relativity paper. What’s changed is that the Internet makes it easier for the cranks and nutjobs to find each other.

    And while I am all for transparency, I don’t think that will be helpful in assuaging the doubts of those who are incapable of understanding how science works–or for that matter in detecting problems when vested interests can hide unflattering results behind a “company proprietary” label.

    And as far as educating folks about the sausage-making that goes on in science, I rather doubt that most have the patience to understand why it works. Ultimately, we may simply be reduced to pointing out that the sausage that comes out of the process is pretty damned good–the “Science! It works, bitches” defense.

  21. Brigitte says:

    ATTP, I have replied to your comment. I have also read Michael Tobis’s post and agree with much of what he says.

  22. Brigitte,
    Thanks, I’ve just seen that. I have to admit that what you describe in your comment (which makes some interesting points) is not what I took from reading the post. I have to head out for dinner with the family, but I’ll try and digest what you’ve said and respond in more detail later.

  23. Willard says:

    > In some ways, this problem is not new. Physicists have been confronted with cranks who wanted to disprove Einstein since Einstein wrote his Special Relativity paper.

    It’s even older than that:

  24. izen says:

    @-““The proper response to public doubts is not to increase the public’s technical knowledge about and therefore belief in the scientific facts of global warming. Rather, it should be to increase public understanding of and therefore trust in the social process through which those facts are scientifically determined. ”

    This reveals the assumptions underlying the position: that scientific accuracy is a function of social context – shades of Ravetz and Hulme and ‘post-normal’ science.

    @-“Science does not offer the final word, and its public authority should not be based on the myth that it does, because such an understanding of science ignores the ongoing process of organized skepticism that is, in fact, the secret of its epistemic success.”

    And there is where the mistaken assumption is exposed. The organized skepticism is a social process that generates the best understanding we have, but that does not mean that understanding is just, or only, accurate in relation to its social context. The epistemic success of this social process of organized skepticism is a real utile result. It may not be the FINAL word, but it is considerably more than a passing paradigm defined exclusively by the social context in which it was discovered/invented.

    It is the same unstated post-normal belief in the primacy of social context in shaping scientific understanding that lies behind the idea that a greater diversity of political ideologies, or more equal gender representation without reference to ability, would somehow improve the meritocracy of scientific discovery.

  25. Willard
    “I now see what ticked Warren & Brigitte: the lines of the red flags list. Lew & Dorothy simply listed a few items that could help recognize if a request was genuine or vexatious. They readily admit that:

    None by itself is conclusive, but a preponderance of troubling signs can help to steer the responses of scientists and their institutions to criticism.”

    The red flags have little if anything to do with Scrutiny or Harassment. Or whether a request is genuine or vexatious.

    The red flags actually correspond to another distinction they make. See if you can find it.
    You have to read line by line to see where they add a third category.

    For grins, go through the red flags for a “suspect researcher”. Apply those flags to Hansen’s last effort.

    I threw 4 red flags.

    What follows?

  26. Tom Curtis says:

    Mosher:

    “For grins, go through the red flags for a “suspect researcher”. Apply those flags to Hansen’s last effort.

    I threw 4 red flags.”

    I, on the other hand, only count two.

    As an aside, you only “throw a red flag” where gridiron is a national sport. I’m sure in Lewandowsky’s usage, you raise a red flag.

  27. Michael 2 says:

    “but surely it’s pretty easy to establish if someone qualifies as an expert, or not.”

    Certainly easier than discerning whether that expert is telling the truth. The title of expert is bestowed by other experts, who in turn obtained it from other experts, back to Aristotle, I suppose, who coined the term and bestowed it upon himself.

  28. > What follows?

    Wrong inference.

  29. entropicman says:

    Michael 2
    “Expert” ?
    An ex-spurt is someone who used to be a drip under pressure.

  30. Willard says:

    Awaiting moderation at Warren’s:

    Warren,

    In response to AT, you say:

    You are correct, that no-one was suggesting it [that the “this” in question should be left to researchers alone] explicitly. However, neither did the piece suggest any role for the public. This was the omission we are correcting.

    I think the “this” in question is “how the research community should protect its members from harassment, while encouraging the openness that has become essential to science,” as you can [r]ead in bold in the lede.

    If I’m correct, then I’m not sure how what you call an “omission” matters exactly. In fact, I think it’s safe to say that the goalposts are shifting a bit. The issue Lew & Dorothy were discussing was the protection of researchers from harassment, not how to accommodate the public(s).

    Interestingly, a concern similar [to] yours is raised over and over again in debates regarding women’s issues: What About The Men? This decoy is so omnipresent there’s an acronym for it: WATM.

    While wondering about the public is interesting, I duly submit that What About The Public is not that interesting, Warren.

    http://blogs.nottingham.ac.uk/makingsciencepublic/2016/03/30/transparency-lewandowsky-bishop-socialscience/#comment-1216752

  31. Willard says:

    I will also note that this sentence:

    As Harvey Graff explains in his recent book on interdisciplinarity, the exchange of ideas between different areas of knowledge has been central to the emergence of many of today’s established disciplines.

    handwaves to a book that costs forty-five US bucks plus shipping:

    https://jhupbooks.press.jhu.edu/content/undisciplining-knowledge

    This specific “exchange of ideas” does not come cheap.

    Just think about the poor public(s)!

  32. izen says:

    @-“As Harvey Graff explains in his recent book on interdisciplinarity, the exchange of ideas between different areas of knowledge has been central to the emergence of many of today’s established disciplines.”

    Improvements in observation / measurement technology are more significant. As in the role of X-ray crystallography and isotopic labeling in Genetics.

    Including the public(s), however reified as a religious, political or gender social group, would be an exchange of ideas between an area of knowledge and an area with a lack of understanding of the science, but with some belief in the ongoing skepticism of the scientific process.

    But outside esoterica like exoplanets, transparency in science will always be an ideal that melts in the face of any commercial or military application or implication of the research.
    Medical research is one obvious example where commercially sensitive details are routinely omitted. But ‘National Security’ has been invoked before over one-way functions in mathematics.

  33. Willard says:

    > Improvements in observation / measurement technology are more significant. As in the role of X-ray crystallography and isotopic labeling in Genetics.

    Yes, but Galileo:

    In 1609, Galileo Galilei was a 45-year-old, largely unknown, north Italian professor of mathematics, a profession with a low social status, well on his way to total obscurity. He had produced his brilliant experimental demonstrations of the laws of falling bodies years earlier but had not published them. He was known among his circle of friends as a purveyor of good wines and a castigating, razor-sharp wit. Then Galileo stumbled upon the recently invented telescope and began the astronomical observations that would make him famous. Realising that he had lucked onto the scientific equivalent of winning the lottery, he rushed into print in early 1610.

    https://aeon.co/opinions/galileo-s-reputation-is-more-hyperbole-than-truth

  34. Dikran Marsupial says:

    “largely unknown” I’m not sure that is really true, being the son of an (at the time) famous father, who was also not exactly of “low social status”. Also, while Galileo wrote, but didn’t publish, a book on his findings, that doesn’t necessarily mean they were unknown or that he made no efforts to promulgate his ideas. I don’t know what the research culture was like at that time, but the Tartaglia-Cardano kerfuffle suggests that public debates were also a means of communicating academic ideas and that unpublished papers and letters were also circulated.

  35. izen says:

    Could we agree that Galileo’s fame rests largely on his exploitation of a new observational technology, not on a surfeit of interdisciplinarity or crowdsourcing his peer review?

  36. Willard says:

    Warren just borrowed a page from honest brokerage:

    [T]hat’s certainly a perspective. Thanks for sharing.

    http://blogs.nottingham.ac.uk/makingsciencepublic/2016/03/30/transparency-lewandowsky-bishop-socialscience/#comment-1216752

    I’m glad that Warren confirms that my perspective is a perspective. It’s important to have a perspective:

  37. angech says:

    Willard says: April 2, 2016 at 5:46 pm
    “they present a tension between openness (or “transparency,” a concept I dislike)”
    “and the researchers’ individual rights:”
    Leading to my comment
    ” So boundaries for experts only?
    That would be a little precious. ”
    …and Then There’s Physics said:
    “No, I don’t think I said any such thing”.
    but Willard implies he dislikes openness and that the researchers have rights that may transcend normal responsibility.
    This is the gist of the article by Lewandowsky.
    That is, by virtue of having done the work, the scientist can be placed in a privileged position where he may not have to follow the rules that scientists traditionally publish by.
    The issue is not one of nut jobs being able to find the physicists, or uninformed people having the right to material they do not understand, but of the scientist himself choosing who he wishes to be transparent to.
    While this is perfectly understandable and desirable in normal life, it is against the spirit of scientific research that we all aspire to and suggests an elitist attitude that sits very uncomfortably with the public at large and obviously these researchers.

  38. angech,

    That is, by virtue of having done the work, the scientist can be placed in a privileged position where he may not have to follow the rules that scientists traditionally publish by.

    No, that isn’t the argument. The argument is more that a scientist shouldn’t be expected to jump through hoops (metaphorical) just because someone who regards themselves as having some kind of rights to their data/code insists that they do so. Of course, scientists should be expected to provide all relevant information. However, they do have some right to be involved in deciding if they’ve done so. Just because someone else insists on more does not mean that they should provide more. Maybe they have provided all they can.

    While this is perfectly understandable and desirable in normal life, it is against the spirit of scientific research that we all aspire to and suggests an elitist attitude that sits very uncomfortably with the public at large and obviously these researchers.

    Again, you’re presenting a simplistic caricature of science and research. A researcher should provide all the information necessary to replicate or reproduce their results. This, however, does not mean that they must provide every single little thing that someone who wants to check their work wants. The other parties have to do some work themselves.

  39. Willard says:

    > Leading to my comment […]

    Your question (“So boundaries for experts only?”) was a comment?

    ***

    > Willard implies he dislikes openness […]

    Where? I said I dislike the word “transparency.” Openness is simply not transparency. If you can find the minutes for the IAC report, that’d be great.

    ***

    > [Willard implies] that the researchers have rights that may transcend normal responsibility.

    Again, where? Unless having to deal with harassment is included under the “normal responsibility” heading, I don’t think you have a case. You’re simply peddling “but normal responsibility” instead of dealing with Lew & Dorothy’s main point, i.e. that institutions may need to protect their researchers from harassment.

  40. angech says:

    Sorry I misread you Willard.
    Researchers do need protection from harassment, I agree.
    Defining harassment is the problem. It cannot be open to only one group [the scientists] to define harassment though.

  41. angech,

    It cannot be open to only one group [the scientists] to define harassment though.

    But who should define it? Formally, most researchers work for universities, and universities in the UK are not public sector. So, universities can and do define codes of conduct. Funding bodies can and do define what should be public if research is funded. Journals can and do insist on some things being public if a paper is to be published. This all makes sense to me; the employer has some role to play, the funders have a role to play and the journals also have a role to play. Who else could impose rules? And if someone has satisfied their employer, their funder, and the journal, can anyone insist on more?

  42. angech says:

    But who should define it?
    their employer, their funder, and the journal.
    Seems reasonable.
    The burden of complaint would then fall on these people, rather than the scientists, and mechanisms exist to tackle said situations if a refusal is felt to be unreasonable.

  43. The burden of complaint would then fall on these people, rather than the scientists, and mechanisms exist to tackle said situations if a refusal is felt to be unreasonable.

    Exactly. The problem is then how you deal with those who regard the decisions by the university, the journal, the funding body as being insufficient.
