Science communication: an illustration in irony?

There’s a new paper on public understanding of science called Communicating science in public controversies: Strategic considerations of the German climate scientists. Andrew Montford has already concluded that:

scientivists have so completely corrupted the field that it is now largely unreliable.

If you read only the abstract, you might think he has a point, as it says

Asking scientists about their readiness to publish one of two versions of a fictitious research finding shows that their concerns weigh heavier when a result implies that climate change will proceed slowly than when it implies that climate change will proceed fast.

So, scientists might be more reluctant to publish something if the results suggest that climate change will proceed slowly than if they suggest it will proceed fast?

Well, no, that isn’t what the paper illustrates at all. What the paper did was to consider a scenario in which a scientist has already published a paper suggesting that climate change will be slower, or faster, than expected. The authors then asked a group of German climate scientists to consider this scenario and to rate a set of concerns related to publicising this research in the media. The concerns were rated on a scale of 1 to 5, with 1 meaning the concern was relevant and 5 meaning it was irrelevant. The concerns included the work being misrepresented, it leading to unnecessary criticism from colleagues, and it putting the credibility of climate science at risk.

The results are shown in the table below. In all cases, the results suggest that most regard the concerns as being closer to irrelevant than relevant. I can’t find any mention of uncertainties, but the slower-than and faster-than results seem quite similar. It seems that most regarded concerns about putting the credibility of climate science at risk, and bringing too much uncertainty to the debate, as largely irrelevant.

Credit: Post (2016)

Credit: Post (2016)

There is, however, a suggestion that some regarded the work being misrepresented as a relevant concern, and that it would be more relevant if the work suggested climate change would be slower than expected than if it suggested it would be faster. In other words, they might be more concerned about the work being misrepresented by people like Andrew Montford, than by those who are concerned about the risks associated with climate change. It’s possibly somewhat ironic, then, that Andrew Montford appears to have misrepresented this paper. However, it might also be somewhat ironic that the authors of a paper discussing how science might be misrepresented in the media managed to write an abstract that is so easily interpreted in a way that misrepresents what the paper actually says.


122 Responses to Science communication: an illustration in irony?

  1. jsam says:

    Typo – disripute.

  2. Thanks, I meant to change that to “putting the credibility of climate science at risk”, which I’ve now done.

  3. Given the industrial production of nonsense by the mitigation-sceptical movement, it seems to make sense to worry more about being misrepresented when a result suggests that climate change proceeds more slowly.

    However, it is probably even more important in such a case to send out a press release so that everyone can read the truth before your work is misrepresented.

    I prefer to simply report the p-values, but maybe one should mention that p<0.1 is normally called statistically not significant. But I guess that will not stop a blog that made a statistically not significant trend change, supposedly the end of global warming, one of its main talking points.

  4. John Hartz says:

    Montford and his ilk have a history of twisting scientific statements into propaganda as they see fit — no surprise here.

  5. I prefer to simply report the p-values, but maybe one should mention that p<0.1 is normally called statistically not significant.

    Indeed, so the differences appear to be not significant in all cases. So, basically, there are some possibly relevant concerns, but there is no real difference between a case where the result suggests climate change will be slower and one where it suggests it will be faster.

  6. “I prefer to simply report the p-values, but maybe one should mention that p<0.1 is normally called statistically not significant.”

    I think we should just banish the phrase statistically significant and simply report p values.

  7. I think we should just banish the phrase statistically significant and simply report p values.

    We should certainly be willing to recognise that if we do use the term statistically significant, it is normally based on a convention.

  8. Lars Karlsson says:

    Wow, Montford doesn’t give any indication that the question concerns only publication in the media, not publication in scientific journals. His entire post reeks of misdirection.

  9. MartinM says:

    Not only is p<0.1 insignificant, the paper makes no mention of correcting for multiple hypothesis testing. With five tests, p<0.1 isn't even remotely close to significant.
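    To make this concrete, here is a minimal sketch (my own illustration, not from the paper) of how a Bonferroni correction would tighten the per-test threshold for five tests, and of the chance of a spurious “hit” if no correction is applied:

```python
# Illustration: why p < 0.1 is weak evidence when five hypotheses
# (e.g. the five concerns in the paper's table) are tested at once.

def bonferroni_threshold(alpha, n_tests):
    """Per-test threshold that keeps the family-wise error rate <= alpha."""
    return alpha / n_tests

def familywise_error(alpha, n_tests):
    """Probability of at least one false positive across n independent
    tests when each is read against the uncorrected threshold alpha."""
    return 1 - (1 - alpha) ** n_tests

print(bonferroni_threshold(0.1, 5))        # each test would need p < 0.02
print(round(familywise_error(0.1, 5), 2))  # 0.41
```

    So, read naively, five tests at p < 0.1 give roughly a 41% chance of at least one apparently “significant” result even if nothing is going on.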

  10. lerpo says:

    Multiple hypothesis testing at xkcd: https://xkcd.com/882/

  11. “We should certainly be willing to recognise that if we do use the term statistically significant, it is normally based on a convention.”

    Agreed.

  12. Lars Karlsson says:

    The Cato institute is also quite bad.

  13. Bishop Hill writes “[t]here is plenty of anecdotal evidence that climate scientists moderate their behaviour accordingly, withholding anything that might give “fodder” – in Mann’s words to the sceptics.” This accurately reflects the paper, as Mann has consistently argued that sceptics are sceptical because they are paid to be so.

    The paper is underpowered, but it does suggest self-censoring.

  14. Wow, even you can’t avoid mentioning Michael Mann

    but it does suggest self-censoring.

    Not in terms of what they publish in the literature, only in terms of what they might say in public and even then it was more about relevant/irrelevant concerns, than an explicit sense that they wouldn’t actually say something. Part of the problem is clearly those, like Andrew Montford, and organisations, like the Global Warming Policy Foundation, who regularly misinform about this topic.

    Martin,

    With five tests, p<0.1 isn't even remotely close to significant.

    Indeed, I’d wondered the same myself. This post by Dorothy Bishop seems relevant.

  15. @wotts
    So, you object to Bishop Hill using the verb “to publish” in a different sense than the paper?

  16. Richard,
    No, I think the suggestion that this paper allows one to conclude that

    scientivists have so completely corrupted the field that it is now largely unreliable.

    is stupid, especially coming from someone who has helped to make scientists’ concerns about their work being misrepresented, relevant.

    I realise that you defending a misinformer like Montford is no great surprise, but maybe we could avoid spending too much time on this. There are better things to do.

  17. Lars Karlsson says:

    Richard, please notice the word from the Montford quote that I have bolded here:
    “…withholding anything that might give “fodder” … to the sceptics”

    That “anything” is definitely not supported by the paper.

    Also notice that the greatest concern (which was still only moderately strong) was about being misrepresented, not that their reports would actually support the “skeptic” cause. And the concerns for misrepresentation of the “proceeds faster” story were almost as large.

  18. @wotts
    I don’t think that Bishop Hill based the conclusion that “scientivists have so completely corrupted the field that it is now largely unreliable” on this single paper. He has been arguing the same for many years.

    Anyway, you write that “they might be more concerned about the work being misrepresented by people like Andrew Montford, than by those who are concerned about the risks associated with climate change.” I read that as a bias, not by you, but by “the[m]”.

  19. Richard,
    He quotes the abstract and then immediately says

    It’s fair to conclude …

    Are you suggesting that the paper indicates a bias against “skeptics”? The alternative – which seems much more likely – is that they’ve seen scientific work mis-represented by “skeptics” and hence are possibly more concerned about this possibility. Of course, as has been pointed out, the difference in the relevance of the concern between the slower-than and faster-than cases is not statistically significant. Hence, at best, this paper seems to suggest that scientists think concerns about their work being misrepresented in the media are somewhat relevant.

    I’ll add something more. Bishop Hill is essentially a science denial site. You’re a Professor of Economics at a major UK university working on climate change. You’re defending someone who runs a science denial site. I find that utterly bizarre.

  20. Richard,
    In fact, not only are you defending someone who runs a science denial site, you’re doing so when it’s clear that they’ve mis-represented this paper. If you’re going to defend Montford, maybe do so when it’s actually justified.

  21. Lars Karlsson says:

    Disinformers like Montford have so completely corrupted the public discourse on climate change that many scientists are hesitant about reporting their discoveries in the media out of fear of being misrepresented.

  22. @wotts
    As I told you before, economics departments work hard to keep their political neutrality. As part of that, we don’t do guilt by association. Also, I don’t think that Bishop Hill misrepresents this particular paper.

  23. Richard,

    As I told you before, economics departments work hard to keep their political neutrality. As part of that, we don’t do guilt by association.

    Firstly, it’s hard to see how associating with a site that specialises in Hippie/Green Bashing qualifies as political neutrality. However, that wasn’t what I was getting at. I was suggesting that it is bizarre that someone who, I assume, is trying to be a serious academic would defend a site that promotes a great deal of nonsense. One can aim to be politically neutral without promoting science denial.

    Also, I don’t think that Bishop Hill misrepresents this particular paper.

    That you think it doesn’t is not a surprise. That it does seems self-evident.

  24. Dan says:

    Since Richard Tol is lurking and the thread includes p-values, Willard on CE last year asserted that Cook13 was not statistically significant. Mercifully he did not explain his reasoning because I don’t speak Willard.
    How could he possibly be correct? (Richard, that’s not an invitation for you to start spouting your Cook bullshit again)

  25. dikranmarsupial says:

    “I think we should just banish the phrase statistically significant and simply report p values.”

    I don’t think that would help much as the key problem with “statistically significant” is due to the p-value fallacy, i.e. treating the p-value as if it was the probability that the null hypothesis is true (not that a frequentist probability could be assigned to such a thing anyway). The real problem is that hypothesis tests are widely used by people who don’t really understand the framework and are just using “cookbook statistics”.

    BTW p < 0.1 is used in statistics, Fisher wrote that the threshold needs to be set according to the needs of the analysis (or words to that effect), the threshold basically takes the place of the priors in a Bayesian analysis (in a rather clunky and opaque manner).
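    The p-value fallacy can be made concrete with a small, hypothetical calculation (the 10% / 80% figures below are purely illustrative assumptions, not from any study): even with a “significant” result at p < 0.05, the probability that the null is true depends on the prior plausibility of the hypotheses being tested.

```python
# Sketch: why a p-value is not the probability that the null is true.
# Suppose (illustrative assumptions) only 10% of tested hypotheses
# describe real effects, tests have 80% power, and significance is
# declared at p < 0.05.

def false_positive_fraction(prior_real, power, alpha):
    """Fraction of 'significant' results where the null is actually true."""
    true_pos = prior_real * power          # real effect, detected
    false_pos = (1 - prior_real) * alpha   # no effect, p < alpha by chance
    return false_pos / (true_pos + false_pos)

print(round(false_positive_fraction(0.10, 0.80, 0.05), 2))  # 0.36
```

    Under these assumed numbers, about 36% of “significant” findings would be false positives – far from the 5% a naive reading of p = 0.05 suggests, which is the sense in which the threshold is implicitly doing the work of a prior.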

  26. dikranmarsupial says:

    “As I told you before, economics departments work hard to keep their political neutrality.”

    by having links with and/or writing for political “think tanks”?

  27. Dikran,
    Richard can clarify, but I think his argument is that Economics departments are politically heterogeneous, not that individual economists are – or aim to be – politically neutral.

  28. BTW p < 0.1 is used in statistics

    As a test of significance?

    If you have 5 tests, what are the chances that at least one of them will have p < 0.1? If I've done my calculation correctly (1 − 0.9^5 ≈ 0.41), it seems there is about a 40% chance of at least one having p < 0.1.
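    For what it’s worth, the arithmetic can be written out explicitly (a minimal sketch, assuming five independent tests with the null true in each case):

```python
# Probability that at least one of five independent tests gives p < 0.1
# purely by chance, when the null hypothesis is true in every case.
p_none = 0.9 ** 5            # no test crosses the 0.1 threshold
p_at_least_one = 1 - p_none  # at least one does

print(round(p_none, 2), round(p_at_least_one, 2))  # 0.59 0.41
```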

  29. Lars Karlsson says:

    Montford seems to think that the paper provides support for the ramblings in the first part of his post.

    Some of the more “politically aware” climate scientists have been keen that nobody should publish anything that might work against the green agenda. [bla bla bla]

    On the contrary – the paper actually identifies disinformers like Montford as a major obstacle for scientists interacting with the media. They are concerned that Montford and his ilk (on both sides of the issue, to be fair) are going to misrepresent them.

  30. dikranmarsupial says:

    ATTP fair enough, however in that case, I can’t see how it is a big deal as “academic freedom” means that having a heterogeneous department doesn’t mean that their overall output is politically neutral.

    Yes p < 0.1 is used, but it isn't at all common (e.g. Demsar 2006 – 3863 cites according to google scholar). The threshold is basically performing a similar function to the priors in a Bayesian analysis, and the use of such a threshold would be a pretty strong (implicit) statement about priors (and possibly the costs of false-positive and false-negative errors). There is nothing particularly special about 0.05, it is just a tradition.

  31. Lars Karlsson says:

    Over at Bishop Hill, Tol has made a comment about this post. It is quite clear where his sympathies are.

  32. Dikran,

    ATTP fair enough, however in that case, I can’t see how it is a big deal as “academic freedom” means that having a heterogeneous department doesn’t mean that their overall output is politically neutral.

    I agree. In fact, I have real reservations about the whole “Heterodox Academy” idea. If there really are biases that influence what is researched, or how research is interpreted, you don’t fix it by adding a different set of biases; you fix it by encouraging better research practices. I think the idea that one should somehow laud political heterogeneity as if it implies that a discipline is unbiased, is bizarre. It is similarly bizarre to suggest that it’s okay for an individual to show bias because the discipline is heterogeneous. As far as research is concerned, we should be aiming to be unbiased, even if that ideal is not actually achievable.

  33. dikranmarsupial says:

    ATTP, your last sentence in particular hits the nail on the head. The “heterogeneous academy” is basically just saying that the department is not merely an echo chamber, which is of course a good thing, but hardly a ringing endorsement!

  34. @wotts/dikran
    Indeed, economics is politically heterogeneous rather than politically neutral. I responded to Wotts’ implicit “what if your department head finds out you hang out with Montford and Lawson?” My dept head would answer: “I don’t care. Let me know when Tol meets Willie Nelson.” Other members of our department do not care about Nelson either.

  35. Richard,

    “what if your department head finds out you hang out with Montford and Lawson?” My dept head would answer: “I don’t care. Let me know when Tol meets Willie Nelson.” Other members of our department do not care about Nelson either.

    That wasn’t what I said or was implying. I don’t think they would care, or should care. However, that doesn’t change that someone who is a Professor of Economics at a major university (which I said as a reflection of your supposed credibility) seems comfortable associating positively with a site that essentially promotes science denial. You also appear supportive of his suggestion that the field is so corrupted that it is largely unreliable. The irony, of course, is that this is coming from a site (which you appear to associate positively with) that one would only describe as reliable if one was considering their tendency to bash Greens or promote anything that minimises anthropogenically-driven climate change. That your university/department/colleagues don’t care does not mean that your association does not influence your overall credibility.

  36. @wottsywotts
    I also seem comfortable associating with this site.

  37. dikranmarsupial says:

    Richard, I notice your reply does not address the point that a politically heterogeneous department does not imply that its output will be politically neutral. While individuals will hold different political beliefs, it is hard to see how membership/links with political think tanks is likely to encourage political neutrality of the individual researcher or of the department.

    BTW if those in your department do not care about links to political think tanks doesn’t that rather imply that the “political heterogeneity” has little active effect on encouraging political neutrality? I’m not saying that is a bad thing, I think academic freedom is rather important; I think it is better for individual researchers to take responsibility to avoid bias in their research (as far as it is possible).

  38. Richard,
    I didn’t say only “associate”, I said “associate positively”.

  39. dikranmarsupial says:

    “@wottsywotts” we seem to have devolved somewhat to the playground level.

  40. Dikran,
    Indeed, but I sometimes don’t even notice. I don’t expect much else 😉

  41. @dikran
    Wottsy (a term of endearment) wrote “[German climate scientists] might be more concerned about the work being misrepresented by people like Andrew Montford, than by those who are concerned about the risks associated with climate change.” That is a statement of bias: Misrepresentation in one direction is worse than misrepresentation in the opposite direction. This is a collective bias.

    Heterogeneity checks individual biases, if for every bias there is a counterbias. That is why it is important to maintain political diversity in a discipline and a department.

  42. “[German climate scientists] might be more concerned about the work being misrepresented by people like Andrew Montford, than by those who are concerned about the risks associated with climate change.” That is a statement of bias:

    Except, it could be based on their observation of people, like Andrew Montford, misrepresenting scientific results more than people who happen to be concerned about the risks associated with climate change. On the other hand, the differences are not even statistically significant, which seems to suggest that – overall – they are simply concerned about it being misrepresented, irrespective of who might do so. One could argue that your attempt to suggest that the result indicates a bias on their part, might indicate one on yours?

    Heterogeneity checks individual biases, if for every bias there is a counterbias.

    One doesn’t reduce bias in a research area by introducing a new bias. The way to reduce bias in research is to encourage best practice in how people carry out their research, not simply add a new set of biases.

    That is why it is important to maintain political diversity in a discipline and a department.

    Really? I have no objection to political diversity, but the idea that we should actively aim for it in a discipline and a department seems ridiculous. Could it be that you work in an environment in which researchers are obviously incapable of not letting their political views influence their research? Maybe that would suggest that your discipline needs to work on reducing individual biases, rather than encouraging other disciplines to add new biases.

  43. dikranmarsupial says:

    Richard “Wottsy (a term of endearment)” I don’t think anybody is fooled by that.

    “Heterogeneity checks individual biases, if for every bias there is a counterbias.” No, that is just diluting the bias, without checking it. Ideally, what heterogeneity ought to achieve is pre-publication scrutiny that prevents bias being expressed. Your colleagues ought to be pointing out the weak points* or political bias in your research to help you produce better work that is not affected by political bias. Just diluting the literature with publications of the opposite nature still means that individual studies cannot be taken at face value, which I don’t think is a terribly satisfactory situation.

    *e.g. a statistical analysis where the conclusion is heavily dependent on a single datapoint, which itself appears to be arguably an outlier.

  44. @dikran
    Most of these battles are pre- rather than post-publication.

  45. Marco says:

    …or deliberately not including estimates because the authors were partly funded by supposedly biased agencies.

    (Yup, Tol did that, too)

  46. dikranmarsupial says:

    Richard, if nobody in your department cares about its members having links that are likely to introduce political bias, then it is hard to see how the policy of heterogeneous departments is likely to encourage such pre-publication battles. Your assertion does nothing to support the contention that “heterogeneous departments” do any more than passively dilute political bias.

    Did any of your colleagues point out that the statistical model in one of your recent papers (also discussed here) was highly sensitive to an outlier datapoint (representing your own study), and hence didn’t really support the conclusion?

  47. Lars Karlsson says:

    Judith Curry has commented on the paper. Nobody should be surprised that she ignores the fact that the differences in levels of concern for the “faster” and “more slowly” stories were not very large. Nor should anybody be surprised that she portrays herself as some kind of martyr.

  48. Joshua says:

    Richard Tol.

    IMO, usually your participation at this site doesn’t advance the convos. IMO, you mostly make “drive-by” points that don’t actually take on and engage with counter-arguments.

    It is somewhat useful to read your arguments, sometimes, because you present the extreme form of positions on issues… so you help to “set the edge”, so to speak… but ideally I would find it more useful to read people actually engaging in exchange on issues from different perspectives.

    So Anders wrote the following:

    One doesn’t reduce bias in a research area by introducing a new bias. The way to reduce bias in research is to encourage best practice in how people carry out their research, not simply add a new set of biases.

    […]

    Really? I have no objection to political diversity, but the idea that we should actively aim for it in a discipline and a department seems ridiculous. Could it be that you work in an environment in which researchers are obviously incapable of not letting their political views influence their research? Maybe that would suggest that your discipline needs to work on reducing individual biases, rather than encouraging other disciplines to add new biases.

    And Dikran wrote the following:

    Richard, I notice your reply does not address the point that a politically heterogeneous department does not imply that its output will be politically neutral. While individuals will hold different political beliefs, it is hard to see how membership/links with political think tanks is likely to encourage political neutrality of the individual researcher or of the department.

    So those seem to me to be fundamental arguments that go to the root of the logic framework of the implications of the paper referenced in the original post, and to the flaws in how “skeptics” are arguing about those implications.

    I would be interested in reading thoughtful counterarguments.

    Do you have any?

  49. Joshua says:

    Anders –

    ==> “I think the idea that one should somehow laud political heterogeneity as if it implies that a discipline is unbiased, is bizarre.”

    I guess, (obviously?), the thinking is that biases in opposing directions cancel each other out. That, for example, the inclusion of a “conservative” bias within an overall field can, at least to some extent, serve as a check against a pervasive and dominating “liberal” bias. At some level I get the logic, but your point is a good one, IMO. For example, does biased research from “conservative” “skeptics” really serve as a valid check against ideologically-rooted bias from “liberal” “realists”? It would seem maybe not. Research that is biased by (conservative) political orientation can’t, by definition, be assumed to serve as a check against anything – because its biased findings aren’t inherently valid.

    I do think, however, that from a collaborative perspective, there is certainly the potential for diversity within a field to decrease the overall level of bias – as in my life experiences the best way to reduce bias is to expose people to greater diversity and more varied life experiences.

  50. Willard says:

    > Willard on CE last year asserted that Cook13 was not statistically significant. How could he possibly be correct ?

    RTFP.

  51. there is certainly the potential for diversity within a field to decrease the overall level of bias – as in my life experiences the best way to reduce bias is to expose people to greater diversity and more varied life experiences.

    I agree that exposing people to more diversity and more varied life experiences can help to reduce biases against other groups. I do think, however, that in this context we’re talking about the possibility that some kind of bias influences how research is done, or what research is done, or how it’s interpreted. Given that there are typically best practices when it comes to doing research, this would imply that some are not following best practice, which – in my view – should be corrected by improving how people conduct their research, not simply by introducing other types of biases. This, however, is not an argument against diversity, simply a suggestion that diversity alone is not some kind of solution.

    There are also other issues in this context. We allow, and encourage, free speech. We can’t suddenly insist that academics no longer express political views because a few overly sensitive right-leaning people might feel as though they’re not welcome. We can’t have some kind of affirmative action for those on the right side of the political spectrum. Are we really suggesting that there is some kind of discrimination between people with certain political views? At what point does it happen? I don’t think I can identify someone’s political leanings from an application, or from what they might say in a conference talk. In fact, I currently find it quite hard to take the whole “heterodox academy” idea all that seriously.

  52. @dikran
    Some disciplines and some departments are politicised and drive out particular people. The most recent flare-up is in social psychology. In economics, we have always tolerated political diversity, and because we don’t select on this, most departments house people from across the political spectrum. That in turn implies that conclusions that are more inspired by ideology than evidence rapidly get slapped down.

  53. The most recent flare-up is in social psychology.

    Really?

    That in turn implies that conclusions that are more inspired by ideology than evidence rapidly get slapped down.

    This doesn’t follow.

  54. dikranmarsupial says:

    Richard, you still have not addressed my point, plus ça change…

    You still have not given any evidence/argument that “heterogeneous academies” do any more than passively dilute political bias with an opposing bias. As I pointed out, that isn’t very reassuring, as it means that we can’t really trust individual studies at face value.

  55. Willard says:

    > In economics, we have always tolerated political diversity […]

    Citation needed for both the tolerance and the political diversity.

  56. Citation needed for both the tolerance and the political diversity.

    (R. Tol, private communication)

  57. Andrew dodds says:

    Of course, if there is a heterogeneous set of conclusions in economics, then the problem becomes one of politics. With no definitive conclusion, politicians can pick and choose which economists support the policies they have already decided on.

  58. Andrew,
    In most natural/physical sciences (maybe all) there is a general expectation that there is an answer to a research question, you just need to collect sufficient evidence to find it. It doesn’t, of course, mean that an answer will always be found, but that would typically be because there is insufficient evidence to distinguish between different viable possibilities. In such scenarios, there may be people who hold very different views, but they will mostly be views that are at least consistent with the evidence. Not always, of course, but in general new evidence acts to rule out some of the previously held conclusions that have now become inconsistent with the evidence. I don’t know enough about economics to know if this is the same there, or if there is some argument as to why this should not be expected in economics.

  59. verytallguy says:

    Tol’s first and second laws of blogs

    First Law: “However poor you expect Tol’s behaviour to be, he will promptly fail to meet even that level”

    Second Law: “In any blog comment thread where Tol contributes the subject will tend to being about Tol”

    https://andthentheresphysics.wordpress.com/2015/03/29/the-big-questions/#comment-51990

    And we now also demonstrate Tol’s paradox: Tol’s tedious predictability is such as to allow definition of the laws of Tol, and he is thereby in contravention of his own first law.

  60. @willard
    Just look at the list of Nobelists.

    @dikran
    You can take my word for it. Or do your own research. Or ignore me.

    @wotts
    See Duarte et al. (2015), Behavioral and Brian Sciences

  61. See Duarte et al. (2015), Behavioral and Brian Sciences

    Brian science?

    Which, given Duarte’s typical behaviour, actually seems quite appropriate.

  62. verytallguy says:

    Thanks Richard for so promptly demonstrating “Tol’s tragedy”.

    I don’t think that one needs to be spelled out.

  63. Phil says:

    Isn’t there a rule that every thread that Tolly-Wolly(*) comments on ends up being about him ?

    Anyway, the OP raised a different point for me. It seems to me quite likely that climate scientists perceive a bias in the policy response to messages that climate change is less or more serious than we think. In other words, because enacting carbon reduction policies is hard, policy makers are more likely to act on studies that suggest global warming is less serious (by loosening carbon reduction policies), whereas studies that show global warming is worse than we previously thought are likely to receive the response “we’re doing as much as we can already” from policy makers.

    Thus the experiment may simply be reflecting that the scientists perceived a bias in policymakers and were concerned about it, rather than being biased themselves.

    (*) A term of endearment

  64. Layperson alert here. The truth matters. Politics is irrelevant to real science.

    Both p-values and statistical significance are ways of indicating likelihood. We are way past the point when the public needs to understand that they are being misled by the complexities of phony concerns about scientific uncertainty that are exploited by advocates claiming neutrality who are anything but neutral. The very honesty of the scientific method is being used to make it appear dishonest.

    Richard Tol seems bent on claiming neutrality in an argument where he is anything but neutral. He wishes to put his thumb on the scale of evidence.

Real people need real information, and there is no longer any excuse for discouraging them from collecting all the world’s evidence in plain sight by some people in politically defended, hermetically sealed rooms, supported by people like the US Congress, your Cameron/Osborne/Rudd buddy system with big fossil, and lately, sadly, the Australian defunding of CSIRO.

That Judith Curry is willing to cater to Ted Cruz, Mark Steyn (defending his calumny that Mann is like child molester Sandusky) and a Republican Congress is evidence that her paranoid victim-bullying has reached an acute stage. Her hero Montford has feet of clay. Her villain Mann and his colleagues (Gavin Schmidt) have treated her with considerable respect and maintained their integrity, both scientific and otherwise, and she continues to spit on them instead of considering that she might have gone astray. Montford is no hero.

    If I sound angry, it’s because I am. We are in a fix, and these guys are channeling Nero.

  65. re politics and science:

    Political neutrality is no excuse for scientific dishonesty.

  66. MartinM says:

    I think it’s important to stress just how thoroughly wrong the contrarian interpretation of this paper actually is. It’s not just the fact that the between-group differences in Table 2 aren’t even close to significance; look at the survey question itself:

    Fictitious case in two versions: “Suppose a geologist conducted measurements to explore how the soil in the Northern hemisphere influences the climate. His measurement data show that the soil’s capacity to store CO2 has been overestimated [underestimated]. The geologist concludes that climate change could proceed faster [more slowly] than expected.”
    Question: “Suppose the geologist’s finding was published in a scientific journal. Now he wants to publish it in a newspaper, concluding that climate change will proceed faster [more slowly] than expected. One can have several objections to his decision. How relevant or irrelevant do you consider each of the following?”

    The scientists weren’t asked whether the newspaper article should be published, or if they had any objections to publishing it. They were asked to consider five hypothetical objections somebody might have, and to score them for relevance. None of the five scored closer to ‘relevant’ than ‘irrelevant’, in either group. If there’s a take-home message in this study, it’s that the average German climate scientist doesn’t really give a toss about the media’s reaction to scientific research, regardless of whether that research supports or contradicts the mainstream consensus.

  67. “One doesn’t reduce bias in a research area by introducing a new bias. The way to reduce bias in research is to encourage best practice in how people carry out their research, not simply add a new set of biases.”

    no. enforce best practice, not merely encourage.

  68. Marco says:

    “In economics, we have always tolerated political diversity”

    I’ll point out once again that Tol declined to use several estimates because the authors’ funding sources were deemed (by Tol) to bias the results.

This is relevant (in an ironic way) in connection with Tol referencing Duarte et al (2015). I’m sure Tol knows how; the others can read the abstract of Duarte et al to get an idea.

  69. no. enforce best practice, not merely encourage.

    Except I don’t know how you police such things.

  70. Marco says:

    “no. enforce best practice, not merely encourage.”

    The view of what constitutes “best practice” tends to change over time, and science also does not work in a vacuum. An example: I have signed various NDA’s (*), which sometimes means I get information/data that I am not allowed to share. It is possible that someone might consider that information/data relevant for reproducing my results, and wants me to give it to them. “Best practices” in science says I should. “Best practices” from a judicial point-of-view says I’d find myself in court and in major trouble if I did. The only option not to get into that situation is to not do the type of research that requires me to sign NDA’s. That means a lot of interesting and relevant science will not be done.

    (*) non-disclosure agreement.

  71. This is relevant (in an ironic way) in connection with Tol referencing Duarte et al (2015). I’m sure Tol knows how, the others can read the abstract of Duarte et al to get an idea.

Yes, there is something rather ironic about this. The whole “Heterodox Academy” idea reminds me of a seminar I went to about unconscious bias, at which the speaker mentioned that one of the major problems is those who think they aren’t biased.

  72. It’s fair to conclude …

    Montford’s usage of ‘it’s fair to’ is like a barium marker for a non sequitur.

  73. Richard wrote “@dikran You can take my word for it. Or do your own research. Or ignore me.”

    It seems we have established that the “heterodox academy” indeed does nothing more than passively dilute political bias with the opposite bias, which means that we cannot take individual papers (or by extension individual researchers) at face value. Not a satisfactory situation.

    Now I don’t particularly mind Prof. Tol being unwilling to answer questions about a blog discussion topic, it just demonstrates to everyone the paucity of his position on that particular topic. Not being willing to answer technical questions about his papers, that is another matter entirely.

    I can understand why Richard would want someone who asks him awkward questions to ignore him! ;o)

  74. dikranmarsupial says:

    “You can take my word for it. Or do your own research. Or ignore me.”

    perhaps that is how the “hetrogenous academy” operates in practice? ;o)

  75. Willard says:

    > Just look at the list of Nobelists.

    Thanks, Rich. Found this:

    http://econjwatch.org/articles/ideological-profiles-of-the-economics-laureates

    Can’t find stats about econ depts, however. Do you think it’s Gremlins?

    As far as anecdata is concerned, you know that Emanuel and Hansen are conservatives, right?

  76. Willard says:

    Clicking through, I found this:

    http://econjwatch.org/articles/economics-professors-voting-policy-views-favorite-economists-frequent-lack-of-consensus?ref=articles

    A quote:

    Dividing the Democratic count (167.67) by the Republican count (61.83), we find a D:R ratio of 2.71, which is in line with previous findings for economists. The Libertarian count is 17, making for 5.69 percent of the sample, which is unusually high. We break out the Libertarian voters as a separate group, and, to follow through, do likewise for the Greens, even though their count at 5.17 means that the Green results are especially uncertain.

    This survey reaches a conclusion that is too obvious for not paying due diligence to its methodology. That’s just the job for you, Rich.

    There’s a further division along the gender axis, and there’s an intriguing remark:

    Even at 16.4 percent, the contingent of free-market economists is smaller than many people seem to think. Explanations for why people have faulty impressions about economists are offered by Klein and Stern (2007, 324-329).

    My own hypothesis is Gremlins.

  77. Willard says:

    Here’s Klein & Stern. Among all the explanations, there’s this gem:

    Free-market positions are superior. Here we must openly acknowledge our own conviction that by and large free-market positions are superior and hence, when well argued, have a persuasiveness by virtue of the rightness of the arguments

    http://econfaculty.gmu.edu/klein/PdfPapers/Klein-Stern%20AJES%202007.pdf

    You heard it from GMU, folks: “only a small percentage of AEA members ought to be called supporters of free-market principles” are right.

  78. Steven Mosher says:

    ATTP
    “except I don’t know how you police such things.”

    Think a bit.
    You want to encourage a practice you can’t enforce.

    Now you can understand why some folks want mandatory heterogeneity.

    That would be because the practice of merely encouraging unbiased behavior doesn’t work.

    Old saying. You can’t improve what you don’t or can’t measure.

  79. dikranmarsupial says:

I don’t think heterogeneity is a solution to the problem of bias, especially in a politically relevant subject, because it is a recipe for cherry picking the biased economic sources that suit your political position. The heterogeneity (without other measures) merely ensures that the cherry picking is possible. If it is really an issue in economics, the solution that I would suggest would be to recognize the value of comments papers and reward them. At least then the (effects of the) bias are explicitly pointed out and discussed, and there is a direct incentive to participate. There would also then be an incentive to avoid political bias in one’s own output, as comments papers would be regarded as a black mark on one’s reputation.

  80. The Very Reverend Jebediah Hypotenuse says:

    Tol:

    In economics, we have always tolerated political diversity, and because we don’t select on this, most departments house people from across the political spectrum. That in turn implies that conclusions that are more inspired by ideology than evidence rapidly get tolerated.

    FTFY.

    Nobels and economics…
    Economics is the only field in which two people can share a Nobel Prize for saying opposing things.
(Myrdal and Hayek)

    Anderson:

    We are in a fix, and these guys are channeling Nero.

    Alfred E. Nero. “Quid, me anxius sum?”

  81. verytallguy says:

    Steve,

    You can’t improve what you don’t or can’t measure.

    you may be channelling your inner Lord Kelvin:

    When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind

    Einstein had a good counterpoint

    Not everything that can be counted counts, and not everything that counts can be counted

    Disclaimer: Attribution of quotations can go down as well as up.

  82. Steve,

    Think a bit.
    You want to encourage a practice you can’t enforce.

    That’s because I trust the overall method, rather than the individuals involved. As Marco points out, best practice can evolve. Trying to specify it is probably entirely unworkable.

    Now you can understand why some folks want mandatory heterogeneity.

    Yes, but that doesn’t make it a solution.

    That would be because the practice of merely encouraging unbiased behavior doesn’t work.

    and, as I’ve said before, I don’t think adding new biases does either. Trust the method, rather than the individuals.

  83. Marco says:

    “Now you can understand why some folks want mandatory heterogeneity.”

And how are we to decide what such “heterogeneity” should look like? Considering that about half the US population believes humans were created by a god, should 50% of evolutionary biologists believe similarly?
    Must the NAS throw out a large percentage of its members, because way too many of them are atheists/agnostics?

    Mandatory heterogeneity just creates new problems.

  84. @dikran
    I’ve answered your question in a variety of ways. You do not recognize my replies as an answer. I don’t know how to express myself such that you understand.

    @willard
    Indeed. In most economics departments, you would find a variety of opinions that is similar to the variety of opinions of Nobelists. There is not much hard data on that, but I guess we learned our lesson from the Methodenstreit and the Cambridge Controversy.

  85. You do not recognize my replies as an answer.

    I don’t think Dikran is alone in that regard.

    I don’t know how to express myself such that you understand.

    This may well be true, but not for the reason I suspect you’re implying.

  86. Tadaaa says:

    I will have to check my sources, but I am sure I read somewhere that Astrologists only exist to enhance the reputation of Economists

  87. Willard says:

    > In most economics departments, you would find a variety of opinions that is similar to the variety of opinions of Nobelists.

    The same may not apply to the Internet, the MSM, or the GWPF’s newsies, Rich.

    Unless “similar variety of opinions” means they just differ.

I had some sympathy for taking the diversity of political views into account when hiring scientists in fields studying humans. However, the term “mandatory heterogeneity” sounds really terrible. It sounds like an official policy to hire scientists who do not even try to fight their biases as much as possible. It sounds like a really bad excuse to justify hiring mistakes.

    I do agree with the slogan that people who are not aware of their biases are most in danger of producing biased work.

Dikran, thanks for the laugh. Those are in short supply as things get worse. I don’t know anything about economics, but if politics is more important than reality, then the Nobel award is a popularity contest. And that’s not how I like science.

btw, if the reference went missing, here’s a weird outtake that doesn’t quite fit in either category under discussion:

    This is actually not the “hockey stick” but the economy, but I rather liked it as it includes Willard’s icon on the side:
    http://evilspeculator.com/what-me-worry/

    So in a crosseyed kind of way, I can indulge my mildly twisted sense of humor, which when we are comparing economist Nobels in the face of out and out misrepresentation, is all I can say.

  90. lerpo says:

    Richard Tol says: “Heterogeneity checks individual biases, if for every bias there is a counterbias.”

    Diversity of opinions may be a bug rather than a feature:

    “if we want economics to be a science, we have to recognize that it is not ok for macroeconomists to hole up in separate camps, one that supports its version of the geocentric model of the solar system and another that supports the heliocentric model. As scientists, we have to hold ourselves to a standard that requires us to reach a consensus about which model is right, and then to move on to other questions.” – http://paulromer.net/mathiness/

  91. Willard says:

    Please note that Romer’s screed did not end well for him:

    Judging by his reply to my post here, it seems that Romer has a rather low opinion of me. Evidently, I am an “Euler-theorem denier.” Admittedly, this is not the worst thing I’ve ever been called. But in addition to this, I am apparently motivated to deny the truth of this mathematical proposition because doing so signals my commitment to an academic club of serpents that includes the likes of Nobel prize-winning economists Bob Lucas and Ed Prescott. Paul, you flatter me.

    I want to take some time here to address the specific charge leveled against me by Romer:

    Andolfatto’s brazen mathiness involves a verbal statement about a mathematical model that flies in the face of an impossibility theorem. No model can do what he claims his does. No model can have a competitive equilibrium with price-taking behavior and partially excludable nonrival goods.

    Romer’s proposition is stated clearly enough. Now all we have to do is check whether it’s valid or not. If I can produce a counterexample, then I will have shown Romer’s proposition to be invalid. Let me now produce the counterexample.

    http://andolfatto.blogspot.com/2015/06/competitive-innovation.html

    A bit later, instead of issuing a correction, Romer pushed it to 11 by appealing to Feynman.

    No wonder Judy loved Romer’s rants.

  92. “That’s because I trust the overall method, rather than the individuals involved. As Marco points out, best practice can evolve. Trying to specify it is probably entirely unworkable.”

    Weird argument.

    Suppose we agree that sharing your data is best practice.

    You merely encourage it. I enforce it. No data, no tenure puppy.

    Then the best practice evolves

    We like to see code.

    you merely encourage it. I enforce it. No code, no cookie.

    problems?

    Then the best practice evolves some more..

    we agree a red team review or hostile review is required.

    you encourage it. I enforce it.

    Next

  93. Steven,
    Okay, but that’s not how I would interpret the word enforce. Of course we can incentivise good practice by having potential penalties. It could influence hiring, funding, promotions. I agree. However, I still think Marco’s point is valid. Things do change and there are limits. There is a difference between requiring open data, or open access, and insisting that the only way to analyse a particular dataset is using a particular statistical technique.

    So, I still maintain that when it comes to how best to develop understanding, it’s through the scientific method, not through imposing, and enforcing, rules. The latter can help, if those who don’t follow best practice are penalised, but we should still see this as a way to help the scientific method, not some kind of alternative to the scientific method.

  94. dikranmarsupial says:

    Richard wrote “I’ve answered your question in a variety of ways. You do not recognize my replies as an answer. I don’t know how to express myself such that you understand”

    Utter rubbish. You have asserted that the “heterodox academy” has the desired effect, but nowhere have you stated the mechanism involved or provided any evidence. The fact that you have responded with the above bluff is just making it even more apparent that you have no substantive answer.

  95. @dikran
    So you did recognize my reply as an answer, you just did not accept the answer. This is a blog, not an academic journal. You are someone I converse with, not a referee. If you don’t like the little evidence I put forward, you can look for more yourself. Or just ignore what I write.

    I did not, by the way, say anything about the Heterodox Academy. I am not a fan.

  96. You are someone I converse with

    Even that’s a bit of a stretch.

    I did not, by the way, say anything about the Heterodox Academy. I am not a fan.

Except you’ve quoted Duarte and said “That is why it is important to maintain political diversity in a discipline and a department.” So, it’s not unreasonable that people might regard your views as similar to those of the Heterodox Academy.

  97. dikranmarsupial says:

    Richard “So you did recognize my reply as an answer, you just did not accept the answer.”

    no, stating that the hetrogenous academy addresses political bias is in no way an answer to a request for “any evidence/argument that “hetrogenous academies” do any more than passively dilute political bias with an opposing bias.”

    “You are someone I converse with, not a referee.”

    I am someone willing to listen to what you have to say and ask questions where I am not convinced by your argument, so that you know what to do to make your argument convincing. You don’t have to make your arguments convincing if you don’t want to, but in that case don’t then pretend you have answered the question when you obviously have not.

    On the other hand, if someone asks you a technical question about one of your papers, then as an academic I’d say you do have a responsibility to give a direct answer, rather than be rude, and then evasive as you did in our previous discussion (especially if you then comment on a thread about transparency!).

  98. dikranmarsupial says:

    Richard Tol wrote “Heterogeneity checks individual biases, if for every bias there is a counterbias. That is why it is important to maintain political diversity in a discipline and a department.”

    Richard Tol wrote “I did not, by the way, say anything about the Heterodox Academy. I am not a fan.”

    At best this is pedantry (Richard may not have used the exact phrase “Heterodox Academy”, but he has very clearly stated the importance of political heterogeneity within a department, in his own words).

  99. Marco says:

    “Suppose we agree that sharing your data is best practice.

    You merely encourage it. I enforce it. No data, no tenure puppy.”

    A question, Steven, based on a real life example.

    I have a manuscript almost ready for submission. It is a collaboration with industry. We did lots of interesting stuff, and I can likely give you the data points for all the graphs. But I cannot disclose the exact nature of all of the compounds we used, only some generic information. That means that *no one* can reproduce our data unless the company is willing to share those compounds with those others. People can, however, check the calculations based on the data points. Would that be enough?

  100. BBD says:

    Steven is still banging an empty bucket with a hockey stick. Or perhaps fighting a proxy war by proxy. Whatever you like – it’s not terribly subtle.

  101. Steven Mosher says:

    “I have a manuscript almost ready for submission. It is a collaboration with industry. We did lots of interesting stuff, and I can likely give you the data points for all the graphs. But I cannot disclose the exact nature of all of the compounds we used, only some generic information. That means that *no one* can reproduce our data unless the company is willing to share those compounds with those others. People can, however, check the calculations based on the data points. Would that be enough?”

    Enough for what?
    If you are going to do hypotheticals or thinly veiled hypotheticals, you have to up your game.

    My point is simple. Attp wants to encourage good behavior. I suggested enforcement.

    If you have a point related to that, I’d be happy to get back to the point.

  102. Steven Mosher says:

    There is a difference between requiring open data, or open access, and insisting that the only way to analyse a particular dataset is using a particular statistical technique.

    #################

    I don’t think we disagree.

  103. I don’t think we disagree.

    Then we probably agree in general. I think that if there is some reason to regard someone as not doing research in a manner that is consistent with a reasonable definition of best practice, that this should influence the decisions made when hiring, funding, or promoting someone. That seems entirely reasonable. Journals can also insist on it when publishing. I suspect we’re just disagreeing about what we mean by the words we used.

  104. Steven Mosher says:

    Marco let’s make it easier.

    Suppose we agree that best practice is that you don’t plagiarize. Do you seriously want to object to enforcing that because it might change as a best practice?

  105. Willard says:

    > Suppose we agree that best practice is that you don’t plagiarize.

    Then suppose we agree that best practice is that you don’t kill.

    Then suppose we agree that best practice is that you don’t sexually harass.

  106. Marco says:

    Steven, is it enough that you can get my raw data, but not the compounds and their exact structure with which I generated that data (and no, you can’t buy it anywhere, you’d have to ask the company)?

    If it isn’t enough in your world of “have to share data, will be enforced, your tenure is at stake”, my research field is in trouble.

Steven, I was going to make a similar point to Marco, which is that rigid enforcement of data availability limits the ability to pursue joint research with industry. The obvious example in climatology is that some national met offices are required to exploit their data commercially (e.g. station data), and so that data wasn’t always in the public domain. It seems daft to me that academics can’t use that data at all. That is why encouragement/research culture is better than enforcement, as some compromise is likely to be needed.

  108. Marco says:

    “Suppose we agree that best practice is that you don’t plagiarize. Do you seriously want to object to enforcing that because it might change as a best practice?”

Some cases are easier than others, and not just when it comes to plagiarism (should Said & Wegman have had their two (IIRC) review papers with ample copying from Wikipedia retracted?).

    There are plenty of best practices that are not universally agreed upon, and some even directly contradict. It is best practice to keep to an NDA, and not just because of the legal aspects, but that best practice may well clash with the best practice of freely sharing data.

  109. Willard says:

    While Moshpit shows how to go a bridge too far with the concept of best practice, I think we need to concede that if a journal requires data and code, one does not simply pretend one’s dog named Mordor ate it. Such policy may seem to protect the journal, but I’d contend it protects readers from bad reviewers.

    Here would be such a case:

    Now this discussion of McKitrick and Michaels stirred a memory in Eli’s rememberer, a comment that Steve Mosher had made when a follow on paper to MM04 and MM07 was being featured by Judith Curry.

    I downloaded his data. In his data package he has a spreadsheet named MMJGR07.csv.
    This contains his input data of things like population, GDP etc.

    […]

So, at latitude -42.5, longitude -7.5 he has a 1979 population of 56 million people and 240940 sq km and a population density in the middle of the ocean that is higher than 50% of the places on land. Weird.

    A few others looked at the spread sheet and saw that well in the words of another McKitrick was spreading the population and GDP of France across a couple of small islands in the Pacific.

    http://rabett.blogspot.com/2016/02/nigel-persaud-dons-his-eyeshade-and.html

    McKitrick & Tole 2012 was published in Climate Dynamics:

    http://link.springer.com/article/10.1007%2Fs00382-012-1418-9

    This is a Springer journal, which merged with Kluwer a while ago:

    The new company has, among others, offices in Berlin, Heidelberg, Dordrecht, Vienna, London, New York, Boston and Tokyo.

    As a result of the merger, Springer is now the world’s second-largest supplier of scientific literature. Its range of products includes 1,250 journals and some 3,500 new book titles a year. The ‘old’ Springer-Verlag traditionally focused on clinical medicine, biomedicine, life sciences, economics, statistics, physics, engineering sciences, mathematics, and computer science. By merging with KAP, publications in the humanities and the social sciences have enriched Springer’s programme

    http://www.researchinformation.info/features/feature.php?feature_id=115

    In a way, the scientific publishing business is the first techno-communist industry: the smallest inventory possible, most of the work outsourced, a big selling platform to a captive market.

  110. @dikran, @wotts
    Heterodoxy and heterogeneity are different words with different meanings. I disagree with Duarte that heterodoxy is a solution to homogeneity.

  111. Heterodoxy and heterogeneity are different words with different meanings.

    Oh FFS!

  112. Heterodox academy

    We are social scientists and other scholars who want to improve our academic disciplines. We have all written about a particular problem: the loss or lack of “viewpoint diversity.”

    heterogeneity

    Heterogeneity is a word that signifies diversity.

  113. Willard says:

    > Heterodoxy and heterogeneity are different words with different meanings.

    FWIW, when Dikran said:

    The “hetrogenous academy” is basically just saying that the department is not merely an echo chamber, which is of course a good thing, but hardly a ringing endorsement!

    Rich did not seem to have taken a cue about Dikran’s referent:

Indeed, economics is politically heterogeneous rather than politically neutral.

    While Rich might be right in saying that he never talked about the Academy, he may need to acknowledge that his follow-ups have not been responsive to Dikran’s points.

    When life gives you semantics, you can feel rich.

  114. Richard posts yet again without actually addressing the substantive point. It is almost as if he was deliberately avoiding answering the substantive point, but for some reason feels the need to keep the discussion going. What odd behaviour.

    To be fair to Richard I did occasionally use heterodox where I meant heterogenous, however the substantive point ought to have been clear from the other words that I put around them. BTW if there is homogeneity then there is an orthodox (“of the ordinary or usual type; normal”) position and hence to introduce heterogeneity would require some heterodoxy AFAICS. I agree there is a nuanced difference between the two terms, but it is not central to the substantive issue.

I don’t see that plagiarism and code/data availability are commensurate issues. Plagiarism is essentially theft; whether code and data are made available is not an academic crime, and the extent to which it happens essentially depends on the needs of the particular research field. There is no moral dimension AFAICS.

  116. I don’t see that plagiarism and code/data availability are commensurate issues.

I agree. There are things that most would agree qualify as a form of misconduct (plagiarism, fraud) and other things that might be regarded as poor practice (code availability, for example). It’s the grey areas that make it complicated.

  117. Chris says:

    Steve Mosher writes:

    “You want to encourage a practice you can’t enforce.
    Now you can understand why some folks want mandatory heterogeneity”

    and:

    “That would be because the practice of merely encouraging unbiased behavior doesn’t work.”

    The first assertion has a couple of logical flaws (e.g. sentences one and two constitute a crashing non sequitur).

    But there is a more general problem with Steve’s theme that we could discuss (but probably won’t!) which relates to encouraging good practice in science (especially in relation to “bias” – Steve has also referred to plagiarism later in the thread).

In all the academic institutes I have studied/worked in (4 UK and 1 US Universities/Research Institutes) good scientific practice has been encouraged. We require students/postdocs to ensure sufficient replicates (for statistical significance!), ask them to redo problematic experiments, critique their experiments/interpretations and ask them to do more controls, explore contrary interpretations etc. etc. We want to get things right to the best of our abilities – most scientists take that approach – even my builder says “we don’t want any cock-ups!”.

    This seems pretty standard. It’s encouraging unbiased behavior and it works in my experience (Willard will take me to task for the anecdotal nature of my discourse, no doubt)

As an interested observer I don’t see anything different in climate science. There is a sort of “climate science fringe” which wallows in bias (I gave examples in a post here recently:
https://andthentheresphysics.wordpress.com/2016/02/03/transparency/#comment-72248 ). But where is the bias in climate science that some of us would like to counter by encouragement and Steve would like to counter by enforcement? Can you give us an example or two, Steve?

  118. @dikran
    As I said, I don’t think it worthwhile to further discuss whatever you may consider to be the substantive point.

    I just noted that, contrary to your assertion, I am no fan of the Heterodox Academy.

  119. Willard says:

    Chris,

    Perhaps it’s just a vocabulary thing:

    A best practice is a method or technique that has consistently shown results superior to those achieved with other means, and that is used as a benchmark. In addition, a “best” practice can evolve to become better as improvements are discovered. Best practice is considered by some as a business buzzword, used to describe the process of developing and following a standard way of doing things that multiple organizations can use.

    Best practices are used to maintain quality as an alternative to mandatory legislated standards and can be based on self-assessment or benchmarking. Best practice is a feature of accredited management standards such as ISO 9000 and ISO 14001.

    https://en.wikipedia.org/wiki/Best_practice

    When your admin tells you “we need best practices,” you nod approvingly and carry on as you always did. When a standards officer tells you what you need to accomplish to meet the requirements he’s there to evaluate, his opaque stare will make you feel it’s not just some business buzzword.

    The long and the short of it is that standardization efforts cost money.

  120. BBD says:

    Chris

    But where’s the bias in climate science against which some of us would like to encourage good practice and Steve would like to enforce it? Can you give us an example or two, Steve?

    Simply by asserting as Steven does that:

    we agree a red team review or hostile review is required.

    you encourage it. I enforce it.

    …You imply that a problem with (climate) science exists that must be policed by enforced Audit. Actual examples are superfluous. All one needs is the assertion. Of course Steven can rake up a vast, poisoned and lifeless hinterland of nitpickery going back a decade and more, but substantive issues with (climate) science? Serious matters that we should thank the Auditors for dragging out into the harsh spotlight of public concern?

    Well, perhaps it’s all a work in progress.

  121. Richard is still prolonging the discussion, whilst not actually addressing the substantive issue. He wrote “As I said, I don’t think it worthwhile to further discuss whatever you may consider to be the substantive point.”

    I stated the substantive point very clearly, several times, so here it is again: can you give any evidence/argument that “heterogeneous academies” do any more than passively dilute political bias with an opposing bias? As I have pointed out, that isn’t very reassuring, as it means that we can’t really trust individual studies at face value.

    Richard asserts that “Heterogeneity checks individual biases, if for every bias there is a counterbias. That is why it is important to maintain political diversity in a discipline and a department.”, so this seems a pretty reasonable question to me.
