Publication bias?

There’s a new paper called Selective Reporting and the Social Cost of Carbon that is being lapped up with glee by the largely unskeptical. As I understand it, the basic argument is that if one analyses the published estimates of the Social Cost of Carbon, there is an indication of publication bias, which can then be used to estimate an unbiased Social Cost of Carbon.

When I noticed this, it rang a bell, so I went back through some things and discovered a similar paper with one common co-author. This one is called Publication bias in Measuring Anthropogenic Global Warming and it is quite remarkable, in the “seriously, someone’s actually done this?” kind of way. When I first saw this, I decided not to discuss it, but thought I might now, as an illustration of what this newer paper has probably done.

Credit: Reckova & Irsova (2015)

The basic argument is related to regression toward the mean. If your initial sample is small, the result could be a long way from the “true” mean, but with a large uncertainty, and could be either larger than, or smaller than, the “true” mean. As you increase the sample size, the difference should get smaller (but with results that are both larger than and smaller than the mean) and the uncertainty should reduce. The larger the sample, the closer the result should be to the “true” mean, and it should become more and more precise. If, however, there is some kind of publication bias (for example, negative results don’t get published) then you would see the results becoming more precise from one side only, as illustrated by the figure on the right.

Credit: Reckova & Irsova (2015)
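To make the funnel argument concrete, here is a minimal toy simulation (my own sketch in Python, not from the paper): many studies of different sizes estimate the same mean, and censoring the low side both biases the published mean high and empties one half of the funnel.

```python
# Toy funnel-plot simulation (illustrative sketch only, not from the paper).
import numpy as np

rng = np.random.default_rng(42)
true_mean, sigma = 3.0, 2.0

estimates, precisions = [], []
for _ in range(500):
    n = rng.integers(5, 200)                 # study sample size
    sample = rng.normal(true_mean, sigma, n)
    se = sample.std(ddof=1) / np.sqrt(n)     # standard error shrinks with n
    estimates.append(sample.mean())
    precisions.append(1.0 / se)

estimates = np.array(estimates)

# Unbiased case: plotting precision against estimate gives a symmetric
# inverted funnel around the true mean. Now mimic "publication bias" by
# supposing results below the mean never get published.
published = estimates > true_mean
print("mean of all estimates:      %.2f" % estimates.mean())
print("mean of 'published' subset: %.2f" % estimates[published].mean())
# The published subset is biased high, and its funnel has only one side.
```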

What they do in this study is to apply the same argument to estimates of climate sensitivity. What they find – as shown in the figure to the left – is that there is a tendency for the more precise estimates to have a lower climate sensitivity. They therefore conclude that there is a bias, saying: “In the absence of publication bias these figures should look like an inverted funnel. However, Figure 3 depicts only the right-hand side of the inverted funnel and the left-hand side is completely missing, indicating publication selectivity bias.”

They then analyse this and conclude that the unbiased climate sensitivity is somewhere between 1.4°C and 2.3°C, despite the published estimates having a mean of 3.3°C. What they, of course, fail to realise is that the reason the left-hand side is missing is not indicative of a publication bias; it’s because it is very difficult to develop a physically plausible argument as to why climate sensitivity should be this low. That the lower published estimates tend to be more precise is largely irrelevant. This is not simply a sampling issue.
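For what it’s worth, the standard way such meta-analyses detect and “correct” the bias is a precision-effect (FAT-PET) regression of each reported estimate on its standard error; I’m assuming that is roughly their specification, as I haven’t checked the paper’s exact method. A toy sketch:

```python
# FAT-PET-style meta-regression on synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
true_effect = 3.0
se = rng.uniform(0.2, 2.0, 200)        # reported standard errors
est = rng.normal(true_effect, se)      # reported estimates

# Mimic one-sided selection: low results are never "published".
keep = est > true_effect - 0.5 * se
slope, intercept = np.polyfit(se[keep], est[keep], 1)  # est = b0 + b1*SE

print("slope (read as selection bias):        %.2f" % slope)
print("intercept (read as 'corrected' value): %.2f" % intercept)
# Under this censoring the intercept lands near the true 3.0. The catch,
# as discussed below: genuine small-study effects (high-SE studies being
# methodologically different) produce the same asymmetry with no
# suppressed publications at all.
```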

So, quite a remarkable idea. Analyse the published results to show that there is some kind of bias in the published estimates, and then use this to present what is meant to be some kind of unbiased estimate. Now, of course, I haven’t gone through their Social Cost of Carbon paper, but if the Anthropogenic Global Warming one is anything to go by, I won’t be taking it too seriously. I really don’t think the scientific method includes a section that says “use completely non-existent publications as part of your estimate”. I would argue that in any sensible scenario we should base our understanding of these topics on what is actually published, not on what is neither published nor – as far as we’re aware – actually in existence.


120 Responses to Publication bias?

  1. I do think it’s a legitimate way of approaching and correcting the (real) problem of publication bias in many disciplines. The problem, I think, is that the funnel plot argument only works if the uncertainty is symmetric, but climate sensitivity estimates have a long tail that makes that assumption invalid.
    There’s also the problem that many estimates use different assumptions and models so it’s not really a straightforward comparison.

  2. Elio,
    Yes, I agree. It’s not without its uses. As you say, though, this isn’t really appropriate for climate sensitivity estimates. In a sense, you’d need actual evidence that these other estimates exist and have simply not been published.

  3. You forgot to mention the authors are all economists. Or am I just showing my bias? 🙂

  4. Yes. In the Cochrane Handbook there’s some discussion about why a funnel plot might be asymmetrical: http://handbook.cochrane.org/chapter_10/10_4_2_different_reasons_for_funnel_plot_asymmetry.htm
    Publication bias is just one of the reasons.

    The relevant point here, I think, is the idea of small-study effects. Climate sensitivity estimates with high and low variance are not arrived at via the same methods. Simple models with high uncertainty may very well be biased high because they lack the representation of key processes. That alone may be sufficient to explain such extreme outliers.

    Also, Cochrane reviews try to get every frikking piece of data, even if it is unpublished. So, to your point, they should actually try to show that those unpublished estimates exist, not just pull them out of their hats.

  5. dana1981 says:

    The same ‘long tail’ (asymmetric distribution) problem applies to social cost of carbon estimates too. There are some really high SCC estimates, especially among the few papers that include climate impacts on economic growth. And there’s no long tail in the opposite direction – we know the impacts won’t be significantly beneficial (as Tol’s gremlins showed).

    It’s a somewhat related problem to the other paper you discuss – particularly high climate sensitivity and/or particularly bad climate change impacts could lead to very expensive consequences, and hence a high SCC.

    It irks me a bit that the last author on this paper is a Berkeley guy, as a Cal grad myself.

  6. talies says:

    Surely estimates which give high climate sensitivity include all sorts of feedbacks which are difficult to measure.

  7. Ethan Allen says:

    There’s a publication bias. The draft paper itself, which you linked to above, “Publication bias in Measuring Anthropogenic Global Warming”, mentions an “appendix” six times; that appendix is located here:

    http://meta-analysis.cz/climate/appendix.pdf

    The abstract of that appendix states:

    “This documents contains details of computation and additional results for “Publication
    Bias in Measuring Anthropogenic Climate Change,” which is to be published in Energy &
    Environment.”

    If that’s the E&E of denier fame, oh boy.

    In the appendix they list 16 studies, two are Scafetta (both from 2013, one of those is an E&E paper) and one is Lindzen & Choi (2011) and no other paper postdates circa 2011.

    They state that:

    “Notes: The search for primary studies was terminated on March 3, 2014”

    I would have thought that in the runup to AR5 WG1 there would have been many more estimates than these “authors” were able to find; in fact, the only reference to AR5 is:

    Stocker, D. Q. (2013): “Climate change 2013: The physical science basis.” Working Group I Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, Summary for Policymakers, IPCC.

    The “Summary for Policymakers”? I mean, WTF?

  8. The distribution is asymmetric not only because we know the lower limit much better than the upper limit, which would lead to the low estimates having smaller confidence intervals.

    There may be a sociological effect as well: the low estimates partially come from the mitigation sceptics, who are typically overconfident, so it seems natural to expect them to report too-narrow confidence intervals.

    And there could be a methodological effect: do the low climate sensitivity estimates with low confidence intervals come from people using extremely simplified climate models tuned to the global temperatures? (What mitigation sceptics like to call “observational” estimates.) That would be comparing apples to oranges, and one would need to take the method used into account in the statistical analysis.

  9. matt says:

    It would be interesting to see a list of the caveats included in these studies (of SCC). I looked into this many years ago and noticed most dealing with more than 2°C of change stated something along the lines of “these bad effects are not included because the uncertainty is too large/has not been studied, but we know it’s not good”. Somewhat like the previous IPCC estimates of SLR (the ice models are bad so we won’t include them, but please notice this caveat). Also ignoring effects of impacts on economic growth.

    Anyway, here is “Forty Percent Little Fred” claiming the chances of no publication bias are less than one over “the number of stars in the universe”. He is addressing the Oz equivalent of the Heartland Institute.


    (9:40-15min approx. sorry not sure about the end time and could not be bothered looking it up. beer+ashes+caffeine = hope u understand. There is a reasonable chance the comedy doesn’t stop there)

    Based on http://multi-science.atypon.com/doi/abs/10.1260/095830508783900735?journalCode=ee

  10. matt says:

    Ethan points out E&E. The Michaels paper above is also E&E. Seriously attp, no need to dig here. Nothing of interest to be found.

  11. “This one is called Publication bias in Measuring Anthropogenic Global Warming.”
    My interpretation: if there is a publication bias in measuring anthropogenic global warming, it means that you cannot measure it with high accuracy. No more, no less.

  12. BBD says:

    Publication bias = E&E

  13. Ethan Allen says:

    matt,

    Michaels (2008) is an E&E paper, as you mentioned; that paper is also referenced in the aforementioned appendix above.

  14. Ethan Allen says:

    OK, Michaels (2008) is referenced in the main draft paper, but not in the appendix. Sorry about that one.

  15. beer+ashes+caffeine = hope u understand.

    Yes, indeed I do. Although, as I may have mentioned before, my main problem with the Ashes is deciding which team I’d most like to see losing.

  16. Publication bias = E&E

    Is it regarded as a pretty poor journal?

  17. Ethan Allen says:

    Well the editor is on record as saying “Denier Papers Welcome” or words to that effect, see:

    https://en.wikipedia.org/wiki/Energy_%26_Environment
    https://en.wikipedia.org/wiki/Sonja_Boehmer-Christiansen

    I’ve seen several dozen of the E&E’s papers with respect to climate change, on a scale of one to five, they rate a zero (or less). 🙂

  18. Actually, the Social Cost of Carbon paper is published in Energy Economics, not Energy & Environment. Given I’d quite like a relaxing weekend, I’d probably prefer that we didn’t explicitly mention one of the editors of Energy Economics.

  19. Ethan Allen says:

    “Actually, the Social Cost of Carbon paper is published in Energy Economics, not Energy & Environment. Given I’d quite like a relaxing weekend, I’d probably prefer that we didn’t explicitly mention one of the editors of Energy Economics.”

    I guessed right, took like one second.

    The E&E draft paper you linked to above, titled “Publication bias in Measuring Anthropogenic Global Warming”, is the paper I am referencing, not the Energy Economics paper.

  20. Ethan,
    How do you know it’s in E&E? I haven’t managed to confirm which journal it is being published in.

  21. dana1981 says:

    Reading a new Citi report on the costs of climate action vs. inaction, I just saw a statement that gets to the point I was making above.

    As just one example, modelling by Ceronsky et al with FUND, a fairly standard IAM, suggests that if the thermohaline circulation (THC) were to shut down, the corresponding social cost of carbon (SCC) could increase to as much as $1,000/t CO2.

  22. Ethan Allen says:

    There is a website that includes the appendix that I found and noted above:

    http://meta-analysis.cz/climate/

    There you will see:

    “Reference: Dominika Reckova and Zuzana Irsova (2015), “Publication Bias in Measuring Anthropogenic Climate Change.” Energy and Environment, forthcoming.”

    Let me repeat what the appendix states:

    “This documents contains details of computation and additional results for “Publication Bias in Measuring Anthropogenic Climate Change,” which is to be published in Energy & Environment.”

    There could always be more than one “Energy & Environment” journal, I wouldn’t know for sure.

    But if it is the E&E I’m thinking it is, then, oh boy.

  23. dana1981 says:

    And, immediately following that statement:

    3. Omission bias may lead to misleadingly low estimates … The main source of concern is that, by definition, IAMs only model the effects that they are capable of modelling. The implication is that a wide range of impacts that are uncertain or difficult to quantify are omitted. It is likely that many of these impacts carry negative consequences. Indeed, some of the omitted impacts may involve very significant negative consequences, including ecosystem collapse or extreme events such as the catastrophic risks of irreversible melting of the Greenland ice sheet with the resulting sea level rise. Other consequences – such as cultural and biodiversity loss – are simply very difficult to quantify and are hence just omitted.

  24. Ethan,
    Gotcha, thanks.

    Dana,
    Precisely. It is much more likely that SCC estimates are biased low, than that there are a whole lot of low estimates that have not been published because of the biases of the researchers or the journal editors.

  25. anoilman says:

    I’m only familiar with publication bias in medicine. In that case, there are motivating reasons why, say, drug companies suppress poor results and only publish good results for their drugs.

    I’m not sure I could conclude publication bias was occurring in physics. ’cause its physics.

  26. anoilman says:

    For instance… is June 2015 the hottest month ever, or is that publication bias? This all seems like a silly silly argument. (Would those same people be willing to claim 1998 wasn’t that warm, thus ending all arguments that temperatures have in any sense stalled? I seriously doubt it.)

  27. BBD says:

    AOM

    I’m not sure I could conclude publication bias was occurring in physics. ’cause its physics.

    Yes but conspiracy theories and the groupthink meme 😉

    And let’s not mention Chris De Freitas.

  28. anoilman says:

    What would he make of gravity I wonder?

  29. What would he make of gravity I wonder?

    Presumably it’s biased because it only attracts?

  30. jsam says:

    Unicorns are underreported. Therefore they exist. I knew it.

  31. lerpo says:

    It may be possible to validate whether this method can be applied to physics by testing it against something that was settled long ago. If it can predict the correct answer before it was settled in the literature then maybe it is worth investigating here as well. Feynman offers a possible topic to study:

    “It’s interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher. Why didn’t they discover the new number was higher right away? It’s a thing that scientists are ashamed of – this history – because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong – and they would look for and find a reason why something might be wrong. When they got a number close to Millikan’s value they didn’t look so hard.”

  32. E&E:

    http://www.desmogblog.com/sonja-boehmer-christiansen

    Sonja Boehmer-Christiansen
    Doctorate “International Relations researching into environmental issues in international and also national politics.”
    Ph.D. in “marine pollution control in the Law of the Sea negotiations.”
    Master’s Degree, physical geography./ Master’s Degree, social science.

    Sonja Boehmer-Christiansen is an emeritus reader in geography at the University of Hull and the editor of Energy and Environment, a journal known for publishing the papers written by climate change skeptics.

    In a 2005 article written by Paul Thacker, Energy and Environment was described as being a journal skeptics can go to when they are rejected by the mainstream peer-reviewed science publications.

    She has described herself as “an ‘expert’ on the science and politics of global warming since the late 1980s.”

    Boehmer-Christiansen explained at the time that “it’s only we climate skeptics who have to look for little journals and little publishers like mine to even get published.” According to a search of WorldCat, a database of libraries, the journal is carried in only 25 libraries worldwide. And the journal is not included in Journal Citation Reports, which lists the impact factors for the top 6000 peer-reviewed journals.

    After a great deal of controversy involving a research paper published by two well-known climate change “skeptics,” Sallie Baliunas and Willie Soon, in the journal Climate Research, Boehmer-Christiansen proceeded to run a more extensive version of the article in Energy and Environment.

    “FC: Do you think humans are causing global warming?

    SC: To be very honest I’m agnostic on this. I don’t have the evidence. I mean I have lots of contradictory evidence but I do think, from my experience on ocean pollution and all the other pollution hypes, that when it goes to the political phase there are huge exaggerations. Once bureaucracies get regulatory and taxation powers, the exaggerations decline, scares may even be forgotten. So I honestly believe that there may be a problem but that this problem also has beneficial sides. We know how positive carbon dioxide is to life. So I do think there’s much exaggeration (of the man-made warming threat), of the negative aspects, for political reasons. So that’s why I’m here (at this Conference). I do think the skeptical scientists are more honest and more truthful than those funded by governments to support the IPCC.” [1]
    Key Quotes

    “… As editor of a journal which remained open to scientists who challenged the orthodoxy, I became the target of a number of CRU manoeuvres. The hacked emails revealed attempts to manipulate peer review to E&E’s disadvantage, and showed that libel threats were considered against its editorial team…” [5]

    July, 2010
    Boehmer-Christiansen was one of a group of climate change skeptics who claimed that Phil Jones had manipulated climate data in the IPCC Fourth Assessment Report. Five separate inquiries were conducted to investigate these claims but the conclusion reached was that there had been no scientific dishonesty or misconduct by the IPCC scientists.

    The last review, done by the Independent Climate Change Email Review (ICCER), responded to Boehmer-Christiansen’s allegations by concluding that she had provided no evidence in support of her claims.

    May 16 – 18th, 2010
    4th International Climate Change Conference hosted by the Heartland Institute. [1]

    The Scientific Alliance — Advising member. [7]
    Oil, Gas, Energy Law Intelligence (OGEL) — Contributing Author. [8]
    Energy and Environment — Editor.
    etc.

  33. Eli Rabett says:

    A logical explanation is that they sent it to Environmental Economics, which declined, and then to E&E.

    As to Millikan and his oil drop, the real issue was that Millikan used the wrong value for the viscosity of air. This was finally figured out by a young assistant professor at Hopkins, J. A. Bearden, ~1935.

    As life would have it, Eli took Sr. Physics Lab many years ago from Bearden, and believe Eli, Bearden was anything but shy about it.

    He was able to get a more accurate value of e using X-Ray spectroscopy, and this set off a “conversation” between him and Millikan, which eventually came down to Bearden figuring out that Millikan’s student had measured the viscosity of air incorrectly. This is basic to the oil drop experiment because what is measured is the movement of the drops through air.

    According to Bearden what the student had done was to take the average of previous measures and when his value exactly matched the average he wrote it up and graduated. Unfortunately for Millikan, there was a subtle experimental bias in the apparatus that was used (everyone agreed that the method was a large improvement on other instruments for measuring the viscosity).

    Anyone doing the oil drop experiment and using the Millikan value for viscosity would get the Millikan value for the charge on the electron.

  34. anoilman says:

    jsam: Actually you’re wrong. Unicorns do exist, its a well known fact;
    http://www.nbcnews.com/id/25097986/ns/technology_and_science-science/t/unicorn-deer-found-italian-preserve/

    However, since we’ve only seen one, that must mean there are a lot more. Maybe there are between 1.4 and 2.4? Did anyone look for its parents? I think not!

  35. John Mashey says:

    1) It is worth revisiting Morgan and Keith(1995) “Subjective Judgments by Climate Experts.”
    They estimated climate sensitivity.
    Expert 5 has a very low estimate, and is also very sure about it.

    2) Ignoring the physics, and without looking at this in any great detail, the paper says:
    “As we cannot be sure about the true distribution of the CS estimates, we assume the standard normal distribution to be the best approximation.”

    That might be plausible if the differences in estimates are caused by differences in small additive assumptions … but it is not obvious why that should be so, or any more likely than differences caused by multiplicative factors, in which case one would try a lognormal instead and see if that’s a better fit. After all, their figure 1 implies a non-zero probability of negative sensitivity 🙂

    Of course, nothing guarantees any particular distribution, but just assuming a normal seems dubious without clear reasoning for such.
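    A toy illustration of that check, with synthetic numbers only (not their data): generate multiplicative-style estimates, fit both distributions, and compare.

```python
# Normal vs lognormal fit to mock sensitivity estimates (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cs = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=48)  # mock CS estimates

norm_fit = stats.norm.fit(cs)             # (loc, scale)
logn_fit = stats.lognorm.fit(cs, floc=0)  # (shape, loc, scale)

print("log-likelihood, normal:    %.1f" % stats.norm.logpdf(cs, *norm_fit).sum())
print("log-likelihood, lognormal: %.1f" % stats.lognorm.logpdf(cs, *logn_fit).sum())
# A fitted normal also puts weight on physically absurd values:
print("P(CS < 0) under fitted normal: %.4f" % stats.norm.cdf(0, *norm_fit))
```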

    Again, all this is ignoring the physics.

  36. Marco says:

    Looking at the papers used for the climate sensitivity paper, I wonder how their “publication bias” would look if they had also included papers using paleoclimatological data to estimate ECS. To me it looks like they only used data from papers that used the instrumental record (maybe with the exception of 1-2 papers). Add a few papers from known contrarians (3 out of the 16), and you bias it even more.

    Perhaps one can speak of a selection bias as a potential reason for the supposed publication bias.

    The final potential issue is the comparison of older and newer papers using that instrumental record, since the supposed ‘slowdown’ since about 1998 has significant effects on the ECS calculations when you use more data from after 2000, if I understand it correctly. So, a paper from 2003 using data up to 2002 is likely to give a higher ECS than a paper from 2011 using data up to 2010. But new data giving a different (and lower) result is not evidence of publication bias, since it is based on data not available for the older estimates.

    Anyone see any obvious flaws in my assessment? I’d really be happy to hear them.

  37. Marco,
    Yes, I agree that their sample could well have been biased and they didn’t seem to include many (if any) paleo papers. I’m trying to remember if the surface warming slowdown does affect ECS. I’ve seen arguments suggesting that it doesn’t, but I can’t quite remember what they were.

  38. Marco says:

    Thanks, ATTP.

  39. Just remembered that I think the point about ECS not being influenced by the slowdown is that the energy balance approach is essentially

    ECS = \dfrac{\Delta F_{2x} \Delta T}{\Delta F - \Delta Q},

    where \Delta Q is the system heat uptake rate. If \Delta T goes down then – in the absence of variability – \Delta Q goes up, and the ECS is unaffected. I think this is correct on average, but not necessarily at all instants (Palmer & McNeall – I think – show this). So, the slowdown could have influenced ECS.
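    As a quick numerical check of the compensation argument (a sketch assuming the standard \Delta F_{2x} = 3.7 Wm^{-2}, with made-up illustrative inputs):

```python
# Energy-balance ECS with a slowdown: lower dT, higher heat uptake dQ.
F_2x = 3.7  # W/m2 for doubled CO2 (standard value, assumed here)

def ecs(dT, dF, dQ):
    return F_2x * dT / (dF - dQ)

print("%.2f K" % ecs(1.0, 2.0, 0.70))  # 2.85 K
print("%.2f K" % ecs(0.9, 2.0, 0.83))  # still 2.85 K: dQ compensates for dT
```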

    I also noticed that the Social Cost of Carbon paper has a dig at Cook et al., saying

    Given how important climate change research is for current policy making, we believe more work is needed on selective reporting in the field. For example, in the light of our results the 97% consensus on human-made climate change reported by Cook et al. (2013) should be understood as the upper boundary of the underlying consensus percentage, because Cook et al. (2013) do not account for potential selective reporting.

  40. matt says:

    attp,

    Cmon, just put on ur green and gold. OZ lost the ashes so you can cheer for that, now cheer for the other enemy to lose the final match. Seems like a safe bet too. Weird series. A reluctant congrats to the barmy.

    The targeting of Cook seems odd. It seems that those who criticise the consensus pick out Cook and pretend it hasn’t been shown multiple times before (Oreskes 2004?, Anderegg et al, Doran & Zimmermann, ….).

  41. Steven Mosher says:

    ECS = \dfrac{\Delta F_{2x} \Delta T}{\Delta F - \Delta Q},

    aren’t confidence intervals on ratios inherently nasty?

  42. Steven Mosher says:

    shit moderator help

    [Mod: sorted.]

  43. Rob Nicholls says:

    This is really interesting – I’ve seen funnel plots like the one shown at the top of the post in medical stats textbooks, but hadn’t encountered this sort of thing in the climate wars before.

    This line in the new paper’s abstract really made me chuckle: “Our estimates of the mean reported SCC corrected for the selective reporting bias range between USD 0 and 134 per ton of carbon at 2010 prices for emission year 2015.” So the social cost of carbon dioxide emissions may be as low as zero. Cancel the World Climate Summit in Paris!

    I really should get around to learning more about how estimates of the social costs of greenhouse gas emissions are calculated. I’m strongly in favour of putting taxes on GHG emissions (with any necessary tweaks to ensure such taxes are progressive), although the price will have to be high enough, and there are huge vested interests which I fear will do everything they can to make sure that the price is never high enough.

    I think estimates of the real costs of GHG emissions might be useful as long as it is acknowledged that they can never hope to capture the true costs because a lot of things cannot be quantified in monetary terms.

    Surely estimates of social costs of carbon must be subjective, and dependent on the value systems employed; e.g. if I believe that the cost arising from the quite possible extinction in the wild of the orange-spotted filefish (a fish heavily dependent on corals) due to climate change would be infinite (and that therefore the true cost of GHG emissions is arguably infinite), I would struggle to see how anyone can refute that objectively as this would be a matter of subjective value judgments. I’m not aware of any method of estimation that can overcome this problem of dependency on value judgments and I don’t believe that it’s possible that such a method can exist. I’m happy to be corrected on this.

  44. John Mashey says:

    Most of this is about the “earlier” paper, by Dominika Reckova and Zuzana Irsova, both at Charles U.

    The Appendix to the *earlier* paper says:
    “With asymmetric distributions this assumption does not necessarily hold, but there is no reason why climate sensitivity estimates should not be distributed symmetrically.”
    Well, economists said so.

    It gives 16 primary studies, of which 3 are:
    Lindzen and Choi(2011)
    Scafetta(2013a)
    Scafetta(2013b)

    i.e.
    “Lindzen, R. S. & Y.-S. Choi (2011): “On the observational determination of climate sensitivity and its implications.” Asia-Pacific Journal of Atmospheric Sciences 47(4): pp. 377–390.

    Scafetta, N. (2013a): “Discussion on climate oscillations: Cmip5 general circulation models versus
    a semi-empirical harmonic model based on astronomical cycles.” Earth-Science Reviews 126: pp.
    321–357.

    Scafetta, N. (2013b): “Solar and planetary oscillation control on climate change: hind-cast, forecast and a comparison with the cmip5 gcms.” Energy & Environment 24(3): pp. 455–496.”
    See arXiv version, just flip through pages, then skim the references. See how many “interesting” names you can find.

    In the paper itself, we find (as somebody mentioned)
    “Michaels, P. J. (2008): “Evidence for ‘publication bias’ concerning global warming in Science and Nature.” Energy & Environment 19(2): pp. 287–301.”
    We also find that, of ~44 references,
    14 are first-authored by Havranek and 6 coauthored, i.e. 20, almost half of the references.
    That may be OK, or it may not be.
    It would have been nice had Dominika Reckova and Zuzana Irsova shown more familiarity with climate science literature.
    ========
    Finally, although I’d guess this is just accident, (Czech Republic is not huge), the affiliations for the *new* paper are:
    “Tomas Havranek a, b, Zuzana Irsova,b , , Karel Janda b, c, David Zilberman d
    a Czech National Bank
    b Charles University, Prague
    c University of Economics, Prague
    d University of California, Berkeley”

    Vaclav Klaus graduated from U of Economics, and was at the Czech National Bank (under its previous name).

  45. izen says:

    @-Rob Nicholls
    “I’m not aware of any method of estimation that can overcome this problem of dependency on value judgments and I don’t believe that it’s possible that such a method can exist. I’m happy to be corrected on this.”

    Some months ago on this blog we were all lucky enough to be informed about this very issue by an economist whose name must not be spoken (say it three times and bad things happen…). The mistake you are making is to think that because value judgements are subjective they have any value. Economists know that value is unmeasurable and indefinable, but you can always put a number on the price.

    On the issue of coral reefs, specifically the risk of ocean acidification on the Australian Great Barrier Reef, he had this to say;-

    “Valuing natural resources is something that many environmental economists do for a living. A common finding is that the vast majority of people cares a little about these matters, and a small minority cares a lot.”

    Continuing…

    “With 2 million visitors a year, the Great Barrier Reef isn’t even Australia’s top attraction; the Sydney Opera House has 8 million. …Even if ocean acidification would completely destroy the Great Barrier Reef, which it will not, then the impact on the global tourism industry is small. Even the Australian tourism industry is unlikely to take a big hit, as capital and labour in tourism are rather mobile. The more likely scenario, however, is that local tourist operators will preserve that bit of the Great Barrier Reef that attracts tourists. After all, that’s what they do with Venice, ski slopes, and sandy beaches.”

    So now you know, subjective values do not count, it’s the numbers of how many people are prepared to pay what price that is the only objective measure in these matters.

  46. @Rob N
    “I believe that the cost arising from the quite possible extinction in the wild of the orange-spotted filefish (a fish heavily dependent on corals) due to climate change would be infinite”

    People’s values are people’s values. The job of an economist is to measure these values, rather than to pass judgement.

    That said, your statement is peculiar. If you think that the value of the filefish is arbitrarily large, then you should be willing to give up anything that has a finite value if that would ever so slightly increase the chance of the filefish surviving.

    Giving up anything would include your use of the internet, and the carbon dioxide thus emitted.

  47. Rob Nicholls says:

    Izen, thanks v much for your response. “So now you know, subjective values do not count, it’s the numbers of how many people are prepared to pay what price that is the only objective measure in these matters.”

    OK, but the price that people are prepared to pay does not in my opinion give the full value of something. Although we might be able to ask people how much they value the continued existence of a certain species of fish, we’re not able to ask future generations of humans or members of the species of fish itself. Maybe this is a bad example, perhaps a better one would be to ask what’s the monetary cost of the death of a human being due to flooding or crop failure caused by climate change. I think the answer to that is subjective and political and not objective.

    At least internalising some of the cost of GHG emissions is better than not internalising any of it, but I think it would be wrong to think that the cost could be properly calculated as then some people might be tempted to think that after 1) calculating an objective cost and 2) applying that as a price to GHG emissions, we’ve internalised the cost and solved the problem.

  48. Rob Nicholls says:

    Thanks Richard Tol. “People’s values are people’s values. The job of an economist is to measure these value, rather than to pass judgement.” I’m okay with that, and it wouldn’t be fair to expect more from economists than to do this as well as they can (and I’d hope that economists would do their best to account for differences in purchasing power when doing this); however I would hope that people realise that the value of some things cannot be expressed in monetary terms.

  49. Eli Rabett says:

    Now some, not Eli to be sure, might ask whether Richard Tol is worth a single filefish

  50. Willard says:

    > People’s values are people’s values. The job of an economist is to measure these values, rather than to pass judgement.

    This presumes two dubious ideas: that economics is value neutral and that economists model people’s values. The two dubious ideas might be interconnected:

    Consider an example. The concept of Pareto efficiency is defined in value-neutral terms: a distribution is Pareto-efficient if there is no other distribution that improves some individuals without harming at least one individual. The concept of distributive justice is not value-neutral; it invokes the idea that some distributions are better because they are more fair or more just than others. The positive economist holds that the latter set of distinctions are legitimate to make — in some other arena. But within economics, the language of justice and equity has no place. The economist, according to this view, can work out the technical characteristics of various economic arrangements; but it is up to the political process or the policy decision-maker to arrive at a governing set of normative standards. Walsh and Putnam (as well as Amartya Sen) dispute this view on logical grounds; and this leaves the discipline free to have a rational and reasoned discussion of the pros and cons of various principles of distributive justice.

    http://economistsview.typepad.com/economistsview/2012/03/value-free-economics.html

  51. @Rob
    The value of filefish can and has been expressed in monetary terms. You should note that this is the value of filefish to humans, rather than the value of filefish to filefish.

    Eli just perfectly illustrated the method: Given the choice, would you rather save a single filefish or me?

  52. izen says:

    In the field of clinical biology, studies investigating publication bias are motivated by the knowledge that it exists for a well-known reason. Medical R&D is known to carry out research and obtain at least preliminary results, but some of that research is never published.
    The industry has a financial motive to promote ‘good’ research results, and suppress ‘bad’ results.

    The distribution of results does not have to be purely Gaussian. Clinical research may often have results that are inherently skewed, with fat tails and fixed upper or lower bounds. But the PDF is usually at least implicit in the method and often explicitly discussed in the context of what the research can measure and what the results imply. A common methodology that ‘should’ detect a particular distribution of benefit and harm from a clinical treatment, but which in the published literature only shows the ‘good’ side of the outcomes is likely to raise suspicions.

    One response to this is the adoption of research methods that confine the range of outcomes they are capable of testing for. This is often justified as the adoption of more accurate methodologies because they are less prone to variance or uncertainty. If you only look for the positive results you want, then the absence of negative results is not exactly publication bias…

    To apply the methods of detecting publication bias to the value of ECS would seem to make the unstated assumption that the methodologies used to determine this value are capable of generating a wider range of results, but that there is a significant number of unpublished results that have been suppressed for some unstated reason. In the case of clinical research no conspiracy theory is required, there is ample evidence the practice exists, and for a known reason.

    But making this assumption about the research into climate sensitivity does seem to stray into a conspiracy theory in which scientists around the world are filing in the bottom drawer any results on ECS that should occupy the other part of the ‘funnel plot’ they make.
    To most rational people such a conspiracy, or even a group-think effect, seems unlikely. But ironically a possible explanation for their result is a preponderance of research that is following the Pharma playbook, using methodologies that give ‘good’ results with smaller error bars even though others in the field suspect the methodology may be inherently incapable of capturing the true probabilities in the upper range.

  53. whimcycle says:

    Richard’s is but another variation of the classic contrarian response: If you guys were TRULY concerned about emissions, you would have killed yourselves already.

  54. Richard,

    The value of filefish can and has been expressed in monetary terms.

    Are you really able to reliably include the ecological complexity?

    would you rather save a single filefish or me?

    I would regard this as a question that isn’t worth answering.

  55. Joshua says:

    ==> “The job of an economist is to measure these values, rather than to pass judgement.”

    Interesting. What is your methodology for measuring values, while controlling for accountability bias, the paradox of choice, hyperbolic discounting, confirmation bias, variances in how people maximize utility, etc.?

    ==> “If you think that the value of the filefish is arbitrarily large, then you should be willing to give up anything that has a finite value if that would ever so slightly increase the chance of the filefish surviving.”

    Interesting. I read someone somewhere write that the job of an economist is not to pass judgement. I just can’t quite rememb……

    Oh.

    Wait.

  56. Joshua says:

    ==> “The job of an economist is to measure these values, rather than to pass judgement.”

    This is really quite interesting. Richard, I’d say that virtually all your comments in these threads express your judgements of people’s values.

  57. @joshua
    “should” here follows from the conditional clause starting with “if”

    Rob should do what Rob thinks best. He’s a grown man. It’s a free country.

  58. If you want a taste of how economists measure value, why not try this:
    http://www.surveygizmo.co.uk/s3/2156353/Dolton-Tol-beta

  59. Richard,
    The last time you linked to a survey on my site, I ended up deleting the links because it appeared that you did not have suitable ethics approval. Can you confirm that you do for this one?

  60. @wotts
    Yes, we have ethics approval for this survey.

  61. Joshua says:

    Richard, let’s try again. Why don’t you address this point:

    “I’d say that virtually all your comments in these threads express your judgements of people’s values.”

    Often, you express judgements of the values of particular groups, and sometimes you express judgements of the values of individuals.

    How do you reconcile your behavior with your description of what economists do?

  62. Joshua says:

    And Richard –

    The other question I asked was about how you control for how people respond to questions in order to assess their “values.” I referenced a number of complicating factors that speak to the complexity of transforming expressed opinions into an assessment of values. Perhaps you could speak to how you do that – which was the point of my comment. Just linking to one of your polls doesn’t actually address my point. I wasn’t asking how you sample opinions.

    Are you trying to actually address my questions? Because if so, it seems to me that you’re doing a very poor job so far.

  63. Joshua says:

    I mean seriously, look at this:

    Here’s my question (with bold added to make the problem more apparent):

    “Interesting. What is your methodology for measuring values, while controlling for accountability bias, the paradox of choice, hyperbolic discounting, confirmation bias, variances in how people maximize utility, etc.?”

    And in response you linked a poll for how you measure opinions… with the following description from the survey: “By answering these questions, you will help researchers at the University of Sussex to understand what people know and think about public policy and its various domains.”

    If you are using the poll to measure values, why are you telling them that you are measuring what they know and think about public policy? Don’t you think that there’s an ethical problem when you tell people that you’re doing something other than what you say elsewhere that you’re doing? How did you get ethics approval for that? What did you indicate in your submissions for ethics approval – did you say that the survey was intended to measure values?

  64. Joshua says:

    And Anders…do other people really end up in moderation as much as me?

    Is this yet more evidence that you’re trying to censor me because my arguments are so devastating to your world view? 🙂

  65. Actually, quite a few do and I don’t always know why. It’s not just you, as much as you might like that to be the case 🙂

  66. Joshua says:

    Finally, Richard –

    ==> “@joshua
    “should” here follows from the conditional clause starting with “if””

    This, also, seems hard to reconcile with your previous comment:

    “That said, your statement is peculiar. If you think that the value of the filefish is arbitrarily large, then you should be willing to give up anything that has a finite value if that would ever so slightly increase the chance of the filefish surviving.”

    Are you actually contending that isn’t a judgement of values? Your argument there looks to me like a judgement of values. You are asserting (behind a veil of plausible deniability) an inconsistent or hypocritical approach to values – which is, in itself, a judgement of values.

  67. Joshua says:

    Yeah. Sure. That’s what they all say. 🙂

  68. Joshua says:

    It’s always a challenge to get Richard to actually address points that I’ve made. I fear, however, that my efforts will be in vain. Dude’s a non-sequitur machine.

  69. Kevin O'Neill says:

    Rob Nicholls writes: “I’m not aware of any method of estimation that can overcome this problem of dependency on value judgments and I don’t believe that it’s possible that such a method can exist.”

    Rob, one would hope that not only is the present value of ‘filefish’ included in these economic models but that it’s also included in the discount rate. We could ask: how many people would pay, and how much, to see a live Dodo? The Passenger Pigeon has been extinct now for 100 years. Does anyone lament them? How could they – no one alive remembers them. Those that watched them disappear certainly did. Here are the words of Simon Pokagon, a Potawatomi tribal leader, recounting in 1895 an event he witnessed nearly a half century earlier:

    While I gazed in wonder and astonishment, I beheld moving toward me in an unbroken front millions of pigeons, the first I had seen that season … I have stood by the grandest waterfall of America,” he wrote, “yet never have my astonishment, wonder, and admiration been so stirred as when I have witnessed these birds drop from their course like meteors from heaven.

    There are also the unknown possible benefits lost. Many species are lost before they’re even discovered or studied. What could they have taught us about new chemical or biological ‘tricks’ that Mother Nature evolved to fill a specific niche?

    We could ask an econometrician what price the Passenger Pigeon carries in their models – I suspect it’s zero. I also suspect that if Simon Pokagon were alive today he’d give a slightly different answer.

  70. izen says:

    @-Kevin O’Neill
    “We could ask an econometrician what price the Passenger Pigeon carries in their models – I suspect it’s zero. I also suspect that if Simon Pokagon were alive today he’d give a slightly different answer.”

    The potential financial gains from the revival of extinct reptiles have been explored in film. The passenger pigeon is rather less likely to be a big audience draw, although the feasibility is higher.
    Woolly mammoths are probably somewhere near the feasibility/profitability cusp.

    But you need an expert on such things to determine that. However, from observation, such experts seem to be chosen for their ability to provide an economic justification to satisfy those that pay them.
    Economeretricians!

  71. @joshua
    I made only a few interventions on this thread, and in only one did I express a judgement. I judged the credibility of Rob’s claim that filefish are infinitely valuable, as it is inconsistent with his observed behaviour.

    As to the other points you raise: All true, all subject to active research.

  72. BBD says:

    @ Joshua

    And Anders…do other people really end up in moderation as much as me?

    It happens to me regularly too.

  73. Willard says:

    > In only one did I express a judgement.

    This makes at least two.

    Speaking of which, what about two filefishes?

  74. izen says:

    @-” I judged the credibility of Rob’s claim that filefish are infinitely valuable, as it is inconsistent with his observed behaviour.”

    One of the major flaws in the ‘rational economic agent’ assumption is that economic behavior is directly correlated with values.

    Sometimes the link is even supposed to be linear!

    There are significant external costs in car ownership and gun ownership in those nations where it is widespread. They are major causes of mortality and morbidity. However the value we place on the ability to own and use those machines prevents any imposition of a tax to represent those external costs.

    Certainly there is no expectation of imposing costs to eliminate the source of the harm, just enough to maximise tax income for the state. It is unlikely a carbon tax would play a different role.

  75. Rob Nicholls says:

    Thanks Richard Tol for your responses, and the survey link, and thanks to everyone else for their comments.

    ATTP, sorry if I sent this thread off topic. I think I’ve said what I wanted to say already, but I’ll think about this more and will think about what people have said.

  76. Joshua says:

    izen –

    ==> “One of the major flaws in the ‘rational economic agent’ assumption is that economic behavior is directly correlated with values.”

    Yes, that’s very much what I was getting at.

    Unfortunately, it seems that at least for the purposes of the discussion here, Richard is quite content to make broad assumptions in exactly that regard – and isn’t interested in discussing the foundation of his assumptions.

    I would imagine that within his more professional framework, he’s careful about not taking those assumptions for granted – which then leaves the question of why he’d display such different reasoning here than what he engages in professionally.

  77. bill shockley says:

    I know this topic is not about climate sensitivity and the social cost of carbon, but I also think I correctly presume that many here have a genuine interest in those topics. So, with the hope that ATTP welcomes suggestions for future topics, I note that James Hansen has written an article for the Huffington Post elucidating his long and difficult recent sea level rise paper which, it turns out, was 8 years in the making (clip):

    2°C is not only a wrong target, temperature is a flawed metric due to meltwater effect on temperature. Sea level, a critical metric for humanity, is at least on the same plane.

  78. Paul S says:

    As Steven Mosher was possibly alluding to, the ECS energy balance formula will structurally return larger uncertainty for larger ECS values, for a given level of uncertainty in the inputs.

    E.g.
    For DeltaT = 1K +/- 0.2, DeltaF = 1W/m2 +/- 0.2, ECS = 2.5-5.6K (3.1K spread)

    For DeltaT = 1K +/- 0.2, DeltaF = 1.5W/m2 +/- 0.2, ECS = 1.7-3.4K (1.7K spread)
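    (A little sketch reproducing those numbers; it assumes F_2x = 3.7 W/m2 and takes the extremes of the input ranges.)

```python
# Interval arithmetic for the two cases above (assumes F_2x = 3.7 W/m2).
F_2x = 3.7

def ecs_range(dT, dT_err, dF, dF_err):
    low = F_2x * (dT - dT_err) / (dF + dF_err)
    high = F_2x * (dT + dT_err) / (dF - dF_err)
    return low, high

for dF in (1.0, 1.5):
    low, high = ecs_range(1.0, 0.2, dF, 0.2)
    print("dF=%.1f: ECS %.1f-%.1fK (spread %.1fK)" % (dF, low, high, high - low))
# dF=1.0: ECS 2.5-5.6K (spread 3.1K)
# dF=1.5: ECS 1.7-3.4K (spread 1.7K)
```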

    This situation is compounded by negative aerosol forcing being the major, or in some cases only, source of uncertainty used in net forcing. Generally, larger (more negative) aerosol forcing estimates have larger uncertainties with the result that studies with higher net forcing (less negative aerosol estimate) will likely have lower net forcing uncertainty.

    In summary, the plot of precision vs. CS estimate is what I would expect from an unbiased sample in this context.

  79. Paul,
    I think I see what you’re getting at. It’s the standard error propagation formula. If R is given by,

    R = \dfrac{X Y}{Z},

    then \delta R is given by

    \delta R = \left| R \right| \sqrt{ \left( \dfrac{\delta X}{X} \right)^2 + \left( \dfrac{\delta Y}{Y} \right)^2 + \left( \dfrac{\delta Z}{Z} \right)^2 },

    So, the larger the value of R, the larger the uncertainty might be.

    I wasn’t sure I followed this, though

    Generally, larger (more negative) aerosol forcing estimates have larger uncertainties with the result that studies with higher net forcing (less negative aerosol estimate) will likely have lower net forcing uncertainty.

  80. Paul S says:

    To use the previous example, let’s say the 1W/m2 was made up of +2W/m2 from all other sources and -1W/m2 due to aerosols. The aerosol forcing uncertainty is +/- 0.4W/m2, which is used to determine the full net forcing uncertainty range of 1W/m2 +/-0.4. Then ECS range, using same DeltaT, is 2.1-7.4K

    If the central aerosol estimate is -0.5W/m2 with smaller uncertainty of +/-0.2 then ECS range will be 1.7-3.4K.

    Because net forcing uncertainty was determined by aerosol forcing uncertainty and aerosol forcing uncertainty was larger with the more negative central value and a more negative aerosol forcing value means higher sensitivity, the spread became even larger for higher sensitivity values.

  81. Paul,
    Okay, yes, I’m with you now. The more negative the aerosol forcing, the larger the mean ECS estimate, and the larger the uncertainty range. So, there are plausible arguments as to why the increase in uncertainty with increasing mean ECS is not indicative of a bias, but is simply a consequence of basic error propagation.
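    A Monte Carlo sketch makes the same point (toy inputs, again assuming \Delta F_{2x} = 3.7 Wm^{-2}): symmetric, unbiased input errors still produce a spread that grows with the central ECS.

```python
# Push symmetric input errors through ECS = F_2x*dT/dF_net and watch
# the output spread grow with the central estimate (toy numbers only).
import numpy as np

rng = np.random.default_rng(7)
F_2x = 3.7
N = 100_000

for dF_net in (1.7, 1.3, 0.9):            # net forcing minus heat uptake
    dT = rng.normal(1.0, 0.1, N)
    dF = rng.normal(dF_net, 0.2, N)
    ecs = F_2x * dT / dF
    print("central ECS %.1f K -> std %.2f K" % (F_2x / dF_net, ecs.std()))
# Lower net forcing: higher central ECS and a wider spread, so precision
# falls as the estimate rises, mimicking a one-sided funnel with no bias.
```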

  82. Rob Nicholls says:

    I just re-read Izen’s August 23, 2015 at 2:01 pm comment… I think it may have been ever so slightly tongue-in-cheek and I did not realise it earlier. I think Poe’s law will get me every time.

  83. Eli Rabett says:

    Of course, the real issue is the future value of filefish.

  84. Mal Adapted says:

    I’d pay $1000 to see a living Ectopistes migratorius. I’d pay $10,000 to see a brood of fledglings from a mated pair. I’d pay $100,000 to see a flock of them roosting in a grove of living, mature Castanea dentata. I’d require genetic verification of all individuals before paying, of course.

  85. From what I can tell, the Havranek study appears to suffer from a fundamental methodological flaw.

    Surely an increasing SCC spread is simply the mechanical result of a damage function that is convex in temperature (i.e. an assumption built into all IAMs)? Far from pointing to publication bias, the widening confidence intervals are exactly what I would expect given the standard IAM setup.

    A toy example:
    – You want to estimate the SCC using an IAM in which climate damages are simply the square of temperature, i.e. D = T^2.
    – Let’s say your model is used to evaluate the costs associated with a global temperature increase that will be somewhere between 0 and 2 degrees with uniform probability. The implied spread on damages is then (2^2 – 0^2 =) 4.
    – Now imagine that your model is used to evaluate the costs of a slightly higher temperature range, between 1 and 3 degrees (again assume uniform probability). Well, your damages spread increases to (3^2 – 1^2 =) 8!

    Clearly the increase in this little example has nothing to do with publication bias. (How can it? We’re using exactly the same “model”.) Instead, the increasing spread has everything to do with the mechanics of the model setup.
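    In code, the toy example is just the following (my sketch):

```python
# Convex damages: the same "model" over a higher temperature range
# mechanically doubles the damage spread.
def damage_spread(t_low, t_high):
    damage = lambda t: t ** 2   # D = T^2
    return damage(t_high) - damage(t_low)

print(damage_spread(0.0, 2.0))  # 4.0
print(damage_spread(1.0, 3.0))  # 8.0
```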

    Now, I first asked this question over a month ago (which is when I first heard about the Havranek paper), but I still haven’t received a convincing answer… Can anyone convince me that the very same thing isn’t happening with the published SCC results?

  86. PS – I should add that I haven’t had time to read the full paper yet… but if the authors haven’t explicitly controlled for this issue, then I simply can’t see how their method is valid.

  87. Willard says:

    For what it’s worth, I left this comment at Judy’s:

    Actually, Cap’n, the E&E authors might even be able to argue for publication bias as soon as scientific results evolve non-randomly.

    Econometrics’ the new sophistry.

    http://judithcurry.com/2015/08/22/week-in-review-energy-and-policy-edition-9/#comment-726600

  88. John Mashey says:

    PART 1
    Good, we’re back on the post’s main topic: the Paper and its Appendix. PDF page numbers (not necessarily the same as those printed) are used below for simplicity.

    SUMMARY … surveying opinions about the Moon shows that some, such as noted researchers Wallace and Gromit, think it is made of cheese.

    1) Reckova and Irsova used 48 data points from 16 papers, most of which are relatively old, only 1 of which was used by IPCC AR5 WG I.

    2) There is little evidence they read and understood the relevant AR5 Chapter 10, which has a substantial discussion of sensitivity, and 19 papers, most newer than those they used. From some comments, they seemed unaware of the literature, saying:
    “Estimates of climate change and climate sensitivity occur only rarely in the scientific literature”

    Paper: 44 total references, of which 8 were climate *science*, including 2 for AR4 and 1 for AR5, none with page numbers.
    It is not a plus for credibility to reference 1000-page volumes without giving pages or sections.
    15 first-authored by Havranek, 6 coauthored by Havranek = 21.
    It might have been better to have spent more effort getting familiar with the literature.

    3) Of their 48 data points, 11 came from Lindzen+Choi(2011), who computed different numbers for others’ studies and provided another of their own, which was the smallest CS and the one with the smallest uncertainty. This work and its predecessor had issues.

    4) The conclusions of bias in the paper rest strongly on Figure 3, especially the 4 papers in the upper left corner, plus one not shown.
    Neither the Paper nor the Appendix gave any mapping from charts to the studies they were based on, so I had to search for the references and look at papers until I found at least those in that upper left corner, on which so much of the argument rests.

    19 Lindzen+Choi(2011) was omitted as it was off the chart, although it showed up in Fig 4.
    12 Scafetta(2013a), “cycles”, published in Energy and Environment
    20 Scafetta(2013b), “planets”
    17 Hargreaves and Annan (2009), except the numbers given were *not* theirs, but part of their refutation of Chylek+Lohmann(2008), hence this is in some sense a false data point.
    7 Andronova and Schlesinger (2001) gave 4 sets of numbers for different model parameters, and this low-sensitivity number was not the recommended one. See below.

    5) They wanted to assume that estimates should be normally distributed, despite the fact that this implies 1 in 20 CS estimates would be negative, and 1 in 8 below 1, against which there is overpowering evidence. Every one of the 5 low-sensitivity-high-precision points has problems, of the sort that happen when people who are not domain experts read detailed technical papers without knowing the credibility of the various authors, or without realizing that some numbers were only mentioned in order to be refuted.
    They also didn’t seem to understand the studies that explore parameters, giving the same weight to deprecated parameter combinations as to those thought more relevant.

    6) Paper p.4 favorably cites Michaels(2008) … who might well have his own publication bias, given a long history of substantial fossil-fuel funding, and that paper is in E&E. This is not a plus for credibility. Likewise, there is a seemingly random scattering of possibly cherry-picked technical papers … but zero evidence of a thorough reading of the relevant section of IPCC AR5, i.e., the latest major assessment.

    Next part has the details.

  89. John Mashey says:

    PART 2a – DETAILS

    Appendix p.2: They have 48 points from 16 studies, including Lindzen & Choi(2011) (hereafter L+C), Scafetta(2013a) and Scafetta(2013b), in E&E, from arXiv versions.

    Scafetta(2013a) (CYCLES): “Power spectra of global surface temperature (GST) records (available since 1850) reveal major periodicities at about 9.1, 10-11, 19-22 and 59-62 years. … This hypothesis implies that about 50% of the ∼0.5 oC global surface warming observed from 1970 to 2000 was due to natural oscillations of the climate system, not to anthropogenic forcing as modeled by the CMIP3 and CMIP5 GCMs. Consequently, the climate sensitivity to CO2 doubling should be reduced by half, for example from the 2.0-4.5 oC range (as claimed by the IPCC, 2007) to 1.0-2.3 oC with a likely median of ∼1.5 oC instead of ∼3.0 oC.”

    Scafetta(2013b) (PLANETS): “It is found that: (1) about 50-60% of the warming observed since 1850 and since 1970 was induced by natural oscillations likely resulting from harmonic astronomical forcings that are not yet included in the GCMs; … equilibrium climate sensitivity to CO2 doubling centered in 1.35 oC and varying between 0.9 oC and 2.0 oC.”

    Paper p.9, Fig 3: the funnel plot for CS plots the CS estimate against the precision, which is the inverse of the Std Error they computed in Appendix Table 11, I think.
    They say ” Notes: This figure excludes the single most precise estimate from the data set to zoom in on the relationship.” (Lindzen and Choi)
    It uses Se(low), whereas Fig 4 uses Se(high).

    Appendix Table 11, p.9 lists the data points, but doesn’t identify which studies they came from, which means one has to go dig them out.
    They are numbered from 2 to 20, with 1, 14 and 15 missing, and the number of data points per study varies.
    For instance, study 2 has one point, while study 19 (Lindzen and Choi) contributes 11, almost 1/4 of the total.
    It looks like the Paper just took L+C’s 0.7 main result, and added 10 of the 11 items from L+C Table 2, p.9, omitting the one called infinity.

    Here are the 8 of the 48 points with CS less than 2. I added the precisions (the inverse of the Se’s) for Se(low) and Se(up); the *’d ones are the group at top left in Fig 3.
    Study  CS    Low   Upper  z     Se(low)  Se(up)  Prec(low)  Prec(up)  Source / note
    *7     1.43  0.94  2.04   1.96  0.298    0.371   3.36       2.70      Andronova+Schlesinger(2001), p.6, case T1, NOT their preferred case T3
    11     1.54  0.30  7.73   1.96  0.754    3.763   1.33       0.27
    *12    1.50  1.00  2.30   1.96  0.304    0.486   3.29       2.06      Scafetta(2013a)
    *17    1.80  1.30  2.30   1.96  0.304    0.304   3.29       3.29      Hargreaves and Annan(2009), but really from Chylek+Lohmann, see below
    19     0.70  0.60  1.00   1.96  0.061    0.182   16.39      5.49      L+C Table 2, p.9; omitted from the graph, but (I think) not from the stats
    19     1.70  0.90  8.00   1.96  0.486    3.830   2.06       0.26      L+C Table 4, ECHAM5/MPI-OM, recalculated by L+C
    19     1.70  1.00  8.80   1.96  0.426    4.316   2.35       0.23      L+C Table 4, UKMO-HadGEM1, recalculated by L+C
    *20    1.35  0.90  2.00   1.96  0.274    0.395   3.65       2.53      Scafetta(2013b)
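
    For anyone who wants to check the arithmetic, here is how I reconstructed the Se and precision columns (my own sketch, not R+I’s code): treat each study’s reported interval as a confidence interval around the central estimate, so Se(low) = (CS − low)/z and Se(up) = (up − CS)/z, with precision the inverse of the Se. One oddity: although the z column above reads 1.96, as far as I can tell the listed Se values are only reproduced with z ≈ 1.645, i.e., a 90% interval – e.g., (1.43 − 0.94)/1.645 ≈ 0.298.

    ```python
    def se_and_precision(cs, low, up, z=1.645):
        # Standard errors and precisions implied by an asymmetric
        # confidence interval; z = 1.645 corresponds to a 90% CI,
        # which is what reproduces the table's numbers (not z = 1.96).
        se_low = (cs - low) / z
        se_up = (up - cs) / z
        return se_low, se_up, 1 / se_low, 1 / se_up

    # Study 7, Andronova+Schlesinger(2001), case T1:
    print(se_and_precision(1.43, 0.94, 2.04))
    # -> approx (0.298, 0.371, 3.36, 2.70), matching the row above
    ```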

    PART 2b
    Andronova+Schlesinger(2001) p.6 say:
    “Because T1 has no ASA forcing, its mean (μ = 1.43°C), median (m = 1.38°C), standard deviation (σ = 0.35°C), and skewness (s = 0.80) are small, and its 90% confidence interval, 0.94°C to 2.04°C, is narrower and shifted toward smaller values than the IPCC range of 1.5°C to 4.5°C. …
    If one were to make a “best estimate” of this, one would likely choose T3, which does include ASA forcing and does not include solar forcing.”
    The paper’s authors do not seem to understand this sort of experiment, in which an unrealistic assumption (no aerosol forcing) is made in order to see its effects.

    Paper p.4 says:
    ” Andronova & Schlesinger (2001) disagree with the third IPCC report and argue that climate sensitivity lies with 54% likelihood outside the IPCC range. They find that the 90% confidence interval for CS is 1 to 9.3.”
    It is not a plus for credibility to quote a 14-year-old paper arguing about the TAR.

    “Masters (2013) notes a robust relationship between the modeled rate of heat uptake in global oceans and the modeled climate sensitivity. This signals that researchers could have ways of influencing their results.”
    Climate scientists deal with complex data and have to make assumptions, and their results unsurprisingly differ.
    Masters writes:
    ” The observational estimate for climate sensitivity of 1.98 K [1.19–5.15 K] produced by this method is slightly lower than that of the IPCC AR4″ and then goes on to discuss the issues … with no hint, that I could see, of any suspicion that people were distorting their results.

    Hargreaves and Annan(2009) were *refuting* a paper and its estimates:
    ” The sensitivity of the climate system to external forcing has long been a subject of much research, the bulk of which has concluded that the climate sensitivity to a doubling of CO2 is likely to lie in the range 2–4.5 C (IPCC 2007: Summary for Policymakers, Solomon et al., 2007; Knutti and Hegerl, 2008). Chylek and Lohmann (2008) (hereafter CL08) claim to have found evidence that the true value is much lower, around 1.8C, and present two main arguments in support of their claim. …
    The climate sensitivity to a doubling of CO2 is now estimated to be about 3.5oC, with a 95% range of 2.6–4.5oC, compared to CL08’s estimate of 1.3-2.3oC.”

    PART 2c
    Paper p.4 makes a curious claim:
    “Estimates of climate change and climate sensitivity occur only rarely in the scientific literature. For instance, the fifth assessment report of the Intergovernmental Panel on Climate Change (IPCC) predicts only that climate sensitivity probably ranges from 1.5 to 4.5 with high confidence and is extremely unlikely to be lower than 1, again with high confidence (Stocker 2013).”

    They did not cite Knutti and Hegerl (2008), “The equilibrium sensitivity of the Earth’s temperature to radiation changes”, whose key figure summarized the various lines of evidence and constraints.

    Despite mentioning IPCC AR5, they didn’t cite IPCC AR5 (2013) WG I, Fig 10.20 (p.941) and the surrounding discussion, section 10.8. They gave specific page numbers for TAR and AR4, but not AR5.

    Among other things, the AR5 graphs identify the studies (unlike the paper under discussion), so, for instance, one can see Lindzen & Choi(2011) as a real outlier at left (small brown bar). The overall assessment is in 10.8.2.5; p.940 says:
    “Some recent studies suggest a low climate sensitivity (Chylek et al., 2007; Schwartz et al., 2007; Lindzen and Choi, 2009). However, these are based on problematic assumptions, for example, about the climate’s response time, the cause of climate fluctuations, or neglect uncertainty in forcing, observations and internal variability”
    AR5 critiqued the same Chylek+Lohmann (2008) paper critiqued by Hargreaves+Annan.

    The Paper has 16 studies, dated 2001-2013, but only 4 from 2010-2013.
    The AR5 Fig 10.20 uses 19 studies, of which 16 are from 2010-2013, and one each from 2008, 2008, 2009.
    The only reference common to both is L+C(2011), although AR5 has later papers by the same authors: Hargreaves and Annan (2012 vs 2009), Murphy et al (2009 vs 2004).

    ECS LESS THAN 1, EVEN NEGATIVE
    Like I said before:
    “The Appendix to the *earlier* paper says:
    “With asymmetric distributions this assumption does not necessarily hold, but there is no reason why climate sensitivity estimates should not be distributed symmetrically.”
    “After all, their figure 1 implies a non-zero probability of negative sensitivity :-)” (me)

    Figure 1’s dashed line is a normal distribution, which they think should be reflected in the papers.
    It has a mean of 3.27 and a peak density of ~0.2; since the peak density of a normal is 1/(σ√(2π)), that needs a standard deviation of about 2 (they got 1.96, which I used).
    So:
    Sensitivity  Density  Cumulative
    -2           0.01     0.00
    -1           0.02     0.01
     0           0.05     0.05   (thus 1 in 20 estimates ought to be negative)
     1           0.10     0.12   (1 in 8 could be less than 1 deg, which IPCC says is extremely unlikely)
     2           0.17     0.26
     3.27        0.20     0.50
     4           0.19     0.65
     etc.
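
    Those numbers are easy to verify; a quick check (my own sketch, using scipy) against the N(mean 3.27, sd 1.96) that their Figure 1’s dashed line implies:

    ```python
    from scipy.stats import norm

    cs = norm(loc=3.27, scale=1.96)   # the implied "no-bias" normal

    print(cs.pdf(3.27))   # ~0.204: the peak density, matching the ~0.2 read off Fig 1
    print(cs.cdf(0.0))    # ~0.048: about 1 in 20 estimates should be negative
    print(cs.cdf(1.0))    # ~0.123: about 1 in 8 should fall below 1 oC
    ```

    In other words, a symmetric normal centred on their mean necessarily puts non-trivial probability mass where it is very hard to argue physically that sensitivity can be.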

    Paper, p.8 says:
    “The left-hand side of the graph is completely missing and the shape of the solid line representing the kernel density of the CS estimates does not correspond to the normal density, shown as the long-dash dot line. All the figures indicate publication selectivity bias”

    I think something else is indicated, about the authors’ familiarity and understanding of the literature.

    They end:
    “A lower estimate of climate sensitivity would imply a lower estimate of the social cost of carbon. This, in turn, would influence the amount spent on reducing carbon dioxide in the atmosphere. This money could be spent on other areas of environmental protection.”

  90. Grant,
    It sounds like your point is similar to the point that Paul S (and maybe Steven) was making about the other ECS paper. Given the form of the function typically used and the inherent uncertainties in the variables, the uncertainty probably increases with increasing ECS estimate.
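
    A toy version of Grant’s worry (my own sketch; the numbers are made up): if every “study” reports a standard error roughly proportional to its central estimate – which is what multiplicative uncertainties give you – then the most precise estimates are automatically the low ones, and the left side of the funnel empties out with no publication selection anywhere.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical central estimates scattered around 3.3; no selection applied.
    cs = rng.normal(3.3, 1.0, 200)
    cs = cs[cs > 0.5]            # keep physically plausible values only

    se = 0.25 * cs               # multiplicative uncertainty: Se grows with the estimate
    precision = 1.0 / se

    # The most "precise" studies are automatically the low-sensitivity ones:
    top = np.argsort(precision)[::-1][:5]
    print(cs[top])               # the five most precise estimates, all at the low end
    ```

    Plotting precision against the estimates would give exactly the one-sided funnel of their Figure 3, generated here with zero bias.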

  91. John,
    Very thorough, thanks.

  92. @grant
    Energy Economics welcomes replication, and there is always the Public Finance Review to fall back on.

  93. Richard,

    Energy Economics welcomes replication

    Except, if Grant is correct, then what Havranek et al. have found isn’t some indication of publication bias, but simply a property of this particular type of analysis. Given that the premise of the Havranek paper is quite simple, wouldn’t one expect an editor – especially one who actually works on this topic directly – to have noticed this potential fundamental flaw?

  94. @wotts
    An online appendix with data and code is available at
    http://meta-analysis.cz/scc
    Grant, indeed anyone, can download their stuff and test Grant’s hypothesis.

    I obviously recused myself. Ugur Soytas was the editor for this paper.

  95. I don’t think Grant has a hypothesis. I think he has a claim that there is something that they should have controlled for. Either Grant is right or he’s not, and either they did or they didn’t. I’m insufficiently expert to know whether Grant is right or not and whether they controlled for this or not. Someone who does have the expertise could probably quite easily clarify this.

  96. Richard,
    Unless I’m missing some subtlety, I think you may have linked to the wrong appendix.

  97. izen says:

    @-Rob Nicholls
    “I just re-read Izen’s August 23, 2015 at 2:01 pm…I think it may have been ever so slightly tongue in cheek and I did not realise it earlier.”

    No, it was only written to seem tongue in cheek…
    But I only joke about things I take seriously.
    http://www.econ.qmul.ac.uk/papers/doc/wp669.pdf

  98. John Mashey says:

    Reckova+Irsova took 11 points from Lindzen+Choi(2011):
    1 was L+C’s own computed value (the low-CS, high-precision outlier);
    10 were from Table 4, where L+C recomputed other people’s results, which changed them from the values in AR4. Interestingly:
    3 of the 10 got increased by factors of 2.36-3.43, generating the 3 largest outliers at the right side of the Paper’s Fig 3 funnel plot (7.9, 8.1, 10.4).
    The other two right-side outliers (6.1 and 7.53) are from Gregory et al(2001) and Andronova et al(2002), i.e., old.
    Of the remaining 7 from L+C, 6 were decreased by factors of 0.39-0.93, i.e., were shifted to the left.

    Conclusion for this: the paper relies on Lindzen+Choi for nearly a quarter of its data, and L+C provides 3 of the 5 outliers at right.
    The outliers at left in Fig 3 include 2 from Scafetta, and one from Chylek et al that was explicitly refuted by Hargreaves and Annan.

  99. John,
    Wow, okay, thanks. That’s very thorough. I hadn’t realised that it relied so heavily on work that’s been heavily criticised/refuted. That Lindzen & Choi Table 4 is bizarre. Take a bunch of estimates from actual models, and then do some other analysis that completely changes the estimates.

  100. John Mashey says:

    And don’t forget Scafetta (2013a,b) – some astro guy might take a quick look at those, as they provide 2 of the 4 upper-left points in Fig 3 of the paper.

  101. Okay, have just had a quick look. From the abstract,

    In contrast, the hypothesis that the climate is regulated by specific natural oscillations more accurately fits the GST records at multiple time scales. For example, a quasi 60-year natural oscillation simultaneously explains the 1850-1880, 1910-1940 and 1970-2000 warming periods, the 1880-1910 and 1940-1970 cooling periods and the post 2000 GST plateau. This hypothesis implies that about 50% of the ∼ 0.5 oC global surface warming observed from 1970 to 2000 was due to natural oscillations of the climate system, not to anthropogenic forcing as modeled by the CMIP3 and CMIP5 GCMs. Consequently, the climate sensitivity to CO2 doubling should be reduced by half, for example from the 2.0-4.5oC range (as claimed by the IPCC, 2007) to 1.0-2.3 oC with a likely median of ∼ 1.5oC instead of ∼ 3.0oC.

    and from the Conclusions

    The physical origin of the detected climatic oscillations is currently uncertain, but in this paper it has been argued that they may be astronomically induced. This conclusion derives from the coherence found among astronomical and climate oscillations from the decadal to the millennial time scales.
    So, a curve fitting exercise with no basis in physics. Sounds like garbage to me.

  102. @Richard,

    Thanks, I do intend to take up the invitation… Just as soon as I submit my thesis.

    (If all goes to plan, that will be within the next two to three weeks.)

  103. John Mashey says:

    ATTP: ahh, but that was Scafetta(2013a), “cycles”. Even more interesting to an astro guy should be Scafetta(2013b) (“planets”).
    “Global surface temperature records (e.g. HadCRUT4) since 1850 are characterized by climatic oscillations synchronous with specific solar, planetary and lunar harmonics superimposed on a background warming modulation. … As an alternate, an empirical model is proposed that uses: (1) a specific set of decadal, multidecadal, secular and millennial astronomic harmonics to simulate the observed climatic oscillations; (2) a 0.45 attenuation of the GCM ensemble mean simulations to model the anthropogenic and volcano forcing effects. The proposed empirical model outperforms the GCMs by better hind-casting the observed 1850-2012 climatic patterns. It is found that: (1) about 50-60% of the warming observed since 1850 and since 1970 was induced by natural oscillations likely resulting from harmonic astronomical forcings that are not yet included in the GCMs; (2) a 2000-2040 approximately steady projected temperature; (3) a 2000-2100 projected warming ranging between 0.3 oC and 1.6 oC, which is significantly lower than the IPCC GCM ensemble mean projected warming of 1.1 oC to 4.1 oC; (4) an equilibrium climate sensitivity to CO2 doubling centered in 1.35 oC and varying between 0.9 oC and 2.0 oC.”
    It also bashes the hockey stick in favor of Lamb(1965) and, in 2013, repeats McIntyre+McKitrick’s claims:
    “However, since 2005 a number of studies confirmed the doubts of Soon and Baliunas [36] about a diffused MWP and demonstrated: (1) Mann’s algorithm contained a mathematical error that nearly always produces hockey-stick shapes even from random data [37]”
    The latter statement is false, since in fact there was no such error.

  104. Wow, I hadn’t realised that that claim about Mann’s algorithm had made it into the published literature. I had thought it was confined mainly to the blogosphere and sometimes, the media.

  105. @grant
    Great!

    PFR has a good template for replication papers: http://pfr.sagepub.com/site/includefiles/PFR_CALL.pdf

  106. John Mashey says:

    ATTP: well, it was in Energy and Environment.

  107. John Mashey says:

    Bottom line, from previous comments plus a clearer description of several problems,

    1) Unnecessary fog around the data points.
    Paper: graphs with unlabeled points.
    Appendix: a list of papers, and a Table with 48 points and 16 Study IDs, but no authors/dates.
    Not all estimates are of the same nature and not all are equally credible (2 are “climastrology”, for example), and the thrust of the paper depends on the 5 data points (*’d below) with low sensitivity and high precision, i.e., the upper left corner of the graph ATTP showed.
    In practice, a reader has to find the papers, search them for the numbers, and then associate Study IDs with papers, just to figure out the dates, but also to assess the nature of the studies.

    2) I did that and an interesting pattern emerged.
    a) 10 of the 16 papers were from 2001-2006, mentioned in IPCC AR4(2007), most in Table 9.3.
    One of the top-left papers (*Andronova+Schlesinger(2001)) was from this group.

    Needless to say, in 2015, most of these studies have been superseded, often by their own authors.

    b) IPCC AR5(2013) Fig 10.20(b) listed 17 distinct papers on sensitivity (+2 twice), which included zero (0) of the earlier papers in AR4. Of those, Reckova+Irsova included *Lindzen+Choi(2011) and Schmittner(2011) (some ambiguity, since AR5 referenced Schmittner(2012)). Basically, R+I managed to ignore AR5… and instead added:
    *Hargreaves+Annan(2009), really a refutation of Chylek and Lohmann’s too-low number
    Huber(2011), a PhD thesis
    *Scafetta(2013a)
    *Scafetta(2013b)

    c) So, 10 of the 16 studies were considered obsolete by the IPCC.
    R+I almost entirely ignored AR5’s list of modern studies.
    The 5 upper-left numbers include the oldest study (2001), the dubious Lindzen+Choi, a refuted number, and 2 “climastrology” papers.
    This is not exactly a reasonable literature analysis…

    3) The following gives the year, Study ID, and data-point count from the R+I Appendix, Table 11, plus the authors.
    *’d are the 5 studies from which the top-left data points came.
    Year  Study#  Count  Authors
    2001 7 4 *Andronova+Schlesinger
    2002 6 1 Gregory
    2004 8 2 Murphy
    2005 2 1 Frame
    2005 5 1 Piani
    2005 11 3 Wigley
    2006 4 2 Forest
    2006 16 2 Hegerl
    2006 3 6 Knutti
    2006 10 2 Webb
    =====================
    2007 AR5 – Tomassini et al
    2008 AR5 – Chylek and Lohmann
    2009 17 5 *Hargreaves+Annan (refuting C+L)
    2010 AR5 – Murphy et al
    2010 AR5 – Bender et al
    2010 AR5 – Lin et al
    2010 AR5 – Holden et al
    2010 AR5 – Kohler et al
    2011 13 2 Huber
    2011 19 11 *Lindzen+Choi (+AR5)
    2011 18 4 Schmittner (+AR5)
    2012 AR5 – Aldrin et al
    2012 AR5 – Olson et al
    2012 AR5 – Schwartz
    2012 AR5 – Hargreaves et al
    2012 AR5 – Palaeosens
    2012 AR5 – Aldrin et al (2nd time)
    2012 AR5 – Olson et al (2nd time)
    2013 12 1 *Scafetta(a)
    2013 20 1 *Scafetta(b)
    2013 AR5 – Lewis
    2013 AR5 – Otto et al
    2013 AR5 – Libardoni+Forest

  108. Ethan Allen says:

    Well, I’ve outed myself over at RR, so whatever.

    John Mashey,

    Great job.

    You might also like this one:

    Charles University in Prague
    Faculty of Social Sciences
    Institute of Economic Studies
    BACHELOR THESIS
    Publication Bias in Measuring Anthropogenic Climate Change
    Author: Dominika Reckova
    Advisor: PhDr. Tomas Havranek, Ph.D.
    Academic Year: 2013/2014

    https://is.cuni.cz/webapps/zzp/download/130130738
    (it will start to automatically download, at least it did on my pc as BPTX_2012_2_11230_0_356327_0_134657.pdf)

    Table 4.1: List of primary studies used (p. 15)
    Notes: The search for primary studies was terminated on March 3, 2014.
    (same date as in the draft paper)

    Good luck

  109. Ethan,
    Isn’t that the paper that’s being discussed, or am I missing something?

  110. Ethan Allen says:

    ATTP,

    Yes, but it’s an earlier version, the Bachelor Thesis of Reckova, not the various (E&E) draft papers floating about (it might contain other stuff not mentioned in said draft).

    Also, Zuzana Irsova (maiden name) is Zuzana Havrankova (married name); she is married to Tomas Havranek:

    http://ies.fsv.cuni.cz/en/staff/havranek (PhD in 2013)
    http://ies.fsv.cuni.cz/en/staff/irsova (PhD in 2015)

    Just, you know, a curiosity.

  111. Ethan,
    I see, thanks. I’ll have a look.

  112. fourecks says:

    @RichardTol

    “@wotts
    Yes, we have ethics approval for this survey.”

    I find it very difficult to believe that an institutional ethics committee would give ethical approval to an online study where the provided link gives no information about who is carrying out the research, whom to ask for more information, or whom to contact if you have concerns.

  113. fourecks,
    I’m taking Richard at his word. Maybe Eli will email Sussex and get confirmation?

  114. fourecks says:

    Maybe gremlins intervened in the “information for participants” section.

  115. John Mashey says:

    Ethan: good finds.
    I feel sorry for badly-supervised students like Reckova.
