Short memory?

To follow up on my earlier post, I thought I'd quickly write another one, as it almost seems as though Richard Tol has forgotten that he published – last year – a correction to his own 2009 meta-analysis.

In his 2009 meta-analysis, the figure on the right appeared. In today’s BBC article, Matt Ridley says:

The literature is very clear; 2C is when we start to get harm. Up until then we get benefit

Which appears to be based on Tol’s 2009 paper.

In Tol’s 2014 correction the figure on the left appears. Now, unless I’m missing something, this would seem to no longer be consistent with a suggestion that it is only at 2C that we will start to get harm.

In this Bishop Hill post, however, Richard Tol says:

Matt and I disagree on many things, but not here: Matt referred to the point were the impact turns negative (about 2K), I referred to the point where the incremental impact turns negative (about 1K). You can also refer to the point where the impact turns significant and negative (about 4K).

So, has Richard forgotten his correction to his own paper?

I’ll also leave it as an exercise for the reader to discover whose study produced about the only positive data point (some of the ones near 0 are – I think – just positive), located at +2.5 for 1°C of warming.

This entry was posted in Satire. Bookmark the permalink.

92 Responses to Short memory?

  1. Ignoring the various fits through the data points (not all of which are independent), the idea that “the literature is very clear” when most of the studies show negative benefits is rather bizarre.

  2. izen says:

    So Tol and Ridley agree that 2K is when the impact of warming turns negative.
    Up until that point it is positive. But Tol makes the point that beyond 1K the INCREMENTAL impact turns negative. From now on we are losing the benefits we have so far accrued from AGW.

    So we have, according to both Tol and Ridley, reached the peak benefit we can expect from AGW. From now on further warming will be reversing the benefits we have already enjoyed.

    I await with interest any study that has measured these benefits from the warming so far. Are there measurable effects on economic performance that are detectable above the natural noise of the boom-and-bust cycles?

  3. Are there measurable effects on economic performance that are detectable above the natural noise of the boom-and-bust cycles?

    We will most likely have 1 degree of warming this year. I am sure the friends of Tol are burning to know whether the positive economic effects of 1 degree have come true. If it is not falsifiable I am sure they will loudly protest that economics is not a science and start a FOIA harassment campaign against well-known economists.

  4. What is really infuriating about the likes of Matt Ridley is that he does this ‘shape-shifting’ trick: changing the frame, I assume, to avoid being whacked in the never-ending game of ‘whack-a-mole’.

    On one day the science is wrong – the world is not warming.

    On another day – so what if it’s warming, enjoy the benefits.

    On another day – ok, there may be some downsides if it gets really warm, but that’s good because it will spur economic activity and we can adapt.

    Today is the turn of ‘the world will get greener’ gambit. But Matt, wouldn’t that show up in a flattening of the Keeling Curve? Why is it stubbornly rising in line with man-made CO2 emissions?

    Btw I noticed an interview with Matt Ridley from June …

    where he gets hopelessly confused about water in the atmosphere. Water can exist in the atmosphere as an invisible gas/vapour (which acts as a greenhouse gas to amplify warming due to CO2, because as the oceans warm the atmosphere holds more water) or in a condensed form as clouds (which have positive and negative effects that are less well understood but broadly appear to cancel each other out).

    He seems only to recognise that clouds can have an impact, thereby ignoring the enormous impact of water vapour entirely. An extraordinary omission. So it is he, not ‘they’, who is getting it wrong by a factor of 3:

    “They [climate scientists] are saying that small amount of warming will trigger a further warming, through the effect mainly of water vapor and clouds. In other words, if you warm up the earth by 1 degree, you will get more water vapor in the atmosphere, and that water vapor is itself a greenhouse gas and will cause you to treble the amount of warming you are getting.

    Now, that’s the bit that lukewarmers like me challenge.

    Because we say, ‘Look, the evidence would not seem the same, the increases in water vapor in the right parts of the atmosphere–you have to know which parts of the atmosphere you are looking at–to justify that. And nor are you seeing the changes in cloud cover that justify these positive-feedback assumptions. Some clouds amplify warming; some clouds do the opposite–they would actually dampen warming. And most of the evidence would seem to suggest, to date, that clouds are actually having a dampening effect on warming. So, you know, we are getting a little bit of warming as a result of carbon dioxide. The clouds are making sure that warming isn’t very fast. And they’re certainly not exaggerating or amplifying it. So there’s very, very weak science to support that assumption of a trebling.”

  5. dana1981 says:

    I suppose Tol could argue that a slightly positive welfare impact is within the range of possible outcomes at 2°C in his corrected version? The most likely outcome is negative, however, and a severely negative welfare impact is also within the range of possible 2°C outcomes.

    I think it’s a good question – is he forgetting this correction? Intentionally ignoring it? Taking liberties with the truth by focusing on the best possible outcome while glossing over the most likely and worst possibilities?

  6. Dana,
    I suspect that his argument might be that, because the 95% confidence interval just extends above the x-axis, we can’t reject the hypothesis that warming up to 2C could have a net positive benefit. You would like to think that someone with Tol’s credentials would recognise that not being able to reject that hypothesis does not justify the claim that warming up to 2C will have a net positive benefit. Even Richard should understand that in standard hypothesis testing you try to determine whether you can reject a hypothesis; you don’t accept it.
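    To make that concrete with a toy example (invented numbers, not the actual estimates behind the figure): a confidence interval that just pokes above zero doesn’t turn a negative central estimate into a benefit.

```python
import numpy as np

# Invented welfare-impact estimates (% of GDP) at 2C of warming --
# illustrative only, not the actual data behind the figure.
impacts = np.array([0.1, -0.5, -1.2, 0.3, -0.8, -0.2, -1.5, 0.5, -0.4, -0.9])

mean = impacts.mean()
sem = impacts.std(ddof=1) / np.sqrt(len(impacts))  # standard error of mean
t_crit = 2.262  # two-sided 95% t critical value for 9 degrees of freedom
ci_low, ci_high = mean - t_crit * sem, mean + t_crit * sem

# The interval just pokes above zero, so "net impact >= 0" cannot be
# rejected at the 5% level -- but the central estimate is still negative,
# so this is no basis for claiming a net benefit.
print(f"mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```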

  7. It’s bad of Tol to ‘forget’ that he’d acknowledged this as an error, but by focusing on a single error the whole discussion is giving far too much credit to Tol’s analysis. Most of the projections Tol compared were from the same family of integrated assessment models. Even the “correct” data points are hard to call independent estimates, being outputs of essentially the same machinery, tweaked in slightly different ways – to run a regression on them and create confidence intervals is really a bit of a joke. As for the models themselves, here’s how Professor Robert Pindyck of MIT begins his paper “Climate Change Policy: What Do the Models Tell Us?” (Journal of Economic Literature, 2013):

    “Very little. A plethora of integrated assessment models (IAMs) have been constructed and used to estimate the social cost of carbon (SCC) and evaluate alternative abatement policies. These models have crucial flaws that make them close to useless as tools for policy analysis: certain inputs (e.g., the discount rate) are arbitrary, but have huge effects on the SCC estimates the models produce; the models’ descriptions of the impact of climate change are completely ad hoc, with no theoretical or empirical foundation; and the models can tell us nothing about the most important driver of the SCC, the possibility of a catastrophic climate outcome. IAM-based analyses of climate policy create a perception of knowledge and precision, but that perception is illusory and misleading.”

  8. Frederick,
    Thanks for the comment. Yes, I’m aware of those issues too. In fact, as I understand it, IAMs are essentially linear perturbation models (and that may be a slightly generous description too), which essentially means that the underlying assumption is that the impacts will be small; or rather, by design they are unable to determine if the impacts could be large.

  9. ATTP – have you seen this interview of RT by Roger Harrabin?
    Quite interesting. RT clearly gives the impression that he agrees with climate science predictions of warming, and believes we are heading for 3, 4 or 5C. Why then does he spend time challenging these predictions elsewhere? Just to be annoying?
    He thinks it will take until at least 2100 to decarbonise our economies. He believes in a carbon tax, because it’s a simple mechanism that can’t [in my words] be gamed.
    But I am curious why he points out uncertainties in models of the planetary system, while showing great faith in the economic/impact models he uses (his are “uncontroversial findings”).
    His answer to the impacts? Economic development, because then poor people won’t be poor and can adapt like the rest of us will have to. If we adapt it won’t be so bad (except for the species that will die, and that is a concern).
    Does he really believe that it is harder to decarbonise energy than to drag the whole world out of poverty?
    Time to tune in to BBC Radio 4.

  10. Richard,
    Yes, I have seen that. It is rather strange that Richard seems to say moderately reasonable things in those circumstances and then utterly bonkers things elsewhere. I’m slowly heading back towards it being one massive joke.

    But I am curious why he points out uncertainties in models of the planetary system, while showing great faith in the economic/impact models he uses

    Yes, this does seem rather blinkered.

  11. This bit of the interview was interesting. When discussing Bangladesh:

    RH: It would be engineering on a scale completely unprecedented.

    RT: So? I mean we always do things that are completely unprecedented, right?

    So, we can do unprecedented things when it comes to adapting to climate change, but decarbonising is just beyond us. Why are some people so insistent that we are simply incapable – in the short/medium term at least – of finding a way of producing energy that doesn’t require burning fossil fuels? Surely we should be excited about the possibility of moving into the 21st century, not insist on staying in the 19th?

  12. Rick,
    Here’s a question for you to ponder. If we were actively trying to change our climate, presumably we’d want to be pretty sure that the changes would be beneficial; if not, we’d probably leave it as it is. Why then, if we are unintentionally changing our climate, does the possibility that some of the changes could be beneficial factor in? We don’t know – with high confidence – that this is the case.

  13. ATTP – Roger Harrabin obviously did several interviews and has collected the transcripts. The edited programme (the first in a series), which I just listened to, is at …
    I was fearing one of those ‘false balance’ fests but was surprised at how good it was.

    He could have called out RT on his ‘London like Barcelona’ quip, which reveals he really does not appreciate the risks. The Met Office says that the risk of a 2003-like heat wave in Europe (the one in which around 70,000 died) has risen from a 1-in-50-year event to a 1-in-5-year event over the last 10–15 years. Richard Tol is obviously referring to the average temperature change, not the extremes.

    I wonder if his models for impacts on plants also miss this? For example, heat stress can have big impacts, and would not be well catered for by a “linear perturbation model” (to use your characterisation). The Nature paper referred to in …
    predicts a 6% drop in wheat crop yields for every 1C rise in the global average. That’s massive.

    It seems difficult to reconcile impacts on this scale with RT’s optimism.

  14. izen says:


    Thanks for the link.
    Two egregious mistakes in the Abstract, and reading further… sigh.

    -“…we find that changes to be expected from the widely discussed, allegedly “dangerous” two degrees Celsius of global warming are both familiar and small. They are equivalent to moving from Wisconsin to Michigan, or Virginia to North Carolina, or more generally 180 miles south.”

    This is the ‘London will become like Barcelona’ mistake.
    It derives, I think, from the use of the global mean surface temperature as a single measure of climate change. There are rather a lot of obvious reasons why the GLOBAL rise in average temperature is NOT comparable to a local average temperature change. A lot more than just the summer/winter local averages change when the global surface temperature average changes. Using the global average warming as the measure of local change is about as daft as you can get.

    -” On balance the nation benefits slightly. Regional differences are large, with northerners’ gains roughly equivalent to a 4% to 6% increase in their GDP, while southerners losses are about the same. These changes are important, in and of themselves about as large as the combined financial implications of all other aspects of global warming. ”

    On first reading this I couldn’t see how a poll on satisfaction with the summer/winter temperatures in various locations could be converted into gains and losses in GDP with financial implications.
    However, down in the detail, they reveal the magical method…

    After several pages of nonsense about how a 2degC rise in global temperatures would be like moving from Minnesota to Michigan and the results of subjective satisfaction polling they get to this;-

    -“How important are these effects – is climate satisfaction something that matters a lot, like love or marriage, or something of little consequence? And how does it weigh in the balance compared to more familiar metrics, such as monetary gains and losses? Logic and a plethora of research suggests Jeremy Bentham’s answer: the key to evaluating something’s moral and policy impact lies in its impact on human happiness (Bentham 1780[1907];”

    Then a long list of citations supporting the importance of human happiness (subjective satisfaction index) in maximising human well-being. Then this-

    -” Our implementation of this approach for valuing climate satisfaction builds on a small but promising literature combining climate data with survey research (Barreca 2008; Frijters and Van Praag 1998; Pray 2007; Rehdanz and Maddison 2005; Van Praag 1988).

    Regression analysis on our data shows that satisfaction with climate has a powerful impact on well-being (Table 3). Each climate satisfaction point produces just under one-fifth (0.183) of a life satisfaction point (column 1). In contrast, $1000 in income produces only a third as much life satisfaction (0.054; column 2). Thus it takes about $3000 in income to produce as much well-being as is produced by one climate satisfaction point (0.183 / 0.054 = $3000 approximately).”

    So if moving 180 miles south can gain people in the Northern States the satisfaction of $3000, why have they not all just moved 180 miles south? Or all moved to North Carolina, which has the best summer AND winter subjective seasonal satisfaction index?
    California has a smaller, and theoretically better seasonal range that might be expected to give a better satisfaction index, but I guess all those Kaliforni-liberals are just inherently unsatisfied.

    The paper does go on to make the point that such conversions of satisfaction with the local summer/winter temperatures to a monetary value do need to be weighted for income. So because a rich person needs more money to get the same satisfaction as a poor person, warmer winters in the Northern States produce more financial benefit because the satisfaction is worth more to a richer population…

    I find it rather depressing that a paper would be cited as supporting the measurable benefits of warming when it is such a farrago of mistaken comparisons – global average surface temperature rise to local seasonal means; subjective satisfaction indices derived from poll results about how much people like the local seasonal weather; and a conversion of those survey results into financial measures of GDP benefit.
    It’s abysmal.

  15. Michael Hauber says:


    The study you refer to certainly raises a valid point that winter will be much more pleasant for the majority of USA citizens, and that this may outweigh the increase in unpleasant heat during summer. However several points need to be considered:

    – predicted changes in a survey of satisfaction with summer and winter temperature does not reflect the possibility of dangerous climate change. Issues such as drought, ecological disruption, hurricanes etc also need to be considered
    – Local warming for most of the USA is predicted to be significantly higher than global warming. The study assumes that the warming amounts are the same.
    – The study excludes the impact of satisfaction on Florida, as nothing in the USA is hot enough to compare to future Florida and so predict the future satisfaction. Never mind the amount of future Florida that will be underwater.
    – A large part of the world’s population lives in countries as warm as or warmer than Florida, such as India, Indonesia, Brazil, Nigeria, Bangladesh, Mexico, the Philippines and Ethiopia, at which point I’m already at 2.5 billion people. I’m sure I could find more to count in areas such as the Middle East and southern parts of China and Japan.

  16. dikranmarsupial says:

    “RT: So? I mean we always do things that are completely unprecedented, right? ”

    No, actually most of the time we apply sensible tried and tested engineering, and even then we sometimes get it wrong (Tacoma Narrows?). We very rarely do anything completely unprecedented in engineering, and when we do it has a tendency to be very expensive and a bit risky (especially if you try to avoid the “very expensive bit”).

    So, who is going to pay for this completely unprecedented ultra large scale engineering project, Bangladesh itself (142/187 in terms of GDP per capita)?

  17. Dikran,
    Indeed. Bangladesh’s GDP per capita is about $3400. If it were to get 5 times wealthier, it would be the same as Azerbaijan is today. 10 times wealthier and it would be about the same as Spain is today.

  18. izen says:

    I thought I recognised a name or two from that paper on the climate satisfaction index.
    Paul Frijters and van Praag are the source of the ‘small but promising literature combining climate data with survey research’.

    That original research used EU and Russian data, to which they arbitrarily applied a constructed regional weather index derived from temperature, humidity and rainfall. It goes downhill from there. But Frijters has form…

    His other writings reveal all the standard conspiracy ideation about the IPCC perpetuating a fraudulent picture of climate change;-

    -” The International Panel for Climate Change (IPCC) sponsored by the UN has managed to convince the scientific community that the earth’s climate is changing and that we’ve seen an increase in temperature of about .6 degrees in the last 150 years. It says we can expect further increases ranging from 2 to 10 degrees depending on which wild assumptions underlying the various models you care to follow.”

    He also seems to belong to that group of political ideologues who believe that BECAUSE AGW is global and therefore requires a globally coordinated response it MUST BE a hoax promoted by all those people who WANT a one world government.

    -“Another speculative thought would be that many people in this debate secretly like the idea of having some kind of all-powerful world organisation that would police a world-wide solution, i.e. they yearn for a worldwide dictator-of-sorts.”

    The concept that a problem may NEED to be tackled at the global level for valid reasons seems to be outside his ability to grasp. That the problem requires a global response is seen as prima facie evidence that it is an ideological hoax constructed to achieve that end. Cf. Lamar Smith.

    A suggestion for RickA: if you are going to cite a paper that supposedly supports your argument, it is better to pick something that is not the clear product of the denialist niche.

  19. Marco says:

    izen, you should read page 31. Now, I will gladly confess that I am not very good at economics, but to me it seems our dear sociologist completely ignores inflation in his “concrete example”.

  20. dikranmarsupial says:

    ATTP, if only we had an economics expert to explain who would pay for all the unprecedented engineering! ;o)

  21. I think we know the explanation already. It’s grrrrrowth!!!!

  22. izen says:

    Indeed, Grrrrowth cures all, as demonstrated by this bit of nonsense;-

    ” For example, even a modest 2% per year growth in the economy sustained over a century would leave the average American family with some $350,000 per year rather than today’s $50,000. ”

    But I found this ‘joke’ on page 13 particularly ironic.

    ” Global warming thus appears to be mostly a bad thing in summer. But it would be stretching things to call it “dangerous”. Florida is hot but millions live there happily throughout the summer; the danger is not the climate but the alligators.”

    Especially when those alligators are swimming in the streets of Miami due to flooding and sea level rise….

  23. izen says:


    The Florida alligator is a species that prefers freshwater. The combination of inland drought and sea level rise will probably decimate the alligator population.
    It’s the larger-growing and generally more aggressive saltwater crocodile that will be the problem.

  24. 0^0 says:

    Does anybody have a clue what RT actually meant when claiming to have been one of the first to show a human effect on global warming?

  25. o^o,
    Yes, I was planning a short post about it. Quite interesting, really.

  26. I wondered if you’d pop along and mention that. It’s a working paper which, as I understand it, is not peer-reviewed. Also, your piecewise-linear fit appears to show just how strongly that result depends on a single data point, which is from your 2002 paper. Even if we ignore that, it’s still hard to see why this is consistent with the statement from Ridley (which you apparently agree with)

    2C is when we start to get harm. Up until then we get benefit,

    especially as your own analysis suggests that we can’t reject a null hypothesis that we would see negative impacts starting now.

  27. dikranmarsupial says:

    Figure 1 looks interesting, it looks to me like the only reason for the piecewise linear model, rather than just a linear model is to incorporate the single datapoint (d’Arge 1979?). A more reasonable explanation is that that datapoint may be an outlier. I don’t think the data supports more than a linear model.

    In comparing the model fit between the new paper and the old one (quadratic model), does the criterion include a penalty term for the two additional parameters of the piecewise linear model?

  28. Dikran,
    Yes, and presumably Tol 2002 plays a role in where the break has to occur?

  29. Also, the idea that it’s linear out to 6.5C is clearly a rather unsupported assumption.

  30. @wotts
    That paper is conditionally accepted for a learned journal.

  31. Richard,
    Well, that’s good then.

    Dikran makes an interesting point though. The piecewise linear model appears to be justified by a single datapoint at -1C. Presumably the data point at +1, 2.3C strongly influences the break, and linear to +6.5C seems rather unsupported. Any comments?

  32. dikranmarsupial says:

    Also the datapoint that the piecewise linear model is needed to accommodate appears to be the oldest study. I would have thought that the studies would become more reliable over time (as more research is done) and hence d’Arge 1979 is perhaps the datapoint in which we should have less confidence than the others?

    However, first I’d like to know if there was a penalty term; it isn’t surprising that you can get a better fit to the data (which does not necessarily imply a better model) by using additional parameters, and the improvement looks fairly small to me. After all, with four (complex) parameters you can draw an elephant.

  33. 0^0 says:

    What do “conditionally accepted” and “learned” mean in layman terms?

  34. “Conditionally accepted” probably means that the reviewers’ reports have been received and are largely positive; hence, you would expect the paper to be accepted once you’ve responded. “Learned” probably just means a peer-reviewed journal.

  35. dana1981 says:

    Indeed the piecewise linear function just seems bizarre, obviously unduly influenced by two data points, one from a paper published 13 years ago, and another published before I was born (36 years ago).

    Also worth noting: Tol (2002) falls outside the model 95% confidence interval.

    I find it amusing that the individual who criticizes the statistical methods in so many other papers could publish this one with a straight face.

  36. anoilman says:

    Back to reality… I came across a very interesting presentation on food prices;

    It compares all the different models out there for food production. It factors in increased production due to agricultural improvements, population growth, and finally losses to global warming. The result is a pretty much unanimous increase in food prices.

    Equally interesting is that current (military) hot spots in the world will also be taking the brunt of food losses. I’d like to also point out that they will be seeing a decline in oil revenue to keep their people fed.

    I wonder how much Tol priced in war and military efforts? Does he know who will win? Does he understand that diplomacy will fail because you can’t ask a nation to chill while its population is starving? They’ll die either way.

  37. RickA says:

    ATTP asked “Why then, if we are unintentionally changing our climate, does the possibility that some of the changes could be beneficial then factor in?”

    I agree with your main point – which is that if you do something on purpose it is a good idea to evaluate what the consequences of that deliberate action will be.

    If you do something by accident or unintentionally – of course you cannot evaluate what the consequences will be – otherwise it would be on purpose rather than an accident.

    So if we decide to do some geoengineering – we will have to study it to make sure whatever we propose doing doesn’t have unforeseen consequences which are worse than the disease.

    I guess my main answer would be that inertia controls.

    We burn hydrocarbons currently and that is business as usual.

    To decide to burn less hydrocarbons requires a change of course – so that requires thought.

    Staying the course (business as usual) is the default.

    Not very satisfying for those who think we are headed for an iceberg – but that is my answer.

  38. dikranmarsupial says:

  38. Tol (2002) is also the only one that suggests that beneficial climate change exists at all. My intuition would be that society organizes to exploit the conditions in which it developed, but is not so finely tuned that normal inter-annual variability causes it significant problems on a regular basis. Under that assumption, one would expect the effect of climate change to be negligible over a fair range around the pre-industrial average, and then to produce increasing losses once some threshold was reached. In that case a three-parameter piecewise linear model, where the slope is zero before the breakpoint, would be better (treating Tol 2002 as the outlier instead of d’Arge 1979). It might not fit the data quite as well, but it would also have one fewer parameter and hence perhaps be favoured by Occam’s razor.

  39. Rick,

    We burn hydrocarbons currently and that is business as usual.

    To decide to burn less hydrocarbons requires a change of course – so that requires thought.

    Staying the course (business as usual) is the default.

    Except we burn hydrocarbons to generate energy, not to release CO2 specifically. The release of CO2 into the atmosphere is likely to produce substantial changes to our climate. Suggesting that this is the status quo seems a little bizarre.

  40. RickA says:


    I am not sure how to respond to your last comment.

    It is the status quo to burn hydrocarbons – and I think that is self-evident.

    If you ask everybody in America (for example) to stop driving their cars, or turn off their heat and electricity so we do not burn hydrocarbons – I am afraid you would be ignored.

    In my view, the only way to burn fewer hydrocarbons is to invent some form of energy generation which is CHEAPER than our existing hydrocarbons (coal, oil and natural gas).

    If we had this invention then market forces would be working with us, rather than against us, and switching would occur automatically as the new technology was rolled out to the masses.

    Unfortunately – we have not invented this new way to produce energy (yet).

    Perhaps fusion will be that cheaper source of energy.

    Perhaps nuclear could be made cheaper than hydrocarbons (it is not yet).

    That is where we should be focusing our efforts (in my opinion).

  41. Rick,
    I’m not really asking anything. I’m simply pointing out that an unintended consequence of using hydrocarbons is climate change. Suggesting that we should simply accept that as being the status quo just seems a little bizarre.

  42. The piece-wise linear function is the best fit to the data. The left branch is obviously sensitive to a few data points, but the more interesting right branch is not.

    Anyway, the data are there for all to fit their favourite function to.

  43. Richard,
    You haven’t really answered Dikran’s question as to whether there was a penalty term, or not.

    but the more interesting right branch is not.

    Even if a linear fit is the best fit to the data, linear to a temperature rise of 6.5C seems a little bit of a stretch.

  44. Ethan Allen says:

    Hey, I found a cheaper source of energy than FF, an order of magnitude cheaper even.

    It emits 2X current anthro CO2 and 4X current anthro CH4 and 8X current anthro N2O and 666X current anthro CFCs.

    Now, I am pretty certain that this is a totally bankrupt idea, to PURPOSELY geoengineer a hothouse Earth. But that is what some Deniers are suggesting – not by accident, mind you, but on purpose.

    But it is CHEAPER.

    RickA, I got a car to sell you; it has no brakes, by design. It’s called the Humon-O-Crash car.

    But it is CHEAPER.

  45. dana1981 says:

    It’s all well and good to say ‘fit whatever function you like to the data’. The problem arises when you start making statements like ‘warming to 2°C is beneficial’ based on a poorly (at best) justified function choice. That statement isn’t supported by the data, it’s based on your chosen function.

  46. anoilman says:

    Ethan Allen, don’t forget to add 4 cigarette lighters, and skip the seat belts and the insurance. Just think of all the money you’d save! What a cash cow! Besides, only money is important.

  47. Richard, of course the piecewise linear model fits the calibration data better; it is a more complex model. However, if you are going to argue that the piecewise linear model is better than e.g. your previous quadratic model, then you need to show that the added complexity is justified by a much better fit. This is why statisticians use things like AIC as basic best practice in model fitting.

    So, did you use a complexity penalty, such as AIC in comparing models?
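    For what it’s worth, here is a toy version of that comparison in Python, with invented data points (NOT the actual dataset under discussion): the more flexible hinge models always achieve a lower RSS, and an AIC penalty is one standard way to check whether the improvement justifies the extra parameters (the breakpoint counted as one of them).

```python
import numpy as np

def aic(rss, n, k):
    # AIC for least-squares fits (Gaussian errors, constant terms dropped):
    # AIC = 2k + n * ln(RSS / n). Lower is better.
    return 2 * k + n * np.log(rss / n)

# Invented (warming, % welfare impact) points, loosely damage-curve shaped.
T = np.array([1.0, 1.0, 2.5, 2.5, 3.0, 3.0, 4.0, 5.0, 5.5, 6.0])
y = np.array([0.3, -0.1, -0.5, -1.3, -1.6, -2.2, -3.8, -6.1, -7.0, -8.5])
n = len(T)

def lstsq_rss(X):
    # Residual sum of squares of the least-squares fit y ~ X @ beta
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

# 2 parameters: straight line
rss_lin = lstsq_rss(np.column_stack([np.ones(n), T]))

breaks = np.linspace(1.5, 5.0, 71)  # grid search for the breakpoint

# 3 parameters: flat (zero slope) before the breakpoint, linear after
rss_flat = min(lstsq_rss(np.column_stack([np.ones(n),
                                          np.clip(T - b, 0.0, None)]))
               for b in breaks)

# 4 parameters: two free slopes joined at the breakpoint
rss_pw = min(lstsq_rss(np.column_stack([np.ones(n), T,
                                        np.clip(T - b, 0.0, None)]))
             for b in breaks)

for name, rss, k in [("linear", rss_lin, 2),
                     ("flat-then-linear", rss_flat, 3),
                     ("piecewise", rss_pw, 4)]:
    print(f"{name:17s} RSS={rss:6.3f}  AIC={aic(rss, n, k):7.2f}")
```

    The more complex models can only reduce the RSS (the 4-parameter design nests the simpler ones), which is exactly why a raw goodness-of-fit comparison proves nothing; only the penalised criterion can say whether the breakpoint is earning its keep.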

  48. Dikran, do you teach your grandma to suck eggs?

  49. Richard,
    I’ll post your comment only because you’re mentioned in the post. You could try answering what is an entirely reasonable question, or you could continue to illustrate why people hold the views of you that they do. Also, if your response to Dikran is implying that he’s in no position to question your work, then you might be illustrating why it tends to be so Gremlin infested.

  50. Richard, I suspect from your evasion that the answer is “no”, but don’t want to admit it. It is a shame that attempts to discuss science so often end up in this kind of unedifying behaviour. Your choice.

  51. Dikran, Wotts
    Once again we talk at cross purposes. I was trained as a statistician. I have taught statistics. I have published in statistical journals. I have written statistical software. Your null hypothesis should therefore be that I do not make elementary errors. And indeed, I counted my degrees of freedom. And indeed, the three-parameter function outperforms its two- and one-parameter alternatives — also when the performance criterion is appropriately corrected for the additional parameters fitted.

    (There is a more succinct way to write up the above: Do you teach your grandma to suck eggs?)

  52. I was trained as a statistician.

Appeals to authority are irritating, and I suspect Dikran was irritated too. However, given that you largely dismissed Andrew Gelman’s criticisms, I guess there are no statisticians that you feel have the credentials to comment on your work?

    There is a more succinct way to write up the above: Do you teach your grandma to suck eggs?

You might call it succinct. I would describe it differently.

  53. snarkrates says:

    Richard, what metrics did you use to account for the increased model complexity, and do you have the scores you got for your fits? It would seem to me that it would have been a good idea to include them in the publication.

  54. Willard says:

    > I was trained as a statistician. I have taught statistics. I have published in statistical journals. I have written statistical software. Your null hypothesis should therefore be that I do not make elementary errors.

You also have had Gremlins in your closet for many years, Richard. You also let Bjorn use these Gremlins for his think tank, a think tank in which you still participate. You still participate in the GWPF’s activities, a think tank that has promoted billions upon billions of Gremlins like yours. There are many other Gremlins-like entities we could mention, but what’s the point? That you’re cherrypicking facts right now to appeal to your own authority should suffice.

I duly submit that the null hypothesis you’re suggesting has been falsified a long time ago, and that it may be time for a new null.

    Oh, and I thought you were an “econometrician,” as you insisted at Andy’s.

  55. Marco says:

    “Your null hypothesis should therefore be that I do not make elementary errors.”

Like forgetting minus signs? Or forgetting to check whether there are ‘missing’ records in an excel file, meaning that the last and largest number is not necessarily the total number of records in that excel file? Or perhaps making elementary false assumptions in a calculation, resulting in a conclusion that there should be 400 additional abstracts that reject the notion that at least half the warming since the 1950s is anthropogenic, and then being unable to identify even 10% of those abstracts?

    Looks to me like the null hypothesis has been discarded some time ago, already…

  56. Richard, you still haven’t answered my question, did you include a penalty term in the performance statistics given in the paper? Yes or no.

    “Your null hypothesis should therefore be that I do not make elementary errors.”. Nonsense, I have been a scientist long enough to know that plenty of bad papers make it through peer review, and not to trust anything I read on the basis of the authors training. When I spot a problem I ask for clarification, and I don’t think it unreasonable to expect a straight answer in a scientific discussion.

    Besides, everybody makes elementary mistakes from time to time (including myself), and having the hubris to think you don’t is as good a recipe for gremlins as any.

  57. matt says:


    Ur latest is paywalled. Want to fill us in on the interesting bits? What differs from u and Pielke Jr? I heard (from u i think) the data gathering changed from 2005. How hard do u think it will be to link the pre/post 2005 data for future studies (and do u know why it changed – simply funding)?

  58. anoilman says:

    “Your null hypothesis should therefore be that I do not make elementary errors.”

    Oh! He’s like Mary Poppins! A fantasy make believe fictional character!

    The rest of us be like;

  59. Infopath says:

    “Your null hypothesis should therefore be that I do not make elementary errors.”

    Seems to me that working under this assumption pretty much ensures elementary errors like the ones pointed out by Marco up-thread. (Or are those NOT elementary errors?)

It is also consistent with an inability to admit them afterwards.

When I was a kid, a neighbor who was a race car driver drove me to the store once. I guess I expected the guy to slalom through traffic with his eyes closed. Instead, I never saw someone pay so much attention to the road. When I pointed this out to him, he said that if he drove assuming he was an amazing driver, he would not be acting as a good driver, and the odds of getting into an accident would be much higher. For a kid who wanted to tell his friends a cool story of hair-raising speed, this was very disappointing. But it makes for a great lesson to me now.

I honestly feel that Richard would be a much better statistician (or econometrician) if he worked under the assumption that he, as reality has repeatedly shown, is capable of elementary errors much like everyone else.

    Shifting this attitude would also expedite the admission and corrections of such errors, for the betterment of humanity.

Go, Richard!

  60. BBD says:


    I’m not ‘extremely stubborn and suspicious’. I am mauve, fluffy and almost comically benign. I appeal to Richard as my authority to support me in this assertion.

  61. Magma says:

    Your null hypothesis should therefore be that I do not make elementary errors.

    Understandably enough, Dr. Tol wishes people to focus on his more advanced ones.

  62. snarkrates says:

    Well, he has made some remarkable errors.

  63. Tom Curtis says:

    Late to the party, but here goes:

    1) Tol provides a list of relative likelihoods of different functions as Fig 2 in the working paper. The parameters for Tol 2009 differ from those published in Tol 2009 and those published in the update and correction of Tol 2009. I presume from this he has determined the best fit parameters for the data in the working paper to compare functions (as he ought to have done). Further, I presume he calculated relative likelihoods using the AIC as described here, or more probably some generalized variation of that approach.

More detail would be nice but it should not be necessary for replication. That said, Tol’s refusal to give more detail, or to even confirm the use of something like the AIC, is inconsistent with the standards he purports are required in his critique of the Cook et al Consensus paper.

2) Despite the relative likelihood of 0.84, (6 times greater than its nearest rival, Tol 09), the cost function in the working paper is not plausible. Specifically, for a temperature 6 C less than 2000 levels, the cost is estimated at 4.44% of Gross World Product (GWP). For a 12 C increase it is just 18.37%. That is, according to his model reversion to conditions of the LGM would cost less than 5% of GWP, while warming to conditions where approximately 30% of the globe faces conditions 100% lethal to all large mammals including humans due to heat stroke almost every year will only reduce GWP by less than 20%. Neither claim has any credibility. Ergo, at a minimum Tol’s model needs the addition of two further changes in trend for moderately low and very high temperatures. That would result in a further increase in the AIC, possibly resulting in a significant change in relative likelihoods. Against that, Tol 09 shows costs of 6.48% and 24.08% for the LGM and hothouse conditions respectively, which isn’t a great improvement. It is possible that other models will also require changes in parameters. As an alternative Tol might reject claims that the model in the working paper represents the actual damage function with respect to temperature – merely claiming instead that it is a sufficiently accurate approximation over a limited temperature range.

3) The working paper suffers from the standard pointlessness of many similar economic analyses. That is because cost by nation is weighted by national GDP rather than population. The vast majority of the world’s population under Tol’s analysis will suffer from significant relative declines in GDP due to global warming. But because northern, western nations which currently dominate world wealth will not do as badly, the “global” result becomes that of only limited loss. This distorted ethical stance (and it is an ethical stance, even if adopted as a default assumption) is exacerbated by using real GDP rather than PPP. The former represents value to investors whereas the latter represents impacts on the population. How significant this is can be seen by the reduction of the impact determined by Madison and Rehdanze (2011) from -11.5% in the correction of Tol 2009 to 5.1% in the working paper, the shift being primarily due to a shift from calculating cost in PPP terms to market exchange rate. Conversion of all costs to a PPP basis, to reflect the real cost to people’s lives, would greatly increase the damage function.

4) Tol has not included relevant cost estimates that show a far greater cost for temperature increase. Specifically, he has not included Dell, Jones and Olken (2012). He has also not included Burke et al (2015). The latter is at least excusable based on time of publication, but the former is not (at least on that basis). Inclusion of these more expensive estimates would have significantly increased the cost function (see pages 28 and 30 of this presentation).
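    [The “relative likelihood” Tom refers to in point 1 is usually computed from AIC differences; a minimal sketch, with hypothetical AIC values rather than anything from the working paper:]

    ```python
    import math

    def akaike_weights(aics):
        """Normalized Akaike weights: w_i = exp(-d_i/2) / sum_j exp(-d_j/2),
        where d_i = AIC_i - min(AIC). The weights sum to 1 and are read as
        the relative support for each model within the candidate set."""
        a_min = min(aics)
        rel = [math.exp((a_min - a) / 2.0) for a in aics]
        total = sum(rel)
        return [r / total for r in rel]

    # Hypothetical AIC scores for three candidate damage functions
    weights = akaike_weights([100.0, 103.6, 110.0])
    print(weights)
    ```

    A weight like 0.84 for the best model, six times its nearest rival, would arise from an AIC gap of a few units; how the gaps were actually computed (and whether a penalty was applied at all) is exactly the detail being asked for above.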

  64. dikranmarsupial says:

    Tom, Richard wrote ” And indeed, the three-parameter function outperforms its two- and one-parameter alternatives — also when the performance criterion is appropriately corrected for the additional parameters fitted.”

    The “also” seems to imply that the figures given in the working paper do not adjust for the additional parameters, but that the conclusions remain the same if a penalty is included. I disagree about the paper being reproducible, the paper needs to detail exactly how the models are fitted to the data and how the evaluation is performed, if only in an appendix or in the supplementary information. Or at least the author ought to be willing to give straight answers to straight questions, if that doesn’t happen, the likelihood outweighs any prior I might have in arriving at my posterior belief in the value of the paper ;o)

  65. the figures given in the working paper do not adjust for the additional parameters, but that the conclusions remain the same if a penalty is included.

    You should bear in mind that Richard has publicly stated that the correction to his 2009 paper does not change the conclusions, so when Richard implies that the conclusions remain the same one should possibly interpret that as the results are quite different, but I can find some statistically pedantic reason why they’re unchanged.

  66. @Tom
    1) I don’t test for model selection, so avoid Akaike and Pearson. Instead, I use a more appropriate Bayesian approach, of course correcting the degrees of freedom for the number of estimated parameters.
    2) The modest impact for immodest warming is driven by two papers: Nordhaus and Roson. If you want to protest, take it to them.
    3) The rationale is given in Schelling’s Choice and Consequence. If you want to see an alternative, read the papers by Fankhauser/Tol/Pearce.
    4) Dell and Burke do not study the impacts on human welfare, and arguably do not study the impact of climate change.

  67. @dikran
    Sorry. In this context, the word “also” signifies emphasis rather than “as well”.

  68. dikranmarsupial says:

    If Richard has published the paper with the unpenalised statistics, then that would be an elementary error (as the unpenalised statistics are not satisfactory evidence that the new model may be better than the old one, the penalised statistics would be). However in this case (if the paper is only conditionally accepted) he could substitute the penalised statistics (and an explanation of exactly how the models were fitted – there is more than one way of fitting a piecewise linear model – and evaluated).

    However, I think the question of the sensitivity of the model to individual datapoints is rather more important, especially as only one datapoint suggests there are any positive benefits, so the last part of the claim that “The literature is very clear; 2C is when we start to get harm. Up until then we get benefit ” is clearly not true.
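    [On Dikran’s point that “there is more than one way of fitting a piecewise linear model”: one common approach is a hinge regression with the breakpoint found by grid search. A sketch on synthetic data, not a reconstruction of the paper’s method:]

    ```python
    import numpy as np

    def fit_hinge(T, y, knots):
        """Fit y = b1*T + b2*max(T - k, 0) by least squares for each
        candidate knot k; return the (knot, coefficients, RSS) with the
        lowest RSS. Three parameters in all: two slopes plus the knot."""
        best = None
        for k in knots:
            X = np.column_stack([T, np.maximum(T - k, 0.0)])
            beta, res, rank, _ = np.linalg.lstsq(X, y, rcond=None)
            rss = float(res[0]) if res.size else float(np.sum((y - X @ beta) ** 2))
            if best is None or rss < best[2]:
                best = (k, beta, rss)
        return best

    # Synthetic data with a true break at T = 1
    rng = np.random.default_rng(1)
    T = np.linspace(0.0, 6.0, 30)
    y = 0.5 * T - 1.5 * np.maximum(T - 1.0, 0.0) + rng.normal(0.0, 0.2, T.size)

    knot, beta, rss = fit_hinge(T, y, np.linspace(0.5, 3.0, 26))
    print(knot, beta)
    ```

    Because the left branch of such a fit is determined by the few points below the knot, it is exactly the sensitivity to individual data points that matters; here the knot is treated as a fitted parameter, which is one reason the parameter count (and hence the penalty) is not a matter of taste.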

  69. dikranmarsupial says:

    Richard, so the figures given in the paper *DO* include the penalty for the number of parameters then? If so, what form did the penalty take?

  70. Dikran,
Indeed, surely the biggest issue with this (as Andrew Gelman has pointed out) is that the datapoints are not independent and not equivalent in value. Some – I think – are from updated versions of the same model, some are old and presumably have been superseded. This kind of analysis only really makes sense – I think – if you believe that the spread in data points is somehow representative of our level of uncertainty, rather than being partly because older models have been updated and calculations redone, and partly because some models have different underlying assumptions to other models.

  71. dikranmarsupial says:

    ATTP indeed, I think this is one of those occasions where fitting a model to the data is misleading and actually detracts from a simple plot of the datapoints, it makes it look as if we know more than we actually do by distracting from the data quality issues.

  72. I think the IPCC version did not fit anything to the datapoints. I wonder if that was why. Maybe Richard could let us know?

  73. @Wotts
    Old story. The IPCC can’t do original analysis. Fitting a curve is deemed “analysis” by the IPCC, and the discussion above suggests that indeed some people think this is a controversial thing to do.

  74. Richard,
    I see. They couldn’t use your 2009 paper’s results because there were new data points and some of the original ones were incorrect in your paper, and they couldn’t do an analysis of their own because that’s not their remit. On the other hand, if they know the form of the function (from your 2009 paper) and they know the new datapoints, then it’s hard to see how adding new datapoints and re-calculating the curve is somehow analysis, but anyway.

  75. dikranmarsupial says:

    I note that Richard STILL has not given a straight answer to my question (unless there is one in moderation).

    Richard, do the figures given in the paper include the penalty for the number of parameters, yes or no? If so, precisely what form did the penalty take?

  76. @Wotts
    Agreed. IPCC WG3 argues that computing an average is not new analysis, but correcting for sample selection bias is new analysis.

  77. snarkrates says:

    Richard, so you are saying you used the Bayesian information/Schwarz criterion? If so, do you have the scores, and why were they not published in the paper?

  78. Willard says:

    Econometry’s the new sophistry.

  79. dikranmarsupial says:

    O.K. looks like Richard is not willing to explicitly state whether the performance stats in his paper include a complexity penalty (and if so how it was computed). I suspect that means the answer is “no” and that he has made an elementary error after all.

  80. Dikran Marsupial says:

    Richard Tol (@RichardTol) says:
    June 14, 2016 at 7:31 am
    Listen very carefully. I will say this only once. I did answer that question. The fact that you did not recognize the answer behind a layer of sarcasm, does not mean the question was unanswered.

    O.K. Richard, as challenged, link to the URL on this thread where you stated what penalty term was used.

  81. Dikran Marsupial says:

    Still waiting for the URL Richard, I notice you are still posting on the other thread, so you are around. Could it be that there is no URL?

  82. dikranmarsupial says:

    I thought I’d re-read Richard’s paper, just in case he did come back and explain just how the piecewise linear model was constructed, including details of the penalty function, and I found a frankly hilarious torturing of data to support the conclusions. The paper says:

    “The 11 estimates for 2.5°C show that researchers disagree on the sign of the net impact: 3 are positive, and 8 negative.”

    If you check this out in Table 1, it turns out that 2 of the 3 “positive” ones are zero! LOL.

    Even without this, it is a rather nuanced presentation of the facts. The other positive one is 0.1% of GDP and the negative ones are -3.0%, -1.4%, -1.5%, -1.9%, -1.4%, -2.9%, -1.5% and -1.0%, so the 0.1% figure (which also comes from the same study as one of the zeros) isn’t nearly as confident of the sign as the other 8 which are quite definitely negative. I can see why Richard wanted to look at the sign, rather than the magnitude! Of course this is rather obvious if you just look at Figure 1, so it doesn’t say much for the reviewing standards of the “learned journal” that has conditionally accepted it.
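    [Dikran’s tally is easy to verify from the figures he quotes, with zero treated as its own category rather than as “positive”:]

    ```python
    # The eleven 2.5°C estimates quoted above, in % of GDP
    estimates = [0.1, 0.0, 0.0, -3.0, -1.4, -1.5, -1.9, -1.4, -2.9, -1.5, -1.0]

    pos = sum(1 for e in estimates if e > 0)    # strictly positive
    zero = sum(1 for e in estimates if e == 0)  # exactly zero
    neg = sum(1 for e in estimates if e < 0)    # strictly negative

    print(pos, zero, neg)  # 1 positive, 2 zero, 8 negative — not "3 positive"
    ```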

  83. dikranmarsupial says:

    From the same paragraph “The welfare change cause by climate change is equivalent to the welfare change caused by a change of income of a few percent.” which ignores the uncertainty in the estimates, for instance, according to Table 1, Nordhaus 1994a gives a lower bound of -21%. Ignoring uncertainties is not good statistics. It is also notable that the low and high ranges suggest the uncertainties are not symmetric, which would invalidate e.g. least-squares fitting methods.

I’m pretty sure (happy to be corrected) that Richard has said (or endorsed) that the impact is consistent with 0 out to 2K which – I think – is based on the 95% confidence interval encompassing 0. This seems a little odd, given that his analysis suggests an equal chance of an impact of -10%.

  85. dikranmarsupial says:

It is important to bear in mind that “is consistent with” just means “is not contradicted by”, which is a rather weak statement in scientific parlance (of course in a non-scientific context it tends to imply rather more than that, i.e. that it is somewhere between plausible and likely). The bias of not pointing out that the confidence interval suggests a benefit is more likely than not only up to about 1K (judging by eye from Figure 1) is rather obvious. And that is predicated largely on the one data point that suggests significant benefit, which just happens to be his own previous research. It is also predicated on the validity of the regression model, which is at best questionable as few of the basic assumptions of regression hold (e.g. data points are not independent, the uncertainty on the datapoints is not symmetrical (never mind Gaussian), the uncertainties on the datapoints are not equal etc.).

    “he uses statistics as a drunk uses a lamppost – more for support than illumination”. – Andrew Lang.

  86. dikranmarsupial says:

    Prof. Tol wrote “Your null hypothesis should therefore be that I do not make elementary errors.”

    I would venture that considering zero to be a positive number would be an elementary mistake. Richard, do you want to contest that?

  87. dikranmarsupial says:

    To be fair to Prof. Tol, in his correction he writes:

“I nonetheless highlight two differences between the old and the new results. First, unlike the original curve (Tol 2009, Figure 1) in which there were net benefits of climate change associated with warming below about 2°C, in the corrected and updated curve (Figure 2), impacts are always negative, at least.”

    But that does seem somewhat at variance with statements made outside the peer reviewed literature (see above).

  88. dikranmarsupial says:

Prof. Tol, I note in your most recent paper, table 1 gives lower bounds for Nordhaus 1994a as -21.0, for Plambeck and Hope 1996 as -13.1 and for Hope 2006 as -3, but Table 1 in your “Correction and Update: The Economic Effects of Climate Change” gives figures of -30, -11.4 and -2.7 respectively (some of the best estimate figures also differ slightly). I couldn’t find an obvious explanation for these discrepancies, please could you comment?

  89. dikranmarsupial says:

    Looking at Figure 1 again, it is interesting that the confidence interval is skewed downwards for warming, but skewed upwards for cooling. I suspect that this is a consequence of forcing the regression to pass exactly through zero, which is likely to be problematic where the datapoints in the study were apparently not obtained using exactly the same methodology (so they may not all agree in exactly what was regarded as the origin in terms of current economy/climate). Is there any reason to suppose that the estimate in d’Arge (1979) has higher uncertainty above than below, unlike all of the other studies? Can Prof. Tol comment on this please?

  90. dikranmarsupial says:

    Well given that Richard posted on another thread immediately after my post reminding him about the error in his paper, I think it is reasonable to assume that he saw the reminder but that he doesn’t care that there is an error in his paper, which is a tad ironic given his relentless pursuit of others over far more minor issues. Poor attitude IMHO.
