2+2=4, therefore Einstein is wrong!

This is a post I’d thought of writing for a while, but given the small furore over the error in the recent Cawley et al. (2015) paper, I thought I’d do so now. Let me first explain my title (which I had thought of calling “the 2+2=4 fallacy”). What I sometimes encounter are those who think that if you are to critique their work, you need to find some kind of actual mistake in one of their calculations. If you can’t, then they conclude that they’re right. The problem is that it’s quite possible to do a calculation that is correct and then draw conclusions that are not; hence the title.

A research study involves a number of important steps. You need to define the problem you want to solve. You need to set up the problem and define your assumptions. You need to collect your data and carry out your calculations/modelling. You then need to analyse your results, interpret your analysis, and draw your conclusions. However, you need to interpret your results and draw your conclusions in light of the assumptions that were made at the beginning. So, there are many aspects of a study that could be criticised. Just because a study has no explicit errors doesn’t mean the conclusions are correct. Similarly, just because a study has an error doesn’t mean the conclusions are wrong; it depends on the significance of the error and how it would influence the conclusions.

Now we come back to the motivation for this post: the error in the recent Cawley et al. paper. Cawley et al. (2015) was mainly a comment on a paper by Craig Loehle called A minimal model for estimating climate sensitivity. Let’s first consider Loehle (2014). It fits a three-component model to the surface temperatures since 1880. The model has two cyclical functions, one with a 20-year period and the other with a 60-year period, and a linear trend that is meant to represent the recovery from the Little Ice Age (LIA). After 1950, these no longer properly fit the temperature data, so another linear trend is introduced – starting in 1942 – which is meant to represent the temperature responding to the increased atmospheric CO2 concentration.
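To make the structure concrete, here is a minimal sketch of a Loehle-style decomposition in Python. The data below are synthetic and the parameter names are my own, so treat this as an illustration of the functional form only, not Loehle’s actual code or data.

    # A Loehle (2014)-style fit: two fixed-period cycles, a linear "LIA recovery"
    # trend, and a second linear ramp from 1942. Purely illustrative; the
    # "observations" are synthetic and the starting values are assumptions.
    import numpy as np
    from scipy.optimize import curve_fit

    def loehle_form(t, a20, p20, a60, p60, slope, const, anthro):
        ramp = np.where(t >= 1942, t - 1942.0, 0.0)           # post-1942 ramp
        return (a20 * np.sin(2 * np.pi * (t - p20) / 20.0)    # 20-year cycle
                + a60 * np.sin(2 * np.pi * (t - p60) / 60.0)  # 60-year cycle
                + slope * (t - 1880.0) + const                # "LIA recovery"
                + anthro * ramp)                              # "CO2 response"

    t = np.arange(1880.0, 2011.0)
    rng = np.random.default_rng(0)
    temps = 0.004 * (t - 1880) + 0.012 * np.maximum(t - 1942, 0) + rng.normal(0, 0.1, t.size)

    params, _ = curve_fit(loehle_form, t, temps, p0=[0.1, 0, 0.1, 0, 0.004, 0, 0.01], maxfev=20000)
    print(params)  # the optimiser will find something; nothing requires it to be physical

The point, as the next paragraph argues, is that a fit like this will happily converge on almost any wiggly series; nothing ties the fitted parameters to any actual forcing.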

Using this model, Loehle then does a calculation to determine climate sensitivity. The problem is that the model and the assumptions don’t make any physical sense. What does recovery from the LIA even mean? Our climate isn’t a blow-up ball that returns to its original shape after being compressed. Our climate largely responds to changes in external forcings. The LIA was a period with reduced solar insolation and increased volcanic activity. The period after that warmed in response to increases in external forcings. There aren’t really 20 and 60-year cycles in the external forcing datasets. Anthropogenic forcings didn’t start in 1942; they started when we started increasing atmospheric CO2 concentrations in the 1800s. The model is basically a curve-fitting exercise with almost no physical basis whatsoever. I don’t really need to go any further. It doesn’t really matter if the subsequent calculations have errors or not; the study doesn’t make physical sense.

What about Cawley et al. (2015)? They show that the 20 and 60-year cycles aren’t really supported by the observations. They show that Loehle (2014) underestimated the uncertainty in his climate sensitivity analysis. They then present a minimal model of their own (one that actually has fewer parameters than the minimal model presented by Loehle) to show how one might develop a model that is physically motivated, rather than one that is simply a curve-fitting exercise. They also discuss how Loehle (2014) only used the forcing due to CO2, rather than that due to all anthropogenic influences. Consequently, Loehle underestimated the change in forcing by about 13%.

Here’s where there was a mistake. Rather than pointing out that this would have reduced Loehle’s estimate for climate sensitivity (making it even more unrealistic), they suggested it would have increased it. A mistake. However, this didn’t influence the minimal model that they presented and it didn’t really influence their discussion of Loehle’s model. It was simply a silly mistake. Just because Cawley et al. (2015) made a silly mistake doesn’t make Loehle’s model any more realistic. Just because Cawley et al. (2015) made a silly mistake doesn’t invalidate the rest of their paper.

So, if people were serious about discussing papers like this, they’d focus on more than whether or not they can find some kind of silly arithmetic error (especially as it’s normally easy to establish if such errors are significant or not). They’d focus on the setup of the problem, the assumptions, the analysis of the data, and the conclusions that are drawn. Of course, there are probably reasons why people are focusing on a minimal error in Cawley et al. (2015) and ignoring a completely unrealistic model presented by Loehle (2014); it would be inconvenient to do otherwise.


120 Responses to 2+2=4, therefore Einstein is wrong!

  1. Here is an example of a mistake on Cowtan’s part. This is taken from his interactive server.

    Set all the anthro forcings to zero except for the well-mixed GHGs, which has to be mainly the influence of CO2

    The TCR that comes out is 1.137 C for doubling (see the yellow highlighted part). But if you look at the lowest panel, it is showing close to 0.8C increase for a 100 PPM change in CO2 since 1880. Do the numbers on that and try telling me that they haven’t low-balled that estimate.

  2. Sorry, I must have missed it (I tend not to visit fake-sceptic blogs without cause) but can you drop a hint or two where we can read these people, “focusing on a minimal error in Cawley et al. (2015) and ignoring a completely unrealistic model presented by Loehle (2014)”? Thanks.

  3. russellseitz says:

    While the Tyneside coal baron’s court jester, Josh, has drawn a literally juvenile cartoon celebrating this gaffe, the WUWT commentariat continues to demand, and get, lower forms of humor.

  4. WHT,
    That’s because the aerosol, and all the forcings below the aerosol forcing, are set to zero. Hence it’s over-estimating the change in anthropogenic forcing and under-estimating the climate sensitivity. It’s interactive. Set them back again to 1, choose one of the forcing datasets, and you’ll get a different answer. You can also do more boxes in the model. If you choose a 2-box model and set all the forcings to 1, you get a TCR of just below 1.7.
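    For anyone who wants to see what the box models are doing, here’s a minimal two-box sketch. The parameter values are my own toy choices, not the ones the interactive server fits, so it illustrates the structure only.

        # Toy two-box energy-balance model: a fast mixed-layer box coupled to a
        # slow deep-ocean box. Parameters are illustrative assumptions, not the
        # fitted values from Cowtan's interactive model.
        import numpy as np

        def two_box(forcing, C1=8.0, C2=100.0, lam=1.3, gamma=0.7):
            """Annual Euler steps; forcing in W m-2, returns surface temps in K."""
            T1 = T2 = 0.0
            out = []
            for F in forcing:
                T1, T2 = (T1 + (F - lam * T1 - gamma * (T1 - T2)) / C1,
                          T2 + gamma * (T1 - T2) / C2)
                out.append(T1)
            return np.array(out)

        # TCR diagnostic: CO2 rising at 1%/yr doubles in 70 years, and because
        # forcing is logarithmic in CO2 it ramps roughly linearly up to F_2x.
        F_2x = 3.7
        print(f"Toy TCR: {two_box(F_2x * np.arange(1, 71) / 70.0)[-1]:.2f} K")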

  5. BBD says:

    johnrussell

    Here’s Nic Lewis commenting on the error at Lucia’s. Not sure where else this has got to, but doubtless there will be much premature and inappropriate noise.

  6. Russell,
    Yes, someone on Twitter pointed out that Josh can draw but doesn’t seem very bright.

    BBD,
    More noise than anything else, I suspect.

  7. jsam says:

    Watson and Crick’s paper’s errors voided the concept of DNA for a generation, right?

  8. johnrussell,
    BBD’s put the link to Nic Lewis’s comment at Lucia’s. There’s a Bishop-Hill post, and something on WUWT.

  9. Of course, I set them all to zero to do a sanity check on the basic formulation, which is
    TCR = dT * ln(2) / ln(CO2/CO2(0))
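    Evaluated with the numbers from the first comment (0.8 C of warming for roughly 290 → 390 PPM; the endpoints are approximate assumptions):

        import math
        # Back-of-envelope: attribute all 0.8 C of warming since 1880 to CO2.
        dT = 0.8
        tcr = dT * math.log(2) / math.log(390 / 290)
        print(f"{tcr:.2f} C")  # ~1.9 C, versus the 1.137 C the model reports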

    They low-balled the number somehow.

  10. Was it necessary to write a reply to a paper like Loehle (2014)?

    It is not as if such a paper could trick a real scientist into thinking it was legit; seeing the problems with such a paper does not require any expertise. If you do not see them, you’d better stop doing science.

  11. WHT,
    No they don’t. Their model works by first fitting to the observed temperature dataset. Then they run a step response to determine the TCR for the model that best fits the observed temperature. By setting all those to zero, you’ve assumed there is no aerosol forcing and hence that the anthropogenic forcing we’ve experienced since 1880 is very high. Therefore, to fit the observed temperatures, the climate sensitivity of their model has to be low. They’re not low-balling, you are. Even a GCM that was forced to fit the observed temperatures, and to do so without any aerosol forcings, would need to have a low TCR.

    Victor,
    I don’t know. It’s quite good to respond and I really like the model presented in Cawley et al. (2015). The problem, though, is that it allows for a classic ClimateBall™ move if there’s a silly mistake (as there is).

  12. Peter Jacobs says:

    Victor Venema (@VariabilityBlog) writes: “It is not as if such a paper could trick a real scientist into thinking it was legit; seeing the problems with such a paper does not require any expertise. If you do not see them, you’d better stop doing science.”

    The journal it was published in, Ecological Modelling, is pretty well-respected within the ecological and environmental science communities. However, it is obviously not the kind of place you’d expect to see a paper purporting to constrain one of the most notoriously elusive properties in the climate system.

    If you look through the other papers in the journal and the backgrounds of the researchers who publish there, I think it is evident that there are not a lot of physical climate scientists.

    The journal is then in a position wherein either its readers may be misled, or it has published, unanswered, something its readers can see is wrong.

    I have read many papers published in that journal, so I consider myself a reader (if an occasional one). A formal response did seem to be appropriate, at least to me.

  13. Okay, if the model presented in Cawley et al. (2015) has value in itself, that is a good reason to publish.

    At least 20% of all papers contain small inconsequential mistakes. Those are the ones I normally notice when reading articles. If you prodded more and asked domain experts, you would probably find many more. It is also telling that most of the points mentioned in a peer review are mentioned by only one reviewer. This suggests that a 3rd, 4th or 5th reviewer would again find other points.

    Picking on inconsequential mistakes and suggesting that they are important is indeed ClimateBall and is not the behavior one would expect from a scientist.

  14. Victor,

    At least 20% of all papers contain small inconsequential mistakes. Those are the ones I normally notice when reading articles.

    That’s why I’m glad I don’t publish climate science papers. If people are willing to look hard enough they can often find some kind of silly mistake to crow about.

    Picking on inconsequential mistakes and suggesting that they are important is indeed ClimateBall and is not the behavior one would expect from a scientist.

    Indeed.

  15. KR says:

    I think of the nitpicking tactic as akin to a blowfish – something tiny, inflated all out of proportion.

  16. “Just because X made a silly mistake doesn’t invalidate the rest of their paper.”

  17. [Mod : Try taking your own advice, or explain yourself.]

  18. verytallguy says:

    Richard,

    at last! Is that what was in the mirror?

    Well worth the wait.

    I’ll get it framed

  19. KR says:

    Richard Tol – (odd, you usually use your initials, is this the Real Tol?) that requires _judgement_. If the conclusions are dependent on that error, yes, those may be invalidated. Conclusions _not_ dependent on that computation/statement (in the Cawley et al case, the majority of them) are not affected.

    If you are arguing, as I suspect, that your own work is by the same measure intact, I would disagree. In your case, one could point out your ‘gremlins’ inverting signs on primary data (hence invalid conclusions), incorporation of those conclusions into AR5 (resulting in amended text prior to publication removing them), the appallingly bad math re: Cook et al leading to a requirement of 300 rejection abstracts that simply don’t exist (again, negating many of your conclusions there, along with the many other problems that leave essentially none of your conclusions intact), etc.

    Case by case – any errors noted should be considered as to their impact, and the work judged accordingly.

  20. @KR
    You cracked the riddle!

    Indeed, the disputed correction mechanism for Cook’s consensus is a single sentence in that paper and did not make it into the abstract.

    Yet, you and others use it to reject the rest of the paper.

  21. They should not have plotted the forced response in the lowest panel then (see the figure in the first comment in this thread). That is an estimated temperature change based on an input change in CO2 (+GHGs) concentration. No other factors play into the simulation. That’s why I intentionally set them to zero — to do a sanity check on what their outcome would be if it was isolated to just CO2. What they get is 1.137 C for a doubling of CO2. I would suggest the actual value is closer to 2 C than the 1.137 C that they show.

    Someone is digging a hole deeper. Who is it?

  22. Richard,

    Indeed, the disputed correction mechanism for Cook’s consensus is a single sentence in that paper and did not make it into the abstract.

    I don’t understand what you’re suggesting. Your paper suggested that following the error correction in Cook et al. to its logical conclusion suggested a consensus of 91%, not 97%. Consequently, your paper was suggesting the existence of 300 abstracts that reject the consensus, rather than the roughly 80 that Cook et al. found. This is almost certainly wrong. Are you suggesting that you made a mistake in your paper, or not? This isn’t a complicated question to answer.

    Yet, you and others use it to reject the rest of the paper.

    No, this isn’t true. There are many reasons to reject your paper. I’ve discussed it many times; far too often, in fact. I think most are using your paper’s suggestion that there are 300 reject abstracts to mock you, since not only is it particularly silly, but the only time you tried to justify it, you said something particularly silly too. Of course, maybe you were just pulling our legs, but it is really hard to tell.

  23. WHT,
    No, you really should think about this a little more. What they do is to take the forcings and develop a model that most closely matches the observed temperatures. Then they take that model and do a 1% per year CO2 increase to determine the TCR for that model. This is, formally, the correct way to determine the TCR and is what is typically done for GCMs.

    What you’ve done is assume that the anthropogenic forcings that produced the observed temperature rise were high (by setting the aerosol and other forcings to zero). Therefore, the model that fits the observed temperatures has a low TCR. You did this. Not them. Just look at the forcings figure. Well mixed GHGs have a change in forcing of almost 3 Wm-2. If you then compute the TCR, you get TCR = F_{2x} \Delta T/\Delta F = 3.7 \times 0.8/3 = 1 K.
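    Spelled out as a quick check (0.8 K of observed warming, F_2x = 3.7 Wm-2, and the ~3 Wm-2 of WMGHG forcing your settings left in the model):

        # Back-of-envelope TCR implied by zeroing the aerosol forcing:
        F_2x, dT, dF = 3.7, 0.8, 3.0
        print(f"TCR = {F_2x * dT / dF:.2f} K")  # ~0.99 K, i.e. ~1 K as above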

  24. verytallguy says:

    ATTP,

    your attention to Richard is unwarranted; he’s already told us he gets all he needs from the mirror.

    Allow the stones to fall unheard

  25. “Then they take that model and do a 1% per year CO2 increase to determine the TCR for that model.”

    That seems funny since Cowtan is using realistic dates starting in 1880. If they had applied a 1% increase in CO2 each year starting from 290 PPM, it would get up to 290*(1.01)^130 by 2010, which would make the CO2 exceed 1000 PPM by now!

    Such a hypothetical model as you describe is not dependent on absolute calendar dates so I would think they would start from year 0 if it was an artificial 1% increase type of profile — just to hammer the point home that it was indeed a hypothetical increase.

    I really believe that they are using estimates of actual CO2 values in that figure, which means it is not modeling a 1% yearly increase in CO2.

  26. Tom Curtis says:

    WHT, you say:

    “Set all the anthro forcings to zero except for the well-mixed GHGs, which has to be mainly the influence of CO2”

    You are correct. The forcing from well mixed GHG is mainly (= “>50%”) from CO2. In fact, based on the data from NOAA, CO2 contributed 65% of the forcing from well mixed GHG as of 2013. Multiplying out, we therefore have a total forcing from CO2 of 1.63 W/m^2 for a 100 ppmv increase, but 2.51 W/m^2 from all WMGHG. With a TCR of 1.137 that equates to a 0.77 C increase in temperature.

    So, having done the numbers, a “close to 0.8C increase” is not low-balling the estimate.
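    In code, for anyone checking the multiplication (I’ve assumed 280 → 380 PPM endpoints for the “100 ppmv increase”; the exact choice barely matters):

        import math
        # CO2 forcing for a 100 ppmv rise, scaled up by CO2's ~65% share of
        # WMGHG (NOAA, as of 2013), then converted to warming with TCR = 1.137.
        F_co2 = 5.35 * math.log(380 / 280)   # ~1.63 W/m^2
        F_wmghg = F_co2 / 0.65               # ~2.51 W/m^2
        dT = 1.137 * F_wmghg / 3.7           # ~0.77 C
        print(f"{F_co2:.2f}, {F_wmghg:.2f}, {dT:.2f}")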

  27. Tom Curtis says:

    WHT, you have misunderstood Anders’ description of the Cowtan model. Specifically, the model varies certain parameters with realistic forcings to get a best fit to the temperature. It then reruns the model using the parameters determined from the realistic settings, but with a 1% per annum increase in CO2, and no other changes in forcings. They then take the temperature change in the 70th year as the TCR, as per the definition of TCR.
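    A sketch of that two-step protocol, with a one-box model standing in for the real thing (synthetic “observations” and toy starting values; this shows the procedure, not Cowtan’s actual code):

        import numpy as np
        from scipy.optimize import minimize

        def one_box(forcing, C, lam):
            """Annual Euler steps of dT/dt = (F - lam*T)/C."""
            T, out = 0.0, []
            for F in forcing:
                T += (F - lam * T) / C
                out.append(T)
            return np.array(out)

        # Step 1: fit the model parameters to temperatures under realistic forcings.
        years = np.arange(1880, 2011)
        hist_forcing = np.linspace(0.0, 2.3, years.size)  # toy net-anthro ramp
        obs = one_box(hist_forcing, 8.0, 1.6) + np.random.default_rng(1).normal(0, 0.05, years.size)
        cost = lambda p: np.mean((one_box(hist_forcing, *p) - obs) ** 2)
        C_fit, lam_fit = minimize(cost, x0=[10.0, 1.0], method="Nelder-Mead").x

        # Step 2: rerun with a CO2-only forcing rising 1%/yr for 70 years
        # (roughly linear in forcing); the 70th-year temperature is the TCR.
        ramp = 3.7 * np.arange(1, 71) / 70.0
        print(f"TCR: {one_box(ramp, C_fit, lam_fit)[-1]:.2f} K")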

  28. Oh, so they use the temperature change from the 70th year, which would be 1950.
    That only makes sense in a contrived sort of way since 580 = 290 * (1.01)^70, which is the doubling point for CO2 given a 1% increase per year.

    This indicates that everything about that calculation is essentially meaningless when contrasted to the actual situation. In the real world, CO2 did not double by 1950.

    The battle of the numbers between Cawley and Nic Lewis in this case is simply a contrived example that has little meaning in comparison to reality. The reality is that CO2 is not increasing by 1% per year, but much less than that — which likely means that the transient sensitivity can more easily keep up with an observed value closer to 2C for a doubling of atmospheric CO2.

    According to Lacis in 2013, the forcing of CO2 plus water constituents is well above half. This means that the CO2 is an effective control knob.

  29. Here is a comment using my Twitter account. Notice the signature. Notice the avatar.

    Here is a technical comment on Loehle’s model.

    Here is a technical comment on Cawley’s model.

  30. WHT,
    What?

    Oh, so they use the temperature change from the 70th year, which would be 1950.
    That only makes sense in a contrived sort of way since 580 = 290 * (1.01)^70, which is the doubling point for CO2 given a 1% increase per year.

    This indicates that everything about that calculation is essentially meaningless when contrasted to the actual situation. In the real world, CO2 did not double by 1950.

    No, come on. Given the forcings, they develop a model that best matches the observed temperature change. Once they have that model, they then run a simulation in which the only change in forcing is CO2 and which is doubled at 1% per year for 70 years. This allows them to determine the TCR for the model that best fit the observed temperature.

    What you did is to set the forcings so that the change in anthropogenic forcing between 1880 and now was much larger than we think it actually is. Therefore, the model that best fits the observed temperature is one with a low climate sensitivity. It’s your choice that did this, not their model.

  31. Marco says:

    There are three types of mistakes in scientific papers:

    a) Those that are so small that they have no discernible impact on the results of the paper.

    b) Those that have a clear impact on one aspect of the study, but not on the main conclusions of the paper.

    c) Those that alter major aspects/conclusions of the study.

    The error in Cawley et al is at worst in category b. Tol’s gremlins were clearly in category c, as there were several errors that led to a significantly different main conclusion. I am sure Richard agrees with me.

  32. Marco,

    Tol’s gremlins were clearly in category c, as there were several errors that led to a significantly different main conclusion. I am sure Richard agrees with me.

    I believe I’ve seen Richard argue that the errors were not significant because the updated result is statistically consistent with the result in the original paper. Personally, I would argue that a new result being statistically consistent with a result that is wrong doesn’t really allow one to suggest that the original conclusions stand. If it did, you could publish a paper with lots of errors and draw erroneous conclusions. You could then publish another paper that corrects some of these errors but not enough to make the new result statistically inconsistent with the original result; hence arguing that the original conclusion stands. You then publish another paper correcting other errors, but not enough so that the second update is statistically inconsistent with the results for the first update; hence arguing that the original conclusion still stands (since it is the conclusion for the first update). You keep doing this until the final result is completely different to the original result, but in such a way that each update is statistically consistent with the previous update and hence that your final conclusions are the same as your original conclusions. It’s clever, but not really correct 😉

  33. Richard S.J. Tol says:

    @Marco
    No. The difference between the corrected and original results is qualitatively zero, small in size, and statistically insignificant.

  34. Richard,

    The difference between the corrected and original results is qualitatively zero, small in size, and statistically insignificant.

    But, the original result is wrong (or is based on a number of incorrect data points), so how does the updated result being statistically consistent with a wrong result allow one to conclude that the conclusions haven’t changed? IMO, you should at least draw new conclusions based on the analysis using the corrected data. If they end up being the same as the original conclusions, fine, but arguing that the results are statistically consistent, and therefore that the conclusions are the same, just seems the wrong way to do this.

  35. Kevin O'Neill says:

    Statistically consistent with ‘wrong’ does not seem like much of an achievement.

  36. It seems as if the crux of the argument is that the modeler only has to reference the 1% per year increase in CO2 to get a pass. The reality is that since 1880, the rate of growth of CO2 has only been approximately 1/4 of 1% per year if averaged over that time-span. This is a much slower increase in CO2, so any lags in response have a better chance to catch up to the transient change.

    ” Once they have that model, they then run a simulation in which the only change in forcing is CO2 and which is doubled at 1% per year for 70 years. This allows them to determine the TCR for the model that best fit the observed temperature.”

    So the Cowtan model is only able to eke out a 1.137C temperature change given that only CO2 is changed? According to Hansen & Lacis, the CO2 acts as a control knob that will pull water vapor along with it, and so in the equilibrium situation the H2O can have a ~4x more powerful warming effect than the CO2 alone can. The direct question that I have is whether an increase in H2O scales with the CO2 increase in the Cowtan model, and how they handle that in the transient case.

    I think only Cowtan or Cawley can answer this question, as it is buried in their code, which we cannot see.

  37. Speaking of statistical insignificance:

    In any case, I think Tol needs to get his story straight: in one place he said the qualitative conclusions did not change; in another place he points to some changes and says they are relevant for policy.

    http://www.washingtonpost.com/blogs/monkey-cage/wp/2015/05/23/the-gremlins-did-it-iffy-curve-fit-drives-strong-policy-conclusions/

  38. WHT,
    Firstly, what the Cowtan model presents is equivalent to what is presented for a GCM. If you were to do this with a GCM, the comparison with the observed temperatures would be run with all forcings included (anthro, solar, volcanoes). However, the TCR/ECS that are presented are determined by running the model with only CO2 changing and increasing it at 1% per year. So, there is nothing unusual about what the Cowtan model is presenting.

    The reason you’re getting a low-ball TCR is because you’re forcing the Cowtan model to fit the observed temperatures with a very high change in anthropogenic forcing. This is mainly because you’re setting the aerosol forcing to zero and hence (probably) overestimating the likely change in anthropogenic forcing. The range for the aerosol forcing is actually quite large. It could be almost zero and it could be almost twice as big as the expected value. So try this. Rerun the Cowtan model with the aerosol set to 2. You’ll get a TCR of almost 2.2K. So, the Cowtan model is producing a range for TCR that is very similar to the IPCC range. A lower limit of 1K, a best estimate of around 1.7K (everything set to 1) and a maximum of around 2.2K (aerosol forcing doubled).

    You’ve got to remember that the model is simply fitting to the data with the external forcings that you’re choosing. That determines the resulting TCR. If you decide to ignore one of the forcings when fitting to the observed temperatures, you’ll get a different TCR value.
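    Here’s the crude energy-balance version of that trade-off; the interactive model does a proper box-model fit, so its numbers differ a little. The -0.7 Wm-2 “other” term is my assumption, implied by WMGHG of ~3 Wm-2 and a likely net forcing of ~2.3 Wm-2:

        # Same observed warming, different assumed aerosol scaling.
        dT, F_2x, F_wmghg, F_other = 0.8, 3.7, 3.0, -0.7
        for scale in (0.0, 1.0, 2.0):
            print(f"scale {scale}: TCR ~ {F_2x * dT / (F_wmghg + scale * F_other):.1f} K")
        # ~1.0, ~1.3, ~1.9 K; the fitted box model gives ~1.0, ~1.7, ~2.2 K.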

    If you want to know more about the Cowtan model, it’s mainly explained in my post on Cawley’s minimal model, where we started this discussion.

  39. In summary, the Cowtan interactive model for a 1% change per year in CO2 taken over 70 years leads to a doubling of CO2. The transient response modeled at 70 years is 1.137 C if all the other forcings are set to zero.

    So the next question is what Cowtan’s model for the equilibrium case (ECS) will be. Can a 1.137C transient increase transform into a 3C increase if we wait long enough?

    This is an important question as the observational evidence is that land warming alone has already increased by 1.2C since 1880 while the CO2 has only increased by 40% so far.

    Something about the TCR metric does not seem to capture the actual physical evidence, even though the claim is that TCR reflects actuals better than ECS does!

  40. We got to give Richard that there is at least one result in his paper that does NOT seem to significantly change:

    [C]onsiderable uncertainty about the economic impact of climate change […] negative surprises are more likely than positive ones. […] The policy implication is that reduction of greenhouse gas emissions should err on the ambitious side.

    Op. Cit.

  41. The disconnect in the Cowtan model is exemplified by the bottom panel in the figure that I linked to in the 1st comment at the top of this thread.

    Notice that the model shows an approximately 0.8C increase in temperature due to CO2 alone (the effect of H2O is ostensibly included). This is due to an increase of CO2 from approximately 290 PPM to 395 PPM over the time span from 1880 to 2010. This is a type of transient response and the back-of-the-envelope estimate of a log response would be

    TCR = 0.8C * ln(2)/ln(395/290) = 1.8C

    My problem is that 1.8C is a far cry from the 1.137C shown in the figure and labeled as the TCR.

    The title of this post is 2+2=4. Explain again why there’s a difference in this case.

  42. WHT,
    I’m going to give up in a minute. You can’t assume that the 0.8°C that we’ve had since 1880 is due only to CO2. There are more external forcings in reality. If you want to estimate the TCR from the figures that you posted, the denominator needs to be the change in external forcing that you assume has taken place over the last 100 years. By setting the aerosol forcing to zero, you’re assuming that the change in external forcing is almost 3 Wm-2. That’s why the model is giving a low TCR.

    I’ll repeat what I said before. This is not a consequence of Cowtan’s model. It’s a consequence of what you’re assuming when you run the model. It’s how you’re using the model, not the model itself. You would get the same result if you set up a GCM and tried to get it to match the observed temperatures with aerosol forcings set to zero. It would also give a low TCR.

    The title of this post is 2+2=4. Explain again why there’s a difference in this case.

    I’ve been trying. I don’t think you’re listening.


  43. I’m going to give up in a minute. You can’t assume that the 0.8°C that we’ve had since 1880 is due only to CO2. There are more external forcings in reality.

    Sure, but these are all zeroed out by setting Cowtan’s model weightings. Something is generating the 0.8C increase, and all I am doing is pointing out that CO2 is the only free variable available. My current assumption is that the extra is due to the H2O that is carried along as moderate positive feedback by the CO2 increase.

    For a TCR of 1.137, the approximate amount of scaled temperature increase is dT = TCR*ln(CO2/CO2(0))/ln(2) = 1.137*ln(395/290)/ln(2) = 0.5 C.

    Yet the model shows 0.8C due to CO2. So where does the extra 0.8-0.5=0.3C come from? You are saying it is due to the 3 W/m^2 somehow. No one understands this in the context of the well-known log sensitivity of warming to CO2 concentration.

    One thing is for certain — Nic Lewis is not going to come and bail me out on this issue. This is why he is thriving — on the FUD created by focusing on a contrived model metric which does not reflect the real world observations. He absolutely loves these low-ball estimates as they play into the game that he is playing.

  44. WHT,

    Sure, but these are all zeroed out by setting Cowtan’s model weightings.

    No they’re not, because there are GHGs other than CO2 in the well-mixed GHG forcing. So, the forcing that you include in the model is much bigger than the forcing due to CO2 alone (look at the blue line in the second figure from the bottom that you included in your first comment). The change in external forcing in the model (almost 3Wm-2) is bigger than the change due to CO2 only (5.35 ln(395/280) = 1.84Wm-2).
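    Explicitly:

        import math
        # Forcing included in your run (all WMGHGs) versus CO2 alone.
        F_co2 = 5.35 * math.log(395 / 280)   # ~1.84 W/m^2
        F_wmghg = 3.0                        # read off the model's forcing panel
        print(f"CO2 is only {F_co2 / F_wmghg:.0%} of the forcing you included")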

    One thing is for certain — Nic Lewis is not going to come and bail me out on this issue. This is why he is thriving — on the FUD created by focusing on a contrived model metric which does not reflect the real world observations. He absolutely loves these low-ball estimates as they play into the game that he is playing.

    Yes, I suspect you’re right, but I’m not sure why that’s relevant. We’re talking about Cowtan’s model (which is a physically motivated box model) not Nic Lewis’s calculations.

  45. Eli Rabett says:

    No. The difference between the corrected and original results is qualitatively zero, small in size, and statistically insignificant.

    The result, of course, remains useless.

  46. Marco says:

    What Willard said.

  47. The combined effect of CO2+water is only 1.84/3 = 0.61 of the total forcing? That is too low.

    And the rule is that the methane and nitrous oxide contributions are exaggerated, with the realistic contributions much lower than shown, especially considering they don’t accumulate as strongly as CO2 does. Realistically, CO2+H2O dominate GHG forcing, as per the Lacis link that I cited earlier.

    And this still doesn’t get to the fundamental issue that the definition somehow has to include the fact that industrial output of CO2 emissions scales the rest of the gases so that CO2 acts as a convenient metric with which to gauge the total GHG growth.

    Consider some point in the future: do we believe that the other minor GHGs will grow as CO2 grows? That is all I am getting at: CO2 as a reference scale. And that will cause the TCR to scale from 1.6C to closer to 2C if those continue to grow alongside CO2, if that in fact is what Cowtan is doing.

    Force Nic Lewis to scale his own TCR estimates to include the other GHGs in that case. Make him do it and be as persistent as I am about it.

  48. WHT,

    The combined effect of CO2+water is only 1.84/3 = 0.61 of the total forcing? That is too low.

    Yes, because you set the aerosol forcing to zero. The likely net anthropogenic forcing is about 2.3Wm-2 with CO2 alone being about 1.7Wm-2. By setting the aerosol forcing to zero when you ran the Cowtan model, you forced the net anthropogenic forcing to be about 30% bigger than we actually expect it to be, hence forcing the model to have a low TCR.

  49. Have you ever seen Lord Lawson or Lord Ridley saying that “reduction of greenhouse gas emissions should err on the ambitious side” as Richard advises, Marco?

  50. Arthur Smith says:

    WHT, if it helps, the “+water” is there for EVERY forcing – it’s CO2 + water, methane + water, ozone + water, aerosols + water (or – water as the cooling effect reduces water vapor), etc. This is a statistical fitting procedure, it’s not a real physical model, and the only thing considered is the forcing numbers. Every forcing comes with its collection of feedbacks (water + lapse rate + clouds etc). If the net forcing is assumed large (as you are doing in playing with the parameters) then for the statistical fit to work the feedbacks must be very small and you get a small TCR. If the net forcing is small (if you bumped up the aerosol numbers) then the feedbacks must be large and you get a large TCR.

  51. I disagree, Eli: there is at least one meaningful result: the initial benefits are sunk benefits, irrelevant for policy. This seems quite incompatible with the narrative sold by the GWPF and the Copenhagen Consensus. The only outlier, Tol 2002, does not matter anyway.

    When will Benny and Bjorn issue corrections, Richard?

  52. Tom Curtis says:

    WHT, once again, you are falsely assuming that the CO2 forcing is approximately equal to 100% of the WMGHG forcing. It is actually closer to 65%. Once you allow for that fact, there is no discrepancy between Cowtan’s model and reasonable expectations.

  53. This site (http://cdiac.ornl.gov/pns/current_ghg.html) shows the issue. In this case CO2 is only 57% of the total forcing effect. I suggest that the recent Lacis paper in Tellus is a good contrast: http://pubs.giss.nasa.gov/docs/2013/2013_Lacis_etal_1.pdf

    This shows CO2 as a ~80% factor in comparison to the other GHGs.

    The skeptics such as Nic Lewis count on isolating single issues and ignoring the rest. The other anthro GHGs will likely grow along with CO2 and not including this scaling factor as an encompassing metric has a downside.

    So if we use the 0.57 factor as a scaling factor for growth, then the TCR should be 1.137/0.57 = 2C, which is what I have been saying all along as an all-purpose metric where CO2 is a leading indicator for all the GHGs pulled together. A value of 2C is what I get in the CSALT model if I don’t break out all the other GHGs and simply use CO2 as a leading indicator.

    Alternatively, consider that the 0.57 factor is real. Does that mean that the best estimate of ECS of 3C should actually be 3 C/0.57 = 5.2C if the other GHGs continue to grow alongside CO2?

    Time to provide a unified explanation as to the origins of TCR and ECS.

  54. Some have also said that CO2 is a control knob for methane and nitrous oxide. Consider that as the development of the Bakken fields progresses, more and more natural gas is vented directly to the atmosphere. Same goes for outgassing of methane as the world warms. These factors are intricately tied to CO2 as a leading indicator of fossil fuel combustion.

    That leaves ozone and halocarbons as less dependent on the trajectory of fossil fuel emissions, but still these are the result of industrialization, which will continue to grow in the future.

    Nic Lewis is as single-minded on narrowing the focus as I am on opening up the focus to show where the holes in the argument are.

  55. WHT,

    Alternatively, consider that the 0.57 factor is real. Does that mean that the best estimate of ECS of 3C should actually be 3 C/0.57 = 5.2C if the other GHGs continue to grow alongside CO2?

    Well, I think, relative to the most likely net anthropogenic forcing today, the CO2 forcing is 73% of the net forcing. I think your 0.57 comes from not considering the aerosol forcing. Technically, though, you would be right if you defined the ECS as the equilibrium temperature after CO2 alone has doubled and all other GHGs have increased in the same ratio as they are now. However, the problem is that we don’t necessarily expect all the other forcings to track along with CO2. We’d expect the aerosol forcing to reduce (as a fraction at least).

    Time to provide a unified explanation as to the origins of TCR and ECS.

    Formally, these are model metrics, defined as the transient and equilibrium temperature changes in a simulation where only CO2 is increased, at 1% per year, until it has doubled. Therefore, in such a simulation, the change in external forcing would be about 3.7Wm-2. Consequently, when considering observational estimates, they are normally defined as being the transient and equilibrium temperature changes when the forcing changes by an amount equal to that due to a doubling of CO2 alone.

  56. So far I have seen estimates of the CO2 portion of the total GHG forcing load at 57%, 61%, 65%, and 80%. My idea for using CO2 as a “leading indicator” has much merit. When climate sensitivity is done that way, then the uncertainty behind CO2 attribution is irrelevant. The fractional attribution is what it is and the model can assume that it will continue at that proportion for the foreseeable future, we simply don’t have to know what it is, as I have demonstrated with the 1.137/0.57 = 2C derivation.

    The leading indicator approach is not a lot different conceptually than what the Dow Jones Industrial Average accomplishes, which is to take 30 stocks and use those as representative of the rest. Or like weight gain is anticipated by how many cans of Coke one slams down, it works as a rough leading indicator. In the CO2 case, all we do is take the CO2 and assume the rest of the anthro GHGs follow to some degree.

    In reality the approach describes the implicit assumptions behind all the numbers that get bandied about. That is, it describes BAU (business as usual): as long as CO2 is increasing, methane etc. will continue to increase at a dead-reckoning pace.

  57. WHT,
    One problem is that if you look at the RCP scenarios, they’re plotting net anthropogenic forcings, not just CO2. You can delve into the data to find the CO2, or the GHGs only, but working with forcings is somewhat easier. So, if you know the transient response to a change in forcing of 3.7 Wm-2 you can estimate how much we’ll warm along different future emission pathways.

  58. In that case, any mention of the TCR for CO2 is irrelevant as the answer is based on the collective outcome of the scenario pathway, which includes increases of all the anthro GHGs.

    Nic Lewis has a focus on the TCR because it tells a misleading narrative, and one that appears perfectly logical to the unsuspecting reader but is in fact a low-ball for the collective impact.

    Sorry that it took so long to get to this point but I hope we are all square now.

  59. Marco says:

    “Have you ever seen Lord Lawson or Lord Ridley saying that “reduction of greenhouse gas emissions should err on the ambitious side” as Richard advises, Marco?”

    Of course not.

    “When will Benny and Bjorn issue corrections, Richard?”

    Oh, Richard will just tell you they base their conclusion on *other* (undisclosed) papers, and that they are free to do so.

  60. WHT,

    In that case, any mention of the TCR for CO2 is irrelevant as the answer is based on the collective outcome of the scenario pathway, which includes increases of all the anthro GHGs.

    Yes, in a sense that’s true. The TCR could be seen as the transient response to a change in forcing that is equivalent to a change due to a doubling of CO2 only.

    Sorry that it took so long to get to this point but I hope we are all square now.

    Good to get there eventually 🙂

  61. Steve Bloom says:

    WHT: “This is why (Nic Lewis) is thriving — on the FUD created by focusing on a contrived model metric which does not reflect the real world observations. He absolutely loves these low-ball estimates as they play in to the game that he is playing.”

    But he’s numerate, polite(ish) and puts on a real posh feed, too. Surely that is sufficient grounds to vote him onto the climate science island. Scientific politesse requires that his obvious bad motivations be elided.

  62. Eli Rabett says:

    Webby,

    First, a significant portion (note not all) of the methane and nitrous oxide flux comes from biologicals or ocean outgassing and/or some combination of both, and both are temperature sensitive in the same direction, so welcome to Arthur’s comment.

    Second, the usual way you account for the amount of forcing from each source is the single subtraction or addition table, which is Table I from Lacis et al. The table you showed is a rough breakdown for snowball earth (1/8 CO2), Earth, and Venus (256 CO2).

  63. @Willard
    Read carefully: This was not an erratum but an erratum and update. The erratum is immaterial, but the update does change the conclusions.

  64. Tom Curtis says:

    Richard Tol, the erratum changed the policy implications, though not in the direction desired by AGW “alarmists”. The update included two new values so widely varying in their implications that they called into question the entire methodology. It is statistical nonsense to use just two data points to fix policy outcomes when they are in such obvious disagreement with each other.

  65. Tom Curtis says:

    WHT, you keep referring to Table 3 from Lacis et al 2013. It, however, shows the “structural forcing”, ie, the percentage contribution to the “total greenhouse effect” as defined in Lacis et al, 2010. It does not show the change in forcing relative to preindustrial values such as is shown by the IPCC or in the RCP scenarios, or other reconstructions of historical forcings as used by Cowtan’s (and your) programs. It is, therefore, not even relevant to the discussion. There is not even a simple way to convert one into the other (due to lapse rate feedbacks), even if Table 3 included preindustrial and current values (although that would at least allow a ball-park estimate).

  66. Peter Jacobs says:

    Tom Curtis,

    I don’t disagree with your above, but as an interesting side note, at December’s Fall AGU meeting, Lacis presented an attempt to derive the ECS value from the structural percentages of the total greenhouse effect on Earth, arriving at an ECS of ~3K. The abstract and poster are still online, but don’t have the level of detail necessary to evaluate how sound the methodology is.

  67. Peter,
    I think Tom discussed that here.

  68. “First, a significant portion (note not all) of the methane and nitrous oxide flux comes from biologicals or ocean outgassing and/or some combination of both, and both are temperature sensitive in the same direction, so welcome to Arthur’s comment.”

    CO2 is a leading indicator for methane, either through direct fossil fuel emissions or due to secondary feedback processes. How else to explain its growth throughout the era of the oil age? I do get the fact that H2O will create positive C-C feedback off any GHG.


    “Second, the usual way you account for the amount of forcing from each source is the single subtraction or addition table, which is Table I from Lacis et al. The table you showed is a rough breakdown for snowball earth (1/8 CO2), Earth, and Venus (256 CO2).”

    Table 1 is called “Planetary Greenhouse Parameters” which doesn’t have anything to do with other GHGs. Perhaps you mean Table 2? I still get 80% contribution for CO2 using Table 2.

    I trust Lacis because he is the radiative transfer guy and he knows this stuff inside and out.

    So which attribution percentage do you trust? The attribution sources that claim CO2 at 57% or those at 80%? It makes a significant difference to the TCR. That’s why I absorb it as a CO2 “leading indicator” in the CSALT calculation. In particular, no one knows how each of the GHGs has evolved since 1880, except for CO2.

  69. Rob Nicholls says:

    “The model is basically a curve fitting exercise with almost no physical basis whatsoever. I don’t need to really go any further. It doesn’t really matter if the subsequent calculations have errors or not; the study doesn’t really make physical sense.”
    For me, this is one of the most fascinating things about the fake debate over climate change. Repeatedly it seems that fairly complex statistical techniques are used to attempt to demonstrate things that don’t have any basis in reality (in my unqualified opinion). To add to the confusion, I think that very often the people engaging in these attempts sincerely believe in what they’re doing, and this makes them all the more believable. For novices unfamiliar with the evidence I think it’s very difficult to spot what’s going on, unless it’s been pointed out by a statistician or by a physicist or by someone who at least knows something about the huge volume of contrary evidence.

  70. Rob,
    Indeed. Maybe you should try this gem. I haven’t been able to make head nor tail of it yet, but the author list is a real doozy.


  71. I don’t disagree with your above, but as an interesting side note, at December’s Fall AGU meeting, Lacis presented an attempt to derive the ECS value from the structural percentages of the total greenhouse effect on Earth, arriving at an ECS of ~3K. The abstract and poster are still online, but don’t have the level of detail necessary to evaluate how sound the methodology is.

    Here is an interesting exercise. Assume that the Lacis value of ~3K is for CO2 alone and that the 80% attribution is in place. That would place the all-GHG ECS at roughly 3.0/0.8 = 3.75K, assuming that the other GHGs will scale with growth of CO2.

    Now do 8 doublings of CO2 starting with 1 PPM. That would get to 256 PPM and an increase of temperature of 30K = 8*3.75. This is close to the generally accepted value of +33C for the pre-industrial climate.

    There are many sanity checks that support the idea that ECS of 3K is a good estimate ever since it was first suggested in the Charney report in 1979.

  72. WHT,

    So which attribution percentage do you trust? The attribution sources that claim CO2 at 57% or those at 80%? It makes a significant difference to the TCR.

    I think the issue is that in the atmospheric greenhouse effect, there isn’t any influence from anthropogenic aerosols (which have a negative forcing). Today, there is. Hence, the CO2 fraction is not a constant. In fact, we wouldn’t expect it to remain constant into the future since the aerosol forcing depends on our emissions (how much coal we will use) and the aerosols aren’t persistent, so they will precipitate relatively quickly and their relative contribution to the anthropogenic forcing will decrease with time.

  73. Rob Nicholls says:

    Thanks ATTP; this one is slightly more blatant than some that I’ve seen, but all the more entertaining for it.

  74. Steven Mosher says:

    I’m glad they fixed it.
    sometimes there are small problems that should be fixed but people fight over them for no good reason rather than just fixing the problem.

    I could make a list.

    it would start a fight.

    so I won’t.

  75. Willard says:

    Stand aside, here comes an evidence-based doctor:

    As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials.

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC300808/

  76. Steven Mosher says:

    “But he’s numerate, polite(ish) and puts on a real posh feed, too. Surely that is sufficient grounds to vote him onto the climate science island. Scientific politesse requires that his obvious bad motivations be elided.”

    interesting.

    green line material.

  77. Eli Rabett says:

    The effect of methane is much higher than the effect of the CO2 it degrades to, to the point that you can ignore the latter and only consider the CO2 directly produced. It’s the O2/CO2 issue all over again.

  78. Joshua says:

    Occasionally mosher does have a point.

  79. Steven Mosher says:

    Thank you Joshua.

    In private I’ve been criticized for the most interesting things.

    1. having dinner with Anthony.
    2. Suggesting to a fellow warmist that we do a paper with Mann.
    3. being a libertarian.

    etc.

    not a single one related to the science.

    both sides do a lot of boundary policing. grooming. picking the gnats off their other tribe members.

  80. both sides do a lot of boundary policing. grooming. picking the gnats off their other tribe members.

    Sure, but I would certainly hope that if someone like Nic Lewis came here and made a non-provocative, thoughtful, reasonable comment, that people would respond in kind. That would be my intent, at least. Not always easy to achieve it in practice, mind you.

  81. Brandon Gates says:

    ATTP, I’m curious what you would find difficult in the scenario you propose. For myself, I most struggle with the thoughtful comment or question which has already been endlessly rehashed but that my interlocutor seems to think is novel. Of course, mine is the perspective of a guest wherever I go, and I do tend to slum a bit.

  82. Brandon,

    I’m curious what you would find difficult in the scenario you propose.

    Oh, nothing really. If someone were to come here and make a decent and reasoned comment I would intend to respond in kind, even if we had clashed somewhere else before.

    I most struggle with the thoughtful comment or question which has already been endlessly rehashed but that my interlocutor seems to think is novel.

    This would eventually become true for me, but – much to some commenters’ annoyance 🙂 – I have a lot more patience for those who try to remain polite than for those who don’t, even if they are just regurgitating regularly debunked myths.

  83. BBD says:

    ATTP

    I have a lot more patience for those who try to remain polite

    By and large, you are being sealioned. Please read the (short) link even if you are bored with the cartoon 🙂

  84. BBD,

    By and large, you are being sealioned. Please read the (short) link even if you are bored with the cartoon 🙂

    No, sealioning, I do find annoying 😉

  85. Brandon Gates says:

    ATTP, responding in kind is pretty much how I work too. Back when I did Usenet it often happened I’d have two conversations running with the same person on the same day, one completely acrimonious and one adversarial but reasonable. It confuses the hell out of some folks.

    BBD, I had not seen that cartoon before. This is fantastic. Now I have a word for … THAT!!!! … and a visual to go with it. Also a reminder to self not to do it because, well, I think sometimes I do.

  86. Brandon Gates says:

    ATTP, PS: I’m remiss for not thanking you for the actual topic of this post. I had previously not been as clear on some stuff herein. Your original post and the commentary have been helpful.

  87. Brandon,
    Thanks, glad someone gets something out of this 🙂

    P.S.: I think emoticons have stopped working for some reason, so some of my responses may seem more serious than intended.

  88. Rachel M says:

    I think emoticons have stopped working

    I can see all your smileys.
    😱😎👨👩👍👀🌟💩😃

  89. Hmm, I can’t for some reason. Maybe my laptop needs its monthly reboot.

  90. Eli Rabett says:

    Perhaps ATTP, or Steve M might have a word with Willard Tony. Eli has been putting a bunch of well reasoned (e.g. have you considered this) posts into the hopper only to see them triaged with extreme prejudice. It is enough to make a bunny want to take up snarking.

  91. Willard says:

    > I had not seen that cartoon before. This is fantastic.

    H/T Michael Tobis from planet3.org.

  92. josh says:

    I’m not convinced that the Cawley et al paper made a mistake. If you look at Figure 3c, which the relevant section references, they show CO2 forcing and total anthropogenic forcing. In 1950 the total is below the CO2, which seems to me to show negative forcing from non-CO2 effects (probably aerosols?). Around 1990 the curves pass through each other, so non-CO2 would after that be contributing a net positive forcing on top of CO2 in recent years. But for the period 1950 to present the net effect from the non-CO2, i.e. the area of the difference between the total anthropogenic and CO2 curves, looks negative. Which would be consistent with the claim that using only CO2 for the observed temperature change in that period underestimates the sensitivity.

    However, I’ve seen an alleged tweet from one of the authors, Jokimaki, that seems to acknowledge a mistake so maybe I’m missing something.

  93. Josh,
    It’s the change over the time interval (1950 – now) that matters. So, Loehle used a smaller change than probably happened in reality.

  94. harrytwinotter says:

    “What does recovery from the LIA even mean?”

    The only “recovery” I can think of would be a regression to the mean. But this implies that the whole LIA was some sort of statistical fluke – sounds improbable.

    Climate change deniers sometimes use the term “recovery from the LIA”. If they think at all, I assume they mean a “recovery” from a colder climate state, one of those mysterious “natural cycle” things perhaps.

  95. Harry,
    Indeed, it implies some kind of special state to which we will always return. No actual evidence for such a special state, though.

  96. josh says:

    …and Then There’s Physics,
    How does this address my point/question? I’m talking about the alleged mistake in the Cawley paper. Loehle used the change in temp over the change in CO2 (after isolating the change in temp he assumes is anthropogenic). Cawley et al point out various problems with that, but the point in question is that assuming his temp and change in CO2 are accurate still doesn’t give you the sensitivity to CO2 if there are other anthropogenic factors in play. If non-CO2 anthropogenics give you a net positive (negative) forcing, then the CO2 sensitivity is over- (under-) estimated. The alleged mistake is that there is a net positive, but Cawley and co. wrote that this under-estimates the sensitivity.

  97. josh,

    How does this address my point/question? I’m talking about the alleged mistake in the Cawley paper.

    Okay, the TCR is formally a model metric and is the change in temperature in a simulation in which you double CO2 only, at 1% per year. It takes 70 years to double and the TCR is the temperature change at the time when CO2 has doubled (actually, it’s normally the average of the temperature between years 50 and 70 (or 60 and 80 – I can’t quite remember)).

    In the real world, however, the change in anthropogenic forcing is not due to CO2 alone. Therefore if you want to determine the TCR from observations, you normally use the change in anthropogenic forcing (which is CO2 plus other GHGs plus aerosols), not just the change in forcing due to CO2. In Loehle (2014) only the change in forcing due to CO2 was used. Cawley et al. (2015) were simply pointing out that the actual change in anthropogenic forcing was slightly greater than the change due to CO2 alone and, hence, that if Loehle had used the change in anthropogenic forcing (rather than just the change due to CO2) his TCR estimate would have been reduced slightly. The error in Cawley et al. (2015) is the latter claim (they said the estimate would go up, when it would have gone down).

  98. josh says:

    Thanks for replying. If TCR is defined with respect to CO2 doubling, and you care about the difference between CO2 and total anthropogenic as Cawley et al do, then why would you advocate using total anthropogenic? How would you even use it? It’s true, the change in instantaneous forcing for total anthropogenic is greater than the change in CO2 alone. But I would think that the relevant factor is the integral over time, i.e., forcing is given in Watts per area, so you need to multiply by time (and area) to get a net energy. Net energy is relevant for temperature changes. Again, I’m eyeballing things but the area under the total curve is less than under the CO2 curve, indicating a net negative effect from non-CO2 sources. If that’s true then looking at the observed change in temp compared to the observed CO2 underestimates the sensitivity due to CO2.

  99. josh,

    Thanks for replying. If TCR is defined with respect to CO2 doubling, and you care about the difference between CO2 and total anthropogenic as Cawley et al do, then why would you advocate using total anthropogenic?

    The problem is that the TCR is presented as being due to a doubling of CO2, but in reality what we are interested in is how much we will warm due to changes in anthropogenic forcings. Of course, CO2 is dominant, but the others do have an effect. Total anthropogenic is really what we are interested in. The reasons why it’s often presented as CO2 only are numerous. CO2 is the dominant emission. The dominant non-condensing GHG in the greenhouse effect and in past climate changes (Milankovitch, for example) has been CO2. When the TCR/ECS is determined for models, it’s typically done by changing CO2 only. However, in the coming century, we can’t ignore that other anthropogenic emissions will influence how we warm. That’s why the RCP scenarios are often presented as CO2 equivalent (or CO2e) – everything else is expressed in terms related to CO2.

    Net energy is relevant for temperature changes. Again, I’m eyeballing things but the area under the total curve is less than under the CO2 curve, indicating a net negative effect from non-CO2 sources.

    The area under the curve doesn’t really matter (at least not on long timescales). What matters is the change. Consider the following. 1. Slowly increase the anthropogenic forcing from 0 to 3.7Wm-2 over a period of 100 years. 2. Do nothing for 90 years and then rapidly increase the anthropogenic forcing to 3.7Wm-2 over the next decade. Of course, during that initial century the warming will be different, but the temperature towards which we will tend will be the same. The equilibrium state depends on the change in forcing, not on how we get to that change.
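    A minimal one-box energy-balance sketch of that thought experiment (the heat capacity and feedback values below are illustrative round numbers, not tuned to anything):

    ```python
    import numpy as np

    # One-box energy balance: C dT/dt = F(t) - lam * T
    C = 8.0    # heat capacity, W yr m^-2 K^-1 (illustrative)
    lam = 1.2  # feedback parameter, W m^-2 K^-1 (illustrative)
    dt = 0.1
    years = np.arange(0, 1000, dt)

    def integrate(forcing):
        T = 0.0
        for F in forcing:
            T += dt * (F - lam * T) / C  # simple Euler step
        return T

    # Path 1: ramp from 0 to 3.7 W m^-2 over the first 100 years.
    F1 = np.clip(years / 100.0, 0, 1) * 3.7
    # Path 2: nothing for 90 years, then ramp to 3.7 W m^-2 over a decade.
    F2 = np.clip((years - 90.0) / 10.0, 0, 1) * 3.7

    print(integrate(F1), integrate(F2))  # both tend to 3.7/lam ~ 3.1 K
    ```

    Run long enough, the two paths give the same equilibrium temperature; only the journey differs.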

  100. josh says:

    Okay, I’m going to try one more time. I feel like we aren’t quite talking about the same subjects and I’m sorry if I’m just being obtuse.

    “in reality what we are interested in is how much we will warm due to changes in anthropogenic forcings.”

    Of course that’s what we care about (actually it’s the sum of anthropogenic and non-anthropogenic effects). But we are talking about modeling. In order to know what will happen in the future we need to separate out the different effects. CO2 will continue to increase at a certain rate under various scenarios, other gases will do other things, land-use changes will do whatever. We are making an argument for how CO2 will affect things. Part of the problem here is that the Loehle model is stupidly simple, but it only takes into account CO2 on top of what they think is the ‘natural’ contribution. So the question in the Loehle model is only about CO2.

    “The area under the curve doesn’t really matter (at least not on long timescales). What matters is the change.”

    You’re now talking about the ECS. I’ll be a little more careful: what matters is the area between the total forcing curve and the curve for the additional outgoing radiation due to the increase in temperature. If the forcing levels off, then you hit the ECS when the two curves meet and there is no net energy flux. The change in temp will be due to the net energy absorbed up to that point, which is the area under the curve I’m talking about.

    Anyhow, Loehle’s model is not one of forcings, it’s too simplistic for that. They just take the (supposed) anthropogenic temp trend vs the change in CO2 over that period and extrapolate it to a doubling of CO2. Basically delta T = B * delta(CO2) (I’m suppressing a logarithm on CO2) and they fit to find ‘B’. But the observed temp difference is, crudely, delta T = B*delta(CO2) + NA where NA is a term for the net effect of non-CO2 anthropogenics over the observation period. The question is whether NA is positive or negative for the period used. The difference in forcing at the end of that period is irrelevant. That affects the future in a real model but the observed temp arises from the history, i.e. an integration over time.
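    Here’s a toy version of that point (every number below is invented purely for illustration):

    ```python
    import numpy as np

    # "True" model: dT = B_true * dln(CO2) + NA, with NA the net non-CO2
    # anthropogenic contribution over the observation period.
    rng = np.random.default_rng(0)
    t = np.arange(60)                        # 60 years of annual data
    dlnco2 = np.log((280 + 2 * t) / 280.0)   # rising ln(CO2), made up
    B_true = 2.5

    for na_trend in (+0.005, -0.005):        # net positive vs net negative NA
        NA = na_trend * t
        dT = B_true * dlnco2 + NA + rng.normal(0, 0.05, t.size)
        B_fit = np.polyfit(dlnco2, dT, 1)[0]  # fit ignoring the NA term
        print(f"NA trend {na_trend:+}: fitted B = {B_fit:.2f} (true {B_true})")
    ```

    A positive NA inflates the fitted B (sensitivity overestimated); a negative NA deflates it. That’s all I’m claiming.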

    I think I’m spending too much effort trying to fit a stupid model onto reality 🙂 I don’t disagree with your general point that whether there is a mistake or not on this one criticism is irrelevant.

  101. Josh,

    We are making an argument for how CO2 will affect things. Part of the problem here is that the Loehle model is stupidly simple, but it only takes into account CO2 on top of what they think is the ‘natural’ contribution. So the question in the Loehle model is only about CO2.

    This is the point, isn’t it? If you want to consider CO2 only, then it only contributes a fraction of the change in anthropogenic forcing and hence only contributes part of the observed warming. Loehle assumed that CO2 was the only anthropogenic forcing that produced a change in temperature, and therefore he overestimated its effect (by about 13%). That’s really all that Cawley et al. were trying to point out.

    Basically delta T = B * delta(CO2) (I’m suppressing a logarithm on CO2) and they fit to find ‘B’. But the observed temp difference is, crudely, delta T = B*delta(CO2) + NA where NA is a term for the net effect of non-CO2 anthropogenics over the observation period.

    Yes, which is essentially what Cawley et al. were trying to point out.

    The question is whether NA is positive or negative for the period used. The difference in forcing at the end of that period is irrelevant. That affects the future in a real model but the observed temp arises from the history, i.e. an integration over time.

    Except that the transient response is relatively fast (a few years) and so the temperature change due to some change in forcing is broadly independent of how that forcing changes. Of course, how the temperature changes will depend on how the forcing changes, but if you want to estimate the TCR (for example) it is really just the change that matters, not the manner in which it changes.

    I have one caveat, though. What Loehle and others (such as Nic Lewis) have done is assume that the transient temperature change is instant. What Cawley et al. did (which is more realistic) is assume a slight lag (4 years), which gives a slightly higher TCR, because essentially the temperature now is influenced more by the change in forcing a few years ago than by the change in forcing today.
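    To illustrate the caveat, here is a sketch with invented numbers (this is not Cawley et al.’s actual calculation):

    ```python
    import numpy as np

    # Suppose the temperature tracks the forcing of `lag` years earlier.
    k_true = 0.5    # K per W m^-2, invented
    lag = 4         # years, the lag assumed in Cawley et al.
    t = np.arange(0, 140)
    F = 0.03 * t    # steadily rising forcing, invented

    # Temperature responds to F(t - lag):
    T = k_true * 0.03 * np.clip(t - lag, 0, None)

    # A naive estimate assumes an instantaneous response:
    k_naive = T[-1] / F[-1]
    print(f"true k = {k_true}, naive k = {k_naive:.3f}")  # naive < true
    ```

    Because today’s forcing is higher than the forcing the temperature is actually responding to, the instantaneous-response estimate comes out too low, and allowing for the lag nudges the TCR upward.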

    I do think we are talking slightly at cross-purposes, so we’re probably broadly in agreement even if that isn’t obvious 🙂

  102. -1=e^ipi says:

    Hi, I am a novice at understanding climate science and I do not specialize in climate science, so please forgive my ignorance. I saw this webpage today, and thought I might be able to add to the discussion since I recently performed some corrections to Loehle’s paper.

    I read Loehle’s paper back in July when someone brought it up and a number of things in the paper annoyed me. Recently, I had some free time so I relaxed 3 assumptions in Loehle’s paper and showed that the result is an ECS of 2.95 C not 1.99 C.

    http://www.mapleleafweb.com/forums/topic/24202-what-is-the-correct-value-of-climate-sensitivity/

    Warning: the link is a political forum, so expect the quality of posts to be poor. Forgive me in advance if I have made incorrect claims about climate science.

  103. -1=e^ipi,
    I’m surprised you managed to get your estimate quite as high as you did. Cawley et al. got 1.66°C when they wrote a response to Loehle, but you’re correct that there are a number of issues with Loehle’s paper.

  104. -1=e^ipi says:

    @ and Then There’s Physics

    The main issue is that Loehle’s ‘definition’ of transient climate response is incorrect and isn’t the same as the IPCC’s definition. This confusion of the definition is the primary reason for the underestimation. Atmospheric CO2 has not been increasing by 1% since WW2, it has been increasing by less than that and the logarithm of CO2 concentrations has been accelerating and is not linear. Loehle tries to correct for this by dividing by 0.326, but it isn’t a sufficient compensation.

    Actually, use of incorrect definitions of transient climate response or equilibrium climate sensitivity may be a major reason why there is so much discrepancy in the scientific literature. People are making assumptions that what they observe is either the TCR or the ECS because they don’t want to bother with calculating what fraction or multiple of the TCR or ECS they are measuring.
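    To illustrate with round numbers (the concentrations below are illustrative, not the exact values Loehle used):

    ```python
    import math

    # Fraction of a doubling actually covered by the observed period:
    C_start, C_end = 310.0, 400.0   # ppm, roughly 1950 and the mid-2010s
    fraction = math.log(C_end / C_start) / math.log(2)
    print(f"Fraction of a doubling: {fraction:.3f}")  # ~0.37

    # Scaling the observed warming up by 1/fraction only recovers the TCR
    # if ln(CO2) rose steadily at the idealised 1%/yr pace, which it did not.
    ```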

  105. -1=e^ipi says:

    sorry, I mean to say ‘by 1% per year’ not ‘by 1%’.

  106. -1=e^ipi,
    Yes, you’re right. Formally his definition is wrong. It’s only one of the problems with his paper, though.

  107. Tom Curtis says:

    -1=e^pi, you write that “Temperature = A + B*ln(CO2 concentration)”. The relationship defined by the IPCC is that ΔF = 5.35*(C/Ci), where C is the current CO2 concentration, and Ci is the initial CO2 concentration; and ΔT is an approximately linear function of ΔF for small changes of ΔF. Most simply, that would appear to set the 0 point for the formula at Ci, invalidating your analysis in the first section.

    Further, that formula is itself only an approximation, which does not hold for very large or very low levels of CO2. Using the UChicago version of Modtran, the approximation appears to be valid from 8 to 4000 ppmv. Again, I think this creates problems for your reasoning in the first section.

  108. Tom,
    The UChicago version of Modtran is not necessarily valid for concentrations very different from the present ones, because it’s a rather crude band model tuned to work for the present atmosphere.

  109. -1=e^ipi says:

    @ Tom Curtis-

    What you write is incorrect. The IPCC relationship is ΔF = 5.35 W/m^2 * ln(C/Ci), not ΔF = 5.35 W/m^2 *(C/Ci). (I am assuming you mean C = CO2 concentration here)
    http://www.ipcc.ch/ipccreports/tar/wg1/222.htm

    From there it is basic algebra to get that F = X + 5.35 W/m^2 * ln(C), where X is some constant. From there, one simply has to use radiative equilibrium, the Stefan-Boltzmann relationship and a Taylor approximation to get that Temperature = A + B*ln(C). Then various feedback effects multiply this temperature response due to the change in CO2 to get an Equilibrium Temperature = D + S/ln2*ln(C), where D is some constant and S is the equilibrium climate sensitivity.
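    A sketch of that algebra, with λ the linearised (Planck plus feedbacks) parameter, so that everything is folded into the single sensitivity S:

    ```latex
    \Delta F = 5.35\,\ln\frac{C}{C_i}, \qquad
    \Delta T = \frac{\Delta F}{\lambda}
    \;\Rightarrow\;
    \Delta T = \frac{5.35}{\lambda}\,(\ln C - \ln C_i)
             = \frac{S}{\ln 2}\,\ln C + \mathrm{const},
    \qquad S \equiv \frac{5.35\,\ln 2}{\lambda}.
    ```

    Setting C = 2*Ci recovers ΔT = S, i.e. S is exactly the warming per doubling.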

    In post #12 for the forum link that I gave earlier, I give a brief justification for this. I also gave links to other justifications/derivations of the logarithmic approximation in that post.

    The roughly logarithmic relationship is accepted by the IPCC, James Hansen and others (heck even Craig Loehle and Christopher Monckton; not that I would put Loehle in the same category as Monckton). It’s due to this logarithmic relationship that values such as equilibrium climate sensitivity have so much meaning. If you still disagree, then you can present your justification for your disagreement if you wish, but you will be taking a position that goes against the mainstream scientific community.

    With respect to the logarithmic approximation vs. Modtran, it is a very good approximation for the CO2 concentrations that we care about (say 256 ppm to 1024 ppm). If you look at the link that you just provided, you will notice this.

  110. -1=e^ipi,

    Seems I can’t post on that comment thread you linked. I simply wanted to ask TimG if he’s the TimG56 who comments from time to time at Judy’s, e.g.:

    one of my favorite phrases is life is hard and then you die.

    It has held true for 90 some percent of humans who have ever drawn breath. That 97 percent or so of the people who comment here have never experienced the hard part is a fact one should always keep in mind.

    Natural internal variability: sensitivity and attribution

    Perhaps you recognize the style?

    In any case, please ask him. If that’s the TimG I know, send him my regards and tell him he forgot one of his pom-poms at Judy’s.

  111. Arthur Smith says:

    Just a note from me – -1=e^ipi’s math looks fine to me, and in part develops an argument I was tempted to include in my recent Monckton article (but it was already long enough): while CO2 emissions from human activity have followed a near-exponential path, that growth sits on top of the pre-industrial level, so at least so far the result is not a linear increase in forcing but an accelerating one. I thought -1=e^ipi’s quadratic expression from a Taylor expansion was a nice touch; I was just going to plot the curve.

    What I hadn’t appreciated was the issue that -1=e^ipi also elaborated on, that the relation between the warming we’ve seen and TCR has a strong dependence on forcing history. Not just the fact that we’re not increasing at the steady 1%/year that TCR assumes, but that the acceleration of forcing increase has an impact. Now -1=e^ipi’s assumption that the response is a simple exponential with a single time-constant is overly simplistic, but I think it does give a nice view of how those various factors could impact the relation between observed warming and TCR, beyond the simple current forcing ratio.
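    A quick numerical illustration of the first point (the growth parameters are invented for the purpose):

    ```python
    import numpy as np

    # An exponentially growing anthropogenic addition on top of a fixed
    # pre-industrial baseline gives an accelerating, not linear, forcing.
    C0 = 280.0                       # pre-industrial ppm
    t = np.arange(0, 151, 50)        # years
    C = C0 + 5.0 * np.exp(0.02 * t)  # baseline + exponential addition (made up)
    F = 5.35 * np.log(C / C0)        # W m^-2

    for yr, f in zip(t, F):
        print(f"year {yr:3d}: F = {f:.2f} W m^-2")
    # Successive 50-year increments of F get larger: the forcing accelerates
    # even though the emissions growth is a steady exponential.
    ```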

  112. Arthur,
    I’m impressed. I must admit that my eyes rather glazed over and I didn’t really work through it. I think it does highlight an interesting issue: that estimates of the TCR could depend on the forcing history.

  113. Both TCR and ECS are well defined only in models, because the real world will never develop following the assumptions of the definitions. Linking observations to the parameters can be done only using models, and the results depend on the details of that model.

    Estimates of TCR are probably not very much influenced by the model as long as it is compatible with other knowledge, but they are to some extent. Estimates of ECS depend strongly on the model.

  114. Get rid of the natural variability, i.e. compensate for it, and plot deltaT against ln(CO2) and you get a realistic TCR. Not that complicated.

  115. -1=e^ipi says:

    ^ Is that the same Arthur Smith as the one who wrote the response to Monckton’s ridiculous claim of a 0.58C ECS?

    monckton_rebutted.pdf

    But yeah, I agree that the assumption of the constant decay rate towards equilibrium is overly simplistic; the rate most likely decreases with time.

    @ Willard, I asked TimG your question, and he says that he is not the same TimG.

    @ WebHubTelescope – It is more complicated than that. Making a simple plot as you described is what Craig Loehle did (admittedly, Craig Loehle tried to take into account natural variation as well). The TCR has a very specific definition according to the IPCC, and in reality atmospheric CO2 is not increasing at 1% per year and is accelerating, which is why Loehle underestimates the TCR.

  116. -1=e^ipi says:

    I just wanted to add that my value of 2.95 C is probably an overestimate because I did not take into account other GHGs. The change in GHG radiative forcing since 1950 has been roughly 76% due to CO2. This would suggest that I should revise my estimate down to about 2.25 C (2.95 C × 0.76 ≈ 2.25 C). (Of course, there is still the issue of the assumption of constant natural variability resulting in this being an underestimate.) Anyway, I’ve moved on to much better time-series approaches.

  117. Leave the 2.95C the way it is. The other GHGs will rise along with CO2 so that is considered a perfectly adequate leading indicator.
