Some thoughts on internal variability

Given that there’s been some discussion about internal variability in my previous post, and because there seems to have been interest elsewhere, I thought I would post some thoughts.

Figure 4 from Palmer & McNeall (2014) showing internally driven surface temperature trends and system heat uptake rates.

A paper I was reading recently is Internal variability of Earth’s energy budget simulated by CMIP5 climate models by Palmer & McNeall (2014), which uses multi-century pre-industrial control simulations from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) to investigate relationships between net top-of-atmosphere radiation (TOA), globally averaged surface temperature (GST) ….. on decadal timescales. The interesting figure is probably the one on the right, which shows the range of internally driven surface temperature trends and system heat uptake rates, plotted against time interval. For periods of about a decade or less, these can be quite substantial.

Such internally driven variations could have implications for energy balance calculations – in particular the transient calculation – since internal variability could have a substantial influence on the temperature change. As Tom Curtis points out, however, an assumption of the energy balance method is that the change in outgoing flux due to temperature changes resulting from internal variability matches that due to temperature changes in response to a forcing. If so, this wouldn’t influence the equilibrium calculation. However, as Pekka suggests, regional variations mean that this may not always be the case. This appears to be consistent with this paper, which suggests that changes in temperature and system heat uptake rate only correlate on average – there is a large amount of variability.
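
To make the sensitivity to internal variability concrete, here’s a minimal sketch of the transient energy-balance estimate; all the numbers are illustrative placeholders, not taken from any particular study. A 0.1K contribution to the temperature change from internal variability moves the estimate by over 10%.

```python
# Minimal sketch of a transient energy-balance (TCR-like) estimate.
# All numbers are illustrative placeholders.
F_2x = 3.7   # W/m^2, forcing for a doubling of CO2
dT = 0.8     # K, observed surface temperature change (illustrative)
dF = 2.3     # W/m^2, change in external forcing (illustrative)

print(f"TCR estimate: {F_2x * dT / dF:.2f} K")

# An internally driven +/-0.1 K in dT propagates directly into the estimate:
for dT_iv in (dT - 0.1, dT + 0.1):
    print(f"dT = {dT_iv:.1f} K  ->  TCR = {F_2x * dT_iv / dF:.2f} K")
```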

Credit: Roberts et al. (2015)

On a similar note, there was another recent paper – also including Palmer & McNeall – on quantifying the likelihood of a continued hiatus in global warming (Roberts et al. 2015). You can read more about it on Doug’s blog, but the core result is probably illustrated in the table on the left. It shows the probability of internal variability offsetting a trend of 0.2°C per decade, for different time intervals – dropping to less than 1% for periods exceeding 20 years. The interesting result is that the probability of it continuing to offset such a trend for an additional 5 years is actually quite high if it has already done so for 15 years (although I don’t think this is necessarily all that surprising).
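
Roberts et al. derive their probabilities from CMIP5 control-run variability; purely to illustrate the shape of the result, here is a toy Monte Carlo in which AR(1) noise (with assumed parameters, not ones fitted to the control runs) stands in for internal variability:

```python
import numpy as np

# Toy Monte Carlo: how often does AR(1) "internal variability" fully offset
# a forced trend of 0.2 C/decade over a given window? The AR(1) parameters
# are illustrative assumptions, not values from Roberts et al.
rng = np.random.default_rng(0)
phi, sigma = 0.6, 0.1   # assumed AR(1) coefficient and innovation std (C)
forced = 0.02           # forced trend, C/yr

def offset_probability(years, n=5000):
    t = np.arange(years)
    count = 0
    for _ in range(n):
        noise = np.zeros(years)
        for i in range(1, years):
            noise[i] = phi * noise[i - 1] + rng.normal(0.0, sigma)
        trend = np.polyfit(t, forced * t + noise, 1)[0]
        count += trend <= 0.0   # internal variability fully offsets the forcing
    return count / n

for years in (10, 15, 20):
    print(years, offset_probability(years))  # probability drops with window length
```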

There’s a related post on RealClimate called climate oscillations and the global warming faux-pause. It discusses a recent paper by Steinman, Mann & Miller called Atlantic and Pacific multidecadal oscillations and Northern Hemisphere temperatures. It

applied a semi-empirical approach that combines climate observations and model simulations to estimate Atlantic- and Pacific-based internal multidecadal variability (termed “AMO” and “PMO,” respectively).

and concluded that

the AMO and PMO are found to explain a large proportion of internal variability in Northern Hemisphere mean temperatures.

As Robert Way points out, however, there are probably also other contributing factors, such as updated forcings for volcanic activity and the weak solar cycle, and that using these updated forcings would [probably?] reduce the total role of multidecadal variability.

I was going to finish this rather convoluted post with a quick mention of a paper (H/T Kevin Anchukaitis) called spectral biases in tree-ring climate proxies. I did read the paper, though I’m not sure I quite got the significance, but it does say

We find that whereas an ensemble of different general circulation models represents patterns captured in instrumental measurements, such as land–ocean contrasts and enhanced low-frequency tropical variability, the tree-ring-dominated proxy collection does not…….temperature-sensitive proxies overestimate, on average, the ratio of low- to high-frequency variability. These spectral biases in the proxy records seem to propagate into multi-proxy climate reconstructions for which we observe an overestimation of low-frequency signals. Thus, a proper representation of the high- to low-frequency spectrum in proxy records is needed to reduce uncertainties in climate reconstruction efforts.

If I’ve understood this properly (and I might not have) this seems to be suggesting that multi-proxy climate reconstructions overestimate the ratio of low- to high-frequency variability and, hence, might not be capturing all the high-frequency variability. If someone else understands the significance of this, it would be interesting to get it clarified.
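
For what “ratio of low- to high-frequency variability” means operationally, here is a minimal sketch of estimating it from an annual series; the 10-year band split and the synthetic data are arbitrary illustrative choices, not Franke et al.’s method:

```python
import numpy as np
from scipy.signal import welch

# Sketch: compare low- vs high-frequency variance of an annual time series.
# The band boundary (periods longer/shorter than 10 years) is arbitrary.
rng = np.random.default_rng(1)
series = rng.normal(size=500)   # stand-in for a proxy or instrumental series

f, pxx = welch(series, fs=1.0, nperseg=128)   # fs = 1 sample per year
split = 0.1                                   # cycles/yr; periods > 10 yr count as "low"
low = pxx[(f > 0) & (f <= split)].sum()
high = pxx[f > split].sum()
print(f"low/high variance ratio: {low / high:.2f}")
```

A biased proxy would then show a systematically larger ratio than the instrumental series it is calibrated against.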

Anyway, that’s all I was going to say. This is all rather longer and more jumbled than I had intended, but hopefully there’s something for everyone.


90 Responses to Some thoughts on internal variability

  1. Everett F Sargent says:

    IANACS, however, with regards to the last paper, I think they are right (have not read it, go figure).

    When we look at the SAT, for example, we see a lot of spatial autocorrelation. When dealing with proxies for that temperature (sans, say, the ice cores), it would appear that there isn’t a lot of spatial autocorrelation (very glad to be corrected on that one) throughout the frequency-domain representation of those individual proxy time series.

    So that in ‘averaging to the mean’ through whatever means of reconstruction, the low frequency portion is overestimated (smaller error bars) as opposed to the high frequency portion.

    That would be my rather uninformed SWAG.

  2. Certain proxy records have a bias toward low frequencies simply because the data are smeared due to diffusion and loss of resolution in, for example, core data. That all acts as a low-pass filter, which removes the higher-frequency information.

    The exception to this I believe is in coral and tree rings. These have an automatic yearly calibration in terms of dating and a sort of diffusion barrier delineated by the rings.

    From what I have seen of the calibration of corals to the modern ENSO data it is good. Reconstructions of ENSO from many centuries have about the same amplitude and structure as today. The Southern Oscillation is definitely one of those Energizer bunny-like phenomena that keeps going and going. Just waiting until someone figures out conclusively how to model it.
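
    As a minimal illustration of the low-pass-filter point above, a moving average (a crude stand-in for diffusive smearing in a core) strips the high frequencies from a white-noise “climate signal”:

    ```python
    import numpy as np

    # Crude illustration: a moving average (stand-in for diffusive smearing
    # in a core) acts as a low-pass filter. The window length is an
    # arbitrary assumed smearing scale.
    rng = np.random.default_rng(2)
    signal = rng.normal(size=1000)
    window = 10
    smeared = np.convolve(signal, np.ones(window) / window, mode="same")

    # Year-to-year (high-frequency) variability is strongly suppressed:
    print("raw     year-to-year std:", np.diff(signal).std())
    print("smeared year-to-year std:", np.diff(smeared).std())
    ```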

  3. WHT,

    The exception to this I believe is in coral and tree rings.

    Except, this paper suggests the issue is with tree rings. My understanding was that it might have something to do with properly removing the precipitation signatures so as to extract the climatic-only signal.

  4. It’s common that high frequencies are damped, but the Franke et al. paper also discusses spurious low-frequency variability as another source of spectral biases. It doesn’t appear surprising at all that tree-ring proxies may be affected by low-frequency variability that gets erroneously interpreted as a temperature signal. Changes in precipitation are one plausible cause, but so are other environmental factors that affect growth.

  5. Kevin says:

    Hi Anders, all,

    A few thoughts on the Franke paper:

    1. There are definitely precipitation (or, probably more properly, drought)-sensitive proxies that have been included in existing multiproxy reconstructions of the global or hemisphere-scale annual or growing season temperature (because these pass statistical screening procedures, even if the original authors interpreted their proxies differently from a mechanistic and/or statistical point of view). I doubt that this is the cause of near-centennial scale overestimation of the temperature spectrum, but it might contribute to decadal-scale injections of (non-temperature) variance.

    2. My guess would be that some (much? most?) of the overestimation of the low frequency when considering precipitation-sensitive trees is because many (although not all) of these trees are probably more accurately thought of as drought- or soil moisture-sensitive — Franke et al. show in their Figure S12 that PDSI (which incorporates the combined effects of precipitation, evapotranspiration, and soil moisture storage terms) observations and reconstructions have more similar beta distributions.

    3. The overestimation of the low frequency in temperature-sensitive chronologies is most strongly associated with tree-ring width, as opposed to maximum latewood density (their Figure 2i and Figure S11). This suggests to me that the shifted beta distributions are at least partly associated with temporal autocorrelation in tree-ring width chronologies (which is larger than for MXD chronologies, typically). There are standardization methods that attempt to model and account for this, but often the ‘standard’ (non-AR modeled) chronologies are utilized in temperature reconstructions. Regional curve standardization (RCS) also has the potential to impart low frequency artifacts to a chronology (on the other hand, it can preserve low frequency that other methods cannot). If I had to guess, then, I’d say that detrending and standardization issues are the most important ones in the spectral mismatch between temperature observations and reconstructions.

    An overarching message — whether we are talking about tree rings, ice cores, speleothems, lake sediments, etc. — is that all proxies reflect a filtered record of climate system variability. We can start to understand this and incorporate it using proxy systems modeling (http://www.sciencedirect.com/science/article/pii/S0277379113002011)

    cheers,
    Kevin

  6. Tom Curtis says:

    Anders, looking at McNeall’s blog on Roberts et al (2015), I note that there is a 14% chance of an at least 5-year continuation of a 10-year hiatus. Given that there is a 10% probability of a 10-year hiatus, and assuming independence, that indicates a 1.4% probability of an at least 15-year hiatus, and a 0.21% probability of an at least 20-year hiatus. Of course, these probabilities are unlikely to be independent, which would suggest slightly higher probabilities. Do you have more precise figures for a 15 year hiatus (such as we are supposedly in)?
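
    A quick check of that arithmetic, under the same independence assumption:

    ```python
    # Back-of-envelope check, assuming successive 5-year blocks are independent.
    p_10yr = 0.10             # probability of a 10-year hiatus (from the paper)
    p_next5 = 0.14            # probability of a further 5 years, given 10

    p_15yr = p_10yr * p_next5
    p_20yr = p_15yr * p_next5   # reusing 14% for the next block is a further assumption
    print(p_15yr, p_20yr)       # 0.014 and ~0.002, i.e. Tom's 1.4% and ~0.2%
    ```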

    More importantly, the decreasing probability of a “hiatus” from 5 to 10 year periods suggests Roberts et al’s definition of “hiatus” is a period with at most a trend of 0°C per decade, not a period in which the trend is not statistically significant. Is that correct, or is it some third option such as a set (absolute) reduction in the trend? The reason I ask is that, on the first definition, observed temperature trends have been in hiatus since 2007 at most, whereas on the second they have been in hiatus since 1997 (possibly 1996). The difference in definition makes a substantial difference in how the paper should be interpreted with respect to current circumstances.

  7. jyyh says:

    I’ll just mention the generally larger variability over land vs. ocean, and I think this has already been accounted for by the writers in the papers in question. The larger variability over land is of course a result of water vapour, which is pretty well mixed in humid oceanic climates (all through the open-sea Arctic) but not at all well mixed over continents. Guessing this is too trivial a reason to have caused such a discrepancy in the interpretation of the proxies vs. models study.

  8. Take a look at this model fit to global temperature variability and explain the parts that you think are weak.

    The only criticism not allowed is that it is overfitting, because it is definitely not a case of overfitting 🙂

    I cringe at the piling on of Mann at the skeptical blogs over the Steinman paper, mainly because I can’t properly defend their results other than to defer to what they say. As it turns out, I can usually do a better job defending a scientific explanation that I can derive myself. IOW, I lack the talents of a great BS artist.

  9. Kevin,
    Thanks. As you can probably tell, this is well outside my area of expertise. I guess the bigger issue is that if we want to better understand forced and unforced variability on millenial timescales, we need to find ways to extract these signals from the proxy data?

    Tom,

    Do you have more precise figures for a 15 year hiatus (such as we are supposedly in)?

    I’m not sure. Maybe Doug or Chris Roberts will come along and provide a better answer.

    More importantly, the decreasing probability of a “hiatus” from 5 to 10 year periods suggests Roberts et al’s definition of “hiatus” is a period with at most a trend of 0°C per decade, not a period in which the trend is not statistically significant. Is that correct, or is it some third option such as a set (absolute) reduction in the trend?

    Yes, I think you’re right that their definition of a hiatus is a period where the forced trend (which they assume to be 0.2K per decade) is completely offset by internal variability. So, this paper isn’t – strictly speaking – trying to explain our current situation – it’s more general than that.

  10. ATTP wrote in the original post:

    “There’s a related post on RealClimate called climate oscillations and the global warming faux-pause. It discusses a recent paper by Steinman, Mann & Miller called Atlantic and Pacific multidecadal oscillations and Northern Hemisphere temperatures. It
    applied a semi-empirical approach that combines climate observations and model simulations to estimate Atlantic- and Pacific-based internal multidecadal variability (termed “AMO” and “PMO,” respectively).
    and concluded that
    the AMO and PMO are found to explain a large proportion of internal variability in Northern Hemisphere mean temperatures.”

    Since this thread is specifically about internal variability and thus in part about what the oceans do, I’d like to include a paper not included in the mix of papers cited: In my comment
    https://andthentheresphysics.wordpress.com/2015/02/27/some-thoughts-on-climate-sensitivity/#comment-49646
    on March 1, 2015 at 10:58 am in “Some thoughts on climate sensitivity” I linked to this paper
    “Surface warming hiatus caused by increased heat uptake across multiple ocean basins”
    http://onlinelibrary.wiley.com/doi/10.1002/2014GL061456/abstract
    and this very good article about this paper:
    “Heat uptake by several oceans derives pause says study: Major new research explains how increased heat retention by a number of oceans has driven the Pacific Ocean to maintain the so called pause in global warming.”
    http://www.reportingclimatescience.com/news-stories/article/heat-uptake-by-several-oceans-drives-pause-says-study.html

    In my comment above I quoted extensively from these two links, especially the article on the paper, including a number of comments by the lead author Sybren Drijfhout. I also gave a quote by Forster on the Chen and Tung paper from a link to comments by a number of experts on that paper.

    This paper above by Drijfhout, Blaker, Josey, Nurser, Sinha, and Balmaseda has similar results to the paper by Steinman, Mann & Miller in that they both say that multidecadal variation such as that of the AMO has had a lot to do with internal variability, specifically the overall slower rate of growth since 2000.

    This Wikipedia article
    http://en.wikipedia.org/wiki/Global_warming_hiatus
    is I think a good resource on the global atmospheric warming slowdown. In the section
    http://en.wikipedia.org/wiki/Global_warming_hiatus#Effects_of_oceans
    of this article, this paper by Drijfhout et al. is one of the many cited by this article in this section, and this paper by Steinman et al. is also cited immediately afterwards.

    Quote from this section of the Wikipedia article citing the papers by Drijfhout et al. and Steinman et al.:

    “A study published in December 2014 found that it is likely that a significant cause of the hiatus was increased heat uptake across the Atlantic Ocean, Southern Ocean, and Equatorial Pacific Ocean.[26][27]

    A study published in February 2015 found that Atlantic Multidecadal Oscillation and the Pacific Decadal Oscillation substantially accounted for the hiatus, and stated that these cycles would soon begin to exert the opposite effect on global temperatures.[28][29]”

    This section of this article, as long as it continues being updated, seems to be a good place for a summary of all the seemingly important new papers that relate the oceans to internal variability, specifically the most recent multidecadal hiatus as well as to perhaps all of them in the record. It contains citations of all the talked-about articles from 2014, including the Chen and Tung paper and those two papers by the NASA Sea Level Change Team that found more heat than expected in the Atlantic and Southern Oceans.

    It should be interesting to see the models incorporate not only updated forcing information but all this new information on (especially multidecadal) internal variability via the oceans, and then to see new probability calculations of all types based on these new and improved models.

  11. K&A, I agree that the oceanic dipoles are the key. One would think that a dipole would cancel as one end of the dipole is warmer and the other end cooler, but the asymmetry of how they tap into the thermocline is what causes the significant temperature fluctuations. The “faux pause” is evident just by looking at the last decade of the ENSO Southern Oscillation dipole, which has been on the cool side. Dipoles in other oceans as you suggest can easily make up any residual pause.

    My chart above only goes completely through 2013, yet one can see a bit of a divergence between model and data (model says 0.7 and GISS stuck at 0.6). Yet 2014 was a record warm year so that this is not even much of a concern as GISS will likely break the 0.7C threshold in coming months.

  12. Christian says:

    hi @ all,

    I do not fully agree with Roberts et al. 2015, because it’s a little bit more. To get real, the “hiatus” is strongly based on a winter-temperature decrease, and this is not only because of ENSO, the PDO or the AMO. There is also a shift in the northern dipole (AO) and some sort of WACCy (warm Arctic, cold continents) pattern, where the first is attributed (medium confidence) to decreased UV radiation, which can partly affect the troposphere–stratosphere interplay; on the other side there is the decreased sea ice, which can weaken the zonal circulation through increased planetary wave propagation into the polar stratosphere during wintertime.

    Just look back to Feb 2014 and forward to Feb 2015. You will wonder how warm February 2015 will be. And for this, Roberts et al. 2015 is not enough on the topic of natural variability.

    Greets

  13. Christian,

    I do not fully agree with Roberts et al. 2015, because it’s a little bit more.

    To be clear, I don’t think Roberts et al. are trying to explain our current slowdown specifically; they’re simply trying to determine how internal variability might influence forced trends and the probability of it doing so for different time intervals. I suspect that they would agree that our current slowdown is probably due to more than simply natural variability.

  14. Christian says:

    @ ATTP,

    “I don’t think Roberts et al. are trying to explain our current slowdown specifically; they’re simply trying to determine how internal variability might influence forced trends and the probability of it doing so for different time intervals.”

    Yes, it’s a bit as if you conclude this in a statistical way: just adding variability as noise, you get (over 1000 noise series) that, if the variability is close to the noise level, you can have around 10 years without warming while, at the same time, GHGs lead to 0.2K/decade warming. But this is not real, because the climate is not really noise and should look more like a step function.

    “I suspect that they would agree that our current slowdown is probably due to more than simply natural variability.”

    Hmm, the question is what you want to point out. The “hiatus” or slowdown can be simply explained by ENSO; if you adjust the data for ENSO, the trend is really near the same since 1970. If you want to explain the difference between model projections and our temperature, then you have to invoke more than just ENSO, e.g. the solar bias in models (the forcing is too high compared to what we have measured), or aerosols, or other stuff.

  15. JCH says:

    February 2015 just might be the warmest February in the GISS record. Nick Stokes has NCEP for Feb at .277C. The 11 months to Jan 2015 is .705C.

  16. Christian says:

    JCH,

    Normally, GISS should be around 0.8-0.9K +- SD, but if you ask me, I would say it will go to the warmer side.

  17. Eli Rabett says:

    Trees take a long time to recover from bad stuff like insect damage, storms, and whatever, so it is not surprising that there will be long-term variability. A marker of this would be a sudden impulse followed by a long-tail recovery (e.g. the width of the ring would decrease sharply, followed by several years of slow return). A decade or more would not be a surprise.

    OTOH, this is obvious even to a dendrologist and with the exception of volcanic eruptions, the effects would be local.

  18. Evelyn says:

    To better understand the global temperature you first need to understand the evolution of human activity here on Earth.
    For example, I saw a chart of the world energy mix, with a prediction for the year 2035, made for each continent, region or big country (big as measured by population).
    If you study the chart, you will understand which regions might affect others and how other regions could be hit.
    The chart is here
    http://www.alternative-energies.net/a-prediction-regarding-the-global-electricity-mix-in-2035/
    and the conclusions will come after a deep study.

  19. Kevin says:

    Hi ‘Eli’,

    Not sure what prompted the offhand sneer in the direction of ‘dendrologists’ [sic], but if you’re interested in what is ‘obvious even to’ dendrochronologists, a good place to start is Hal Fritts, 1976, Tree Rings and Climate. Happy to provide additional sources if you’re interested.

    cheers,
    Kevin

  20. Joseph says:

    I wonder how the paper below relates to the paper mentioned by Keefe and the one cited by ATTP. Does one approach do a better job of capturing internal variability than the others?

    Recent global-warming hiatus tied to equatorial Pacific surface cooling
    http://www.nature.com/nature/journal/v501/n7467/abs/nature12534.html

  21. As JCH says, 0.7C anomaly might be it. This would easily compensate for the recent “faux pause” as long as TCR is ~2C.

    However, if someone is imagining that the TCR is much larger than 2C, then all bets are off. Or should I say, the “equivalent” TCR would have to remain at 2C while the other GHGs and aerosols combine in interesting ways to maintain that value.

    The chart I show is for a TCR of 2.1 C, and note how it does a reasonable job of accounting for temperature trend and variations (including pause) over the last 130+ years.

    Asking whether the current GCMs are too hot is a different question than variability and accounting for the pause.

  22. Lucifer says:

    Well, below is a chart I made of temperature ( NCDC land/ocean ) versus NOAA greenhouse gas forcing. Make your own by using data from:
    http://www.esrl.noaa.gov/gmd/aggi/AGGI_Table.csv, and
    http://www.ncdc.noaa.gov/cag/time-series/global/globe/land_ocean/ytd/12/1880-2014.csv

    Actual Climate Response is around 1.58 K per effective 2xCO2:

  23. Lucifer, that is clearly wrong as it is way too low and the time span is too short.

    In an earlier comment, I was asking if someone could see “parts that you think are weak” in the chart I show. Well, one obvious place is the point highlighted in 1993. The GISS value is about 0.1C cooler than the model value for a brief span. This aligns with the Pinatubo and subsequent Cerro Hudson volcanic events of 1991. This would involve a cooling, and the question is whether the model added enough cooling.

    I used Sato’s aerosol data as a regressor for volcanic activity. This data is the best estimate as it scales the amount for each volcano and then incorporates it in a time series which is very conveniently applied as a forcing. It is possible that Sato’s Krakatoa estimates were too large and the Pinatubo contribution to cooling was too small.

  24. anoilman says:

    Lucifer
    To use short time frames one must take into account short term weather effects and volcanoes, etc.
    As per Foster and Rahmstorf;
    http://iopscience.iop.org/1748-9326/6/4/044022

    Here’s the pretty picture version;

  25. Joseph said:


    I wonder how the paper below relates to the paper mentioned by Keefe and the one cited by ATTP. Does one approach do a better job of capturing internal variability than the others?

    That Kosaka and Xie paper provided some of the inspiration for what I am trying to do. They made the recommendation that perhaps climate scientists should incorporate the Pacific oscillations “as is” in their model formulations. Their basis was that no one was doing a good job of modeling ENSO so why not admit it is what it is. Works for me.

  26. AnOilMan, that’s a great illustration by F&R. The essential difference in my contribution is that I extend this all the way back to 1880 and show that it works everywhere in the time series. Once the variability is compensated for, you don’t see pauses anywhere along the timeline of aCO2 being added to the atmosphere.

  27. Robert Way says:

    The issue with internal variability associated with teleconnections is that the impacts cannot be accounted for directly using linearized indexes… However, that is the best we have, so we do it that way. The problem that I discussed in a previous comment is that using climate models to simulate the forced response requires up-to-date forcings. This is particularly the case when discussing the AMO, because it is in the North Atlantic, which is a region that shows a very strong response to volcanic forcing.

    In one of our papers (Way and Viau, Theoretical and Applied Climatology) we find that climate models underestimate the regional warming in Labrador over the past decade and yet these have not been updated for reduced forcings. This implies a greater role for multidecadal variability than suggested by the approach in the paper myself and Dr. Mann have been discussing. There are important synoptics which have to be considered as well – large-scale eruptions cause prominently positive Arctic Oscillation anomalies in the years following but we’ve seen a large set of negative Arctic Oscillation patterns which have promoted rapid warming. Disentangling the forced and unforced drivers of warming can be done but it does require improvements in models and updated forcings.

    I am certain that at the regional level the updated forcings will show a greater AMO index over the past decade which is probably now transitioning to a negative phase in the near future if the gyre activity is any indication. There are papers like Suo et al (2013) which argue that external forcing can fully explain the mid-century warming in the Arctic – this is tough to reconcile with the recent paper. To be honest I feel that robust attribution and disentangling natural and anthropogenic factors requires more than simple statistics and I believe that this is an interesting avenue for future research.

  28. Lucifer says:

    WebHub,

    Lucifer, that is clearly wrong as it is way too low and the time span is too short.

    It’s clearly accurate as a reflection of what’s actually occurred.

    To be sure there are such things as volcanoes, but they occurred in the first half of this particular record – no sign of an acceleration since then.

    As for duration, this is more than a third of a century – what is supposed to happen on a longer duration? The radiative effect is already in place. Water vapor responds within a month on the season basis. What are we waiting for?

  29. Lucifer, that is clearly wrong as it is way too low and the time span is too short.

    Yes, I don’t think one can state that it is way too low. In fact, I don’t think I’m aware of an energy-balance calculation that uses CO2e or total anthropogenic forcing (as I think they should) that gets much higher. I think Grant McDermott and Cawley et al. have used Bayesian approaches that produce TCR values of around 1.6K. Of course, we’re ignoring ranges here, so this is simply some kind of best estimate.

  30. JCH says:

    Robert Way – why do you believe the AMO is going negative? What is the AMO? Why do you believe its going negative, whatever that means, means anything?

  31. Christian says:

    Robert,

    ” …which is probably now transitioning to a negative phase in the near future if the gyre activity is any indication. ”

    I don’t think so. The problem here (feel free to look at the RAPID MOC data) is that the current decrease of the meridional overturning is driven by the upper-mid-ocean transport (D. A. Smeed et al. 2014). And that creates a problem for the AMO (I would say AMV) transitioning, because we now have a seasonal pattern; Stuart A. Cunningham et al. (2013) explained it perfectly:

    “This cooling driven by the ocean’s meridional heat transport affects deeper layers isolated from the atmosphere on annual timescales and water that is entrained into the winter mixed layer thus lowering winter sea surface temperatures”

    The thing is that the mixed layer is only deep enough during the NH winter season to affect the SST. It’s well seen in the AMO index (based on detrending).


  32. Lucifer says:
    WebHub,
    It’s clearly accurate as a reflection of what’s actually occurred.

    No it’s not. Can’t you see the chart I put up? That is a TCR of 2.1C, from 1880 to now.
    You just can’t be making these assertions in the face of actual observations.

  33. WHT,

    No it’s not. Can’t you see the chart I put up? That is a TCR of 2.1C, from 1880 to now.

    I can’t see a chart, but I think we’ve been through this before. I think you’ve defined TCR as being CO2 only. Lucifer is defining it as CO2e (although I’m not sure if the aerosol forcing is included). It’s straightforward to do a conversion between the two, and I think you’d find that both your answers are consistent with each other. Also, the standard is to define it in terms of a change in external forcing equivalent to a doubling of CO2, and not due to a doubling of CO2 only.


  34. Yes, I don’t think one can state that it is way too low.

    What is that like? Yes, we have no bananas?

    Let’s get real about this. Lucifer is clearly here to impede progress and mislead and he has no other role.

    His “CO2” derivation is shown below. Of course he adds all the other GHGs into the mix and calls that “CO2”.

    Yet everyone else talks about CO2 as if it is the leading indicator. If that is the case, treat it like a leading indicator and don’t be misled by the filler.

    Pay attention to the 2.1C and not the 1.58C because that is what people are focused on.

  35. WHT,

    Let’s get real about this. Lucifer is clearly here to impede progress and mislead and he has no other role.

    Sorry, but some of his comments have been reasonable, so let’s not assume motive.

    Yet everyone else talks about CO2 as if it is the leading indicator. If that is the case, treat it like a leading indicator and don’t be misled by the filler.

    Except, you still need to define what you mean by your TCR. If you mean measured with respect to CO2 only, then you will get a different answer to those who define it in terms of net anthropogenic forcing or CO2e. In reality, you would reach your TCR later than the TCR defined through CO2e (or net anthropogenic forcing) – i.e., CO2 itself will double later than when we would reach a change in anthropogenic forcing equivalent to a doubling of CO2. It’s all consistent as long as we know how it’s defined. Your definition is also non-standard.
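
    A sketch of that conversion with purely illustrative numbers (the concentrations and forcings below are assumptions, not published values):

    ```python
    import numpy as np

    # Sketch: the same warming gives different "TCR" numbers depending on
    # whether the forcing change is CO2-only or CO2e/net anthropogenic.
    F_2x = 3.7                          # W/m^2 per doubling of CO2
    dT = 0.8                            # K, observed warming (illustrative)
    dF_co2 = 5.35 * np.log(395 / 285)   # CO2-only forcing change (illustrative concentrations)
    dF_total = 2.3                      # net anthropogenic forcing (illustrative)

    print(f"CO2-only definition: {F_2x * dT / dF_co2:.2f} K")
    print(f"CO2e definition:     {F_2x * dT / dF_total:.2f} K")
    ```

    The CO2-only denominator is smaller, so that definition returns the larger number, which is consistent with the 2.1C versus 1.58C values being argued about here.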

  36. Robert Way says:

    Christian,
    I was just thinking in terms of this paper:
    http://onlinelibrary.wiley.com/doi/10.1002/2014GL060420/epdf

  37. So we have to explain to everyone that 290 PPM is not the pre-industrial value of CO2, because the addition of the other GHGs will put it somewhere into the mid-300 range to start with? So then we have to rename Bill McKibben’s 350 organization into 450? How about 451?

    If you want to start redefining what these terms mean, you are really opening up a big can of worms.

    Next concern is that you have given favor to the other GHGs and have placed the reflecting aerosols as second class citizens. I thought that those were significant in terms of reducing the effective TCR from what it would actually be in the absence of the aerosols. Why didn’t Lucifer mention this, eh?

    The best approach is to let the other GHGs and the reflecting aerosols effectively cancel each other and simply use the CO2 measure as the leading indicator yardstick. That is the way that the layman understands it and is the way it has been presented for years.

  38. WHT,

    simply use the CO2 measure as the leading indicator yardstick. That is the way that the layman understands it and is the way it has been presented for years.

    We’ve had this discussion before, so I don’t really want to go through it again, but anyone who publishes an energy balance calculation is not using CO2 only, so it isn’t the way it has been presented for years. Of course, if you’re talking about the formal definition of the TCR for a climate model then it is CO2 only, but that’s because it is a controlled experiment in which the only change is CO2. In all published observationally-based estimates that I’m aware of, it’s not typically CO2 only.

  39. Lucifer says:

    And, of course, to play devil’s advocate ( with myself ),
    the variability on the chart I posted above is from .16 to .69 K / (W/m^2).

    Multiplied by the nominal 3.7W/m^2 comes to a range of 0.59K to 2.56K.

    Web’s number is well within that range.

    Such is uncertainty.

  40. Lucifer, Why didn’t you include the reflective aerosols when you decided to do what you did?

    See how they can potentially cancel leaving CO2 effectively by its lonesome?

    for shame.

  41. Tom Curtis says:

    WHT:

    1) It appears to me that your model is over fitted in that it includes a Length Of Day (LOD) adjustment. I can easily think of a physical mechanism whereby global temperatures affect the LOD, i.e., a warming climate melts Arctic ice that preferentially accumulates at the tropics, shifting the mass away from the axis of rotation, and therefore slowing the rotation rate to conserve angular momentum. In contrast, I can think of no mechanism with the reverse effect, and nor have I seen one suggested.

    2) As I understand it, your model lacks anthropogenic forcings other than CO2 and aerosols(?). CO2 contributes approximately two thirds of the greenhouse effect, and excluding other anthropogenic forcings therefore understates the forcing, thereby overstating the TCR. Including all anthropogenic aerosols, as with Kevin Cowtan’s model, results in a TCR around 1.6 C

    (In passing I will note that unlike most similar models, Kevin Cowtan’s works out the TCR by taking the fitted responses and applying them to a forcing consisting of CO2 increasing by 1% per year for 70 years, ie, it gives the actual TCR as defined by the IPCC.)

  42. Tom,

    (In passing I will note that unlike most similar models, Kevin Cowtan’s works out the TCR by taking the fitted responses and applying them to a forcing consisting of CO2 increasing by 1% per year for 70 years, ie, it gives the actual TCR as defined by the IPCC.)

    Yes, a very good point. His model fits to the observed temperatures using all the forcings, but calculates the TCR doing – as you say – a 1% per year increase simulation.
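
    In outline, the diagnosis is something like the sketch below; the two-box response and its parameters are assumed placeholders, not Cowtan’s fitted values:

    ```python
    import numpy as np

    # Sketch of the 1%-per-year TCR diagnosis: drive a fitted response model
    # with 70 years of compounding CO2 (a doubling) and read off the warming.
    # All parameters are illustrative assumptions.
    tau1, tau2 = 1.0, 30.0    # fast and slow response times, years
    lam1, lam2 = 0.2, 0.3     # K per (W/m^2) for each box

    years = np.arange(1, 71)
    forcing = 5.35 * np.log(1.01) * years   # 1%/yr CO2 gives a linear forcing ramp

    T1 = T2 = 0.0
    for F in forcing:                       # simple 1-year Euler steps
        T1 += (lam1 * F - T1) / tau1
        T2 += (lam2 * F - T2) / tau2

    print(f"TCR ~ {T1 + T2:.2f} K at the time of doubling")
    ```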

  43. Lucifer says:

    Web,
    Lucifer, Why didn’t you include the reflective aerosols when you decided to do what you did?
    Notice the time range on the chart ( 1750-2000 ).

    SO2 emissions peaked around 1979.
    On that basis, the aerosol effect of clearer skies is likely to have been positive, not negative, since that time:

  44. Lucifer, you haven’t seen China apparently.


    1) It appears to me that your model is over fitted in that it includes a Length Of Day (LOD) adjustment. I can easily think of a physical mechanism whereby global temperatures affect the LOD, i.e., a warming climate melts Arctic ice that preferentially accumulates at the tropics, shifting the mass away from the axis of rotation, and therefore slowing the rotation rate to conserve angular momentum. In contrast, I can think of no mechanism with the reverse effect, and nor have I seen one suggested.

    The earth can gain angular momentum or lose angular momentum. Winds in the atmosphere, different-density ocean volumes moving upwards and downwards, etc. That is indeed all reversible. What does this have to do with overfitting? It is a single factor that is a proxy for long-term natural variability.

    OTOH, Kevin Cowtan’s model is a prime example of over-fitting. His Agung volcano contribution around 1963 must be totally exaggerated! Remove the ENSO contribution, and then dial the volcanic contribution to 0 and watch what happens. Somebody ought to tell him that Agung wasn’t bigger than Pinatubo. Also tell him about adding LOD as a factor.

    Having said that, my CSALT model and Cowtan’s model are basically the same thing as far as showing variability. So you should be happy!

  46. Tom Curtis says:

    WHT, looking at the forcings shows Agung to have been smaller than Pinatubo, but to have been immediately followed by two further significant eruptions. That fully accounts for the smaller initial response, but prolonged the volcanic cooling from 1965. Stating that LOD is a proxy for natural variability is no defence in that you have not shown it to correlate with natural variability, or why it should, nor why (as a proxy of natural variability) it needs to be lagged 8 years. Including it constitutes over fitting in that you have included a variable solely because it improves the fit, and with no theoretical justification.

  47. Eli Rabett says:

    Kevin, au contraire, Eli was pointing out that such things are undoubtedly known since the year dot to dendrologists; it’s those coming in from left field who need to think about that.


  48. Stating that LOD is a proxy for natural variability is no defence in that you have not shown it to correlate with natural variability,

    It is not me saying this, it is NASA JPL that is making the correlation:
    Dickey, Jean O., Steven L. Marcus, and Olivier de Viron. “Air temperature and anthropogenic forcing: insights from the solid earth.” Journal of Climate 24.2 (2011): 569-574.

    And they essentially revisited this study:
    Lambeck, K., and A. Cazenave, 1976: Long-term variations in the length of day and climatic change. Geophys. J. Roy. Astron. Soc., 46, 555-573.

    I am simply incorporating what they are proposing.

  49. The fat tails on Cowtan’s volcanic response functions are a factor that I do not include. I understand the idea behind this: the reduction of solar insolation during the time of stratospheric reflective aerosols is locked into the ocean depths and this is only slowly compensated for via diffusional eddy processes. This is very similar to being able to cool a heat sink temporarily and benefiting from the thermal inertia. That produces the long tails that quickly fatten up when a succession of volcanic events occurs.

    It also explains the strength of the suppression from Agung.

    Yet my model does not require such a fat-tail.
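
    For what a diffusive fat tail means concretely: a dispersive/diffusive kernel decays roughly as a power law, keeping far more weight at long times than a damped-exponential box. A purely illustrative comparison of the two shapes (not the CSALT or OHC model code):

    ```python
    import numpy as np

    # Illustrative tail comparison: exponential (damped box) versus a
    # diffusive ~1/sqrt(t) kernel, normalised to the same total weight.
    t = np.arange(1, 101, dtype=float)
    exp_kernel = np.exp(-t / 5.0)          # assumed 5-year box time constant
    diff_kernel = 1.0 / np.sqrt(t)         # diffusive (fat-tail) shape
    diff_kernel *= exp_kernel.sum() / diff_kernel.sum()

    for yr in (2, 10, 50):                 # the diffusive tail dominates at long times
        print(yr, round(exp_kernel[yr - 1], 4), round(diff_kernel[yr - 1], 4))
    ```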

  50. -1=e^ipi says:

    @ Tom Curtis –
    “Stating that LOD is a proxy for natural variability is no defence in that you have not shown it to correlate with natural variability, or why it should, nor why (as a proxy of natural variability) it needs to be lagged 8 years.”

    I was under the impression that a 6 year lag is best (possibly 7).
    I tried testing this two weeks ago. Take HadCRUT4 temperature data and regress it on solar irradiance, greenhouse gas forcing and volcanic aerosol forcing under a simple exponential decay towards equilibrium model. If you take the residual of this, you will find that a lag of 6 years on LOD fits the best to the residual (compared to other lags). You can also test to see if the residual is a damped response to LOD; but when I tested this I found that it wasn’t.
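
    Mechanically, that lag scan looks something like the sketch below, here with synthetic data (the real test uses the HadCRUT4 regression residuals and the observed LOD record):

    ```python
    import numpy as np

    # Sketch of a lag scan: correlate a residual series with lagged copies of
    # a second series and pick the best-fitting lag. Data are synthetic.
    rng = np.random.default_rng(3)
    n, true_lag = 200, 6
    driver = np.cumsum(rng.normal(size=n + true_lag))      # stand-in "LOD"-like series
    residual = driver[:n] + rng.normal(scale=1.0, size=n)  # responds to the driver 6 yr earlier
    lod = driver[true_lag:]                                # the series observed "now"

    corrs = [np.corrcoef(residual[k:], lod[:n - k])[0, 1] for k in range(12)]
    print("best-fit lag:", int(np.argmax(corrs)))          # typically recovers ~6
    ```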

  51. jyyh says:

    it appears my comment was lost somewhere to the bit heavens. it was something asking about AMO and this new (is it completely statistically derived?) NMO presented in the RC article. about the affected areas and such. never mind. should find the relevant maps myself, if I ever get to that.

  52. The standard problem of curve fitting is that in almost all cases the total number of possible variables is large, and is reduced to a small set based on the success in producing a fit. That approach makes it virtually impossible to tell how significant the success is, as the selection process corresponds to an unknown number of extra degrees of freedom consumed.

    In net discussions we have seen many apparently extremely successful fits; some of them have also been published in journal articles. Such success seems to be made possible by combining sufficient smoothing with some additional flexibility. Smoothing makes the real effective number of degrees of freedom small enough to allow for good fits using only a few free parameters, when the components are first preselected from a large enough base of components or functional forms.

    It’s always much better that the components have some known connection to atmospheric phenomena (a counterexample is based on the orbital properties of Saturn), but that’s not enough, there should be a real reasonably well understood causal link to make it more convincing. Furthermore the parameters determined from the fit should fall into a range that can be justified based on physical arguments.

    It’s of little help (or perhaps of no help at all) that someone has observed the correlation earlier, if the correlation remains only observational and without plausible enough physical causal explanation. The earlier observation may make it more likely that the variable helps in getting a good fit, just because anything that has an observed correlation may help in that.

    It’s known that many climatic phenomena are correlated. That’s natural, as they reflect the same atmospheric history. Figuring out the actual physical processes that cause these correlations is the scientific question, rather than the search for formulas where the correlations happen to produce a nice fit. Such fits may provide hints that someone can follow to gain deeper understanding, but that’s perhaps more an exception than the rule.

  53. Tom Curtis says:

    WHT:
    1) Cowtan’s “fat tail” is a consequence of thermal inertia slowing the initial response to the cooling from volcanoes, then slowing the rate at which the cooled ocean rewarms to its previous values. If you do not have a “fat tail” in your volcanic response, that is equivalent to asserting that there is no thermal inertia (or equivalently, ECS = TCR), or that the net TOA energy imbalance always approximates to zero.

    2) I have now downloaded the Meinhausen 2011 forcings as specified in Cowtan’s model and compared them with 5.35 times the natural log of the ratio of current to initial CO2 concentrations for the years 1880-2004. CO2 concentrations are from the Law Dome/Cape Grim spline fit for 1-2004 AD. Over the period 1880-2004 the CO2-only forcing varies from 60.42% to 122% of net anthropogenic forcings. Currently they are weaker than net anthropogenic forcings, but were stronger as recently as the 1960s. Overall, using CO2 forcing rather than net anthropogenic exaggerates TCR by 23.4% on a linear fit. Using the CSALT model, which uses only CO2 forcings, therefore exaggerates TCR by a similar amount.

    3) Using the Meinhausen 2011 data and GISS LOTI from 1880-2010, and regressing temperature against forcing, I found a TCR of 1.25 C/doubling of CO2. That compares with the TCR of 1.65 C/doubling from Kevin Cowtan’s model. The reason for the low value from the simple regression is the large negative volcanic forcings coupled with much smaller temperature variation (as a result of thermal inertia). This difference underlines the superiority of Cowtan’s method in calculating the TCR.

    As a side note, using only anthropogenic forcings greatly reduces the discrepancy because anthropogenic forcings do not have the sudden change in values such as those resulting from volcanic eruptions. At 1.75 C/doubling, the anthro only regression overstates the TCR, probably due to excluding the net positive natural forcing trend.

    4) Dickey et al rely on Gross et al (2005) to quantify changes in the LOD due to changes in sea level. Gross et al, in turn write:

    “Here it has been shown that redistribution of mass within the oceans do not excite decadal polar motions to their observed level. However, the ocean model used in this study was not forced by mass changes associated with precipitation, evaporation, or runoff from rivers including that from glaciers and ice sheets, and hence has a constant total mass. So, this study does not address the question of the excitation of decadal polar motion by processes that change the total mass of the oceans, such as nonsteric sea level height change associated with glacier and ice sheet mass change. As mentioned earlier, using a climate model, Celaya et al. [1999] showed that changes in Antarctic snowpack are capable of inducing decadal polar motion variations of nearly the same amplitude as that observed. Realistic estimates of mass change in glaciers and the Antarctic and other ice sheets, along with estimates of the accompanying nonsteric change in sea level, are required to further evaluate this possible source of decadal polar motion excitation.”

    In short, Gross et al, and hence Dickey et al do not address the mechanism I describe although they do acknowledge it is a valid mechanism. Until the effect is quantified, you may be (and I suspect you are) merely including a consequence of temperature change as a cause of temperature change.

  54. Tom Curtis says:

    -1=e^ipi, I mentioned an 8 year lag only because WHT uses a seven or eight year lag for LOD in his model, or at least did in 2013.

  55. Tom Curtis says:

    Lucifer, with regard to aerosols, forcing is not just a matter of emissions, but also latitude of emission and (I suspect) areal extent of emissions. Therefore the peak of forcing does not necessarily correspond with peak emissions. Indeed, in the Meinhausen 2011 data, minimum forcing (bearing in mind it is a negative forcing) is in 2004, with a value 27.8% lower than the lowest value from 1960-1979.


  56. Tom Curtis says:
    I mentioned an 8 year lag only because WHT uses a seven or eight year lag for LOD in his model, or at least did in 2013.

    The best-fit lag for LOD is actually 4 years, as per my latest results. In that blog post of 2013, I had referenced 8 years as the number referenced in the literature by Dickey and others. The LOD varies over ~50-year cycles, so this is a very broad window for optimizing.

  57. About the fat tails on the volcanic data, this appears to be a surprisingly knotty problem. About the time I first created the CSALT model, this is what Gavin Schmidt said about the Pinatubo “fat tail”


    With respect to the Pinatubo ‘tail’, I don’t think this is an accurate characterisation of what is happening in Troy’s analysis. Rather, Pinatubo occurs in a climate that has not yet recovered from previous volcanic eruptions, and the post Pinatubo rise is better characterised as the recovery from the initially cold temperatures, rather than Pinatubo per se. This tail is longer and deeper than you see in GCMs. For reference, the figure below shows the response to volcanoes only in Troy’s EBM and the GISS-E2-H 5-member Ensemble mean. – gavin – See more at: http://www.realclimate.org/index.php/archives/2013/02/2012-updates-to-model-observation-comparions/comment-page-2/#comment-319737

    Note that the GCM doesn’t reveal a fat-tail after Pinatubo, just a strongly damped cooling response.

    I took this under some consideration when I came up with the CSALT model. It just doesn’t appear to be that strong an effect. Now, it could be that my using the LOD has something to do with it but that is not so clear.

    I would like to see the latest thinking on this subject. I am a big fan of fat-tails because that is the real physics of thermal diffusion, so would like to get this right.

  58. Tom Curtis says:

    WHT, Cowtan’s model uses two time constants, Τ1 and Τ2, and has the capability of a third. Under default parameters, Τ1 = 1 and Τ2 = 30. This results in a temperature response to volcanoes shown in Figure 4 (dark green line). As you can see, the response to volcanic eruptions in the sixties results in three successive troughs of diminishing size. In contrast, in Troy Masters’ EBM, the “fat tail” is so fat that the first trough is smaller than the second, even though it comes from a volcano with a larger forcing. In shape, therefore, Cowtan’s model on default settings is far closer to the GISS-E2-H model than to Troy Masters’ EBM (which seems absurd to me). It does, however, have a longer tail than the GISS-E2-H. It can be forced to take on characteristics similar to Troy Masters’ EBM, but only by setting Τ1 to values around 10. It can also be forced to have characteristics similar to the GISS-E2-H by eliminating the second time constant (ie, setting Τ2 to zero).

    With regard to the discrepancy with GISS, it is fairly evident that current GCMs overstate the initial cooling from volcanoes. If we assume that they have got the forcing correct, I think it follows that they overstate the rapidity of heat loss and gain by the ocean. That is also suggested by the fact that much of the additional discrepancy with observational temperature trends between CMIP3 and CMIP5 comes from the inclusion of excessive volcano-induced trends in the latter for short intervals. That is, the typical rapid response to volcanoes in GCMs does not fit with observations, whereas, of course, Cowtan’s model with default settings fits very well.
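
    A sketch of how such a two-time-constant model responds to a single volcanic forcing pulse, using the default Τ1 = 1 and Τ2 = 30 described above (the amplitude split and pulse size are assumptions):

    ```python
    import numpy as np

    # Two-time-constant response to a one-year volcanic forcing pulse.
    # tau values follow the defaults quoted above; amplitudes are assumed.
    tau1, tau2 = 1.0, 30.0
    a1, a2 = 0.7, 0.3          # assumed fast/slow amplitude split, K per (W/m^2)

    forcing = np.zeros(60)
    forcing[5] = -3.0          # one-year volcanic pulse, W/m^2

    T1 = T2 = 0.0
    temps = []
    for F in forcing:          # simple 1-year Euler steps
        T1 += (a1 * F - T1) / tau1
        T2 += (a2 * F - T2) / tau2
        temps.append(T1 + T2)

    print("peak cooling:", round(min(temps), 2))
    print("tail after 10 and 30 yr:", round(temps[15], 3), round(temps[35], 3))
    ```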

  59. Pekka said blah-blah-blah about curve fitting as if we were born yesterday.

    The fact of the matter is the approach I am advocating is not that much different than what Cowtan has done that Tom Curtis referred to, what Tamino (Foster) and Rahmstorf have done that JCH referred to, what Kosaka & Xie have done that Joseph referred to, what van Hateren has done that Arthur Smith referred to, what J. Lean et al have done, what J. Hansen et al have done, and on and on.

    My bringing this up apparently opened up a can of worms for an alternate approach that some seem to have problems with. The point is that simple methods are needed to explain variability, and the ability of this approach to resolve fine structure of the temperature time series via ENSO matching is not something that should be swept under the rug.

    Just consider the matter of fat-tail responses to volcanic events that we are discussing. Note that no one is debating the ENSO responses. These show up in the GISS time series as short 5-6 month lagged thin-tail response to the -SOI events. There is no fat-tail because this is a purely transient phenomenon that has less to do with bulk atmospheric or ocean response than with the physical mechanism of ocean sloshing.

    I just looked at the van Hateren paper, and he basically made many of the same decisions I made in setting up the model. He came up with an observed CO2 climate sensitivity of 2C like I did. He also came up with a 1.5C TCR for that artificially contrived model of doubling CO2 in 70 years. I don’t do the latter because I don’t see a doubling of CO2 in 70 years according to the real-world emissions data.

    As far as I am concerned, this is the kind of model that we can continue to use to openly discuss mechanisms without having to worry about the inscrutability of GCM models.

  60. Tom Curtis says:

    WHT, I only have two problems with your model. First is use of CO2 forcing only, rather than total anthropogenic. That is an error, pure and simple in my view, because even if the two have closely approximated at some recent times (1970 is most recent for Meinhausen, but AR4 had them closely approximating in the early 2000s), that need not have always been true and certainly is not for Meinhausen. Consequently they cannot be used as substitutes for each other in time series analysis.

    The second is LOD, which is an intriguing but not physically justified choice, IMO. The latter is key, in that it adds two parameters for a minimal increase in correlation, and a significant correlation over just one cycle. Absent a clear physical mechanism, it is therefore not justified and does lead to over fitting. That does not make it a mistake, in that science involves exploring “what ifs”, in this case what if changes in the LOD drive changes in GMST. It does mean, however, that models such as Kevin Cowtan’s that restrict themselves to known causal factors are superior.

  61. verytallguy says:

    WHT

    if you are genuinely convinced of the superiority of your model over published work, you should submit for publication to see what your peers think of it, and for critical review for improvement.

  62. Nothing wrong with doing CO2 only as that is the leading factor, and is the only GHG that can be estimated rather accurately since 1880.

    Kevin Cowtan’s model is not as good as the CSALT model in terms of the Akaike Information Criterion. Way too much of an overfitting exercise!

    The LOD is known to reflect accurate thermodynamic information and real physical mechanisms.

    After hearing out these ideas, I think I may experiment with fat-tail responses on the volcanic events.

    I won’t use the brain-dead two-box model but instead use a real dispersive diffusional thermal response that I developed for OHC evaluation http://theoilconundrum.blogspot.com/2013/03/ocean-heat-content-model.html. That generates a real fat-tail from the physics as opposed to a heuristic. I really don’t expect to see a strong impulse response

  63. WHT,

    Nothing wrong with doing CO2 only as that is the leading factor, and is the only GHG that can be estimated rather accurately since 1880.

    Yes, there is, because if you consider only the change in CO2 when you’re calculating the change in forcing, but use the full change in temperature, then you’re ignoring that some fraction of the temperature change has been due to changes in non-CO2 external forcings. If you want to use CO2 only, then you should adjust the change in temperature to compensate for some of the temperature change being due to other external forcings.

  64. WHT,

    I do not mean that you should not pursue your approach. By that you are doing more than most of us are doing.

    Accepting the above does not mean that I could be easily convinced by everything that you present. The issues that I mentioned are generic. They affect all kind of comparison of models with data, in some cases the problems are very severe, in others less. They are rather severe for all simple models of the Earth system, when it’s claimed or implied that the agreement with observations tells that the model is correct at the level of the achieved agreement.

    ATTP, if you want to do it your way, then every time someone mentions that 1.2C is due to a doubling of CO2 they have to go through the laundry list of the other GHGs and what their responses are. I hope you realize that these are not all the same.

    So starting from the top you will have to specify that (1) methane has a ?.? C to doubling (2) N2O has a ?.? to doubling (3) SO2 has a -?.? to doubling, etc etc until you have exhausted all of the possible GHG contributions as well as all the reflective aerosols.

    And what about that dreary specification defining TCR as a 1% per year increase, reaching a doubling over 70 years?

    It is so much easier and more convenient to use CO2 as a leading indicator. In other words: how much has the temperature changed over the past 130+ years, while atmospheric CO2 has increased by almost 40%?

    That is the way the layman thinks about it, so we put it in layman’s terms.

  66. -1=e^ipi says:

    @ Tom Curtis – Thanks for the link to Cowtan’s model. What is the basis for the two decay times? Is the 1-year decay time associated with the land + atmosphere + upper-ocean heat capacity, and is the 30-year time supposed to correspond to the response time of the deep ocean? If so, 0.5-year and 34-year decay times might be better (based on my own calculations), although this doesn’t change the result by much.

    @ WHT –
    “The fact of the matter is the approach I am advocating is not that much different from what Cowtan has done, which Tom Curtis referred to… I just looked at the van Hateren paper, and he basically made many of the same decisions I made in setting up the model.”

    I know I said I would leave you alone, but I briefly want to point out that the Van Hateren and Cowtan approaches have impulse response functions, whereas CSALT does not. If the CSALT model is correct, that basically implies TCR = ECS = ESS. I suggest using an impulse response function.
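
    To make the suggestion concrete, here is a minimal sketch of what an impulse-response model does: the temperature is the forcing convolved with a decay kernel, rather than a lag-free function of the instantaneous forcing. The decay times echo the two-box values discussed above; the weights and sensitivity are illustrative assumptions:

        import numpy as np

        tau1, tau2 = 1.0, 30.0      # decay times in years, as discussed above
        w1, w2 = 0.5, 0.5           # box weights, illustrative
        lam = 0.5                   # sensitivity in C per W/m^2, illustrative

        years = np.arange(0, 200)
        kernel = (w1 / tau1 * np.exp(-years / tau1)
                  + w2 / tau2 * np.exp(-years / tau2))
        kernel /= kernel.sum()      # normalize the discrete kernel

        forcing = np.where(years >= 50, 3.7, 0.0)   # step doubling at t = 50
        temp = lam * np.convolve(forcing, kernel)[:len(years)]
        # 'temp' lags the step and approaches equilibrium only slowly, which
        # is why TCR < ECS in such models; a lag-free fit collapses the two
        # into a single number.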

  67. WHT,

    ATTP, If you want to do it your way, then every time that someone mentions that 1.2C is due to a doubling of CO2, then they have to go through the laundry list of the other GHG’s and what their responses are. I hope you realize that these are not all the same.

    No, you don’t, because 1.2oC is how much we’d warm if we doubled CO2 and there were no feedbacks. I’m not suggesting that CO2 isn’t important, but I don’t think you can just ignore that about 30% of the change in external forcings is not CO2.

  68. ImaginaryNumber guy, you don’t even know what the real impulse response function is. Here are a couple of my earlier blog posts where I define what the impulse responses for CO2 and for heat look like:
    http://theoilconundrum.blogspot.com/2011/09/fat-tail-impulse-response-of-co2.html
    http://theoilconundrum.blogspot.com/2012/01/thermal-diffusion-and-missing-heat.html

    CSALT is looking at the first-order transient of the factors leading to the overall trend and to the natural variability. The impulse responses I chose are fast lags: damped exponentials convolved with the input stimulus. The fat tail will slowly creep up over time. My problem with you is that you keep trying to play stump-the-chump with me, and that ain’t gonna work. I cut my teeth formulating impulse response functions while solving diffusion problems in semiconductor research, so find someone else to butt heads with.


  69. I’m not suggesting that CO2 isn’t important, but I don’t think you can just ignore that about 30% of the change in external forcings is not CO2.

    I believe that you forgot to mention that the contributions from emitted reflective aerosols may cancel that 30%.

    That has always been the underlying rule of thumb in relying on CO2 as a leading-indicator heuristic: lots of the other contributions cancel each other out, leaving the CO2. It acts as a useful approximation.

  70. The central estimate of IPCC AR5 is that CO2 forcing up to 2011 is 1.68 W/m^2, while the total anthropogenic forcing is 2.29 W/m^2, i.e. 36% higher. That takes the reflective aerosols into account.
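
    Taken at face value, those numbers pin down the size of the CO2-only bias in a couple of lines, assuming the response scales linearly with forcing:

        f_co2, f_total = 1.68, 2.29          # W/m^2, AR5 central estimates above
        print(100 * (f_total / f_co2 - 1))   # ~36.3: the percentage by which a
        # CO2-only fit would inflate the inferred sensitivity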

  71. Tom Curtis says:

    -1=e^ipi, as I understand it, τ1 represents land, ice and atmosphere, while τ2 represents the upper 700 meters of the ocean, with the deep ocean being neglected in the two-box model. The deep ocean is then represented by τ3 in the three-box model.

    Applying your values for τ1 and τ2 reduces the TCR and r^2 by 0.001 in the Cowtan model. I suspect, however, that using a different forcing series or temperature series in an equivalent model could well require small changes in the values for the best fit. Nor do I consider so small a change in r^2 significant, beyond indicating that the modal value for τ1 would be greater than 0.5 if we determined an empirical estimate using Cowtan’s model. With the default forcings and temperatures in Cowtan’s model, the r^2 is constant to three significant figures for values of τ2 from 28 to 52 (with τ1 held at 1), and for values of τ1 from 0.7 to 1.2 (with τ2 held at 30). That illustrates that the result is more sensitive to τ1, which is no surprise, but also that it is fairly robust to reasonable changes in either. Further, if we used the model to estimate an empirical value for τ1, even a value of 0 (i.e., a fortnight or less) is not excluded, yielding as it does an r^2 of 0.93.
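
    The kind of scan described here is easy to reproduce in outline. The sketch below runs against synthetic data rather than Cowtan’s actual spreadsheet and forcings, but shows the same qualitative behaviour: r^2 is nearly flat across a wide range of τ2:

        import numpy as np

        def two_box(forcing, tau1, tau2, w1=0.5, w2=0.5):
            t = np.arange(len(forcing), dtype=float)
            kernel = w1 / tau1 * np.exp(-t / tau1) + w2 / tau2 * np.exp(-t / tau2)
            kernel /= kernel.sum()
            return np.convolve(forcing, kernel)[:len(forcing)]

        rng = np.random.default_rng(0)
        forcing = np.linspace(0.0, 2.3, 135)         # forcing ramp, W/m^2
        obs = 0.5 * two_box(forcing, 1.0, 30.0)      # "true" synthetic model
        obs = obs + rng.normal(0.0, 0.08, obs.size)  # plus observational noise

        for tau2 in (28, 30, 40, 52):
            r = np.corrcoef(two_box(forcing, 1.0, tau2), obs)[0, 1]
            print(tau2, round(r ** 2, 4))   # r^2 barely moves across the range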

  72. Tom Curtis says:

    WHT, here are the running differences between Meinshausen anthropogenic and CO2 forcings – first as Meinshausen minus CO2, then as (Meinshausen minus CO2)/Meinshausen for each year’s values. Not only is the CO2 forcing not approximately equal to the total anthropogenic forcing, but the two do not even consistently scale together. The difference therefore not only exaggerates the TCR, but also changes the fit of the model.

    If the sole purpose of your model is didactic, and you take the time to explain in the model description that this is a source of inaccuracy, and why, then fair enough. If you actually want to use your model to determine TCR, or to argue that other estimates of TCR are too low because they disagree with your model, then your model is fatally flawed due to the inaccurate forcings. It is your choice what you use your model for, but as you appear to want to treat it as providing a serious estimate of TCR, the use of CO2 only, rather than total anthropogenic forcing, renders the model sufficiently flawed as to be irrelevant.

  73. Try to say that over the entire range from 1880 to now!

    That’s the problem with your contribution, Pekka: you aren’t actually trying to use the historical data and model the results year by year!

    What was the reflective aerosol content in 1888? What was the methane concentration in 1888?

    You just don’t know.

  74. WHT,

    That doesn’t help. TCR is defined as it is. If you calculate something while neglecting significant contributions, then you are calculating something that you have defined yourself, not what others are talking about. You cannot then meaningfully claim that your results show others to be wrong, nor can you expect others to be interested in your values.

  75. Christian says:

    On the topic of EBMs

    EBMs are useful, but I have doubts about them; I think they are just too simple to give us the real ECS/TCR. Brian E. J. Rose et al. (2014) have shown that regional heat uptake can dominate the global temperature response: they found that high-latitude heat uptake has roughly three times the effect on global temperature of heat uptake at low latitudes.

    The authors conclude:

    “Results imply that global and regional warming rates depend sensitively on regional ocean processes setting the OHU pattern, and that equilibrium climate sensitivity cannot be reliably estimated from transient observations.”

    So it could be a bit problematic to estimate these with EBMs.

  76. WHT,
    I agree with Pekka. What you’re calculating is not the same as what would be understood by others doing this kind of work.

  77. Thanks for the Meinshausen toy model of contributions. So that explains the variability peak at around 1940? It certainly doesn’t show itself in Cowtan’s plot when the volcanic and ENSO contributions are suppressed. It still looks pretty smooth to me.

  78. I have no idea how to use real-world data to model a 1% change in CO2 every year over 70 years. That situation, a doubling of CO2 over 70 years, has not occurred in the last century as far as I have heard. All I can use is the data available.

    As a truce, I will stop calling what I calculate TCR. I will now call it the Temperature Change Ratio, defined as dT * ln(2) / ln(C_now/C_then), i.e. the observed temperature change rescaled to an equivalent doubling of CO2.

    How is that?
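
    For what it is worth, the quantity as just defined is a one-liner; the concentration and temperature values below are illustrative only:

        import numpy as np

        dT = 0.9               # temperature change since ~1880 in C, illustrative
        c0, c1 = 290.0, 400.0  # CO2 then and now in ppm, illustrative

        tcr_like = dT * np.log(2) / np.log(c1 / c0)   # the ratio defined above
        print(tcr_like)        # ~1.9 C per doubling-equivalent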

  79. -1=e^ipi says:

    @ WHT –
    Thank you for the links.

    I have tried fitting your impulse response function to the one created by F. Joos to see how good the fit is. It is a very good fit except for the first ~70 years. If one is trying to estimate climate sensitivity from the past ~130 years of instrumental data, that could be a problem. Any suggestions for resolving this discrepancy?

  80. As long as we are on a roll and making progress, what are the thoughts on removing the WWII years from consideration in the global time series? In his paper, Van Hateren claims to exclude the years 1942-1950 from his model entirely. By the same token, I add a correction from 1942 to 1946 to improve the model fit. It is well known that the WWII years report temperatures differently, and that is enough to complicate time-series analysis.

  81. Cripes, we have already doubled the atmospheric methane concentration since 1880! That’s not so good. But at least methane has a short residence time, should we start cutting back on FF emissions.

  82. What discrepancy?

    The point is that the complex multi-exponential BERN model of CO2 adjustment time reduces to a diffusional form. Increase the number of boxes in a box-model and it becomes a diffusion model.
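
    That limit can be illustrated directly: a chain of N coupled boxes pulsed at one end. As N grows, the early response of the pulsed box develops a slow, diffusion-like fat tail instead of a single exponential decay. A minimal sketch, with an arbitrary coupling constant:

        import numpy as np

        def chain_response(n_boxes, k=0.4, steps=2000):
            # Unit impulse into box 0 of a closed chain; boxes exchange
            # content with nearest neighbours at rate k per step.
            u = np.zeros(n_boxes)
            u[0] = 1.0
            out = np.empty(steps)
            for t in range(steps):
                out[t] = u[0]
                flux = k * np.diff(u)     # discrete diffusion between neighbours
                u[:-1] += flux
                u[1:] -= flux
            return out

        for n in (2, 5, 50):
            r = chain_response(n)
            # With more boxes the decay of box 0 gets slower and more
            # diffusion-like (it levels off at 1/n on a closed chain).
            print(n, round(r[100], 4), round(r[1000], 4))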

  83. Pekka said this:


    The standard problem of curve fitting is that in almost all cases the total number of possible variables is large, and is reduced to a small set based on success in producing a fit. That approach makes it virtually impossible to tell how significant the success is, as the selection process corresponds to an unknown number of extra degrees of freedom consumed.

    Yet the evidence is this:

    How can every peak and valley be accounted for by ENSO and volcanic effects, yet someone claim that this may not be significant? Plus, all the anecdotal evidence from weathermen and scientists over the years says that El Nino brings warming and volcanoes temporary cooling, and then you have someone say that it is “virtually impossible to tell”.

    There is something wrong with scientists who wish to make things more complicated than they need be.
    #WHUT’s up with that?

  84. -1=e^ipi says:

    @ VeryTallGuy – “860ppmv”

    Yes, methane is well known. N2O is a bit less certain; there seems to be a slight discrepancy between snow-pack data and ice-core data from about 1850-1940. Anyway, here is a link for WHT that gives estimates of CO2e for 1765-2005: http://www.pik-potsdam.de/~mmalte/rcps/data/20THCENTURY_MIDYEAR_CONCENTRATIONS.xls.

    @ WHT – “I have no idea how to use real-world data to model a 1% change in CO2 every year over 70 years.”

    You fit a model to the data, and then you use the fitted model to get the TCR.
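
    Spelled out: fit whatever response model you like to the historical record, then feed the fitted model the idealised 1%-per-year ramp and read off the warming at the year CO2 doubles. A sketch, with an assumed (not actually fitted) two-box kernel:

        import numpy as np

        years = np.arange(0, 140)
        co2 = 280.0 * 1.01 ** years             # idealised 1%/yr CO2 ramp
        forcing = 5.35 * np.log(co2 / 280.0)    # standard CO2 forcing, W/m^2

        tau1, tau2, lam = 1.0, 30.0, 0.4        # assumed fitted values
        kernel = (0.5 / tau1 * np.exp(-years / tau1)
                  + 0.5 / tau2 * np.exp(-years / tau2))
        kernel /= kernel.sum()
        temp = lam * np.convolve(forcing, kernel)[:len(years)]

        print(temp[70])   # TCR: warming at the year CO2 has doubled (~year 70)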

    “what are the thoughts of removing the WWII years form consideration in the global time series? In the paper by Van Hateren, he claims to totally exclude the years from 1942-1950 from his model. By the same token, I add a correction from 1942 to 1946 to improve the model fit.”

    If you add a correction and don’t take into account the uncertainty of that correction, then you will be underestimating the uncertainty. By omitting the WWII years, Van Hateren avoids understating it. Though maybe it is better just to do a weighted regression: the HadCRUT4 temperature data set gives the percentage global coverage for each month, and that could be used as a weight (WW2 and WW1 would then have less weight than the surrounding years).
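
    A sketch of that weighted regression, with synthetic stand-ins for the HadCRUT4 series and its coverage fractions:

        import numpy as np

        rng = np.random.default_rng(1)
        t = np.arange(1880.0, 2015.0)
        temp = 0.007 * (t - 1880.0) + rng.normal(0.0, 0.1, t.size)

        coverage = np.ones(t.size)                 # fraction of globe observed
        coverage[(t >= 1914) & (t <= 1918)] = 0.5  # WW1, illustrative value
        coverage[(t >= 1939) & (t <= 1945)] = 0.5  # WW2, illustrative value

        # Weighted least squares via sqrt-weight rescaling of the design matrix:
        X = np.column_stack([np.ones(t.size), t - t.mean()])
        sw = np.sqrt(coverage)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], temp * sw, rcond=None)
        print(beta[1] * 10)    # fitted trend in C per decade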

  85. JCH says:

    All I know is that curve fitting is often invoked as a pejorative, so it must have been responsible for some spectacular failures.

  86. Fitting something to a straight line is worlds apart from fitting an erratic, quasi-cyclic time series. Matching something to the first is easy enough that you can easily be misled. On the other hand, if you can match something to the second with the first factors you try, you know you are on to something. That’s the situation with ENSO and the global temperature. There is no way that fit can fail.
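
    A sketch of that kind of fit: regress detrended temperature on a lagged ENSO index and scan the lag. Synthetic series stand in for GMST and the ENSO index here; with real data, a correlation peak at a few months’ lag is the result being described:

        import numpy as np

        rng = np.random.default_rng(2)
        n = 1200                                  # months
        enso = np.zeros(n)
        for i in range(1, n):                     # AR(1) stand-in for an ENSO index
            enso[i] = 0.9 * enso[i - 1] + rng.normal()

        lag_true = 6                              # months, built into the fake data
        temp = 0.0008 * np.arange(n)              # secular trend
        temp[lag_true:] += 0.05 * enso[:-lag_true]
        temp += rng.normal(0.0, 0.05, n)

        months = np.arange(n)
        resid = temp - np.polyval(np.polyfit(months, temp, 1), months)  # detrend
        for lag in range(0, 13):                  # correlation peaks near lag 6
            r = np.corrcoef(enso[:n - lag], resid[lag:])[0, 1]
            print(lag, round(r, 3))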

  87. WHT,
    Volcanism creates a real forcing that has also been measured independently. The causal mechanisms are fairly well understood. It satisfies all the requirements for studying the related signal in GMST further, and for removing it, with little more assumed than the additivity of the effects. The same is true for TSI. In both cases the number of events also helps the analysis.

    ENSO has similarities, but is not quite as straightforward. It is not an externally determined forcing, but an indicator of the internal state of the Earth system. The short-term variability of the ENSO cycles may perhaps be considered external to the most immediate processes that determine GMST, but that is at best a crude approximation. The longer-term variability in the statistical properties of ENSO is likely related to other forms of internal variability.

    Anthropogenic forcings are external to the climate system, but how they affect GMST is what we want to find out.

    The problems I discuss concern, in this case, the longer-term variability. Several indicators are available to choose from; parameters like lags are introduced, along with some freedom in the functional forms. Meanwhile, the effective number of degrees of freedom in the long-term variability is small. This is a combination that allows for overfitting. When overfitting is possible in an approach, it is usually impossible to tell how much it affects the outcome, which makes the significance of the success equally impossible to judge.

    Curve fitting may be a very useful exploratory technique even when the problems I discuss cannot be avoided, but stronger conclusions require that the effective degrees of freedom are well enough under control that significance can be tested.
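
    The selection effect being described can be demonstrated in a few lines: fit pure noise against the best of many candidate regressors, and the winner’s apparent fit looks far better than a nominal single-regressor test would suggest. A sketch with synthetic noise:

        import numpy as np

        rng = np.random.default_rng(3)
        n, n_candidates = 100, 50
        y = rng.normal(0.0, 1.0, n)                     # pure noise "temperature"
        candidates = rng.normal(0.0, 1.0, (n_candidates, n))

        r2 = [np.corrcoef(c, y)[0, 1] ** 2 for c in candidates]
        print(max(r2))   # best-of-50 r^2, typically ~0.1 despite zero signal
        # A nominal test treats the winner as a single try; the search over
        # 50 candidates is the hidden consumption of degrees of freedom.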

  88. Pekka, there is only one way to describe your response, and that is pedantic.

    The topic is assigning attributions to the global natural variability in temperature.

    The role of ENSO in the year-to-year variability is overwhelming. Everyone knows this, and when the data are compiled it is easily shown, as in the chart I showed.

    Where ENSO does not quite fit, the odd volcanic eruption straightforwardly fills the gap. The only question left, with respect to the strength and duration of the transient cooling, is what effect the fat tail has on the fine details. This is second order at best.

    Another second-order effect, which should be there and which CSALT picks out at the +/-0.025C level, is the variation in TSI.

    What is not natural variability is the overriding rising secular trend in temperature. And I showed this to be a consequence of the rising atmospheric concentration of aCO2. A simple approximation is to take the log of the CO2 concentration and plot that to demonstrate the attribution.

    The only remaining question is what causes the slow, gradual +/-0.1C variation over 60-year periods. As has been observed since 1976, this variation aligns phenomenologically with the long-term changes in the LOD, which measures changes in the kinetic energy of the earth and lithosphere. This is an excellent proxy, as it apparently doesn’t pick up the rising secular trend, as Dickey et al. have shown.

