Models don’t over-estimate warming?

I thought I might write about the new paper by Jochem Marotzke and Piers Forster called "Forcing, feedback and internal variability in global temperature trends". It's already been discussed in a Carbon Brief post called "Claims that climate models overestimate warming are unfounded".

Having read the paper, I'm not sure I quite agree with the Carbon Brief title. I think (although I'm happy to be convinced otherwise) that a fairer assessment would be that the paper shows that internal variability can explain the discrepancy between forced model trends and observed trends for periods of 15 years. It also shows that for a longer period (62 years) the impact of internal variability is small and the forced model trends are a good match to observations.

I’ll briefly try to explain. They use a very basic energy balance-like model

\Delta T = \dfrac{\Delta F}{\alpha + \kappa} + \epsilon,

where \alpha is the climate sensitivity, \kappa is the ocean heat uptake efficiency, and \epsilon is a term representing internal variability. I’ll be honest and say that I’m a little confused by this equation since, I think, \kappa should be time-dependent. I think, though, that for short enough timescales (when the system is not in equilibrium) it’s probably okay to assume it’s constant.
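
To make the roles of the terms concrete, here's a minimal numerical sketch of that relation in Python. The parameter values are illustrative assumptions only (they are not taken from the paper):

```python
import numpy as np

# Minimal sketch of the energy-balance relation above.
# All parameter values are illustrative assumptions, not taken from the paper.
alpha = 1.1   # climate feedback parameter, W m^-2 K^-1 (assumed)
kappa = 0.7   # ocean heat uptake efficiency, W m^-2 K^-1 (assumed)
dF = 2.0      # change in radiative forcing, W m^-2 (assumed)

rng = np.random.default_rng(0)
epsilon = rng.normal(0.0, 0.1)  # internal variability, K (assumed amplitude)

dT = dF / (alpha + kappa) + epsilon
print(f"forced response {dF / (alpha + kappa):.2f} K, with variability {dT:.2f} K")
```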

Credit : Marotzke & Forster (2015)

What they do then is write the above in a perturbed form (relative to the means of each variable)

\Delta \bar{T} + \Delta T' = \frac{\Delta \bar{F}}{\bar{\alpha} + \bar{\kappa}} + \frac{1}{\bar{\alpha} + \bar{\kappa}} \Delta F' - \frac{\Delta \bar{F}}{(\bar{\alpha} + \bar{\kappa})^2} \alpha' - \frac{\Delta \bar{F}}{(\bar{\alpha} + \bar{\kappa})^2} \kappa' + \epsilon,

where the barred terms are the means, and the primed terms are perturbations from the mean. They then do a multiple linear regression of \Delta T' against \Delta F', \alpha', and \kappa'. If I understand what that means, they select from the range of \Delta F', \alpha', and \kappa', and then try to fit the model trend to the observed trend, defining \epsilon as the residual. [Edit (6/2/15) : I think this may be wrong. I think they use this to determine the externally forced and internally forced (\epsilon) trends in the models, and then compare these to the observed trends, showing that the combined externally and internally forced trends in the models can explain the observed trends for all timescales, with internal forcing dominating for short time intervals.]
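
If I've understood the regression step correctly, it amounts to something like the following sketch. All of the numbers here are synthetic stand-ins for the CMIP5 ensemble values used in the paper; `dFbar` and `ak` (the ensemble means) are assumed for illustration:

```python
import numpy as np

# Sketch of the regression step: regress the across-ensemble trend anomalies
# dT' on dF', alpha' and kappa', and treat the residual as the internally
# generated part. All numbers are synthetic stand-ins for the CMIP5 values.
rng = np.random.default_rng(1)
n_models = 75
dF_p = rng.normal(0, 0.3, n_models)     # forcing anomalies (synthetic)
alpha_p = rng.normal(0, 0.2, n_models)  # feedback-parameter anomalies (synthetic)
kappa_p = rng.normal(0, 0.2, n_models)  # uptake-efficiency anomalies (synthetic)
eps = rng.normal(0, 0.1, n_models)      # internal variability (synthetic)

dFbar, ak = 2.0, 1.8                    # assumed ensemble means of dF and alpha+kappa
dT_p = dF_p / ak - dFbar * (alpha_p + kappa_p) / ak**2 + eps

# Multiple linear regression of dT' against dF', alpha', kappa' (with intercept).
X = np.column_stack([np.ones(n_models), dF_p, alpha_p, kappa_p])
coef, *_ = np.linalg.lstsq(X, dT_p, rcond=None)
residual = dT_p - X @ coef              # the epsilon estimate

print(coef)                             # ~ [0, 1/ak, -dFbar/ak^2, -dFbar/ak^2]
print(residual.std(), eps.std())        # residual spread ~ internal variability
```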

The results are shown in the figure on the right (Figure 2 from Marotzke & Forster 2015). The top panel shows the 15-year trends from observations (black line), the 15-year trends from models (red line), the range for the forced model trends (the difficult-to-see coloured region around the red line), and the residual (bars). The second panel shows the range for the forced 15-year trends, relative to the mean. The third panel shows the range of the residual, representing internal variability. It's clear that for these 15-year trends the magnitude of the internal variability is bigger than the range of the forced trend. The bottom panel shows the distribution of the internal variability trends for different start years.

The paper then repeats the above for trend lengths of 62 years, finding that the impact of internal variability is much smaller, with forced trends producing a good match to observed trends. Maybe the most interesting figure is the one below, which shows that if one considers different start years, the 15-year model trends can have a tendency to be both larger and smaller than the observed trend (depending on the start year), but if one considers all start years, the distributions are very similar.

Comparison of observed and modelled 15-year trends for different start years (left and middle panel) and for all start years (right-hand panel) (credit : Marotzke & Forster 2015).

So, as far as I can tell, this paper is illustrating that one can explain the discrepancy between observed and modelled trends as being primarily due to internal variability, with the impact of this variability decreasing as the time interval increases. I notice that there has also been quite a lot of interest in this recently. This press release from Duke University says

It just means the road to a warmer world may be bumpier and less predictable, with more decade-to-decade temperature wiggles than expected.

Ed Hawkins also has a post discussing variability in climate models. However, if I understand this properly, there is no suggestion that this variability will have a significant impact on long-term trends. It might be difficult to robustly predict decadal trends, because of this variability, but it will tend to average out over many decades and hence the forced trend will dominate on these longer timescales. Additionally, as pointed out in Roe (2009),

If the temperature reconstructions reflect high natural variability of global mean temperature, then odds are that the climate system is even more sensitive to external forcing (i.e., the positive feedbacks are even larger).

So, if anything, if internal variability is large, it might suggest a higher, rather than lower, climate sensitivity.

Anyway, this post has got rather long and convoluted, which may be a good thing as it would be nice to have a break from moderating somewhat contentious comment threads 🙂 . Of course, if I have misunderstood something about this paper, feel free to point it out. I suspect this type of paper also has implications for the energy balance models (EBMs) favoured by Nic Lewis, for example. If internal variability is as large as suggested by some of this work, then – given that EBMs essentially assume that there is no internal variability – climate sensitivity estimates from EBMs would depend strongly on the time interval considered.


729 Responses to Models don’t over-estimate warming?

  1. tl;dr: No.

    As far as I'm aware, what they're saying isn't really anything new (individual model runs show this behaviour). The only thing they've done is tackle it from a slightly different angle. Though I haven't had the time yet to dive into it in more detail.

  2. Collin,

    tl;dr: No.

    Good, it worked 🙂

    I think you’re right, although what they seem to have done (which I haven’t seen before) is show that the impact of internal variability is the same across the entire instrumental record.

  3. Thought I would be helpful and summarize all of the above. 😉

    I would be surprised if it wasn’t seen in the entire instrumental record. Even with the more spotty records that are more regional you still should see it. The hardest part is figuring out what that meant in the context of global trends and underlying mechanisms.

  4. Collin,

    I would be surprised if it wasn’t seen in the entire instrumental record. Even with the more spotty records that are more regional you still should see it. The hardest part is figuring out what that meant in the context of global trends and underlying mechanisms.

    Yes, I agree, but I think it’s interesting to show that for a given trend length, the residual is about the same across the whole instrumental temperature record. Hence, one can argue that it’s the same physical process (internal variability) that is the cause of the difference between the model and observed trends on these timescales.

  5. BBD says:

    If internal variability is as large as suggested by some of this work, then – given that EBMs essentially assume that there is no internal variability – climate sensitivity estimates from EBMs would depend strongly on the time interval considered.

    Exacterly!

    Some methods of estimating sensitivity are more equal than others.

  6. ATTP,

    Thank you for this post. It's *very* informative.

    It's very good that they did it for that 60+ year period, and it's very telling that they got better matches with the models for this longer period. It seems that, with that 60+ year run, they chose to go ahead and address those recent papers from last year that speak of up to 60-year cyclic behavior from the oceans rather than wait for their confirmation. Good idea. It shows that even if these papers in question are confirmed, the models still hold up well. The only thing that matters in the end is the underlying long-term trend curve.

    This is a good response to the claims that the models are “failing badly” and “being falsified” and so on.

    Side Note: Many might not know this: In addition to Tung and Chen suggesting cycles up to 60 years, the statistical analysis I cited before by Lovejoy also uncovered suggestions of cycles up to 60 years. See
    http://www.physics.mcgill.ca/~gang/Lovejoy.htm
    for this – please scroll down to see all the graphs – and much more, including more links to more of his falsifications of what deniers say, in addition to that paper I cited in another thread.
    Quote:
    “Several are shown, from their number we may roughly deduce that the return period of unsigned “pauses” is about 25-30 years, for a signed pause (double: 50- 60 years).”

    ATTP, you wrote,

    “So, if anything, if internal variability is large, it might suggest a higher, rather than lower, climate sensitivity.”

    Exactly. This idea that it might suggest higher rather than lower sensitivity shows the falsity of such claimed implications as that being on the “down” side of a multi-decadal (up to 60 year) cyclic fluctuation around an underlying increasing curve implies that that curve must be less steep than what models say. And, especially so, it shows the falsity of such claimed implications as that finding ourselves longer and longer on the “down” side of such a fluctuation implies that that curve must be less and less steep than what models say.

    In other words, we have this claim out there by some that there exists a multi-decadal (up to 60 years) cyclic behavior in the temperature record, and that whatever causes it – some sort of internal variability – somehow argues against the mainstream climate science view that the warming since the late 1800s is caused at least almost entirely by the GHG effect. Those who make this claim against mainstream climate science seem to have a great desire to see such large internal variability confirmed. But the point that such large internal variability might suggest higher rather than lower sensitivity makes it probable that this object of their desire will blow up (or now has already blown up) in their faces.

  7. KeefeandAmanda,
    Thanks. A major issue with all these claims of a 60 year cycle – IMO – is that not only do we not really have a suitable dataset (the instrumental temperature record is only 130 years long and has large errors at early times), there is also no physically plausible mechanism that can explain this type of cycle. If you had a good enough dataset without a physical model, you might consider it plausible if the data showed a cycle. Then you’d try to work out what’s causing it. If you had a physical mechanism without a suitable dataset, you might consider it plausible, given that you have a mechanism. Without either, it just seems to be clutching at straws.

  8. Willard says:

    > If internal variability is large, it might suggest a higher, rather than lower, climate sensitivity.

    Why?

  9. Willard,
    Because the physical processes associated with internal variability are essentially the same as those associated with feedbacks. This paper by Palmer & McNeall is quite good. Essentially internal variability can influence surface temperatures which can then produce a feedback effect (clouds and water vapour, for example) which then produces some short-term warming. It can’t persist, I think, because the feedback response is smaller than the negative Planck feedback (so it would eventually cool or warm back to the externally forced equilibrium), and because it is also sensitive to changes in the other direction. However, since internal variability is essentially the same physical processes as those associated with feedbacks, if internal variability is large, it might suggest that feedbacks, too, are large.

    At least, that’s how I understand it.

  10. Eli Rabett says:

    secretarial note: for fractions in some versions of LaTeX use dfrac{}{} rather than frac{}{}. In that case the numerator and denominator will appear the same size as normal rather than smaller.

    Experience, sad experience, speaks.

  11. Eli Rabett says:

    Many of the 60+ year trends are linked to solar cycles, for which the evidence is quite weak. Astrology can make you go round in circles.

    http://www.landscheidt.info/

  12. That first equation you give is completely hosed up. If you work out the algebra, deltaT turns out to be a constant. Maybe the second deltaT (on the RHS) is meant to be a deltaF? I have no idea.

    In any case, these kinds of approaches are second-rate to using the historical data as is. Yesterday I posted an alternative analysis which extracts the constituent factors of the global temperature trends.

    http://contextearth.com/2015/01/30/csalt-re-analysis/

    One can do the same for the land temperature data alone and extract an ECS of close to 3C. I am with BBD on this stuff and recommend we start pushing the painfully obvious historical observations, instead of this roundabout reasoning that never goes anywhere.

  13. Eli,
    Thanks. You’d think I’d know that by now 🙂

    WHT,

    That first equation you give is completely hosed up. If you work out the algebra, deltaT turns out to be a constant. Maybe the second deltaT (on the RHS) is meant to be a deltaF?

    Yes, I messed up there.

    In any case, these kinds of approaches are second-rate to using the historical data as is.

    Depends what you mean. The point of the paper is to compare model trends – where \alpha, \Delta F, and \kappa (both mean and perturbed quantities) come from the models – with observations. Bear in mind that any forcing estimate comes from models anyway.

  14. If the purpose is to tell something about variability with a period of 60 years, then looking at 60-year trends tells absolutely nothing (a full period produces zero variability in trend). Analysis based on 30-year trends might apply to that question.

    Where would we see most clearly the signal of models running too hot in this kind of analysis?

    I think the best place to look is at the latest points of the 60-year trend, as they correspond to the period of strongest growth in forcing over a long enough period. Looking at the tiny upper right corner of Figure 3a appears to tell that this most significant test indicates that the models have probably been running a little too hot, but not so much that the evidence would be conclusive.

  15. Take any interval from 1880 to now and you will find that the TCR has been stationary at 2C and the ECS at 3C, assuming that we treat log(CO2) as the leading indicator of forcing.

    Do we need to wait another 100+ years to verify this rule? How much more of a time series do we need to pin this down?

    We understand very well that the natural variability is due to ENSO, volcanic aerosols, and TSI cycles. Less well understood is that the longer term multi-decadal variation follows from processes linked to changes in the angular momentum of the Earth. Whatever these latter processes may stem from, the measurements effectively allow us to mathematically separate the modulation from the trend. That is the benefit of using thermodynamic proxy measurements, in that they remove the noise and retain the signal of interest. In statistics, this is known as getting rid of the nuisance parameters.

    What remains is the GHG secular trend. Kind of frustrating that the pros do not want to engage at this basic and fundamental a level.
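
    For concreteness, here is a sketch of the kind of calculation being described – regressing temperature against log2(CO2) to get a TCR-like number. Both series below are synthetic (a built-in 2 K per doubling plus noise standing in for ENSO, volcanoes, etc.), so the point is only the mechanics:

    ```python
    import numpy as np

    # Sketch of the calculation described above: regress temperature on
    # log2(CO2) to get a TCR-like number. Both series are synthetic.
    rng = np.random.default_rng(2)
    years = np.arange(1880, 2015)
    co2 = 290.0 * np.exp(0.004 * (years - 1880))  # synthetic CO2 ramp, ppm
    temp = 2.0 * np.log2(co2 / co2[0]) + rng.normal(0, 0.1, years.size)

    slope = np.polyfit(np.log2(co2), temp, 1)[0]
    print(f"estimated warming per CO2 doubling: {slope:.2f} K")  # ~2 K
    ```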

  16. -1=e^ipi says:

    “So, if anything, if internal variability is large, it might suggest a higher, rather than lower, climate sensitivity.”

    Forgive my ignorance, but could you please explain why you think this is the case? According to the Stefan-Boltzmann law, temperature variability acts as a negative feedback mechanism, in that the increase in the amount of radiation emitted to space for a high variability climate should be less than the increase in the amount of radiation emitted to space for a low variability climate, given that both of these climates have the same initial average temperature and underwent the same change in forcing.

    Is it related to the albedo feedback mechanism?

    Oh, I read your response to Willard but I’m not sure if it completely explains things for me. Are you saying that there is a positive correlation between variability and climate sensitivity, thus high sensitivity suggests high variability?

    Does this correlation agree with empirical evidence? I was under the impression that climate variability was much higher during ice ages than during interglacials, and that climate variability is expected to be higher under ice house conditions than hot house conditions. Yet isn’t it expected that climate sensitivity is higher when the planet is colder (due to the feedback mechanisms of ice caps and clouds)?

  17. Willard says:

    > Astrology can make you go round in circles

    I don’t know of any 60 years cycles in astrology.

  18. -1=e^ipi,

    Are you saying that there is a positive correlation between variability and climate sensitivity, thus high sensitivity suggests high variability?

    In a sense, it’s the other way around. If there is high internal variability, then we’d expect a higher feedback response, and higher climate sensitivity, than if variability is low.

    According to the Stefan-Boltzman law, temperature variability acts as a negative feedback mechanism in the increase in the amount of radiation emitted to space for a high variability climate should be less than the increase in the amount of radiation emitted to space for a low variability climate given that both of these climates have the same initial average temperature and underwent the same change in forcing.

    I’m not quite sure what you’re getting at here, but internal variability is not just associated with changes in temperature, but also with changes in radiative forcing (i.e., increased water vapour and clouds).

    I was under the impression that climate variability was much higher during ice ages than during interglacials, and that climate variability is expected to be higher under ice house conditions than hot house conditions.

    Yes, this is true, but I think this is mainly because of ice-sheet instabilities, which don't apply in our current climate. So, given our current climate, if we find that variability is high, that would be inconsistent – I think – with low climate sensitivity.

    Not sure if that quite answers your question, but I need to go and cook dinner 🙂

  19. Willard says:

    > This paper by Palmer & McNeall is quite good.

    Interesting. That would deserve due diligence, or at least a post. It does seem to go against what has been said about Mister T.

  20. izen says:

    @ -1=e^ipi
    “According to the Stefan-Boltzmann law, temperature variability acts as a negative feedback mechanism, in that the increase in the amount of radiation emitted to space for a high variability climate should be less than the increase in the amount of radiation emitted to space for a low variability climate, given that both of these climates have the same initial average temperature and underwent the same change in forcing.”

    Not sure if this is the point you are getting at, but there is an issue around temperature variability and the T^4 S-B law.

    Comparing a body emitting with a static surface temperature to one with a variable temperature shows that, because excursions above the ‘static body’ temperature radiate disproportionately more energy, the variable body's excursions below the static temperature must be greater to compensate. If both bodies are emitting the same amount of energy (equal inputs), then the average temperature of the variable body will be lower than that of the static body.

    I think.
    My ignorance exceeds most here so hopefully someone can make sense of this?!

  21. I think that the problem with using these models is that they are relying on non-deterministic Monte Carlo-derived outcomes to use as a comparison benchmark, instead of using known values.

    Clearly the volcanic disturbances can't be predicted, so they correctly include these events in their runs. But they should do the same thing with ENSO then. Put in the ENSO and the other known variability factors, and then you will greatly reduce the variability, if these effects are properly compensated for in the temperature time-series.

    This is an example from the Marotzke paper:

    Why do that when you can compensate out the known variability much better than that?
    It is really frustrating to watch these scientists spin their wheels.


  22. If both bodies are emitting the same amount of energy (equal inputs), then the average temperature of the variable body will be lower than that of the static body.

    Yes, this is one of those inequalities that derives from the mean-value theorem. Placing an even power on a value will change the bias if a mean value is estimated.

    It must be somewhere in this list http://en.wikipedia.org/wiki/List_of_inequalities
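
    A quick numerical check of this point, with illustrative values for the mean temperature and its spread: by Jensen's inequality the fluctuating body emits more at the same mean temperature, so for equal emission its mean must sit lower.

    ```python
    import numpy as np

    # A body whose temperature fluctuates emits more (via sigma*T^4) than a
    # static body at the same mean temperature. Values are illustrative.
    sb = 5.67e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4
    T_mean, T_std = 288.0, 5.0   # assumed mean and spread, K

    rng = np.random.default_rng(3)
    T = rng.normal(T_mean, T_std, 1_000_000)
    print(sb * (T**4).mean())    # fluctuating body: slightly larger
    print(sb * T_mean**4)        # static body
    ```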

  23. Joseph says:

    ATTP, if this paper is correct, could you be more confident in saying that individual cycles like ENSO may see periods where they don't exactly average out (terminology?), but over around a 60-year period they do tend to cancel out?

  24. izen,

    Not sure if this is the point you are getting at, but there is an issue around temperature variability and the T^4 S-B law.

    I’m not quite sure I’m getting it either, but one point is that internal variability is not just temperature fluctuations. There are also feedback responses to those fluctuations (water vapour, clouds, …) and so internal variability can be regarded as internally forced.

    Joseph,
    I’m not sure. I think the general view is that these cycles do tend to average out, but I imagine it’s not exact. We probably can’t say that internal variability will exactly average out over some particular time interval.

    Willard,
    Yes, I should probably write about that. I had considered emailing Doug McNeall to ask if there was a way to do something like what they’ve done in their paper to try and address the whole Mister T issue, but I haven’t done so and since it would almost certainly take a model, some still wouldn’t believe it anyway.

  25. Joshua says:

    –Snip–

    Nic Lewis
    JANUARY 31, 2015 AT 8:01 PM
    Ed,
    Please stick to using HadCRUT4. Cowtan & Way are SkS activists and it is far from clear that their GMST timeseries is to be preferred to HadCRUT4 even over shortish timescales.

    –Snip–

    http://www.climate-lab-book.ac.uk/comparing-cmip5-observations/

    Hmmm. So much unintentional irony, so little time.

  26. Marco says:

    Joshua, I guess with that argumentation, Nic has (unintentionally) told us to ignore all of his analyses, since he is a GWPF activist and it is far from clear that his ECS analyses are to be preferred over any other analysis (which he, however, demands we all do). There. Done. Thanks for playing, Nic!

  27. Joshua says:

    Marco…yup

  28. Joshua and Marco,
    Funny, I’ve just pointed that out in my response to his comment on Ed’s blog.

  29. Joshua says:

    Clearly, matt is certain that nic has lost the argument.

  30. BBD says:

    “So, if anything, if internal variability is large, it might suggest a higher, rather than lower, climate sensitivity.”

    Forgive my ignorance, but could you please explain why you think is the case?

    My understanding of this (which I write here to be corrected, if it is mistaken) is that the extent to which feedbacks are net positive determines the sensitivity of the climate system to radiative perturbation. This perturbation can be internal or external – it makes no difference as feedbacks are engaged either way. The stronger the net positive feedback, the more internal (internally forced) variability is manifest.

    If someone argues that past and even present climate change is substantially driven by internal variability, then they argue for a sensitive climate system made so by positive feedbacks.

    Such a climate system would – must – be sensitive to all radiative perturbation, including that from a rapid and substantial increase in GHG forcing.

  31. BBD,
    Radiative perturbation, that’s the term I was looking for when I responded. Yes, what you say is how I understand it too.

  32. JCH says:

    The PDO data on Wood for Trees, 114 years, has this trend: slope = -0.00152414 per year.

    http://www.woodfortrees.org/plot/jisao-pdo/plot/jisao-pdo/trend

  33. The PDO data on Wood for Trees, 114 years, has this trend: slope = -0.00152414 per year.

    Can one infer a long-term cooling trend from that (0.15°C per century) or is it more complicated than that? I presume this is only the Pacific too.

  34. JCH says:

    The trend just rocks gently positive or negative depending on where the cycle ends. It's been going downward for around 27 years, but strongly up just recently. I think the PDO in the 20th century is pretty close to the GMT until around 1980, when Tsonis et al threw it under the bus.

  35. JCH,
    Okay, thanks, that makes sense.

  36. JCH says:

    On Science of Doom there are some interesting posts recently about this subject.

  37. On Science of Doom there are some interesting posts recently about this subject.

    Yes, I read some of those a while back. Interesting, but take some concentrating 🙂

  38. Cowtan & Way data set is simply a correction term that barely registers when the entire time series starting around 1880 is concerned. A much more significant correction is removing the warming level shift during World War II.

  39. Jsam,

    Nice! I stand corrected.

    ***

    Marco, Joshua, AT,

    Here may be David Young’s technical comment about Nic’s model:

    Modelers are rather clever and I believe they know the effect of their parameters and choices on important emergent properties like CS [climate sensitivity]. This is certainly true for turbulence models for example, where the effect of choices on important properties are well known to specialists. The real problem here is that there are only a finite set of parameters and a potentially almost limitless number of emergent properties.

    http://neverendingaudit.tumblr.com/post/108050690409

  40. WHT,

    Cowtan & Way data set is simply a correction term that barely registers when the entire time series starting around 1880 is concerned.

    How do you know this? If you're Nic Lewis and 0.1°C matters, then Cowtan & Way increases your TCR estimate by 10%, and that might just be 10% closer to the IPCC values than is acceptable for someone associated with the GWPF.

  41. I know this because I actually work with the numbers. As I have said Cowtan&Way is a small correction, impacting TCR by 3 to 4% at best. In this post I said the correction was “subtle”
    http://contextearth.com/2013/11/19/csalt-with-cw-hybrid/

    They may have refined the time series since I looked at it.

  42. The PDO is not a strong contributing factor to modeling the temperature time series, especially its short-term fluctuations. OTOH, the underlying slow multidecadal fluctuation in the PDO is a factor that shows up as a common-mode in all the indices. I factor this out as the LOD term which serves as a proxy to this multidecadal fluctuation:

    http://www.pasadenastarnews.com/general-news/20110312/jpl-study-highlights-drastic-scale-of-human-induced-global-warming

  43. WHT,

    I know this because I actually work with the numbers. As I have said Cowtan&Way is a small correction, impacting TCR by 3 to 4% at best. In this post I said the correction was “subtle”

    Okay, 10% may be too much but, AFAIK, it only goes back to 1970, so I’m not sure what the effect over the whole instrumental temperature record would be.

  44. Everett F Sargent says:

    ATTP,

    “Bear in mind that any forcing estimate comes from models anyway.”

    So, does one change the external forcing to match the observations (which appears to be the preferred method, given the many papers that have been published to date to explain the ‘model to observation’ discrepancies) or does one change the internal parameterizations (which appears to suggest that the emergent property of climate sensitivity might be lower than the current estimate)? Hint: the atmospheric/ocean convective parameterizations.

    Also, I’m not too sure about climate sensitivity being more/less than something given the frequency distribution of internal variability. The deniers have used daily and seasonal variations to argue for low climate sensitivities, but we know the system has significant temporal inertia (at least up through the annual cycle).

    Finally, if their 15-year averages are simple OLS regressions (note to self, read the effin’ paper), that is about the worst way possible to resolve the low frequency characteristics/distribution of either models or observations (been there, done that).

  45. Everett F Sargent says:

    Oh, and on the 62-year fits, same argument as for the 15-year fits, except 4X the problem, getting to the low frequency distribution given an observational record that is only ~135 years in length. I want a low frequency metric that is differentiable at least to the 2nd derivative, so that one can somewhat ‘objectively’ determine the relative phasing when one states that something is ‘in phase’ with something.

  46. Everett F Sargent says:

    JCH,

    “Because I don’t see it.”

    Doesn’t matter, because Nicola can see it, it being Scafetta Graffiti, it’s a modern art form, of sorts.

  47. Scafetta is an interesting character, determined and almost desperate to find solar/planetary cycles in the climate time series. In his latest paper he is going far afield, trying to make a second-hand connection, using the frequency of significant earthquakes to tie climate variations to the planetary cycles.

    this is his directed graph:
    climate variation — earthquake frequency — “9-, 20-, and 50- to 70-year oscillations is likely astronomical “

    He also correlates to the LOD variability, which can be caused by either (1) compensations in angular momentum during an earthquake, or (2) movements of water or shifts in wind during climatic phenomena such as ENSO. Geophysics meets atmospheric and oceanic sciences full on.

    This is an interesting problem because it is definitely chicken-and-egg territory. It is hard to discern the direction of causality, even if the correlations are solid (which they aren’t IMO).

  48. Oh, and on the 62-year fits, same argument as for the 15-year fits, except 4X the problem, getting to the low frequency distribution given an observational record that is only ~135 years in length. I want a low frequency metric that is differentiable at least to the 2nd derivative, so that one can somewhat ‘objectively’ determine the relative phasing when one states that something is ‘in phase’ with something.

    Right, trying to squeeze more information out of a record that does not have more information with the help of more complex and indirect methods tends to lead to spurious results.

    When the amount of information is as low as it is in a single time series of GMST, it’s probably best to look directly at the data. Some simple and very transparent curve fitting helps, but even it must be used with great care understanding that an agreement is very often the outcome of over-fitting in some sense.

  49. Everett,
    I don't think they're trying to fit cycles. I think they're simply trying to compare trends. What they find, which is not that surprising, is that the long-term forced trends match observations well for most start years, but they don't when the time interval is short (15 years).

  50. Eli Rabett says:

    For long term trends it is absolutely crazy to use short term records. If there is anything it has to be in the proxies, which go back much longer, but looking for small details in indirect measurements is even crazier.

  51. Eli,
    Hmmm, except these are trends with specific start years, rather than simply a long-term trend analysis of a relative short dataset.

  52. Everett F Sargent says:

    ATTP,

    I really do need to read, and try to understand, the paper.

    However, I’m not trying to “fit cycles” in any way-shape-manner-form. I am trying to get to a low frequency trend line in the IIR/FIR sense.

    I do have the CMIP5 global time series from KNMI, so those are in-the-can as it were. RCP26 is the lowest emissions pathway and, AFAIK, all CMIP5 ensemble means continue to ‘ramp up’ over the next few decades. Given the current observational trend, and that it appears to be diverging more with time, IMHO the observational trend must increase at a rate substantially greater than what the mean CMIP5 model trend is projected to be, even for the RCP26 emissions pathway.

    I currently have little faith in the AOGCMs, except in the global mean temperature (GMT) sense, and if the model GMT continues to increasingly diverge from the observational trend, then I can come to only one conclusion (the emergent property of climate sensitivity is somewhat high), sans any overtly downward ‘fudging’ of the external forcing time series.

    My current rather unfounded belief (conjecture) is that OGCMs are currently underestimating mixing at depth (and even perhaps lateral mixing), that is, that the half point of the observed mixed layer is expanding at a greater rate than that shown in the OGCMs.

    In my 2nd life, I was involved in OTEC modelling in the late 70’s, under the rubric of environmental fluid mechanics (much of that has to do with understanding mixing in the shear layer (either symmetric or asymmetric)).

  53. EFS,

    My current rather unfounded belief (conjecture) is that OGCMs are currently underestimating mixing at depth (and even perhaps lateral mixing), that is, that the half point of the observed mixed layer is expanding at a greater rate than that shown in the OGCMs.

    I think James Hansen has suggested something similar in one of his papers. If I understand this properly, though, it would suggest that they’re over-estimating the TCR, but not necessarily the ECS.

  54. Eli Rabett says:

    ATTP, that reminds Eli of the types who try and differentiate noisy numerical series without smoothing.

  55. Eli,
    Yes, that may be a fair point.

  56. Everett F Sargent says:

    ATTP,

    Yes, perhaps. In my view ECS could be the same, just kicked down the timeline a few centuries to millennia?

    I did do a lot of work on combined salinity and velocity profiling, so the half point isn’t the only metric, the overall profile may also expand about the half point.

    What really ‘scares’ me though, is the thought of losing the GIS, as we lose that freshwater source as a major source of downwelling in the NH (another long term conjecture).

  57. Actually, Eli, let me try and understand what you’re saying. If you take a noisy time series and simply try and differentiate it, you’ll get highly variable trends. If, however, you smooth it, you get a better idea of the underlying trend. Isn’t that sort of what this paper is illustrating? If you consider short time intervals, the only way to reconcile the model and observational trends is for there to be a large residual (internal variability). However, if you consider longer time intervals (equivalent to smoothing) the residual reduces, illustrating that internal variability plays less of a role over these longer time intervals.

    Does that sound about right?
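
    Something like the following toy calculation illustrates the point: an assumed forced trend plus AR(1) noise standing in for internal variability (all parameters illustrative). Trend estimates over 15-year windows scatter far more than estimates over 62-year windows.

    ```python
    import numpy as np

    # Toy version of the point above: linear forced trend plus AR(1) noise.
    rng = np.random.default_rng(4)
    n = 135                          # roughly the length of the instrumental record
    forced = 0.008 * np.arange(n)    # assumed forced trend, K/yr
    noise = np.zeros(n)
    for t in range(1, n):            # AR(1) internal variability (assumed parameters)
        noise[t] = 0.6 * noise[t - 1] + rng.normal(0, 0.1)
    T = forced + noise

    def window_trends(series, length):
        x = np.arange(length)
        return np.array([np.polyfit(x, series[s:s + length], 1)[0]
                         for s in range(len(series) - length + 1)])

    print(window_trends(T, 15).std())   # large spread: variability dominates
    print(window_trends(T, 62).std())   # small spread: forced trend dominates
    ```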

  58. The global temperature anomaly time series contains a lot of what looks like noise, yet the variability is almost completely explained by just a few factors.

    Look at each of the peaks and valleys in the residual curve above and they match well with the known factors — almost without exception.

  59. BBD says:

    EFS

    A simple alternative explanation to the alleged ‘hot models’ problem is that a combination of increased ocean heat uptake, volcanic aerosol loading and reduced solar output largely accounts for the recent discrepancy between models and observations.

    People seem oddly keen to discount the models on the basis of a very short divergence from observations.

  60. -1=e^ipi says:

    izen – sort of,

    Suppose that you have two otherwise identical black bodies of average temperature T. One with no variability in temperature, and one with a standard deviation of σ in temperature (where the temperature is normally distributed and the standard deviation is small relative to the mean). Suppose that the radiation emitted by the first black body is b*T^4, where b is some constant. Then the radiation emitted by the second black body is b*Integral(t= -infinity to infinity; (T + σt)^4*(2*pi)^-0.5*exp(-0.5*t^2) dt).

    Using a second order Taylor approximation of (T + σt)^4, one obtains that the radiation emitted by the second black body is b*Integral(t= -infinity to infinity; T^4*(2*pi)^-0.5*exp(-0.5*t^2) dt) + b*Integral(t= -infinity to infinity; T^3*4σt*(2*pi)^-0.5*exp(-0.5*t^2) dt) + b*Integral(t= -infinity to infinity; (Tσt)^2*6*(2*pi)^-0.5*exp(-0.5*t^2) dt).

    The first integral is simply b*T^4. The second integral is zero because it is odd.

    The 3rd integral is even so is:
    2*b*Integral(t= 0 to infinity; (Tσt)^2*6*(2*pi)^-0.5*exp(-0.5*t^2) dt)
    = 12*b*(Tσ)^2*(2*pi)^-0.5*Integral(t= 0 to infinity; t^2*exp(-0.5*t^2) dt)
    Let u = 0.5*t^2. Then du = t*dt and the 3rd integral is equal to:
    12*b*(Tσ)^2*(2*pi)^-0.5*Integral(u= 0 to infinity; sqrt(2u)*exp(-u) du)
    = 12*b*(Tσ)^2*(pi)^-0.5*Integral(u= 0 to infinity; sqrt(u)*exp(-u) du)
    But what remains in the integral is simply the gamma function of 1.5, so
    = 12*b*(Tσ)^2*(pi)^-0.5*0.5*sqrt(pi)
    = 6*b*(Tσ)^2

    So the radiation emitted by the second black body is b*T^4 + 6*b*(Tσ)^2 (consistent with E[(T + σt)^4] ≈ T^4 + 6*T^2*σ^2 + O(σ^4) for a standard normal t).

    Now suppose that both black bodies are initially in radiative equilibrium and then they are both subject to a small forcing ΔF. As a result, the black bodies change in temperature by ΔT1 and ΔT2 for the black body with no variability and the black body with variability respectively.

    To calculation ΔT1 from radiative equilibrium:
    b*T^4 + ΔF = b*(T + ΔT1)^4

    If the change in forcing is small then we can use a Taylor approximation to get:
    b*T^4 + ΔF = b*T^4 + 4*b*T^3*ΔT1
    => ΔT1 = ΔF/4/b/T^3

    Now do the same thing for ΔT2:
    b*T^4 + 6*b*(Tσ)^2 + ΔF = b*(T + ΔT2)^4 + 6*b*(T + ΔT2)^2*σ^2

    If the change in forcing is small then we can use Taylor approximations and omit higher order terms to get:
    b*T^4 + 6*b*(Tσ)^2 + ΔF = b*T^4 + 4*b*T^3*ΔT2 + 6*b*T^2*σ^2 + 12*b*T*ΔT2*σ^2
    => ΔF = (4*b*T^3 + 12*b*T*σ^2)*ΔT2
    => ΔT2 = ΔF/(4*b*T^3 + 12*b*T*σ^2)

    ΔT2 is smaller than ΔT1 since 12*b*T*σ^2 is positive, as b, T and σ are.

    Therefore, the blackbody with higher variability in temperature has the lower temperature change to an external forcing.
    Thus, the blackbody with higher variability in temperature has a lower ‘climate sensitivity’.

    ATTP, I think I get your argument. You are saying that higher sensitivity causes higher variability, therefore one could expect a higher sensitivity if higher variability is observed.

    However, at the same time, higher variability causes lower climate sensitivity. Thus, one could argue that one could expect a lower sensitivity if higher variability is observed.

    So you have two-way causality and the effects are in opposite directions. As a result, I am not sure I think your argument is very strong.

    Although if you look at empirical evidence, high sensitivity is correlated with high variability (ice houses vs hot houses). Still, it’s probably better to try to not read too much into the amount of variability without better analysis.
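
    A quick numeric check of the above, with illustrative values (σ here is an assumed standard deviation of global-mean temperature), shows how tiny the effect is:

    ```python
    # Numeric check of the derivation above, with illustrative values. The
    # variability term 12*b*T*sigma^2 is ~3*(sigma/T)^2 of the Planck term
    # 4*b*T^3, i.e. utterly negligible for realistic sigma.
    b = 5.67e-8    # W m^-2 K^-4
    T = 288.0      # K
    sigma = 0.2    # assumed std dev of global-mean temperature, K
    dF = 3.7       # forcing, W m^-2

    dT1 = dF / (4 * b * T**3)
    dT2 = dF / (4 * b * T**3 + 12 * b * T * sigma**2)
    print(dT1, dT2, 1 - dT2 / dT1)   # relative difference ~1e-6
    ```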

  61. BBD says:

    So you have two-way causality and the effects are in opposite directions. As a result, I am not sure I think your argument is very strong.

    But yours is flatly contradicted by paleoclimate behaviour.

  62. -1=e^ipi,

    However, at the same time, higher variability causes lower climate sensitivity. Thus, one could argue that one could expect a lower sensitivity if higher variability is observed.

    I’m not sure I quite get this. Maybe you could explain it a bit more.

    Although if you look at empirical evidence, high sensitivity is correlated with high variability (ice houses vs hot houses). Still, it’s probably better to try to not read too much into the amount of variability without better analysis.

    True, but all I was really suggesting is that there is a relationship between the magnitude of the variability and the climate sensitivity. “Higher” is relative term, but I was just meaning relative to variability being lower.


  63. People seem oddly keen to discount the models on the basis of a very short divergence from observations.

    That’s why I use all the observations reaching back to 1880. And I would use numbers before that date, but for the fact that not all met and geophysical data is complete earlier than this.

    As an example of a divergence, take a look at the measurements around the time of WWII.

    The above chart shows how Kennedy attempted to do corrections to SST measurements based on short-term procedural changes dictated by war-time safety concerns.

  64. -1=e^ipi says:

    @ BBD – in paleoclimate data you just have the correlation between high sensitivity and high variability. correlation != causation.

    @ ATTP – It’s like this: Increasing climate sensitivity causes increased climate variability. Increasing climate variability causes decreased climate sensitivity. Other factors can also influence both climate variability and climate sensitivity. Therefore, I think that saying ‘climate variability is higher than we thought, therefore climate sensitivity must also be higher than we thought’ is a bit naive.

  65. -1=e^ipi,

    Increasing climate variability causes decreased climate sensitivity.

    No, I don’t think this is right. Maybe you mean, it could cause reduced estimates of climate sensitivity, but I can’t see how it can decrease climate sensitivity.

    Therefore, I think that saying ‘climate variability is higher than we thought, therefore climate sensitivity must also be higher than we thought’ is a bit naive.

    That’s not quite what I said – although maybe I didn’t explain myself very clearly. It wasn’t with respect to what we thought, it was more with respect to arguments for low climate sensitivity. You can’t really argue for low climate sensitivity and high climate variability. That’s not really consistent. For example, a large fraction of our observed warming since 1950 can’t be due to internal variability because that would imply that feedbacks are not small, and therefore that most of the warming is anthropogenic (i.e., it’s logically inconsistent).

  66. BBD says:

    @ BBD – in paleoclimate data you just have the correlation between high sensitivity and high variability. correlation != causation.

    You don’t understand this topic.

    The only really major forcing change across the entire Cenozoic is CO2. Eocene CO2 ~1000ppm or higher. Pleistocene CO2 ~170ppm – 280ppm excepting the present. That’s >8W/m^2 decrease from Eocene hothouse to Pleistocene icehouse. There’s the demonstration of climate sensitivity to radiative forcing from paleoclimate (Hansen & Sato 2012).

  67. -1=e^ipi,
    The effect you describe is totally irrelevant. It's very small over the potential range of variability and unobservable in practice.

    About the connection between climate sensitivity and variability we can say:
    – the variability that results from a given strength of forcing from natural sources, or from types of internal variability that influence the climate in the same way as a forcing, is larger the larger the sensitivity
    – internal variability that affects the climate through the same mechanisms as forcing from CO2 must be of equal strength to the CO2 forcing to have the same effect.

    You may notice that I have included a lot of conditions in the above. I did that because they are essential. The conclusions are not necessarily true if the mechanisms differ. To take an example: if natural variability leads to a significantly different state of the ocean over a prolonged time, and if that leads further to a significant change in the distribution of cloud cover, then it's quite possible that the same sensitivity does not describe well the strength of warming from the internal variability and from added CO2.

    In many cases it is true that high internal variability in GMST requires high sensitivity to CO2 forcing, but that need not be true for all cases.

  68. Gator says:

    e^ipi: Two things. I don't think your model really works — you need to consider the timescales that describe the variability and the forcing. You are essentially saying that the variability is much faster than the forcing, so that T2 is representative of the whole spectrum of T + sigma.

    In any case, look at the magnitude of the effect you find. Putting in realistic numbers you see the difference between DT1 and DT2 is essentially zero.

  69. -1=e^ipi says:

    “No, I don’t think this is right. Maybe you mean, it could cause reduced estimates of climate sensitivity, but I can’t see how it can decrease climate sensitivity.”

    ATTP, I showed this for a simple case two posts ago. Basically a more variable system radiates more radiation to outer space for a given temperature, and a more variable system will have a lower temperature change for a given change in forcing, assuming that the initial radiative equilibrium temperature was the same.

    “You can’t really argue for low climate sensitivity and high climate variability. That’s not really consistent.”

    High and low are subjective, which is why it is better to stick to comparative statements. It is also possible for a ‘low’ climate sensitivity and a high climate variability if there is a high variability in forcings. Maybe I should restate what I mean another way: Climate variability being ‘high’ or ‘low’ by itself doesn’t say much about climate sensitivity. You also need to look at variability in forcings.

  70. BBD says:

    You also need to look at variability in forcings.

    Which you aren’t doing.

  71. -1=e^ipi says:

    “The only really major forcing change across the entire Cenozoic is CO2. Eocene CO2 ~1000ppm or higher. Pleistocene CO2 ~170ppm – 280ppm excepting the present. That’s >8W/m^2 decrease from Eocene hothouse to Pleistocene icehouse. There’s the demonstration of climate sensitivity to radiative forcing from paleoclimate (Hansen & Sato 2012).”

    Are you serious? So changes in solar output, Milankovitch cycles, changes in glacier cover, and (probably most importantly) changes in the position of the Earth’s continents, which changes ocean currents are all negligible relative to forcing changes due to changes in CO2? Closure of the Tethys Sea, creation of the Antarctic circumpolar current and the creation of Panama aren’t the main reason why the climate is much colder in the Pleistocene than during the Eocene?

    It is interesting that you bring up that particular Hansen paper, as I have read it ~20 times. I would argue that the Hansen & Sato 2012 paper is very flawed for a number of reasons when trying to extrapolate Paleoclimate data from 34 million years ago. The 3 primary reasons are: Historically CO2 was not the main driver of changes in climate like it is today (rather CO2 changed due to feedback mechanisms), ocean currents are very different in the past and if the continents today were like they were for most of the Cenozoic then the Earth would be warmer, and the temperature required to go from a glaciated planet to a non-glaciated planet is higher than the temperature required to go from a non-glaciated planet to a glaciated planet due to the effect of albedo.

    In any case, I think trying to extrapolate paleoclimate data before the Pleistocene to today without taking into account the changes in the continents is a very bad idea. I have a few models in mind to perform corrections to the Hansen & Sato paper like I did with the Loehle paper, but I’ll have to find the time to do it.

  72. -1=e^ipi says:

    @ Pekka –

    “The effect you describe is totally irrelevant. It's very small over the potential range of variability and unobservable in practice.”

    I guess you are right. The effect is about ~10^-5 at best. Thanks for pointing that out. 🙂

    @ Gator –
    “you need to consider the timescales that describe the variability and the forcing. You are essentially saying that the variability is much faster than the forcing, so that T2 is representative of the whole spectrum of T + sigma.”

    Fair enough, but if we are talking about variability on the order of decades, then we can think about the change in temperature that I was referring to in my simple calculations as somewhat comparable to the change in the time-averaged global temperature over decades. But yes, you are right about the magnitude of the effect being essentially zero.

  73. BBD says:

    Are you serious?

    Yes. Please read the reference.

  74. BBD says:

    Closure of the Tethys Sea, creation of the Antarctic circumpolar current and the creation of Panama aren’t the main reason why the climate is much colder in the Pleistocene than during the Eocene?

    Discrete events like the opening/closure of ocean gateways cannot drive a ~50Ma long cooling trend. This really should be obvious. The only major forcing change was CO2. Stellar evolution goes the wrong way – an *increase* of about 1W/m^2 across the Cenozoic should have contributed to a slight warming trend.

  75. BBD says:

    Historically CO2 was not the main driver of changes in climate like it is today (rather CO2 changed due to feedback mechanisms)

    Reference this, please.

  76. -1=e^ipi,

    It is interesting that you bring up that particular Hansen paper, as I have read it ~20 times. I would argue that the Hansen & Sato 2012 paper is very flawed for a number of reasons when trying to extrapolate Paleoclimate data from 34 million years ago. The 3 primary reasons are: Historically CO2 was not the main driver of changes in climate like it is today (rather CO2 changed due to feedback mechanisms)

    I think you have to be slightly careful here. You're right that the trigger was often not CO2 but changes in solar insolation. However, globally these changes were often small and it was probably more a change in the distribution of the insolation than the change itself. So, yes, CO2 and albedo changes were in a sense a feedback, not a forcing. However, if you treat these as – combined – the external forcings, then you can get an estimate of our response to fast feedbacks and this is what gives an ECS of 3K.

    You’re right that the continent shapes and ocean currents would have been different at these times. That’s why the Pliocene is interesting. The same CO2 concentration as today, but temperatures 2 – 3K warmer and sea levels maybe > 20m higher.

  77. BBD says:

    To be clear, I agree of course that during orbitally-forced deglaciation (Plio-Pleistocene) CO2 / CH4 are feedbacks. But the high levels of CO2 during the Paleocene and Eocene appear to be the consequence of tectonics – aka tectonic forcing. The slow reduction across the Cenozoic was apparently driven overall by weathering. The release of carbon from geological sinks (and its return to them) has been the principal driver of climate across the Cenozoic, because no other forcing has changed across the entire period to the same extent. Geologically brief and discrete changes in eg. ocean circulation cannot drive a ~50Ma long cooling trend.

  78. BBD says:

    ATTP

    I think you have to be slightly careful here. You’re right that the trigger was often not CO2 but changes in solar insolation.

    True of glacial cycles for the last ~2.75Ma but not obviously the case with pre-glacial Cenozoic climate change.

  79. BBD,
    Yes, you’re right. That’s what the “often” was meant to reflect, but maybe I should have been more explicit 🙂

  80. BBD says:

    Oh God, I’m turning into Pekka. (Sorry Pekka).

  81. BBD,
    I do my best to be nitpicking only on issues that are potentially significant for science even if many of them are not directly relevant for policy discussion.

    Here I might add that what SoD first reported about scientific papers that discuss the glacial cycles of the latest million years (or up to 2.75 Ma), and what I have read myself since, has made me highly suspicious of the dominating role of the eccentricity variations for the glacial cycle. The precession and obliquity cycles certainly have significant influence, but I consider it likely that the main glacial cycle is controlled mainly by other factors internal to the Earth.

    (This is obviously another issue that’s totally irrelevant for climate policy considerations.)

  82. BBD says:

    Pekka

    I do my best to be nitpicking only on issues that are potentially significant for science even if many of them are not directly relevant for policy discussion.

    Yes, I appreciate that. Clarity is vital and I hope you will consider this an example of imitation as the sincerest form of flattery.

    but I consider it likely that the main glacial cycle is controlled mainly by other factors internal to the Earth.

    We need not revisit this here 🙂 but I would argue that the ~41ka obliquity cycle is clear from ~2.75Ma – 1.2Ma. As you know, the mid-Pleistocene Transition (~1.2Ma – 700ka) sees a shift to the ‘100ka world’ where deglaciations occur at times of high obliquity and either 2x or 3x multiples of the 41ka obliquity cycle. The implication being that the trigger is still orbital dynamics, but that it operated in conjunction with ice sheet dynamics in a colder world. The key being how far south the NH ice sheets had progressed. Beyond a certain point the vulnerability to orbital forcing increased to the point that the next peak in obliquity-driven forcing instigated instability and collapse. So a synthesis of both internal and external factors controlled the onset of deglaciation.

  83. -1=e^ipi says:

    @ BBD –

    “Please read the reference.”
    I’ve read it like 20 times already…

    “Discrete events like the opening/closure of ocean gateways cannot drive a ~50Ma long cooling trend. This really should be obvious.”

    Except it's not discrete, it happens over millions of years. When the Antarctic circumpolar current first formed, it wasn't as strong as it is now. It's slowly been getting stronger and stronger over millions of years as Antarctica and South America drift further apart. The Tethys Sea did not one day disappear. It got narrower and narrower before disappearing over millions of years. The creation of Panama also took a long time.

    “Historically CO2 was not the main driver of changes in climate like it is today (rather CO2 changed due to feedback mechanisms)

    Reference this, please.”

    Sigh, guess I’ll pull a Christopher Monckton argument.

    Oh look, there is very little correlation, therefore CO2 doesn’t affect climate! *sarcasm*

    Look, obviously over millions of years, whether a geological epoch is warm or cold depends greatly on the position of the continents. If you have land masses at the poles and ocean currents are restricted, then the earth is colder. If you have oceans at the poles and heat can easily travel from equator to poles then the earth is warmer. In addition you have the gradual increase in solar irradiance. Plus as I have mentioned before, whether or not an ice age occurs depends on the Milankovitch cycles. There are events in history such as the PETM, where changes in CO2 can be argued to be the initiator of the change in climate, but in most cases it is not CO2 and CO2 changes due to the feedback response.

    “Geologically brief and discrete changes in eg. ocean circulation cannot drive a ~50Ma long cooling trend.”

    Except it is not brief, not discrete.

  84. -1=e^ipi,

    Look, obviously over millions of years, whether a geological epoch is warm or cold depends greatly on the position of the continents. If you have land masses at the poles and ocean currents are restricted, then the earth is colder. If you have oceans at the poles and heat can easily travel from equator to poles, then the earth is warmer. In addition you have the gradual increase in solar irradiance. Plus, as I have mentioned before, whether or not an ice age occurs depends on the Milankovitch cycles. There are events in history, such as the PETM, where changes in CO2 can be argued to be the initiator of the change in climate, but in most cases it is not CO2, and CO2 changes due to the feedback response.

    It sounds like the distinction you’re making is between what initiates a change, versus what sets the final temperature. Ultimately, the main external forcings are albedo, CO2, and solar (and volcanic aerosols, but those are short term). So, although variations in ocean currents could, for example, drive ice sheet retreat, it is still the albedo change and the CO2 increase that set the temperature, even if they aren’t the initiator.

    Oh look, there is very little correlation, therefore CO2 doesn’t affect climate! *sarcasm*

    This was a joke, right?

  85. -1=e^ipi says:

    Sorry, I meant to say not brief [b]nor[/b] discrete in my last post.

    To correct myself, I didn’t read Hansen’s 2012 paper 20 times; I read his 2008 paper 20 times. I’ve only read the 2012 paper once before. It’s the same nonsense as his 2008 paper, but now includes a doomsday prediction of 5m sea level rise by the end of the century. Sorry for my incorrect claim earlier.

  86. -1=e^ipi,
    Okay, Hansen’s 2012 paper is wrong, but that’s a different topic.

  87. BBD,

    I do believe that the final trigger is orbital, but where my thinking deviates from orbital dominance is in the selection of which of the shorter-term orbital peaks (which are often combinations of precession and obliquity) will be the trigger. Thus I think that the rough timing of deglaciation is determined by internal Earth processes with an accuracy of typically 20 ka. When the internal state has matured, the next suitable phase in the orbital state initiates the deglaciation. In that view eccentricity has very little role.

    I don’t have any stronger arguments for my view than visual comparison of the data on the climate history of the Earth and the comparison of that with the orbital data. The change in the length of the glacial cycle does not seem too mysterious in that picture, but, of course, that means only that what has been seen does not seem to contradict any principle, not that I would have any real explanation.

    On the other hand, my impression of the more purely orbital explanation is that it does not have any better support from the data, and is actually worse in my assessment, as some of the phases of deglaciation have started at times not obviously consistent with the standard picture.

  88. BBD says:

    -1=e^ipi

    Well, we shall have to agree to differ. I will say that to the best of my knowledge you are out of step with current thinking on this. Nobody disputes that events like the opening of the Tasmanian Gateway and Drake Passage triggered the abrupt cooling at Oi-1. But how to account for the late Oligocene warming? Then the reversion to a cooling that hit bottom in the Mi-1 glaciation? No ocean gateway opening or closure occurred then. As I understand it, this is one part of the reason that overarching Cenozoic cooling from the peak of the Eocene Climatic Optimum ~50Ma to the Pleistocene is not now thought to be attributable solely to the opening and closure of ocean gateways.

  89. BBD says:

    ATTP

    NB, wrong Hansen paper. I was referencing Hansen & Sato (2012).

  90. BBD,
    Okay, thanks, it’s getting late. I should know that, I’ve referred to Hansen & Sato (2012) myself a number of times 🙂

  91. -1=e^ipi says:

    Oh looks like [b] [/b] doesn’t bold things. Sorry, I am unfamiliar with commenting on this website.

    @ ATTP –

    “This was a joke, right?”

    Yes, that is why I used *sarcasm*. 🙂

    “However, if you treat these as – combined – the external forcings, then you can get an estimate of our response to fast feedbacks and this is what gives an ECS of 3K.”

    You may be able to get a correct ECS of ~3K from paleoclimate data even for times in the Cenozoic when the ocean currents were very different. However, Hansen (for example in his 2008 paper) goes much further than that. He basically argues (unless I completely misunderstood the paper) that ~450 ppm is sufficient to deglaciate Antarctica because that was roughly the CO2 level when Antarctica started to glaciate. This is of course nonsense, because a higher global average temperature would be needed for deglaciation than for glaciation due to the albedo effect, and also because the positions of the continents were very different 34 million years ago, so the ocean currents (no Panama, much weaker Antarctic Circumpolar Current) would not allow for as warm a planet at a comparable CO2 level.

    He then goes even further to argue that because ~350 ppm is at the bottom of the 95% confidence interval, humans should aim for 350 ppm to avoid deglaciating Antarctica. Realistically, humans could go well beyond 450 ppm without Antarctic deglaciation occurring, for the reasons I just mentioned.

    “You’re right that the continent shapes and ocean currents would have been different at these times. That’s why the Pliocene is interesting. The same CO2 concentration as today, but temperatures 2 – 3K warmer and sea levels maybe > 20m higher.”

    Yes. The creation of Panama changes a lot. Even if we do get comparable CO2 levels as the Pliocene, the Earth will not be as warm due to different ocean currents.

    “It sounds like the distinction you’re making is between what initiates a change, versus what sets the final temperature.”

    Yes, it is very important to distinguish this when talking about correlations between temperature and CO2. If CO2 is initiating the change then the ratio of the change in ln(CO2) vs the change in temperature should be higher than if something else is initiating the change.

  92. BBD says:

    Pekka

    Thus I think that the rough timing of deglaciation is determined by internal Earth processes with an accuracy of typically 20 ka. When the internal state has matured, the next suitable phase in the orbital state initiates the deglaciation. In that view eccentricity has very little role.

    Yes, I’m not really pushing the role of eccentricity, but I’m not sure what is meant by Earth processes and internal state.

  93. -1=e^ipi,

    Yes, it is very important to distinguish this when talking about correlations between temperature and CO2. If CO2 is initiating the change then the ratio of the change in ln(CO2) vs the change in temperature should be higher than if something else is initiating the change.

    I’m not sure I get this. The radiative forcing due to CO2 depends only on the change in CO2 concentration, not on what produces the change.

  94. BBD says:

    ATTP

    No worries. What you say above bears repeating:

    It sounds like the distinction you’re making is between what initiates a change, versus what sets the final temperature. Ultimately, the main external forcings are albedo, CO2, and solar (and volcanic aerosols, but those are short term). So, although variations in ocean currents could, for example, drive ice sheet retreat, it is still the albedo change and the CO2 increase that set the temperature, even if they aren’t the initiator.

    I have a sense this is where -1=e^ipi and I are thinking along different lines.

  95. BBD,
    The best proposal I know about is that of Abe-Ouchi, based on the deformation of the Earth’s crust under the continental ice sheets, but perhaps something else will also come out, or a modification of that.

  96. BBD says:

    Pekka

    I’m happy to agree that a synthesis of both internal and external factors controlled the onset of deglaciation during and after the MPT but that the internal factors are not yet clearly understood.

  97. -1=e^ipi says:

    @ ATTP –

    “I’m not sure I get this. The radiative forcing due to CO2 depends only on the change in CO2 concentration, not on what produces the change.”

    Think of it like this. Suppose that we have a positive feedback loop between temperature and CO2 that can be described by the following two equations.

    T(t) = A + B*ln(CO2(t-1))
    ln(CO2(t)) = C + D*T(t-1)

    where T is temperature in Celsius, CO2 is atmospheric CO2 in ppm, and t is the time. So suppose that we have two linear relationships between temperature and ln(CO2) that create this positive feedback loop (I’m just doing this for the sake of simplicity).

    Substituting the first equation into the second yields:
    T(t) = A + B*C + B*D*T(t-2)
    which gives T(t) – T(t-2) = B*D*(T(t-2) – T(t-4)).

    So suppose that initially the climate is in equilibrium at time t=0, and then between t=0 and t=2 the temperature increases by ΔT due to some external factor that isn’t due to CO2 (example: an increase in solar irradiance). Then the change in temperature between t = 2n and t = 2(n+1) is:
    T(2(n+1)) – T(2n) = (B*D)^n*(T(2) – T(0)) = (B*D)^n*ΔT.

    Thus the total change in temperature in the long run is:
    T(infinity) – T(0) = sum(n = 0 to infinity; T(2(n+1)) – T(2n))
    = sum(n = 0 to infinity; (B*D)^n*ΔT) = ΔT*sum(n = 0 to infinity; (B*D)^n)

    Assuming 0 < B*D < 1 (which is a reasonable assumption since the Earth's temperature doesn't just blow up to infinity), then this converges to ΔT/(1 – BD).
    Thus the temperature change due to this forcing is ΔT/(1 – BD).

    This means that the CO2 change due to this forcing is:
    ln(CO2(infinity)) – ln(CO2(0)) = D*T(infinity) – D*T(0)
    = D*ΔT/(1 – BD)

    Thus the ratio of temperature change to CO2 change due to the initial non-CO2 forcing is 1/D.

    Now let's do the same thing but with an initial CO2 change.

    Substituting the second equation into the first yields:
    ln(CO2(t)) = C + D*A + B*D*ln(CO2(t-2))

    Suppose that initially the climate is in equilibrium at time t=0, and then between t=0 and t=2 ln(CO2) increases by Δln(CO2). Then the change in ln(CO2) between t = 2n and t = 2(n+1) is:
    ln(CO2(2(n+1))) – ln(CO2(2n)) = (B*D)^n*(ln(CO2(2)) – ln(CO2(0))) = (B*D)^n*Δln(CO2).

    Thus the total change in ln(CO2) in the long run is:
    ln(CO2(infinity)) – ln(CO2(0)) = sum(n = 0 to infinity; ln(CO2(2(n+1))) – ln(CO2(2n)))
    = sum(n = 0 to infinity; (B*D)^n*Δln(CO2)) = Δln(CO2)*sum(n = 0 to infinity; (B*D)^n)
    = Δln(CO2)/(1 – BD).

    This means that the total temperature change due to this forcing is:
    T(infinity) – T(0) = B*ln(CO2(infinity)) – B*ln(CO2(0))
    = B*Δln(CO2)/(1 – BD)

    Thus the ratio of temperature change to CO2 change due to the initial CO2 forcing is B.

    Thus, the first ratio is 1/BD times the second ratio. And since 0 < BD < 1, this means that the first ratio is larger than the second.

    This is relevant because today it is primarily a CO2 increase that is the initial forcing, whereas in the past, with paleoclimate data, it was primarily a non-CO2 increase that was the initial forcing. Thus the ratio of temperature increase to ln(CO2) increase today should be roughly BD times the ratio of temperature increase to ln(CO2) increase in the past.

    If we accept that the no-feedback climate sensitivity for a doubling of CO2 is about 1 K and that the ECS is roughly 3K, then this suggests that 1/(1 – BD) is roughly 3. Which implies that BD is roughly 2/3. So paleoclimate studies that merely look at the relationship between temperature and CO2 and do not take into account the fact that CO2 is not the initial forcing may be overstating climate sensitivity by a factor of roughly 3/2.
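
    A minimal Python sketch of this loop makes the two ratios easy to check numerically. This is only an illustration, not code from the thread: the values of A, B, C and D are arbitrary assumptions (chosen so that B*D ≈ 2/3, as above), and the perturbations are applied as sustained forcings, which is what the geometric series assumes.

        import math

        # Illustrative (hypothetical) parameters: B*D = 0.664 ~ 2/3, with A and C
        # chosen so the unperturbed fixed point is ~14 C and ~280 ppm.
        B, D = 1.66, 0.40
        A, C = 14.0 - B * math.log(280.0), math.log(280.0) - D * 14.0

        def equilibrate(dT_ext=0.0, dln_ext=0.0, steps=500):
            """Iterate T(t) = A + B*ln(CO2(t-1)) and ln(CO2(t)) = C + D*T(t-1)
            to the fixed point, with optional sustained external pushes."""
            T, ln = 0.0, 0.0
            for _ in range(steps):            # converges because 0 < B*D < 1
                T = A + B * ln + dT_ext       # sustained non-CO2 forcing enters here
                ln = C + D * T + dln_ext      # sustained CO2 forcing enters here
            return T, ln

        T0, ln0 = equilibrate()
        T1, ln1 = equilibrate(dT_ext=1.0)     # initial push is non-CO2
        T2, ln2 = equilibrate(dln_ext=0.1)    # initial push is CO2
        print((T1 - T0) / (ln1 - ln0), 1 / D)  # ratio -> 1/D = 2.5
        print((T2 - T0) / (ln2 - ln0), B)      # ratio -> B = 1.66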

  98. -1=e^ipi says:

    Thought I would do a simple calculation.

    Suppose that the mid-Pliocene was ~3 C warmer than pre-industrial times (corresponding to ~270 ppm) and had atmospheric CO2 concentrations of ~400 ppm.

    Then the change in ln(CO2)/ln(2) is 0.567.

    Since the initial reason for the change from Pliocene climate to Pleistocene climate is non-CO2 related (such as the creation of Panama), this suggests that the amount of temperature change required for a CO2 doubling is roughly 3/0.567 = 5.29 C.

    In order to get the amount of temperature change due to a CO2 doubling, we should multiply this by 2/3 to get 3.53 C.

    Since we are talking very large time scales between these two points in time, this 3.53 C should correspond roughly with the Earth System Sensitivity.

    Of course the equilibrium climate sensitivity should be less than this, so maybe ECS is roughly the 3C that seems to be the most accepted value.

  99. ImaginaryNumberGuy,
    I am not going to try to go through your math because it looks too sloppy for my tastes.

    Take a look at this derivation instead, which includes a real positive feedback:
    http://theoilconundrum.blogspot.com/2013/03/climate-sensitivity-and-33c-discrepancy.html

    I suggest that you get yourself a blog and learn to do some equation markup, and add some graphics whenever you need some visualization assist. You might have something important to say but you need to say it in a way to make it easier on the reader.

  100. -1=e^ipi says:

    There might be a mistake in my last post. It might be better to represent 1/(1 – BD) as the ratio of the ESS to the no-feedback response, not the ratio of the ECS to the no-feedback response. Furthermore, the no-feedback response to a doubling of CO2 is roughly 1.15C (source: http://www.globalwarmingequation.info/global%20warming%20eqn.pdf).

    If we do this then we have :
    1/(1-BD)*1.15C = ESS = BD*5.29 C
    => 0.217 = BD – (BD)^2
    => 0 = (BD)^2 – BD + 0.217
    => BD = (1 +/- sqrt(1 – 4*0.217))/2
    => BD = (1 +/- 0.36116)/2
    => BD = 0.681 or 0.319

    => ESS = 1.69 C or 3.60 C (pretty sure the second one is valid).

    In any case, that means that ESS is about 3.60 C, not the 3.53 C based on my calculations in the last post. Doesn’t change much though.

    In any case, I think Hansen’s claims of an ESS of 4-6 C based on paleoclimate data to be wrong based on the relationship I explained above.
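
    Since the algebra is fiddly, here is a small hedged check in Python. It reproduces only the arithmetic of the last two comments; the 1.15 C no-feedback response and the 3 C / 0.567-doubling Pliocene ratio are the inputs given above, not independent values.

        import math

        def solve_bd(raw_ratio, no_feedback=1.15):
            """Roots of BD^2 - BD + no_feedback/raw_ratio = 0.
            Real roots need raw_ratio >= 4*no_feedback (= 4.6 C here)."""
            disc = math.sqrt(1.0 - 4.0 * no_feedback / raw_ratio)
            return (1.0 - disc) / 2.0, (1.0 + disc) / 2.0

        raw = 3.0 / 0.567                # ~5.29 C per doubling, Pliocene numbers
        for bd in solve_bd(raw):
            print(f"BD = {bd:.3f}  ->  ESS = {bd * raw:.2f} C")
        # BD = 0.319  ->  ESS = 1.69 C
        # BD = 0.681  ->  ESS = 3.60 C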

  101. miker613 says:

    I complained already at judithcurry about this study. I don’t understand their point; they seem to be addressing the wrong concern. Is there anyone who thinks that the models are forecasting the wrong trends _over the training period_ (i.e., roughly the twentieth century)? A glance at the model outputs vs. global temperatures will show anyone that all of them track temperature within a certain error band. One would expect that their 15-year trends must do the same, more or less. None of them has an anomaly that is much different from the observed temperatures – thus, their 60-year trends will be about right.
    None of them starts way below and ends up way higher. [This probably means (as Nic Lewis has pointed out) that the higher-sensitivity models must have some other factor (aerosols) balancing them out. The net result is (somewhat) similar trends.]
    I can’t imagine that anyone ever suggested otherwise.
    The concern has always been, AFAIK, that the _out-of-sample_ error – from this century – is too large. The temperature has been going up very little or anyhow much less than all the projections – according to pretty much everyone including the IPCC and Ed Hawkins and BEST. That is, that the models were over-fitted.
    Is the claim that out-of-sample error is similar to in-sample error? They don’t seem to be saying that, and everyone says otherwise. The standard claim of everyone except Gavin Schmidt is that the error is around or beyond the 5% significance level. It isn’t cherry-picked, it happened as soon as the models were turned loose.
    So what is this paper saying?

  102. -1=e^ipi says:

    @ WebHubbleTelescope – Thanks for the advice. I am aware that what I write is a bit sloppy and could use visuals, sorry about that. I might get a blog / youtube channel or something in the distant future when I have time and have a better understanding of things. Now I am on the verge of homelessness so I have other priorities.

  103. Ignore those models. Like I said before, it is enough to look at the factors that control the fluctuations and the trend.

    The TCR pops out of this. I really don’t care about the Monte Carlo runs that describe fluctuations that go in every direction.

  104. -1=e^ipi says:

    @ WebHubbleTelescope –

    What value do you get for the TCR? Also, is it comparable with the IPCC’s definition of TCR?

  105. I get 2C for TCR assuming that CO2 is a leading indicator. That means the other GHGs associated with CO2 emission are effectively combined into that number.

    To get the ECS value of 3C, I look at land values alone. This effectively removes the slow heat capacity transient of the ocean.

  106. jyyh says:

    Thanks WebHubTelescope for the graph. My guesses (had to google a couple) for the abbreviations used are:
    giss = GISS global temperature record
    co2 = CO2 amount (the primary non-condensible greenhouse gas)
    aero = aerosol shading, including SO2 and such
    lod = length of day (which varies a bit over the year due to the elliptical orbit of the Earth AND precession and other regular changes in the Earth’s orbit)
    tsi = total solar irradiance (to include the solar cycle of c. 11 years in the equation)
    aam = this one I had to google… atmospheric angular momentum (I’m not sure why this has to be included; heat transport to the poles? No?)
    soi = southern oscillation index (the simplest and most easily calculated measure of ENSO)
    amoi = Atlantic multidecadal oscillation index (a measure of long-term changes in the downwelling-upwelling in the North Atlantic sector, representing a possible longer cycle in the temperature records; this affects the temperatures over the largest land mass, Eurasia, so might be the better choice of these?)

    Please correct or explain more if there’s a need.

    jyyh, you have it about right. I didn’t provide the explanation link as I gave it earlier:
    http://contextearth.com/2015/01/30/csalt-re-analysis/

    the AAM is very similar to the effects of SOI. Pressure differences (SOI) correlate with wind velocity changes in the atmosphere (which AAM measures).

    The CO2 is actually log(CO2).

    I am waiting for someone to say this approach is nothing new and that others have tried it as well. (a true statement, how else would I have come up with this but to kipe it from somewhere?)

  108. -1=e^ipi,
    No, I think you’ve made a mistake. I think that the amount of warming due to a change in CO2 is going to be the same, irrespective of what causes CO2 to change.

  109. miker613,

    The temperature has been going up very little or anyhow much less than all the projections – according to pretty much everyone including the IPCC and Ed Hawkins and BEST. That is, that the models were over-fitted.

    Who says this? Firstly, this would only be relevant if the input to the models precisely matched observations. Secondly, what this paper is showing is that for any 15-year period the model trend typically doesn’t match observations. The argument being that internal variability dominates the trends on these timescales. It just so happens that the last 15 years has been a period where model trends were higher than observations. Imagine what would have happened if it had been the other way around.

  110. jyyh says:

    WebHubTelescope, well I have tried this sort of approach with monthly values going back to the beginning of the Mauna Loa CO2 series… missing the effect of lod and aam, which is the reason my correlation wasn’t nearly as good as yours… I think the lecturer of environmental studies at our local university has done something similar, but I’ve not seen the results. You might well be the first one outside the academic journals (to which I’ve got no access) presenting this sort of thing openly on the net.

  111. jyyh says:

    Tamino did one study in a similar style and found the best correlation for TSI was with a short lag. I’d imagine this is because of the ocean surface holding the summer warmth for two months (or what was it?) before releasing it.

  112. BBD,
    I would actually turn the questions about the glacial cycle into a different form that changes the emphasis to some extent.

    Why has the glaciation phase of the latest cycles continued all the way to the level they have reached (not fully regularly, but as the general picture)?

    Why have deglaciations been much more abrupt and gone all the way to the level of interglacials?

    There seems to be strong hysteresis involved. That may be related to the crust (Abe-Ouchi’s explanation), but it may be more fruitful to study what makes the hysteresis so strong rather than to study why it fails at the turning points.

    In other words, the first question is perhaps not what makes the phases of glaciation and deglaciation start and end, but why we have the strong cycle at all, and not an Earth with little variation in the amount of ice on timescales shorter than millions of years.

  113. BBD says:

    Pekka

    Well, we both know the standard hypotheses concerning all these questions. Pleistocene CO2 levels are low enough for NH glaciation to occur at times of low obliquity and this happens relatively slowly because ice albedo feedback is engaged incrementally as the ice sheets grow incrementally. Abrupt deglaciation occurs when TSI acts on inherent ice sheet instability causing collapse and very rapid and strong engagement of ice albedo feedback. Orbital dynamics, axial tilt and terrestrial boundary conditions (NH continental location, CO2) act together to produce the NH glacial cycles. While there is much detail to be explained, I have never seen the great mysteries that SoD and a very few others perceive in all this. But then, I don’t believe in self-propelling climate systems, so I am somewhat biased.

  114. miker613 says:

    @ATTP “Who says this?” I already linked the leaked IPCC summary. But here’s BEST:
    http://static.berkeleyearth.org/memos/examining-the-pause.pdf
    “While surface temperatures have generally remained fairly close to the multi-model mean in the past, the recent pause threatens to cause surface temperatures to fall outside the confidence interval of models in the next few years if temperatures do not rise.”
    “The pause also stands out sharply if one looks at the full range of model projections, from 1880 to 2100.”
    In other words, unlike the impression the paper is giving, the “hiatus” is not typical of the previous century of model runs. (I’m adding: this is typical of cases of overfitting.)
    Your point about differing inputs doesn’t seem to address this. We have not had any massive unexpected volcanoes in the last decade. Any other inputs should be included in the variability in the models.
    I don’t understand why I am being asked to defend consensus science. The IPCC and the BEST project, and Ed Hawkins’ charts, and James Annan (posted elsewhere), should be sufficient. If you don’t agree with them or don’t understand what they are saying, argue with them. Instead, to quote BBD, it’s all “denier cr__”. Everyone seems to know that there is an embarrassing (quoting Annan again) issue here; everyone except fans of realclimate, which is Denying it.

    Anyhow, as I said, this paper seems to be totally irrelevant. Variability on the training data is not the same as variability on the out-of-sample data. If an event happens 5% of the time on the training data, that doesn’t mean you don’t reject your hypothesis if it happens on your validation. That is what we mean when we say 5% significance level.

  115. miker613,

    Your point about differing inputs doesn’t seem to address this. We have not had any massive unexpected volcanoes in the last decade. Any other inputs should be included in the variability in the models.

    It’s not just about volcanoes. You have to check that all the external forcings (anthropogenic, solar) that you assumed, actually match reality.

    Plus, the quote about surface temperatures remaining close to the multi-model mean applies only to the period after 1970. Look at the 1880 onwards plot and there are clearly periods where the model mean was above the observations, and periods where it was below. The current mismatch doesn’t even stand out all that much, and that’s before any correction to the forcings is applied.

    Variability on the training data is not the same as variability on the out-of-sample data. If an event happens 5% of the time on the training data, that doesn’t mean you don’t reject your hypothesis if it happens on your validation. That is what we mean when we say 5% significance level.

    You, and a few others, are the only ones referring to this as validation. It is not. We’re not trying to validate these models. They were not designed to do decadal predictions; they were designed to give long-term projections. You’re imposing a validation that no one doing climate modelling would accept. You can’t invalidate something when it doesn’t do something that it was never designed to do. Plus, these are multi-model means. All the variability is smoothed out.

  116. -1=e^ipi says:

    @ ATTP – I’m not saying that the amount of warming due to CO2 changes. I am saying that the correlation between temperature and CO2 changes depending on what starts the feedback response.

  117. -1=e^ipi,
    Okay, I see, yes that makes sense. I guess I was thinking of the Milankovitch cycles, where the external trigger (possibly insolation changes) only produces a small net change in external forcing, and the dominant changes are albedo and CO2. Hence, the overall change in temperature can be related to the fast-feedback equilibrium response to the combined CO2 and albedo changes.

  118. verytallguy says:

    miker,

    a suggestion to understand how models work before criticising them.

    You appear to have fundamentally misunderstood model development.

    Is there anyone who thinks that the models are forecasting the wrong trends _over the training period_ (i.e., roughly the twentieth century)?

    You could do worse than reading work by climate modellers, for instance:

    Model development actually does not use the trend data in tuning (see below). Instead, modellers work to improve the climatology of the model (the fit to the average conditions), and its intrinsic variability (such as the frequency and amplitude of tropical variability). The resulting model is pretty much used ‘as is’ in hindcast experiments for the 20th Century.

    My emphasis.

    http://www.realclimate.org/index.php/archives/2008/11/faq-on-climate-models/

  119. BBD says:

    miker

    We have not had any massive unexpected volcanoes in the last decade.

    Something of a strawman when it comes to the under-estimation of the recent impact of volcanic aerosols on GAT. See Ridley et al. (2014):

    Understanding the cooling effect of recent volcanoes is of particular interest in the context of the post-2000 slowing of the rate of global warming. Satellite observations of aerosol optical depth above 15 km have demonstrated that small-magnitude volcanic eruptions substantially perturb incoming solar radiation. Here we use lidar, Aerosol Robotic Network, and balloon-borne observations to provide evidence that currently available satellite databases neglect substantial amounts of volcanic aerosol between the tropopause and 15 km at middle to high latitudes and therefore underestimate total radiative forcing resulting from the recent eruptions. Incorporating these estimates into a simple climate model, we determine the global volcanic aerosol forcing since 2000 to be −0.19 ± 0.09 Wm−2. This translates into an estimated global cooling of 0.05 to 0.12°C. We conclude that recent volcanic events are responsible for more post-2000 cooling than is implied by satellite databases that neglect volcanic aerosol effects below 15 km.

  120. miker613 says:

    “You’re imposing a validation that noone doing climate modelling would accept.” Note the quotes from BEST. Here’s another: “the longer the pause continues the more people will begin to question whether GCMs are getting either multi-decadal variability or climate sensitivity wrong.” They don’t agree with you. None of the people I quoted agrees with you. If the models are not validated, then they are not useful for long-term projections either. At least, we don’t know if they are. Certainly if they are overfitted, they are not useful for anything; that’s the nature of overfitting.

    Your claim about the model inputs essentially amounts to a claim that all the people I quoted are fools: they are comparing apples and oranges, a high-emission scenario to one that actually happened, no aerosols to a lot of aerosols, failing to take solar into account (probably a very minor effect) etc. Seems unlikely that none of them noticed that. Again, argue with them.

    “Look at the 1880 onwards plot and there are clearly periods where the model mean was above the observations, and periods where it was below. The current mismatch doesn’t even stand out all that much…” First of all, it does, as BEST pointed out. Second, it doesn’t need to. It just needs to stand out as one of the worst 5% of cases for the model to fail validation. That’s what 5% significance means. So it doesn’t help to point at the last century and say, look, that was a bad year too! You need to show me more than five or so that were at least that bad.

    Again, argue with them. You are Denying the Consensus on this one.

  121. -1=e^ipi says:

    Let me try the calculation I did before but this time with the Pleistocene ice core data.
    Mid glacials correspond to roughly -8 C and 190 ppm, interglacials correspond to roughly 2 C and 290 ppm. So the change in temperature is 10 C and the change in ln(CO2)/ln(2) is 0.610. And I get

    0 = (BD)^2 – BD + 1.15/(10/0.610)
    => BD = (1 + sqrt(1 – 4*0.070))/2
    => BD = 0.924

    => ESS = 0.924*10/0.610 = 15.15 C. I guess the ESS is much higher when you have glaciers going down to 40 degrees N than when you have an ice-free Pliocene climate (but still, that is a factor of 4 difference).
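
    In code form (a hedged sketch, only repeating the quadratic above and taking the larger root, as in the preceding comments), the whole calculation is a one-liner with ΔT left as a free parameter, which makes it easy to see how sensitive the answer is to the assumed global-mean temperature change:

        import math

        def ess(delta_T, co2_lo=190.0, co2_hi=290.0, no_feedback=1.15):
            """ESS implied by BD^2 - BD + no_feedback/ratio = 0 (larger root),
            where ratio = delta_T per CO2 doubling from the proxy data."""
            ratio = delta_T / (math.log(co2_hi / co2_lo) / math.log(2.0))
            bd = (1.0 + math.sqrt(1.0 - 4.0 * no_feedback / ratio)) / 2.0
            return bd * ratio

        print(ess(10.0))   # ~15.2 C with the raw ice-core delta T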

  122. miker613 says:

    @vtg. I’m misunderstanding, because realclimate says I am?
    But let’s see what a modelling group says. Maybe you’re misunderstanding:
    Mauritsen, T., B. Stevens, E. Roeckner, T. Crueger, M. Esch, M. Giorgetta, H. Haak, J. H. Jungclaus, D. Klocke, D. Matei, U. Mikolajewicz, D. Notz, R. Pincus, H. Schmidt, and L. Tomassini: “Tuning the climate of a global model”, Journal of Advances in Modeling Earth Systems 4, doi:10.1029/2012MS000154, http://www.agu.org/journals/ms/ms1208/2012MS000154/2012MS0001
    (link no longer works). (Found on lucia.)
    Quote from there: “Climate models’ ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication, and as such it has effectively lost its purpose as a model quality measure.” And “once aware of the data we will compare with model output and invariably make decisions in the model development on the basis of the results.”
    See there that they say that they tried to avoid tuning, but it wasn’t possible to avoid using prior information from earlier successful models in various ways. In other words, tuning can happen in stages. This is not an evil thing, but it can easily lead to overfitting. That’s why we must do validation.

  123. miker613,

    “the longer the pause continues the more people will begin to question whether GCMs are getting either multi-decadal variability or climate sensitivity wrong.” They don’t agree with you.

    Rubbish, read what it says and read what I said.

    The longer the “pause” continues the more people will begin to question whether GCMs are getting either multi-decadal variability or climate sensitivity wrong.

    There’s not much wrong with the above statement, but it’s not consistent with your failing-validation claims. Yes, if the pause were to continue for a lot longer it would indeed bring aspects of climate modelling into question. In fact, there will always be aspects that can be brought into question. None of this implies that they’re failing validation.

  124. -1=e^ipi,
    I think you’ve used polar temperature changes only. I think, globally, it’s probably a factor of at least 2 smaller.

  125. miker613,

    That’s why we must do validation.

    Except we can’t do this without a time machine, or simply waiting and hoping that today’s models are wrong. Try this from Hargreaves & Annan (2014):

    One fundamental requirement for a hypothesis to be considered scientifically valid is that it is in principle amenable to falsifiability. The hypotheses arising from model consensus (described above) are trivially falsifiable in principle, by the process of waiting for 100 years and observing the resulting climate changes. If anthropogenic emissions were to be very different from the assumed scenarios, then it might be necessary to re-run the models with appropriate forcing, but this is a technical detail. Far more problematic, is that we are unwilling to wait 100 years before learning about climate models, and cannot wait before making today’s decisions. It might not take as long as 100 years, and indeed recent evidence does hint at the models modestly overestimating the rate of climate change,[29] but there is certainly not yet sufficient evidence to overturn the major paradigms of today’s models.

  126. verytallguy says:

    Miker,

    I’m misunderstanding, because realclimate says I am?

    Well, a good starting point for an amateur in the field would be to respect the professionals, yes.

    Mauritsen is consistent with Realclimate, rather unsurprisingly given that realclimate is written by climate scientists.

    Let’s see what AR5 says, which explicitly discusses Mauritsen, box 9.1:

    The requirement for model tuning raises the question of whether climate models are reliable for future climate projections. Models are not tuned to match a particular future; they are tuned to reproduce a small subset of global mean observationally based constraints. What emerges is that the models that plausibly reproduce the past, universally display significant warming under increasing greenhouse gas concentrations, consistent with our physical understanding.

    Chanting the word “validation” does not constitute an argument.

  127. BBD says:

    As usual, miker blanks the inconvenient bit. This time his error over volcanic aerosols.

  128. Willard says:

    Perhaps, but China.

    Where is Richard Betts when we need him?

  129. -1=e^ipi says:

    @ ATTP –
    Thanks for pointing that out. I have a question about that though. I understand that the conventional rule of thumb is to divide the ice core temperature change by a factor of two, but what is the basis of that? I was under the impression that the polar amplification factor was greater than two. Is there a study that suggests a factor of two should be used?

    In any case,
    0 = (BD)^2 – BD + 1.15/(5/0.610)
    => BD = (1 + sqrt(1 – 4*0.140))/2
    => BD = 0.831
    => ESS = 0.831*5/0.610 = 6.81 C.

    That makes more sense, thanks. Though if a factor of more than 2 should be used, then that would suggest that ESS is even lower.

  130. Andrew Dodds says:

    miker613 –

    I’m familiar with the problems of overfitting, models working too well on training data, and that general category of problem from working on machine learning. I suspect that they come up in other fields.

    The problem in this case is that we have prior physical constraints for our parameters. A true ML problem will by definition have no information other than the training cases (prior programming of a big neural network being impossible in any case). However, a climate model is informed by and built from physics. Overfitting is therefore much less of an issue; indeed it would almost certainly make the model fail basic integrity tests (energy conservation and the like).

  131. -1=e^ipi,
    I’m not quite sure how they decide the ratio of the ice core temperatures to the global temperatures. Maybe someone like BBD or Steve Bloom would know.

  132. -1=e^ipi says:

    Some suggest that the polar amplification factor should be between 2 and 3.
    http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-12-00696.1

    Of course, the polar amplification factor in the Southern Hemisphere may be more than in the Northern Hemisphere. If the Pleistocene temperature data are coming from Antarctica, then what is the appropriate amplification factor to use?

    Here is another post I did a while ago trying something similar to what the ImaginaryGuy is working towards.
    http://theoilconundrum.blogspot.com/2012/03/co2-outgassing-model.html

    A caveat to the analysis is that it was done prior to what I am concentrating on now. Reading the post again, I was at that time completely at a loss to make sense of the modern data, thinking that paleo data was the only way to proceed. Now, like BBD, I think a combination of modern and paleo interpretation works best.

  134. BBD says:

    Of course, the polar amplification factor in the Southern Hemisphere may be more than in the Northern Hemisphere. If the Pleistocene temperature data are coming from Antarctica, then what is the appropriate amplification factor to use?

    IIRC multiply Antarctic core temperature estimate by 0.5 for approximate global temperature.

  135. Ever notice that Miker613’s flailing away is usually based on the latest rant-du-jour on the other pseudo-skeptic blogs? This time his tantrum is based on a 1-30-15 Bishop Hill post on Mauritsen’s analysis of model tuning.

    The best analogy I have to the way these pseudo-skeptics work is to watch how peewee soccer players chase the ball around the field. In contrast to experienced players, they all simultaneously run to the current ball location.

    Yesterday’s ball location is model tuning and today’s is data falsification. Tomorrow it will be something else.

  136. Joshua says:

    miker doesn’t seem to be considering the problems with his arguments if the assumptions he’s making are wrong.

    Maybe he should consider the possibility that he doesn’t understand climate science and the unforeseen consequences of his lack of understanding as well as he thinks he does.

    I am sure going to be fearful if policy-makers ignore the possibility that miker’s arguments might be wrong.

  137. WHT,

    Bishop Hill post on Mauritsen’s analysis of model tuning.

    I think that might be where Andrew Montford concluded with

    In conclusion then I conclude that Lord Lucas’s original point was in essence correct, so long as you conclude both tuning and “tuning”.

    Which I tweeted as Illustrating that [Andrew Montford] is really a “skeptic” not a skeptic. 🙂

  138. -1=e^ipi says:

    @ BBD –
    “IIRC multiply Antarctic core temperature estimate by 0.5 for approximate global temperature.”

    I get that 0.5 is the convention, but what is the basis of it? How do I know 0.5 is better than 0.6 or 0.4?


  139. jyyh says:
    WebHubTelescope, well I have tried this sort of approach with monthly values going back to the beginning of the Mauna Loa CO2 series… missing the effect of lod and aam, which is the reason my correlation wasn’t nearly as good as yours… I think the lecturer of environmental studies at our local university has done something similar, but I’ve not seen the results. You might well be the first one outside the academic journals (to which I’ve got no access) presenting this sort of thing openly on the net.

    Many have used this approach, including Lean et al., and Tamino as you mention later. You also said that Tamino “found the best correlation for TSI was with a short lag”. I do apply a 6-month lag to each of the factors. This also acts as a smoothing filter to remove the noise in the time series.

    The exception to that is the multidecadal LOD factor, which has a 4-year lag. I don’t know what this implies exactly, but if it has something to do with momentum shifts, I can imagine that the oceanic reaction (via overturning) to impulses in the Earth’s rotation will show a significant lag and damping.
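
    For anyone who wants to try this kind of fit, the bare-bones version is just lagged ordinary least squares. A hedged sketch follows (mine, not the actual CSALT code; the file names, and the assumption that the monthly series are pre-aligned and equal length, are hypothetical):

        import numpy as np

        def lag(x, k):
            """Shift series x back by k steps, padding the front with its first value."""
            return np.concatenate([np.full(k, x[0]), x[:-k]]) if k else x

        T = np.loadtxt("giss_monthly.txt")             # target: temperature anomaly
        factors = {                                    # hypothetical input files
            "log_co2": lag(np.log(np.loadtxt("co2.txt")), 6),
            "soi":     lag(np.loadtxt("soi.txt"), 6),
            "aero":    lag(np.loadtxt("aero.txt"), 6),
            "tsi":     lag(np.loadtxt("tsi.txt"), 6),  # 6-month lag on each factor
            "lod":     lag(np.loadtxt("lod.txt"), 48), # ~4-year lag, as noted above
        }

        X = np.column_stack([np.ones_like(T)] + list(factors.values()))
        coef, *_ = np.linalg.lstsq(X, T, rcond=None)   # OLS fit
        for name, c in zip(["const"] + list(factors), coef):
            print(f"{name:8s} {c:+.4f}")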

  140. I get that 0.5 is the convention, but what is the basis of it? How do I know 0.5 is better than 0.6 or 0.4?

    Section 3 of Hansen & Sato (2012) seems to discuss this.

  141. BBD says:

    -1=e^ipi

    I get that 0.5 is the convention, but what is the basis of it? How do I know 0.5 is better than 0.6 or 0.4?

    It’s just a rough estimate. ATTP is right – the methodology is explained in HS12 section 3 (see Fig 2). I’m certain that Hansen describes the 0.5 x as ‘crude’ elsewhere. I will try to track that down if you really want.

  142. -1=e^ipi says:

    http://onlinelibrary.wiley.com/doi/10.1029/2009GL038777/full

    This paper suggests a factor of 2-3 is only valid from 1970-2008. For earlier time periods it might have been as high as 9-11.

    http://www.sciencedirect.com/science/article/pii/S0277379110000405

    This paper suggests a factor of 3-4.

    Suppose the factor of amplification is 3 rather than 2. Then my earlier calculation becomes:
    0 = (BD)^2 – BD + 1.15/(10/3/0.610)
    => BD = (1 + sqrt(1 – 4*0.210))/2
    => BD = 0.699
    => ESS = 0.699*10/3/0.610 = 3.82 C.

    This is closer to the ESS value I got for the Pliocene vs Holocene.
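
    (For what it’s worth, the hedged ess() sketch from upthread reproduces these cases directly: ess(5.0) gives ~6.8 C and ess(10.0/3.0) gives ~3.8 C, so within this toy calculation the answer is dominated by the assumed amplification factor.)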

  143. Willard says:

    Andrew Dodds,

    I wonder if it would help to consider climate models as some kind of (semi?) supervised learning. At the very least, it would solicit a community that knows formal stuff and shouldn’t be impressed by “objective” armwaving.

  144. Joseph says:

    I don’t understand why I am being asked to defend consensus science. The IPCC and the BEST project, and Ed Hawkins’ charts, and James Annan (posted elsewhere), should be sufficient. If you don’t agree with them or don’t understand what they are saying, argue with them.

    I guess, Miker, we could use that tactic with you whenever you question some detail upon which there is a general consensus. For example, equilibrium sensitivity, ask them if they agree with the IPCC position. I am not going to ask you to argue with them if you don’t agree, though.

  145. Steven Mosher says:

    miker.

    don’t misuse the term validation.

    A proper validation requires an end user specification.

    In other words, if I’m a user of a GCM and I’d like the model to get temperature correct to within 1C, then a model that does that has met the spec and is by definition VALID.

    The models get things wrong. All models do. So we say some models are useful.

    The usability of a model depends on what the user wants to do. Different objectives require different levels of accuracy.

  146. BBD says:

    -1=e^ipi

    We seem to be skipping from Antarctic to Arctic in order to confect lower and lower ESS estimates. This doesn’t increase confidence in your results.

    These may be of interest:

    Zeebe (2013) Time-dependent climate sensitivity and the legacy of anthropogenic greenhouse gas emissions

    Previdi et al. (2013) Climate sensitivity in the Anthropocene

  147. dhogaza says:

    Miker:

    “I’m misunderstanding, because realclimate says I am?”

    Given that Gavin Schmidt was (before being promoted) the project lead for NASA GISS’s Model E, I think it’s fair to say that he knows more about how GCMs work than you do. Yes, indeed.

    Claims of “[over]fitting” and the like do demonstrate a total misunderstanding of how GCMs work. They’re not statistical models, unlike, say, Nic Lewis’s work. Sure, as our understanding of the underlying physics changes, the functions that translate that physics into the computational framework of the model change too, but that’s not at all “fitting” or “tuning” (as you misuse the word). Yes, of course, backfitting using actual observed values for forcings that are subject to natural variation that could not have been predicted in the past is used to help validate the physics, but again this is not “fitting” or “tuning” in the sense you seem to think it is. The weak solar minimum of the previous decade was not something that was or could have been predicted, for instance. It makes total sense to re-run models with the actual TSI values plugged in as a check, for instance. This is not statistical fitting, though.

  148. dhogaza says:

    Willard:

    “I wonder if it would help to consider climate models as some kind of (semi?) supervised learning. At the very least, it would solicit a community that knows formal stuff …”

    What community? Models aren’t an example of machine learning in the sense that any computer scientist would understand the term. They don’t modify themselves, there’s no feedback loop programmed into them, at least not the two I’m somewhat familiar with. Changes in understanding are reflected in changes in algorithms that are reprogrammed by humans. Backfitting involves inserting known historical figures for (say) TSI, substituting these for the constant value for TSI assumed when running the model forward.

    Perhaps I’m misunderstanding what you’re suggesting, though.

  149. dhogaza says:

    Miker:

    “I don’t understand why I am being asked to defend consensus science. The IPCC and the BEST project, and Ed Hawkins’ charts, and James Annan (posted elsewhere), should be sufficient. If you don’t agree with them or don’t understand what they are saying, argue with them.”

    I think it’s been made clear that the misunderstanding lies with you … and those that you choose to do your thinking for you, since we know that you’re just poaching misinformation from a handful of denialist blogs.

  150. BBD says:

    -1=e^ipi

    Mid glacials correspond to roughly -8 C and 190 ppm, interglacials correspond to roughly 2 C and 290 ppm.

    Use 5C or 4.5C for LGM / Holocene difference.

  151. -1=e^ipi says:

    @ BBD –
    “We seem to be skipping from Antarctic to Arctic in order to confect lower and lower ESS estimates. This doesn’t increase confidence in your results. ”

    That is a strong possibility, since the polar amplification is probably greater in the Northern Hemisphere. In any case, if the polar amplification factor for the Southern Hemisphere is 2-3 for the Pleistocene, then my previous calculations suggest that ESS is between 3.82 C and 6.81 C, which seems to agree with the 4-6 ESS range in the second paper you were referring to.

    Also, thanks for the links. I’ll be sure to read those papers when I have the time. 🙂

    “Use 5C or 4.5C for LGM / Holocene difference.”

    Yes, sorry about that mistake. 5C difference corresponds to the ~6.81 C ESS I calculated earlier.

  152. verytallguy says:

    Joshua, “interesting” in the Judith sense or the common usage? 😉

  153. miker613 says:

    “Mauritsen is consistent with Realclimate” I do not think so. He says the models _were_ tuned; realclimate says the models _were not_ tuned. Those don’t seem consistent to an objective observer. Note where I quoted from there on how it was tuned; that is fully consistent with the quote from AR5: they are not tuned explicitly, but the way they are built results in implicit tuning.

    “…rather unsurprisingly given that realclimate is written by climate scientists.” Because climate scientists never disagree? Doesn’t follow. Realclimate is trying to push a narrative about the models which is not correct. The last century counts as training data.

  154. miker613 says:

    Whoa, Joshua, I see it there. (Implicitly) Tuning the models! From a modeller!

  155. miker613 says:

    VTG, I’m sure you agree with the perspective of that blog, as Judith Curry is a prominent “climate scientist”.

  156. verytallguy says:

    Miker,

    to claim that I said models were not tuned, when I provided an emboldened quote from AR5 starting “they are tuned” and going on to describe how they are tuned leaves me entirely lost for words, but at least relieved of any desire to waste any more time on this conversation.

  157. miker613 says:

    ‘Given that Gavin Schmidt was (before being promoted) the project lead for NASA GISS’s Model E, I think it’s fair to say that he knows more about how GCMs work than you do. Yes, indeed.’
    Yes, dhogaza, he does. That’s why I quoted BEST and a team of climate modellers and (now) a post from judithcurry from another climate modeller. And your poor examples of what I’m calling “tuning” are not the examples used by those people: see their actual examples before you criticize strawmen. Because Gavin Schmidt is not representing the consensus here.

  158. jsam says:

    “I want to be a Libertarian when I grow up.”

    “You have to choose, son.”

  159. miker613 says:

    VTG, you are claiming they are not tuned in the sense that I mean, and that was your point in posting it. You are trying to redirect the term tuning to something innocuous. The quotes from both posts on tuning climate models mean a different sense, and that is what you are trying to deny.

  160. verytallguy says:

    Miker, seriously, I’m not conversing with someone who quotes me as saying the precise opposite of what I actually said.

  161. miker613 says:

    Joseph, I accept your point. But I am not being asked to “defend” it in the sense of proving that it’s right. I am only defending that it is indeed the commonly accepted consensus in the field. I know what the IPCC said about climate sensitivity, and some of what others say about that.
    Here people are claiming, with a straight face, that realclimate says there is no tuning so there is no tuning, and everyone who knows anything knows it. The truth is the opposite.

  162. miker613,

    Here people are claiming, with a straight face, that realclimate says there is no tuning so there is no tuning, and everyone who knows anything knows it. The truth is the opposite.

    No, stop misrepresenting what is being said. No one’s claiming that there is no form of tuning. The point being made is that the models are physically motivated and that the underlying concept is basic energy and momentum conservation. Yes, there are some processes that cannot be properly represented in these models and so are parameterised, and these parameters can be tuned to a certain extent. However, they are still constrained by the underlying physics. Therefore these models are not simply tuned to match some prior state. As pointed out above, these parameters are typically adjusted to best fit some average climate state and to represent intrinsic variability as well as possible. They are not tuned to produce some kind of best fit to the 20th century trend. Hence your over-fitting claim is almost certainly incorrect.

  163. -1=e^ipi says:

    @ BBD –

    So I skimmed the Previdi 2013 paper. Its 4-6 C ESS range is based on three other papers: Hansen et al. 2008, Hansen and Sato 2012, and Lunt et al. 2010.

    The Hansen 2008 and 2012 papers estimate ESS to be 6C and 8C respectively. Of course, these estimates use ice-age Pleistocene data, have all of the problems I mentioned before, and use the assumption of multiplying ice core data by 0.5 to get global mean temperature data. I am not very convinced by Hansen’s estimates.

    Lunt et al. 2010 is far more reasonable and uses a methodology that avoids assumptions such as the polar amplification factor of 2 that Hansen uses. It also uses changes from the Holocene to the Pliocene rather than intra-Pleistocene changes, which is probably more reflective of what should be expected from anthropogenic CO2 emissions. It also takes into account changes in orography, which is very important. It gets ESS estimates between 4 C and 4.5 C. Of course, this relies on the ECS being approximately 3.04 C, which might be a slight overestimation.

  164. miker613 says:

    Mosher, you’re right (and ATTP is right) that I am over-using or misusing the term validation. But to me the issue here isn’t whether or not they’ve failed validation. (a) That’s a matter of dispute, between those like von Storch and Lucia, and those like BEST and the IPCC who say they haven’t quite yet. (b) As you say, it isn’t really a validation process.
    The issue is, or should be, between people like most of the commenters here and realclimate who say, (1) the models are “doing a remarkable job”, (2) just put in the right inputs and all the issues go away, (3) you can’t tell anything anyhow in less than 60 years, (4) the models are just physics so they can’t be guilty of overfitting, etc. And so many experts I see outside who are saying, (1) the models are doing “embarrassingly” badly [Annan], (2) sometime soon we will have to rethink them if temperatures don’t get their act together [BEST], (3) the models may well be tuned [both papers by modellers] and therefore may well be unreliable [second paper], because we haven’t had a chance to test them on new data and they didn’t do so well on the new data so far.

    And the subject of the post is this new paper, which does not seem to do what it claims, because it seems to be trying to use the whole past century of data as validation.

  165. BBD says:

    -1=e^ipi

    I am not very convinced by Hansen’s estimates.

    Then publish a reply. At this point, I’m not very convinced by yours.

  166. miker613,

    The issue is, or should be, between people like most of the commenters here and realclimate who say, (1) the models are “doing a remarkable job”

    No one here says that.

    (1) the models are doing “embarrassingly” badly [Annan],

    No, he didn’t. Try reading the quote I included above.

    And the subject of the post is this new paper, which does not seem to do what it claims, because it seems to be trying to use the whole past century of data as validation.

    No, it’s not. It’s pointing out that you can explain the discrepancy between modeled and observed short-term trends as being a consequence of internal variability. This may not be correct, but the level of internal variability required is consistent with other studies that have tried to determine the magnitude of the variability (Palmer & McNeall, for example).

  167. BBD says:

    miker

    You are being tedious again.

  168. dhogaza says:

    Miker:

    “Yes, dhogaza, he does. That’s why I quoted BEST and a team of climate modellers and (now) a post from judithcurry from another climate modeller. And your poor examples of what I’m calling “tuning” are not the examples used by those people: see their actual examples before you criticize strawmen. Because Gavin Schmidt is not representing the consensus here.”

    There are two possibilities here:

    1. Gavin Schmidt is out-and-out lying about how the model he’s been responsible for over the last couple of decades actually works, and how it is parameterized

    – or –

    2. You’re misunderstanding what various scientists are saying, as ATTP and others are saying above.

    If you think Gavin Schmidt is lying – and this is the only interpretation of your remarks that makes sense – say so explicitly.

  169. miker613 says:

    VTG, that’s your choice. I think I accurately described what you said: they are tuned, but not in the sense I mean. And I responded to exactly that: they are tuned, and in the sense I mean.

    ATTP, I don’t know if you’ve studied machine learning. I recently finished one course (Yaser Abu-Mostafa) online, and this was a major focus of the course. Overfitting is almost the rule, not the exception, and happens incredibly easily when you have access to the testing data. Especially when you have lots of degrees of freedom available (Roger Pielke Sr. said it was on the order of fifty.) The process that they are describing in these papers is exactly the kind of process that can lend itself to overfitting, and the authors are acknowledging this. The fact that they aren’t trying to curve-fit doesn’t solve the problem.
    Don’t you think it’s a little bit freaky that after more than a century of data, models differ by large amounts on the climate sensitivity – and yet all of them fit the century of data pretty well? Some other parameter like aerosols changes to compensate. Not good.

  170. miker613 says:

    dhogaza, how about a third possibility? – they disagree with him. And/or he’s missing something. And/or he is choosing a way to present it that simplifies away the troublesome issues, because that’s what people do when they are advocating.
    Are you seriously suggesting that anything on realclimate is either to be accepted without question because Gavin Schmidt says it, or he must be a liar? Scientists never disagree?

  171. BBD says:

    Not good.

    Conspiracist ideation?

    What about volcanism, miker?

    Blankety-blank, blank, blank.

  172. BBD says:

    because that’s what people do when they are advocating.

    Yup, it’s a conspiracy of greenie advocates.

  173. miker613,

    Are you seriously suggesting that anything on realclimate is either to be accepted without question because Gavin Schmidt says it, or he must be a liar? Scientists never disagree?

    I don’t think anything should be accepted without question, but Gavin Schmidt does happen to be a leading climate modeller. Simply dismissing what he says would seem foolish.

  174. -1=e^ipi says:

    “Then publish a reply. At this point, I’m not very convinced by yours.”

    The Hansen methodology does not fully take into account the issue of a different initial forcing (CO2 vs Milankovitch cycles), which results in an overestimation of ESS. And it also assumes a polar amplification factor of 2 in order to just multiply the ice core data by 0.5 to get global average temperature, which again may overestimate ESS.

    I find it confusing that when replying to my question on what is the basis of the convention of multiplying ice core data by 0.5, you provide a paper that references another paper that uses the 0.5 assumption.

  175. miker613 says:

    @ATTP – “Noone here says that” ATTP, here is the title of a realclimate post by Gavin Schmidt that BBD linked: “Climate Models Show Remarkable Agreement with Recent Surface Warming”. Everything’s just great.
    And I’m not sure how you are telling me that I am reading Annan wrong. I won’t include the quote again, but you saw it elsewhere.

  176. dhogaza says:

    miker:

    “Whoa, Joshua, I see it there. (Implicitly) Tuning the models! From a modeller!”

    No, the author of the piece that Judith quotes is a consumer of model products, not a modeller.

    His claims may, or may not, have merit, but he is not a modeller.

  177. BBD says:

    -1=e^ipi

    I find it confusing that when replying to my question on what is the basis of the convention of multiplying ice core data by 0.5, you provide a paper that references another paper that uses the 0.5 assumption.

    I find it puzzling that you cannot read and understand the explanation for that assumption, especially when it has been specifically drawn to your attention.

    I find it tedious arguing with people who clearly don’t *quite* understand what they are talking about but have a very clear intention of confecting low sensitivity estimates by any means possible.

  178. miker613,
    What I remember you quoting was Annan suggesting that how some senior climate scientists had responded to the “pause/hiatus/slowdown” was poor. I have no memory of you ever showing that he had said “the models are doing embarrassingly badly”.

  179. miker613 says:

    ATTP, you may be right. “And despite what some people might like to think, the slow warming has certainly been a surprise, as anyone who was paying attention at the time of the AR4 writing can attest. I remain deeply unimpressed…” He may not mean the models in particular, more the conclusions everyone was drawing at the time from the models and everything else. By now it’s the conventional wisdom that 15 years doesn’t count; back then it was taken for granted that there wouldn’t be a pause like that ever again. The models were part of that confidence.
    But I withdraw that claim; he wasn’t specific enough for that.

  180. miker613 says:

    dhogaza, you are correct about that. He works heavily with models, but is not a modeller.

  181. miker613,
    This is what James Annan said:

    And despite what some people might like to think, the slow warming has certainly been a surprise, as anyone who was paying attention at the time of the AR4 writing can attest. I remain deeply unimpressed by the way in which this embarrassment has been handled by the climate science insiders, and IPCC authors in particular.

  182. miker613 says:

    Ha! ATTP, I got there first!

  183. Yes, I know, but you missed out the end bit, which I would regard as quite important so as to at least get the context.

  184. BBD says:

    Well, as we know from experience, miker does have a habit of putting words into people’s mouths and of selective quotation.

  185. Willard says:

    > I got there first!

    Here:

    https://andthentheresphysics.wordpress.com/2015/01/25/puerto-casado/#comment-45360

    Then it was but James. Then it was but sensitivity. Then it was but China. It’s a bit early to return to but what James said, don’t you think?

    In fairness, you’ve succeeded in peddling but what Bjorn said in the other thread. You know, the thread where you failed to justify why more constrained sensitivity estimates would constrain policy choices.

  186. Joseph says:

    ATTP, would it be fair to say that even if the models on “average” are running hot for the current 15-year period, there has been a lot of important research done that could explain why they might be running hot (e.g. impact of ENSO, volcanoes (aerosols), lack of Arctic coverage, etc.)? And that it could be a combination of all or some of these factors?

  187. Joseph,
    Yes, that’s how I would regard it. I even agree with what James Annan was quoted as saying.

    recent evidence does hint at the models modestly overestimating the rate of climate change,[29] but there is certainly not yet sufficient evidence to overturn the major paradigms of today’s models.

    It could be that they are running slightly hot, but it is too early to really know and – IMO – it’s highly unlikely that we’ll be overturning any paradigm. It’s more likely that it will be a combination of what you suggest and possibly some other adjustments. As someone else pointed out, it could be that they’re slightly under-estimating energy transport into the deep ocean, which would then overestimate the TCR and produce these kinds of mismatches (possibly without influencing the ECS).

  188. Joshua says:

    willard –

    I think you’re being unfair to miker.

    It might not be that he’s peddling. IMO, he doesn’t seem to be considering the problems with his arguments if the assumptions he’s making are wrong.

    Maybe he should consider the possibility that he doesn’t understand climate science and the unforeseen consequences of his lack of understanding as well as he thinks he does.

    I am sure going to be fearful if policy-makers ignore the possibility that miker’s arguments might be wrong.

  189. miker613 says:

    “I am sure going to be fearful if policy-makers ignore the possibility that miker’s arguments might be wrong.” Yup – me too. But I am also fearful if policy-makers ignore the possibility that you-all are wrong. Or that the policies that you-all espouse have all kinds of major consequences that you-all are sure are no problem.
    But the last twenty years is a little bit of a comfort to me, as policy-makers are probably not going to do anything anyway.

  190. Joseph says:

    Or that the policies that you-all espouse have all kinds of major consequences that you-all are sure are no problem.

    Do you have a study in mind that says there are going to be “all kinds of major consequences?”

    And Miker, you should use the HTML blockquote, so people can more easily tell what you are responding to. You can delete the “cite” attribute in the example.

    http://www.w3schools.com/tags/tag_blockquote.asp

  191. -1=e^ipi says:

    “I find it puzzling that you cannot read and understand the explanation for that assumption, especially when it has been specifically drawn to your attention.”

    How can I read or understand something that I cannot find? The multiply by 0.5 seems to be a rule of thumb / convention, but I do not see how it is more justified than say 0.4. If you would like to point out this explanation, I would greatly appreciate it.

    “I find it tedious arguing with people who clearly don’t *quite* understand what they are talking about but have a very clear intention of confecting low sensitivity estimates by any means possible.”

    So apparently, 3 data points makes a ‘clear intention’? First forgetting to factor in arctic amplification, then going with the conventional value of 2, then questioning that value of 2 when a large amount of scientific literature indicates it is likely higher than 2.

    Is this ‘clear intention’ the reason I used Pleistocene data after using Pliocene-Holocene data (even though Pliocene obviously gives a lower ESS value, so if I only cared about getting as low ESS as possible I wouldn’t have bothered with ice-core data)?

    Is this ‘clear intention’ the reason why I showed that Craig Loehle underestimates climate sensitivity and the ECS is closer to 2.95 C rather than 1.98 C based on the same data?

  192. -1=e^ipi says:

    After another read, the estimates I did earlier of the ESS based on Pliocene or Pleistocene data are not valid. My point that the difference in the ratio of temperature change to CO2 change from paleoclimate data to today’s anthropogenic warming should differ by a factor BD still stands. But I think I messed up by using 1.15 K as the no-feedback effect of doubling CO2 as B. B should include other non-CO2 feedbacks as well.

    Meh, I give up. I’ll just accept the Lunt et al. 2010 ESS estimates since their methodology completely avoids this issue of the correlation between temperature and ln(CO2) being different depending on what the initial forcing is, they take into account orography, changes in the Pliocene to Holocene more accurately represent what should be expected due to AGW than changes during the Pleistocene, they don’t have the issue of trying to figure out the correct polar amplification factor, and Milankovitch cycles can make things quite complicated. So ESS of 4 – 4.5 C?

  193. Willard says:

    > Don’t you think it’s a little bit freaky that after more than a century of data, models differ by large amounts on the climate sensitivity – and yet all of them fit the century of data pretty well?

    Only if sensitivity matters much in the grand scheme of things, say by assuming that a more constrained estimate of it would matter for policy. This assumption has yet to be justified by MikeR.

  194. jyyh says:

    Jumping into potentially interesting speculation…

    WebHubT saith: “You also said that Tamino “found the best correlation for TSI was with a short lag”. I do apply a 6-month lag to each of the factors. This also acts as a smoothing filter to remove the noise on the time series.”

    oh, relying much on statistics then.
    yes, the soi definitely needs some smoothing (basically an oceanic index calculated out of atmospheric measurements, no?). as do the sunspot-derived tsi numbers (assuming a linear or near-linear relationship between them; the sun rotates, so not all sunspots are visible at the time of measurement). maybe we’re talking of a bit different thing when saying ‘lag’. the 5 to 6 month lag (=delay in effect) would be primarily due to ENSO, ‘the moisturizer (+partly co2izer) of the atmosphere’; the water vapor (+ slight increase of CO2) coming out of the hot Pacific needs nearly half a year to reach its full effect. maybe getting to the extremities of the other hemisphere takes that long. maybe the aam effect needs the same lag for the same physical reasons too, but why all the others?

    WebHubTelescope continueth:
    “The exception to that is the multidecadal LOD factor has a 4-year lag. I don’t know what this implies exactly but if it has something to do with momentum shifts, I can imagine that oceanic reaction via overturning to impulses in the earth’s rotation will show a significant lag and damping.”
    I’ve imagined amoi and pdo (ipo) being ultimately measures of a feature of deeper circulation; amoi might be a better measure of this than the pdo, also because the strong one, enso, happens in Pacific territory, so how much is pdo disturbed by it? I do not know how pdo is calculated.
    I’ve thought these decadal features of oceanic basins are all somehow linked to each other, so I haven’t actually done the numbers on those, just checked the residuals of a simpler fit, and amoi has about the same period (not in sync with each other)… thus I’ve not been caring about the length of the lag (=delay in effect) in this case. possibly there is some decadal feature that would be ‘in sync’ with the T we’re attempting to calculate from other features, but it is impossible or near impossible to measure where this is… e.g. possibly the ‘4 year lag’ is the time sunk water from the NA takes to cross the Atlantic and round the Antarctic to spread the slight effect assigned to it all over the world… plenty to speculate on in these statistical analyses. good job.

    I’m curious how this CAN work this well when the albedo effect isn’t yet in; maybe ENSO and aam combine to produce more clouds in most locations, but I would imagine the reduction in ice over the north would drop the neat correlation down a bit… no idea how the polar amplification could be incorporated into this sort of simple global statistical model. Would likely have to use some sort of gridding, or do this separately for each hemisphere; don’t know how to work with those. :-/

    We don’t need any additional time series to “pin down this rule” because, ultimately, it needs to be fed through a loss function to decide what to do, and whether the ECS is 3 degrees or 2.7 or 4, the loss is big enough that the objective decision rule is ACT NOW. Sure, rationality can be set aside, and we can opt to do direct air capture of CO2 down the road when it’s an emergency, at 10x the cost even if discounted. People are fools.

  196. In the long run, volcanos and ENSO don’t matter.

  197. Mean values don’t signify diddly-squat without credible intervals around them. They might not even exist.

  198. Oh, get over it. Substitute human GHG forcing for your predictor instead of time against warming, and you’ve got 80% of the variance explained.

  199. “Bear in mind that any forcing estimate comes from models anyway”???? What bullshit! You can calculate forcing by hand in 15 minutes, from an algebraic equation.

  200. So, why look at trends at all? Why not just deal with physical models and processes? The answer is that The Public and The Politic don’t trust scientific processes and would rather See It In Direct Data. Sorry, they don’t understand those processes.

    I say, let them see it unfold, and who gives a fig any more if they or their kids suffer?

  201. No, Eli is simply asserting the mathematical point that differentiation and smoothing with a Gaussian (http://www.swarthmore.edu/NatSci/mzucker1/e27/filter-slides.pdf) operator commute, so y’might as well smooth first.
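
The commutation claim is easy to verify numerically; a minimal sketch (assuming scipy is available; away from the boundaries the two orderings agree to floating-point precision, since both are linear convolution operators):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 1000)
y = np.sin(t) + rng.normal(0, 0.1, t.size)  # noisy signal

sigma = 20  # smoothing width, in samples

# Smooth first, then differentiate...
a = np.gradient(gaussian_filter1d(y, sigma), t)
# ...or differentiate first, then smooth: linear operators commute
b = gaussian_filter1d(np.gradient(y, t), sigma)

print(np.allclose(a[100:-100], b[100:-100]))  # True, away from the edges
```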

    I continue to think that “internal variability” is not well defined as a mathematical concept. I’ve had people tell me all it amounts to is unexplained residuals. If that is what it amounts to, it is hardly worthy of being identified as a major phenom. That characterization reduces to “It’s what we don’t understand.”

  202. And I don’t buy that the “discrepancy between models and observations” is statistically significant. From everything I have seen and studied, for instance, Fyfe, Gillett, and Zwiers characterization of that is sheer bupcus, based upon a misinterpretation of (a) how HadCRUT4 should be used, and (b) a slavish, unimaginative interpretation of statistical significance testing.

  203. And, as I was recently reminded, “cloud cover” in terms of albedo is frequency dependent. So, it’s not just whether there is cloud cover or not, it is cloud droplet size, residence time, and the like. Clouds do not have equal absorption spectra.

  204. Actually, quantitatively, (a) you need to separate which of the things you specify are drivers and which are effects, and (b), compared to any volcanic forcings, current human forcing by CO2 utterly dwarfs any of these effects. Need to do arithmetic.

  205. “Training period” for climate models? What do you think these are? Neural nets? They have ab initio physics built into them, plus a lot of table look-ups and simplifications to keep their run times small. Sure, they calibrate against a set of boundary and initial conditions, but it’s not like this is some massive statistical random forest thing. Don’t speak as if they are.

  206. I would suggest that for many reasons, point estimates for things like TCR and ECS are totally meaningless. Y’need a distribution folks. How do you know that mean values for the distributions you are manipulating even exist?

  207. jyyh

    The 6-month lag is really an exponential smoothing function with a 1/e damping time of 6 months. That is a minor filter which like you suggest may be part of the lag time it takes for an ENSO measure such as SOI to spread to the rest of the world. Applying it elsewhere, for example on TSI, has virtually no effect because TSI varies on the order of 10 years. What is left is applying it to Sato’s aerosol time series (stratospheric aerosol optical thickness). This filter is subtle because it is only 6 months and the average width of the lines is greater than 6 months. IOW, it doesn’t really matter in the greater scheme of things, if that’s what you are wondering about. I could remove that lag and it wouldn’t be visibly noticeable; as it would reduce the correlation coefficient by 0.002.
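
A minimal sketch of this kind of filter (my reconstruction, not WebHubTelescope's actual code): a first-order exponential smoother applied to a monthly series, where the tau parameter is the 1/e damping time.

```python
import numpy as np

def exp_smooth(x, tau=6.0):
    """First-order exponential smoothing with a 1/e damping time of tau samples.

    Acts as a causal low-pass filter, so it both smooths the series and
    introduces a lag of roughly tau samples.
    """
    alpha = 1.0 - np.exp(-1.0 / tau)  # per-sample smoothing weight
    y = np.empty(len(x))
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    return y

# e.g. 20 years of a noisy monthly SOI-like index
rng = np.random.default_rng(2)
soi = rng.normal(size=240)
smoothed = exp_smooth(soi, tau=6.0)  # 6-month damping time
```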

    To give an idea of how much the “GREAT PAUSE” plays into this, check out the region after 2011, indicated by the yellow box below. This shows a divergence by about 0.1C, which has already been mostly compensated for by this past year’s high temperature record.

    I haven’t updated this data since late 2013. Patience is a virtue. I knew the temperature would eventually rise, because it always has over the past 130+ years.

  208. Willard says:

    Could you please take the time to quote the claims to which you reply, Hyper?

    Also, if you could show the 15-minute calculation, that would be great.

    The last sentence is in response to this:

    > You can calculate forcing by hand in 15 minutes, from an algebraic equation.

    There’s no rush. Everybody’s asleep.

    @Willard, such a calculation is simply a reprise of selected material from Ray Pierrehumbert’s Principles of Planetary Climate, which has appeared in separate places, including http://people.su.se/~rcaba/publications/pdf/pierrehumbert.2013.PNAS.commentary.pdf, and in a Physics Today article which had Judith Curry applauding at http://judithcurry.com/2011/01/19/pierrehumbert-on-infrared-radiation-and-planetary-temperatures/.

    Still, I don’t see why I need to retype all that here when it’s been done countless times, in textbooks and on the Web, e.g., http://www.mathaware.org/mam/09/essays/Radiative_balance.pdf. “That is the entire Torah. Go and learn it.”

  210. jyyh says:

    WebHubTelescope, ah, I missed the effect (or rather, non-effect) the different periods of the series have on the calculations… yes, I agree that the filter is needed, but like you say it has only a small effect on the other series and the overall performance of the model. Ok.

  211. jyyh, thanks for keeping me honest on the details. Hope you get another chance to try your hand at a model fit in the future.

  212. hyper,

    “Bear in mind that any forcing estimate comes from models anyway”???? What bullshit! You can calculate forcing by hand in 15 minutes, from an algebraic equation.

    Yes, technically one can do this for CO2 very easily. However, if you want a full forcing timeseries for all external forcings, then these typically come from models.
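
For the CO2 case, the algebraic equation in question is presumably the standard logarithmic approximation of Myhre et al. (1998); a one-liner:

```python
import math

def co2_forcing(c, c0=280.0):
    """Radiative forcing (W/m^2) from changing CO2 from c0 to c (ppm),
    using the approximation dF = 5.35 * ln(c/c0) (Myhre et al. 1998)."""
    return 5.35 * math.log(c / c0)

print(co2_forcing(560))  # doubling: ~3.7 W/m^2
print(co2_forcing(400))  # roughly the current concentration: ~1.9 W/m^2
```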

    Oh, and maybe we could tone things down a little. You seem to be responding quite strongly to people who – I think – would largely agree with you.

  213. Rachel M says:

    Hypergeometric,
    It looks a bit like you’ve bombed the thread here. Can you slow down a bit next time? It’s also not clear who you’re responding to so it would help to include a quote from the comment you’re replying to in your reply.

  214. verytallguy says:

    Not perhaps strictly on-topic, but I’ll try to justify it on the grounds of demonstrating that even if models were running hot, there are other good policy reasons to reduce fossil fuel use.

    With mineral resources, we are seeing something similar: operators redoubling their efforts in the face of diminishing returns of extraction; the story of “shale gas” and “shale oil” is a typical example. Maybe it is done hoping that – somehow – the destruction of one stock will increase the probability to find a new one (or to create one by some technological miracle). So, instead of trying to make mineral stocks last as long as possible, we are rushing to destroy them at the highest possible rate. But, unlike fish stocks that can replenish themselves, minerals do not reproduce. Once we’ll have destroyed the rich ores that created our civilization, there will be nothing left behind. We will have ruined ourselves forever.

    Struck me as nicely encapsulating the mentality of climate change deniers, particularly the Panglossian belief in “technological miracles”.

    http://cassandralegacy.blogspot.co.uk/2015/02/senecas-gamble-how-to-create-your-own.html

    Entertainingly, Ugo Bardi is associated with the Club of Rome, so plenty of scope for conspiracy ideation in response.

  215. BBD says:

    -1=e^ipi

    How can I read or understand something that I cannot find? The multiply by 0.5 seems to be a rule of thumb / convention, but I do not see how it is more justified than say 0.4. If you would like to point out this explanation, I would greatly appreciate it.

    Okay, perhaps “HS12 section 3” was a bit vague. Please see Hansen (2008) p 218 section 2.1 Verification and Fig 1 and Hansen & Sato (2012) p 27 Fast Feedback Sensitivity and Figure 2.

  216. BBD says:

    -1=e^ipi

    So ESS of 4 – 4.5 C?

    I’m sorry if I have misunderstood your position during this exchange. The various confusions may have created a misleading impression.

    Even accounting for model error as it does (Table 1; text p 63), there are indications that the estimated ESS in Lunt et al. (2010) may be too low. The more holistic methodology of Hansen et al. (2013) derives an average fast-feedback sensitivity of 4C / 2x CO2 for the Cenozoic (~3C for late Holocene climate).

  217. BBD says:

    Rachel (or ATTP)

    Would you remind me of the link-per-comment limit? I’ve forgotten if it is two or four (is it four these days?). My mind is going, Dave, etc…

  218. Andrew Dodds says:

    miker613 –

    There is a complete category difference between physical models and machine learning models. You *cannot* safely take concepts across.

    Let’s take a simple example – I throw a ball of varying mass and starting angle, and I want to predict where it lands.

    A physical model would start by calculating the effect of gravity on the motion of the ball, then add in air resistance, and add both effects to the basic geometry of ball throwing. This involves a lot of prior information – gravity, momentum, air resistance, that sort of thing – to create a model that predicts where the ball will land. You could perhaps subsequently use a small set of empirical results – a few dozen at most – to slightly change those parameters that are known to be subject to uncertainty (perhaps air resistance with the given type of ball surface material) to improve the model fit. HOWEVER, you can’t change things like g, at least outside the known error range. Your parameters are physics-constrained.

    A ML model would be completely different. You’d want hundreds if not thousands of examples of ball throwing across a whole range of parameters – training sets, testing sets and validation sets. The initial weights would be set randomly and you get them to converge to minimize errors (noting of course the danger of over-fitting – you’d probably try a range of Neural network densities and geometries). But you wouldn’t be manually tweaking weights of NN connections – and it would be silly to try.

    Both approaches would probably come up with decent predictors, because in this example we have a solid physics based approach and the possibility of enough training data for a ML approach to work. But going to climate – the amount of data is completely insufficient for a ML approach*; you’d have to parameterise the input examples so much as to make them useless. Whereas we have enough physics to build a physical model that reproduces many of the features of the climate. And, importantly, tweaking the parameters of such a model within prior established physical boundaries is nothing like a machine learning problem.

    *The only way to get enough data in a usable format would be from multiple GCM runs.
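
A sketch of the physics-based version of the ball example (names and numbers mine, purely illustrative): g is fixed by physics, and only the drag coefficient is a plausibly tunable parameter, constrained to a narrow physical range.

```python
import math

G = 9.81  # m/s^2 -- not a tunable parameter

def landing_distance(speed, angle_deg, drag_coeff=0.0, mass=0.5, dt=1e-3):
    """Where the ball lands (m), by integrating gravity plus quadratic drag."""
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while y >= 0.0:
        v = math.hypot(vx, vy)
        vx += -drag_coeff * v * vx / mass * dt
        vy += (-G - drag_coeff * v * vy / mass) * dt
        x += vx * dt
        y += vy * dt
    return x

print(landing_distance(20.0, 45.0))                   # no drag: ~40.8 m (= v^2/g)
print(landing_distance(20.0, 45.0, drag_coeff=0.01))  # with drag: noticeably shorter
```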

  219. [Mod : I’m not interested in promoting Paul Homewood’s nonsense on my site. Anyone who thinks that a temperature record meant to represent a fixed, unchanging location, with measurements taken at the same time every day with a well-calibrated instrument, can have discontinuities, is not thinking straight. He’s lucky that most scientists either ignore what he says, or – if they don’t – don’t respond with “don’t be stupid, that’s ridiculous”. Also, anyone who thinks that adjustments to the temperature record are a sign of fraud or misconduct is a [Mod : redacted]]

  220. verytallguy says:

    Next, to scratch Willard’s itch, is climate sensitivity even policy relevant within current uncertainty range vs a <2C target?

    We find that much lower values [of climate sensitivity] would postpone crossing the 2 °C temperature threshold by about a decade for emissions near current levels

    “Implications of potentially lower climate sensitivity on climate projections and policy” Joeri Rogelj et al 2014 Environ. Res. Lett. 9 031003

    http://iopscience.iop.org/1748-9326/9/3/031003

  221. BBD says:

    Here’s Alex Otto commenting informally on the implications of lower sensitivity:

    What are the implications of a TCR of 1.3 °C rather than 1.8 °C? The most likely changes predicted by the IPCC’s models between now and 2050 might take until 2065 instead (assuming future warming rates simply scale with TCR).

  222. verytallguy says:

    Finally, energy security is constrained by reliance on fossil fuels, particularly oil

    The situation in the Middle East is a major concern

    http://www.bbc.co.uk/news/business-30021824

    and personally I certainly don’t feel great about a reliance on Russian gas…

  223. BBD,
    It’s actually set at 6 links.

  224. BBD says:

    Thanks ATTP.

  225. Andrew Dodds says:

    vtg –

    You’re not allowed to mention energy security, it tends to make denialists’ heads explode..

    Hmmm… so we spend billions a year protecting these Middle East states despite the extremely strong suspicion that they are funding terrorists who are trying to blow us up… and handing over billions to the Russians for oil and gas even when they are, according to reports, funding and supplying a proxy war on the borders of the EU… and that’s OK, apparently, but making any effort to stop using fossil fuels would instantly destroy our economy. Got it?

  226. Andrew,
    But, according to the analysis, it’s producing solid GDP growth!

  227. verytallguy says:

    Andrew Dodds,

    Also, you’re not allowed to mention the finite nature of our world. It’s obviously essential to the UK economy to use fracking to maximise our extraction of natural resources as soon as possible. ‘Cos in the future when we’re flying around on levitating skateboards powered by impossibility drives those reserves will be worthless. No chance of a resource cliff approaching, no sir.

    Got it?

  228. Willard says:

    > “That is the entire Torah. Go and learn it.”

    Does it take 15 minutes? That was the advertised time.

    The site handles LaTeX, BTW.

  229. Willard says:

    > The only way to get enough data in a usable format would be from multiple GCM runs.

    Which makes sense, since machine learning works the opposite way from GCMs, I believe: from a dataset, the ML algorithms infer the underlying function(s), while computer simulations have the laws already implemented in a set of equations and approximating parameters (the part about which MikeR raises his concern) and project simulations in a problem space.

  230. Willard says:

    > ‘Cos in the future when we’re flying around on levitating skateboards powered by impossibility drives .

    I believe you refer to the infinite improbability drive, Very Tall:

    The impossibility drive only applies to Grrrowth.

  231. Here’s Alex Otto commenting informally on the implications of lower sensitivity:


    What are the implications of a TCR of 1.3 °C rather than 1.8 °C? The most likely changes predicted by the IPCC’s models between now and 2050 might take until 2065 instead (assuming future warming rates simply scale with TCR).

    WHUT the heck is wrong with these people? A TCR of 1.3C for CO2 alone likely corresponds to an effective sensitivity of over 1.8C if CO2 acts as a leading indicator for the other GHGs such as methane and N2O.

    Do they think that the rise of methane and N2O will simply stop while CO2 continues to ascend?

    I realize that this is an informal comment by Alex Otto but doesn’t he realize that climate scientists should not be opening their yap about policy implications? 🙂

    Thanks BBD for the setup 😉

  232. WHUT’s up with the methane rise of recent years?

    Why isn’t Alex Otto informally concerned about this? 🙂

  233. -1=e^ipi says:

    “Okay, perhaps “HS12 section 3″ was a bit vague. Please see Hansen (2008) p 218 section 2.1 Verification and Fig 1 and Hansen & Sato (2012) p 27 Fast Feedback Sensitivity and Figure 2.”

    I feel really embarrassed about missing that. I do have 3 questions with respect to the 2008 section 2.1:
    Is the choice of a climate sensitivity of 0.75 C / (W/m^2) a reasonable assumption to make to do these calculations in order to later get climate sensitivity?
    Does it make sense to not take into account the direct effects of Milankovitch cycles when looking at paleoclimate data from the Pleistocene (Hansen just considers GHGs and changes in glacier cover)?
    Wouldn’t it make more sense for Hansen to fit the Ice-core data to the expected temperature in order to get a best estimate for the polar amplification factor, rather than just graph expected temperature with 0.5*ice-core data and suggest that the fit is good?

    Also, a question about 2012:
    Is a time scale of ~20,000 years from the LGM to the mid-Holocene too long to use for ECS? Shouldn’t something on that time scale yield a value between ECS and ESS?

    “Even accounting for model error as it does (Table 1; text p 63) there are indications that the estimated ESS in Lund et al. (2010) may be too low. The more holistic methodology of Hansen et al. (2013) derives an average fast-feedback sensitivity of 4C / 2x CO2 for the Cenozoic (~3C for late Holocene climate).”

    That Hansen paper covers a much larger time scale and doesn’t take into account continental changes. Lunt et al. covers a far smaller time scale and they try to take into account continental changes. If the continental changes over the Cenozoic have been slowly cooling the Earth and Hansen doesn’t take this into account, then what he gets is an overestimation. The Hansen paper also appears to involve far more rounding error.

  234. BBD says:

    -1=e^ipi

    I don’t think you have quite understood what HS12 is doing:

    Also, a question about 2012:
    Is a time scale of ~20000 years from the LGM and the mid-Holocene too long to use for ECS. Shouldn’t something on that time scale yield a value between ECS and ESS?

    The calculation is purely radiative and it is a *comparison* between the LGM and the pre-industrial Holocene. Two snapshots, if you like.

    Ditto here:

    Is the choice of a climate sensitivity of 0.75 C / (W/m^2) a reasonable assumption to make to do these calculations in order to later get climate sensitivity?

    The 0.75 C/(W/m^2) is the empirical estimate derived from the LGM / Holocene comparison. What you are looking at (I think) is the *validation* process.

    Can you go back and look again at both studies, but not in a rush – not with a view to a further response here – in your own time and at your own convenience, and read again, carefully?

  235. BBD says:

    That Hansen paper occurs over a much larger time scale and doesn’t take into account continental changes.

    I think you over-state the radiative impact of these. See H&S (2012) p24:

    Continent locations affect Earth’s energy balance, as ocean and continent albedos differ. However, most continents were near their present latitudes by the early Cenozoic (Blakey 2008; Fig. S9 of Hansen et al. 2008). Cloud and atmosphere shielding limit the effect of surface albedo change (Hansen et al. 2005), so this surface climate forcing did not exceed about 1 W/m2.

  236. Andrew Dodds says:

    @WHUT

    It’s either a blip caused by a change in wind direction (although why those blips haven’t happened before is an interesting question).

    Or a sign of impending doom caused by Arctic temperatures exceeding their range of the past few million years and releasing all the carbon stored in that time. Although I’d put the odds of that happening at less than one in 10.

  237. verytallguy says:

    Willard,

    you are of course correct – although I do like the idea of claiming credit for the invention of the impossibility drive based on infinite Grrrrrrowth.

  238. verytallguy says:

    All of which brings us to Tall’s pitch for climate policy:

    Regardless as to where sensitivity is in the range, there are very strong policy imperatives to reduce fossil fuel emissions strongly starting now, to:
    – maintain global temperature rise below “dangerous” 2C
    – avoid a resource crisis
    – improve energy security

    If sensitivity is at the top end of the range, we are on track for perhaps 6-8 degrees warming by end of century, a genuinely catastrophic outcome. So we can also add
    – reduce the risk of catastrophic climate events

  239. Willard says:

    > the promised calculation is available here: https://hypergeometric.wordpress.com/2015/02/02/models-dont-over-estimate-warming/

    You either rock or rule, whichever you prefer, Hyper.

    Thanks!

    I wasn’t talking about that blip but about the fact that methane concentrations have once again started to rise over the last 8 years, after a period where they appeared to plateau.

    The point is that Alex Otto was making a claim based on a TCR for CO2 by itself, probably without understanding that it is a leading indicator of the effective TCR due to the aggregate GHG concentration trend. That pushes up the TCR he quotes.

    The analogy is of the Dow Jones Industrial Average. This takes a sampling of stocks to represent all the stocks. The DJIA is a leading indicator because generally the rest of the stocks trend with them.

  241. -1=e^ipi says:

    “The calculation is purely radiative and it is a *comparison* between the LGM and the pre-industrial Holocene. Two snapshots, if you like.”

    I am pretty sure I understood that. Let me rephrase the question. I was under the impression that one reaches the ECS on the time scale of centuries (or maybe ~2 millennia at most). On the time scale of tens of thousands of years (LGM was over 20,000 years ago) doesn’t the expected change more closely resemble the ESS rather than the ECS (since this is sufficient time for glacier and vegetation changes)?

    “The 0.75C is the empirical estimate derived from the LGM / Holocene comparison. What you are looking at (I think) is the *validation* process.
    Can you go back and look again at both studies, but not in a rush ”

    Okay, I’ll try to track down where the 0.75 C / (W/m^2) comes from. The 2008 Hansen paper references a 2007 Hansen paper for this value. The 2007 paper introduces it on p. 2291, but I’m not sure where this comes from. I’ll look through it when I have time. Thank you for your patience.

    Out of curiosity though (don’t answer if you don’t want to), how does one know the temperature during the LGM without using ice-core data, in order to do the LGM / Holocene comparison, in order to get the 0.75 C / (W/m^2), in order to validate that multiplying ice-core data by 0.5 is reasonable?

    “Continent locations affect Earth’s energy balance, as ocean and continent albedos differ. However, most continents were near their present latitudes by the early Cenozoic (Blakey 2008; Fig. S9 of Hansen et al. 2008). Cloud and atmosphere shielding limit the effect of surface albedo change (Hansen et al. 2005), so this surface climate forcing did not exceed about 1 W/m2.”

    It’s not just the positions of the continents in how they affect surface albedo. It is also how ocean currents are drastically changed due to even minor changes in the positions of continents (particularly, Panama, the Tethys Sea, the Tasmanian Sea, the Bering Strait and the Drake Passage are of importance).

  242. -1=e^ipi says:

    “Regardless as to where sensitivity is in the range, there are very strong policy imperatives to reduce fossil fuel emissions strongly starting now, to:
    – maintain global temperature rise below “dangerous” 2C
    – avoid a resource crisis
    – improve energy security”

    I think performing a proper cost-benefit analysis makes more sense…

    “If, sensitivity is at the top end of the range, we are on track for perhaps 6-8 degrees warming by end century, a genuinely catastrophic outcome. So we can also add
    – reduce the risk of catastrophic climate events”

    Doesn’t it make more sense to use best estimates of climate sensitivity rather than unrealistic ones when making decisions? Worst case, we are looking at less than 800 ppm by the end of century, which means that the temperature change by then will be on the order of the TCR, which should be ~2C for a climate sensitivity of ~3C.

  243. BBD says:

    Out of curiosity though (don’t answer if you don’t want to), how does one know the temperature during the LGM without using ice-core data, in order to do the LGM / Holocene comparison, in order to get the 0.75 C / (W/m^2), in order to validate that multiplying ice-core data by 0.5 is reasonable?

    LGM temperature estimates range from ~3C – 6C below Holocene. Hansen used 5C for earlier empirical estimates and 4.5C for later ones.

    If you would just read the papers… Instead of posting questions you could readily answer yourself.

    Okay, I’ll try to track down where the 0.75 C / (W/m^2) comes from.

    This is worrying. It isn’t actually possible to have read Hansen (2008) or HS12 without being aware that this is the empirical estimate for sensitivity to radiative forcing change derived from the LGM / Holocene comparison.

    At this point, I am going to stop responding to you. You need only to read the papers properly. I don’t really feel like reading them for you and explaining them piecemeal.

  244. BBD says:

    It’s not just the positions of the continents in how they affect surface albedo. It is also how ocean currents are drastically changed due to even minor changes in the positions of continents (Particularly, Panama, the Tethys Sea, the Tasmanian Sea, the Bering Straight and the Drake Passage are of importance).

    We’ve been through this. Your views are out of step with modern thinking. This too is something you can fairly easily verify for yourself if you choose.

    I also find it bizarre that someone would use a TCR of 1.3C to evaluate policy decisions with respect to constraining CO2 emissions. The other GHGs, such as methane and N2O, have historically followed the CO2, and this places the effective TCR closer to 2C (and up to 3C if looking at transient land warming).

    What are they expecting — that those other GHGs will drop to ZERO net rise (!) the minute a slight amount of cap is put on CO2? Seriously, that is exactly what they are implying when they informally assert a 1.3C number as a TCR … say WHUT??

  246. BBD says:

    Worst case, we are looking at less than 800 ppm by the end of century, which means that the temperature change by then will be on the order of the TCR, which should be ~2C for a climate sensitivity of ~3C.

    3C / 2xCO2 = 3C / ~560ppm CO2

    If ECS/2xCO2 = 3.0C then for 800 ppm CO2, ECS will be:

    dT = 3ln(800/280)/ln(2) = 4.5C

    Why do you suggest that the transient end of century response to ~800ppm CO2 will be as low as ~2C?
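
As a sketch of the arithmetic above (function name mine; strictly, it computes the equilibrium warming implied by a given ECS per doubling):

```python
import math

def equilibrium_warming(c, ecs=3.0, c0=280.0):
    """Equilibrium warming (C) at CO2 level c (ppm), scaling an ECS per
    doubling logarithmically from a pre-industrial baseline c0."""
    return ecs * math.log(c / c0) / math.log(2.0)

print(equilibrium_warming(560))  # 3.0 C per doubling, by construction
print(equilibrium_warming(800))  # ~4.5 C, as above
```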

  247. verytallguy says:

    -1=e^ipi

    Doesn’t it make more sense to use best estimates of climate sensitivity rather than unrealistic ones when making decisions?

    Yes, I agree, and assess the risk/benefit of potential worse outcomes.

    Central estimate for end of century under RCP6 is 3.4 degrees.

    Worst case for RCP8.5 is 7.8 degrees – the potential worst case overall.

    Ref AR5 WG3 table SPM.1

    Even RCP6 requires mitigation.

    RCP6 was developed by the AIM modeling team at the National Institute for Environmental Studies (NIES) in Japan. It is a stabilization scenario in which total radiative forcing is stabilized shortly after 2100, without overshoot, by the application of a range of technologies and strategies for reducing greenhouse gas emissions

  248. -1=e^ipi says:

    “LGM temperature estimates range from ~3C – 6C below Holocene. Hansen used 5C for earlier empirical estimates and 4.5C for later ones.”

    Which papers for earlier estimates are these? Can you point to them for me?

    Also, how then does Hansen get an error of +/- 1 C on his 5 C estimate if the range is 3-6 C? His error analysis seems terrible/arbitrary with lots of rounding.

    “This is worrying. It isn’t actually possible to have read Hansen (2008) or HS12 without being aware that this is the empirical estimate for sensitivity to radiative forcing change derived from the LGM / Holocene comparison.”

    Are you referring to the Supplementary Material of the 2008 paper? All I can get from there is that he claims temperature change is 5 C, to get a fast feedback climate sensitivity of 3/4 C/(W/m^2), in order to show that multiplying ice-core data by 0.5 is valid; but what is this temperature change based upon? If it’s based upon ice-core data then you have circular reasoning. If it is based on sedimentary ocean layers then that is multiplied by 1.5, in which case what is the 1.5 based upon? Is an assumption made about the latitudinal distribution of warming, such that the combination of ice-core data and sedimentary data near the equator allows the global average temperature to be determined, in order to validate the conventions of 0.5 and 1.5?

    “At this point, I am going to stop responding to you. You need only to read the papers properly.”

    I am getting the impression that you don’t actually know the answer to this question…

  249. -1=e^ipi says:

    “We’ve been through this. Your views are out of step with modern thinking. This too is something you can fairly easily verify for yourself if you choose.”

    So modern thinking is that the effect of the continents on the Earth’s climate is basically negligible on the order of tens of millions of years, except for a few discrete changes?

  250. -1=e^ipi says:

    “3C / 2xCO2 = 3C / ~560ppm CO2
    If ECS/2xCO2 = 3.0C then for 800 ppm CO2, ECS will be:
    dT = 3ln(800/280)/ln(2) = 4.5C
    Why do you suggest that the transient end of century response to ~800ppm CO2 will be as low as ~2C?”

    Because you aren’t going from a 280 ppm climate equilibrium to an 800 ppm climate equilibrium by 2100. We are already above the 280 ppm equilibrium and we won’t reach the 800 ppm climate equilibrium by 2100. If you really want me to go through the calculations, like I did when I criticized Loehle’s paper, in order to get a rough percentage of the ECS that should be expected by 2100, then I can.

    And what is your understanding of the time scale of the ECS? A few posts ago you were claiming that changes over 20,000 years since the LGM represent the ECS, and now a time scale of 100 years also represents the ECS? That’s orders of magnitude different in time.

  251. -1=e^ipi says:

    “Yes, I agree, and assess the risk/benefit of potential worse outcomes.”

    Or you could look at the probability distribution of different outcomes look use the expected value of the outcome to make policy decisions.

    “Central estimate for end of century under RCP6 is 3.4 degrees.”

    This sentence doesn’t make sense to me. RCP6 is an emission scenario. An emission scenario on its own doesn’t give you end of century warming.

    “Worst case for RCP 8.5 is 7.8 degrees. Potential worst case”

    Again, that doesn’t make sense – not to mention that RCP8.5 is a complete nonsense emission scenario.

    “Even RCP6 requires mitigation.”

    This sentence doesn’t make sense. How can an emission scenario require mitigation? If you apply mitigation to an emission scenario, then you have a different emission scenario. Do you mean that you think that RCP6 represents an emission scenario that should be avoided?

  252. -1=e^ipi,

    We are already above the 280 ppm equilibrium and we won’t reach the 800 ppm climate equilibrium by 2100.

    Except, the temperature rise is often relative to when we were 280ppm, not today. And, yes, we can reach 800ppm. The most extreme emission scenario would get us to 1300ppm CO2e.

    This sentence doesn’t make sense to me. RCP6 is an emission scenario. An emission scenario on its own doesn’t give you end of century warming.

    Yes, it does. All VTG was quoting was the middle of the range for RCP6.0.

    Do you mean that you think that RCP6 represents an emission scenario that should be avoided?

    Yes, that was obvious, wasn’t it?

  253. -1=e^ipi says:

    “Or you could look at the probability distribution of different outcomes look use the expected value of the outcome to make policy decisions.”

    Sorry about the typo, the second ‘look’ should be ‘and’.

  254. -1=e^ipi says:

    “Except, the temperature rise is often relative to when we were 280ppm, not today.”

    I’m pretty sure the temperature rise depends on context; I thought it was implied that I was referring to changes relative to today. Are you saying that everyone should assume someone means relative to pre-industrial levels unless otherwise specified?

    In any case, even if I meant rise since pre-industrial times, we still won’t reach equilibrium by 2100 so BBD’s claim remains false.

    “And, yes, we can reach 800ppm. The most extreme emission scenario would get us to 1300ppm CO2e.”

    That’s a nonsense climate scenario. But I guess there is a political incentive to create these scenarios by the IPCC.

    ATTP, you can easily see for yourself how unrealistic this is. Just take Mauna Loa CO2 data and perform an exponential fit to it (actually I can do that for you, it is 270 + 38.131*exp(0.0193*(year-1950))). The fit is really good (R^2 is 0.9984). This exponential increase is due to the exponentially increasing CO2 emissions by humans. The reason CO2 emissions have been roughly exponentially increasing is that human population and global GDP per capita have been increasing roughly exponentially (while CO2 emissions per unit of GDP have not changed much). If this trend were to continue, you get CO2 concentrations of 959 ppm by 2100.

    But the trend isn’t going to hold, because birth rates are declining and population is going to peak at like ~10 billion mid-century. The change in population growth means that even if real GDP per capita continues to increase exponentially and CO2 emissions per unit of GDP remains unchanged, 2100 CO2 concentrations should be ~800 ppm at most (for any realistic population model). Furthermore, the roughly constant CO2 emissions per unit of GDP relationship is likely to break down as things like information technology become more important and fossil fuels become more scarce (and therefore expensive).

    So yeah, these 1200-1300 ppm by 2100 scenarios are nonsense.
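
The extrapolation above is easy to reproduce (a sketch; the fit is the commenter's, with the growth rate written as 0.0193/yr, which is what makes the quoted 959 ppm and the early Mauna Loa values come out right):

```python
import math

def co2_fit(year):
    """The exponential fit quoted above: CO2 (ppm) as a function of year."""
    return 270.0 + 38.131 * math.exp(0.0193 * (year - 1950))

for year in (1959, 2015, 2100):
    print(year, round(co2_fit(year), 1))
# ~315 ppm in 1959 (matching early Mauna Loa), ~404 ppm in 2015, ~959 ppm in 2100
```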

    “Yes, it does. All VTG was quoting was the middle of the range for RCP6.0.”

    So the median or mean over a set of IPCC chosen models that run this scenario? Either way, the wording chosen doesn’t make sense.

  255. The ImaginaryGuy says:


    A few posts ago you were claiming that changes over 20,000 years since the LGM represent the ECS, and now a time scale of 100 years also represents the ECS? That’s orders of magnitude different in time.

    By looking at land temperatures, it is straightforward to estimate. Land has no heat sink to speak of so that ECS ~ TCR on land. The ocean will also eventually achieve this but it will take a looong time, as James Hansen explained in his seminal 1981 paper.

    If you want to understand this on a purely intuitive level, try placing your desktop computer’s heat sink a few inches from the CPU and see how hot the chip gets.

  256. -1=e^ipi,

    In any case, even if I meant rise since pre-industrial times, we still won’t reach equilibrium by 2100 so BBD’s claim remains false.

    It wasn’t BBD, was it? It was vtg who was quoting the IPCC.

    That’s a nonsense climate scenario. But I guess there is a political incentive to create these scenarios by the IPCC.

    Blast, it’s taken me this long to work out that you’re just a conspiracy theorist in disguise?

    So yeah, these 1200-1300 ppm by 2100 scenarios are nonsense.

    I think you missed the little e in CO2e?

  257. BBD says:

    -1=e^ipi

    Which papers for earlier estimates are these? Can you point to them for me?

    There are various published estimates for LGM temperature. See Hansen & Sato (2012) and links therein, eg:

    Global mean temperature change between the LGM and Holocene has been estimated from paleotemperature data and from climate models constrained by paleodata. Shakun and Carlson (2010) obtain a data-based estimate of 4.9C for the difference between the Altithermal (peak Holocene warmth, prior to the past century) and peak LGM conditions. They suggest that this estimate may be on the low side mainly because they lack data in some regions where large temperature change is likely, but their record is affected by LGM cooling of 17C on Greenland. A comprehensive multimodel study of Schneider von Deimling et al. (2006) finds a temperature difference of 5.8 +/- 1.4C between LGM and the Holocene, with this result including the effect of a prescribed LGM aerosol forcing of -1.2 W/m2. The appropriate temperature difference for our purpose is between average Holocene conditions and LGM conditions averaged over several millennia. We take 5 +/- 1C as our best estimate. Although the estimated uncertainty is necessarily partly subjective, we believe it is a generous (large) estimate for 1σ uncertainty.

    Why don’t you read the papers as I suggest?

    Are you referring to the Supplementary Material of the 2008 paper? All I can get from there is that he claims temperature change is 5 C, to get a fast feedback climate sensitivity of 3/4 C/(W/m^2)

    […]

    I am getting the impression that you don’t actually know the answer to this question…

    Again, see H&S12:

    The altered boundary conditions that maintained the climate change between these two periods had to be changes on Earth’s surface and changes of long-lived atmospheric constituents, because the incoming solar energy does not change much in 20,000 years. Changes of long-lived GHGs are known accurately for the past 800,000 years from Antarctic ice core data (Luthi et al. 2008; Loulergue et al. 2008). Climate forcings due to GHG and surface albedo changes between the LGM and Holocene were approximately 3 and 3.5 W/m2, respectively, with largest uncertainty (+/-1 W/m2) in the surface change (ice sheet area, vegetation distribution, shoreline movement) due to uncertainty in ice sheet sizes (Hansen et al. 1984; Hewitt and Mitchell 1997).

    […]

    The empirical fast-feedback climate sensitivity that we infer from the LGM–Holocene comparison is thus 5C / 6.5 W/m2 ~ ¾ +/- ¼ C per W/m2, or 3 +/- 1C for doubled CO2. The fact that ice sheet and GHG boundary conditions are actually slow climate feedbacks is irrelevant for the purpose of evaluating the fast feedback climate sensitivity.

    This empirical climate sensitivity incorporates all fast-response feedbacks in the real-world climate system, including changes of water vapor, clouds, aerosols, aerosol effects on clouds, and sea ice. In contrast to climate models, which can only approximate the physical processes and may exclude important processes, the empirical result includes all processes that exist in the real world—and the physics is exact.
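
For anyone checking the quoted numbers, the arithmetic is simply the following (variable names mine; the 3.7 W/m2 per CO2 doubling is the standard value, not part of the quote):

```python
dT = 5.0    # C, LGM-to-Holocene temperature difference
dF = 6.5    # W/m^2, combined GHG + surface albedo forcing difference
f2x = 3.7   # W/m^2, forcing from doubled CO2

sensitivity = dT / dF                  # ~0.77 C per W/m^2, i.e. ~3/4
print(sensitivity, sensitivity * f2x)  # ~0.77, ~2.8 C (i.e. ~3 C) per doubling
```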

    Why don’t you read the papers as I suggested? I think you are sealioning me and I am getting tired of it.

    * * *

    So modern thinking is that the effects of the continents on the Earth’s climate is basically negligible on the order of tens of millions of years, except for a few discrete changes?

    No, as I explained to you previously, modern thinking is that the ocean gateway hypothesis cannot explain a ~50Ma cooling trend.

    * * *

    And what is your understanding of the time scale of the ECS? A few posts ago you were claiming that changes over 20,000 years since the LGM represent the ECS, and now a time scale of 100 years also represents the ECS?

    No, I made no such claim. Let’s make that the first and last verbal. The ~5C difference between the LGM and the Holocene is more nearly the ESS.

    * * *

    Because you aren’t going from a 280 ppm climate equilibrium to a 800 ppm climate equilibrium by 2100.

    Nor did I argue this. Rather that we might go from the pre-industrial quasi-equilibrium at 280ppm to an end-of-century transient response to 800ppm which would be well above 2C.

    You appear not to understand this very well either.

  258. BBD says:

    You just verballed me again:

    In any case, even if I meant rise since pre-industrial times, we still won’t reach equilibrium by 2100 so BBD’s claim remains false.

    Naughty, that.

  259. verytallguy says:

    -1=e^ipi

    Could I suggest that if you want to understand the basis of the IPCC’s numbers you use the reference I gave you. The numbers there are relative to preindustrial and for 2100. Others may use slightly different baselines and endpoints, which can be exploited by devious gremlins.

    AR5 WG3 table SPM.1. Google is your friend.

    Even RCP6 requires mitigation to limit emissions.

    Whether it’s nonsense is hard to tell at this distance; time will tell. Presently, and remarkably, we are, in the UK at least, subsidising investment in extraction of unconventional hydrocarbons…

    As for politically motivated; well, who doesn’t succumb to a little light conspiracy ideation at this time of night on occasion 😉

  260. -1=e^ipi says:

    @ BBD –

    Thank you. The Shakun and Carlson paper answers my question.

    “No, as I explained to you previously, modern thinking is that the ocean gateway hypothesis cannot explain a ~50Ma cooling trend.”

    It doesn’t have to explain all of it. Just some of it to offset the estimates and bias them upwards.

    “Rather that we might go from the pre-industrial quasi-equilibrium at 280ppm to an end-of-century transient response to 800ppm which would be well above 2C.”

    I think that we have misunderstood each other. Can we agree that warming from now to 2100 will be ~ 2C if CO2 concentrations reach 800 ppm by 2100?

    Also, sorry if I misread or mistype things. I suffered brain damage a few months ago and this is one of the symptoms. 😦

  261. -1=e^ipi says:

    @ WebHubbleTelescope –
    “Land has no heat sink to speak of so that ECS ~ TCR on land.”

    Could you please enlighten me on this extended definition of ECS and TCR? I was under the impression that they only make sense in the global context, not for a local context such as land or ocean. Or are we talking about a theoretical earth that is either entirely land or entirely ocean?

  262. -1=e^ipi says:

    @ ATTP – “I think you missed the little e in CO2e?”

    Yes. Sorry. :/

    I have a question with respect to the miscommunication on what I meant by warming by 2100.

    If someone tells you they will earn $100,000 in 1 year’s time, most people would interpret that to mean they will earn that $100,000 over the next year, not that $100,000 is the total amount of money they earn from birth to 1 year’s time.

    Yet for some reason ‘temperatures will increase by X by 2100’ is supposed to mean that temperatures will increase by X from 150 years ago to 2100 rather than from now to 2100. Seems inconsistent. Couldn’t ‘temperatures will increase by X by 2100’ just as easily use 10,000 years ago as the starting point if 150 years ago is acceptable?

    If you told the average person on the street that ‘the earth will warm by X degrees by 2100’, which interpretation do you think they would take? The choice of wording leads to confusion and gives the public the impression that there is more expected warming than there actually is.


  263. Or are we talking about a theoretical earth that is either entirely land or entirely ocean?

    This is well beyond theory and into the realm of pragmatic empirical evidence. Like I said, the way that temperature and heat content are related is by the thermal capacity. The ocean, having a large thermal capacity, is able to absorb heat without necessarily raising its temperature significantly. On land the heat really has nowhere to go, so it will raise the temperature in line with the infrared emission characteristics of the GHG atmosphere.

    Another factor that plays into this is that some of the latent heat taken up by evaporation over the ocean is spread over land, where it is released during precipitation events. This tends to suppress the sea-surface temperature while increasing the land temperature. Again, the heat has nowhere to go on land.

    And yes, I have tried to estimate the impact of this process here:
    http://ContextEarth.com/2014/01/25/what-missing-heat/

    More algebra for you to look at.

  264. -1=e^ipi,
    Yes, maybe it should have been defined more clearly, but it now has been.

  265. BBD says:

    It doesn’t have to explain all of it. Just some of it to offset the estimates and bias them upwards.

    No. Read Hansen et al. (2013) properly.

    “Rather that we might go from the pre-industrial quasi-equilibrium at 280ppm to an end-of-century transient response to 800ppm which would be well above 2C.”

    I think that we have misunderstood each other. Can we agree that warming from now to 2100 will be ~ 2C if CO2 concentrations reach 800 ppm by 2100?

    No.

  266. BBD says:

    Can we agree that warming from now to 2100 will be ~ 2C if CO2 concentrations reach 800 ppm by 2100?

    Sorry, yes, possibly, but this is a misleading way of downplaying the total amount of warming, as well as the fact that the ~2C is only *transient* and temperature will continue to rise post-2100.

    Couldn’t ‘temperatures will increase by X by 2100′ just as easily use 10,000 years ago as the starting point if 150 years ago is acceptable?

    No, because the early Holocene was warmer than the late (pre-industrial) Holocene.

  267. -1=e^ipi says:

    @ WebHubbleTelescope –

    “The ocean, having a large thermal capacity is able to absorb heat without necessarily raising its temperature significantly. On land the heat really has no where to go, so will raise its temperature in line with the infrared emission characteristics of the GHG atmosphere.”

    Can’t some of the heat be transferred via convection to other parts of the globe? Is that being neglected?

    “More algebra for you to look at,.”

    Thanks 🙂

    Also, I’m really impressed by your CSALT model.

  268. -1=e^ipi,

    Can’t some of the heat be transferred via convection to other parts of the globe? Is that being neglected?

    I’m not sure what you’re getting at here. Ultimately the entire system has to reach an equilibrium, and the very large heat capacity of the oceans means that the energy that goes into the oceans will be significantly greater than the energy that heats the land and the atmosphere. The oceans essentially regulate the rate at which the surface warms.

  269. -1=e^ipi says:

    “Sorry, yes, possibly but this is a misleading way of downplaying the total amount of warming as well as the fact that the ~2C is only *transient* and temperature will continue to rise post-2100.”

    So specifying the time line (now to 2100) and making an objective statement is misleading, but not specifying the starting point (by 2100) and using a starting point 150 years ago isn’t? Not to mention choosing a starting point that corresponds to the little ice age…

    “No, because the early Holocene was warmer than the late (pre-industrial) Holocene.”

    X doesn’t have to be positive. And I could have just as easily used 20,000 rather than 10,000 (which would amount to even more ‘warming’).

    Again, if you ask the average person on the street what they make of the statement ‘by 2100’ I bet very few would know to start from pre-industrial times. I bet even fewer know that pre-industrial times means the 19th century (many have the impression that the world’s climate was basically constant for millions of years before human CO2 emissions). Other statements such as ‘2014 is the warmest year on record’ can also mislead the public since the public doesn’t know what ‘on record’ entails. It is convenient that a lot of the terminology used by the IPCC and others just happens to be presented in a way such that it can mislead a large section of the public.

  270. -1=e^ipi says:

    @ATTP – I was referring to WebHubbleTelescope’s statement TCR ~ ECS on land; I think I get what is meant though I’m not sure I would phrase it this way. In any case, I’ll just have to read the link WebHubbleTelescope has given to me.

  271. BBD says:

    -1=e^ipi

    So specifying the time line (now to 2100) and making an objective statement is misleading, but not specifying the starting point (by 2100) and using a starting point 150 years ago isn’t? Not to mention choosing a starting point that corresponds to the little ice age…

    Hiding the full extent of the warming is misleading. Hiding the fact that the reduced figure is itself merely a transient value is misleading. Claiming that the start point was *in* the LIA rather than at its end is misleading.

    As for the stuff about starting ~10ka etc, it’s just more obfuscation.

    This long ago became boring, btw.

  272. BBD says:

    It is convenient that a lot of the terminology used by the IPCC and others just happens to be presented in a way such that it can mislead a large section of the public.

    Spare me the conspiracist ideation, please.

  273. verytallguy says:

    -1=e^ipi,

    No, I don’t agree that using the pre-industrial temperature as the baseline when talking about the temperature rise due to human activity is misleading.

    And I specifically left you the reference so you could read the footnotes and context if you wished.

    I think there is one misleading thing about using pre-industrial – 2100 as a baseline, and that’s that it hides the inertia in the system:
    – inertia in the committed warming as the response moves from TCR to ECS to ESS for CO2 already emitted
    – inertia in the further committed emissions, as it will be much harder to decarbonise from then than it is from now.

  274. -1=e^ipi says:

    “Hiding the full extent of the warming is misleading.”

    So objective statements that don’t give 100% of the information are misleading (even when I specify a starting and end point)?

    If I say it will be 1 C warmer by March, is the assumption supposed to be from now to March, or from 150 years ago to March? What if I said that people in Nigeria will be 30% richer in 4 years’ time – would the assumption be from now to 2019, or from 150 years ago to 2019? What if I said that if we go up 100 m then temperatures will drop by 1 C? Is the implication from where you currently are, or from sea level?

    It seems that in the English language when someone says ‘by X, there will be Y’ the implication is that it is relative to here/now not from some arbitrary starting point somewhere or sometime else. In case you haven’t heard, there is this thing called relativity in the universe. Yet somehow the one exception is with respect to climate change? If you mean ‘from pre-industrial times’ then say ‘from pre-industrial times’, don’t bend the English language just to be able to make statements that mislead the public.

    “Hiding the fact that the reduced figure is itself merely a transient value is misleading.”

    How is it hiding that if you say “warming from now to 2100”? Nothing is being hidden; it is clear that a transient change is being referred to.

    “Claiming that the start point was *in* the LIA rather than at its end is misleading.”

    ‘X is at the end of Y’ is a subset of ‘X is in Y’. It’s basic set theory. At least that statement is not ambiguous, unlike saying that “there will be X warming by 2100” and not specifying that you mean that this X warming started in the 19th century.

  275. BBD says:

    Too boring.

  276. Look at what has got Revkin concerned — wiggles!

    and

    This is like shooting fish in a barrel — to get the wiggles correct, just compose the major factors.

  277. eli says:

    -1 should go have an interesting discussion with Richard Tol (calling Richard The Tol) about where the baseline temperatures for IAMs are. Another reason not to trust.

  278. Eli,
    Yes, I think that’s been tried and the response did not build confidence.

  279. -1=e^ipi says:

    @ WebHubbleTelescope –

    But if global temperature could be predicted with a simple linear regression model, think how that could affect people’s job security if they do far more complicated models with less explanatory power.

    Also, have you thought of adding temperature to your explanatory factors for the change in temperature with respect to time? Then the Linear Regression would have a simple and intuitive form:

    dT/dt = k(Te – T),

    where k is a positive constant, Te is the globe’s current theoretical equilibrium temperature, and T is the globe’s current temperature. So the global temperature just decays exponentially towards equilibrium. Then you replace the theoretical equilibrium temperature with the factors you use in CSALT (you might want to de-lag them) and you get the CSALT model, but with temperature. Then ECS is simply the coefficient in front of ln(CO2) divided by the temperature coefficient, times -ln(2).
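
    For concreteness, here is a minimal sketch of what I have in mind, in Python with purely synthetic data (the CO2 path and the values of a, c and k are made up for illustration, not fitted to anything real):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 150                                          # years (synthetic)
    co2 = 280.0 * np.exp(np.linspace(0.0, 1.0, n))   # made-up CO2 path (ppm)
    a, c, k = 3.0, -16.0, 0.1                        # made-up parameters
    Te = a * np.log(co2) + c                         # equilibrium temperature

    T = np.empty(n)
    T[0] = Te[0]
    for i in range(1, n):                            # Euler step plus noise
        T[i] = T[i-1] + k * (Te[i-1] - T[i-1]) + 0.1 * rng.standard_normal()

    dT = np.diff(T)                                  # yearly temperature change
    X = np.column_stack([np.ones(n - 1), T[:-1], np.log(co2[:-1])])
    beta, *_ = np.linalg.lstsq(X, dT, rcond=None)

    # ECS = coefficient on ln(CO2), divided by the temperature coefficient,
    # times -ln(2); the true value here is a*ln(2), about 2.08 C
    ecs = beta[2] / beta[1] * -np.log(2.0)
    print(f"estimated ECS = {ecs:.2f} C")

    The point is just that the coefficient ratio recovers a, and hence the sensitivity, from the fitted relaxation.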

  280. miker613,
    I’ll have to have a look but a quick glance indicates that he’s had help/assistance from an economist and a retired mathematician. Can’t he discuss this with other climate scientists?

  281. miker613 says:

    ATTP, his claim (briefly) is that the statistics were done incompetently. It’s not a climate science issue – the math is done wrong.

  282. mikep says:

    ATTP, what Nic Lewis has done is shown that Marotzke has regressed the change in model temperature on an explanatory variable that turns out to be the change in model temperature plus a bit of noise. So it’s not surprising that adding other variables to the regression doesn’t do much extra for the “explanation”. This is the sort of problem that is well understood by economists and statisticians. I would have thought it would be understood by logicians too; it’s an “almost” tautology. The paper appears to be completely worthless.

  283. mikep,

    So it’s not surprising that adding other variables to the regression doesn’t do much extra for the “explanation”. This is the sort of problem that is well understood by economists and statisticians.

    Sure, but there’s much physics that statisticians don’t understand. Also, statistics doesn’t trump physics. I’d still like to know why in Lewis (2013) he manages to match Forster & Gregory (I think) but then gets a very big change in his climate sensitivity result when he adds an extra 6 years’ worth of data. I have asked him this exact question and he didn’t really give an answer. I should probably also have made sure that I have the papers right and put links, but I need to go and cook some dinner, so don’t have time right now (I’m trying to pre-empt a standard Nic Lewis nit-pick).

    There’s a couple of things I will admit about Marotzke & Forster (2015). When I saw their equation I immediately thought “oooh, someone’s going to criticise this” since it is an approximation and I’m not sure over what time-interval it is valid. It seems clear that \kappa has to be time dependent.

    The other issue is that I think it has been over-hyped in the media. I don’t think it’s really showing that the model trends are fine. I think it’s simply showing that you can explain the discrepancy for short time intervals as being due to some kind of residual that might be internal variability. The interesting thing about this is that the magnitude of this internal variability is consistent with other studies that have looked at this (Palmer & McNeall, for example). It’s also interesting that it’s consistent over the whole instrumental temperature record – it’s not just present in the last 10 years or so. The other interesting thing is that the residual becomes much smaller for longer time intervals (62 years), which, again, is consistent with what one would expect for internal variability.

    I haven’t had a chance to read his critique in detail, but I think the way he chooses to frame the conclusions is very unfortunate and is one of my major issues with Nic Lewis’s behaviour (as I’ve also pointed out to him before).

    One of Marotzke’s conclusions is, however, quite likely correct despite not being established by his analysis: it seems reasonable that differences between simulated and observed trends may have been dominated – except perhaps recently – by random internal variability over the shorter 15-year timescale.

    So, he likes this but it’s purely by chance.

    The statistical methods used in the paper are so bad as to merit use in a class on how not to do applied statistics.

    All this paper demonstrates is that climate scientists should take some basic courses in statistics and Nature should get some competent referees.

    The paper is methodologically unsound and provides spurious results. No useful, valid inferences can be drawn from it. I believe that the authors should withdraw the paper.

    Well, this is just snarky, unnecessary and somewhat unprofessional. I’m quite comfortable with people being snarky as I can be so myself, but those who do so should at least be able to take it as well as dish it out. It’s also a pity that Nic Lewis doesn’t try to at least take something positive from this paper. It should be possible to critique in a way that isn’t just a nit-pick. I’ve yet to see Nic Lewis do anything else, though.

  284. miker613 says:

    Nic Lewis on Forster: “I was slightly taken aback by the paper, as I would have expected either one of the authors or a peer reviewer to have spotted the major flaws in its methodology. I have a high regard for Piers Forster, who is a very honest and open climate scientist, so I am sorry to see him associated with a paper that I think is very poor, even as co-author”
    Seems that he has respect for Forster, but is finding it hard to have any respect for the paper.
    ATTP, do you react the same way when Tamino, say, talks about his opponents in his posts? Or Sou? Or Paul Krugman, for that matter? Unlike Nic Lewis, I have never witnessed them saying one positive thing about anyone they consider on the other side; their hatred and contempt just drips from everything they write.

  285. miker613,

    ATTP, do you react the same way when Tamino, say, talks about his opponents in his posts? Or Sou? Or Paul Krugman, for that matter? Unlike Nic Lewis, I have never witnessed them saying one positive thing about anyone they consider on the other side; their hatred and contempt just drips from everything they write.

    Sou is a blogger who has chosen a style. Paul Krugman is an economist and from what I’ve seen, being rude about other economists (or just being rude in general) is standard for that field. Tamino, I think, does pretty well. Maybe you can find some example, but I can’t think of anything.

    Also, this isn’t about “the other side”, though. This is about a published scientist critiquing another paper. My point is that Nic Lewis is treated very well by many active scientists because he is trying to be genuinely skeptical. I think it is very good that he does so and have said so many times myself. I think his papers are interesting and valuable. My view would be that it would be better if he were to engage in a manner that didn’t make him seem like a standard pseudo-skeptic. Insulting the statistical abilities of the authors of another paper and calling for it to be retracted is not – IMO – a particularly good way to behave. He doesn’t have to change (it’s a free world) but if that is how he wants to engage, I feel perfectly comfortable pointing this out.

  286. miker613 says:

    Tamino: https://tamino.wordpress.com/2013/03/22/the-tick/
    First example I found (tamino, search for mcintyre). Note his comments on Dave Burton and McIntyre.

    I had made a dangerously strong statement, “I have never witnessed them saying one positive thing…”. Your response was “Tamino, I think, does pretty well”. So it’s not enough for me to find one example to prove my point. On the contrary, you can disprove my point with just one example of where Tamino says something positive about the work of a “denier”, and I would have to back down. In that one post, I found nothing non-angry in the post nor the dozen comments he made on it, nothing at all. That’s been my experience, which is a reason why I don’t go there or Sou or Paul Krugman often.

  287. miker613,
    You may be missing my point. It’s not so much about behaviour per se, it’s about whether or not the person is actually interested in some kind of dialogue. I don’t think Sou or Tamino actually care about dialogue with those they’re criticising. They probably expect to get as good back as they give. It’s if you want dialogue that you should behave in a way that makes that possible (or, at least, that’s my view). It is possible that I’m being slightly ironic in saying this as I may well have said things that would make dialogue with Nic Lewis impossible. I will, at least, admit that I’ve done that. I would also hope to completely change my view if someone gave me reason to and would apologise if I’d characterised someone unfairly.

    I’m also not convinced that Nic Lewis’s argument is correct. He’s suggesting some kind of circularity – that the temperatures determine the forcings and then they’re using the forcings to determine the temperatures. I’m not even sure that the former is correct, but I don’t think the latter is relevant. Marotzke & Forster are using a simple energy balance approach (which includes the model forcings) to estimate the model trends, and then comparing those with the observed trends to determine a residual. I can’t see how that’s inherently circular.

  288. mikep says:

    ATTP, the insults were in fact by Gordon Hughes, though reported by Nic Lewis. However, the “who has been nastiest to whom” debate is simply a (deliberate?) distraction from the substantive issue. Ross McKitrick has a simple, clear explanation of Nic’s major point.

    “Your diagnosis of where they went wrong, as I understand it, is singularly devastating. The authors took their forcing trend estimates (dF) from an earlier paper that constructed them using the equation

    [2] dF = a0*dT + dN

    where a0 is a feedback term, dN is a term capturing the Top of Atmosphere radiative imbalance, which in this context is just a source of noise, and dT is… dT! So in [1] they regressed dT on itself + other terms! The Marotzke regression is actually something like

    [3] dT = c0 + c1*(a0*dT+ dN) + c2*alpha + c3*kappa + e

    and not surprisingly they found alpha and kappa contribute nothing. The only reason the regression model didn’t collapse due to dT being on both sides is that a bit of noise in the form of dN is added to dT on the right hand side.”

    This has got nothing to do with physics and everything to do with the mathematical structure of regression equations. If dT was growth in GDP, say, and dF was a measure of innovation, say, calculated as a linear function of GDP plus a bit of noise then the results would be equally nonsensical (despite the fact that innovation does almost certainly strongly affect GDP growth). If this criticism is correct, and it looks as though it is, then there is no worthwhile information in the paper. I can only hope that the peer reviewers missed this point because the forcing calculations were taken from a separate paper and they didn’t realise that they were simply a transformation, plus a bit of noise, of the temperature data which was being “explained”. That’s the big question – if Nic Lewis is right how on earth did this get into Nature?
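
    To make the mechanism concrete, here is a minimal Python sketch – everything in it is synthetic noise with placeholder names, so any “explanation” the regression finds is purely an artefact of constructing dF from dT:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000
    dT = rng.standard_normal(n)         # pretend temperature trends (pure noise)
    dN = 0.3 * rng.standard_normal(n)   # pretend TOA-imbalance noise
    a0 = 1.2                            # pretend feedback term
    dF = a0 * dT + dN                   # regressor constructed *from* dT
    S = rng.standard_normal(n)          # an unrelated explanatory variable

    X = np.column_stack([np.ones(n), dF, S])
    beta, *_ = np.linalg.lstsq(X, dT, rcond=None)
    pred = X @ beta
    r2 = 1.0 - np.sum((dT - pred) ** 2) / np.sum((dT - dT.mean()) ** 2)
    print(f"R^2 = {r2:.3f}, coefficient on S = {beta[2]:+.3f}")

    R^2 comes out high and S looks irrelevant, even though dT is random and S never had a chance to matter – which is exactly the pattern being complained about.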

  289. mikep,

    If this criticism is correct, and it looks as though it is, then there is no worthwhile information in the paper.

    I’m not convinced the criticism is correct. All that Marotzke & Forster are doing is using the model forcings to estimate the model trends, and then comparing that with the observed trends to get a residual. I don’t see anything circular in that. Even if the model temperatures were used to get the forcings, it doesn’t really matter, because we want the forcings to reproduce the model trends. I’ll have to think about this a bit, though.

  290. mikep wrote:

    “[the immediately following is mikep quoting someone else]…they regressed dT on itself……The only reason the regression model didn’t collapse due to dT being on both sides is that a bit of noise in the form of dN is added to dT on the right hand side….[and now mikep]…. This has got nothing to do with physics and everything to do with the mathematical structure of regression equations.”

    The mathematical structure *only*?

    Is Lewis’s argument based on a claim along the lines of the following implication?

    – If a physics equation E can be deconstructed into a form E’ such that in E’ we have the same variable on both sides of the equation with this variable being a function of itself, then it is always the case that physics equation E is meaningless.

    Take one of the basic equations in physics, say
    d = rt.
    But what is r? It’s this:
    r = d/t.
    And so,
    d = (d/t)t.
    We now have d on both sides of an equation with d as a function of itself.

    Is Lewis putting forth the logic that says that given r = d/t, having the same variable on both sides of the equation d = (d/t)t with d a function of itself implies that the equation d = rt is meaningless?

    Is Lewis claiming that equation d = rt is circular since we can deconstruct it into a form such that we have the same variable on both sides of the equation with this variable being a function of itself?

    If so, then Lewis’s logic is wrong. How?

    We can do this sort of thing all the time on physics equations, where we can deconstruct them such that we end up with an equation with the same variable on both sides of the equation and with one of the variables being a function of itself.

    More generally, when we deconstruct the variables in physics equations to their “atomic” elements, which are variables representing the most fundamental measures of time, distance, mass, charge, and temperature, then we end up with large tautologies, with many times the same variables on both sides of the equation. But these physics equations are not meaningless, since all these sub-formulas in the formulas built up around and in terms of these fundamentals have meaning in and of themselves.

    Note that we don’t have to deconstruct all the way to the “atomic” variables to see the same acceptable phenomenon of “the same variable on both sides of the equation with this variable being a function of itself” in physics equations.

    If Lewis is not arguing along these lines above, then what is he arguing?

  291. miker613 says:

    “If Lewis is not arguing along these lines above, then what is he arguing?” He is saying that if you do a regression for an output variable on a set of input variables, and the data for (one of) those input variables was itself derived by regression from the output variable, you are essentially doing a regression
    T = c(T+e1) + d.S + e2, where S represents some other variables and e1, e2 are noise. Don’t be surprised if you find that the “input variable T” term explains the output variable T better than all the rest, with a little noise. Don’t publish an article in Nature saying that the S variables don’t make much difference.

    IF this turns out to be right – and we’ll know that soon, as the paper would be withdrawn within a short time – then it’s pretty ridiculous. KeefeAndAmanda, what was that you were saying about how the rest of us can’t trust a result unless it’s been peer reviewed and checked by experts and all that? And how publishing on the web and crowd review just can’t compare, no one trusts that? VeryTallGuy, what was that you were telling me about how the BEST project teaches that climate scientists really do know statistics and the rest of us were wrong to doubt their skills? What does it say that no one heard a peep from a world full of climate scientists about a paper in Nature, for goodness’ sakes, quoted in major news outlets (I saw the Washington Post) – except for those who gleefully retweeted it? Aside from Nic Lewis and a climateaudit statistician or two, who completely refuted it.

    Truth is, even if it is right, I can’t blame the peer reviewers that much – probably they can’t be expected to track down the earlier paper and realize where the values for the forcings came from. But if they can’t be expected to do that, then they can’t replace climate auditors who recheck all the calculations in detail. And they can’t be relied upon to ensure that someone’s work is right.

  292. verytallguy says:

    miker,

    Your risible parroting of quotes from someone else’s analysis, which you evidently do not understand, is patently ridiculous and not worthy of dignifying with a response.

    But as you summon up my name, let’s just remind you of your dishonesty.

    You claim you are told that models are not tuned.

    What I actually said, quoted from realclimate and AR5 respectively.

    Model development actually does not use the trend data in tuning

    Models… …are tuned to reproduce a small subset of global mean observationally based constraints.

    Two suggestions:
    1. Listen first, condemn later.
    2. Stop making strong statements about issues you do not understand. It makes you appear ridiculous.

  293. miker613 says:

    I like this quote: “Everything You Already Believe Is Completely Correct, and Here’s Some Math You Won’t Understand That Proves It.”
    http://www.bloombergview.com/articles/2015-02-05/goodbye-to-the-dish-and-blogging-too

  294. miker613 says:

    VTG, I don’t know why you keep repeating this. I understood you perfectly:
    You claimed that models are not tuned using the temperature data. You brought quotes from realclimate to that effect.
    I brought quotes from actual climate modellers that models are indeed tuned using the temperature data – in an indirect way. It’s nice that realclimate doesn’t think so (or missed that part of the issue), but I have explained why I think the modellers are right.

  295. miker613 says:

    VTG, if you don’t think my description of Nic Lewis’ post is correct, the thing for both of us to do is wait. Let us see if the paper is withdrawn.

  296. verytallguy says:

    miker,

    Why – because you claimed, both here and elsewhere, that you were told that models were not tuned. That was the opposite of what you were told.

    In your latest comment, you show that you believe that you know better how models are tuned than the authors of realclimate do.

    Let’s just repeat that:

    You believe that you know better how models are tuned than the authors of realclimate do.

  297. verytallguy says:

    miker,

    I make no claim either way on whether Nic Lewis is correct; debating that with you would be entirely pointless.

    I do claim that it’s evident that you do not understand Lewis’s analysis, yet make very strong claims about it regardless. Which makes you look ridiculous.

  298. What Nic Lewis appears to say is

    As is now evident, Marotzke’s equation (3) involves regressing ΔT on a linear function of itself.

    Now, I don’t think this is correct. The way I understand Marotzke & Forster is that they use the model forcings to determine the forced trends (for different time intervals) and then compare that with the observed trends to determine the residual (internal variability). I don’t think that means that they’re regressing \Delta T on a linear function of itself, so I don’t see how this is circular.

  299. AndyL says:

    It will be interesting to see how this pans out.

    For once we have a debate that is not about the validity of proxies, or which data set was used, or cherry-picking, or whether a method was used and described correctly, or in-filling, or over-fitting, or motives, or any of the things that people have disagreed about in the past.
    This debate (so far at least) is simply about the maths, and only one side can be right. I hope that Anders and other people here take up the challenge of working out which one.

    My only prediction is that if Lewis is shown to be wrong, he will withdraw his claim.

  300. AndyL,

    For once we have a debate that is not about the validity of proxies, or which data set was used, or cherry-picking, or whether a method was used and described correctly, or in-filling, or over-fitting, or motives, or any of the things that people have disagreed about in the past.

    I’m not sure why you think it’s different. Seems about the same to me: a statistician with a background in economics finds a fatal flaw in the statistical method used by a physical scientist, and doesn’t bother to check with any other physical scientists before confidently claiming that the entire paper is junk, that it should be withdrawn, and that the scientists involved have the technical ability of undergraduate students.

    This debate (so far at least) is simply about the maths, and only one side can be right. I hope that Anders and other people here take up the challenge of working out which one.

    Not really sure I can be all that bothered. I don’t know if Marotzke & Forster is wrong, but I am struggling to see what is flawed about their analysis. Seems pretty straightforward to me, but I haven’t really had a chance to look at it in any great detail.

  301. AndyL says:

    I say it’s different because Lewis’ claim does not depend on any of those things. The background of the statistician is not relevant – the stats is either right or wrong.

    Lewis is claiming a fatal mathematical error in the paper. It should be a simple exercise to look at what M&F did to see whether Lewis is right.

  302. miker613 says:

    @VTG “You believe that you know better how models are tuned than the authors of realclimate do.”
    No, VTG, I think that these modellers said so. They may know better than realclimate, no? It could be that the authors of realclimate misunderstand how easy it is to overfit by mistake – not being mathematicians. (According to the class I took) it’s a trap that many modellers have fallen into without knowing it. The way to be sure it isn’t happening is by validation.

  303. verytallguy says:

    miker,

    “the class I took”

    Priceless.

  304. BBD says:

    miker

    Look up Gavin Schmidt’s career history. What did he do, until promoted into Hansen’s old job of director at GISS?

    What did he do?

    Go and find out and report back.

  305. Andy L.,

    I say it’s different because Lewis’ claim does not depend on any of those things. The background of the statistician is not relevant – the stats is either right or wrong.

    Lewis is claiming a fatal mathematical error in the paper. It should be a simple exercise to look at what M&F did to see whether Lewis is right.

    The problem is that this isn’t always true. That’s why – IMO – you don’t just throw some statistician at a problem and assume that they know what they’re doing (similarly for physical scientists, obviously).

    I realise, however, that I think I’ve misunderstood what Marotzke & Forster actually did (no surprises there, I hear you say). I had assumed that they determined the externally forced trend in the models and then compared that with the observed trends to determine a residual, which they interpreted as internal variability. I had thought that showing this internal variability component was consistent across the instrumental temperature record showed that it could explain the discrepancy for the shorter time intervals (in retrospect, this may have been silly of me).

    I think what Marotzke & Forster have done is determine the externally forced trends in the models and then compared these with the actual model trends to estimate a residual, which is then the internally forced component. I think this actually strengthens their result, and I take back the “over-hyped” comment I made earlier. What this shows is that the model trends can be decomposed into an externally forced and an internally forced component, and that these together can explain the observed trends for all time intervals (with the internally forced component dominating for short time intervals).

    What I think Nic Lewis is suggesting is that the earlier work that determined the external forcings used the temperatures to do so, and that therefore using the external forcings to then determine the temperatures is circular. The problem is that this would only be true if the estimates of the external forcings were not a reasonable representation of the actual external forcings. If they are a fair representation of the actual external forcings, then there is no problem with then using them to determine the externally forced trend. It’s kind of how it’s defined. So, unless Nic Lewis can really show that the earlier work that produced the external forcings has a problem (i.e., that these estimates are not correctly representing the actual external forcings), I don’t think his criticism is actually valid.
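
    For concreteness, here is a minimal sketch of the decomposition as I understand it, using an entirely made-up synthetic ensemble (all the numbers are placeholders chosen for illustration, not values from the paper):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 75                                 # ensemble size (made up)
    dF = rng.normal(0.4, 0.10, n)          # forcing trends, W m^-2 per decade
    alpha = rng.normal(1.3, 0.30, n)       # feedback parameter, W m^-2 K^-1
    kappa = rng.normal(0.7, 0.20, n)       # ocean heat uptake, W m^-2 K^-1
    eps = rng.normal(0.0, 0.07, n)         # internal variability in the trend

    dT = dF / (alpha + kappa) + eps        # "model" trends, K per decade

    # regress the model trends on the perturbations of dF, alpha and kappa
    # about their ensemble means; the fitted part is the externally forced
    # component and the residual is interpreted as internal variability
    X = np.column_stack([np.ones(n), dF - dF.mean(),
                         alpha - alpha.mean(), kappa - kappa.mean()])
    beta, *_ = np.linalg.lstsq(X, dT, rcond=None)
    forced = X @ beta
    residual = dT - forced
    print(f"forced spread   = {forced.std():.3f} K/decade")
    print(f"residual spread = {residual.std():.3f} K/decade")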

  306. Willard says:

    A technical comment on Nic’s model:

    Modelers are rather clever and I believe they know the effect of their parameters and choices on important emergent properties like CS [climate sensitivity]. This is certainly true for turbulence models for example, where the effect of choices on important properties are well known to specialists. The real problem here is that there are only a finite set of parameters and a potentially almost limitless number of emergent properties.

    http://neverendingaudit.tumblr.com/post/108050690409

  307. Andy L said (I presume this is not Andy Lacis):


    It should be a simple exercise to look at what M&F did to see whether Lewis is right

    Why oh why should I do that if there is a better approach than what the M&F’ers applied?

    What you actually should do is compose the global temperature anomaly out of the known bounded variability factors (i.e. ENSO, volcanoes, LOD, TSI). And then you are done with it and left with the CO2(e) residual, where (e) stands for equivalent.

    Forget what M&F did, and especially forget what Nic Lewis did. I am not in the business of justifying every research paper that comes down the pike. If it doesn’t look elegant and concise in comparison to what else is available as a good supporting argument, why bother?

  308. miker613 says:

    ‘miker, “the class I took” Priceless.’
    It’s called honesty.

  309. AndyL says:

    Thanks ATTP – it looks like you have understood the circularity. It would be good if you could explain why it is then valid to do linear regression on these equations. I don’t see how the accuracy of the estimates makes any difference to this.

    WHT: No, I’m not Andy Lacis. It’s a shame that Nature published and the MSM promoted M&F as a rebuttal to the so-called pause instead of publishing your “better approach”.

  310. BBD says:

    miker

    What did (does) Gavin Schmidt do?

    Why have you not answered this question?

  311. AndyL,

    it would be good if you could explain why it is then valid to do linear regression on these equations. I don’t see how the accuracy of the estimates makes any difference this.

    Well, because if you have determined the external forcings, and if the way you did so is reasonable, then there’s nothing wrong with then using them to determine the externally forced trend. In addition, if you then compare your estimate for the externally forced trend with the actual trend, you can determine the residual (or the internally forced trend). I can’t see anything wrong with this. The full climate model produces a single trend; all that Marotzke & Forster are doing is trying to determine, for a given time interval, what fraction of this trend is typically externally forced and what fraction is internally forced.

  312. miker613 says:

    @ATTP ‘The problem is that this would only be true if the estimates of the external forcings were not a reasonable representation of the actual external forcings. If they are a fair representation of the actual external forcings, then there is no problem with then using them to determine the externally forced trend.’ Not so. If they were determined by regression from the temperatures, even if he did a good job, they will still be in the form of temperature + random noise. That’s what regression does. Using them as input to another regression on temperatures is going to give that term a great big advantage over the other terms in the regression.

  313. BBD says:

    It could be that the authors of realclimate misunderstand how easy it is to overfit by mistake – not being mathematicians.

    What is Gavin Schmidt’s degree in?

    Find out and report back…

  314. miker613,

    If they were determined by regression from the temperatures, even if he did a good job, they will still be in the form of temperature + random noise. That’s what regression does. Using them as input to another regression on temperatures is going to give that term a great big advantage over the other terms in the regression.

    I disagree. If one can show that the first regression was a valid and reasonable way to determine the external forcings, then there’s no reason why you can’t then use these to determine the externally forced trends. That’s kind of how it’s defined. Whether it’s circular or not doesn’t – as far as I can tell – matter. If the external forcing estimates are a reasonable representation of the actual external forcings, then using them to determine the externally forced trend is fine.

  315. BBD says:

    I’m getting a little frustrated miker.

    Please justify your earlier assertions by answering some relevant questions.

  316. Actually KandA’s example may be apt. If I know how far I’ve travelled, I can work out my average speed from v = d/t. If someone then asks how far did you travel, I can use d = vt to work that out. It’s circular, but it doesn’t matter, it’s an entirely valid calculation.

  317. AL says:


    It’s a shame that Nature published and the MSM promoted M&F as a rebuttal to the so-called pause instead of publishing your “better approach”

    No harm done in the Nature paper. The opposing forces is another story. Reminds me of a GW Bush quote: “They never stop thinking about new ways to harm our country and our people, and neither do we”

    “Reggie” Lewis is always shooting toward the wrong basket, and he hasn’t a clue.
    #OwnGoals all the way.

  318. miker613 says:

    “Actually KandA’s example may be apt.” I think I’m going to stop. You’re all saying things that make no sense to me, since they don’t address the fact that we’re doing regression.
    I guess we’ll wait and see what happens.

  319. miker613,

    You’re all saying things that make no sense to me, since they don’t address the fact that we’re doing regression.

    That, as far as I’m concerned, doesn’t matter. Either the estimated external forcings are a valid estimate, in which case they can be used to estimate the externally forced trend, or they are not, in which case the problem lies with how they were determined, not with the regression. That regression is involved is – I think – irrelevant.

  320. BBD says:

    miker

    [VTG:] “You believe that you know better how models are tuned than the authors of realclimate do.”

    No, VTG, I think that these modellers said so. They may know better than realclimate, no? It could be that the authors of realclimate misunderstand how easy it is to overfit by mistake – not being mathematicians.

    1/ What did (does) Gavin Schmidt do? What is his domain expertise?

    2/ What subject is his PhD in?

    Answers here.

  321. rconnor says:

    I’m coming into this discussion late, so I apologize if I’m derailing the current discussion, but I just could not stand by and let Miker misrepresent Mauritsen et al. (2012) as badly as he has.

    Miker plucks a quote from Mauritsen et al. (2012) (“Climate models ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper”) and attempts to use this as “proof” that Mauritsen et al. (2012) is saying that climate models are tuned to match 20th century temperature trends. This is the exact opposite of what Mauritsen et al. (2012) actually says.

    From Mauritsen et al. (2012):
    “Climate model tuning has developed well beyond just controlling global mean temperature drift”
    “The MPI-ESM was not tuned to better fit the 20th Century. In fact, we only had the capability to run the full 20th Century simulation according to the CMIP5-protocol after the point in time when the model was frozen.”

    Miker, it’s right there in the paper – “[the model] was not tuned to better fit the 20th century.” Clear as day. Mauritsen et al. (2012) is in agreement with the Real Climate post. You’re wrong.

    All that Mauritsen et al. (2012) is trying to say by the quote you cherry picked is that, while models are NOT tuned to fit 20th century temperatures, they are compared to 20th century temperatures as a test. If they cannot accurately replicate 20th century temperatures, then there is something wrong with how the physics is modeled and that should be corrected before publication.

    (also, ATTP, as a long-time reader and first-time poster, I have to say how much I enjoy your site. It’s a tough gig, but know that there are many out there who appreciate what you do!)

  322. rconnor,
    Thanks, very useful.

    while models are NOT tuned to fit 20th century temperatures, they are compared to 20th century temperatures as a test. If they cannot accurately replicate 20th century temperatures, then there is something wrong with how the physics is modeled and that should be corrected before publication.

    I think this is an important point and is probably something that people (if I’m being generous) don’t get. When you are trying to model a complex physical system, you clearly want to see how well your model fits the observations. If it doesn’t fit, you try to work out what is wrong, or what is missing. However, you’re typically constrained by the basic laws of physics, and so even though you may adjust the model to try and better fit the observations, this isn’t the same as tuning a model that simply has a set of parameters that you can tune and that aren’t constrained by the laws of physics.

  323. AndyL says:

    ATTP,
    I think you are saying that “yes there is circularity, but it doesn’t matter”. Is that a fair summary?

  324. AndyL,
    I’m not quite saying that because I haven’t had a chance to have a good look at the Forster paper in which the external forcings were determined. So, I don’t know if there really is any circularity. Also, if there is a general circularity, where does the residual come from? (i.e., if the temperatures define the external forcings, then why is there a residual when you use them to determine the temperature?).

    At the end of the day, if you have an estimate for the external forcings, then these can be used to determine the forced trends. I don’t think the circularity is relevant if these external forcings are indeed a reasonable representation of the actual external forcings.

  325. Oale says:

    Would it be funny if the so-called c.60-year cycle were proven to be an artefact of early human influence on climate? It’s quite easy to connect these swings to known human activities from the 19th century on. This might in turn make it easier for people to accept ‘the true reason’ for the LIA (the depeopling of the Americas and the increase in gardening agriculture in Asia, leading to a strong CO2 drawdown). The numbers might not add up, but most people won’t do numbers anyway, except when needing to buy some ‘essential’ new gizmo. 😉

  326. In reply to my comment above on February 6, 2015 at 1:18 pm,

    Miker613 wrote,

    “KeefeAndAmanda, what was that you were saying about how the rest of us can’t trust a result unless it’s been peer reviewed and checked by experts and all that? And how publishing on the web and crowd review just can’t compare, no one trusts that?”

    By the conventions of sentential logic in which “unless” translates to “or” and by the definition of implication, the equivalent of what you just said I said is, “we can’t trust a result if it has not been peer reviewed and checked by experts”. Well, yes, I said that and that’s true, and nothing has changed here. And note also that I said *repeatedly* that the public can trust the reputable professional literature comprised of the reputable professional monographs, textbooks, and refereed or peer-reviewed journals *in its ongoing aggregate* – and I *repeatedly* put the phrase “in its ongoing aggregate” in asterisks for emphasis. Note again the terms “ongoing” and “aggregate”. This covers any false result that manages to get into this literature.

    ATTP wrote on February 6, 2015 at 6:33 pm,

    “Actually KandA’s example may be apt. If I know how far I’ve travelled, I can work out my average speed from v = d/t. If someone then asks how far did you travel, I can use d = vt to work that out. It’s circular, but it doesn’t matter, it’s an entirely valid calculation.”

    Then miker613 wrote,

    “”Actually KandA’s example may be apt.” I think I’m going to stop. You’re all saying things that make no sense to me, since they don’t address the fact that we’re doing regression.”

    It still seems to be the case that Lewis and company claim that there is an underlying mathematical structure in a certain given equation that is circular and that therefore simply because of that circular structure, the equation is invalid. That is, they seem to claim that all equations in physics that have such an underlying circular structure are invalid.

    Well, guess what. If this is what they claim, then they claim that many equations in physics are invalid.

    To show this, I’d like to amplify what I said in my comment above. This circularity that they seem to have a problem with is just the circular nature of the composite function formed when we take a one-to-one unary function and its inverse function together, as in f^{-1}f(x) = x. We start with x and then we end with x. As I tried to point out in my comment above, this mathematical property of circularity is embedded in many physics equations, since many physics equations can be deconstructed to expose that at least one of the variables in the deconstruction is a function of itself, where this function of itself is the composite function of a one-to-one unary function and its inverse function such that we start with this variable and end with this variable.

    Another way of saying this is that many physics equations are constructed on one-to-one unary functions and their inverse functions using fundamental variables, these fundamental variables usually the ones of the most fundamental measures of nature, these being time, distance (linear measure of space), mass, charge, and temperature.

    To see this, let’s first take again that very simple example I used, the basic equation d = rt. By what I just said about fundamental variables, this equation actually derives from variables d and t, to get a definition for r, which would be d/t = r. This can be defined as a one-to-one unary function, namely f(d) = d/t = r. Then we simply use algebra to get d = rt. But this “using algebra” is actually performing the inverse function operation, namely f^{-1}f(d) = f(d)t = rt = d. So we have a circularity of taking a one-to-one unary function and its inverse function – we start with d and end with d – and this is perfectly valid. Generally, depending on what equation we choose, if we rewrite again and again the equation in terms of more and more fundamental variables (per the above on fundamental variables) and “use algebra”, we might notice that more and more, we derive equations in which we have the same variable on both sides of the equation, and in some of these instances we find that the equation is in the form of a variable that is a function of itself via the circular nature of the composite function formed when we take a one-to-one unary function and its inverse function together.

    So Lewis and company are wrong if they say that circularity of this type in the underlying mathematical structure always implies invalidity.

    So how do we decide whether a circularity of this type is invalid? It’s a question of physics.

    And so Lewis and company who claim that this has nothing to with physics and instead has only to do with the underlying mathematical structure are wrong on that point, too.

    As to what the physics says on whether the underlying circularity of taking a certain one-to-one unary function and its inverse function is wrong in the equation(s) under discussion – assuming that this type of circularity, which I demonstrated above can be perfectly valid, is indeed present in the equation(s) under discussion – we will see.

    If the physics says it’s OK, then it may be that the authors used this mathematical structure to work out a way to tease out an underlying “signal” from internal variability.

  327. I was thinking about this a bit more and I think what Nic Lewis is arguing is that the external forcings that are being used to determine the temperature, depend also on temperature. However, they’re external, therefore, by definition they do not depend on anything internal. That an earlier piece of work may have used temperatures to determine what these external forcings are, does not mean that they depend on temperature.

  328. David Jay says:

    ATTP: If you don’t understand the fatal flaw in this paper, you could walk across campus and discuss it with Gordon Hughes. He is the one who said that the flaw is so serious that it would be an appropriate (negative) case for academic instruction.

  329. David,
    I did consider that, but I have little interest in dealing with those who seem comfortable insulting experienced researchers in other fields. I also think I do understand this and I think Gordon Hughes is wrong. The external forcings do not depend on temperature (by definition). Simply because they used the model results to infer the external forcings in an earlier paper does not make this so. A more interesting question is how the earlier work distinguished between external forcings and internal variability in order to extract the external forcing. I haven’t had a chance to look at it in detail, but may try to do so.

    As ATTP says, the circularity is that “external forcings that are being used to determine the temperature, depend also on temperature”.

    The issue of circularity when attributing variability to temperature is easy to avoid — just pick measures that are not temperature based. So one chooses SOI, aerosol optical depth, TSI, and LOD to compensate for the variability, and remove the nuisance signal. The residual is then mapped to equivalent log(CO2).

    No significant problems doing it this way. Impediments to progress such as Nic Lewis would struggle mightily trying to quibble over that type of approach.
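
    A minimal sketch of that kind of regression, with synthetic stand-ins for every series (nothing here is a real dataset; the true ln(CO2) coefficient is set to 2 C per doubling so you can see it recovered):

    import numpy as np

    rng = np.random.default_rng(3)
    n = 140                                            # years (synthetic)
    soi = rng.standard_normal(n)                       # stand-in SOI (no trend)
    aod = np.abs(rng.standard_normal(n))               # stand-in volcanic aerosols
    tsi = 0.1 * np.sin(2 * np.pi * np.arange(n) / 11)  # stand-in solar cycle
    lod = np.cumsum(0.01 * rng.standard_normal(n))     # stand-in LOD wiggles
    lnco2 = np.log(280.0) + np.linspace(0.0, 0.6, n)   # made-up ln(CO2) path

    b_true = 2.0 / np.log(2.0)                         # i.e. 2 C per doubling
    T = (b_true * (lnco2 - lnco2[0]) + 0.1 * soi - 0.2 * aod
         + 0.5 * tsi + 0.3 * lod + 0.05 * rng.standard_normal(n))

    X = np.column_stack([np.ones(n), soi, aod, tsi, lod, lnco2])
    beta, *_ = np.linalg.lstsq(X, T, rcond=None)
    print(f"implied sensitivity = {beta[5] * np.log(2.0):.2f} C per doubling")

    None of the regressors is temperature-based, so the circularity complaint has nothing to grab onto.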

  331. WHT,
    The one thing I don’t quite understand (and I haven’t worked through Forster et al. (2013) in detail, so it may be there) is how one ensures that the estimation of the external forcings isn’t influenced by internal variability. Internal variability can both influence the temperature trend and produce a radiative perturbation. If so, if you are simply using temperature and the TOA imbalance to estimate the external forcing, how does that exclude influences from internal variability? It’s possible that the regression essentially does this, but I’m not sure that I quite get this at the moment.

    SOI does not have a trend, so it composes into the global temperature time series, and the residual that’s left behind shows more of the monotonic warming signal.

    Why doesn’t SOI show a trend? Because the measure of SOI is caused by the sloshing of the oceans, which is driven by a periodic, or at least quasi-periodic, force. Obviously GHG accumulation is not periodic on the scale that we are interested in, so the physics would preclude that from being a factor in the ENSO SOI.

    Why can’t we use an SST in the measure of ENSO? Because that does show the effects of a warming trend due to GHGs. Therefore strike that as a regressor, due to its circular nature.

    We could filter the SST, but then why don’t we just filter the GAT in the first place?

    Because then the nitpickers would focus on the 30-to-60-year wiggles. Get rid of that by using the long-term LOD shifts as a regressor. That is likely geophysical in nature as well, so one is left with a cleaned-up trend.

  333. -1=e^ipi says:

    @ ATTP –

    “Therefore using the external forcings to then determine the temperatures is circular. The problem is that this would only be true if the estimates of the external forcings were not a reasonable representation of the actual external forcings.”

    If I say ‘A is true therefore A is true’. That is circular regardless of whether A is true or not.

    @ WHT –

    I agree that approach has a lot of value.

    Yesterday, I tried regressing change in temperature on a constant, ln(CO2), NAO, PDO, SOI, change in LoD, TSI, volcanic aerosols and temperature. Got a climate sensitivity of (1.98 +/- 0.92) C.
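
    For what it’s worth, one hedged way to turn coefficients like these into an ECS interval is a first-order delta-method calculation (the numbers below are hypothetical, and it assumes the two coefficient errors are independent, which is an assumption of convenience since the coefficient covariance matters):

    import numpy as np

    # hypothetical regression output: coefficient +/- standard error
    b_co2, s_co2 = 0.30, 0.05    # coefficient on ln(CO2)
    b_T, s_T = -0.10, 0.02       # coefficient on temperature

    ecs = b_co2 / b_T * -np.log(2.0)
    # to first order, relative variances add for a ratio
    s_ecs = abs(ecs) * np.sqrt((s_co2 / b_co2) ** 2 + (s_T / b_T) ** 2)
    print(f"ECS ~ {ecs:.2f} +/- {s_ecs:.2f} C")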

  334. -1,

    If I say ‘A is true therefore A is true’. That is circular regardless of whether A is true or not.

    But I didn’t say anything like this, did I? The point is simply that using model temperatures to determine the external forcings does not imply that the external forcings depend on temperature (they don’t).

  335. AndyL says:

    ATTP: you said “I have little interest in dealing with those who seem comfortable insulting experienced researchers in other fields”
    Aren’t you being a little over-sensitive here? Consider the reverse situation. Imagine a high-visibility paper on statistics that, in your opinion, contained some fundamental error in physics that invalidated the results. Wouldn’t you be scathing about the statisticians’ ability in physics?

    Anyway, the decision on that is up to you. I’m hoping someone in the climate science mainstream with strong statistics credentials will give their views on this.

  336. AndyL.,

    Aren’t you being a little over-sensitive here?

    Possibly, but I’ve been involved in this discussion long enough now to have a reasonable sense of what is worthwhile and what isn’t.

    Consider the reverse situation. Imagine a high visibility paper on statistics that in your opinion contained some fundamental error in physics that invalidated the results. Wouldn’t you be scathing about the statisticians’ abilility in physics?

    No. If I thought there was a major problem with a paper, I would first talk with colleagues and with the authors. I wouldn’t simply be scathing (well, maybe in private). I’ve also done this kind of thing. When I think there is a serious problem with a paper, I publish my own and talk about it at scientific meetings, where I don’t simply insult the other authors.

    I’m hoping someone in the climate science mainstream with strong statistics credentials will give their views on this.

    As I said, I don’t think this is simply a statistics issue. The real issue is whether or not Forster et al.’s (2013) estimate of the external forcings is reasonable. If it is, there is no reason why what was done in Marotzke & Forster (2015) is circular.

  337. AndyL says:

    “The real issue is whether or not Forster et al.’s (2013) estimate of the external forcings is reasonable.”
    That is a possible defence which a statistician should consider. Remember that Lewis’ criticism is that linear regressions were inappropriately applied. The statistician would have to consider whether the accuracy of the estimate is in any way relevant to the criticism.

  338. AndyL,

    Remember that Lewis’ criticism is that linear regressions were inappropriately applied.

    His argument appears to be that it is wrong because it was regressing T against itself, because the external forcings depend on T. However, the external forcings do not depend on T if they were properly determined in Forster et al. (2013). If so, Nic Lewis’s argument is wrong. That’s why he should really consider the validity of Forster et al. (2013), rather than the regression in Marotzke & Forster (2015).

  339. AndyL says:

    I’m still not clear what you mean by “properly determined”. Are you saying that the forcings are independent of T?

  340. AndyL,

    Are you saying that the forcings are independent of T?

    Yes, by definition external forcings are independent of T. They’re external. So, the more interesting question is whether or not Forster et al. (2013) determined the external forcings in a reasonable way, or not. That’s why I don’t think that Nic Lewis’s argument is, by itself, correct. I think he needs to show that Forster et al. (2013) produced estimates for the external forcings that have a problem, not simply that the temperature was used to make these estimates.

    For example: If I exert a force on an object that produces an acceleration, I could determine the force from the acceleration. I can then use that estimate for the force to determine how this object would move. That may be circular in a sense, but it isn’t unphysical, which is what matters in this case.
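    To make the example concrete, the same round trip in a few lines of R (made-up numbers):

    m <- 3; a_obs <- 2          # kg and m/s^2, made-up values
    F_est  <- m * a_obs         # step 1: infer the force from the acceleration
    a_pred <- F_est / m         # step 2: use that force to predict the motion
    identical(a_pred, a_obs)    # TRUE - circular in form, but not unphysical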

  341. BBD says:

    Thanks for this, ATTP. Much clearer now.

  342. AndyL says:

    Your example doesn’t make sense. If you are describing a simple F=ma situation, you can’t determine force from acceleration and then in turn use the calculated force to derive the acceleration.

    Anyway back to M&F. Lewis says that Forster13 used delta T to determine delta F, then plugged the derived delta F into the regression against temperature. The key line is “Marotzke’s equation (3) involves regressing ΔT on a linear function of itself”. How does the accurate calculation of delta F in Forster 13 avoid that problem? It needs a competent statistician who understands the underlying assumptions and limitations of linear regression to determine whether the regressions have been applied correctly.

  343. AndyL,

    If you are describing a simple F=ma situation, you can’t determine force from acceleration and then in turn use the calculated force to derive the acceleration.

    I didn’t say that, did I? I said “I could determine the force from the acceleration. I can then use that estimate for the force to determine how this object would move.” Also, it’s clear that if I use the acceleration to determine the force, I can then use the force to determine the acceleration. It would be stupid to do so, but it would still give the right answer.

    How does the accurate calculation of delta F in Forster 13 avoid that problem?

    I’m not sure what you mean. My point is that one needs to understand Forster et al. (2013) to understand if its estimates of the external forcings are reasonable.

    It needs a competent statistician who understands the underlying assumptions and limitations of linear regression to determine whether the regressions have been applied correctly.

    Not necessarily. All you really need is for someone to establish if the method in Forster et al. (2013) is a reasonable method for estimating the external forcings.

  344. Mike M. says:

    I just stumbled upon this and have not had a chance to digest the paper, but it looks to me like Marotzke and Forster may have made a very basic error. They seem to be treating natural internal variability as a completely random fluctuation in temperature; i.e., white noise. In that case, it is no surprise that they get no natural trends over longer times. But don’t power spectral density analyses of the long-term temperature records show pink, or even red, noise on time scales above a couple of decades? Those low frequency noise components will contribute much more to multi-decadal scale climate variability than the white noise appropriate on shorter time scales. So if they are not in the Marotzke and Forster analysis, then the conclusions can not be trusted. Am I off base here?
    It seems to me that this is the basic problem with trying to claim that the observed climate change cannot possibly be natural. If we don’t understand natural climate change on time scales beyond a decade or two, then how can we conclude that the observed trend cannot be natural? The best we can say is that the observed trend agrees with expectations of anthropogenic change. That is a much weaker statement.
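    To illustrate, here is a rough R sketch (toy numbers of my own, with an AR(1) process standing in for “red” noise) comparing the spread of 15-year and 62-year trends for white versus red noise of the same variance:

    set.seed(1)
    sim_noise <- function(len, phi) {
      if (phi == 0) rnorm(len)                           # white noise
      else as.numeric(arima.sim(list(ar = phi), n = len,
                                sd = sqrt(1 - phi^2)))   # AR(1) with unit variance
    }
    trend_sd <- function(len, phi, nsim = 2000) {
      sd(replicate(nsim, coef(lm(sim_noise(len, phi) ~ seq_len(len)))[2]))
    }
    rbind(white = c(y15 = trend_sd(15, 0),   y62 = trend_sd(62, 0)),
          red   = c(y15 = trend_sd(15, 0.9), y62 = trend_sd(62, 0.9)))

    The red-noise trend spread shrinks far more slowly with record length, which is exactly why treating internal variability as white noise would understate natural multi-decadal trends.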

  345. MikeM,

    They seem to be treating natural internal variability as a completely random fluctuation in temperature; i.e., white noise.

    No, I don’t think so. All they’re doing, I think, is determining the residual when they compare the externally forced trend with the actual trend. So, in that sense, if the only two factors are external forcings and internal variability (which is presumably the case) then I can’t see why there is a problem with their analysis.

  346. David Young says:

    This is a little beside the point, isn’t it? If Lewis and two statistical experts have incorrectly characterized this paper, someone should go to Climate Audit and engage them, just like Nic engaged the authors before going public. The goal here is to find out what the truth of the matter is, not to continue conversations that have no hope of doing so.

    BTW, this is a long-standing issue for climate science. In medicine it used to be a problem, and it was fixed: all reputable studies are now designed with the involvement of at least one professional statistician.

  347. AndyL says:

    ATTP
    We seem to have reached our own circularity, so I’m stopping here.

    I’ll just note that your line “it’s clear that if I use the acceleration to determine the force, I can then use the force to determine the acceleration” is simply amazing.

  348. AndyL,

    I’ll just note that your line “it’s clear that if I use the acceleration to determine the force, I can then use the force to determine the acceleration” is simply amazing.

    Hmm, why amazing? Also, for some reason you left out “it would be stupid to do so”. Maybe, though, you can explain why, if I know the force acting on something, I can’t then determine the acceleration. It’s Newton’s 2nd Law! (Okay, I also need to know the mass.) You wouldn’t be ending with some kind of rhetorical flourish, would you? That would be rather irritating.

  349. DY,

    This is a little beside the point, isn’t it? If Lewis and two statistical experts have incorrectly characterized this paper, someone should go to Climate Audit and engage them, just like Nic engaged the authors before going public.

    Did he? Even so, so what? If Nic Lewis and two statistical experts want to incorrectly characterise a paper (assuming that they have) and then get it mentioned by James Delingpole in Breitbart (which itself should give any sensible person pause for thought), that’s entirely their right. No one is obliged to go and engage with them at all.

  350. Mike M. says:

    … and then there’s physics wrote: “if the only two factors are external forcings and internal variability (which is presumably the case)”. But I think that is three factors. Internal variability would seem to consist of two factors: uncorrelated random noise (included in the analysis) and low frequency variability (ignored in the analysis). By low frequency I mean highly correlated over time scales of decades or longer. How do you distinguish that from an external forcing?

  351. MikeM,

    But I think that is three factors. Internal variability would seem to consist of two factors: uncorrelated random noise (included in the analysis) and low frequency variability (ignored in the analysis).

    Not really. There are – in a sense – two primary factors: external forcings (anthropogenic, solar, volcanoes) and internal factors. I don’t see how their analysis ignores low-frequency variability. If their estimate of the external forcings is correct, then low-frequency internal variability should be included in the residual that they estimate.


  352. -1=e^ipi says:

    @ WHT –

    I agree that approach has a lot of value.

    Yesterday, I tried regressing change in temperature on a constant, ln(CO2), NAO, PDO, SOI, change in LoD, TSI, volcanic aerosols and temperature. Got a climate sensitivity of (1.98 +/- 0.92) C.

    Excellent. This approach has value in that it can bridge the gap between analysts that otherwise may not see eye to eye.

  353. miker613 says:

    Posted this at climateaudit, but it didn’t show up too well:
    This is my attempt to see if I can simulate the issue, using my very rusty skills in R:

    // x = 4a + 3b + 5f + e, where e is the error term

    > set.seed(1)
    > a=rnorm(1000,3,2)
    > b=rnorm(1000,4,3)
    > f=rnorm(1000,2,2)
    > e=rnorm(1000,0,.7)
    > x=4*a+3*b+5*f+e
    > lmm=lm(x ~ a+b+f)
    > summary(lmm)
    Call:
    lm(formula = x ~ a + b + f)
    Residuals:
    Min 1Q Median 3Q Max
    -2.26987 -0.49973 -0.00857 0.50395 2.14144

    Coefficients:
    Estimate Std. Error t value Pr(>|t|)
    (Intercept) -0.032828 0.053684 -0.612 0.541
    a 4.008051 0.011139 359.818 <2e-16 ***
    b 3.003538 0.007383 406.831 <2e-16 ***
    f 5.003245 0.011183 447.403 <2e-16 ***

    Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Residual standard error: 0.7278 on 996 degrees of freedom
    Multiple R-squared: 0.9981, Adjusted R-squared: 0.9981
    F-statistic: 1.741e+05 on 3 and 996 DF, p-value: < 2.2e-16

    // now lets try deriving f from x

    > lmf=lm(f~x)
    > summary(lmf)

    Call:
    lm(formula = f ~ x)

    Residuals:
    Min 1Q Median 3Q Max
    -5.6699 -1.0461 0.0670 0.9834 5.3395

    Coefficients:
    Estimate Std. Error t value Pr(>|t|)
    (Intercept) -0.727846 0.111691 -6.517 1.14e-10 ***
    x 0.081310 0.002956 27.511 < 2e-16 ***

    Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Residual standard error: 1.556 on 998 degrees of freedom
    Multiple R-squared: 0.4313, Adjusted R-squared: 0.4307
    F-statistic: 756.8 on 1 and 998 DF, p-value: < 2.2e-16

    > f2=fitted(lmf)

    // now let’s try regression again, this time using f2 instead of f

    lmm2=lm(x ~ a+b+f2)
    summary(lmm2)

    Call:
    lm(formula = x ~ a + b + f2)

    Residuals:
    Min 1Q Median 3Q Max
    -3.249e-14 -1.780e-15 1.360e-16 1.913e-15 6.075e-14

    Coefficients:
    Estimate Std. Error t value Pr(>|t|)
    (Intercept) 8.951e+00 2.622e-16 3.414e+16 <2e-16 ***
    a 6.899e-16 7.721e-17 8.935e+00 <2e-16 ***
    b 1.089e-15 5.322e-17 2.045e+01 <2e-16 ***
    f2 1.230e+01 1.448e-16 8.492e+16 <2e-16 ***

    Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Residual standard error: 3.844e-15 on 996 degrees of freedom
    Multiple R-squared: 1, Adjusted R-squared: 1
    F-statistic: 6.251e+33 on 3 and 996 DF, p-value: < 2.2e-16

    // note that f2 has totally swallowed any dependence on a and b

  354. miker613 says:

    Shucks: again, part of it didn’t show up
    That was the part in the middle where I defined f2. Like this:

    // now lets try deriving f from x

    lmf=lm(f~x)
    summary(lmf)

    Call:
    lm(formula = f ~ x)

    Residuals:
    Min 1Q Median 3Q Max
    -5.6699 -1.0461 0.0670 0.9834 5.3395

    Coefficients:
    Estimate Std. Error t value Pr(>|t|)
    (Intercept) -0.727846 0.111691 -6.517 1.14e-10 ***
    x 0.081310 0.002956 27.511 < 2e-16 ***

    Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Residual standard error: 1.556 on 998 degrees of freedom
    Multiple R-squared: 0.4313, Adjusted R-squared: 0.4307
    F-statistic: 756.8 on 1 and 998 DF, p-value: < 2.2e-16

    f2=fitted(lmf)

  355. miker613,
    I don’t really get what you’re trying to illustrate.

  356. It seems that what I wrote in my comment on February 6, 2015 at 1:18 pm and my most recent comment on February 7, 2015 at 7:12 am is whizzing right by the heads of those who agree with Lewis. It looks like I’ll have to amplify again: *Even if* we were to define forcing in terms of temperature (but, as ATTP repeatedly says through his recent comments, it isn’t), this claim by Lewis of mathematical illegitimacy does not hold. How? Read on.

    (Preliminary note: I use the term “one-to-one” function instead of “injective” function
    http://en.wikipedia.org/wiki/Injective_function
    because I choose to not bother with addressing whether the function is surjective,
    http://en.wikipedia.org/wiki/Surjective_function
    which would mean having to bother with defining a codomain of the function.)

    -1=e^ipi says:
    “If I say ‘A is true therefore A is true’. That is circular regardless of whether A is true or not.”

    There’s nothing mathematically illegitimate with the circularity of one-to-one functions and their inverse functions.

    In reply to my comment on February 6, 2015 at 1:18 pm, miker613 wrote on February 6, 2015 at 1:58 pm,

    “”If Lewis is not arguing along these lines above, then what is he arguing?” He is saying that if you do a regression for an output variable on a set of input variables, and the data for (one of) those input variables was itself derived by regression from the output variable, you are essentially doing a regression
    [1] T = c(T+e1) + d.S + e2
    where S are some other variables and e is noise.”

    On February 5, 2015 at 11:11 pm, Mikep quoted someone else:

    “The authors took their forcing trend estimates (dF) from an earlier paper that constructed them using the equation
    [2] dF = a0*dT + dN
    where a0 is a feedback term, dN is a term capturing the Top of Atmosphere radiative imbalance, which in this context is just a source of noise, and dT is… dT! So in [1] they regressed dT on itself + other terms! The Marotzke regression is actually something like
    [3] dT = c0 + c1*(a0*dT+ dN) + c2*alpha + c3*kappa + e.”

    In the below, I will simply go with what miker613 said and what mikep said someone else said, and will assume the appropriate restrictions on the domains of the relevant variables to avoid division by zero.

    Definition:
    http://mathworld.wolfram.com/InverseFunction.html

    Based on what miker613 said:

    Define (from [1])
    f(T) = T + e1.
    From equation [1],
    (T – d.S – e2)/c = T + e1.
    Now we can define
    g(T) = (T – d.S – e2)/c = T + e1 = f(T).
    We now have two functions of T that are equal:
    f(T) = g(T).
    Equation [1] given by miker613 is written such that it meets the definition above of inverse functions:
    g^{-1}g(T) = g^{-1}f(T) = T = c(T+e1) + d.S + e2.

    Based on what Mikep said someone else said:

    Define [2] as
    f(dT) = dF = a0*dT + dN.
    From equation [3],
    (dT – c0 – c2*alpha – c3*kappa – e)/c1 = a0*dT+ dN
    Now we can define
    g(dT) = (dT – c0 – c2*alpha – c3*kappa – e)/c1 = a0*dT+ dN
    = f(dT).
    We now have two functions of dT that are equal:
    f(dT) = g(dT).
    Equation [3], given by mikep as quoting someone else, is written such that it meets the definition above of inverse functions:
    g^{-1}g(dT) = g^{-1}f(dT) = dT = c0 + c1*(a0*dT+ dN) + c2*alpha + c3*kappa + e.

    Very important note: If there are problems with what I just did – specifically one of these functions not actually being one-to-one, then those problems would be with the equations given above, [1], [2], and [3], by miker613 and mikep. I’m just exposing some mathematical consequences of what these commenters actually wrote. If there are such problems, then these problems would expose that perhaps these equations these commenters wrote are not correct characterizations of what Marotzke and Forster actually wrote.

    The above shows that Lewis *is* arguing what I said he is arguing: he seems to be arguing that it is always mathematically illegitimate to take the output of a one-to-one function and recover its input using the inverse function, and that since this is what Marotzke and Forster did (assuming they are actually doing this), what they did was mathematically illegitimate. That is, Lewis is arguing that Marotzke and Forster took the output f(T) = g(T) of a one-to-one function and recovered the input T using the inverse function g^{-1}, and since he seems to think that this kind of inversion is always mathematically illegitimate, what Marotzke and Forster did was mathematically illegitimate (again, assuming they are actually doing this).

    The problem for Lewis and company, as I hinted in my first comment above but explicitly said in my second comment above, is that many times *it is mathematically legitimate* to take the output of a one-to-one function to find the input of that one-to-one function using the inverse function of that one-to-one function – we do it all the time in mathematics! Even elementary school children do it, and we *sort of* tell them that when we tell them that division is the inverse of multiplication. Example: The division 56/8 = 7 is taking the specific output h(7) = 8*7 = 56 of a one-to-one unary function h(x) = 8x to get the specific input 7 of that one-to-one function. It’s called unary multiplication, and yes, it exists – we teach such as h(x) = ax to algebra students. (It’s derived from multiplication as a binary operation, h'(w,x) = wx: Just fix w and replace it with another variable like a to denote that it is now to be treated as an arbitrary constant, and then define the mapping on x only.)

    Generally, for any one-to-one function m of x, it is mathematically legitimate to take m(x) to find x since we can be given m(x) without explicitly being given x as well even though m(x) is defined in terms of x.

    There is nothing inherently wrong with applying in mathematics or science the mathematical structure of one-to-one functions and their inverse functions: Yes, this structure is circular – but that’s the very nature of the one-to-one correspondences between the domains and ranges of these types of functions. To think that applying this circular structure in mathematics or science is a problem is to show a certain lack of understanding.

    So, as I said at the beginning, *even if* we were to define forcing in terms of temperature (but, as ATTP repeatedly says, it isn’t), this claim by Lewis of mathematical illegitimacy does not hold. That is, Lewis must explain why it would be mathematically illegitimate for Marotzke and Forster to recover the input of a one-to-one function from its output using its inverse (assuming that is actually what they did), when in general there is nothing mathematically illegitimate about doing this – we do it all the time in mathematics.
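    If it helps, the whole point fits in a few lines of R, using the same 8x example as above (a toy sketch, nothing more):

    h     <- function(x) 8 * x    # one-to-one unary multiplication h(x) = 8x
    h_inv <- function(y) y / 8    # its inverse function - division by 8
    h_inv(h(7)) == 7              # TRUE: the output legitimately recovers the input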

  357. KandA,
    Thanks. I guess one argument could be that Marotzke & Forster could have determined the forced and unforced trends from the models directly. However, there’s nothing fundamentally wrong with doing it as they have if the external forcing estimates are reasonable representations of the actual external forcings. In some sense, it doesn’t make any difference since extracting the forced trend from the models would still need an estimate of the external forcing (or else how would you distinguish the forced from the unforced trends?).

  358. My impression is that Nic Lewis has been criticized based on strawman argumentation, i.e. claiming that he has written something he has not.

    I do not say that he is correct. I have no opinion on that right now, but I do think that I have understood his claims.

    What he is claiming is that Marotzke & Forster are not using in their calculation any independent estimate of the forcings, but a substitute that has been obtained effectively from the same temperatures that are then compared again in their paper. Nic is not making any claim that those forcings are real. Nor does he claim that the values are inconsistent with what’s known about the forcings from other sources, but his argument does depend on the fact that the forcings used are not exactly the correct forcings, and that the deviations are dependent on the models used in the paper.

    The circularity goes as follows:

    Use temperatures and the model to determine an estimate of the forcings. Lacking a better estimate of the forcings, use these forcings to calculate temperatures with a somewhat different but related inverted procedure. The differences between the one-way calculation and the inverse calculation lead to some variability, but guarantee that some other type of variability is minimized.

    Nic may have missed some essential point, but his argument is not so unreasonable that the reactions I have seen in this thread would appear justified. The impression is that people here draw conclusions against Nic as uncritically as other people accept his claims at CA.

    I may look more closely at the Marotzke & Forster paper to form my own opinion on the actual issue. As I wrote already at the top of this comment, I do not have any such opinion right now.

  359. Pekka,

    My impression is that Nic Lewis has been criticized based on strawman argumentation, i.e. claiming that he has written something he has not.

    Did you read what I’ve actually been saying, because this

    Use temperatures and the model to determine an estimate of the forcings. Lacking a better estimate of the forcings, use these forcings to calculate temperatures with a somewhat different but related inverted procedure. The differences between the one-way calculation and the inverse calculation lead to some variability, but guarantee that some other type of variability is minimized.

    is essentially what I think I’ve said a number of times. Yes, I realise that temperatures are used to determine the external forcings, hence my point is that one should aim to understand whether there is any reason why this is not an appropriate way to do so, or why it does not produce a result that properly represents the external forcings. If it does, then I can’t see any reason why you can’t then use these forcings to determine the forced trends.

    The more interesting question (which I think I’ve said a number of times too, and which you’re suggesting here) is whether or not the method in Forster et al. (2013) produces estimates for the external forcings that have some component of internal forcing/variability. If so, then using these forcings to determine a forced trend and a residual might underestimate the residual, since some of the external forcing is actually internal variability. I don’t, however, know the answer to this, but I don’t think Nic Lewis’s argument tells us anything about this either.

  360. Joshua says:

    Pekka –

    ==> “Nic may have missed some essential point, but his argument is not so unreasonable that the reactions I have seen in this thread would appear justified”

    Could you quote the criticisms that aren’t justified?

  361. Pekka,

    Could you quote the criticisms that aren’t justified?

    Yes, please do because I think I’m the only one who’s criticised that post and have tried very hard to be balanced. In fact, as far as I can tell, what I said was essentially the same as you’ve just said above. So, please tell me what you think was unjustified because you make it seem that there was some kind of knee-jerk reaction to his post. Also, it’s interesting that you seem comfortable with the tone of Nic Lewis’s post, but feel able to criticise what’s been said here without explaining why and by presenting an argument that appears the same as what I’ve been saying.

  362. I don’t like to argue with named individuals, but it’s easy to find many comments in this thread that make strong statements based on some very limited evidence.

    The way Nic presents the role of the circularity, it would definitely be an essential error in the paper, and would make the main part of it moot. It would mean that no other outcome was really possible, whatever the truth. Thus the question is whether Nic has misinterpreted the analysis.

    I don’t give much weight to the support Nic got from experts in statistics, as I think they were dependent on the way Nic presented the case to them. If Nic has missed something essential, then so have they.

  363. Joshua says:

    Pekka –

    A while back, at WUWT I think, and I think it might have been Willis (which is really quite extraordinary from an unintentional irony standpoint), I saw someone point out that much of the problems in the climate wars come about when people add “something extra” into their arguments.

    I think that is about as basic a truth as we can get regarding the climate wars. It covers the arguments as far from the science as discussing the term “denier,” and it often covers the more technical arguments.

    Here’s a case in point from the latter category:

    Gordon Hughes had some pithy comments about the Marotzke and Forster paper:
    The statistical methods used in the paper are so bad as to merit use in a class on how not to do applied statistics.
    All this paper demonstrates is that climate scientists should take some basic courses in statistics and Nature should get some competent referees.

    There you have it. Something extra. What follows from something extra is sameolsameol.

  364. Joshua says:

    Pekka –

    ==> “I don’t like to argue with named individuals, but it’s easy to find many comments in this thread that make strong statements based on some very limited evidence.”

    Because I can’t evaluate the technical arguments on my own, I try to rely on parsing the logic that people make on the different sides of issues. If you’re going to say that there have been invalid arguments made, it helps me to know which arguments you’re speaking of.

    I don’t think that anyone here is so sensitive that they will be harmed by the association of their name with an argument that you think is invalid. As it stands, your broad characterization is probably more provocative, at the personal level, than it would be if you quoted the argument that you think doesn’t hold up.

  365. Pekka,

    I don’t like to argue with named individuals, but it’s easy to find many comments in this thread that make strong statements based on some very limited evidence.

    Then, please either find these examples, or don’t make these claims in the first place.

    The way Nic presents the role of the circularity, it would definitely be an essential error in the paper, and would make the main part of it moot.

    Except that if it is truly circular and if the residual is determined by comparing the estimated externally forced trends and the actual model trends, why is it non-zero? If the forcings are being determined from the temperature and then the forcings are used to determine the temperatures, shouldn’t there be no residual if it were truly circular?

  366. miker613 says:

    ATTP, I tried to illustrate the point I’ve been trying to make. I apologize if it’s unreadable: I guess there must be some other way to paste R output into wordpress; this sure didn’t work.
    Unlike what KandA keeps posting, the idea is not that circularity is always a problem. The idea is that this kind of circularity in doing regression is going to be a problem. To show that, I set up a very dumb case where x depends on several variables, each of which has some noise. I did a regression of x against those variables (a,b,f), showing that it gets their coefficients right. Then I estimated one of the variables (the “forcing” f) using the values of just x – this corresponds to the 2013 paper (which I have not read – I’m assuming for the sake of argument that Nic Lewis is correct in his reading). I used that regression to get new estimates for the forcing f2, the “approximation” to the forcing f.
    Then I went back and redid the original regression, x vs. a,b,f – but this time using the estimated values f2 in place of f. What happens is that x depends _entirely_ on f2, and the dependence on a and b drops out.
    The reason is that even though f2 is a somewhat noisy approximation to f, what looks like random noise is actually very much correlated with a and b. That’s what the regression to create f2 did: it turned a and b into f2’s noise. Then when we do the later regression, we don’t need a and b any more, because f2 incorporates their variability.
    If I get a chance, I’ll try it again with a more non-linear x as a function of a, b, f. I’m guessing that the result will be that the effect will be lessened but still pronounced.

    Again, I didn’t really read either paper, so I don’t know if Nic Lewis is reading them correctly. This is just to explain the point he’s making.

  367. aTTP,
    It’s clear that it cannot be fully circular. The approximations are different and random variability enters in a different way. The question is, what is the information content of the differences. More specifically given the conclusions of the paper: Does it tell something new on whether the models overestimate the response to radiative forcing.

    The models have highly different parameter values for α and κ. They are much closer in their temperature trends. Thus they must have highly different forcing histories, and those highly different forcing histories are used in the comparison presented in the paper. The forcing from CO2 is, however, essentially the same for all models. Thus they must respond very differently to CO2 and compensate with other forcings. This effect should be clearest near the end of the period, but hidden for most of it. That effect is probably visible over the latest decades, but the method may, by its construction, succeed in hiding it in the overall statistics.

    Prior to this work, common thinking was that the hiatus was unlikely, but not so unlikely that it would strongly prove the models wrong. Does this paper add anything to that, or does it just present results that were certain to come out given that prior thinking, and based on the same information that was the basis for the prior thinking?

  368. miker613 says:

    Pekka, I think I was asking the same thing as you, here
    https://andthentheresphysics.wordpress.com/2015/01/31/models-dont-over-estimate-warming/#comment-46125
    I just don’t follow what the paper is claiming to add, even if the statistics would be right.

  369. Pekka,

    The question is, what is the information content of the differences.

    Yes, I agree and is essentially what I’ve been suggesting.

    Does this paper add anything to that, or does it just present results that were certain to come out given that prior thinking, and based on the same information that was the basis for the prior thinking?

    I don’t know, but I did wonder this. The result isn’t particularly surprising, so the interesting question is whether or not the analysis in Forster et al. (2013) produces a reasonably representation of the actual external forcings. If it does, then I can’t see an issue with Marotzke & Forster. On the other hand, if the analysis in Forster et al. (2013) underestimates the influence of internal variability, then that might feed through into the Marotzke & Forster results.

    Having a further look at the paper, my conclusion is that it does not analyze its methods enough to tell what we can really learn from the results. The issues that Nic Lewis mentioned are enough to make it highly unclear what the results mean. A paper should discuss its methods sufficiently to clarify these issues, but this paper does not do that. There’s definitely significant circularity, and its significance is one of the issues that should have been discussed.

    On the basis of what’s known about the work, it may be impossible to conclude anything new about the models. This does not seem to be a method that could tell whether the deviation observed over the last 15 years or so is significant or not.

    The basic truth remains that nothing can be concluded about a phenomenon when a new method of unknown power cannot produce significant results.

  371. aTTP,

    the interesting question is whether or not the analysis in Forster et al. (2013) produces a reasonably representation of the actual external forcings. If it does, then I can’t see an issue with Marotzke & Forster.

    Reasonable representation is not enough; the representation must be known to be rather accurate for that conclusion. Furthermore, the models must have such different forcings that they cannot all be even reasonable representations of the actual external forcings. Their values of α vary over the range 0.6–1.8 W m⁻² K⁻¹ and their values of κ over the range 0.45–1.52 W m⁻² K⁻¹. The implied model forcings must vary roughly as much.

  372. ATTP:

    Except that if it is truly circular and if the residual is determined by comparing the estimated externally forced trends and the actual model trends, why is is non-zero? If the forcings are being determined from the temperature and then the forcings are used to determine the temperatures, shouldn’t there be no residual if it were truly circular?

    Lewis suggests that F13 derives dF time series (from the models) as follows: dF = a*dT + dN where dN is the model simulated TOA radiative imbalance. Did you not understand this? Or are you suggesting that dN = 0?

  373. Pekka,

    Reasonable representation is not enough, the representation must be known to be rather accurate for that conclusion.

    Replace “reasonable representation” with “rather accurate” if you prefer. I’m not interested in semantics.

    Layman,

    Lewis suggests that F13 derives dF time series (from the models) as follows: dF = a*dT + dN where dN is the model simulated TOA radiative imbalance. Did you not understand this? Or are you suggesting that dN = 0?

    I’m not suggesting dN = 0. Why would you think I was, and why is that relevant?

  374. Actually, I’m not sure I get the whole issue. Marotzke & Forster are trying to deconstruct the model trends into a forced and unforced component and show that, together, this can explain the observed trends on all timescales. So what could a problem be? A problem could be that the earlier estimates haven’t completely removed an unforced component from the external forcing estimates. It seems to me that if this were the case, then that might mean that they’re either under-estimating or over-estimating the residual. However, I can’t see how this would change the overall result because there would be a corresponding change in the forced trend. Also, given that both the forced trend and residual have a small range for long timescales, that would still imply that internal variability is not that important on these long timescales, even if they haven’t completely removed an internal variability component from the external forcing estimate.
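    A toy illustration of that compensation in R (synthetic series and a made-up “leak” fraction, purely to make the bookkeeping explicit):

    set.seed(11)
    T_forced   <- 0.015 * (1:60)                       # synthetic forced component
    T_internal <- as.numeric(arima.sim(list(ar = 0.6), n = 60, sd = 0.05))
    T_total    <- T_forced + T_internal
    leak <- 0.3                                        # fraction of internal variability
    T_forced_bad   <- T_forced + leak * T_internal     # mis-assigned to the "forced" part
    T_residual_bad <- T_total - T_forced_bad           # residual shrinks correspondingly
    all.equal(T_forced_bad + T_residual_bad, T_total)  # TRUE: the total is unchanged

    Whatever fraction of internal variability leaks into the forcing estimate is simply moved from the residual to the forced trend; the combined explanation of the total trend is unaffected, and only the forced/internal split shifts.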

  375. ATTP:

    I’m not suggesting dN = 0. Why would you think I was, and why is that relevant?

    You suggested that if there is true circularity there should be no residual. IOW -> you’re suggesting that if dF = a*dT + dN, then dF – a*dT = dN can only be a zero residual if dN = 0.

  376. aTTP,
    The models use very different forcings. Those can be seen in reference 35 of the paper we discuss. Thus we see that the various groups have combined their model development and estimation of forcings in a way that makes the final outcomes not too unreasonable in their representation of the history (combinations that are too unreasonable have been dropped).

    Given the same forcings, the models would not all produce a reasonable history. The circularity Nic has been discussing is essentially that each model is used with its own preferred forcings. When the combinations of forcings and models have gone through the selection process, the outcome is a set of model–forcing pairs that are very different, but all reasonable enough over the history. In the paper they study how this kind of set behaves in a specific test. Some of the forcings are certainly badly wrong, and some of the models are also certainly badly wrong; we just don’t know which are the best, and how good the best are. By looking at the statistical properties of an ensemble with many badly wrong members, we don’t know whether what we see is dominated by the worst of the models or not.

    When we don’t know in other ways how the selection process of the model–forcing pairs affects the statistics of the results, we don’t know either how it affects this analysis. We can also ask what the meaning is of the statistical properties of a set of highly different model–forcing pairs, where at best only a few can be in good agreement with reality. What is the reality this kind of set represents? What does this particular analysis tell us?

    Here is part of Figure 2 of the Forster et al. (2013) paper (J. Geophys. Res. Atmos., 118, 1139–1150, doi:10.1002/jgrd.50174), which is the source for the forcings.

    Here we see how much both the total forcing (grey) and the non-GHG anthropogenic forcing vary between the models. Some of them are badly wrong, but what is correct?

  378. Layman,

    You suggested that if there is true circularity there should be no residual. IOW -> you suggesting that if dF = a*dT + dN … ergo … dF – a*dT = dN can only be 0 residual if dN = 0.

    I can’t quite download the paper at the moment, but the equation in Marotzke & Forster was essentially
    dT = \dfrac{dF - dN}{\alpha} + \epsilon,
    where \epsilon was the residual, not dN. The equations in Forster et al. (2013) and in Marotzke & Forster (2015) were simply rewritten versions of the same equation. Hence, if there was true circularity, it seems that the residuals should have been 0.

  379. Robert Way says:

    WHT,
    “I know this because I actually work with the numbers. As I have said Cowtan&Way is a small correction, impacting TCR by 3 to 4% at best. In this post I said the correction was “subtle””

    ATTP,
    “Okay, 10% may be too much but, AFAIK, it only goes back to 1970, so I’m not sure what the effect over the whole instrumental temperature record would be”

    Using the long-kriged approach that we have on our website, the difference is ~10%, and nearly the same when using Berkeley. Accounting for a slight lag increases it again. Accounting for the underestimated volcanic forcing increases it again. Nic Lewis’s approach, when done using the updated temperature reconstructions with updated forcings (a test he won’t do), gives a much higher TCR.

    I’ve been on his case everywhere I see him comment about CW2014. He dodges the technical questions about where his issues lie, and instead he’s going in the direction of making it personal. I’m looking forward to having a real technical conversation with him on the issue if he’ll step up and be willing to have one, but to this point he seems to shy away.

  380. Robert,
    Thanks.

    I’ve been on his case everywhere I see him comment about his comments on CW2014.

    Yes, I was on his case a little on Ed Hawkins’s blog last weekend when he was suggesting that Ed shouldn’t use C&W because you and Kevin Cowtan are SkS activists. Not only do I think that’s an appalling thing to suggest in the first place; for someone associated with the GWPF to do so is doubly annoying.

    Pekka,

    Here we see how much both the total forcing (grey) and the non-GHG anthropogenic forcing vary between the models. Some of them are badly wrong, but what is correct?

    Sure, but I’m not quite sure what your point is. As I understand it, they determined the forced trend and residual for each model using each model’s estimate of its own forcings and climate sensitivity. Sure, some of them are wrong, but I’m not sure why that means they haven’t illustrated that the ensemble of models suggests we can explain the observed trend as a combination of a forced component and internal variability, with internal variability playing a smaller role when the timescale is long than when it is short.

  381. ATTP:

    I can’t quite download the paper at the moment, but the equation in Marotzke & Forster was essentially
    dT = \dfrac{dF - dN}{\alpha} + \epsilon,
    where \epsilon was the residual, not dN. The equations in Forster et al. (2013) and in Marotzke & Forster (2015) were simply rewritten versions of the same equation. Hence, if there was true circularity, it seems that the residuals should have been 0.

    If you regress dT on (a*dT + dN + e) and e = 0, you will only have a zero residual if dN = 0. This is still obviously circular.


  382. Some of them are badly wrong

    Of course they are wrong — these are models of climate that are trying to reproduce the variations of ENSO. Yet models of ENSO have not been even close to representative of the actual wiggles. So how can a model get it right?

    So what you do is use the ENSO as is to provide one of the contributing factors to natural variability:

    http://contextearth.com/2015/01/30/csalt-re-analysis/

    It’s really not that hard. As a benefit, Nic Lewis becomes irrelevant.

  383. Layman,

    If you regress dT on (a*dT + dN + e) and e = 0, you will only have a zero residual if dN = 0. This is still obviously circular.

    What? No, if you start with
    dF = \alpha dT + dN,
    and you then want to do the reverse,
    dT = \dfrac{dF - dN}{\alpha} + \epsilon,
    then the residual, \epsilon, is zero even if dN is not. Just to be clear, dN is NOT the residual. The residual is defined by Marotzke & Forster as \epsilon.

  384. The C&W improvement allows HadCRUT to more closely match GISS.

    My understanding is that these time-series are all converging. And when all of these use essentially the same SST time-series, then about 70% of the anomaly shares the same baseline. That’s because oceans cover 70% of the earth’s surface. So it is no surprise that the differences are like bread crumbs.

    So the Commonwealth countries can go with HadCRUT and the C&W corrections. As an American, I can go with GISS or NOAA/NCDC. As a radical, one can go with Berkeley 🙂

  385. Joshua says:

    Interesting that there is no “something extra” in the response.

    No guilt-by-association and no denigration of skills and abilities as seen in Nic Lewis’ blog post.

    Perhaps those “skeptics” who try to reverse-engineer from the tone of arguments to draw conclusions about the technical validity of those arguments should reconsider?

    Yeah. Like that’s going to happen, 🙂

  386. miker613 says:

    Response from Marotzke & Forster
    http://www.climate-lab-book.ac.uk/2015/marotzke-forster-response/
    Excellent. This is the way to do science!

  387. ATTP:

    The residual is defined by Marotzke & Forster as e.

    Last comment. The computed *regression* residual will be composed of any and all independent non-zero variables added to the rhs of the regression equation. Maybe I’m misunderstanding you. Your argument is that since the regression residual is not 0, there can be no circularity, correct?

  388. Joshua says:

    miker –

    ==> “This is the way to do science!”

    Gordon Hughes had some pithy comments about the Marotzke and Forster paper:
    The statistical methods used in the paper are so bad as to merit use in a class on how not to do applied statistics. All this paper demonstrates is that climate scientists should take some basic courses in statistics and Nature should get some competent referees.

    Is that science? And is that the way to do it?

    Go ahead, miker – criticize Nic for that bullshit. Give it a shot.

  389. BBD says:

    miker

    Excellent. This is the way to do science!

    Let’s try discussion in good faith while we are at it.

    What is Gavin Schmidt’s domain expertise?

    What field was his PhD in?

  390. Layman,

    Your argument is that since the regression residual is not 0 that this shows there can be no circularity correct?

    No, it was a question, not an argument. If there is no \epsilon term in the equations used to determine the forcings, why is it non-zero when they use the forcings to determine the trends if the situation is simply completely circular? The answer, I think, is that it’s not strictly circular as explained in the Climate lab book post. The estimate of the forcings in Forster et al. (2013) is not influenced by variability in the temperature trends driven by internal variability. Therefore when you determine the temperatures trends from the forcings using
    T = \dfrac{F}{\alpha + \kappa},
    there is a residual that represents internal variability.
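    To make that explicit, here is a toy version of the two steps in R (synthetic numbers rather than the actual CMIP5 diagnostics, using the simplified F = \alpha T + N and T = F/(\alpha + \kappa) forms above):

    set.seed(7)
    alpha <- 1.1; kappa <- 0.7                  # illustrative values
    Tmod  <- 0.01*(1:100) + 0.1*rnorm(100)      # a model temperature anomaly
    Nmod  <- kappa*Tmod + 0.2*rnorm(100)        # TOA imbalance with its own internal noise
    Fdiag <- alpha*Tmod + Nmod                  # step 1 (Forster et al. 2013 style)
    Tfrc  <- Fdiag/(alpha + kappa)              # step 2 (Marotzke & Forster style)
    eps   <- Tmod - Tfrc                        # the residual
    all.equal(eps, (kappa*Tmod - Nmod)/(alpha + kappa))   # TRUE, by algebra

    The residual vanishes only if N = \kappa T exactly, i.e. only if the TOA imbalance contains no internal variability of its own, so a non-zero residual is genuine information rather than an artefact of circularity.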

  391. After composing the SOI factor as a variability component in the GISS time-series, we can look at the residual.

    The top panel shows the residual error together with a scaled SOI time series. Note that there are some clear vestiges of the SOI that likely still exist in the residual. This is in spite of the correlation coefficient between the two being very close to 0.0 (the reason it is zero is that the SOI was already composed as a multiple linear regression using CSALT). In particular, look at the recent few years, where the SOI trend looks to exactly compensate the residual error. (poof) There goes the hiatus !!!

    So why didn’t the original SOI fit do a better job at reducing the residual error, if the eye can still detect some of this rather obvious correlation?

    One possibility is that epistemic measurement errors create anti-phase regions that preclude a better correlation. That is shown in the lower panel, where I gray out regions in which the residual error goes in the opposite direction to -SOI. Note that some of these regions occur during periods when measurement quality was highly suspect, such as during WWI and WWII. Other regions occur during volcanic eruptions such as El Chichón and Pinatubo, even though the aerosol factor was also compensated for by CSALT. The result is that a better correlation can’t be achieved as long as these anti-phase regions reduce the magnitude of the summed metric.

    The SOI value is also based on only two readings, those of atmospheric pressure at Tahiti and Darwin. These two form a sloshing dipole with a large negative correlation coefficient that would approach -1 for an ideal see-saw. But since the CC is only about -0.7, it is clear that one of the numbers is better than the other at different times. That is another example of epistemic error.

    Also many of the differences look like level shifts, which could be attributable to bucket corrections made to the raw SST readings over time.

    My point is that internal variability may be attributable to ENSO to a degree beyond our current ability to measure. Who is working this rather obvious angle?
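    For what it’s worth, the dipole point is easy to mimic with synthetic pressures (made-up series, not the actual Tahiti and Darwin records):

    set.seed(3)
    slosh  <- sin(2*pi*(1:600)/48)        # a common quasi-periodic sloshing mode
    tahiti <-  slosh + 0.5*rnorm(600)     # station noise at one end of the dipole
    darwin <- -slosh + 0.5*rnorm(600)     # anti-phase end of the see-saw
    cor(tahiti, darwin)                   # about -0.7 rather than -1
    soi <- as.numeric(scale(tahiti) - scale(darwin))   # standardized dipole index

    Even a perfect underlying see-saw gives a correlation well short of -1 once each station carries its own measurement noise, which is the epistemic error I am referring to.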

  392. miker613 says:

    “Go ahead, miker – criticize Nic for that”. I have no problem doing that – I thought it was gratuitous and pointless and nasty. I wish Nic Lewis wouldn’t throw in that kind of garbage; it detracts. I appreciated the tenor of the response, just business.
    I hope it continues in the same vein, instead of the way it started. And, I hope they end up agreeing, whatever they end up agreeing with.

  393. BBD says:

    The problem here, miker, is that you impugned Gavin Schmidt by asserting that he wasn’t a climate modeller or a mathematician, and therefore didn’t know what he was talking about and was simply peddling warmist propaganda (I paraphrase, but I’d advise not disputing this reading).

    When it turns out that you were flat-out wrong, you go into blankety-blank mode and refuse to acknowledge your false claims, let alone retract them.

    This places you in the same space occupied by Nic Lewis.

  394. miker613 says:

    BBD, if you’d like to make a point, feel free. Socrates died a couple of thousand years ago.

  395. miker613 says:

    BBD, given that I never said any of the things you claimed, or implied them, I can’t take your advice. I’m not responsible for your misreading my words.

  396. Joshua says:

    ==> ” I have no problem doing that – I thought it was gratuitous and pointless and nasty. ”

    Kudos. I have found it very hard to get “skeptics” to make that kind of statement. I ask them to do so often, but I can’t actually remember one having done it before (although I’m not ruling out the possibility that I’ve seen it).

    But here’s what I want to ask you to respond to now: What we talked about also shows poor reasoning – which IMO is more important than it being gratuitous, pointless, and nasty. It is fallacious to generalize from a potential error in a particular paper in the way that Nic did.

    It shows that Nic, as a smart and knowledgeable person engaged in this discussion, allowed his tribalism to infect his reasoning, to the point where he made blatantly bad arguments.

    So does that affect your confidence in his reasoning on issues where you have to place some trust in his expertise? How about with Stevie-Mac, for tacitly endorsing Nic’s bad thinking by putting up that post?

    Now it’s clear that you put that mechanism in play among “consensus” scientists. Your confidence in the expertise of “consensus” scientists in general is reduced when you see particular scientists engage in activism or make poor arguments.

    What isn’t clear to me is whether you apply that thinking when observing scientists who support the “skeptical” side of the discussion.


  397. davideisenstadt says:

    anders:
    what is your training and background in statistics?

  398. david,
    Why is that a question worth answering?

  399. miker613 says:

    “It is fallacious to generalize from a potential error in a particular paper in the way that Nic did.” Here you lost me, Joshua. I don’t know what is being generalized, or which fallacy you mean. I don’t know if M&F’s response is right; I couldn’t follow it (probably because I didn’t read either paper). As far as I know, Nic Lewis’s argument is still correct, but I’ve got my popcorn out.

    I acknowledge, though, that I find it harder to trust people who sling insults with their science. I wish they would all stop. Nic Lewis sure isn’t the only one.
    Now that (IMHO) real science is moving to the public arena, scientists had better learn how to operate here. Part of that is behaving like scientists, not blog commenters. We all know that scientists can be creeps in private life, but they generally don’t pull that kind of stuff in their published papers. Blogs might be different, but I hope more of them will get their acts together.

  400. BBD says:

    miker

    BBD, given that I never said any of the things you claimed, or implied them, I can’t take your advice. I’m not responsible for your misreading my words.

    You wrote this:

    “You believe that you know better how models are tuned than the authors of realclimate do.”

    No, VTG, I think that these modellers said so. They may know better than realclimate, no? It could be that the authors of realclimate misunderstand how easy it is to overfit by mistake – not being mathematicians.

    What did (does) Gavin Schmidt do?

    What subject is his PhD in?

    Answers here.

  401. Joshua says:

    miker –

    ==> “Here you lost me, Joshua. I don’t know what is being generalized, or which fallacy you mean.”

    I’m talking about this:

    … All this paper demonstrates is that climate scientists should take some basic courses in statistics and Nature should get some competent referees…”

    So does his argument imply that “climate scientists” (in general, let alone just these climate scientists) haven’t taken basic courses in statistics? I think so, and so what does it mean that he generalizes from a particular, potential error in a given paper to impugning the competence of climate scientists generally as well as the entire group of referees at Nature?

    That is activism. And it’s fallacious reasoning.

    ==> “Part of that is behaving like scientists, not blog commenters.”

    So this is a problem. “Skeptics” argue that traditional methods for qualifying expertise are antiquated and intrinsically flawed. I am actually quite sympathetic to those criticisms, although I think that when combined with binary thinking, they become quite empty criticisms (just because there are intrinsic flaws, that doesn’t mean that the traditional methods for qualifying expertise are invalid).

    But the problem is that “skeptics” want to have their cake and eat it too. They want blog articles to be considered on par with peer-reviewed journal articles, yet they engage in behaviors that would validly run afoul of exclusion criteria in a peer review process.

    “Skeptics” want to point to Nic Lewis as a shining example of a scientist who merits respect for his knowledge and technical chops, yet will then turn the other way when he displays behaviors that “skeptics” would say invalidate the work of “consensus” scientists even in editorials – let alone if they were embedded within their scientific product.

    Nic didn’t just slip some gratuitous, nasty, and pointless bullshit into an op-ed – he put it in a blog article that “skeptics” all over the “skept-o-sphere” are holding up as an exemplary work of science.

  402. miker613 says:

    @BBD, “not being mathematicians” specializing in VC Theory (https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_theory). Mathematics is a big field.

  403. miker613 says:

    As I said, I didn’t like that comment. On the other hand, if this is a mistake, it’s a pretty egregious one. It didn’t have to be Nic Lewis who pointed it out; lots of people saw and retweeted notices about the paper. It’s a post on Skeptical Science already.
    But all this needs to wait on resolution of the issue. M&F have responded. Surely others will pitch in as well.

  404. miker613,

    But all this needs to wait on resolution of the issue. M&F have responded. Surely others will pitch in as well.

    Except this is blog wars. If Nic Lewis were to publish a paper commenting on Marotzke & Forster, that might get noticed (it might not). Without that, this will probably stay mainly on blogs and maybe Booker or Delingpole will highlight it now and again, but that will probably be about it.

  405. BBD says:

    miker

    So despite your assertions being completely wrong, you refuse to acknowledge your mistakes. Again.

    Mathematics is a big field.

    Unlike climate modelling, in which GS is eminently qualified to speak with authority.

  406. miker613 says:

    “GS is eminently qualified to speak with authority.” Of course he can. So can those other modellers who disagreed with him. You seem to assume that he can’t be wrong, or can’t miss anything important. But that’s not how science works.

  407. miker613,
    The problem is, you said

    Realclimate is trying to push a narrative about the models which is not correct.

    Which is insulting and probably wrong. And then you said,

    It could be that the authors of realclimate misunderstand how easy it is to overfit by mistake – not being mathematicians.

    Which is also wrong, as BBD pointed out. RealClimate is probably the blog with the most experienced group of relevant experts. That doesn’t make them right, but dismissing them out of hand doesn’t seem appropriate either.

  408. miker613 says:

    “Except this is blog wars. If Nic Lewis were to publish a paper commenting on Marotzke & Forster, that might get noticed (it might not). Without that, this will probably stay mainly on blogs…” I don’t believe that for a minute. People are watching this.

  409. miker613,

    I don’t believe that for a minute. People are watching this.

    I don’t think scientists are going to cite a blog post and I wouldn’t be surprised if most are barely aware of this. It could well be highlighted over and over again on some blogs, but I doubt that will have all that much effect.

  410. davideisenstadt says:

    https://andthentheresphysics.wordpress.com/2015/01/31/models-dont-over-estimate-warming/#comment-47289

    “Why is that a question worth answering?”
    Because you seem to fail to grasp the notion that regressing a variable on itself results in a useless answer, that’s why.
    Because you seem to feel that the opinions of professional statisticians mean nothing; that retired mathematicians’ opinions mean nothing.
    Because you don’t seem to have even the most rudimentary grasp of what regression analysis is all about?
    That you appear to have absolutely no concept of what independent events are or what degrees of freedom mean?
    Because you choose to opine on critiques of statistical analysis, when it appears that your own knowledge of regression analysis is limited, to say the least?
    Those reasons spring to mind.
    So, the question remains: what is your training and background in statistics?

  411. david,
    Calm down – and that is only partly a joke. I’m not hugely interested in being lectured to by someone who is probably going to be wrong!

    Because you seem to fail to grasp the notion that regressing a variable on itself results in a useless answer, that’s why.

    The external forcings do NOT depend on T, therefore the analysis isn’t regressing a variable on itself if the external forcings are properly determined (or as well as they can be).

    Because you seem to feel that the opinions of professional statisticians mean nothing; that retired mathematicians’ opinions mean nothing.

    I didn’t say that, did I? (If you’re going to misrepresent me, it’s going to be a short visit.) I suggested that he seemed to have got advice only from a statistician and a retired mathematician and asked why he couldn’t speak to some actual climate scientists.

    Because you don’t seem to have even the most rudimentary grasp of what regression analysis is all about?

    Oh, I think I do – rudimentary at least 🙂 Do you understand basic physics?

    That you appear to have absolutely no concept of what independent events are or what degrees of freedom mean?

    Yes, I do, but that doesn’t appear quite as relevant as I think you’d want it to be.

    Because you choose to opine on critiques of statistical analysis, when it appears that your own knowledge of regression analysis is limited, to say the least?
    Those reasons spring to mind.

    No, I was actually pointing out that if the external forcings are reasonable representations of the actual forcings, then the regression is not T upon itself. Just because T is used to determine the external forcings doesn’t mean that they depend on T.

    So, the question remains: what is your training and background in statistics?

    Again, why is this relevant? What is your experience in physics and climate science specifically?

  412. miker613 says:

    “I don’t think scientists are going to cite a blog post” No, but if Lewis turns out to be right, the Nature paper will be withdrawn, no? Why would it matter that it was a blog post?

  413. jsam says:

    If Lewis thought he was right he’d have written to the journal. He’s playing to his crowd.

  414. miker613,

    No, but if Lewis turns out to be right, the Nature paper will be withdrawn, no? Why would it matter that it was a blog post?

    Well, I don’t think he is, and I think I’ve worked out an easy way to explain why. It is late, though, so the actual post may have to wait till tomorrow. Also, papers are normally retracted if they’re fraudulent or plagiarised, not simply because they’re wrong or have an error. I guess they could be retracted if they turn out to be so wrong that they have no value, but that would normally be at the request of the authors. If the authors disagree, then those who are arguing that it’s wrong would have to write their own papers to make their case. I think that unless you could show genuine scientific misconduct, a journal would not normally retract a paper simply because someone writes to the editor claiming to have found an error. Maybe there are exceptions to this, but I’m not really aware of any.

  415. miker613 says:

    “I guess they could be retracted if they turn out to be so wrong that they have no value, but that would normally be at the request of the authors. If the authors disagree, then those who are arguing that it’s wrong would have to write their own papers to make their case.” For sure I agree with that. I had thought – and still think – that the two sides can figure this out and come to a clear conclusion. Either the procedure dilutes the effect of alpha and kappa a lot, or it doesn’t. I would even think it ought to be testable, by repeating the exact calculations for a variety of values and ranges of the variables, and seeing what happens. You shouldn’t need to run climate models, just some regressions.

  416. miker613 says:

    “If Lewis thought he was right he’d have written to the journal.” Shucks – that takes months. This will be long over within days. If Lewis remains convinced that he’s right, and the authors don’t agree, then there’s time to write to the journal to get his point on record.
    The journal is not going to settle it, when Lewis has already posted and they have already responded. And I expect there’ll be a variety of responses at climateaudit in the next few days. Roman M is a regular there, and it’s Lewis’ post, and McIntyre already responded briefly…

  417. davideisenstadt says:

    [Chill. -W]

  418. MikeH says:

    Who are the “professional statisticians” that @davideisenstadt is referring to? It started off as one which I assumed was the economist Ross McKitrick. McIntyre is the other?

    If @miker613 were correct and papers are retracted because bloggers point out obvious errors in them, Ross McKitrick would have a much thinner publication record.

    If you are going to go down the “argument from authority” route, be careful who you choose as the authority.

    e.g.
    “Recipe for a hiatus”
    https://quantpalaeo.wordpress.com/2014/09/03/recipe-for-a-hiatus/

    “McKitrick screws up yet again”
    http://scienceblogs.com/deltoid/2004/08/26/mckitrick6/
    http://scienceblogs.com/deltoid/2010/04/06/mckitrick-at-it-again/

  419. David Young says:

    “In putting together this note, I have had the benefit of input from two statistical experts: Professor Gordon Hughes (Edinburgh University) and Professor Roman Mureika (University of New Brunswick, now retired).”

  420. Willard says:

    Interesting that Nic does not mention that one is CA’s janitor and the other wrote for the GWPF:

    http://www.thegwpf.org/gordon-hughes-why-is-wind-power-so-expensive/

    Let’s hope that does not qualify as activism: Nic can muster fighting words against this kind of extracurricular activity.

  421. As far as I can see, the two statistical experts were probably totally dependent on the framing of the issue presented by Nic. Nic’s likely error is not in the part of the issue statistical experts can judge, but in the framing presented to them.

    I have some ideas about how to make the whole process clearer, and about how to indicate where the disagreement between the authors of the paper and Nic lies. I’ll tell more if I succeed.

  422. Pekka,

    As far as I can see, the two statistical experts were probably totally dependent on the framing of the issue presented by Nic. Nic’s likely error is not in the part of the issue statistical experts can judge, but in the framing presented to them.

    Possibly, but that might be why you don’t normally insult the authors of a paper from a different field.

    I have some ideas about how to make the whole process clearer, and about how to indicate where the disagreement between the authors of the paper and Nic lies.

    So do I. I’ll write it up too, if I get a chance today.

  423. Sven says:

    ATTP: ” I suggested that he seemed to have got advice only from a statistician and a retired mathematician”
    No you didn’t. You said “…he’s had help/assistance from an economist and a retired mathematician”

  424. miker613 said on February 8, 2015 at 2:48 pm, in a partial reply to my comment on February 8, 2015 at 10:32 am and my earlier comments,

    “Unlike what KandA keeps posting, the idea is not that circularity is always a problem. The idea is that this kind of circularity in doing regression is going to be a problem.”

    Well, it seems to me that the kind of circularity Lewis and you and the rest seem to have a problem with would invalidate lots of valid regressions.

    It sounds to me as though, ultimately, what Lewis and you and the rest are claiming is that for any regression relation whose input is x and whose output is y, said regression is invalid if we can find or construct a formula f(y) that contains y and that we can set equal to the input x, as in f(y) = x.

    But there are lots of formulas out there in mathematics, science, and engineering that contain variables that are the output variables of regressions that have been done, and so it would seem to be the case that from that sea of formulas one can find or construct a formula satisfying the above for a whole bunch of regressions.

    And so it would seem that you all have just declared invalid a whole bunch of valid regressions.

    Perhaps Lewis and you and the rest could give a rigorous definition of “regressing a variable on itself” in the context of explicitly noting that variables y and f(y) are not the same variable.

  425. KandA,

    The claimed circularity goes as follows:

    y is first calculated from x based on a linear relationship adding some random fluctuations:

    y = c * x + e

    Then it’s checked whether x is linearly dependent on y.

    That’s all of it, nothing more. That’s what Nic has claimed. That would really make the analysis that finds x to be linearly dependent on y irrelevant.

    What Marotzke and Forster are saying in their response is that y was not determined as c * x + uninteresting random fluctuation, but that the deviation of y from c * x (or that of x from the linear function of y) contains useful information.
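
    To make the description above concrete, here is a minimal numerical sketch (Python/NumPy; the coefficient, sample size, and noise level are purely illustrative) of the alleged circularity: y is constructed from x plus noise, and x is then regressed on y. The fit mostly just recovers the construction, which is why, if this were really all that had been done, the regression would be uninformative.

        import numpy as np

        rng = np.random.default_rng(0)
        c = 0.5                          # assumed linear coefficient
        x = rng.normal(size=1000)        # stand-in driver variable
        e = 0.1 * rng.normal(size=1000)  # random fluctuations
        y = c * x + e                    # y constructed from x

        # Regressing x on y just recovers roughly 1/c; nothing new is learned.
        slope, intercept = np.polyfit(y, x, 1)
        print(slope)                     # close to 1/c = 2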

  426. I noticed that Nic has not accepted the response of Marotzke and Forster but maintains his position in recent comments at CA.

    I think that the issue is of such a nature that going more thoroughly through the arguments of both Nic and Marotzke and Forster is really warranted. The explanations that I have seen so far are not detailed enough.

  427. Pekka,
    But surely Nic’s claim that it’s circular is simply wrong? I’ve written a comment on Ed Hawkins’s blog that I won’t go through again here. But the claim that the external forcings depend on T is simply not correct. That’s not to say that the external forcings are exactly correct, but that T is used when determining them does not make them dependent on T.

  428. AndyL says:

    For those who don’t want to trawl through a long CA comment thread, here is the comment that Nic Lewis made following the M&F response:

    “To recap, there are only two relevant CMIP5 model Historical simulation outputs available and used here, T and N. The simple physical model used in the paper’s analysis and to diagnose F in the first place reduce algebraically to ΔT = ΔN / κ which, with an error term added, is my equation (6). Whether or not that equation has any significant explanatory power here (it doesn’t), it does not enable any separate estimate of ΔF to be made that would enable the relationship between ΔT, ΔF and α to be investigated. Jochem Marotzke and Piers Forster do not appear to have realised this when they undertook their analysis, and neither have they addressed the issue now that I have pointed it out.”

    Hopefully with both sides engaged this will end up either with agreement or very specific points of disagreement.

  429. aTTP,
    He does not claim that external forcings depend on temperature, but he claims that the estimate of forcings that Forster et al 2013 produced is derived from the temperatures that the same model produced.

    The main steps are:

    1) Create models and choose input forcings. That’s done by the modelers.

    2) Use the model to produce CMIP5 results.

    3) Forster et al take the data from CMIP5 archives, and work backwards to determine the forcings that the results imply. Temperature time series are essential input in that. The coefficients α and κ are also determined from the CMIP5 archived results.

    4) Marotzke and Forster again take data from the CMIP5 archives. They also take the results of step (3) above. They analyse this input in a different way, as they look at cross-model behavior rather than at each model separately.

    Nic is claiming that (3) and (4) are fundamentally circular. M&F say that they are not, because they look at the data from a different perspective.

    I cannot presently present arguments to support either one claim.

  430. It may also be clarifying to note that M&F have not studied the real climate; they have studied the CMIP5 archive of model-based results. The only connection to the real climate is in the figures, where model results are compared to HadCRUT4. Real climate and real forcings affect the analysis indirectly by having affected the models and the input forcings of the models, but M&F do not use that data; they use only data extracted from the CMIP5 archives, if I have understood correctly. That applies also to the data from Forster et al (2013).

  431. Pekka,

    Real climate and real forcings affect the analysis indirectly by having affected the models and the input forcings of the models, but M&F do not use that data; they use only data extracted from the CMIP5 archives, if I have understood correctly.

    Yes, and that’s kind of the point, isn’t it? They’re determining the forced trends and residuals from the climate models and comparing that with the observed trends.

    Okay, I’ll redo the argument I made on Climate Lab Book. Consider a single climate model. You run a 1% per year simulation to determine the climate sensitivity \alpha. Then you run some historical forced runs to compare with the observed temperature data. However, determining the external forcings in those runs is non-trivial, because they’re calculated self-consistently and because the system is responding to the imposed forcings. You can, though, get them from basic energy conservation. If there is a change in forcing dF in a model with a climate sensitivity of \alpha, then if the temperature change is dT, the change in the top-of-the-atmosphere flux, dN, should be

    dN = dF - \alpha dT.

    Therefore if you know \alpha, dT, and dN, you can estimate dF using

    dF = \alpha dT + dN.

    Could there be some issues with this? Sure, maybe climate sensitivity isn’t linear. Possibly, but it would be somewhat ironic for Nic Lewis to make this argument, as this is a fundamental assumption of his preferred energy balance method. What about internal variability? Well, you can rewrite the above as

    dF = \eta 4 \sigma T^3 dT + \lambda_{feed} dT + dN.

    So, if internal factors are producing some variability, it shouldn’t affect the above because any change in dT will be compensated for by a corresponding change in dN, keeping dF fixed.

    So, there’s no real reason why the above isn’t a valid way to estimate the forcings in the climate models. It also does not mean that dF depends on dT.

    All that Marotzke & Forster do is take the forcing history and use it to determine the trends in the different climate models, i.e.,

    dT = \dfrac{dF}{\alpha + \kappa},

    where \kappa dT is essentially the same as dN. The above, however, only gives the forced trends. To get the residual (internal variability) they simply add an extra term, \epsilon, giving

    dT = \dfrac{dF}{\alpha + \kappa} + \epsilon,

    and they determine \epsilon by regressing the above against the model trends.

    So, I’m failing to see where the circularity is. If you have a reliable/reasonable/accurate estimate for the external forcings, there is nothing wrong with then using that to determine the temperature trends.
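
    To illustrate those two steps end to end, here is a toy sketch (Python/NumPy) for a single synthetic model: diagnose the forcing from the model output via dF = \alpha dT + dN, then split dT into a forced part dF/(\alpha + \kappa) and a residual. All the parameter values are made up, and the residual is computed directly rather than via the cross-model regression that Marotzke & Forster actually use.

        import numpy as np

        rng = np.random.default_rng(1)
        alpha, kappa = 1.2, 0.7        # W m^-2 K^-1, illustrative values
        years = np.arange(100)
        F_true = 0.04 * years          # imposed forcing ramp (W m^-2)

        # Toy model output: forced response plus internal variability,
        # with N fixed by energy conservation, N = F - alpha*T.
        T = F_true / (alpha + kappa) + 0.1 * rng.normal(size=years.size)
        N = F_true - alpha * T

        # Step 1: diagnose the forcing from (alpha, T, N) alone.
        F_diag = N + alpha * T         # recovers F_true exactly, by construction

        # Step 2: forced response and residual (internal variability).
        T_forced = F_diag / (alpha + kappa)
        epsilon = T - T_forced
        print(np.allclose(F_diag, F_true))  # True: noise in T is offset by N
        print(np.std(epsilon))              # roughly 0.1, the injected variability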

  432. aTTP,
    M&F write after their formula (1):

    Each CMIP5 model simulates its own ERF time series over the historical period. These time series were diagnosed previously [Forster 2013]; if multiple realizations were available for a model, the ensemble average of the individual diagnosed ERF time series for this model was given and is used here.

    That seems to contradict your statement

    All that Marotzke & Forster do is to take the forcing history and use it to determine the trends in the different climate models.

    as you refer to the forcing history, not to a different forcing history for each model.

    Where Nic claims circularity is in the use of separate forcing histories derived from the same model for each model.

  433. Pekka,
    I’m failing to understand what the issue is. Yes, my example was just a simple illustration. My point was that there is no circularity in the regression, and circularity is what I understand Nic’s claim to be. The statement he makes in the Climate Audit post is,

    As is now evident, Marotzke’s equation (3) involves regressing ΔT on a linear function of itself. This circularity fundamentally invalidates the regression model assumptions. Accordingly, reliance should not be placed on any of the results in the Nature paper.

    which doesn’t seem to be the same as what you’re suggesting. I’m not convinced we should be moving on to the next criticism until we’ve established whether the one that supposedly completely invalidates Marotzke & Forster has been addressed.

    Also, what’s wrong with this,

    These time series were diagnosed previously [Forster 2013]; if multiple realizations were available for a model, the ensemble average of the individual diagnosed ERF time series for this model was given and is used here.

    Isn’t this simply producing a mean and a range for the forcing history for each model? All they want is some forcing history for each model run.

  434. Steve Bloom says:

    Followed Willard’s CA link. Hadn’t been there in quite some time. McI seems to have acquired a ranty tone, a wannabe Galileo pounding on the Inquisition’s door for attention.

    I’m looking for him to start making references to “Team Climate Science” any time now.

  435. aTTP,
    I really cannot understand how you don’t see the apparent circularity. To me it appears totally obvious and clear. That does not necessarily mean that the circularity is serious, as that depends on quantitative factors that I haven’t checked.

    The circularity is present technically, when:

    (1) Temperatures produced by the models are used in estimation of the forcing time series of each model (not as the only input, but as one of the inputs).

    (2) The forcing time series obtained by (1) are used in regression of the temperature time series.

    That’s technically circularity but, as I wrote above, that alone does not show that the circularity is a serious problem.

  436. Pekka,

    That’s technically circularity but, as I wrote above, that alone does not show that the circularity is a serious problem.

    I know that what you say is true, but the forcings DO NOT depend on T, therefore the regression in Marotzke & Forster is NOT regressing T on itself.

    Consider the following. I push a box along a frictionless track with a force that depends on time F(t). I measure the distance that the box travels in every time interval. I use those measurements to determine the velocity of the box at every time interval. From the velocities, I get the acceleration and from the acceleration I can get the force applied. However, this DOES NOT mean that the force applied depends on velocity. It’s external and therefore independent of all the other variables. Just because I’ve used these other variables to determine the force, DOES NOT make the force dependent on these variables. Oh, I’m not trying to shout, I’m simply trying to stress that point.

    My assertion is that the claimed circularity is, firstly, wrong, since F does not depend on T, and, secondly, that pointing out that F is determined using T does not mean that this creates some kind of circularity problem.

  437. aTTP,
    You must notice that we are not discussing empirical data or the real Earth. We are discussing model properties. Both the temperatures and the forcings used by M&F and Forster (2013) are model outputs. Both are used as input data to these papers in the order

    Model results, including temperature -> forcing estimates -> use in regression with the temperature.

    They are not using any external forcing time series; the only forcings they use are those that are part of the above chain. Also the temperature is the same. That’s circularity.

    Piers Forster confirms that in his latest comment

    Nic is right that deltaT does appear on both sides, we are not arguing about this we are arguing about the implications.

    He continues by telling that the circularity is not serious:

    We see the method as a necessary correction to N, to estimate the forcing, F. This is what we are looking for in the model spread, not the role of N – it would be more circular to use N as this contains a large component of surface T response.

    The question is not, whether there’s circularity, it’s whether that’s serious.

  438. Pekka,

    Both the temperatures and the forcings used by M&F and Forster (2013) are model outputs. Both are used as input data to these papers in the order

    Yes, I know.

    Model results, including temperature -> forcing estimates -> use in regression with the temperature.

    They are not using any external forcing time series; the only forcings they use are those that are part of the above chain. Also the temperature is the same. That’s circularity.

    Yes, I know, but since the forcings do not depend on temperature if energy is conserved, it should follow that even though the temperature is used to determine the forcings, the forcings DO NOT explicitly depend on temperature.

    You seem to be ignoring that the actual chain is

    Model results, including temperature AND TOA imbalance -> forcing estimates -> use in regression with temperature.

    Since the temperature and TOA imbalance should adjust in a way that ensures that the forcing is independent of temperature, the circularity really shouldn’t be important.

    The question is not, whether there’s circularity, it’s whether that’s serious.

    So, you’re suggesting that maybe energy isn’t conserved?

  439. aTTP,

    This is mathematics at this point. Using physical arguments is not likely to work.

  440. Pekka,

    This is mathematics at this point. Using physical arguments is not likely to work.

    Seriously? Well I think that’s absolute nonsense. If the climate models conserve energy then – by definition – the external forcings DO NOT depend on T. How does some mathematical/statistical argument trump that?

  441. Basically we have first

    F = N + α T

    Then we do regression

    T = a + b F + c α + d κ + e

    Using the first in the second and moving one term to the left hand side

    (1 – α b) T = a + b N + c α + d κ + e

    That seems to lead to problems if the coefficient of T can be close to zero. Thus we should perhaps not trust the results if the regression finds that b is close to 1/α, even in only part of the situations.
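
    To put rough numbers on that worry (a sketch in Python; alpha and the b values are purely illustrative): rearranged as above, reconstructing T means dividing by (1 – αb), so any error on the right-hand side is amplified by 1/(1 – αb), which blows up as b approaches 1/α.

        alpha = 1.2                    # so 1/alpha is about 0.833
        for b in (0.4, 0.7, 0.8, 0.83):
            amplification = 1.0 / (1.0 - alpha * b)
            print(f"b = {b:4.2f} -> 1/(1 - alpha*b) = {amplification:7.1f}")
        # b = 0.40 gives ~1.9; b = 0.83 (close to 1/alpha) gives ~250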

  442. aTTP,

    We are discussing regression. Regression is mathematics.

  443. Are not statisticians the worst at circularity? How about Bayesian updating — using previous values of an observable to update the observable. How is that right?

    Run, duck, and hide. 🙂

  444. verytallguy says:

    It might be more effective if both Nic Lewis and Pekka went to have this discussion at Climate Lab Book, where the study authors appear to be engaging, rather than having a second-hand discussion.

  445. Pekka,

    We are discussing regression. Regression is mathematics.

    Regression is just a tool. Personally, I think we’re discussing this because a bunch of statisticians don’t understand energy conservation.

    I think you’re forgetting something in your regression example above. Consider the following. I can get dF from dT and dN using

    dF = dN + \alpha dT.

    Okay, just energy conservation. The above is also true whether or not the model has internal variability, because it is simply energy conservation.

    Now, when I have my forcing timeseries, I can do the reverse. I can determine the temperature response using

    dT = \dfrac{dF}{\alpha + \kappa}.

    But, this dT is not the same as the dT in the first equation because it doesn’t, in the form above, include internal variability. It’s only the forced response. If I want to determine the level of internal variability, I can do so by adding a residual,

    dT = \dfrac{dF}{\alpha + \kappa} + \epsilon.

    Okay? But that doesn’t change that the dT in the above equation is still essentially the forced response (which isn’t the same as the dT in the first equation), plus some kind of residual representing internal variability.

    I still don’t see how any kind of mathematical argument changes the fact that it is perfectly reasonable to use a known forcing time series to determine the forced trend and then to estimate a residual, and I still don’t see how anyone can actually argue that it’s regressing dT on itself if dF is – by definition – independent of dT.

  446. Willard says:

    > You must notice that we are not discussing empirical data or real Earth.

    No, we’re discussing data produced by simulations, which in turn take as input some real data. So this is not a purely mathematical relationship either.

  447. I’ll post Ed Hawkins’s comment as this is, essentially, what I’ve been trying to get at for the last however many hours.

    the crucial aspect is that N (as calculated in the models) contains a component related to T. So, the correction made to N to obtain F is trying to remove that T influence, rather than adding it.

  448. aTTP,

    Regression is just a tool and the paper of M&F is just a paper based on the use of that tool.

    The paper is pure mathematical analysis done using the specified tools. Nic’s question is whether the tool is valid in this case.

  449. AndyL says:

    aTTP: This comment on CA seems relevant to your point about dT in your second equation being different to dT in your earlier one. If it is not to the point, please ignore.
    Back to the popcorn – this is an interesting discussion

    “…
    According to Profs M&F, you are confusing two different entities which are both called temperature. (You aren’t but I think that this is the basis for their argument rejecting circularity.) The first entity is represented by the actual temperature (anomaly) observed in the given GCM. This temperature anomaly is made up of two components, the first of which, Tf, is the forced response including temperature-dependent feedbacks, and the second of which, Tnv, are surface temperature variations caused by “natural variability” in the GCM. By assumption in Forster’s energy balance model, the restorative flux responds linearly to surface temperature change; the model does not care what caused that temperature change. Hence, the restorative flux is represented as a simple linear function of both components of this observed temperature. Hence, in order to estimate the forcing from the net flux time series, it is necessary to adjust the net flux using the total actual temperature change observed in the GCM. So the derived adjusted forcing (AF) value is given by:-
    F(t) = N(t) + α*T(t)
    = N(t) + α * (Tf(t)+Tnv(t))
    Where F, N, Tf and Tnv all denote changes in values from some initial theoretical steady state, T(t) = Tf(t) + Tnv(t), and the time series, N and T, come directly from the GCM results. (None of this will come as any surprise to you, but their response seems to imply that it should.)
    Now Profs M&F want to separate out the forced change in temperature plus associated feedbacks, from the “natural variability” change in temperature in the same GCM. The model they use to do this assumes that
    ΔTf = ΔF/(α + κ)
    I think that they are arguing that the ΔTf is not the same animal as T(t) above, since the natural variation component is now excluded. If we substitute the (exact) expression for the derived ΔF, we obtain:-
    ΔTf = [N(t) + α*T(t)]/(α + κ)
    Hence, since the temperatures on the LHS and RHS are different animals, they reject your argument of circularity. Bingo.
    In reality, the problem has not gone away at all, since the actual regression itself is not against ΔTf, but against a mean shifted T(t).

  450. AndyL.,
    I think I agree with most of that, but am not sure I get this

    In reality, the problem has not gone away at all, since the actual regression itself is not against ΔTf, but against a mean shifted T(t).

    I don’t think this is true. If they’re trying to determine the residual, then they should be regressing against the actual model temperatures, not some mean-shifted T(t). Well, unless this just means that they’ve re-defined some kind of baseline, but I can’t see how that would matter.

  451. Willard says:

    > The question of Nic is whether the tool is valid in this case.

    Or more generally whether regression is invalid in cases such as M&F’s.

    You can’t simply pound the table with “M&F can’t do this, they just can’t” if you pretend it’s a mathematical discussion.

    OTOH, most discussions in linguistics end up like this, so mileage varies.

  452. Willard says:

    Also notice the comments about clarity at the Auditor’s. The escape route is being prepared, in case of need. Nic is clear, while M&F aren’t. M&F’s lack of clarity made Nic do it.

    Hopefully, M&F will still make sense by the end of the week.

  453. Willard,
    This paper describes and uses a mathematical method. When combined with the earlier paper of Forster et al, all that research is technically just manipulation of numbers picked from a repository. Physics enters only in the motivation of the work and in the final comparison with HadCRUT4 temperatures. Everything else is mathematics.

    It’s typical for mathematical analysis that there are regions where the methods do not work. In this case my present impression is that even that applies only to the reconstruction of temperatures from the results of the regression, because the only apparent problem from the circularity is that the coefficient of the temperature may be very small or even zero at some point. A very small coefficient for the temperature means that the coefficients that are used to calculate the temperature are very large, leading to an undefined outcome.

    So far I don’t know whether the analysis gets even close to the problematic situation. The regression itself is stable. Therefore the problem may apply only to the reconstruction of the temperature, possibly not even to that.

  454. Willard,
    Yes, I was noticing that. I’ve left a comment there, but it’s still awaiting moderation. Although, it was me complaining about being called dishonest, so – in retrospect – I probably shouldn’t have bothered.

  455. Pekka,

    Physics enters only in the motivation of the work and in the final comparison with HadCRUT4 temperatures.

    Come on, it also enters into the calculation of dF which is motivated by energy conservation, one of – as I understand it – the fundamental laws of physics.

  456. aTTP,
    In these papers the physics does not enter, except as motivation. They are pure mathematical manipulation, and the question is whether the mathematics is applied under conditions where it works.

  457. Joshua says:

    A couple of observations.

    Nic’s post at Climate Audit builds from an assertion of obvious and fundamental errors that should have been caught by people with basic knowledge of statistics to insult the authors, all climate scientists and all of the reviewers at Nature.

    A) Now in this thread, what I think that I see is that Pekka has said he has spent a fair amount of time looking at the F&M article and Nic’s criticism, and thinks that the criticism might be valid but it’s kind of hard to tell. I can think of two ways to go with that. One is that Pekka lacks basic knowledge of statistics, and should probably just bow out of the discussion lest he do further discredit to himself. The second is that Nic was wrong and included a personal attack in his blog article that is based on fallacious reasoning. That would mean that he was tribal in his assessment, and that as a result “skeptics” all across the “skept-o-sphere” should be explaining that just as in the Climategate emails, Nic’s tribalism has discredited his work, and thus his work should not be trusted (FYI, I won’t hold my breath on that one!).

    B) Related to tribalism. It’s interesting that Nic sought out consultation with experts in statistics, but as far as I can tell only sought out consultation with statisticians who are clearly aligned on one side of the great climate change divide. If the problems were so obvious and so fundamental, why didn’t he consult with some non-aligned statisticians? Wouldn’t that have strengthened his case? Wouldn’t that have enhanced the discussion?

  458. Rachel M says:

    In these papers the physics does not enter, except as motivation. They are pure mathematical manipulation, and the question is whether the mathematics is applied under conditions where it works.

    I dunno Pekka. I’m no expert but I suspect that pure mathematicians probably wouldn’t regard it as pure mathematics.

  459. Willard says:

    > It’s typical for mathematical analysis that there are regions, where the methods do not work.

    Show me an example, Pekka.

  460. Joshua,
    My comment shows how the circularity enters the regression. We can see that all regression parameters multiply variables that have significant variability. Therefore the regression is not hampered by the circularity. The situation is not nearly as bad as Nic claims.

    One problem remains. The coefficient of temperature may be very small in some cases. Therefore there may be situations where the results of the regression lead to large uncertainties in the calculation of the temperatures from the results of the regression. It’s possible that this effect affects the spread of predictions from the regression seen in Figure 2b over the years 1950–70 (Figure 2 is shown in aTTP’s post). That’s at least a possible consequence of this issue. (M&F propose other possible reasons for the effect, but only propose.)

    If my above proposal is correct, it might also contribute to the somewhat smaller increase in the variability of the latest predictions, and to the variability over most of the full period in the case of the 62-year trends.

    Thus I do not think that the whole analysis is affected as strongly as Nic claims, but the circularity might have some influence. Certainly it would be nice to know whether the coefficient of T is ever small, and if it is, how much influence that would have on the results. Checking that would be possible either from full information from the original calculation or by repeating that calculation while recording the relevant coefficients.

  461. Willard,

    Take any book on numerical analysis. You’ll always find warnings about using methods beyond the region of their validity or accuracy. One typical reason for that is that some resulting coefficients grow very large (in this case the inverse of the factor that multiplies temperature after rearranging the regression).

  462. Rachel,
    I don’t know whether answering your comment is necessary.

    I used “pure” to mean “nothing but”.

  463. Look at this paper and supplement from last year by Santer et al on how to compensate for the variability. Check the figure on the statistical removal of volcanic and ENSO signals:

    “Volcanic contribution to decadal changes in tropospheric temperature”
    https://dspace.mit.edu/openaccess-disseminate/1721.1/89054
    http://www.nature.com/ngeo/journal/v7/n3/extref/ngeo2098-s1.pdf

  464. David Young says:

    As a mathematician by training I can give an example. If the viscosity of a flow is resolved by the grid, centered differences are stable. For flows of practical interest, where the viscosity is say 7 orders of magnitude smaller than the convection terms, centered schemes are unstable and a very poor choice. This is purely a mathematical result and the “physics” of fluids is irrelevant.

  465. miker613 says:

    @Joshua “I can think of two ways to go with that…” Kind of a bizarre comment follows. My general impressions so far are: (a) everyone expert enough in the math (M&F included) agrees that Nic Lewis’ comment would be totally on the mark, if M&F were really doing what he was claiming. (b) M&F are claiming they aren’t doing that, but it’s been pretty hard for everyone to figure out their explanation of what they say they are doing exactly. Everyone is continuing to try.
    Anyhow, neither of your “two ways” seems to have anything to do with what’s happening. Keep that popcorn handy.

  466. Willard says:

    > Anyhow, neither of your “two ways” seems to have anything to do with what’s happening.

    Joshua presents two basic facts, MikeR.

    Here’s the first one:

    Nic’s post at Climate Audit builds from an assertion of obvious and fundamental errors that should have been caught by people with basic knowledge of statistics to insult the authors, all climate scientists and all of the reviewers at Nature.

    Here’s the second one:

    Pekka has said he has spent a fair amount of time looking at the F&M article and Nic’s criticism, and thinks that the criticism might be valid but it’s kind of hard to tell.

    How do you reconcile the two facts exactly?

  467. Joshua says:

    ==> “Kind of a bizarre comment follows. My general impressions so far are: (a) everyone expert enough in the math (M&F included) agrees that Nic Lewis’ comment would be totally on the mark, if M&F were really doing what he was claiming.”

    Nic’s comment stated that the problems were so obvious and indisputable that they justified criticisms of the knowledge level of the authors, climate scientists in general, and all of Nature’s reviewers.

    The implication was that anyone who didn’t agree rather easily that the paper is obviously and fatally flawed is either ignorant of basic statistics or (it is implied) lacking in integrity. But yet, you say:

    “it’s been pretty hard for everyone to figure out their explanation of what they say they are doing exactly.”

    Even accepting the spin you put on your description – it lies at odds with Nic’s assertion that the flaws are incredibly obvious such that they bring into question the authors’ (and Nature’s reviewers and climate scientists in general) competence and/or integrity.

    As for that last part about integrity, notice Nic’s rhetorical flourish.

    “I have a high regard for Piers Forster, who is a very honest and open climate scientist, so I am sorry to see him associated with a paper that I think is very poor,”

    Interesting logic, that. He associates honesty and openness with writing a poor paper. Why go there? The implication of that rhetoric (with, of course, a plausible deniability of which RPJr. would be proud) is that Forster’s involvement in the paper is somehow incongruous with his “honesty” and openness.

    Keep in mind that in the past, Nic has charged that Forster either “tacitly” or deliberately allowed misuse of his data

    http://judithcurry.com/2011/07/05/the-ipccs-alteration-of-forster-gregorys-model-independent-climate-sensitivity-results/#comment-83248

    – with the (IMO implausible) explanation that he made that charge rather than contact Forster to raise questions because he didn’t want to put Forster in an uncomfortable situation.

    ——————

    And all of this takes place in the context where, for years, “skeptics” have been arguing that the tribalism that they see in “Climategate” (a tribalism that I agree is there, btw) invalidates the work of climate scientists as a group.

    Do you think that Nic’s overt tribalism (what else explains gratuitous, nasty, and pointless attacks mixed into a scientific analysis) invalidates his work? Should we generalize from his tribalism to assume that the work of all “skeptics”‘ is invalid?

  468. Willard says:

    > If the viscosity of a flow is resolved by the grid, centered differences are stable. For flows of practical interest, where the viscosity is say 7 orders of magnitude smaller than the convection terms, centered schemes are unstable and a very poor choice. This is purely a mathematical result and the “physics” of fluids is irrelevant.

    The first sentence doesn’t follow from mathematics alone, David. Nor is “practical interest” of any mathematical interest. The only mathematical “result” you can cling to is in the second sentence, and is not even spelled out. In my opinion, this is the “result” that Nic hasn’t spelled out.

    That’s not how mathematicians should proceed to settle a mathematical dispute.

    That’s polemic almost all the way down.

  469. Willard says:

    > I don’t know whether answering your comment is necessary.

    I wanna tell you a little secret, Pekka: her hubby’s a math guy. He intervened here once, and MikeR kinda folded. You want her to ring him again?

  470. Willard,

    One clear example is the amplification from positive feedback of strength f. The result is 1/(1-f) for f less than 1. For f > 1 the behavior of the system becomes very different. This case may well have a similarity to that simple example.
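
    For what it’s worth, a minimal sketch (Python; purely illustrative) of that feedback example: the partial sums of the series 1 + f + f^2 + … settle at 1/(1 – f) when f < 1 and run away when f > 1.

        def partial_sum(f, n_terms=60):
            # Partial sum of the feedback series 1 + f + f^2 + ...
            total, term = 0.0, 1.0
            for _ in range(n_terms):
                total += term
                term *= f
            return total

        print(partial_sum(0.5))   # ~2.0, i.e. 1/(1 - 0.5)
        print(partial_sum(1.5))   # enormous: the series diverges for f > 1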

  471. miker613 says:

    @Willard “How do you reconcile the two facts exactly?”
    Well, here’s an earlier comment from Pekka that may answer your question: https://andthentheresphysics.wordpress.com/2015/01/31/models-dont-over-estimate-warming/#comment-47216 “Having a further look at the paper my conclusion is that the paper does not analyze its methods to tell what we can really learn from the results. The issues that Nic Lewis mentioned are enough to make it highly unclear what the results mean. A paper should discuss that sufficiently to clarify these issues, but this paper does not do that.”

    Paraphrasing: Neither Pekka nor anyone else could tell exactly how the methods worked, prior to the comment on Climate Lab. It sounded like a big problem. Now that we have feedback from the authors, there may be enough information to work out whether the issue is a problem or not.
    Even if it turns out that the new description of the methods takes care of the issue, still neither of Joshua’s choices is correct.


  472. David Young says:
    As a mathematician by training I can give an example. If the viscosity of a flow is resolved by the grid, centered differences are stable. For flows of practical interest, where the viscosity is say 7 orders of magnitude smaller than the convection terms, centered schemes are unstable and a very poor choice. This is purely a mathematical result and the “physics” of fluids is irrelevant.

    WHUT’s with the Foo Young? Why must we always suffer with your own personal problems of dealing with complexity?

  473. Willard,
    I did notice the connection of Rachel with mathematics, and that may be what made me formulate my answer like that. (But who knows his own intuitive motives.)

  474. Joshua says:

    Pekka –

    For the benefit of myself and Miker. You have spent some time looking at the paper and Nic’s criticism.

    Do you think that there is no question that there are obvious and fundamental flaws in the paper? (I know that you don’t like yes/no questions – but that is a yes or no question. Nic didn’t say that there were questions as to whether the paper was based on flawed statistics. He said that the flaws were so obvious and fundamental that they not only called into question the authors’ skills, but also supported impugning climate scientists as a group and all of Nature’s reviewers.)

    If so… Do you think that if you had read the paper before reading Nic’s criticism, the fatal flaws would have jumped out at you in such a fashion that you would feel justified (technically if not morally, ethically, or strategically) in impugning the authors’ skills, climate scientists in general, and all of Nature’s reviewers? (Even though I realize that you’re inclined to be tribal in that way – my question is whether you think it would be justified.)

  475. David Young says:

    Willard, the viscosity is the coefficient of the second derivative term in the equation. Everything I said is purely mathematical. You can ask Gavin, as I’m sure he took numerical analysis in school. The reference is the old book by Richtmyer and Morton, where it is proved at the advanced undergraduate level.

  476. Willard says:

    > The result is 1/(1-f) for f less than 1. For f > 1 the behavior of the system becomes very different.

    Thank you, Pekka. Now, what would be your mathematical prescription? I don’t think you can formulate one from a “purely” mathematical point of view. To do that, you need a domain of interpretation, where “does the tool do the job it’s supposed to do?” makes sense. At best you could argue that this belongs to applied maths.

    The only “pure” alternative is to prove that what M&F are looking for is not possible, coherent, consistent, etc. This requires a formal proof, which should already exist since it’s so trivial we should take disciplinary action. Even then, the pure alternative requires that this trivial result applies to what M&F did. Even then, it becomes more judiciary than deliberative, if I can borrow from the jargon of rhetoric.

    Seen in that light, Nic presents his case as if he were prosecutor, judge, and jury.

  477. Willard says:

    > Neither Pekka nor anyone else could tell exactly how the methods worked, prior to the comment on Climate Lab. It sounded like a big problem. Now that we have feedback from the authors, there may be enough information to work out whether the issue is a problem or not.

    This doesn’t cohere with Nic’s story, MikeR. According to Nic, there was enough information to retract the paper.

    Nice try, though.

  478. Willard says:

    > The viscosity is the coefficient of the second derivative term in the equation.

    Is viscosity a mathematical concept, David?

  479. Joshua says:

    miker –

    This kind of response is why, although sometimes I think that you engage in good faith, sometimes I think that you don’t:

    ==> “Neither Pekka nor anyone else could tell exactly how the methods worked, prior to the comment on Climate Lab. It sounded like a big problem. Now that we have feedback from the authors, there may be enough information to work out whether the issue is a problem or not.”

    You have transformed the issue. Nic stated that the flaws were so obvious that it justified not only questioning the competence of the authors, and further climate scientists in general and all of Nature’s reviewers – but also justified what you described as a nasty, gratuitous, and pointless attack mixed into a scientific “publication.”

    But here you say that neither Pekka nor anyone else can reach the conclusion that Nic said was obvious. You say that there isn’t enough information to justify a conclusion that Nic said was so obvious that it called into question the competence of the authors, climate scientists in general, and all of Nature’s reviewers.

    Here’s what I think, miker. I think that you are willing to go only so far in your criticism of Nic. You are willing to agree that his tribalism is sub-optimal, but you are unwilling to acknowledge that his tribalism was the result of fallacious reasoning, and also unwilling to apply the same judgement standards to the implications of his behavior that you apply to the implications of similar behavior from scientists on the other side of the great climate change divide.

    Now in the past, I have asked you to walk stuff back…and I felt that was the point where the good faith of our exchange was extinguished. I’m getting the impression that we’ve reached that point again here.

  480. David Young says:

    Willard, viscosity is commonly used in numerical analysis to denote the coefficient of the second derivative term, as in “artificial viscosity”, a very commonly used technical term.

  481. miker613 says:

    As I said, Joshua, everyone competent in the math thought it looked like a serious problem. Read more of Pekka’s original comments, here or at climateaudit. You seem to be trying to hold everyone responsible for seeing the Climate Lab post before it arrived.
    I really think that at this point it makes sense to let the principals settle it before doing post-mortems. Are you really going to try to say, “Well, maybe M&F were dead wrong (if it turns out that way), but Nic Lewis was even wronger, shooting his mouth off before the rest of us made up our minds?” Just doesn’t seem useful, or that relevant; just a way to score points or something.

  482. Willard says:

    I see, David. You’re referring to this concept:

    > The three basic properties of viscosity solutions are existence, uniqueness and stability.

    http://en.wikipedia.org/wiki/Viscosity_solution

    If a viscosity solution is always stable, why worry about instability?

    You seem to be conflating mathematical and mathematized. M&F have not published a mathematical result.

    I also think you may not know what circularity means. Ask Nic about that one.

  483. miker613 says:

    Willard, you seem to be playing the same game as Joshua: Nic Lewis makes a claim, M&F have a response, we’re now waiting for Nic Lewis or Roman M or whomever to respond.
    In you jump, saying, Yes, but Nic Lewis already forfeits because he didn’t anticipate M&F’s response!
    Sorry, but most of us are going to care more about how the thing works out in the end. In other words, who is actually right.

  484. Willard says:

    What Joshua is saying holds whatever the merits of M&F, MikeR. Either their mistake is overly stupid, or it’s not. Nic’s infamia pitante presumes it is. Pekka testified it’s not.

    As you say, grab the popcorn.

  485. Joshua,

    The circularity in the analysis that Nic noticed is in my view real enough, and potentially significant enough, to deserve some consideration by the authors and a comment in the paper. How that comment should have been formulated depends on what they would have found out. Looking at that would probably have been relatively easy if they had thought about it in advance of the actual analysis.

    Another issue that Nic brought up as his motivation to look at the paper more closely is the surprising result that the parameters α and κ have very little effect on the 62-year trends. That’s a very surprising result to me as well. After all, the estimates of the forcing should not differ so much between the models, meaning that the temperature changes should be roughly inversely proportional to α + κ.

    Thus we can conclude that the models considered are a strange set:
    – they do not contradict the climate history too strongly (they do differ significantly, but not that much)
    – their outcome depends relatively strongly on the forcing used
    – their outcome depends very little on α and κ

    That result is surprising enough to make me wonder whether it is an artifact of something specific to the analysis, like
    – a weakness in the methodology, or
    – a property of the CMIP5 model result data ensemble that is not representative of more general classes of models (this might be due to some biasing selection criteria used by the modelers, probably without their knowing what was done).

    I’m more prepared to believe that some effect like the above has affected the outcome than that warming produced by climate models is not strongly influenced by α and κ.

  486. Joshua says:

    ==> “As I said, Joshua, everyone competent in the math thought it looked like a serious problem.”

    And so there, you’ve gone so far in the opposite direction of walking back, that you reproduced Nic’s attack. In order for your statement to be true, the authors and the reviewers and anyone who questions the severity of the problem are “incompetent in math.” As near as I can tell, Pekka says that after looking he can’t determine the severity of the problem – so for your logic to hold, Pekka is incompetent in math.

    ==> ” “Well, maybe M&F were dead wrong (if it turns out that way), but Nic Lewis was even wronger, shooting his mouth off before the rest of us made up our minds?”

    They are separate issues. And you keep conflating them. You are, essentially, ignoring the point that I have already made a number of times. I have to say that it is frustrating, and although I don’t think that it merits the kinds of attacks I often see made against you, I understand why your lack of engagement engenders that kind of response.

    From what I see, you are unwilling to hold Nic accountable for his tribalism, yet you seem to be one of the “skeptics” who is very comfortable impugning “consensus” scientists in all manner of ways because of tribalism.

    You call his attack gratuitous, pointless, and nasty, but seem to me to be unwilling to engage with why a scientist would engage in a gratuitous, nasty, and pointless attack in a scientific “publication” — based on fallacious reasoning.

    You acknowledge that the magnitude of the problem is unclear to people who have the technical chops and who have looked at the issues, but won’t connect the dots to Nic’s justification of insults based on what he claimed were clear, obvious, fundamental, and fatal flaws.

    But I’m repeating myself. Guess I’ll catch you on another thread.

  487. Willard says:

    Thank you, Pekka.

    I think we can also commit this other technical comment on the models involved:

    http://neverendingaudit.tumblr.com/post/108050690409

    We could adapt it to the ones at hand, if need be.

  488. Joseph says:

    Well, Miker, if it is only Lewis that thinks there is some fundamental mistake in the paper that requires that it be retracted, I think I would conclude that Lewis is most likely the one who is wrong. I really can’t see either side conceding any points at this point. And the study was peer reviewed and published in a very prestigious journal. I don’t think that we are likely to come to the resolution that Lewis seems to prefer.

  489. Joshua,

    In the ideal case the authors of a paper observe all potential problems, check how they (may) affect the outcome, and report, at least in the supplementary material, what they have found out.

    The reality is never ideal, but perhaps the paper was too far from that.

  490. Joshua says:

    Pekka –

    ==> “The circularity in the analysis that Nic noticed is in my view real enough, and potentially significant enough, to deserve some consideration by the authors and a comment in the paper.”

    That’s why I asked yes/no questions – because I anticipated that in a response not so constrained, you would include things like “potentially” and “real enough” and “deserve some consideration” – which don’t really address the questions that I’m asking. I would have preferred it if you had addressed the questions that I’ve asked.

    “Potentially,” and “real enough” and “deserve some consideration” are not consistent with what miker described as nasty, gratuitous, and pointless.

    Maybe my lack of conciseness is to blame – but if so then reread willard’s 6:46 and 7:31 comments. Other than that, I’m tired of going ’round in circles.

  491. Joshua says:

    ==> “In you jump, saying, Yes, but Nic Lewis already forfeits because he didn’t anticipate M&F’s response!”

    miker- looks like you have a stock of whole cloth.

    ==> “Sorry, but most of us are going to care more about how the thing works out in the end. In other words, who is actually right.”

    I’m sorry that I’ve been excluded from the “most of us” club, who care about how “the thing works out in the end.”

    But I would suggest that there are different “things” in play, and from where I sit, you are just flat-out refusing to address one of them – Nic’s tribalism and the implications thereof.

  492. verytallguy says:

    Miker,

    everyone competent in the math thought it looked like a serious problem

    (our emphasis)

    who is actually right.

    Why, everyone, of course.

  493. Pekka,
    I think you’re wrong about the circularity being a real problem, but given that my laptop has died, I’ve explained why many times already, and I can’t quite work this tablet, I’ll leave it there.

  494. Joshua says:

    Joseph –

    ==> “Well, Miker, if it is only Lewis that thinks there is some fundamental mistake in the paper that requires that it be retracted..’

    For all I know, there may well be fundamental mistakes in the paper that require it to be retracted. Let the science go where it goes.

    I am asking about the “something extra,” whereby Nic asserts that the flaws were so obvious that their existence justified nasty, gratuitous, and pointless behavior – whereby he impugned the competence (if not integrity) of the authors, climate scientists in general, and all the reviewers at Nature.

    What kills me about all of this is that many people who just won’t hold folks like Nic accountable then hand-wring about the use of “denier” because it’s nasty, gratuitous, and pointless – and on that basis draw implications about the science of people who use the term “denier.”

  495. aTTP,

    In my view you have repeated several times rather similar arguments that do not apply to this particular analysis. You have something else in mind and keep on commenting on that.

  496. Joseph says:

    Right, Joshua, but you have to draw conclusions on the best available evidence. And for me, not being familiar with the technical details, the best available evidence would be that other experts in the field (especially the experts associated with Nature) have no problem with the methods used by Marotzke in their paper, even after considering the comments by Lewis. I always try to qualify what I say with something like “most likely” because I can’t be certain.

  497. Pekka,
    Yes, but I don’t know how to explain it any other way. You also keep saying that, too. I also think Joshua’s been making a valid point that you seem to be ignoring.

  498. Willard says:

    I might as well drop my trump, since I have other things to do today:

    http://plato.stanford.edu/entries/hilbert-program/

    The “but pure maths” argument may presuppose a conception of maths that was refuted more than 75 years ago. Semantics matter, even in pure maths. Incidentally, semantic considerations are at the heart of most circular arguments I know, which have belonged to the skeptic toolkit since the beginnings of academic skepticism:

    http://www.tandfonline.com/doi/abs/10.1080/05568641.2011.634246

    AT’s argument can’t be dodged with “yes, but pure maths”.

    Enjoy this ClimateBall episode,

    W

  499. Willard,
    You might add Bourbaki to your list

  500. miker613 says:

    Joshua, I guess I’m less concerned with the “something extra” stuff than you are. Near as I can tell there’s a lot of it going around. I don’t like it but tend to ignore it.
    I’ll stay with that issue of: is the paper right or wrong?

    Joseph, I don’t mean to be sarcastic, but the world is not waiting on either you or me to make up our minds. The issue is still ongoing and will probably be worked out in the next days or weeks. Those of us who can’t participate in the details will just watch.

  501. Joshua says:

    miker –

    ==> “Joshua, I guess I’m less concerned with the “something extra” stuff than you are.”

    You misunderstand me. I’m not “concerned” about the “something extra.” Perhaps it would be more accurate to say that you’re less focused on it.

    My point of interest is what the “something extra” reveals in how people approach these issues. I see much hand-wringing from partisans about the damaging effects of the “something extra” – inevitably however, concern is only about folks on the other side of the great climate change divide.

    In the case of many “skeptics,” concern about “something extra” is practically a raison d’être. What does it mean, then, when their concern is so selective? My god man, how many electrons have been spent talking about the tribalism of Climategate – and I would guess that you’ve spent a few yourself.

    It’s human nature. Human nature doesn’t really “concern” me. It is what it is. But if the goal is to “care about how things work out in the end,” I think that the human tendency towards flawed reasoning should be on the table. Nay. I think it has to be on the table.

    Otherwise, IMO, it’s just parallel play – kind of like a collection of dog packs, all simultaneously chasing their tails (sameolsameol).

  502. miker613 says:

    Joshua, you’ve convinced me. Skeptics are politicians. Believers in AGW are politicians. You’re all politicians. I don’t trust politicians.
    Not sure this is helpful.

  503. David Young says:

    Willard, Google “artificial viscosity” to see the numerical analysis definition.

  504. Steve Bloom says:

    Self-hate is never pretty to see out in the open, miker.

  505. Joseph says:

    So, Miker, can you sketch out a future scenario where you will be satisfied with the result concerning this paper?

  506. Willard says:

    > Google “artificial viscosity” to see the numerical analysis definition

    In thy Wiki page referenced above:

    In the modern approach, the existence of solutions is obtained most often through the Perron method. The vanishing viscosity method is not practical for second order equations in general since the addition of artificial viscosity does not guarantee the existence of a classical solution. Moreover, the definition of viscosity solutions does not involve any viscosity of any kind. Thus, it has been suggested that the name viscosity solution does not represent the concept appropriately. Yet, the name persists because of the history of the subject. Other names that were suggested were Crandall-Lions solutions, in honor of their pioneers, L^\infty-weak solutions, referring to their stability properties, or comparison solutions, referring to their most characteristic property.

    http://en.wikipedia.org/wiki/Viscosity_solution#History

    Thanks to Pekka, we should also note that Bourbaki’s program excludes most of soft analysis, algorithmic content, heuristics, category theory, combinatorics, and mathematical applications in general:

    http://en.wikipedia.org/wiki/Nicolas_Bourbaki#Appraisal_of_the_Bourbaki_perspective

    From my perspective, their refusal to consider pictorial representations is a big no-no.

    ***

    So your example may not even be pure maths, after all. But let’s assume it is. How would you adapt your example to explain Nic’s argument, which rests on a literalistic (which I compared above to formalism) interpretation of M&F’s results?

    As far as I am concerned, unless you dig into the domain of application, there is no case yet. Besides, one does not simply provide a math KO by handwaving to the circles of Mordor.

  507. miker613 says:

    “So, Miker, can you sketch out a future scenario where you will be satisfied with the result concerning this paper?” Why is this difficult? This isn’t politics. I imagine they’ll work it out. Wait for it to settle down.
    I enjoy Pekka’s refereeing, by the way.

  508. miker613 says:

    Steve Bloom, self-hate?

  509. Steve Bloom says:

    You’re a politician, miker. Trust me on that.

  510. Joshua says:

    Miker –

    I don’t understand what that comment (9:23) meant.

  511. dhogaza says:

    Miker:

    “I imagine they’ll work it out. Wait for it to settle down.”

    Just like it has with the multiple paleo reconstructions that show recent temperatures rising in a hockey-blade shape, right?

  512. miker613 says:

    Dhogaza, this is a much more focussed issue.

  513. Joseph says:

    Why is this difficult?

    Yes, why is it difficult for you to imagine a satisfactory result to this episode? I am looking for some details about that result, Miker, if you aren’t understanding my question.

  514. miker613 says:

    “I don’t understand what that comment (9:23) meant.” I meant that the natural outcome of your inquiries is to make all the public folks involved in climate science, on both sides, look like politicians rather than scientists. Of course, many of them are, indeed, politicians, or are explicitly PR folks, and no one in their right mind trusts what they say without checking.
    But there are other ones who are scientists. Most of those are on the pro-AGW side, though not all. They have a big natural advantage: the public trusts scientists a whole lot more than politicians. IMHO, they are fools to cast away that natural advantage in favor of illusory PR gains.
    However, they seem to think they know better, and keep right on focussing on PR, and undermining the trust that they should have. They will call what I’m saying “concern trolling”.

    But whatever – I’m concerned. I think that realclimate should have a post on the “pause” that admits the serious problems it poses: see BEST’s post on the “pause” if you want a lesson, or James Annan’s posts. Since they don’t, I trust BEST and I don’t trust realclimate. Their fans will continue to cheer for them, and keep posting their links that prove that there is no “pause”, but the result is that they don’t have credibility out my way. BEST does, because it admits truths whichever way they point. Find me a realclimate post that admits that the skeptics are right about something.
    Nic Lewis is the same. He’s a hero to the skeptics, because he has articles in peer-reviewed journals that challenge part of the “consensus”, but then he goes and behaves like a politician. I think that costs him much more than he gains.

  515. miker613 says:

    Joseph, I don’t know what will happen. I’m guessing that in the next several days, some of the very competent folks out there will come up with ways to test these things out. We’ll start seeing Monte Carlo simulations of the regressions involved, or who knows what. Pekka did a good job a little while ago working out some issues on climateaudit about some of the statistical methods in PAGES2K: http://climateaudit.org/2014/10/11/decomposing-paico/
    I found it completely convincing, as would anyone (unless they don’t read climateaudit, of course).
    What can I say? I see plenty of issues get settled, though you have to follow both sides to see it. Otherwise, you just see posts by your side and never hear about the rebuttals – see the recent comment by dhogaza (10:57) for a good example. He wasn’t too specific, though.

  516. JCH says:

    Probably more like Antarctica, rage rage rage, cannot be warming, well, a little, but mostly it’s, wait, look over here and watch us rage. Rage rage rage.

    Meanwhile Miker613, what did Antarctica do?

  517. Joseph says:

    Pekka did a good job a little while ago working out some issues on climateaudit about some of the statistical methods in PAGES2K:

    And how did that resolve anything? I don’t understand the issues discussed, but did anyone involved acknowledge a mistake? What makes that a satisfactory outcome for you?

    We’ll start seeing Monte Carlo simulations of the regressions involved, or who knows what.

    Do you think someone is going to end up acknowledging their mistake, and if, in your mind, Marotzke’s results are proven to be fundamentally flawed (because of what Lewis and McIntyre say), will you be satisfied if they don’t retract their paper?

  518. miker613 says:

    If their results are proven to be fundamentally flawed – not just according to Lewis and McIntyre, but because it becomes obvious following the discussion between them and Pekka and Nick Stokes (eventually) and ATTP and even M&F themselves – then, yes, I have no doubt that they will withdraw the paper. I don’t know Forster, but Lewis said he is a good scientist, so that is what he would do. Why would you doubt it? Do you think so badly of every climate scientist?

    JCH, no idea what you want from Antarctica. I don’t know why several commenters here feel that they have to follow the Socratic method. If you have something to say, say it. If you’re referring to Steig, you sure picked a bad example. You may want to read Robert Way’s comments about him here: http://climateaudit.org/2013/11/20/behind-the-sks-curtain/
    Indeed, anyone following that issue knew how it turned out – except those who don’t read both sides.

  519. Willard says:

    >If their results are proven to be fundamentally flawed

    And if Nic’s criticism is proven to have no merit, say because it’s a strawman, what happens, MikeR?

  520. dhogaza says:

    mr:

    “Dhogaza, this is a much more focussed issue.”

    Not at all, the focus is on discrediting climate science, as a field. You could argue that this is a less important data point in that fight, but the focus is no different.

  521. dhogaza says:

    Mr:

    “If their results are proven to be fundamentally flawed – not just according to Lewis and McIntyre, but because it becomes obvious following the discussion between them and Pekka and Nick Stokes (eventually) and ATTP and even M&F themselves – then, yes, I have no doubt that they will withdraw the paper.”

    One very interesting aspect of the denialsphere is how d-word people like miker collapse the entire field into disagreements between a handful of people. It’s like they’re entirely unaware that there are thousands of scientists at work …

    Meanwhile, mike, would you like to bet money on the paper being withdrawn? I’m prepared to meet you to the high five-digit level, if you are.

  522. Joshua says:

    It strikes me that with this whole affair, we not only have open and proud tribalism on the part of Nic Lewis, we also have open and proud “pal review” via his hand-picked team of statisticians/attack dogs that he can quote to parlay his tribalism into impugning climate scientists and journal reviewers alike.

    It’s quite amazing how closely “skeptics” mirror the developments that so concern them – like the constant stream of sociopath analogies (Lysenko, McCarthy, Genghis Khan, etc.) to express their “outrage, outrage I say” at the use of analogies to sociopaths.

  523. Joshua says:

    ==> “Not at all, the focus is on discrediting climate science, as a field. You could argue that this is a less important data point in that fight, but the focus is no different.”

    Indeed. The more I think about it, the more apparent it becomes. Nic could have been content to criticize the paper. He might have gotten input from non-aligned statisticians. But his overt tribalism actually lays bare his focus. Why else would he draw such improbable, poorly based conclusions?

    The amusing part is that miker wants to create an exclusive club of those who care about “how the thing works out in the end,” which would actually have to exclude Nic.

    “The thing” is not really Nic’s focus.

  524. dhogaza says:

    Miker:

    “I don’t know Forster, but Lewis said he is a good scientist, so that is what he would do.”

    Note miker’s criteria for “who is a good scientist”. Lewis’s endorsement …

  525. Joshua says:

    ==> “Note miker’s criteria for “who is a good scientist”. Lewis’s endorsement ”

    A “good scientist” who, Nic previously said, either “tacitly” or overtly accepted the misuse of his data and findings.

  526. What I wrote yesterday about the M&F paper stops halfway. To understand the paper better, we must understand what the paper is really about.

    As far as I understand the paper correctly, the idea is based on the assumption that the temperature variations (or trends over 15 years and 62 years) that the CMIP5 models produce can be summarized by the formula

    ΔT = a + b ΔF + c α + d κ + e

    where
    – ΔT is the temperature change or trend of the model (or CMIP5 run) considered
    – ΔF is the change or trend in ERF (effective radiative forcing) over the same period as deduced from CMIP5 database in Forster et al (2013)
    – α is the (constant) feedback parameter of that model as deduced from CMIP5 database and reported in Forster et al (2013)
    – κ is the (constant) ocean heat uptake parameter of that model as deduced from CMIP5 database and reported in Forster et al (2013)

    a, b, c, and d are regression coefficients that are determined by fitting the above formula to the CMIP5 based data of all model runs included in the study ensemble. e is the unexplained residual that’s minimized in the determination of the regression coefficients.

    The same procedure is repeated independently for every 15-year and 62-year period that the overall period includes.
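
    For concreteness, here is a minimal sketch in Python of that fitting step, with synthetic stand-in numbers (the values and variable names are invented for illustration; this is not M&F’s data or code):

        import numpy as np

        rng = np.random.default_rng(0)
        n_models = 75  # illustrative ensemble size

        # Synthetic stand-ins for the per-model quantities (invented values)
        dF    = rng.normal(0.3, 0.1, n_models)    # trend in diagnosed forcing, dF
        alpha = rng.normal(1.1, 0.3, n_models)    # feedback parameter, alpha
        kappa = rng.normal(0.7, 0.2, n_models)    # ocean heat uptake parameter, kappa
        eps   = rng.normal(0.0, 0.05, n_models)   # internal variability
        dT    = dF / (alpha + kappa) + eps        # temperature trend, dT

        # Multiple linear regression: dT = a + b*dF + c*alpha + d*kappa + e
        X = np.column_stack([np.ones(n_models), dF, alpha, kappa])
        coeffs, *_ = np.linalg.lstsq(X, dT, rcond=None)
        a, b, c, d = coeffs
        e = dT - X @ coeffs  # the residual, minimized by least squares
        print(a, b, c, d, e.std())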

    The next observation is that F was determined in Forster (2013) as

    F = N + α T

    where N is the TOA imbalance included in the variables of CMIP5 data base, while F itself is not included. F is not directly an externally defined forcing used in model calculations but a model result defined by the above formula (lacking better ways of determining the forcing from CMIP5).

    Thus the formula used in regression can be rewritten (as I did before)

    (1 – b α) ΔT = a + b ΔN + c α + d κ + e

    This formula is effectively used by M&F to determine the coefficients a, b, c, and d for each period. The resulting coefficients are supposed to tell how strongly the temperature variations depend on ΔF, α, and κ according to the first formula of this comment.

    The problem of circularity arises from the fact that N is actually used rather than an independent F, because the value of F is not determined directly, but by the approximate formula given above. It’s expected that temperature variations grow with increasing ΔF and decrease with increasing α and κ, i.e. it’s expected that b > 0, c < 0, and d < 0. If b α > 1 the coefficient of ΔT is negative in the regression formula. Such cases might affect in a strange way all coefficients obtained as some of the models (those that have highest α) work against the others. This problem does not arise, if the resulting b is always so low that b α < 1 for all models. (If many of the values are close to 1 some lesser problems may arise).

    Intuitively I would expect that something more dramatic would show up in the results of M&F if the problem were serious in this particular analysis, but that's only a first intuitive guess. It's also possible that this issue leads to cancelling effects that force the coefficients c and d to be always small (a surprising result of the M&F paper), but that's not my first intuitive guess.

    As you can see, the above is totally based on the formulas chosen by Forster (2013) and M&F. It's not necessary – or appropriate – to confuse this logic with physical arguments, which are not part of the actual calculation. I used earlier the expression pure mathematics; perhaps I should have used the more accurate expression pure computation. Starting values are from a database, formulas are given. Results follow from that. Variable F is not a real forcing, it’s a derived construct (ERF) defined in Forster (2013), motivated by physics, but not an externally given forcing.

  527. In the above, the inequalities should read b > 0, c < 0, and d < 0.

  528. Pekka,
    I must admit that I’m not quite sure where you’re getting your formula from as the actual one is initially

    \Delta T = \dfrac{\Delta F}{\alpha + \kappa} + \epsilon,

    which they expand to

    \Delta T + \Delta T' = \dfrac{\Delta F}{\alpha + \kappa} + \dfrac{\Delta F'}{\alpha + \kappa} - \dfrac{\Delta F \alpha'}{(\alpha + \kappa)^2} - \dfrac{\Delta F \kappa'}{(\alpha + \kappa)^2} + \epsilon.

    Here’s where I think the problem is. You’re assuming that you can simply replace \kappa \Delta T with \Delta N, but I don’t think that’s quite right. The term \kappa \Delta T is an approximation for the TOA imbalance when the only thing operating is an external forcing (i.e., it closes the energy budget when there is a forcing applied followed by a temperature response). It doesn’t include the variability which is included in the formula used to determine \Delta F. So, I don’t think it’s true that

    \kappa \Delta T = \Delta N.

    What is true is

    \kappa \Delta T = \Delta N_{forced}.

    So, it seems to me that the substitution that you and Nic Lewis want to make takes the formula from one that depends on an external factor (\Delta F) and two model constants (\alpha and \kappa) and replaces it with a term that includes both the forced response and internal variability. Not only does this not allow you to answer the question posed by Marotzke & Forster (since you can’t extract the forced response), it also isn’t correct since you’ve included the variability both in \Delta N and in \epsilon.
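
    To spell out that distinction (an intermediate step I’m adding for clarity, in the notation already used here): splitting the imbalance as \Delta N = \kappa \Delta T + \Delta N_{var} and combining it with energy balance gives

    \Delta F - \alpha \Delta T = \Delta N = \kappa \Delta T + \Delta N_{var},

    so that

    \Delta T = \dfrac{\Delta F}{\alpha + \kappa} - \dfrac{\Delta N_{var}}{\alpha + \kappa},

    which recovers the starting equation with \epsilon = -\Delta N_{var} / (\alpha + \kappa). Substituting the full \Delta N for \kappa \Delta T therefore folds the variability term into what is supposed to be the forced response.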

    So, I think both you and Nic are criticising the paper for not doing what you wanted them to do, rather than criticising them for doing what they did do incorrectly. Your substitution would completely change what they were doing, and I don’t think that is a valid criticism of a paper.

  529. Joshua

    It strikes me that with this whole affair, we not only have open and proud tribalism on the part of Nic Lewis, we also have open and proud “pal review” via his hand-picked team of statisticians/attack dogs that he can quote to parlay his tribalism into impugning climate scientists and journal reviewers alike.

    I think that the original post of Nic represented overconfidence and arrogance. He noticed a real issue, but he had not looked at its implications carefully enough to justify presenting it as more than an issue that may be significant – or not.

  530. aTTP,
    I’m using the actual formula of the analysis, not the physical formula that acts only as motivation for the actual formula.

    This is the problem in all your thinking. You don’t look at the actual analysis, you discuss something that is in your mind.

  531. Pekka,

    This is the problem in all your thinking. You don’t look at the actual analysis, you discuss something that is in your mind.

    Seriously, you’re going to start playing this game now? I really thought better of you. I’m out. Why don’t you go and play at Climate Audit where they encourage this type of behaviour. I’d rather not have it here. I’ll happily read what you write there and acknowledge my error if you can convince me, but I’m not interested in discussions with those who think that’s an appropriate way to respond.

  532. Pekka,
    Maybe think about this. The equation

    \Delta T = \dfrac{\Delta F}{\alpha + \kappa},

    can produce a forced temperature timeseries for a given time interval without needing to consider any of the model outputs (\Delta T and \Delta N) when doing so. The term \Delta F has already been computed and represents an externally imposed forcing timeseries and the terms \alpha and \kappa represent model constants as long as the time interval isn’t too long. Think about this, at least!
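
    To make that concrete, a trivial numerical sketch (made-up numbers, nothing from the paper):

        import numpy as np

        # Hypothetical per-model constants and external forcing changes (invented)
        alpha, kappa = 1.1, 0.7               # W m^-2 K^-1
        dF = np.array([0.2, 0.5, 0.9])        # externally imposed forcing changes, W m^-2

        # Forced temperature response, computed without any model output (dT, dN)
        dT_forced = dF / (alpha + kappa)
        print(dT_forced)                      # ~[0.11, 0.28, 0.50] K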

  533. And as for where I got the formula more specifically: from the M&F paper; I only made it easier to type by using a, b, c, d, and e. It’s the unnumbered formula above formula (4), which presents the result of the regression.

    The real analysis of the paper is based on regression, and this is the formula used in the regression.

  534. aTTP,

    Of course your formulas are valid formulas for many physical considerations, but this thread is specifically about the M&F paper. Thus it’s natural to check what they have done, not what some other physical considerations tell us.

  535. There’s actually a somewhat bigger error in my lengthy post. It seems that I cut off some of what I wrote (using cut-and-paste when I should have used copy-and-paste). Perhaps aTTP can make the correction.

    The whole paragraph should be as follows

    The problem of circularity arises from the fact that N is actually used rather than an independent F, because the value of F is not determined directly, but by the approximate formula given above. It’s expected that temperature variations grow with increasing ΔF and decrease with increasing α and κ, i.e. it’s expected that b > 0, c < 0, and d < 0. If b α > 1 the coefficient of ΔT is negative in the regression formula. Such cases might affect in a strange way all coefficients obtained as some of the models (those that have highest α) work against the others. This problem does not arise, if the resulting b is always so low that b α < 1 for all models. (If many of the values are close to 1 some lesser problems may arise).

  536. Now I understand. The error is due to using < as a symbol when I should have used the HTML entity &lt;.

    Once more

    The problem of circularity arises from the fact that N is actually used rather than an independent F, because the value of F is not determined directly, but by the approximate formula given above. It’s expected that temperature variations grow with increasing ΔF and decrease with increasing α and κ, i.e. it’s expected that b > 0, c < 0, and d < 0. If b α > 1 the coefficient of ΔT is negative in the regression formula. Such cases might affect in a strange way all coefficients obtained as some of the models (those that have highest α) work against the others. This problem does not arise, if the resulting b is always so low that b α < 1 for all models. (If many of the values are close to 1 some lesser problems may arise).

  537. In reply to my above comment on February 9, 2015 at 9:33 am, Pekka Pirila said,

    “KandA,
    The claimed circularity goes as follows:
    y is first calculated from x based on a linear relationship adding some random fluctuations:
    y = c * x + e
    Then It’s checked, whether x is linearly dependent of y. That’s all of it, nothing more.”

    Also,

    “The problem of circularity arises from the fact that N is actually used rather than an independent F.”

    Your last claim of circularity does not seem consistent with Lewis’ claim of circularity. Lewis actually originally wrote, given by ATTP on February 9, 2015 at 12:43 pm:

    “As is now evident, Marotzke’s equation (3) involves regressing dT on a linear function of itself. This circularity fundamentally invalidates the regression model assumptions.”

    Lewis claims circularity on variable dT, not N, as you do. That is, he says that the output of the regression is dT and the input of the regression is a linear function of dT; this input of the regression we can write as f(dT) = dF = α*dT + dN (the right-most expression was given by Lewis).

    You wish this to be about *only* mathematical structure? OK. But this means that this is, at least in part if not entirely, about underlying relations or mappings or functions between sets of elements that are the domains of variables.

    Note that two variables x and y connected under a one-to-one mapping or function describe this following circularity, regardless of which variable we choose to be the input variable: We have two one-to-one mappings, x -> y and its inverse y -> x, where in the first mapping x is the independent variable and y is the dependent variable, and in the second mapping x is the dependent variable and y is the independent variable.

    There has been much talk about variables dependent on or independent of other variables. And so note that the above means that on the questions of which variable is dependent on the other and which is independent of the other, there is more than one right answer in each case, since it depends on which mapping we are addressing. That is, it is *not* the case that one is fixed as independent of or dependent on the other.

    Note that we can say that x and y are one-to-one functions of each other. Note also that if this one-to-one mapping is also linear, then these two variables are linear functions of each other.

    It seems to me that Lewis is claiming one of the following two:

    (1). If we have two variables connected under a one-to-one mapping given by some linear formula – which means that each of these variables according to said formula is a linear function of each other, as, for example, f(x) = y = ax (think of all the formulas out there in mathematics, physics, and engineering that are similar in general form), then we can never do a regression such that one of these variables is the input of the regression and the other is the output of the regression (we would then be regressing a variable on a linear function of itself). That is, if we ever find that two variables are connected in this way under a one-to-one mapping that is also linear, then a regression relation between the two is forever outlawed – and this includes all prior regressions between the two that were considered valid before finding that the two variables are so connected. (To connect to the debate at hand: Everything is invalid *merely* because there exists a linear formula containing dT equal to dF.)

    (2). It’s not OK to regress a variable on a linear function of itself in this instance even though it’s OK in some other instances.

    If it’s (2), then he needs to explain why it’s not OK in this instance but OK in some other instances.

    So, which is it?

    One final note: If Lewis claims that it’s not *merely* the existence of a linear formula containing dT equal to dF causing the problem but instead, that said formula was used, then I have this reply: The substitution property of equality is now banned? Don’t forget that this is supposed to be *only* about mathematics, no physics allowed.

  538. KandA,
    The circularity is in T, not N. That’s clear also from my more detailed discussion. That’s also consistent with my short argument. Your interpretation of it is not correct.

  539. Willard says:

    > Thus it’s natural to check, what they have done, not what some other physical considerations tell.

    I am not sure we can reduce this ClimateBall episode to “checking” unless we have in mind a more physical concept, e.g.:

  540. Joshua says:

    Pekka –

    Re: your 9:22. Thanks for the clarification. I understand that you’re generally reluctant to make that kind of statement.

  541. Pekka Pirila said, in reply to the part of my comment February 10, 2015 at 11:09 am on his use of the variable N in his comment on February 10, 2015 at 9:02 am, in which he said, “The problem of circularity arises from the fact that N is actually used…”

    “The circularity is in T, not N. That’s clear also from my more detailed discussion. That’s also consistent with my short argument. Your interpretation of it is not correct.”

    OK. So let’s stick to T. So which of my (1) and (2) is Lewis actually claiming, and since you’re somewhat defending Lewis, which of my (1) and (2) are you actually claiming? These cover all the logical possibilities of what Lewis is claiming, under the two categories of universal quantification and existential quantification, and so note that it would be incorrect to say “neither”.

    If you disagree with these choices of (1) and (2), then note again Lewis’ language, “linear function of itself”, and note that when we have variables x and y under a one-to-one mapping, each is a one-to-one function of itself by the definition of one-to-one functions and inverse functions, and if these one-to-one functions are also linear, then each is a linear function of itself by the definition of linear functions (and also by the definition of one-to-one functions and inverse functions).

    It seems to me that he’s claiming (1), since his actual language that I quoted via ATTP’s comment was a blanket statement, and so it seems to imply the universal or general case.

  542. KandA,

    It’s OK to have the variable of the left-hand side also on the right-hand side, if it’s certain that the coefficient on the right-hand side is always less than one by a safe margin. If that’s not the case, the regression is likely to produce spurious results. (A coefficient always well above 1 is also stable, but probably leads to spurious results, as it’s likely to indicate that the model being used is not what it was thought to be.)

    From the paper of M&F it’s not clear, whether the regression is safe, and if it is not safe, it’s impossible to tell from the paper, how serious the effect is. The authors of the paper have to the best of my knowledge not reported on the safety anywhere. It’s still unknown to me, whether the regression has been safe or whether the results are significantly affected by this issue.

    It’s perhaps useful to notice that the regression formula is not linear in ΔT and α, as it contains the product of these variables. It is, however, linear in the regression coefficients, and that’s technically enough for the regression, but the product term is the source of the problems discussed.
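
    As a toy check of that safety condition (entirely synthetic; the “true” coefficients below are invented), one can generate an ensemble from the rearranged relation, fit the regression, and see how close b α gets to 1 across the ensemble:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100

        # Invented 'true' coefficients for a toy ensemble obeying
        # (1 - b*alpha)*dT = a + b*dN + e   (c and d set to zero for simplicity)
        a_true, b_true = 0.05, 0.6
        alpha = rng.uniform(0.8, 1.5, n)      # feedback parameters across models
        dN    = rng.normal(0.4, 0.1, n)       # TOA-imbalance trends
        e     = rng.normal(0.0, 0.02, n)
        dT    = (a_true + b_true * dN + e) / (1.0 - b_true * alpha)

        # Equivalent forward form: dT = a + b*dF + e, with dF = dN + alpha*dT
        dF = dN + alpha * dT
        X = np.column_stack([np.ones(n), dF])
        (a_hat, b_hat), *_ = np.linalg.lstsq(X, dT, rcond=None)

        # The safety diagnostic: is b*alpha below 1 by a comfortable margin?
        print(b_hat, (b_hat * alpha).max())   # here max(b*alpha) is ~0.9 - borderline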

  543. Willard says:

    Why shouldn’t any “tautological” regression be unsafe, Pekka?

  544. Why shouldn’t any “tautological” regression be unsafe, Pekka?

    Actually a better question than I first thought.

    One more way of looking at this particular regression is to rearrange my above formula to give the value of the residual e:

    e = (1 – b α) ΔT – a – b ΔN – c α – d κ

    This formula is used to calculate the residual for each model run separately. Then the sum of squares of the residuals is calculated. The result is a function of the regression coefficients. The regression estimate of the coefficients is found as the set of coefficients that minimizes the sum. That step is always well behaved for the present problem, because at least some of the models surely have, for every period, significantly nonzero values for ΔT, ΔN, α, and κ.
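
    A compact sketch of that minimization (synthetic inputs again; scipy’s generic least-squares routine is used purely for illustration and need not match how M&F actually computed the fit):

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(2)
        n = 100

        # Synthetic per-run inputs (invented values)
        dT    = rng.normal(0.15, 0.05, n)
        dN    = rng.normal(0.40, 0.10, n)
        alpha = rng.uniform(0.8, 1.5, n)
        kappa = rng.uniform(0.5, 1.0, n)

        def residual(params):
            """e = (1 - b*alpha)*dT - a - b*dN - c*alpha - d*kappa, per model run."""
            a, b, c, d = params
            return (1.0 - b * alpha) * dT - a - b * dN - c * alpha - d * kappa

        # Minimize the sum of squared residuals over (a, b, c, d)
        fit = least_squares(residual, x0=np.zeros(4))
        print(fit.x)  # the regression estimates of a, b, c, d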

    Thus the issue is really whether the situations I have described as problematic indicate a failure of the regression model to describe the CMIP5 models properly, in cases that may strongly affect the result of the regression.

    I must think about that more, but I don’t want to delay this answer further.

  545. Pekka,
    You do realise that they’re actually doing a regression of \Delta T' against \Delta F', \alpha', and \kappa', and that the prime indicates the variation from the ensemble average? As I see it, all they’re doing is using the known spread in model values for \Delta F, \alpha, and \kappa to determine the externally forced and unforced (internal variability) contributions to the model trends.

  546. AndyL says:

    aTTP
    Referring to above exchange:
    Pekka: This is the problem in all your thinking. You don’t look at the actual analysis, you discuss something that is in your mind.
    aTTP: Seriously, you’re going to start playing this game now?

    I think you are wrong to take offence at Pekka’s words, given he is responding in what is clearly not his first language. It is similar to the way people jumped on his use of the term “pure” mathematics.
    It seems clear that what Pekka meant was that we should consider the actual analysis used by M&F, not the underlying physics which is in your mind but was not used in the paper. I think he was referring specifically to your post at 9:21.

  547. AndyL,
    Possibly, but starting something with “this is the problem in all your thinking…” would seem to be universally regarded as a bad way to start.

    It seems clear that what Pekka meant was that we should consider the actual analysis used by M&F, not the underlying physics which is in your mind but was not used in the paper. I think he was referring specifically to your post at 9:21.

    Firstly, I’m not quite sure how – or why – you would separate the physics from the analysis. The physics is a crucial part of the assumptions that they make when doing the analysis. I don’t think you can critique their analysis unless you’ve understood the underlying assumptions.

    But anyway, it doesn’t matter. I’m trying to take a step back and think about this a bit more. Clearly pointing out that energy conservation is important and a fundamental law hasn’t helped.

  548. AndyL says:

    aTTP,
    While stepping back, could you consider whether you agree with Pekka that there would be circularity (fatal or not) without the energy conservation? We can then focus on the area of (dis)agreement.

    Also, Pekka’s last para on Climate Lab seems relevant:
    “As you can see, the above is totally based on the formulas chosen by Forster (2013) and M&F. It’s not necessary – or appropriate – to confuse this logic by physical arguments, which are not part of the actual calculation. I used earlier the expression pure mathematics, perhaps I should have used the more accurate expression pure computation. Starting values are from a database, formulas are given. Results follow from that. Variable F is not a real forcing, it’s a derived construct (ERF) defined in Forster (2013), motivated by physics, but not an externally given forcing.”

  549. Willard says:

    > You don’t look at the actual analysis, you discuss something that is in your mind.

    If you have the recipe to bypass the mind, please share, AndyL. Mother Nature may have invested way too much in our brains.

    More seriously, what’s the problem with Pekka trying to understand the problem at hand, or any other one that might shed light on what’s happening? That’s not a rhetorical question. If there’s nothing to add to the analysis, piling on is suboptimal. If you want to know how suboptimal it looks, consider Sir Rud’s comment:

    Pekka, with all due respect, your reply is illogical. Think it through rather than reflexively defending the apparently indefensible. Logic explained in more detail in other comments.

    http://climateaudit.org/2015/02/05/marotzke-and-forsters-circular-attribution-of-cmip5-intermodel-warming-differences/#comment-751261

    This kind of behavior is suboptimal on CA. On RC. On SkS. On BB. On CE. Everywhere. It is more corrosive than any use of labels will ever be. This is playground stuff.

    This is a science blog. Equations are fair ball. Pekka earned his wings.

    Chill, guys. Let the Auditors pile on.

  550. I think that the situation starts to become clear to me.

    Marotzke and Forster define a regression model to describe the behavior of the CMIP5 models for the set of variables ΔT, ΔF, α, and κ. The determination of the coefficients of the regression model brings in also the variable ΔN and its connection to the other variables. This extended set of variables introduces the possibility of writing the regression model in various different ways. It also makes it possible to calculate what happens when ΔT is changed without a change in ΔN in a model described by the regression. Doing that in the cases that I have described as problematic leads to a temperature feedback that’s larger than the original change. The model taken in that way is not well behaved.

    The above is not, however, a real problem, as we can restrict the application of the model to the four variables used by M&F, allowing ΔN to vary freely as required by the given relationship. Under these conditions the regression models do not have singular behavior at all. That’s the way M&F thought about the models throughout.

    The next question is whether the potentially problematic behavior of the model enters the determination of the regression coefficients, when the goal of the analysis is to use the resulting model in the way M&F have used it. The answer seems to be that it does not. When the relationships are used in this direction, there are no singularities; there are only situations where the coefficient of ΔT is zero, and that’s not a problem for the regression.

    Thus it seems that the issue Nic presented is not really a problem. There’s a circularity, but that enters in the direction, where it is not a problem at all.

    (It took me too long to understand the situation. Obviously I have worked with mathematics too little for years and even for decades.)

  551. Willard says:

    > This is playground stuff.

    This may be an understatement, for it goes deeper than that:

    A great movie for anyone who’d like to understand ClimateBall, and more importantly what is being done to people like Nick Stokes at the Auditor’s.

    CA is 10 years now. Wouldn’t it be time to grow out of these practices?

  552. AndyL,

    Variable F is not a real forcing, it’s a derived construct (ERF) defined in Forster (2013), motivated by physics, but not an externally given forcing.

    Yes, and this is maybe an important point. It’s true that F is not a “real forcing”, but derived from the models. However, the forcings are not imposed on the models as radiative forcings; the models self-consistently determine the forcings by determining the radiative influence of whatever is being imposed (a volcano erupts, we add CO2 to the atmosphere, …). Also, the forcings are essentially defined as the radiative effect at the TOA (i.e., how much that influence changes the TOA imbalance in the absence of a temperature response – okay, sometimes it includes the stratosphere responding). Since the models are responding to these forcings (by allowing temperatures to change and feedbacks to operate) you can’t simply output the forcing, you need to determine what it is afterwards (it’s possible that CMIP6 may do this better).

    So, how do you do this? You just use energy conservation. If you apply a forcing \Delta F to a model with a sensitivity \alpha, then if the temperature changes by \Delta T, the TOA imbalance must change by

    \Delta N = \Delta F - \alpha \Delta T.

    That’s just basic physics and, if the models are conserving energy well enough, is self-evidently true. Hence if you use the model values for \Delta N, \alpha, and \Delta T to determine \Delta F, then it should be a reasonable approximation of the actual external forcing, as long as the energy conservation in the model is okay.

    FWIW, I notice that there is some discussion over at Climate Audit about this Willis post that Nick Stokes criticised. I think this is a different issue, though. What Willis was claiming was that climate models work by simply taking an external forcing as input and converting it into a temperature change using some kind of climate sensitivity factor. Well, that’s not really true. They self-consistently determine the radiative influence of the different external factors, and then calculate how the system warms as a consequence of those external influences, and self-consistently determine the feedbacks.

    Of course, you can then cast that as an external radiative forcing and you can estimate the climate sensitivity for that model. If you do so, you can determine how that model would warm for a given change in external forcing. That, however, does not mean that the external forcings are an explicit input to the models, or that climate sensitivity is simply some imposed factor, which is what I think Willis was claiming.

  553. Pekka,

    Thus it seems that the issue Nic presented is not really a problem. There’s a circularity, but that enters in the direction, where it is not a problem at all.

    I think I may just take a deep breath and leave it at that 🙂

  554. An update in regard to the comment https://andthentheresphysics.wordpress.com/2015/01/31/models-dont-over-estimate-warming/#comment-47277

    I am doing a further regression with respect to the residual of the initial regression and I am finding a significant contribution from the ENSO SOI signal, shifted by a few years and multiplied by itself.

    AndyL and MikeR are going to have to resign themselves to the fact that they won’t be able to prevent people from applying innovative regression techniques such as Symbolic Regression to understand climate time-series.
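
    For the flavour of that (a rough stand-in with synthetic series; plain lagged least squares, not the actual Symbolic Regression machinery):

        import numpy as np

        rng = np.random.default_rng(3)
        n = 120

        # Synthetic stand-ins: 'soi' for an SOI series, 'resid' for the residual
        # of the initial regression (both invented for illustration)
        soi = rng.normal(size=n)
        resid = (0.4 * np.roll(soi, 3) + 0.2 * np.roll(soi, 3) ** 2
                 + rng.normal(scale=0.1, size=n))

        lag = 3                                    # "shifted by a few years"
        soi_lag = np.roll(soi, lag)                # np.roll wraps around; fine for a sketch
        X = np.column_stack([np.ones(n), soi_lag, soi_lag ** 2])  # "multiplied by itself"
        coef, *_ = np.linalg.lstsq(X, resid, rcond=None)
        print(coef)                                # recovers ~[0, 0.4, 0.2]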

    [Chill. -W]

  555. miker613 says:

    @Willard “And if Nic’s criticism is proven to have no merit, say because it’s a strawnan, what happens, MikeR?” Obviously, I would expect Nic/McIntyre/Roman M to admit it if they are shown to be wrong. I would be disappointed if they don’t.
    But of course I’m not leaving it up to you to judge when that has happened!

    @dhogaza “Meanwhile, mike, would you like to bet money on the paper being withdrawn? I’m prepared to meet you to the high five-digit level, if you are.” Crazy idea. Dhogaza, I have no idea who is right. I hope to find out soon. [I thought I knew who was right initially, when the only opposing argument was “you’re allowed to invert equations” kind of stuff. But now Climate Lab has posted saying that of course circularity would be a problem but they didn’t actually do that, so I’ll have to wait for people who worked through those papers to work through the details. I have no idea now how it will end up.]
    I’m surprised that you think you know. If you aren’t a statistician (as I am not), and you haven’t been taking a major part in the discussion, why are you willing to bet? Because regular climate scientists are always right, and McIntyre and company are always wrong? If so, are you willing to offer someone (but not me, I have no money) 10 to 1 odds? If you are, what could be a mutually agreeable method of deciding who wins?

  556. miker613 says:

    @dhogaza, I see now that you made the condition of the bet that the paper is withdrawn, not just being proven wrong. For that you ought to offer better odds, as two things have to happen: first, being proven wrong, and also, being honorable enough to withdraw a wrong paper. For the second I have only Nic Lewis’s word, and I have no idea if he’s just being polite or something, or maybe misjudging the people.
    Anyhow, it would be easy to decide who wins, but you need to give better odds. 20 to 1? Again, not to me, as I am not a “high-five-digit” type of person. Low one-digit is my kind of limit. Maybe ask James Annan, he recently told me he might take a bet on slow temperature rise for the next decade (http://julesandjames.blogspot.com/2015/01/temperature-bet-update.html?showComment=1420295198724#c6232181107431597510). Maybe he’d be interested in this one.

  557. Joshua says:

    miker-

    Now that you’ve had more time – any change in your view as to whether Nic’s nasty, pointless, and gratuitous attack on the authors, climate scientists more generally, and all the reviewers at Nature is based on fallacious reasoning (that there was an obvious flaw that anyone with knowledge of stats should have seen immediately) and reflective of the kind of tribalism that undermines the credibility of “consensus” climate scientists?

    I suspect you haven’t evolved on that issue 🙂 but just wanted to check.

  558. Willard says:

    > Obviously, I would expect Nic/McIntyre/Roman M to admit it if they are shown to be wrong.

    That’s all, MikeR?

    ***

    > I’m not leaving it up to you to judge when that has happened!

    You leave that to whom, MikeR?

    Beware that this thread is about circularity.

  559. pbjamm says:

    >Pekka did a good job a little while ago working out some issues on climateaudit about some of the statistical methods in PAGES2K: http://climateaudit.org/2014/10/11/decomposing-paico/
    > I found it completely convincing, as would anyone (unless they don’t read climateaudit, of course).

    Miker613 do you also find Pekka’s work here “completely convincing”?

    “Thus it seems that the issue Nic presented is not really a problem. There’s a circularity, but that enters in the direction, where it is not a problem at all.”

  560. Miker613,
    I think Joshua is making a perfectly valid point that I think you’re either ignoring or not getting. Nic Lewis wrote a post on Climate Audit claiming/asserting that there was a trivial, embarrassing, schoolboy error in Marotzke & Forster that the authors, reviewers and journal should have noticed, and that the paper should be retracted. Both insulting and demanding. However, a number of people who should have the expertise to see the error disagree. This means that either

    1. These other people can see the error, but won’t admit it. They’re dishonest, in other words.

    2. There is an error but it’s nowhere near as simple as Nic Lewis suggested, and maybe his post should have been more circumspect and less insulting.

    3. Nic Lewis is wrong, should retract his post and apologise.

    So, which one is it? Currently I’m going for 3, but could be convinced that I’m wrong.

  561. pbjamm says:

    I would like to apologize for the dreadful job I did of formatting that last comment.

  562. BBD says:

    Asking miker a direct question is not generally a productive process.

  563. jsam says:

    Cheerleaders should always wear clean underwear.

  564. Joshua says:

    But Anders – there’s more.

    ==> “…”claiming/asserting that there was a trivial, embarrassing, schoolboy error in Marotzke & Forster that the authors, reviewers and journal should have noticed, and that the paper should be retracted.”

    Nic’s argument was more than simply that they should have noticed. His argument is that the fatal flaw was so obvious that its very existence justified insulting the competence of the authors, climate scientists generally, and all reviewers at Nature.

    It’s one thing to assert that there was a simple and obvious error, but it’s another to then conclude that the existence of that error justifies such widespread accusations.

    IMO, Nic’s rhetoric was in service of an “activist” agenda. It went beyond the realm of science. Nic, IMO clearly, was trying to leverage a potential error in this paper to launch a broadside in the great climate change war.

    This is naked tribalism and flat out fallacious reasoning (willard can probably give the term for the fallacy) – I think it could be described maybe as guilt by association?

    I have little doubt that miker has, in the past, justified broad conclusions about “consensus” climate science on the basis of individual instances of naked tribalism and fallacious reasoning.

    Just to clarify, my point is definitely not that based on Nic’s tribalism and poor reasoning, we should assign guilt by association to “skeptics” more generally – but that we should hold them to account individually, as it applies, for the often-stated fallacious argument that underlies much of what we read about “skeptical” reasoning w/r/t “consensus” climate science.

    My point is that (as far as I can tell), an individual “skeptic” who has railed about the impact of climategate or nasty, pointless, and gratuitous behavior from “consensus” climate scientists has two basic choices within a logically coherent framework if they want to engage in good faith: (1) walk back the implications they have drawn from the existence of tribalism among “consensus” climate scientists or, (2) draw the same sorts of conclusions about the validity of Nic’s work, the work of the statisticians he’s associated with, and indeed, the lord god Stevie-Mac himself.

    Option #1 would be my choice if I were in their shoes, because I think that it is far more supportable.

  565. Joshua says:

    You could substitute jsam’s six words for my 1,000(?)

  566. Joshua,
    Okay, yes, I see what you mean. Interesting. I agree, if there was a desire to engage in good faith, 1 would be the obvious choice. My guess, though, is that it’s going to be 3 – continue to assert that Nic is right despite evidence to the contrary.

  567. Joshua says:

    ==> “My guess, though, is that it’s going to be 3

    Yeah, I think so too, Anders. Because Nic’s expertise can be trusted despite his tribalism and “consensus” scientists’ expertise can’t because of their tribalism.

    This is where Kahan’s work comes in. In fields where “knowledge” is so complex, we choose which “experts” we trust based on where we locate them within our own ideological constellation.

  568. I have a few embarrassing experiences of having been all too certain that I had found an error in someone’s work and presenting that finding in a way that I can only regret afterwards. There are other cases as well, where I wasn’t so sure at all, but the other person got the impression that I claimed to know that her work had serious faults. These kinds of experiences make me understand that others make similar misjudgments.

    In this case I’m surely not the ultimate judge. After thinking about the issue for quite a while I have reached conclusions I believe in – for now. I think that I have understood what made Nic reach his conclusions, and why those conclusions are, after all, in error.

    Perhaps someone will still produce further arguments that force me to reverse my thinking one more time. Right now I don’t expect that to happen, but who knows.

  569. Joseph says:

    so I’ll have to wait for people who worked through those papers to work through the details.

    Miker, which people? Which details? How will we know they have finished working through the details?

  570. miker613 says:

    ATTP, I think I’ve answered this already, and so has Pekka. Initially it sure looked like a serious problem, and there wasn’t enough information available to see otherwise. It may be that Climate Lab has presented a sufficient answer, in which case Nic Lewis should indeed retract and apologize. Could well be that even if the result turns out to be wrong he should also apologize, given that the problem clearly wasn’t as obvious as it appeared at first – though that may be a function of the way the papers were written.
    In any case we’re still in the middle.

    As for Joshua’s conclusions, I think they are all much too broad. Better to work on a case-by-case basis. Most of my conclusions about issues I have followed are based on the details of how those cases actually worked out. Whoever loses this particular issue (if that can eventually be determined) will also earn or lose trust based on whether they own up honestly to what happened.

  571. Joseph says:

    In any case we’re still in the middle.

    And at some point we are going to get to a “conclusion,” right Miker?

  572. miker613 says:

    Probably worth mentioning that we’re currently discussing one particular detailed issue: is there an egregious mistake in the regression used by M&F? My very first comment on this post was concerned with an entirely different issue: the study seems to me to be problematic even if the regression is totally fine. I got the impression that Pekka
    https://andthentheresphysics.wordpress.com/2015/01/31/models-dont-over-estimate-warming/#comment-47207
    was making a similar point (see my comment right after his): the study does not show what it claims to be studying. I’m not sure if Pekka has changed his mind, though.

  573. miker613 says:

    ‘And at some point we are going to get to a “conclusion,” right Miker?’ As I’ve said several times, Joseph, I expect so. This will die down after a few days or weeks. It isn’t theology, it’s math, and there’s such a thing as objectively right and wrong.

  574. Joseph says:

    This will die down after a few days or weeks.

    Just because it may die down after a few days or weeks doesn’t mean anything was resolved (and I agree, I think it will die down soon).

  575. Willard says:

    > it’s math

    And then there’s mathematical physics, MikeR.

  576. Miker613,
    I think Joshua is making a broader point. We hear a lot about behaviour in this debate “if only climate scientists behaved better, we could trust them more” and yet poor behaviour on the other “side” is excused or dismissed. I think Joshua’s point is about behaviour and identity politics.

  577. miker613 says:

    ATTP, I answered that earlier. I expect most skeptics to misbehave, as they are partisans, not scientists. I expect most pro-AGW commenters to misbehave, as they are partisans, not scientists. There are a lot more pro-AGW scientists. I would have normally expected them to have a big advantage in the public trust. They can lose it, though, by behaving like partisans, and a lot of them have. And they _insist_ that it’s necessary and I’m a concern troll.
    Nic Lewis and Richard Tol etc. can do the same. In either case it’s a shame, and a loss for science.
    But I would think the pro-AGW scientists have more to lose, as they are (mostly) the establishment.

  578. pbjamm says:

    miker613 how can the pro-AGW scientists lose by telling the truth? Maybe they have to raise their voices a bit to be heard over the squabbling, but so what? The only things that risk getting lost in the endless nonsense are the facts.

  579. David Young says:

    There is a point here often overlooked. Those who wield power are often held to a higher standard, especially governments, who wield life-and-death power. So it is disturbing when it appears the FDA may have a track record of failing to disclose errors and misconduct in drug trials, even to its own scientific advisors.

  580. DY,
    Firstly, scientists don’t have power, at least no more than anyone else, and probably less than many others. Secondly, just because there are examples of poor practice and misconduct elsewhere doesn’t necessarily imply anything with respect to climate science. Personally I find it particularly annoying when people make these false equivalences.

  581. David Young says:

    Attp, You seem to have read something in my comment that was not there. Many scientists however work for the government.

  582. DY,
    Maybe so, but what is the implication of the latter part of your comment? I’m also not convinced it’s actually true. University academics do not work for the government. Scientists at the UK Met Office do and, I think, those at NASA in the US do, but I’m not sure how many in other countries work for their governments; most work at universities, which are typically independent.

  583. I think Joshua is making a broader point. We hear a lot about behaviour in this debate “if only climate scientists behaved better, we could trust them more” and yet poor behaviour on the other “side” is excused or dismissed. I think Joshua’s point is about behaviour and identity politics.

    You have different expectations about behavior when the subject is a small child. I also have different expectations when the subject is telling about science rather than spreading nonsense.

    Those who speak for science have more to lose with bad behavior in public.

  584. Pekka,
    Yes, but that doesn’t mean that explicitly criticising some while excusing the same in others is acceptable.

  585. Joshua says:

    miker –

    ==> ” They can lose it, though, by behaving like partisans, and a lot of them have.”

    Evidence needed. There are solid data that the majority of the public trust “consensus” climate scientists above all other sources for information about the climate. IMO, most of those who supposedly “lost trust” never had it to begin with. And I think I’ve seen evidence to support that conjecture.

    ==> “And they _insist_ that it’s necessary and I’m a concern troll.”

    I’ve seen you be critical of “skeptics” at Judith’s. I think that if you’re here, and you’ve lasted here, it probably indicates that you’re interested in having your beliefs challenged (most “skeptics” don’t show up at someplace like here, and most who do get moderated out because they have a different focus). But I’m not sure who “they” are, what they’re “insisting” on, who it is that’s calling you a “concern troll”, and how that’s relevant.

    ==> “Nic Lewis and Richard Tol etc. can do the same.”

    The same what? So are you agreeing that Nic (as Tol often does) was leveraging poor reasoning and tribalism to lob a broadside to advance a partisan agenda?

    ==> “In either case it’s a shame, and a loss for science.”

    IMO, it has nothing to do with the science. So science doesn’t lose from this bullshit. Where something is lost is in the public discussion of how to address policy change.

    ==> “But I would think the pro-AGW scientists have more to lose, as they are (mostly) the establishment.”

    I think that is an unfortunate conceptualization of what’s going on. IMO, “consensus” scientists don’t win or lose and neither do “skeptical” scientists; this is not a zero sum scenario. This is about building a constructive and informative discussion. It’s about whether people are having productive or counterproductive input towards that goal.

    This is where I disagree with you w/r/t the discussion of the M&F paper, also. IMO, what’s really important about that paper is not whether they made an error or whether Nic was right. In the end, the climate will do what it will do irrespective of that one paper. That one paper makes only a small contribution to the body of literature on climate change.

    But here’s what I’ve seen in the “skept-o-sphere.” In following Nic’s lead, “skeptics” have lined up to talk about how this paper is an example of why the “consensus” is wrong about climate change: They’re wrong because the mistakes that they make are so bleedin’ obvious and yet they don’t know it. They’re wrong because they don’t know shit about statistics. They’re wrong because anyone who knows anything about statistics can see the fundamental flaws in their work. They’re wrong because they’re frauds and trying to create a one world government in order to starve poor children in Africa. And the process of peer review cannot be trusted because the reviewers are so incompetent (or dishonest) that they can’t even see flaws that college-level students in statistics would recognize immediately.

    I want to bring you back, again, to the point that I think you are ducking, repeatedly. No matter whether M&F made a fundamental error in their work, what’s revealing is that Nic – who has gained elevated status among “skeptics” for the quality of his science – felt that the flaws were so obvious that they justified assertions of incompetence through guilt-by-association, not only to the authors, but to climate scientists in general and all the reviewers at Nature (and, by extension, the very process of peer review). Take a stroll around the “skept-o-sphere” and tell me how many “skeptics” you see say to Nic, “Hey, I love you bro, but maybe your insults and guilt-by-association are a bit over the top and more than just a tad unscientific in nature.” I think that you won’t see it often.

    Yet by Nic’s logic, statisticians around the world could be asking Pekka to guest lecture as a case study in poor understanding of statistics, because he has not seen an obvious and fundamental flaw in the paper – but has instead said that he sees what might be a problem, though not one that is exactly obvious.

    So what does that mean? For me, does it mean that “skeptics” are always wrong or that Nic’s science is always shit? No, it doesn’t. What it means is that we can use this example as a lesson in how people fall into fallacious ways of reasoning in this debate. People could build from this situation to create better dialog. “Skeptics” can use this as a lesson in how their self-professed skepticism is not all it’s cracked up to be. Nic could climb down off his high horse w/r/t the impact of “activism” on the science that informs climate change policy.

    Will that happen?

    Did I ever tell you that I’ve got a really beautiful bridge right near Manhattan that I could let go really cheap?

  586. jsam says:

    Is Nic Lewis a small child? Who knew?

    [Chill. -W]

  587. Joshua says:

    miker –

    ==> “As for Joshua’s conclusions, I think they are all much too broad. ”

    All of them? Really?

    How about if you just break down a couple for me – to explain how they are too broad?

    ==> “Whoever loses this particular issue (if that can eventually be determined) will also earn or lose trust based on whether they own up honestly to what happened.”

    Pie-in-the-sky, bro. Your faith in human nature is quaint, but IMO nothing of significance will change as an outcome of this process. Very few if any “skeptics” will lose or gain trust in Nic based on who “loses” this particular issue. And very few if any “realists” will lose or gain trust in M&F on the basis of them “winning” or “losing.” People have already made up their minds about who has gained or lost trust through this issue, and they will fit whatever the outcome is into their existing framework. Cognitive dissonance will be successfully averted once again.

    And further, both sides will use whatever happens not only to have their beliefs about the individual scientists involved confirmed; they will also double down to feel justified in their broader partisan conclusion about how “we” are just peachy keen and “they” want to starve poor children in Africa.

    This figure has the M&F decadal climate trends as a background image (both GCMs as the random wiggles and data as black circles). In the foreground I placed the GISS time series along with a CSALT model, which is built on a regression of the major non-temperature factors.

    TCR for equivalent CO2 is 2C.
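
    (For anyone wanting the arithmetic behind that last claim: if a regression of temperature against ln(CO2) yields a coefficient β in K per unit of ln(CO2), then a CO2 doubling, which raises ln(CO2) by ln 2, implies a transient response of β·ln 2. A minimal sketch in Python – the coefficient value below is invented purely to illustrate the arithmetic, and is not taken from the actual CSALT fit:)

    ```python
    import numpy as np

    # Hypothetical illustration: suppose a regression of temperature against
    # ln(CO2), alongside other factors, returns a CO2 coefficient beta in
    # K per unit of ln(CO2). The value here is made up for illustration.
    beta = 2.9

    # A doubling of CO2 increases ln(CO2) by ln(2), so the implied
    # transient response to a doubling is beta * ln(2).
    tcr = beta * np.log(2)
    print(f"Implied TCR: {tcr:.1f} K per doubling of CO2")  # ~2.0 K
    ```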

    [Chill. -W]

  589. Joshua says:

    [Chill. -W]

  590. AndyL says:

    WHT / Joshua / whoever
    I’m not sure why I’m being insulted here, though a lot of the recent discussion has moved away from science and toward playground invective, which I tend to ignore.

    For what it’s worth, I came here because I was looking for discussion on Lewis’ post and AFAICT this was the first site in the scientific mainstream to actively consider it. While I don’t necessarily agree with a lot of aTTP’s points, Pekka worked through his findings here. The ball is now in Lewis’ (and possibly others’, including McIntyre’s) court.

    Also FWIW, I agree that Lewis’ original post was over the top. I also suspect that Lewis quoted comments from Hughes that were not intended for publication. aTTP said that in a similar situation he would be scathing in private, which may not be that different from what Hughes did. Now that the complexity is in the open, it is more apparent that Lewis was OTT, though I will agree that it should have been obvious at the time.

    I began by saying it will be interesting to see how this pans out. I agree with miker that we are still in the middle-game. Ed Hawkins has invited Lewis and McIntyre to respond. I hope that Marotzke provides data to McIntyre as requested. If he does, McIntyre will be committed to publishing his findings whichever way they fall.

    Once the situation is resolved the reaction of the main players will be illuminating.

  591. Pekka,
    Yes, but that doesn’t mean that explicitly criticising some while excusing the same in others is acceptable.

    No, it doesn’t, but we should concentrate on our own behavior, perhaps tell our friends when we think that they may cause damage to our common cause, and trust that others can also see when our opponents behave badly.

  592. Joshua says:

    AndyL –

    WHT’s middle name is attack. I wasn’t attacking you – only having some fun with WHT’s creativity.

    Your 11:30 PM seems quite reasonable to me, even if OTT doesn’t quite paint a complete picture, IMO.


  593. WHT / Joshua / whoever
    I’m not sure why I’m being insulted here, though a lot of the recent discussion has moved away from science and toward playground invective, which I tend to ignore.

    M&F said “Using a multiple regression approach that is physically motivated by surface energy balance”.

    How is that a different motivation than the approach that I took with the CSALT model?

    http://ContextEarth.com/2013/10/26/csalt-model/

    Foundation of the CSALT model:

    “As always physics rules when it comes to defining natural behavior, and so we start by looking at energy balance. Consider a Gibbs energy formulation as a variational approach:”

    and on from there. Maybe you should look into this, eh?
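
    (For anyone unfamiliar with the machinery being argued over here: both approaches come down to a multiple linear regression of temperature against physically motivated regressors, with the leftover residual read as internal variability. A minimal sketch on synthetic data – every regressor, coefficient, and noise level below is invented for illustration, and is not the actual CSALT or M&F input:)

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 120  # synthetic "years"

    # Made-up stand-ins for physically motivated regressors: a slowly
    # increasing forcing, an ENSO-like index, and a volcanic dip.
    forcing = np.linspace(0.0, 2.5, n)                     # W m^-2
    enso = np.sin(2 * np.pi * np.arange(n) / 4.3)          # dimensionless
    volcanic = -np.exp(-((np.arange(n) - 60) / 5.0) ** 2)  # transient cooling

    # Synthetic temperature built from those regressors plus noise,
    # standing in for an observed series such as GISS.
    temp = (0.5 * forcing + 0.1 * enso + 0.3 * volcanic
            + 0.05 * rng.standard_normal(n))

    # Multiple linear regression by ordinary least squares.
    X = np.column_stack([np.ones(n), forcing, enso, volcanic])
    coeffs, *_ = np.linalg.lstsq(X, temp, rcond=None)

    # Whatever the regressors cannot explain is the residual, which in
    # this framing plays the role of internal variability.
    residual = temp - X @ coeffs

    print("fitted coefficients (intercept, forcing, ENSO, volcanic):",
          np.round(coeffs, 3))
    print("residual std:", round(residual.std(), 3))
    ```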

  594. Willard says:

    Gentlemen,

    Play the ball, please.

  595. David Young says:

    For example, if i am a physician in private practice, I have an ethical and legal obligation to my patients.

    If I am a scientist publishing a study about the efficacy of a costly treatment, I have a higher obligation because my work will influence thousands of physicians and institutions.

    If I am an FDA scientist, I have a very high obligation because my decisions can cause immense harm if they are wrong. For example, the adverse data on Vioxx was in the report to the FDA, but it was omitted from the published paper. When it was publicly brought to light, the FDA thought it was important. If that was true, was it not important before, or was it overlooked?

    Thus, not all have equal obligations; some need to exercise more care and circumspection than others. Circumspection does not equal dishonesty, by the way.

  596. Steve Bloom says:

    “CA is 10 years now. Wouldn’t it be time to grow out of these practices?”

    No. You of all people should be clear that no such thing will happen. It would be contrary to the blog’s reason for existing.

    If anything, the invective level there has gotten worse since the early days (when I made the mistake of taking the time to be a regular participant). [The ball, please. -W]

  597. dhogaza says:

    [Chill. -W]

  598. miker613 says:

    [Yes, but MikeM.]

  599. [Please don’t take baits, AT.]

  600. Andrew Dodds says:

    miker613 –

    This is not a matter of homework marking, though.

    The normal process would have been for McIntyre to have produced his own temperature curve from the data, using the practices and techniques that he considered best suited. If this curve differed significantly from that found by MBH, this would be where the comparison of techniques would start.

    Except that, as far as I know, McIntyre has never done this, even though it would be a final slam-dunk – ‘Here’s my reconstruction, it’s the best (statistically) and it shows no great 20th-century warming’. That would be it. Game over, as it were. You have to ask why he hasn’t done this. Endless meandering posts about how everyone is involved in some sort of vague grand conspiracy, just-so stories, whatever – but no meat. No beef.

    I’d also add that since MBH ’98 is now 17 years old and has been superseded many times over, the very idea of spending years going through it in painstaking detail is frankly stupid – if it were that bad, then it would have been demonstrated to be wrong by subsequent studies and the ‘auditing’ process would be pointless. But since it’s been broadly confirmed by subsequent studies, the ‘auditing’ process is still pointless.

  601. AndyL says:

    aTTP
    This is a statistical paper, which has been criticised for its handling of statistics, and for which Pekka has mounted a statistical defence. If we are to judge the validity of the statistics, surely it is best for everyone to use exactly the same data set.

    If M&F decide to take a different position, such as your conservation of energy argument, then maybe it is not so relevant for M&F to share the data at this point.

    It doesn’t help the situation to take offence on behalf of others. If the authors request a change to the post as a pre-condition to sharing data, then McIntyre will almost certainly make the changes. As it is, if Lewis ends up having to back down, he will also have to amend and apologise for his words. All that can wait.

  602. AndyL says:

    Tempted though I am to respond to Andrew Dodds, I suggest that this thread remain on topic.