Sensitivity to cumulative emissions

Something I’ve mentioned here quite regularly is the idea that warming depends roughly linearly on cumulative (total) emissions. This is slightly counterintuitive, in that warming depends logarithmically on atmospheric CO2 concentration. The reason is essentially that this single quantity incorporates both climate sensitivity (which depends on changing atmospheric concentrations) and carbon cycle feedbacks. It seems that the airborne fraction is expected to increase so as to compensate for the logarithmic dependence on atmospheric CO2 concentration. In other words, the expectation is that if we double how much we’ve emitted, we’ll more than double the human contribution to the atmospheric CO2 concentration.
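To see why a roughly constant airborne fraction would break this linearity, here’s a toy calculation (my own illustrative numbers, not from any of the papers discussed: a 280 ppm pre-industrial concentration, ~2.12 GtC per ppm of atmospheric CO2, and a fixed warming per doubling):

```python
import math

def warming(cum_emissions_gtc, airborne_fraction, tcr=1.8):
    """Toy model: warming (C) from cumulative emissions (GtC).

    Illustrative assumptions: 280 ppm pre-industrial CO2,
    ~2.12 GtC per ppm, and `tcr` degrees of warming per
    doubling of concentration.
    """
    c0 = 280.0
    ppm = cum_emissions_gtc * airborne_fraction / 2.12
    return tcr * math.log((c0 + ppm) / c0) / math.log(2)

# With a constant airborne fraction, the warming per 1000 GtC
# declines as emissions accumulate (sub-linear, not linear):
for e in (1000, 2000, 3000):
    print(e, round(warming(e, 0.45), 2))
```

An airborne fraction that rises with cumulative emissions is what would straighten this curve back out into the linear TCRE relationship.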

There are a number of papers that have considered this, and the general result is that it appears to be a reasonable relationship for most realistic future emission pathways, although it might over-estimate the warming from the highest emission pathway. The quantity is called the transient response to cumulative carbon emissions (TCRE) and is thought to have a range of 0.8 to 2.5°C per 1000 GtC for real emission pathways, and 1 to 2°C per 1000 GtC for a 1% per year CO2-only emission pathway. The reason for the difference is simply that the real emission pathways include non-CO2 GHGs, while the TCRE is defined in terms of the CO2 emissions only.
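The TCRE relationship itself is simple enough to write down, which is part of its appeal for policy purposes (a minimal sketch; the range endpoints are the ones quoted above):

```python
def tcre_warming(cum_emissions_gtc, tcre=1.5):
    """Warming (C) from cumulative CO2 emissions, given a TCRE
    expressed in degrees per 1000 GtC."""
    return tcre * cum_emissions_gtc / 1000.0

# Spread implied by the 0.8-2.5 C/1000GtC range, for 2000 GtC
# of cumulative emissions:
low = tcre_warming(2000, 0.8)   # ~1.6 C
high = tcre_warming(2000, 2.5)  # ~5.0 C
```

The linearity is what makes carbon-budget arithmetic possible: a warming target maps directly onto a total amount we can emit.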

Credit : Nic Lewis


The reason I’m telling you this is because Nic Lewis has a guest post on Climate Etc. in which he suggests that the TCRE is quite a bit lower than other estimates suggest. The figure on the right shows his analysis (solid lines) and the IPCC values (dashed lines). Nic Lewis appears to be suggesting a best estimate for the TCRE of 1.15°C per 1000 GtC, or 0.9°C per 1000 GtC if the forcing is CO2 only. His analysis suggests much less warming (along the same emission pathways as used by the IPCC).

The reason for his result, I think, is pretty straightforward. His underlying model has a low TCR (about 1.35°C) and he is assuming that the carbon cycle feedbacks are on the low end of the range. The carbon cycle feedbacks essentially relate to how the carbon sinks respond to increased CO2 and to increased temperatures. If they’re on the low side, then the sinks are not significantly influenced by increased CO2 levels and warming, and the airborne fraction will remain roughly constant. Hence, the logarithmic nature of CO2 is not compensated for by an increasing airborne fraction. So, the overall warming is reduced both because of the lower TCR and because of the weaker carbon cycle feedbacks. A double-whammy.
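The size of the double-whammy can be illustrated with the same kind of toy calculation (all numbers illustrative, and the “airborne fraction” here is a crude constant stand-in for the carbon-cycle feedbacks, not anything from Lewis’s actual model):

```python
import math

def toy_warming(cum_gtc, tcr, airborne_fraction):
    """Warming (C) from cumulative emissions, for a given TCR
    (C per doubling) and a constant airborne fraction.
    Illustrative assumptions: 280 ppm pre-industrial CO2 and
    ~2.12 GtC per ppm."""
    ppm = 280.0 + cum_gtc * airborne_fraction / 2.12
    return tcr * math.log(ppm / 280.0) / math.log(2)

# Low TCR with weak feedbacks (low airborne fraction), versus a
# mid-range TCR with a higher effective airborne fraction:
low_case = toy_warming(2000, tcr=1.35, airborne_fraction=0.4)
mid_case = toy_warming(2000, tcr=1.8, airborne_fraction=0.55)
```

With these made-up numbers the two effects together open up a gap of roughly a degree at 2000 GtC, which is the flavour of the difference between the solid and dashed curves.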

As I understand it, this is all within the realm of possibility, so it could well be what happens. However, discovering that one can develop a plausible model that suggests warming will be on the low side is not really evidence that it will be. Also, bear in mind that there is probably something like a range of ±0.5°C on either side of the values presented by Nic Lewis, so even his lower estimates don’t rule out more than 2°C of warming, even along an RCP6 emission pathway.

Credit : Matthews et al. (2009)


Okay, I’ve managed to bumble through this post a bit. What I wanted to highlight was that, as usual, Nic Lewis’s work is being highlighted as being observationally based. However, the figure on the left (from Matthews et al. (2009)) shows the TCRE determined from 20th century observations (based on warming and CO2 emissions relative to 1900-1909). The range varies from about 1°C to over 2°C per 1000 GtC (depending on the time period considered), with a best estimate of about 1.5°C per 1000 GtC. It is pretty similar to the IPCC range I mentioned earlier, and quite a bit more than Nic Lewis’s estimate of around 1.15°C per 1000 GtC. Also, the break in Nic Lewis’s graphs seems to suggest that he thinks we will go from a TCRE of probably around 1.5°C per 1000 GtC to one around 0.5°C per 1000 GtC, starting about now. A little odd, as many others seem to think that global warming is probably going to start accelerating.

I guess we will know in the next few decades if Nic Lewis’s suggestion is correct. On the other hand, we’ll also know, in the next few decades, if the 2°C budget really is around 300 GtC. Given how we appear to be unwilling to do anything to actually cut emissions, I’m hoping Nic Lewis is correct. I’m not hopeful, though. I also think it would be better to consider all the evidence, not just select what gives us what we’d like to see, but maybe that’s just me.
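For what it’s worth, the ~300 GtC figure falls out of the linear TCRE relation with a back-of-envelope calculation like this one (my own illustrative sketch: the emitted-to-date and non-CO2 set-aside numbers are rough AR5-era values, and the TCRE is taken towards the upper half of the range, as one effectively does for a better-than-even chance of staying below the target):

```python
def remaining_budget_gtc(target_c=2.0, tcre=2.0,
                         non_co2_gtc=210.0, emitted_gtc=515.0):
    """Back-of-envelope remaining emission budget (GtC) for a
    warming target, under the linear TCRE relation.

    Illustrative assumptions: ~515 GtC already emitted, ~210 GtC
    of the budget set aside for non-CO2 forcings, and a TCRE of
    2 C per 1000 GtC.
    """
    total = target_c / tcre * 1000.0
    return total - non_co2_gtc - emitted_gtc

# remaining_budget_gtc() gives roughly 275 GtC with these numbers,
# i.e. in the neighbourhood of the ~300 GtC quoted above.
```

Swap in a lower TCRE (Lewis’s 1.15, say) and the remaining budget roughly quadruples, which is why the TCRE estimate matters so much for policy.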


202 Responses to Sensitivity to cumulative emissions

  1. This is slightly counter intuitive, in that warming depends logarithmically on atmospheric CO2 concentration

    The relationship is also not a logarithmic function of cumulative emissions because we did not start with a CO2 concentration of zero.

    May I ask the BBD question: is the estimate of Nic Lewis consistent with the observational estimates from paleo studies?

  2. Victor,

    The relationship is also not a logarithmic function of cumulative emissions because we did not start with a CO2 concentration of zero.

Yes, but if the airborne fraction remained constant, then the warming wouldn’t (I think) depend linearly on cumulative emissions.

    I’ll let BBD answer his question.

  3. BBD likes to answer: “but paleo”.

    Let’s add a Victor question. Is the Nic Lewis estimate consistent with our physical understanding of the climate system?

The IPCC range is consistent with our understanding of the radiative properties of CO2, of the increase in humidity that goes with the warming, and of the albedo feedback from less snow and ice due to warming. If the IPCC range had not fitted this physical understanding, I would have said that we do not really have a solid case yet.

It is fine to produce a fun outlier result the way Nic Lewis did, but I only start to see it as part of our scientific understanding of the climate system when we understand the physical reasons why the climate sensitivity would be so low. Until then, I will see it as an outlier, and quite likely an indicator that the method he is using is quite sensitive to its assumptions.

    Everything should fit together. We only have one reality.

  4. Everything should fit together. We only have one reality.

    Exactly. It’s important to understand the likely future realities, not choose one that might seem nice, but doesn’t end up matching reality.

  5. Andy Skuce says:

In other words, the expectation is that if we double how much we’ve emitted, we’ll more than double the atmospheric CO2 concentration.

    Not quite, because, as Victor pointed out, we didn’t start from zero. Perhaps just add a few words, e.g.:

In other words, the expectation is that if we double how much we’ve emitted, we’ll more than double the human contribution to atmospheric CO2 concentration.

  6. Andy,
Thanks, yes, of course. I see now what Victor was getting at. Always obvious in retrospect 🙂

Oh, and I’ve edited the post as you suggested.

  7. izen says:

Re Paleo; the evidence is scanty and open to uncertainty, but the much discussed lag in the rise of CO2 behind the warming out of a glacial period is also present in the fall of CO2 as the interglacial cools into the peak glacial period. In the last cooling period, ~120,000 years ago, CO2 levels lagged ~5000 years behind the temperature in dropping 40 ppm.

Of course, this time the CO2 rise is much more rapid, and we will not be starting from any comparable paleo situation when (if) CO2 emissions reduce by 90%. Therefore NL has the liberty to invent whatever tenuously credible version of the carbon cycle under these exceptional conditions he desires.

Or, if the analysis is to be accepted as unbiased, it should include the mainstream and opposite extreme (paleo?) conditions as alternatives or error ranges.
    (the link was irresistible!)

  8. We only have one reality.

    I’ve run into an awful lot of folks who don’t seem to grasp this basic, fundamental fact. They seem to subscribe to a “selective reality” model of the universe.

  9. RickA says:

    ATTP:

    Nic’s climate sensitivity was not just picked at random. Nic said (in his guest post):

    “I select the simple ESM’s key climate, and land and ocean carbon-cycle, sub-model parameters so that its simulated global temperature, heat uptake and carbon-cycle changes since preindustrial best match recent observational estimates, sourced largely from AR5.”

    So his model parameters are observationally constrained.

    I think a key point to ponder is that observationally constrained work tends to a low sensitivity value, while quite a few of the global climate models project an accelerating warming with higher sensitivity numbers.

    As you point out – we will know in a few decades who is correct.

    I have laid in a 30 year stock of popcorn and am watching avidly to see who turns out to be correct in this debate.

    One thing we know for sure is that it is pretty hard to make predictions – especially about the future (grin).

  10. Nic’s climate sensitivity was not just picked at random. Nic said (in his guest post):

    Yes, which is why I wrote what I wrote. I didn’t say it was picked.

    So his model parameters are observationally constrained.

Well, there are other observationally constrained estimates that are higher (see Cawley et al., for example).

    I think a key point to ponder is that observationally constrained work tends to a low sensitivity value, while quite a few of the global climate models project an accelerating warming with higher sensitivity numbers.

    There are plenty of plausible arguments as to why this is the case. I won’t list them again, as I’ve grown tired of doing so.

    As you point out – we will know in a few decades who is correct.

    Indeed, and we can’t rule out the chance that in 30 years we will say “oh, shit!”

    One thing we know for sure is that it is pretty hard to make predictions – especially about the future (grin).

Well, that’s partly why we call them projections. That it’s hard, and uncertain, isn’t an argument for ignoring what is being suggested.

  11. BBD says:

    Victor

    May I ask the BBD question: is the estimate of Nic Lewis consistent with the observational estimates from paleo studies?

    No 😉

    For a comprehensive evaluation of the evidence spanning the Cenozoic see Rohling et al. (2012) which estimates a range of 2.2K – 4.8K per doubling of CO2.

  12. BBD says:

    Victor writes:

    Everything should fit together. We only have one reality.

    And that’s the problem with (and for) lukewarmerism.

  13. RickA: I think a key point to ponder is that observationally constrained work tends to a low sensitivity value, while quite a few of the global climate models project an accelerating warming with higher sensitivity numbers.

    The comprehensive climate models match the observational constraints just as well.

    It is thus not a comparison between an “observational method” and climate modelling, but between a highly simplified statistical model and a comprehensive physical model. As long as the statistical model gives results that are unphysical according to our best current understanding, I know what I see as the most likely resolution of this discrepancy.

  14. RickA says:

    Victor:

How can Nic’s results be unphysical if they fall within the IPCC range?

    What I find unpersuasive is that 100% or 110% of the warming since 1950 has been caused by the human emissions of CO2. That assumption is what leads to such high sensitivities.

    That strikes me as unphysical.

    Perhaps we will (in 20 or 30 years) find out that 1/2 of the warming (since 1950) is natural and the other half caused by human emissions. Again – we will see.

    I also note that the sinks are taking up more of the emitted CO2 than was forecast.

    I read stuff everyday which shows that nature is reacting in ways which are different than we thought it would – all of which need to be put back into model tweaks.

    More snow and therefore more mass on Antarctica than expected.
    Corals growing better than expected over the last couple decades.
    Arctic ice rebounding more than expected.
    Europe colder than expected.
    And on and on.

    Perhaps in 20 or 30 years the models will be better than they are today, and projections drawn from them will be more accurate.

    They sure don’t feel accurate now.

    Bottom line is that there are a lot of wild guesses being published and only time and more data will sort out who is correct and who is wrong.

    I lean strongly toward a low climate sensitivity based on what I have read (say an ECS of 1.5 to 1.8C per doubling). But only time will tell.

    I am not against taking action – I just want the action we propose to take to be subjected to a decent cost/benefit analysis.

    I think we should really ramp up electrical production with nuclear power and try to get to 75% in the USA instead of 20% ish. As we reach end-of-life for coal power plants, why not replace them with nuclear?

    I would also like to see some research done on non-carbon producing power technology development which is cheaper than coal, natural gas or oil. Power storage also. The more renewables we deploy the more important power storage becomes.

    If we throw 20 billion per year at these issues for a decade I bet we make some progress.

    Maybe fusion will become economical.
    Maybe space based solar will become a reality and hopefully cheaper than hydrocarbons.

    Imagine manufacturing mirrors in space from asteroids we mine and just harvesting all that extra solar radiation (the stuff which just misses the Earth now). There is a ton of money to be made working on that. Free power from space – available 24/7 365 days a year.

    Congress just passed an asteroid mining law and I believe President Obama is planning on signing it.

    I am very optimistic about the future.

  15. What I find unpersuasive is that 100% or 110% of the warming since 1950 has been caused by the human emissions of CO2. That assumption is what leads to such high sensitivities.

    No, it doesn’t. Are you sure you know what you’re talking about?

    There are many things we will understand better in 30 years time, including whether or not we should have started doing something now, rather than waiting.

  16. verytallguy says:

    What I find unpersuasive is that 100% or 110% of the warming since 1950 has been caused by the human emissions of CO2. That assumption is what leads to such high sensitivities.

    That strikes me as unphysical.

    Whereas virtually every climate scientist believes it is not unphysical.

    So, what to believe: what “strikes ” Rick, or what climate science supports. Hmm.

    It’s a tough one, I grant you. I’ll get back to you later.

  17. paulski0 says:

    Looking at the graph I feel like I must be missing something. Why are cumulative CO2 emissions in Lewis’ curve greater than the IPCC curve at all the decadal average points?

  18. paulski0 says:

    RickA,

    What I find unpersuasive is that 100% or 110% of the warming since 1950 has been caused by the human emissions of CO2. That assumption is what leads to such high sensitivities.

    That strikes me as unphysical.

    Can you explain why you believe this is unphysical?

  19. paulskio,
    I’d missed that. Yes, I don’t understand that either. I thought he was using the same emission pathways (rather than the same concentration pathways) and, hence, you’d expect the cumulative emissions to be the same at each decadal average point.

  20. RickA: “How can Nics results be unphysical if they fall within the IPCC range?

That formulation was a bit sloppy. I wanted to briefly repeat what I had written before: “The IPCC range is consistent with our understanding of the radiative properties of CO2, of the increase in humidity that goes with the warming, and of the albedo feedback from less snow and ice due to warming. If the IPCC range had not fitted this physical understanding, I would have said that we do not really have a solid case yet.”

    A result can be wrong while being in the right range. If you are confident that you will get a 1 every time you throw a 6-sided die, you are wrong, but it is in the right range.

  21. niclewis says:

    paulskio
    “Why are cumulative CO2 emissions in Lewis’ curve greater than the IPCC curve at all the decadal average points?”

    ATTP
    ” Yes, I don’t understand that either. I thought he was using the same emission pathways (rather than the same concentration pathways) and, hence, you’d expect the cumulative emissions to be the same at each decadal average point.”

Actually, I’m surprised ATTP doesn’t understand, as I recently explained the reason for the difference in the last paragraph of an answer to him, here: http://judithcurry.com/2015/11/30/how-sensitive-is-global-temperature-to-cumulative-co2-emissions/#comment-747849

  22. Nic,
    You said,

    My model uses the RCP emission pathways

    So, I’m still confused. If the emission pathways are the same, then surely the cumulative emissions at every point in time should be the same, shouldn’t they?

    Also, I will add that your response in that Climate etc. comment did the remarkable trick of saying I was wrong while repeating what I’d just said. Odd that.

  23. paulski0 says:

    ATTP,

    I think what Nic’s saying is that each model used the same RCP emissions pathway as him in its simulation, but concentrations produced by those models tended to be higher* than the RCP concentrations pathway. In producing the graphic they decided to scale the emissions to be consistent with the RCP concentration at each datapoint.

    * I may mean lower here

  24. paulskio,

    I think what Nic’s saying is that each model used the same RCP emissions pathway as him in its simulation, but concentrations produced by those models tended to be higher* than the RCP concentrations pathway. In producing the graphic they decided to scale the emissions to be consistent with the RCP concentration at each datapoint.

    I’m not sure it is. As I understand it, Nic is using the same emission pathways as the IPCCs RCPs and, because of his assumptions about carbon cycle feedback, then gets a different concentration pathway (which is a bit odd given that RCP8.5 – for example – is defined in terms of the forcing pathway, which – I think – Nic’s model no longer follows). However, I still don’t see – if each dot represents a decade – how the cumulative emission at each decade can be different for his emission pathways, than for those used by the IPCC, if they are both the same. Could it be that his cumulative emissions are decadal averages, while the IPCC values are not?

  25. niclewis says:

    paulskio, ATTP,
You seem to have overlooked the final part of my answer:
    ” the CMIP5 ESM results in my figure and Figure SPM.10 use RCP concentration pathways and diagnose, in each CMIP5 model, what emission pathways would produce those concentration pathways.”

    So, I use the underlying RCP emission pathways, but the CMIP5 models used the RCP concentration pathways (which had been diagnosed from the emission pathways using the MAGICC6 EMIC). Each CMIP5 ESM then worked out what emission pathway it would require to produce the RCP concentration pathway it was given. SPM.10 shows the mean of those diagnosed emission pathways (at decadal mean points, as for my model). The mean CMIP5 ESM diagnosed emission pathways differ from the original RCP emission pathways since the CMIP5 ESMs behave differently from MAGICC6.

  26. RickA says:

    paulskio asked me “Can you explain why you believe this is unphysical?”

    Sure.

    The Earth has been warming since the LIA, and all of that warming is mostly natural (at least up until about 1950).

    Why did this natural warming stop?

    We know that warming (whether natural or human caused) releases additional CO2 (and methane) – so some of the CO2 being added to the atmosphere is a result of the natural warming from 1750ish on. A feedback effect.

    It seems likely to me, that since it has continued to warm since 1750ish or so, that whatever is causing that continues today. If some natural warming is still occurring then that natural warming will cause additional CO2 to be released from the environment (from the ocean or ground under retreating glaciers and so on).

    I see no evidence to support the notion that this century scale natural warming just turned off. So that assumption strikes me as unphysical.

    On a longer time frame – it has been warming since the last ice age (about 20,000 years ago).

The sea has risen 120 meters since 20,000 years ago.

    I see no evidence to support the notion that this millennial scale natural warming just turned off. So that assumption strikes me as unphysical.

    I agree that a portion of the warming since 1880 (and since 1950) is caused by humans.

    I just don’t see any evidence to support the notion that all of the warming since 1950 is human caused.

    The only reason I have read for this conclusion is that it is the only explanation which makes the models work – which is not a very good reason in my opinion. Especially since actual observations are quite a bit cooler than the model output (of most of the models and the model mean).

    There are still naturally occurring forcings (orbital forcings for one, el ninos are still happening as a second example) – and I see no evidence they have turned off – so therefore they have to still be affecting the Earth – on all their various timescales.

    The warming effects from el nino are not human caused – correct?

    So how can we discount the warming from el nino and conclude it has no effect – but that all warming is caused by humans?

    So 100% or more strikes me as unphysical.

    50/50 or 25/75 or 75/25 I could buy – but not all of the warming.

    It just makes no sense to me.

    Hope that explanation helps you understand my view on this matter.

  27. Nic,

    So, I use the underlying RCP emission pathways, but the CMIP5 models used the RCP concentration pathways

    Yes, that makes sense, which is what I initially said in the comment to which you responded with “no you’ve got it the wrong way around”.

    Each CMIP5 ESM then worked out what emission pathway it would require to produce the RCP concentration pathway it was given.

    Again, yes, this is what I was suggesting when you responded with “no you’ve got it the wrong way around”.

    The mean CMIP5 ESM diagnosed emission pathways differ from the original RCP emission pathways since the CMIP5 ESMs behave differently from MAGICC6.

    Ahh, I see, so the emission pathways in your figure are not the same. Well, that was easy enough. Pity you didn’t say that earlier.

  28. RickA,

    So 100% or more strikes me as unphysical.

    Well, that’s just silly. Some things do cause cooling.

    It just makes no sense to me.

    Maybe, but this isn’t to your credit.

  29. rustneversleeps says:

    Forget the go-forward emissions, which is confusing enough.

    How can the “historical” cumulative anthropogenic CO2 emissions be different for the two plots? How can the “historical” temperature anomalies be different??

  30. rust,
    Yes, I’d missed that too. That is rather strange. Maybe Nic can explain that one?

  31. JCH says:

    Why did this natural warming stop?

Right around the time Nathaniel Hawthorne’s “The Scarlet Letter” was published. After the affair, things cooled off for 60 years.

  32. RickA says:

    ATTP:

    You said “Well, that’s just silly. Some things do cause cooling.”

    True.

    However, the net of all the warming things and all the cooling things is a warming of the Earth from 1950 to the present. I doubt we disagree on that point.

    100% of that net warming from 1950 on is caused by humans (according to some).

    Because of aerosols emitted by humans, which cause cooling, but for our aerosols it would be even warmer – which is how we get to 110% of the warming is caused by humans (since 1950).

    Perhaps I am being too literal.

But to me this means that the el nino isn’t warming California right now (or anywhere else).

    How can there be natural warming if 100% of the warming is caused by humans?

    el ninos are not caused by humans – they are natural – therefore since 1950 they no longer warm.

    What is your understanding of the meaning of 100% of the warming from 1950 on is attributed to humans?

    I am fine with some warming being natural (say the current el nino) and some caused by humans – but I really don’t get this 100% unnatural warming theory.

  33. niclewis says:

    ATTP,
When you wrote “As I understand it RCPs are concentration pathways from which emission pathways are then determined. Presumably here you’re using the emission pathways, not the concentration pathways.”
    and I responded
    “No, you’ve got it the wrong way round.”
    I meant that you had got it the wrong way round in your first sentence, not in your second sentence. I think that was clear from how I continued:
    “The RCP concentration pathways are determined from the RCP emission pathways, using a single EMIC. See Meinshausen et al 2011. My model uses the RCP emission pathways, but the CMIP5 ESM results in my figure and Figure SPM.10 use RCP concentration pathways and diagnose, in each CMIP5 model, what emission pathways would produce those concentration pathways.”

    rustneversleeps and ATTP
    How can the “historical” temperature anomalies be different??
    Simple. As stated just above the graph in my post at Climate Etc:
    “the black lines show simulation results up to 2000–09.”
So, these are model-simulated, not observed, temperatures. The CMIP5 models simulated a higher GMST rise than that observed per HadCRUT4 (horizontal pink line), whereas my model, being observationally-constrained by that temperature data, matched its rise to 2000-09.

  34. rustneversleeps says:

    So the historical cumulative anthropogenic CO2 emissions are simulated as well??

  35. snarkrates says:

RickA, it isn’t that you are being too literal. It is that you are ignoring physics. What the physics says is that when you have a forcing that increases, the planet warms. As it warms, it emits more IR radiation, the amount determined to first order by the temperature. It continues to warm until the increased outgoing IR matches the increase in the original forcing that triggered the effect. Yes, there are feedbacks, positive and negative, but that is the gist of the process.
    It is likely that the LIA was caused by increased volcanic aerosols and a decrease in solar output. Once the volcanic aerosols rain out (a matter of years) and the solar output goes back to “normal”, the temperature then rises, but only to the point where outgoing IR restores equilibrium.

    Warming doesn’t happen without adding energy. We know CO2 as a greenhouse gas adds energy. That CO2 would warm the planet was predicted clear back in 1896 by Arrhenius. That we are seeing warming now ought not to be surprising unless you weren’t paying attention over the last 120 years.
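The gist of the process described above can be put into numbers with a zero-dimensional energy-balance sketch (my own toy numbers: the 5.35 W/m² logarithmic forcing coefficient is the standard simplified expression, and the feedback parameter of 1.2 W/m² per K is just an illustrative mid-range choice):

```python
import math

def equilibrium_warming(forcing_wm2, feedback_param=1.2):
    """Warming (K) needed to restore radiative balance after a
    forcing, i.e. solving dF = lambda * dT for dT. The feedback
    parameter lambda (W/m^2 per K) of 1.2 is illustrative."""
    return forcing_wm2 / feedback_param

# Forcing from a doubling of CO2, from the simplified logarithmic
# expression F = 5.35 * ln(C/C0):
f2x = 5.35 * math.log(2)  # ~3.7 W/m^2

# equilibrium_warming(f2x) is then ~3 K with these assumptions.
```

The planet keeps warming until the extra outgoing IR (the lambda·dT term) cancels the forcing, which is exactly the equilibrium argument in the comment above.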

  36. izen says:

    @-“The Earth has been warming since the LIA, and all of that warming is mostly natural (at least up until about 1950).”

The warming from, and cooling into, the LIA are not ‘Natural’. The MWP-LIA change is rather smaller on a global scale than that measured since 1950. They have real physical causes; the 2LoT requires that. A good deal is known about those causes, and that you are not aware of the causes of the LIA and its end can be corrected with a little work.

    @-“On a longer time frame – it has been warming since the last ice age (about 20,000 years ago). …I see no evidence to support the notion that this millennial scale natural warming just turned off. So that assumption strikes me as unphysical.”

That conflates two climates: a cold glacial period and the present interglacial. The Milankovitch trigger for the warming out of the glacial, in the middle of your timescale, is well established.
In the ~8000 years since the peak warmth, the global temperature has slowly dropped by about a degree (with lots of noise), as is similarly seen in every past paleoclimate record of the glacial cycles.

Until now. There is no evidence of a change in the inherent (‘natural’?) forcings that are expected in a glacial cycle to explain the recent rapid rise back to Holocene peak temperatures. There is, however, this whopping and coincidental rise in anthropogenic CO2.
And the numbers fit, using the best estimates we have. In fact, the CO2 rise could have caused a slightly larger rise in temperature than that measured, so other factors must have cooled/negated part of the CO2 impact.

  37. @RickA
Of course there’s natural warming (and cooling), and it’s always there, caused by such things as small changes in Earth’s tilt and orbit (slow acting), and changes in the sun’s radiance and volcanic activity (fast acting); but it’s observed and accounted for.

On the other hand, atmospheric CO2 concentrations have risen over the last 100 years to levels not seen for at least 800,000 years and, lo and behold, the warming which physics tells us the greater concentration should create matches the observed increase. So if you want to attribute the warming to natural causes, can you please explain how the amount of increased atmospheric CO2 recorded did not produce the warming we would anticipate?
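As a rough check on that claim, here is the back-of-envelope version (treat the numbers as illustrative: ~280 ppm pre-industrial, ~400 ppm today, and an assumed mid-range transient response of 1.8 °C per doubling):

```python
import math

def transient_warming(c_now_ppm, c_then_ppm=280.0, tcr=1.8):
    """Expected transient warming (C) for a CO2 rise, assuming
    `tcr` degrees per doubling of concentration (1.8 here is an
    illustrative mid-range TCR). Ignores non-CO2 forcings."""
    return tcr * math.log(c_now_ppm / c_then_ppm) / math.log(2)

# ~400 ppm today versus ~280 ppm pre-industrial gives a bit under
# 1 C of expected transient warming, in the ballpark of what has
# actually been observed.
```

This is only order-of-magnitude arithmetic, of course; proper attribution also has to account for aerosols, other greenhouse gases, and ocean heat uptake.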

  38. Jim Eager says:

    RickA wrote: “We know that warming (whether natural or human caused) releases additional CO2 (and methane) – so some of the CO2 being added to the atmosphere is a result of the natural warming from 1750ish on.”

    Rick is conveniently forgetting the “CO2 lags temperature” meme here, meaning he is about 700 years too early to use the CO2 feedback as his argument.

    RickA: “I see no evidence to support the notion that this century scale natural warming just turned off.”

Really? Ever notice that the solar sunspot proxy peaked circa 1960?

    Rick: “it has been warming since the last ice age (about 20,000 years ago).”

Ah, no, it has not. Orbitally forced peak post-glacial warming was 8000-6000 years ago, during the Holocene Climate Optimum. Global mean temperature has been slowly declining ever since, albeit with extended episodes above and below that trend. The fact is, over just the last half century we have reversed all of that long-term decline.

    Rick specifically cites orbital forcing as a continuing natural forcing. Pity it is of the wrong sign to support his argument.

    Rick also cites el nino. Pity that ENSO is not a forcing.

    In short, Rick’s view on this matter is clearly based on misunderstanding.

  39. izen says:

    @-“The RCP concentration pathways are determined from the RCP emission pathways, using a single EMIC. See Meinshausen et al 2011. My model uses the RCP emission pathways, but the CMIP5 ESM results in my figure and Figure SPM.10 use RCP concentration pathways and diagnose, in each CMIP5 model, what emission pathways would produce those concentration pathways.”

    Yes. It is clear a large part of the offset is created by converting from emissions to concentrations by one method, and then back from concentrations to emissions by another method.

    @-“So, these are, model simulated, not observed, temperatures. The CMIP5 models simulated a higher GMST rise than that observed per HadCRUT4 (horizontal pink line), whereas my model, being observationally-constrained by that temperature data, matched its rise to 2000-09.”

    Total emissions to date are less than 500Gt. The current anomaly above the pre-emission era is close to (possibly over) 1degC. A level the solid lines do not reach until almost 1000Gt has accumulated. The observational constraints may have shifted.

  40. @Nic,

    So, these are, model simulated, not observed, temperatures. The CMIP5 models simulated a higher GMST rise than that observed per HadCRUT4 (horizontal pink line), whereas my model, being observationally-constrained by that temperature data, matched its rise to 2000-09.

    I really do hope your model takes ENSO into account, so that the drastic GMST spike that we’re gonna witness won’t cause any sudden mismatch …

  41. anoilman says:

    K.a.r.S.t.e.N: … and solar variance, and volcanoes…

  42. Willard says:

    > Nic’s climate sensitivity was not just picked at random.

    Nobody said otherwise, and it’s the opposite of “picked at random” that is lukewarmingly troublesome.

  43. RickA says:

    izen:

    You said “The warming from, and cooling into, the LIA are not ‘Natural’. The MWP-LIA change is rather smaller on a global scale than that measured since 1950.”

    What is your definition of “natural”?

    Mine is everything which affects the climate except human causes, like human CO2 emissions, human land use changes, and other human effects.

    So “natural” is changes in the sun, heliosphere, magnetic coupling, natural fires, volcanoes, orbital variations, clouds, ocean currents, etc.

    So I see the LIA and the MWP as being natural climate changes – i.e. not caused by humans.

    Please let me know if you disagree.

    Secondly, I see a change of almost 1C between the peak MWP and the bottom of the LIA – which is more than the warming since 1950 (but not much).

    I do question how much of the warming since 1950 is adjustments to the land temperature record. I do not know the answer to this but have seen that the past was cooled and the present warmed – but am not sure of the magnitude of those changes since 1950.

    It would be interesting to see what the raw temperature readings (unadjusted) show from 1950 to the present.

  44. RickA says:

    Jim said “Rick is conveniently forgetting the “CO2 lags temperature” meme here, meaning he is about 700 years too early to use the CO2 feedback as his argument.”

    Ok – how much of the warming last century and this century is from CO2 released due to the MWP? That ended about 700 years ago.

  45. The Very Reverend Jebediah Hypotenuse says:

    izen says:

    The observational constraints may have shifted.

    I predict that that sentence will become a common refrain in the not-too-distant future.

    Meanwhile, the growth of political pragmatism:
    http://www.commerce.senate.gov/public/index.cfm/2015/12/data-or-dogma-promoting-open-inquiry-in-the-debate-over-the-magnitude-of-human-impact-on-earth-s-climate

  46. RickA says:

    I appreciate all the corrections to my understanding which have been shared.

    I noticed nobody has taken a crack at answering my question “What is your understanding of the meaning of 100% of the warming from 1950 on is attributed to humans?”

    I am very skeptical of this assertion.

  47. bill shockley says:

    Nic,

    If it interests you, could you post your Anthropogenic CO2 emissions (inputs) and the corresponding ppm outputs from your model? I’d like to check them against my model. Also, if you can post a picture of your CO2 decay profile (if your model generates one), I could mimic it in my own model. Mainly, I would like to see for myself how well it backtests against fairly well known emissions and concentrations. My own decay profile, I mimicked from the one in a Joos paper which seems pretty well accepted, and then I adjusted it to get a perfect backtest, but my emissions data only include FF emissions, i.e., no biosphere emissions. But as you can see, the results square well with what other models say.

    Here’s my decay profile and a couple experiments I ran:
    https://googledrive.com/host/0B6KqW0UlivnVVks4cnN1THhYR3M
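
    For concreteness, here is a minimal sketch of the kind of impulse-response “decay profile” model bill describes. The coefficients are roughly the Joos et al. (2013) multi-exponential fit, and 2.124 GtC per ppm is the standard conversion, but everything below is illustrative – these are not bill’s or Nic’s actual parameters.

```python
import math

# Roughly the Joos et al. (2013) impulse-response fit: part of a CO2 pulse
# stays airborne essentially forever, the rest decays on three timescales.
A   = [0.217, 0.224, 0.282, 0.276]        # pulse fractions (sum ~ 1)
TAU = [float("inf"), 394.4, 36.5, 4.3]    # e-folding times, years

def airborne_fraction(t):
    """Fraction of a 1 GtC pulse still airborne after t years."""
    return sum(a * math.exp(-t / tau) for a, tau in zip(A, TAU))

def concentration(emissions, c0=278.0, gtc_per_ppm=2.124):
    """Convolve annual emissions (GtC/yr) with the impulse response -> ppm."""
    ppm = []
    for t in range(len(emissions)):
        airborne = sum(emissions[s] * airborne_fraction(t - s)
                       for s in range(t + 1))
        ppm.append(c0 + airborne / gtc_per_ppm)
    return ppm

# e.g. 50 years of constant 10 GtC/yr emissions:
trajectory = concentration([10.0] * 50)
```

    A backtest of the sort bill mentions would simply feed historical emissions in and compare the output against the observed concentration record.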

  48. RickA: “I do question how much of the warming since 1950 is adjustments to the land temperature record. I do not know the answer to this but have seen that the past was cooled and the present warmed – but am not sure of the magnitude of those changes since 1950.”

    The adjustments over land make global warming about 0.2°C larger. The adjustments of the sea surface temperature make global warming smaller. In total the adjustments make global warming about 0.2°C smaller.

    Which direction the adjustments go does not tell you anything about global warming. That is only relevant if you expect a conspiracy.

    RickA: It would be interesting to see what the raw temperature readings (unadjusted) show from 1950 to the present.

    A well chosen time. After 1950 the adjustments make nearly no changes to the global mean temperature (regionally they do still matter). The 0.2°C adjustments I mentioned above are before 1950.

  49. Jim Eager says:

    RickA wrote: “I see a change of almost 1C between the peak MWP and the bottom of the LIA – which is more than the warming since 1950 (but not much).”

    That means the MWP was only about 0.5C above the long term downward trend since the HCO, and the LIA only about 0.5C below that trend, which means each were only around half the change since 1950.

    Your arguments are dissolving beneath your feet, Rick.

    RickA: “It would be interesting to see what the raw temperature readings (unadjusted) show from 1950 to the present.”

    And you expect anyone to take you seriously after that whopper?

  50. Nic,

    I meant that you had got it the wrong way round in your first sentence, not in your second sentence. I think that was clear from how I continued:

    Yes, and once again you chose to focus on an irrelevance (do you really not do this on purpose?). The point was that GCMs use concentration pathways, not emission pathways. Given that the RCPs are defined in terms of their concentration pathways, whether or not you actually know the emission pathway in advance, or work it out later, they are still defined in terms of the concentration pathway, not the emission pathway, which is what I was trying to get across. If you really aren’t trying to do this, maybe you really should try harder to think a little before going “Aha, I’ve found something to criticise”. On the other hand, if your goal is simply to promote your low climate sensitivity, and now low carbon cycle feedbacks, at all costs, rather than to actually engage in a serious discussion, carry on, you’re doing a fine job.

  51. Rick,

    I noticed nobody has taken a crack at answering my question “What is your understanding of the meaning of 100% of the warming from 1950 on is attributed to humans?”

    There’s a very good Realclimate post that discusses this, but their site seems to be down (search for “Realclimate attribution”). Also, if you look at the AR5 radiative forcing diagram, the best estimate for the change in anthropogenic forcing since 1950 is about 1.7W/m^2. So, this alone could explain more than half the observed warming. Feedbacks don’t have to be large to explain the rest. If anything, it’s harder to explain anthropogenic forcings providing less than all, than it is to explain them providing more than all.
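
    To put rough numbers on that argument: if transient warming scales with forcing, then ΔT ≈ TCR × ΔF / F_2x, with F_2x ≈ 3.7 W/m^2 for a CO2 doubling. A hedged sketch, using illustrative round numbers for the TCR values and the observed warming:

```python
F_2X = 3.7   # W/m^2 per CO2 doubling (canonical value)
DF   = 1.7   # W/m^2, approx. anthropogenic forcing change since 1950 (AR5)

def transient_warming(tcr, forcing=DF):
    """Warming implied by a forcing change, assuming it scales like TCR."""
    return tcr * forcing / F_2X

observed = 0.7  # degC since 1950, illustrative
for tcr in (1.35, 1.8, 2.5):
    frac = transient_warming(tcr) / observed
    print(f"TCR {tcr}: {transient_warming(tcr):.2f} degC ({frac:.0%} of observed)")
```

    Even a low TCR of 1.35 gives roughly 0.6°C here, i.e. most of the observed warming, before feedbacks do any extra work.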

  52. BBD says:

    RickA and his beloved talking points. Again.

    * * *

    As regards the carbon cycle, the PETM is instructive: a (geologically) abrupt and massive release of carbon followed by ~150ka of slow relaxation.

    Not exactly what one would expect if the recovery from carbon cycle perturbations is rapid and concomitant temperature forcing therefore relatively weak and of short duration.

  53. BBD says:

    Sorry: Source for figure.

    Not enough coffee.

  54. dikranmarsupial says:

    RickA: wrote “How can Nics results be unphysical if they fall within the IPCC range?”

    Consider, for example, the cyclic model by Loehle and Scafetta, which can be made to give estimates of climate sensitivity that closely match the IPCC range, providing the gross errors of the model are corrected (especially if new errors are not introduced – mea maxima culpa – and nobody else’s). However, the Loehle and Scafetta model is still unphysical because there is no plausible evidence for the cyclic component. This is a problem with statistical model fits: you can often fit the data well with an incorrect model, and draw faulty conclusions as a result. That is why I (as a statistician) tend to have more faith in a model that is based on physics and explains the observations without having been explicitly fitted to them.

    There is also the point about all models should be as simple as possible, but no simpler. You can sometimes get a simple model that fits the observational data well, but it doesn’t extrapolate properly, because some important feature of the underlying system did not greatly affect the calibration period, but would be more active outside it. For instance, if we only observe the carbon cycle in a period where it is being vigorously driven by anthropogenic emissions, we can explain that without needing to model the interactions of the deep ocean and thermocline. However, such a model is not going to give a good account of what would happen if we were to stop those emissions.

  55. Also, that your result falls within the likely range does not suddenly make it likely. Falling within the range would imply that it is plausible, but doesn’t necessarily mean that it’s more likely than other results. The main problem is – IMO – that we want to make decisions based on what could happen in the future, which involves considering everything. That one can show that warming might be low (which even the IPCC allows) does not suddenly mean that we should ignore that it might not be.

  56. dikranmarsupial says:

    ATTP indeed, from a Bayesian perspective we should consider all possible models, weighted by their plausibility (determined by their ability to explain the observations and the prior) and use the resulting distribution to choose the course of action (e.g. by minimising the expected loss). The difficulty there is usually in deciding the prior, but at least in the Bayesian framework this needs to be stated explicitly as part of the analysis.
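
    A toy version of that Bayesian recipe (entirely made-up numbers, just to show the mechanics): weight each candidate model by likelihood times prior, then base the decision-relevant quantity on the weighted mixture rather than on the single best fit.

```python
import math

models = {            # name: (predicted warming, prior plausibility)
    "low":  (1.0, 0.2),
    "mid":  (2.0, 0.6),
    "high": (3.5, 0.2),
}
obs, sigma = 1.9, 0.5   # an "observation" and its uncertainty (invented)

def likelihood(pred):
    """Gaussian likelihood of the observation given a model's prediction."""
    return math.exp(-0.5 * ((pred - obs) / sigma) ** 2)

# Posterior weights: likelihood times prior, normalised to sum to one.
w = {name: likelihood(pred) * prior for name, (pred, prior) in models.items()}
z = sum(w.values())
posterior = {name: v / z for name, v in w.items()}

# The expectation under the mixture is what a loss-minimising decision uses.
expected = sum(posterior[name] * models[name][0] for name in models)
```

    The prior weights here are, as dikran says, the part that has to be stated explicitly and argued for.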

  57. verytallguy says:

    Rick,

    It’s more simple than you think.

    The world has warmed.

    If, absent any human contribution, the world would not have warmed at all, then attribution is 100% human.

    The assessment is that natural effects are a very slight cooling over the period, hence attribution is 100-110% human.

  58. verytallguy says:

    Dunno what happened there. Last post should have read as follows- an Eddie from moss would be appreciated :

    Rick,

    It’s more simple than you think.

    The world has warmed.

    If, absent any human contribution, the world would not have warmed at all, then attribution is 100% human.

    The assessment is that natural effects are a very slight cooling over the period, hence attribution is 100-110% human.

  59. vtg,
    I’m not seeing the difference between the two posts, and I don’t follow the “If absent any human contribution” argument. “Eddie from moss” is very funny, though.

  60. verytallguy says:

    For some weird reason, there’s a whole paragraph it’s refused to post in the middle of the post. It’s done it twice. It’s also removing “greater than” and “less than” signs.

    *penny drops* the greater than and less than signs have been interpreted as html, thereby removing the paragraph between them.

    I’m going to give up.

  61. vtg,
    That makes sense now. Do you want me to delete anything, or are you happy to leave your “Eddie as moss” comment? 🙂

  62. verytallguy says:

    Don’t mind- you’re the boss!

    If it’s possible to extract the original comment including the greater and less thans that would be nice.

    It *did* make sense. Honest.

  63. It appears to have removed that portion even when I look at the comments in the editor, so it seems that I can’t extract the original.

  64. verytallguy says:

    Things to do, time to abandon!

  65. Bill,
    Nic’s published work is pretty interesting and deserves to be taken seriously, and many do. However, even James has some criticisms of Nic’s choice of priors.

  66. bill shockley says:

    vtg, you could do a screen capture of the text in notepad, for example, and post the image if you have a place where the image can be linked from. I’ve been that desperate at times. tinypic used to be great but its utility has pretty much disappeared now. I’ve resorted to google drive for images, which is very easy to use, but the images don’t embed on wordpress because wordpress doesn’t have an image tag. Completely nonsensical.

    You can also try some of the markdown tags from the wordpress markdown reference guide for example the code tags and the code block tags. WordPress is really fickle for such a large blog site.

  67. bill shockley says:

    ATTP, thanks.

  68. Nic’s published work is pretty interesting and deserves to be taken seriously, and many do.

    The problem is not his work, the problem is the spin outside of the scientific literature: the pretence that this outlier result equals the best scientific understanding of the climate system, ignoring all the other estimates based on many different methods and observations.

  69. BBD says:

    bill shockley

    NL’s estimates just do not fit with what is known about palaeoclimate behaviour. I agree strongly with Victor about NL’s tendency to exaggerate the importance of his results (and others to echo this spin). It is misleading and unhelpful.

  70. paulski0 says:

    RickA,

    Because of aerosols emitted by humans, which cause cooling, but for our aerosols it would be even warmer – which is how we get to 110% of the warming is caused by humans (since 1950).

    The 100/110% includes aerosol forcing contribution. Without including aerosols it would be about 150%.
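
    The arithmetic behind those percentages is simple enough to sketch (illustrative numbers only – this decomposition is not taken from any particular attribution study):

```python
observed     = 0.7                # degC since 1950, illustrative
ghg_warming  = 1.5 * observed     # greenhouse-gas contribution alone (~150%)
aerosol_cool = -0.4 * observed    # anthropogenic aerosol cooling (negative)

anthro_net  = ghg_warming + aerosol_cool   # net human contribution
attribution = anthro_net / observed        # fraction of observed warming
print(f"net anthropogenic share: {attribution:.0%}")  # ~110%
```

    The point being that a greater-than-100% attribution is not paradoxical: it just means a cooling term (aerosols, and/or slight natural cooling) is being offset.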

    The Earth has been warming since the LIA, and all of that warming is mostly natural (at least up until about 1950).

    60% of the time this would be right every time 😉

    From the 1600s both simulations and proxy reconstructions indicate a general small warming up to about 1900, though perhaps not significant in the proxy data as far as it goes, on the order of 0.1degC/Century. However, it’s not at all continuous. This can be comfortably explained by known volcanic and solar forcing. Up to about 1900, yes I think warming from a generalised “LIA” is almost all natural. From 1900 to 1950 there is probably a significant anthropogenic influence, could be about half and half.

    Why did this natural warming stop?

    Because natural warming isn’t magical, it has a cause. Based on comparisons between forced simulations of the past millennia against historical simulations initialised by a generic pre-industrial control, I think natural warming probably accounts for about 15% of total observed historical warming since 1850. At a stretch possibly as much as 25%. However, nearly all of this occurs before 1960. The first half of the 19th Century featured extremely strong volcanic activity and the period 1920-1960 was quiet volcanically. We would therefore expect a warming trend between these periods. However, even the numbers I’m quoting here are quite dependent on a moderate sensitivity (>2C ECS). Trying to squeeze out further non-negligible warming extending to the present invokes a high-sensitivity Earth with a reasonably large ECS/TCR ratio, to supply the necessary “pipeline” warming. You’ll perhaps appreciate that’s incompatible with Nic Lewis’ model.

    We know that warming (whether natural or human caused) releases additional CO2 (and methane) – so some of the CO2 being added to the atmosphere is a result of the natural warming from 1750ish on. A feedback effect.

    Isn’t this essentially what Nic Lewis is arguing against?

  71. BBD says:

    However, even the numbers I’m quoting here are quite dependent on a moderate sensitivity (>2C ECS). Trying to squeeze out further non-negligible warming extending to the present invokes a high-sensitivity Earth with a reasonably large ECS/TCR ratio, to supply the necessary “pipeline” warming. You’ll perhaps appreciate that’s incompatible with Nic Lewis’ model.

    Lukewarmers seem to treat CO2 forcing as some kind of special case, rather than just another radiative perturbation of the climate system.

    While there are of course different efficacies of forcing, as far as I know they are not different enough to square the circle of lukewarmerist low sensitivity to radiative perturbation and observed (and palaeoclimate) behaviour.

    The two just aren’t compatible.

  72. paulski0 says:

    …enough to square the circle of lukewarmerist low sensitivity to radiative perturbation and observed (and palaeoclimate) behaviour.

    I’m not sure a low-end 1.5C ECS is necessarily incompatible with temperature evolution over the past millennium up to the present. But if you want that ECS and to also infer a large natural LIA recovery warming extending to the present I think you’re breaching the implausibility barrier.

  73. paulskio,

    But if you want that ECS and to also infer a large natural LIA recovery warming extending to the present I think you’re breaching the implausibility barrier.

    I think that is an issue that many don’t get. All of the variability gives us some indication of how sensitive our climate is to radiative perturbations. It’s hard to argue for a high sensitivity to natural perturbations and a low one to anthropogenic ones.

  74. bill shockley says:

    BBD,

    I agree with ATTP, that it’s interesting, from the standpoint of getting a handle on certainty. It’s different, it’s interesting, it’s a discipline (Bayesian theory). But substantively, I don’t take him seriously. I’m all on board the paleo-is-better wagon.

    Hansen says 800,000 years of paleo history PROVES ECS is 3.0 +/- 0.5C. If you want to add the caveat that sensitivity may be relative to the strength of forcing, then it’s going to be stronger now, not weaker than during the last million years.

  75. Willard says:

    > As I understand it, this all within the realms of possibility, so it could well be what happens. However, discovering that one can develop a plausible model that suggests that warming will be on the low side, is not really evidence that it will be.

    There’s a jump from possible to plausible in these two sentences. First, we need to argue that what is possible is plausible. Then we argue that what is plausible is likely.

  76. dikranmarsupial says:

    Rather than showing that climate sensitivity might be low, what we really want is a study that shows that climate sensitivity cannot be high, that would be far more reassuring.

  77. Willard,
    I’m going to have to think about that.

    Dikran,

    what we really want is a study that shows that climate sensitivity cannot be high, that would be far more reassuring.

    Exactly, and a problem is that Nic Lewis’s method cannot do so, by definition. If, for example, one assumes that feedbacks are linear, then you cannot say anything about whether they are, or not. By assuming that they’re linear, he’s fundamentally assuming that our warming in future will essentially continue as it has in the past. There is, however, plenty of evidence that this is probably too simplistic and that even if the feedbacks themselves are linear, there is a time dependence in the spatial response. As I understand it, polar amplification is largely accepted and is pretty strong evidence that Nic Lewis’s assumption of linear feedbacks is clearly wrong at some level.

  78. BBD says:

    paulski

    I’m not sure a low-end 1.5C ECS is necessarily incompatible with temperature evolution over the past millennium up to the present. But if you want that ECS and to also infer a large natural LIA recovery warming extending to the present I think you’re breaching the implausibility barrier.

    I think I’d agree with all of that (definitely the second sentence) but I can’t see a reason why climate sensitivity for the last ~1ka would be different than for the Pleistocene as a whole (eg. Hansen & Sato, 2012). What’s more, I can see evidence that 1.5C ECS *is* incompatible with palaeoclimate behaviour across the entire Cenozoic (Rohling et al., 2012). So by that reasoning I think we can probably discount arguments based on millennial climate behaviour ‘supporting’ an ECS of ~1.5C. Would you agree?

  79. BBD says:

    ATTP

    It’s possible that I might win the lottery, but not plausible to the extent that the bank would approve a loan on the possibility.

  80. JCH says:

    Just me, but when you have this gigantic pool of cold water called the deep oceans, and this phenomena called “anomalously strong winds for two decades”, I think anybody who thinks observations of SSTs and 2 meters above the surface that include that completely abnormal mess can determine climate sensitivity – as in, I know it’s low – is off their rocker.

  81. BBD,
    Okay, yes, I get it now 🙂

  82. bill shockley says:

    JCH: Just me, but when you have this gigantic pool of cold water called the deep oceans, and this phenomena called “anomalously strong winds for two decades”, I think anybody who thinks observations of SSTs and 2 meters above the surface that include that completely abnormal mess can determine climate sensitivity – as in, I know it’s low – is off their rocker.

    It’s not just you. I think Hansen has said essentially the same thing.

  83. what we really want is a study that shows that climate sensitivity cannot be high, that would be far more reassuring.

    Good point, if only because science is best at showing things wrong. Falsification is still hard, but a lot easier than being sure you are probably right about something.

    Also, coming back to the graph of future temperature increases that the blog post was about: extrapolation based on statistical models is very dangerous. As dikranmarsupial wrote:

    This is a problem with statistical model fits, you can often fit the data well with an incorrect model, and draw faulty conclusions as a result. That is why I (as a statistician) tend to have more faith in a model that is based on physics and explains the observations without having been explicitly fitted to them.

    There is also the point about all models should be as simple as possible, but no simpler. You can sometimes get a simple model that fits the observational data well, but it doesn’t extrapolate properly, because some important feature of the underlying system did not greatly affect the calibration period, but would be more active outside it. For instance, if we only observe the carbon cycle in a period where it is being vigorously driven by anthropogenic emissions, we can explain that without needing to model the interactions of the deep ocean and thermocline. However, such a model is not going to give a good account of what would happen if we were to stop those emissions.

    For now, as long as we do not understand the physics of the strong damping feedbacks that would be needed for the low climate sensitivity of Nic Lewis, his result is not just an outlier, it is also just statistics.

  84. bill shockley says:

    This is Hansen’s amazing image “proving” ECS is 3.0C +/- 0.5C. I think it’s from around 2009.

  85. I was thinking a little about the point that rustneversleeps made about even the historical temperatures not lining up. My understanding (which may be wrong) is that the TCRE is really the CO2 (or GHG/anthro) attributable warming divided by the cumulative CO2 emissions. If we think that natural/internal variability has provided a cooling influence over the last decade or so (and certainly the best estimate for the anthropogenic warming since 1950 is about 110% of what’s been observed), and Nic is fitting to the observed temperature only, then he may be underestimating the CO2-attributable portion by about 10%.
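
    That suggestion can be put in a line or two of arithmetic (all numbers illustrative):

```python
cumulative_C = 0.55   # TtC (i.e. 550 GtC) emitted to date, roughly
observed_dT  = 0.9    # degC warming over the same period, illustrative
anthro_frac  = 1.10   # best-estimate anthropogenic share of observed warming

tcre_naive    = observed_dT / cumulative_C                # fit to raw observations
tcre_adjusted = anthro_frac * observed_dT / cumulative_C  # fit to attributable warming
# The ratio tcre_adjusted / tcre_naive equals anthro_frac, i.e. ~10% higher.
```
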

  86. Willard says:

    Before you think about plausible reasoning, AT, please consider where the “it’s consistent with” can lead:

    Auditors might begin to be puzzled by the expression “appears to be consistent with”, an expression that truly deserves due diligence, as is often the case.

    Open thread – Nov 2011

    (This thread might give you an idea why there’s no real open threads at Judy’s.)

    Honest brokers use “consistent with” to weasel their way out of rocky places.

    ***

    While an outlier O can be “consistent” with the overall picture, O is still an outlier. How do we know it’s an outlier? By looking at comparables. One model doesn’t tell us much – we need to compare it to others. The very idea that one model could be plausible stretches credulity. As Steve Easterbrook has pointed out in a presentation I’ve recently seen – an isolated model ain’t that useful.

    Hence my question at Judy’s, yet to be answered: would it be possible for Nic to come up with a model with an even lower sensitivity and an even more modest carbon cycle that would be “consistent with” the official guestimates?

  87. paulski0 says:

    BBD,

    So by that reasoning I think we can probably discount arguments based on millennial climate behaviour ‘supporting’ an ECS of ~1.5C. Would you agree?

    It seems unlikely that a truly representative ECS would be as low as 1.5C based on the full range of evidence.

    ATTP,

    (and certainly the best estimate for the anthropogenic warming since 1950 is about 110% of what’s been observed)

    I think anthropogenic warming attribution being about 110% might be due to HadCRUT4 coverage bias. The D&A method scales modelled warming to HadCRUT4 warming, but only for grid cells with coverage, and determines a scaling factor which fits best. When producing an attributable warming estimate I would guess that scaling factor is applied to the full model global average, which would have a larger average temperature change than the HadCRUT4 covered area.
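
    The mechanism paulski0 describes can be sketched with invented numbers, purely to show the direction of the effect:

```python
model_covered = 0.80   # model warming over HadCRUT4-covered cells, degC
model_global  = 0.90   # model warming averaged globally (extra Arctic warming)
obs_covered   = 0.72   # HadCRUT4 warming over its covered cells, degC

beta = obs_covered / model_covered         # scaling factor from the D&A fit
attributable_global = beta * model_global  # scaling applied to the global mean
# attributable_global (0.81) > obs_covered (0.72): the attributed warming
# exceeds the warming HadCRUT4 itself shows, i.e. attribution > 100%.
```
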

  88. dikranmarsupial says:

    The thing to remember about “it’s consistent with” is that “is not consistent with” is usually a severe problem for a theory precisely because “is consistent with” is such a low hurdle (and hence rather faint praise).

  89. Willard says:

    A stopped clock is consistent with a well-functioning clock at least twice a day.

  90. dikranmarsupial says:

    ;o) The example I would use is being able to differentiate cos(x) is consistent with being a mathematician. This is correct, but it doesn’t mean that person is a good (or even competent) mathematician.

    The phrase “is consistent with” is a useful phrase in a scientific context, provided the audience are aware of what (and usually, how little) it means. I have noticed that many skeptics interpret the fact that observed temperature trends are “consistent with” the GCM ensemble as being some sort of ringing endorsement of the models, but I don’t think any climatologist actually means that. I suspect I have tried to explain that on blogs a number of times.

  91. paulskio,

    The D&A method scales modelled warming to HadCRUT4 warming, but only for grid cells with coverage, and determines a scaling factor which fits best. When producing an attributable warming estimate I would guess that scaling factor is applied to the full model global average, which would have a larger average temperature change than the HadCRUT4 covered area.

    Does that not still potentially explain the discrepancy? If Nic’s model is constrained to match HadCRUT4, then he will still potentially be under-estimating the attributable anthropogenic warming.

  92. Willard says:

    Let’s emphasize Nic’s consistency clauses in the main para of his piece:

    With this objective, I have recently developed a simple but physically-consistent ESM, with 2-box climate and ocean carbon sink sub-models. Its ocean carbon-cycle sub-model respects ocean carbonate chemistry and AR5 estimates of surface and deep ocean carbon reservoirs and flows, and its land carbon sink sub-model’s characteristics are consistent with feedback parameter estimates in a recent paper.[9] I select the simple ESM’s key climate, and land and ocean carbon-cycle, sub-model parameters so that its simulated global temperature, heat uptake and carbon-cycle changes since preindustrial best match recent observational estimates, sourced largely from AR5. The CO2 emissions and non-CO2 forcings used to drive the simple ESM are primarily taken from the RCP dataset, but with values modified to conform with the more recent AR5 estimates over 1765-2011.[10] That results in the model having an equilibrium climate sensitivity (ECS) of 1.7°C and a transient climate response (TCR) of a little over 1.35°C.

    How sensitive is global temperature to cumulative CO2 emissions?

    And then, by “being consistent” all the way, Nic stumbled upon a TCR that is consistent with every low-balling he did so far in his research.

    Sometimes, you’re just lucky.

  93. Actually, this is interesting

    and its land carbon sink sub-model’s characteristics are consistent with feedback parameter estimates in a recent paper.[9] I select the simple ESM’s key climate, and land and ocean carbon-cycle, sub-model parameters so that its simulated global temperature, heat uptake and carbon-cycle changes since preindustrial best match recent observational estimates

    I had somewhat missed that Nic also seems to have tried to constrain his carbon cycle feedback to match recent observations. I did ask Nic what would happen if he ran his model with a higher TCR (say 2C) to see how much his carbon cycle assumptions influence his results.

  94. Willard says:

    Considering the amount of weasel wording in that para, it might be interesting to pay due diligence to the two endnotes:

    [9] Friedlingstein, P (2015) Carbon cycle feedbacks and future climate change. Phil Trans R Soc A.373:20140421

    [10] Except for volcanic forcing, where the lower RCP estimates are preferred.

    The wording of the last footnote is intriguing. Let’s put it back in the claim it belongs to:

    The CO2 emissions and non-CO2 forcings used to drive the simple ESM are primarily taken from the RCP dataset, but with values modified to conform with the more recent AR5 estimates over 1765-2011, [except for volcanic forcing, where the lower RCP estimates are preferred].

    No justification has been provided.

    Not that it matters much to Matt King Coal, who we predict will promote this study in a short while.

    ***

    Coincidentally, there will be this hearing in a few days:

    U.S. Sen. Ted Cruz (R-Texas), chairman of the Subcommittee on Space, Science, and Competitiveness, will convene a hearing titled “Data or Dogma? Promoting Open Inquiry in the Debate over the Magnitude of Human Impact on Earth’s Climate” on Tuesday, December 8 at 3 p.m. The hearing will focus on the ongoing debate over climate science, the impact of federal funding on the objectivity of climate research, and the ways in which political pressure can suppress opposing viewpoints in the field of climate science.

    Senate Hearing: Data or Dogma?

    It will be interesting to see if Judy’s testimony will be consistent with Nic’s results.

    In any case, she’s looking for some tips:

    the syrian drought will almost certainly come up; david titley has been writing about this. i will try to research this before the hearing; pointers would be appreciated.

    Senate Hearing: Data or Dogma?

    Expertise experts such as Judy only need a few pointers and a few days to make their testimonies consistent with the state-of-the-art science in just about any climate science field.

  95. dikranmarsupial says:

    ” I select the simple ESM’s key climate, and land and ocean carbon-cycle, sub-model parameters so that its simulated global temperature, heat uptake and carbon-cycle changes since preindustrial best match recent observational estimates, sourced largely from AR5.”

    Optimisation is the root of all evil in statistics. If you only look at the parameter set that gives the best fit to the observations then over-fitting is potentially a pitfall (i.e. the realization of the noise in the observations may have a significant influence on the “best” set of parameters). But more importantly, while the set of “best” parameters in this sense may (or may not) be unique, there may be a wide range of other parameter settings that are almost as good.

    One approach would be to determine the set of parameters that can plausibly simulate the observed quantities (given the uncertainties). If you can’t get a climate sensitivity > X using any combination of parameter values from this set, then you have evidence that climate sensitivity cannot be that high (provided you accept the model itself as reasonable).
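    The best-fit versus plausible-set contrast can be sketched with a toy example. Everything below is hypothetical: the two-parameter "model" (a sensitivity S trading off against an aerosol-forcing scale factor a) and the numbers are chosen purely to illustrate the point, not taken from Nic's ESM.

```python
import numpy as np

# Toy "model": simulated warming depends on a sensitivity S and an
# aerosol-forcing scale factor a, which trade off against each other.
# All numbers here are illustrative.
def simulated_warming(S, a, F_ghg=2.8, F_aer=1.0, F_2x=3.7):
    return S * (F_ghg - a * F_aer) / F_2x

obs, sigma = 0.85, 0.1  # pretend observed warming and its 1-sigma uncertainty

S_grid = np.linspace(0.5, 6.0, 111)
a_grid = np.linspace(0.5, 1.5, 101)
SS, AA = np.meshgrid(S_grid, a_grid)
misfit = np.abs(simulated_warming(SS, AA) - obs)

# The single "best" parameter set (what an optimiser would return)...
best = np.unravel_index(misfit.argmin(), misfit.shape)
S_best = SS[best]

# ...versus the whole set of parameter pairs that plausibly match the
# observations (here: within 2 sigma).
plausible = misfit < 2 * sigma
S_plausible = SS[plausible]
print(f"best-fit S = {S_best:.2f}")
print(f"plausible S range: {S_plausible.min():.2f} to {S_plausible.max():.2f}")
```

    Because a compensates for S, the plausible range of S comes out far wider than any single best-fit value suggests, which is the over-fitting worry in a nutshell.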

  96. dikranmarsupial says:

    I should point out “is consistent with” is not “weasel wording”, it is normal scientific parlance.

  97. Joshua says:

    Dikran –

    Glad to see your 3:38. I was just about to write something similar, as one thing that I notice when reading WUWT is how frequently “skeptics” label appropriate quantification of uncertainty – of the sort they rarely provide (may, could, might,..) – as “weasel words.”

    And at least it’s better than “is not inconsistent with.”

  98. dikranmarsupial says:

    “and its land carbon sink sub-model’s characteristics are consistent with feedback parameter estimates in a recent paper.[9]”

    If it was me though, I would have pointed out that [9] is at the low end of modeled carbon cycle feedback estimates (something that Prof. Friedlingstein explicitly states in the abstract), in other words it is a bit of a cherry pick. Prof. Friedlingstein is a top carbon cycle researcher, and the paper is one that needs to be taken very seriously (especially as it is good news ;o), but it is a good idea to point out the questionable assumptions in your argument before your competitors do it for you!

  99. dikranmarsupial says:

    Joshua – yes, I have the same experience. Scientists tend to be rather moderate in their claims and give lots of caveats and statements of uncertainty, that generally are misinterpreted in a rhetorical discussion. There are good reasons for scientists to do this and this is one of the “impedance mismatches” involved when scientists interface with politicians and other groups where rhetoric is a valid means of making decisions.

  100. The Very Reverend Jebediah Hypotenuse says:


    A stopped clock is consistent with a well-functioning clock at least twice a day.

    But a man with two watches is never sure of the time.

  101. Joshua says:

    ==> “…where rhetoric is a valid means of making decisions.”

    And where acknowledging and quantifying uncertainty is spun into a sign of weakness. Consider Trump’s success in presenting opinions with complete certainty in the face of overwhelming contradicting evidence.

    One sure sign of “skepticism” (as opposed to skepticism) is when those who speak of the importance of acknowledging uncertainty then turn around and talk about appropriate quantification of uncertainty as “weasel words.”

    It must be tough to resist the tendency to respond to exploitation of appropriate uncertainty from “skeptics” by avoiding any acknowledgement of uncertainty.

  102. Willard says:

    > “is consistent with” is not “weasel wording”, it is normal scientific parlance.

    A wording can be [1] called weasel wording because of its usage, not because it uses a specific wording. In one context, a wording W can be perfectly fine, while in others it can be weasel wording [2]. That’s one of the beauties of using weasel wording [3].

    Consistency is first and foremost a concept that belongs to formal semantics. It refers to theories or models that are not contradictory: two theories are consistent if one can’t derive contradictions between them. Scientific theories are not (interpreted) deductive systems: to establish that one set of propositions is consistent with another will oftentimes be a judgement call. In Nic’s case, it’s quite clear that he’s referring to sets of numerical entities, not truth values.

    Nic’s usage of “consistent with” clauses helps him hide the choices he made while appearing to justify them.

    ***

    Normal scientific parlance is more than consistent with weasel wording. It is replete with it [4]:

    http://www.jstor.org/stable/378102

    As long as there’s a political component in science, there’s no reason to think that it will stop containing [5] weasel wording. This tradition may very well be [6] exacerbated by the downsizing of the editing process and the open science model.

    [1]: Nothing’s easier than to argue from possibility.
    [2]: Truisms can hide profound truths.
    [3]: One out of how many, and what do I mean by beauty?
    [4]: Perhaps, but how many is too many?
    [5]: Double negatives to the rescue!
    [6]: As you can see, I have no problem using weasel wording myself.

  103. The Very Reverend Jebediah Hypotenuse says:


    Nothing’s easier than to argue from possibility.

    Nothing? That’s not impossible, but it’s implausible. At least it’s not inconsistent.

    But since anything follows from a contradiction, it might be easier to argue from impossibility.

  104. dikranmarsupial says:

    Willard, as Nic was presenting some science, his usage of “consistent with” was certainly “consistent with” the normal scientific usage of the phrase. As I said the thing that really deserved noting is that the estimate it was consistent with was at the low end of such estimates (as I suspect you were pointing out).

    “Consistency is first and foremost a concept that belongs to formal semantics” it is probably better to try and work out what Nic actually meant by it, rather than to interpret it in a framework that you consider to be a more correct one, at least if your aim is to understand Nic’s intended meaning. Variants of Hanlon’s razor are also a useful guide (i.e. do not assume something is “weasel words” if there is a more charitable interpretation that remains plausible). As it happens, “is not contradictory” is just what “is consistent” usually means in a scientific or statistical context.

    Nic could have written “the values I chose do not contradict the findings of [9]”, but the “is consistent with” wording would be immediately recognisable by scientists and is the phrasing that most would use.

  105. RickA says:

    joshua and dikranmarsupial:

    In this case though it is the skeptics who are saying the future is uncertain and the consensus scientists who are pushing back.

    The uncertainty monster and all that.

    Nic’s observationally constrained ECS is within the IPCC range of 1.5 to 4.5 for ECS, and I feel quite a bit of push back against that, just because it is at the low end of the range.

    Laypersons like myself read all the back and forth and think the future climate seems pretty uncertain – period – and certainly uncertain in the range of 1.5 to 4.5.

    I am sure you are right about uncertainty language in papers – but you sure don’t see it much in the lead up to Paris.

    http://www.nytimes.com/interactive/projects/cp/climate/2015-paris-climate-talks/at-paris-climate-talks-an-abundance-of-options

    The headline is “World leaders have 12 days to agree on plans to slow global warming.”

    Etc.

    On the other hand – that is must one persons perception, and like all people I am biased.

  106. RickA says:

    “just one persons perception” – fumble fingers

  107. The D&A method scales modelled warming to HadCRUT4 warming, but only for grid cells with coverage, and determines a scaling factor which fits best.

    For long-term trends matching the grids is sufficient to avoid biases due to, for example, polar amplification. The coverage was important for the comparison of models with observations for the recent decade because more warming took place in the Arctic than expected by just polar amplification. The long-term trends (more accurately: the 3-dimensional pattern of the trends) are used in Detection & Attribution (D&A).

  108. In this case though it is the skeptics who are saying the future is uncertain and the consensus scientists who are pushing back.

    I don’t think this is true. I think, at best, “skeptics” are saying that the future is uncertain, let’s not do anything until we know more, grrrrrowth. Others, on the other hand, are saying that it’s uncertain, but we have some understanding of what could happen, so maybe we should consider doing something to avoid the possibility that the impacts could be severely damaging.

  109. izen says:

    @-RickA
    “So “natural” is changes in the sun, heliosphere, magnetic coupling, natural fires, volcanoes, orbital variations, clouds, ocean currents, etc.”

    Okay, so ‘natural’ is any change in the energy balance that is caused by something independent of human activity.
    Those are all measurable, and measured, parameters. It is straight-forward to determine if any have altered in a manner that could explain the observed warming.

    @-“Secondly, I see a change of almost 1C between the peak MWP and the bottom of the LIA – which is more than the warming since 1950 (but not much).”

    1C for local peaks and bottoms, but not global. The LIA was a minor feature of some S hemisphere regions and the MWP peak warmth was not globally synchronous.

    @-“Ok – how much of the warming last century and this century is from CO2 released due to the MWP? That ended about 700 years ago.”

    You can derive how much CO2 rises when Milankovitch-triggered, and CO2-amplified, warming occurs at the end of a glacial period. You get about an 80 ppm increase for a 6-8 deg C temperature rise. Throw in a bit of Henry’s law and I think you get about 10 ppm per deg C.
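    As a quick sanity check on those figures (the numbers below simply restate the ones quoted in the comment):

```python
# Back-of-envelope check: ~80 ppm CO2 rise per 6-8 deg C of
# glacial-to-interglacial warming, as quoted above.
co2_rise_ppm = 80.0
warming_low_C, warming_high_C = 6.0, 8.0

ppm_per_degC_max = co2_rise_ppm / warming_low_C   # weaker warming -> larger ratio
ppm_per_degC_min = co2_rise_ppm / warming_high_C
print(f"{ppm_per_degC_min:.0f} to {ppm_per_degC_max:.0f} ppm per deg C")

# So roughly 1 C of sustained warming would, on its own, outgas only
# ~10-13 ppm, a small fraction of the ~120 ppm rise since preindustrial.
```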

    But your question is… ill posed.
    Any potential CO2 rise initiated by the MWP warming will have been arrested and reversed by the LIA. Warming will only increase the airborne CO2 fraction where it is maintained until equilibrium is reached.

    This is an issue with the NL sensitivity paper discussed in this thread. How the carbon cycle will respond under climate conditions that are rare and extreme in the paleo record is uncertain. While this allows some optimistic projections which are – ahem – consistent with the modelling, it ignores the possibility that there is NO reduction in CO2 when/if CO2 emissions cease: the tectonic input and slow geological sequestration balance out at a new, higher concentration and temperature.

  110. dikranmarsupial says:

    RickA says: “In this case though it is the skeptics who are saying the future is uncertain and the consensus scientists who are pushing back.”

    I don’t think this is actually true. Skeptics argue that climate sensitivity is low and focus only on estimates at the low end, which implies certainty that climate sensitivity is low. The mainstream scientists’ position is that there is a much broader range of values for climate sensitivity that are consistent with (parts of) what we know about climate. So the mainstream scientists are arguing that the uncertainty is greater than admitted by the skeptics.

    Fig 1. The uncertainty monster.

  111. dikranmarsupial says:

    Was hoping I could include an image, never mind, here is the uncertainty monster. ;o)

    Scientists always talk about uncertainties, and the uncertainty monster is the favourite pet of the mitigation sceptics. The difference is that scientists use uncertainty to express the range of values that fits the evidence, while the mitigation sceptics use it to pretend nothing will happen, which is very weird because the outcome of every single political or personal decision is uncertain.

    When you start seeing evidence and arguments as an attempt to understand the world, and not as push-back and excuses, you have made the first step towards becoming a rational rather than an ideological person.

  113. BBD says:

    Dikranm

    Usually if you just post the url on its own, the image displays in the comment:

  114. dikranmarsupial says:

    BBD cheers. Fearsome chap, isn’t he?

  115. Willard says:

    > As it happens “is not contradictory” is just what “is consistent” usually means in a scientific or statistical context.

    Agreed. The main difference between the two expressions is the lack of double negatives in “is consistent.” The main difference between “is not contradictory” in logic and in empirical science is that in logic it refers to a deduction, whereas in empirical science it denotes something else. In Nic’s case, it is a selection: the subset of values he selected is “consistent with” the set from which he picked his values.

    That’s quite obvious, and carries very little information. To see it, remove all the consistency claims from the para:

    I have recently developed a simple ESM, with 2-box climate and ocean carbon sink and land carbon sink sub-models; the ocean carbon sink is unreferenced and the land carbon sink characteristics are inspired by [9]. I then selected parameters to simulate global temperature, heat uptake and carbon-cycle changes since preindustrial (from AR5 and elsewhere) that I consider to match recent observational estimates. To drive this ESM, I took forcings from the RCP dataset and adjusted its values (in some way to be specified later) to more recent AR5 estimates over 1765-2011. I then obtain an equilibrium climate sensitivity (ECS) of 1.7°C and a transient climate response (TCR) of about 1.35°C.

    As you can see, this para describes what Nic did. There are gaps in the specification, but I’m sure that Nic would complete them when needed. The main difference is that the argumentative tenor of his specification is gone, and that all his choices are made explicit. There is simply no reason to insist so much in the consistency of the selection when you can cite your sources.

    If that’s also something we can read in the scientific lichurchur, then so much the worse for the scientific lichurchur.

  116. paulski0 says:

    Victor

    For long-term trends matching the grids is sufficient to avoid biases

    Yes, the initial comparison shouldn’t be coverage biased (though it may be, due to the use of SST versus SAT). But the output of the D&A process is a scaling factor, which is then applied to the model global average for the anthropogenic historical run to find the amount of warming attributed. In this step, if they use the full global average, this should be larger than the HadCRUT4-coverage global average.
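    The coverage point can be illustrated with a toy calculation on synthetic data (the real D&A regressions are of course far more involved, and these numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic grid: model warming pattern with an amplified, poorly
# observed (Arctic-like) region occupying 10% of the cells.
n_cells = 1000
model_trend = np.full(n_cells, 1.0)
model_trend[:100] = 3.0                 # amplified region
covered = np.ones(n_cells, dtype=bool)
covered[:100] = False                   # no observational coverage there

true_scaling = 1.1                      # pretend obs warm 10% faster than model
obs_trend = true_scaling * model_trend + rng.normal(0.0, 0.1, n_cells)

# Step 1: regress obs on the model pattern only where coverage exists.
x, y = model_trend[covered], obs_trend[covered]
beta = (x @ y) / (x @ x)                # least-squares scaling factor

# Step 2: apply beta to the model's global mean. The full global mean
# includes the amplified region, so the attributed warming is larger
# than if the coverage-masked mean were used.
attributed_full = beta * model_trend.mean()
attributed_masked = beta * model_trend[covered].mean()
print(beta, attributed_full, attributed_masked)
```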

  117. bill shockley says:

    image urls with an image extension like .jpg or .gif will embed automatically. .png, .bmp, not sure. But definitely not if it doesn’t have an image extension.

  118. Willard says:

    Here’s Judy’s monster:

  119. dikranmarsupial says:

    Willard ” In Nic’s case, it is a selection: the subset of values he selected is “consistent with” the set in which he picked his values.”

    You are missing an important point here. The set of values from which Nic picked his was a superset of those considered plausible by [9]; he could have picked a set that was outside the range considered plausible by [9], so the statement does actually convey information. The value of saying that his values are consistent with those of [9] is to convey that the values are considered plausible by an expert on the carbon cycle.

    “As you can see, this para describes what Nic did.” no, it describes your interpretation of what Nic did.

    ” There is simply no reason to insist so much in the consistency of the selection when you can cite your sources.”

    You don’t appear to understand the scientific usage of the phrase, Nic wasn’t insisting anything, he was just showing that there does exist a carbon cycle specialist that would agree the value used was plausible. That is in no way overstating his true position in that respect. He could only reference his source in this case if he used a point estimate explicitly provided by [9].

    “If that’s also something we can read in the scientific lichurchur, then so much the worse for the scientific lichurchur.”

    Perhaps you ought to become more familiar with scientific/statistical terminology/usage if you are going to make a point of analyzing scientific exchanges? BTW “consistent” has an everyday usage which is “consistent with” its scientific usage.

  120. dikranmarsupial says:

    To clarify a point in my last post:

    ”There is simply no reason to insist so much in the consistency of the selection when you can cite your sources.”

    Had Nic written “I used a value lambda = 2.178 [9]”, that would imply that [9] specifically advocated that particular value, and hence would be misleading unless it actually did so explicitly. Saying that the value used was “consistent with” [9] is actually a much weaker statement, implying a weaker endorsement. Nic could have written “the value used was within the range considered plausible by [9]”, but that is equivalent to “is consistent with [9]” in common scientific parlance and rather more verbose.

  121. izen says:

    @-dikranmarsupial
    “… it is probably better to try and work out what Nic actually meant by it, rather than to interpret it in a framework that you consider to be a more correct one, at least if you aim is to understand the Nic’s intended meaning.”

    The desire to understand the intentions and meaning behind the actions of another person is a very strong human motivation. It is probably impossible and inappropriate when applied to a scientific paper on this contentious subject.

    Whatever the intention behind the word use, as you correctly identify it is limited to “the values I chose do not contradict the findings of [9]”. Or at least the two statements are consistent with each other.

    It is the lower limit, the minimum requirement that justifies bothering to do the research at all. In the absence of any further argument in favour of that choice, whatever the intention of the writer, it omits information about the reason for the selection made.

    Willard’s question is apposite as usual, – are there any other choices that could be made that would also be ‘consistent’ with other findings that could give a LOWER climate sensitivity, rate of warming and concentration reductions ?

    If not, then the lack of better justifications for the models and methods used may raise questions of intentionality.

  122. anoilman says:

    dikranmarsupial: Pictures don’t always work… remember some people will believe anything;

    Click to access jdm15923a.pdf

  123. dikranmarsupial says:

    I wondered where they all went, does that mean the Vogons are on their way?

  124. dikranmarsupial says:

    izen, as I said, the problem isn’t the “is consistent with”, it is not mentioning that the thing it is consistent with is at the lower end of the spectrum (as explicitly acknowledged in the abstract). Hanlon’s razor is a good idea, try to find the most charitable explanation of what is actually written that remains plausible and assume that is what was actually meant. If nothing else it guards against making incorrect claims of nefarious intent based on circumstantial evidence, so it is strategically a good idea as well as being the right thing to do anyway.

  125. dikranmarsupial says:

    anoilman – LOL, I’ll have to read that paper! (I enjoyed Frankfurt’s little book)

  126. The Very Reverend Jebediah Hypotenuse says:


    Any evidence-based argument that is more inclined to admit one type of evidence or argument rather than another tends to be biased. Parallel evidence-based analysis (sic) of competing hypotheses provides a framework whereby scientists with a plurality of viewpoints participate in an assessment. In a Bayesian analysis with multiple lines of evidence, it is conceivable that there are multiple lines of evidence that produce a high confidence level for each of two opposing arguments, which is referred to as the ambiguity of competing certainties. If uncertainty and ignorance are acknowledged adequately, then the competing certainties disappear. Disagreement then becomes the basis for focusing research in a certain area, and so moves the science forward.

    (Curry and Webster, BAMS, December 2011)

    Move the science forward. Admit all types of evidence and arguments to avoid bias. Make competing certainties disappear. Adequately acknowledge the monster.


    The monster is too big to hide, exorcise, or simplify.

    (Curry and Webster, BAMS, December 2011)

    Don’t panic.
    Just don your peril sensitive sunglasses.

  127. BBD says:

    Required reading for this sort of thing, DM.

  128. verytallguy says:

    RickA,

    It’s more simple than you think.

    The world has warmed.

    If absent any human contribution, the world would have warmed anyway, then attribution is less than 100% human.

    If absent any human contribution, the world would have cooled, then attribution is greater than 100% human.

    The actual assessment is that natural effects are a very slight cooling over the period, hence attribution is 100-110% human.

    See http://www.realclimate.org/index.php/archives/2013/10/the-ipcc-ar5-attribution-statement/ for more

  129. verytallguy says:

    html, on the other hand, turns out to be more difficult than I thought. Memo to self to use prose rather than symbols in future.

  130. Willard says:

    > You don’t appear to understand the scientific usage of the phrase […]

    In return, you don’t seem to understand how specifications are written in technical documentation, how citations work in scholarship, how argumentative function can’t be reduced to usage, that appealing to common usage is in this case fallacious, and that your ad hominems will be held against you. However, please continue, since that would be quite amusing.

    Had Nic taken something outside [9], his appeal to the authority of “a recent paper” would have been quite moot. Besides, all his consistency remarks are meta, and belong to a discussion.

    Had he created a para where he discussed his results, the first question he’d have to ask is: are his overall results “consistent with” other estimates of TCR and ECS? There’s an issue of compositionality here: that every bit of his analysis is consistent with the lichurchur may not imply his results are. A second question needs to be: in what way are these results more “realistic”? As far as I can see, Nic offered no explicit argument for his main claim. A third could very well be: are there other properties than consistency that such an analysis would need to meet? In other words, consistency is cheap and only suffices for lukewarm marketing efforts.

    That Nic’s sleazy rhetoric could be defended is beyond me.

    One only has to read lichurchur from decades ago to see that “but that’s how scientists write” is moot at best. While it may be easy for empirical scientists to snob social scientists’ writing, please be assured that the feeling can easily be reciprocated. Such a slug fest would not be “unprecedented,” if you know what I mean.

  131. dikranmarsupial says:

    What ad-hominems? Saying that you appear not to understand a particular piece of scientific terminology is in no way an ad-hominem. There are plenty of items of terminology that I don’t understand [many of which you use on a regular basis] and I wouldn’t regard someone telling me that as an ad-hominem, just as someone pointing out that there is a specific terminological issue that I don’t understand. Perhaps I don’t understand the meaning of ad-hominem, that is possible.

  132. dikranmarsupial says:

    ” While it may be easy for empirical scientists to snob social scientists’ writing,”

    I have no idea where that comes from, empirical scientists and statisticians having adopted a particular phrase (generally indicating a very low level of endorsement/agreement) in no way suggests that they are better in some way than social scientists.

  133. Joshua says:

    I think that referencing “weasels” suggests judgement of intent (a deliberate effort to avoid accountability) which in many cases is unknowable.

    That is, of course, different than arguing that there was an insufficient quantification of uncertainty (which saying “is consistent with” is often…well…consistent with).

    In this particular case, while I can recognize the frame for asserting that there was a sub-optimal treatment of uncertainty, I don’t have the skillz to judge for myself. I certainly recognize a pattern in the past where Nic’s treatment of uncertainty was sub-optimal, but generalizing to specifics from general patterns is problematic if I can’t evaluate relevant details.

    But another pattern I see is a tendency to (selectively) demonize qualification on uncertainty (may, could, might, if this trend continues, etc.), as something a “weasel” does, and I think that is an unfortunate by-product of motivated reasoning of the sort so often manifest at WUWT. Not to say that was the case here, but I’m not sure that there is any value added by invoking weasels.

  134. BBD says:

    VV

    No confidence in J&D.

  135. Eli Rabett says:

    OK Josh, from now on Eli will use “stoating”

  136. Willard says:

    > Saying that you appear not to understand a particular piece of scientific terminology is in no way an ad-hominem.

    Of course it is. Understanding is mind-related, and this is basic mind probing. Unless you can argue for some kind of Cartesian duality between the person and the mind. Good luck with that.

    See? No need to say anything about your understanding of argumentation theory. Countering with an argument just works better. Your understanding is none of my concern, and my communication objective lies elsewhere.

    ***

    > I have no idea where that comes from, empirical scientists and statisticians having adopted a particular phrase (generally indicating a very low level of endorsement/agreement) in no way suggests that they are better in some way than social scientists.

    This comes from the underlying appeal to your status as a scientist and a statistician, which is supposed to hint at your authority in matters of scientific parlance. This also comes from the usual jabs empirical scientists throw the social scientists’ way. So the point is that this discussion smells of corporatist defense, i.e. “who are you to tell scientists how they ought to communicate.”

    My starting point is that most of lichurchur is crap. Therefore, appealing to lichurchur is to me weak sauce. Something will have to change, but that’s an issue that goes beyond Nic’s weasel wording.

    In Nic’s case, “is consistent with” does not indicate a very low level of endorsement or agreement. It merely indicates that he took some “numbers” in some papers, that his work rests on some part of the lichurchur, and that his model is backed up by published authorities. That’s all the “consistent with” does in that case. That Nic’s usage of “consistent with” could be rewritten using familiar prepositions provides a big tell.

    Another way to see that there’s a smokescreen is to rewrite the adjective into a property: “is consistent with” characterizes consistency. What would be the property (or the formal relationship) Nic is trying to describe?

    ***

    In any case, this discussion is of little relevance (unless one is interested in ClimateBall, it shan’t go without saying) if someone could answer this question: do Nic’s results agree with those already established? I have a vague feeling that Nic’s TCR and ECS are quite low, perhaps the lowest on the lukewarm market. I could scratch my own itch, but sensitivity matters bore me, and I find them of infinitesimal importance in the grand scheme of things. The more importance it gets, the more lukewarm klout there is.

    So that’s where I usually tag you, guys. Do some science and please let us stop all this fuss about philosophy of science and language.

    Is there a lowest TCR and ECS on the market?

  137. anoilman says:

    Victor: that doesn’t look like a good report. There’s no ‘US grid’, not like the one they are talking about, and it’s not cheap, by a long shot, and not efficient. Large-scale sharing of renewables does level the load and improve up-time, but it’s not really feasible to distribute it over a large enough area.

    I recommend you read through the articles on renewables at the SOD.
    http://scienceofdoom.com/

    Moving energy (power lines) is expensive and has serious limits on its efficiency. This is why we can’t just put all of our solar panels in a desert and call it a day. The transportation issue results in ‘islands’ of renewables which have deficiencies that can really only be supported with batteries, or an external grid. It is likely that neighboring grids will have similar brown-outs in productivity, so… not a great solution.

    In Germany’s case that ‘external grid’ is coal from other countries.

  138. Joshua says:

    Eli –

    Much better.

  139. izen says:

    I have a question about the graph in the post for anyone who knows an answer

    which I am sure could be resolved if I RTFP.
    But the vertical scale is labeled –
    “Temperature anomaly relative to 1861 – 1880”
    The Horizontal pink line is labeled –
    “Observed 2000-09 temperature anomaly per HadCRUT4v4”

    The line appears to be at ~ the 0.75 level.

    Is this HadCRUT anomaly adjusted to the 1861-1880 baseline, or is it the HadCRUT anomaly relative to their 1961-1990 baseline?

  140. Ethan Allen says:

    I’m with BBD on that one, and MJ is in a CEE department. It’s 6.3 monies time, a full-on full-scale prototype demonstration + a NAS/NAE study before that (meaning some real engineers get involved, not just some academic desk jockey engineers).

    Pumped hydro? Mile high dam in the Grand Canyon, fill it with the Pacific Ocean

    Roadmaps for 139 Countries and the 50 United States to Transition to 100% Clean, Renewable Wind, Water, and Solar (WWS) Power for all Purposes by 2050 and 80% by 2030

    Click to access 15-11-19-HouseEEC-MZJTestimony.pdf

    It’s nice that MJ sent it to the “Democratic Forum on Climate Change” but OMFG the USA Congress is MORE THAN HALF FULL OF DENIERS. Talk about a non sequitur.

    If MJ had instead sent that letter to the DOD they would have gone … Who is this utter nutter?

    Not going to say that the above can’t be done. I can push some very basic numbers on that buildout strategy though. But I’m kind of guessing it would make the WWII buildout look like a pea shooter fight by comparison.

    2030 – 2015 = 15 years @ 80% of world energy needs at that point in time. Say 20 TW in 2030 (low estimate?); that’s like a GW of brand new nameplate renewable energy (I’ve added in a factor of 10 for produced) added per HOUR (on average). You might be able to ramp to that buildout capacity in, oh, say 15 years, so 2X for a linear buildout (a GW per 30 minutes at the end) and 4X for a quadratic buildout (a GW per 15 minutes at the end).
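    For what it’s worth, the average rate implied by those assumptions (20 TW of demand, a factor of 10 from produced power to nameplate, a 15-year window; the comment’s own numbers, not vetted figures) works out as follows:

```python
# Rough check of the buildout arithmetic above, using the comment's
# own illustrative numbers.
average_demand_gw = 20_000        # "20 TW in 2030", expressed in GW
nameplate_factor = 10             # assumed nameplate-to-produced ratio
buildout_hours = 15 * 365 * 24    # 15-year buildout window, in hours

gw_nameplate_per_hour = average_demand_gw * nameplate_factor / buildout_hours
print(f"~{gw_nameplate_per_hour:.2f} GW of nameplate capacity per hour, on average")
```

    That is roughly 1.5 GW of nameplate per hour on average, before applying the end-of-ramp multipliers mentioned above.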

    And you build all that 100% renewable stuff with guess what? 100% renewables! You didn’t hear that one from me though.

    We’ll also assume no future conflicts at all so that we can divert 100% of global military spending towards this effort. I’m s-o-o-o-o-o-o-o-o-o-o-o-o sure that will happen (all nations/peoples stop fighting, like right now).

    You do get to enslave all shipbuilders and all rail builders and all road builders, the entire energy infrastructure sectors and the entire automobile infrastructure, while in the meantime, you also get to prematurely shut down all FF production.

    You might as well think about building a Great Pyramid of Cholula per day.

    Heck, it would take 15 years of head scratching and logistics to even get started.

    Impossible? No. Mind boggling? YES!

  141. JCH says:

    Izen – apparently you lack Judith Curry eyeballs.

    I’m not sure how good my eyeball estimates are, and you can pick other start/end dates. But no matter what, I am coming up with natural internal variability accounting for significantly MORE than half of the observed warming.

    Like I said, my mind is blown. I have long argued that the pause was associated with the climate shift in the Pacific Ocean circulation, characterized by the change to the cool phase of the PDO…

    Although this was not a specific conclusion of the paper (they focused on the period 2002-2012), the conclusion jumps out from their Fig 1 (and my eyeball analysis). – Professor Curry

    The author of the paper came up with as little as 12.5%.

  142. Joshua says:

    JCH –

    Gotta linky?

  143. > I’m not sure that there is any value added by invoking weasels.

    It’s common parlance among editors (as opposed to auditors) and its meaning does not imply any intention to deceive. It only indicates a bad writing habit. I could settle for “wooden tongue,” but I’m not sure it carries the same connotation as langue de bois, which would be a proper translation candidate for what I have in mind.

    There are three problems with Nic’s usage of his “consistent” justification. First, the passive voice “deemphasizes” (H/T Junior) the fact that he, Nic Lewis, made selections. Second, the various concepts he used restrict the information required: being within the various ranges of his references does not justify why he, Nic Lewis, picked up the values he picked. (In fact, readers don’t even know what specific choices he made.) Third, the vocabulary hints at a logical relationship between his sources and his analysis, thereby minimizing that he, Nic Lewis, selected the values for his analysis.

    ***

    Here’s how I see this ClimateBall ™ episode. Nic picked up some numbers from the lichurchur, plugged them into his simple model, and got low sensitivity numbers. The only indirect justifications offered in his post are that they are consistent with the lichurchur and that his choices are observation-based. This leads him to conclude (in the lede) that “the mean carbon cycle behaviour of CMIP5 ESMs and EMICs may be quite unrealistic.”

    There’s a gap between Nic’s conclusion and Nic’s analysis that Nic’s weasel words can’t patch.

    I think my argument against Nic’s weasel words shows that if there’s something unrealistic in this episode, it’s his own implicit argument.

  144. Jim Eager says:

    Izen replied to RickA: “Any potential CO2 rise initiated by the MWP warming will have been arrested and reversed by the LIA. Warming will only increase the airborne CO2 fraction where it is maintained until equilibrium is reached.”

    I was going to write much the same thing in reply to Rick, seeing as he addressed that to me, but then he wrote that he appreciated all the corrections to his understanding, so I backed off. In any case you’ve saved me the trouble.

  145. JCH says:

    “Any potential CO2 rise initiated by the MWP warming will have been arrested and reversed by the LIA. Warming will only increase the airborne CO2 fraction where it is maintained until equilibrium is reached.”

    That deserves to be published at least three times.

  146. RickA says:

    Jim and Izen:

    So the warming generated during the MWP cannot hide in the ocean and pop out after the LIA.

    That is a relief.

  147. Leto says:

    Regarding being “consistent with” the literature… If entities in the literature are associated with an uncertainty range (typically 95% confidence intervals [95%CI], or similar), then choosing a value “consistent with” the literature, to me, implies that the values are from within that range. As Dikran says, it is a very weak claim.

    But several “consistencies” combined do not always produce a pooled “consistency”. This point will be obvious to many of you, but it is worth making explicit: if an entity AB (A times B, where A and B are independent) is derived from the literature about A and the literature about B, then low-balling both A and B can produce a product that is no longer within any plausible 95%CI. Claiming that a doubly-low-balled AB is “consistent with” the literature is therefore deceptive, not simply weaselly.

    (For instance, there is a 1 in 20 chance that a 20-sided die will roll a 1, but the probability of two independent die rolls both being one is 1 in 400. Each die-roll is on the margin of what might be called “consistent with” expectations, but the paired result is merely physically possible, not plausible.)
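    Leto’s die example is easy to confirm by simulation; the numbers below are just the 20-sided-die figures from the parenthetical above:

```python
import random

# Monte Carlo version of the 20-sided die example: each roll has a 1-in-20
# chance of landing on the low margin, but two independent low rolls
# together occur only about 1 time in 400.
random.seed(1)
trials = 200_000
both_low = sum(
    1 for _ in range(trials)
    if random.randint(1, 20) == 1 and random.randint(1, 20) == 1
)
print(f"frequency of two low rolls: {both_low / trials:.4f}")  # ~0.0025
```

    Each marginal choice looks “consistent with” its own range; the joint choice is out in the tail.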

    Nic Lewis is obliged to document his assumptions, and to present the entire parameter space that his methods explore, not just the corner of the space that can be reached from serial low-balling. From what has been presented here, it does not appear that he has done so.

  148. anoilman, thanks for providing some arguments why this paper may not be good. The costs of the grid should naturally be counted. It is strange not to count them in a paper claiming to have found a cheap solution, which means that they should have looked at the costs. Grids are currently a small part of the costs, but they should be counted, whether in the future for a stronger grid for renewables with longer routes, or now so that France can dump electricity from its inflexible nuclear power plants into the rest of Europe at night. France sells to a large part of Europe, and Desert Tech wants to send electricity from northern Africa to Germany, so it seems possible at somewhat normal costs.

    What I thought was a nice argument, one I had at least not seen before, is to also consider the power needed for heating. This can be produced and stored whenever there is capacity, even in summer, and then extracted (for example from the ground) when needed. A large part of our energy goes into heating, so that is potentially a large buffer.

    Germany has enough old-fashioned capacity. While the same scaremongering that the lights will go out is also tried here by the industry, Germany is a large exporter of electricity.

  149. JCH says:

    So the warming generated during the MWP cannot hide in the ocean and pop out after the LIA.

    There was missing heat during the MWP?

  150. Willard, suppose article A estimates value V to be 10 with a standard deviation of 5, and Nic Lewis writes that he took the value from article A. Then I would expect him to take the value 10.

    If Nic Lewis instead wrote that he took a value that is consistent with article A, he could also have taken 0. I do not know whether he took that liberty; I hope not, as a reader I would feel conned, but in principle his formulation would be okay.

  151. So the warming generated during the MWP cannot hide in the ocean and pop out after the LIA.

    The oceans are warming up. The ocean heat content is increasing. You do not have to worry about heat from the MWP warming the air, then the ocean would become colder, which is not observed.

  152. Kevin O'Neill says:

    If we combine Willard’s exegesis with what Leto wrote we get close to the actual bait & switch that Nic Lewis has proffered.

    A robust model would show little sensitivity to the choices Nic made. A scientist interested in learning and sharing the knowledge gained would include results using alternate choices to show that the model is (or is not) robust to these (oftentimes arbitrary) decisions.

    Nic’s ‘weasel words’ divert attention from the fact these choices may in fact be all that’s needed to explain the results. Or, if one has a more suspicious nature, that these choices were made specifically to bias the results.

  153. izen says:

    @-JCH
    “Izen – apparently you lack Judith Curry eyeballs.”

    Sounds like an Eastern delicacy…
    But the question was serious; the multiple baselines on the graph have bothered me since I saw it. I am still not sure they all match up.

  154. izen says:

    The other thing about the graph that is amazing to these uncurried eyes is the indication that we could emit 1600 GtC, more than tripling our cumulative carbon emissions to the atmosphere so far, and the temperature stays below 2 deg C.

  155. anoilman says:

    Victor Venema: The next leap ($$$ expensive) forward for Germany also involves several major (unsightly) power grid extensions.

  156. Steven Mosher says:

    Thanks. This was really clear.

  157. dikranmarsupial says:

    Google scholar reports “About 3,840,000 results” for the phrase “is consistent with”, so how do we differentiate between those examples that are “weasel words” and those that are just appropriate use of accepted scientific parlance?

    For example, is “The structure is consistent with a thermal spectrum at 31, 53, and 90 GHz as expected for cosmic microwave background anisotropy” weasel words, or scientific parlance, or perhaps even an element of understatement?

    To be clear, I am not saying that Lewis did not use “weasel words” in what he wrote, just that “is consistent with” wasn’t an example of it. If you mix reasonable and fair criticisms with criticisms that are unreasonable and unfair, you weaken your case. Interpreting scientific parlance as weasel words is something that happens too much on skeptic blogs, e.g. to dismiss findings that don’t suit their position; let’s not go there, we don’t have to.

  158. paulski0 says:

    ATTP

    If Nic’s model is constrained to match HadCRUT4, then he will still potentially be under-estimating the attributable anthropogenic warming?

    Yeah, anything which treats the HadCRUT4 global average as a true global average is probably biased. See Ed Hawkins’s masking of CMIP5 models by HadCRUT4 coverage.

    Also Cowtan et al. 2015.

  159. Thanks for your post about this graph of Nic Lewis.

    If I understand it correctly the airborne fraction in the model of Lewis is roughly constant. So that’s now about 50%. Looking at the carbon emission data of the RCP8.5 line in the graph of Lewis the 2010 decade-point is about 450 Gt C and the 2100 decade point about 2250 Gt C. That’s a difference of 1800 Gt C and with an airborne fraction of 50%, roughly half of it, 900 Gt, would have to be absorbed by the oceans and the land. The ocean sink will get lower in the future, but using current figures the two sinks are roughly equal in size. Therefore about 50% of that 900 Gt C will be absorbed by the oceans and the other 50% will be converted into plants and trees. That’s about 450 Gt of carbon.

    Looking at IPCC figure 6.1 the current vegetation holds 550 Gt C (450-650). So in the RCP8.5 model representation of Lewis the vegetation will have to grow by about 80% (70-100%). That’s 80% more plants and trees. In a mathematical world trees can probably grow into heaven; I don’t think they will be able to do that in the real world. Or am I missing something?
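    Jos’s chain of arithmetic, spelled out with the same figures (values read off Lewis’s RCP8.5 curve and IPCC AR5 Figure 6.1; all approximate, and the even ocean/land split is today’s, not a projection):

```python
# Jos's back-of-envelope: a constant airborne fraction under RCP8.5
# implies a very large increase in the land carbon store.
emitted_2010, emitted_2100 = 450, 2250   # cumulative Gt C on the RCP8.5 line
airborne_fraction = 0.5                  # roughly constant in Lewis's model
ocean_share_of_sink = 0.5                # ocean and land sinks roughly equal today

extra = emitted_2100 - emitted_2010                # 1800 Gt C emitted this century
absorbed = extra * (1 - airborne_fraction)         # 900 Gt C taken up by the sinks
to_land = absorbed * (1 - ocean_share_of_sink)     # 450 Gt C into vegetation/soils

vegetation_now = 550                               # Gt C (AR5 range 450-650)
growth = to_land / vegetation_now
print(f"implied vegetation growth: {growth:.0%}")  # roughly the 80% Jos describes
```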

  160. JCH says:

    The deserts are greening. Haven’t you heard? CO2 is plant food…

    or, maybe not:

  161. Jos,
    That’s something I’ve also wondered. We can potentially emit enough to increase the mass of the biosphere quite substantially. My understanding is that it is unlikely that it can continue to take up the same fraction as it does now if we do continue to follow a high emission pathway due to nutrient limits.

  162. ATTP,
    Indeed, and in my opinion this constant airborne fraction is not very plausible.
    Some Dutch so-called ‘skeptics’ like this graph, because it shows little warming in the future. However, there is an element that’s not visible at all in the graph of Lewis: the 450 Gt of carbon that will have to be absorbed by the oceans leads to an unprecedented rate of ocean acidification with very low carbonate-ion saturation levels, see for instance:
    http://www.sciencemag.org/content/335/6072/1058.short

  163. Pingback: Bayesian estimates of climate sensitivity | …and Then There's Physics

  164. Willard says:

    > Google scholar reports “About 3,840,000 results” for the phrase “is consistent with”, so how do we differentiate between those examples that are “weasel words” and those that are just appropriate use of accepted scientific parlance?

    By looking at specific results. The only way to tell that Nic did not weasel his way out of his specification is to read the goddam paragraph and say whether it conveys enough information. I don’t think it does. In fact, I think it merges two different functions together.

    There are more than 126k hits for “growing body of evidence” – does it mean the weaselest weasel of all is not a weasel after all?

    ***

    > If you mix reasonable and fair criticisms with criticisms that are unreasonable and unfair, you weaken your case.

    The “you” and “your” work as weasel words here.

    ***

    > I am not saying that Lewis did not use “weasel words” in what he wrote [.]

    Of course there are others. Perhaps I may interest you with this one:

    Observationally-based evidence is thin on the ground […]

    How sensitive is global temperature to cumulative CO2 emissions?

    Considering that the word “observation” is the sesame-street-like word of the day for our beloved Bishop and RickA, it seems to me that this provides some indication as to why researchers usually don’t restrict themselves to what Nic calls “observation-based evidence.”

  165. KR says:

    RickA – I would have to point out that you have things _exactly backwards_ with “In this case though it is the skeptics who are saying the future is uncertain and the consensus scientists who are pushing back.”

    The range of 1.5-4.5C sensitivity is the range of uncertainty, a range that has a probability distribution function based upon the evidence. That PDF indicates that the most likely value for ECS is around 3C, with the afore-mentioned range. And hence appropriate caveats should be applied when discussing new results, based on the entire body of evidence.

    The ‘skeptics’, on the other hand, seem utterly certain that ECS is in the low end of the plausible range, and are quite certain that the high end can be discarded. I consider that more wishful thinking than anything else.

  166. Gator says:

    Free the code! Nic needs to post exactly what he used to make the graphs. Simple.
    Isn’t this what the auditors demand??
    Enormous amounts of internet electrons being wasted could be prevented if only Nic Lewis would simply post what he actually did instead of just saying “consistent with”.

  167. Leto says:

    Gator, he should not only post the code, but provide us with a simple function:

    public float LewisSensitivity(float assumption1, float assumption2, float assumption3)

    … so we can plot the whole parameter space for ourselves and see where his single headline result fits in the broader range of what is plausible.
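    A sketch of the sweep Leto describes. To be clear, `toy_tcre` below is entirely made up for illustration (it is not Nic Lewis’s model or anyone else’s); the point is only the shape of the exercise, and how far the all-low corner sits from the rest of the parameter space:

```python
from itertools import product

def toy_tcre(tcr, feedback, airborne_fraction):
    # Illustrative only: scale a TCR-like number by an airborne fraction
    # and a carbon-cycle feedback multiplier.
    return tcr * airborne_fraction * (1 + feedback)

tcrs = [1.35, 1.8, 2.5]          # low / central / high TCR assumptions
feedbacks = [0.0, 0.15, 0.3]     # weak to strong carbon-cycle feedback
fractions = [0.45, 0.55, 0.65]   # airborne fraction assumptions

# Evaluate the whole grid, not just one corner of it.
results = sorted(toy_tcre(*combo) for combo in product(tcrs, feedbacks, fractions))
print(f"all-low corner: {results[0]:.2f}")
print(f"full spread: {results[0]:.2f} to {results[-1]:.2f}")
```

    Serial low-balling reaches 0.61 here, well below the bulk of the 27 combinations; a single headline number tells you nothing about where it sits in that spread.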

  168. That function would be great, Leto. I predict it would echo Nic’s own earlier description:

    If one builds a model with a low ECS, and moderate climate-cycle feedbacks, warming peaks immediately if emissions cease and declines quite rapidly thereafter.

    Emission reductions

    You heard it first at AT’s, vintage September 30, 2015 at 8:13 pm.

  169. Joshua says:

    willard –

    ==> “It’s common parlance among editors (as opposed to auditors) and its meaning does not imply any intention to deceive.

    Perhaps among editors (I wouldn’t know), but in discussions among non-editors I’d say it almost always connotes an intent to deceive. So without supporting evidence about intent, I’m still hard-pressed to see any value added in the current context. Why not simply say something on the order of….Nic has not sufficiently supported his argument, explicated his position clearly, provided enough evidence to support his conclusions, sufficiently quantified uncertainty, etc.?

  170. Joshua says:

    ==> “Free the code! Nic needs to post exactly what he used to make the graphs. ……he should not only post the code, but provide us with a simple function:…public float LewisSensitivity(float assumption1, float assumption2, float assumption3)… so we can plot the whole parameter space for ourselves and see where his single headline result fits in the broader range of what is plausible.”

    I’m sure that Steven Mosher, Willis, Stevie-Mac, etc., have all sent him numerous emails to that effect….and that the data are forthcoming…but Nic prolly just hasn’t had time to get back yet in response.

  171. BBD says:

    If one builds a model with a low ECS, and moderate climate-cycle feedbacks, warming peaks immediately if emissions cease and declines quite rapidly thereafter.

  172. BBD says:

    SP!!

    😉

    It’s Friday night. What can I say?

  173. Joshua says:

    RickA –

    ==> “In this case though it is the skeptics who are saying the future is uncertain and the consensus scientists who are pushing back.”

    Which “consensus scientists” have said that the future isn’t uncertain? IMO, things like error bars and confidence intervals are about accounting for uncertainty. I can’t evaluate the science, but I can see the following: (1) “skeptics” routinely misrepresent the certainty expressed by “consensus scientists,” (2) “skeptics” routinely exploit the uncertainties expressed by “consensus scientists” – either by holding expressed uncertainties hostage against unrealistic expectations of total certainty or dismissing expressed uncertainty by calling it “weasily” or other terms that take uncertainty out of context.

    One of the primary reasons that I have little trust in the analyses I see being presented by “skeptics” is inconsistency in approach to uncertainty, along with ubiquitous statements that seem obviously over-certain. It’s all over what they write in the blogosphere, pretty much from top to bottom – from what is written by Judith, Stevie-Mac, Willis, Nic, whose scientific analyses are given great weight in the “skeptic” community, to less known figures like Andy West or the tons o’ “guest” posters at WUWT, to the typical comments posted in “skept-o-sphere” comment threads.

  174. > [B]ut in discussions among non-editors I’d say it almost always connotes an intent to deceive.

    Sure you would. There are more than 25k citations using the expression “weasel words” in G Scholar. Research and report. Meanwhile, I will note this tu quoque:

    (a) Contrarians use words W.
    (b) You use the words W.
    (c) You should use other words than W.

    The conclusion is usually implicit, and its formulation is negotiable. Take this example:

    (a) Contrarians use “non sequitur,”
    (b) You use “non sequitur.”
    (c) You should use other words than “non sequitur.”

    One problem with this kind of argument is that it conflates words with their usages. The problem is not the word non sequitur (say), it’s the fact that it’s being used incorrectly. Most of the time I asked a contrarian what doesn’t follow from what, I met a blank stare and then got a proof by assertion with more squirrel chasing (i.e. a strawman). Another problem is that it’s ad hominem.

    ***

    > Why not simply say something on the order of…

    Because the suggestion puts the cart before the horse and bypasses what I wish to convey. What I wanted to show is that weasel wording usually indicates something regarding the rhetorical structure of a text. In my opinion, Nic’s weasel wording clearly indicates (a) argumentativeness and (b) underspecification. We need way more editors than we need auditors. Looking for formulations written in wooden tongue is one of the tricks editors need to hone. Hedging, clichés, doublespeak, and other such linguistic phenomena reveal inferences that can’t be observed by looking at code and datasets.

    Two important caveats editors should bear in mind are that: (a) indicators are seldom decisive; (b) indicators are almost always present. In other words, Nic certainly could have used the very same words without being overly argumentative and without underspecifying his method. It just so happens that he did not.

    ***

    Wooden tongue is here to stay. It serves a function. It saves time. It wastes money. What’s not to like? Take this latest fall by the Auditor:

    However, it still seems like one of those too typical situations where the less alarming explanation is presented in specialist literature, but left unmentioned or unconfronted when retreat of West Antarctic glaciers is presented as a cause of alarm.

    Antarctic Ice Mass Controversies

    It would be faster to count non-weasel words. The only thing that gets clearly conveyed is the CAGW meme. Another contrarian outlet manager who can say: mission accomplished.

  175. Joshua says:

    willard –

    I’m afraid that most of that is too complicated for me to follow. For example, I don’t understand how you got a tu quoque out of what I said…and this:

    (a) Contrarians use words W.
    (b) You use the words W.
    (c) You should use other words than W.

    Doesn’t particularly seem to me like what I was trying to say. I’m not saying that you shouldn’t do something because that’s what “skeptics” do (in fact, I’m not even saying what you should or should not do). But maybe I’m missing something about your response.

    ==> “One problem with this kind of argument is that it conflates words with their usages.

    Well, I think that perhaps you’ve read me say, more than once before, that I’m a descriptivist. To me, saying that someone uses “weasel words” connotes a suggestion that they are intending to deceive. That certainly is, IMO, what is meant by the term when I read it at places like WUWT – and it is often used in the context of reaction to “realist” scientists quantifying understanding. Now that may well not be what you’re trying to say, and I might be some kind of an outlier in picking up that connotation – but I doubt it. Google searches or the parlance of editors does not seem to me to be particularly meaningful in that regard.

    ==> “Because the suggestions puts the cart before the horse and bypasses what I wish to convey.

    Well, OK. They were only suggestions. But in addition to being a descriptivist, I’m also a believer in writer-responsibility for the exchange of meaning. When you use “weasel word,” I hear “intent to deceive” (an extension from the common connotation of “weasel” as a modifier).
    Again, I might be an outlier, but I kind of doubt it. But then you have a choice, IMO. You can determine that my perception of connotation is wrong, and therefore no reason to word your comments differently, or you can say that my perception of connotation isn’t what you intended, and then clarify. I would imagine that the choice between those two options (are there others?) would hinge on whether you think my perception is that of an outlier.

    ==> “What I wanted to show is that weasel wording usually indicates something regarding the rhetorical structure of a text. In my opinion, Nic’s weasel wording clearly indicates (a) argumentativeness and (b) underspecification.”

    What I’m saying is that I could speculate about what you meant by saying “weasel words,” but that it might further communication if you describe it as you just did, to reduce the chance of me coming up with mistaken interpretations. We can see from above that my suggested alternative phrasings, based on an assumption that (unlike how the term is usually used, IMO) you did not intend to assert an intent to deceive, did not meet with your intent.

    ==> “We need way more editors than we need auditors.”

    I agree. But, IMO, as someone who believes in writing as a process, editing is not a top-down or prescriptive process; it isn’t something that editors do, but an exchange about perceived and intended meanings of words between writers and readers.

    ==> “Looking for formulations written in wooden tongue is one of the tricks editors need to hone.”

    Again, I agree, except I would add that it’s something that people (editors and writers and readers) need to build dialogue around.

    ==> ” Hedging, clichés, doublespeak, and other such linguistic phenomena reveal inferences that can’t be observed by looking at code and datasets.”

    I agree with that also. It’s an interesting point, and speaks to something that I’ve never really concretized, but felt more intuitively, about the limitations of calls for code and datasets. While providing codes and datasets may well help to promote sharing of perspective and mutual understanding, it can never resolve the rhetorical crutches that people rely on to escape the painful process of opening up to sharing perspective and reaching mutual understanding. Going far afield, it reminds me of discussions I’ve had with Japanese who think it’s weird that Americans rely on codifying rules of ethics (say business ethics) as a means of positively influencing ethical behaviors. Codes and datasets may help to advance good faith dialogue, but they can’t reach the beneficial outcomes of good faith dialogue (mutual understanding) when there is no good faith.

    ==> “Two important caveats editors should bear in mind is that: (a) indicators are seldom decisive; (b) indicators are almost always present. In other words, Nic could certainly could have used the very same words without being overly argumentative and without underspecifying his method. It just so happens that he did not.”

    Agree w/o further comment.

    ***

    ==> “Wooden tongue is here to stay. It serves a function. It saves time. It wastes money. What’s not to like? ”

    No doubt, a lack of good faith, and preaching to the choir, serve a purpose – in the sense that taking heroin serves the purpose of getting the addict high.

    ==> “It would be faster to count non-weasel words. The only thing that gets clearly conveyed is the CAGW meme. Another contrarian outlet manager who can say: mission accomplished.”

    Counting is unnecessary. The interpretation is pre-ordained – by virtue of the orientation of the listener. There is no real intent there to convey content, only to confirm biases.

  176. > I don’t understand how you got a tu quoque out of what I said

    Here:

    [O]ne thing that I notice when reading WUWT is how frequently “skeptics” label appropriate quantification of uncertainty – of the sort they rarely provide (may, could, might,..) – as “weasel words.”

    Sensitivity to cumulative emissions

    I think it’s fair to say that you associate what I did with what contrarians often do and that you condemned what contrarians do; a conclusion not too unsimilar to the one I wrote earlier follows somewhat directly.

    (I hope having packed this sentence with enough weasel words to show that I’m not afraid of using them!)

    ***

    Since I consider “weasel word” quite OK and you don’t, Joshua, please tell me which synonym I should use. How about hedging?

    “I’m sure that Steven Mosher, Willis, Stevie-Mac, etc., have all sent him numerous emails to that effect….and that the data are forthcoming…but Nic prolly just hasn’t had time to get back yet in response.”

    Brandon asked for it on day one.

    I ask for things when I actually intend to look at them and run the shit.
    Since I don’t understand this topic I will let Brandon pester him.
    Hmm, history off the top of my head:

    I asked for Hansen’s: got it, read it all. Spent a couple weeks trying to get it running.
    Failed. Went to “Weird Stuff” in Sunnyvale to see if I could find an old AIX machine
    cause I thought it would be cool to run it in the original environment.
    E.M. Smith agreed to take over the effort. He eventually got it to run. Clear Climate
    Code guys got a Python version working so I used that for a while. Peter O’Neill also
    did a cool version.
    I asked for Monckton’s code: never got it.
    I asked for Christy’s code: never got it.
    I asked for Scafetta’s code: never got it.
    I asked for Nic’s on one of his earlier papers that I reviewed. He gave it.
    I can’t recall if I specifically asked for Jones, but it was posted. When I did my emulation
    of his method years ago and the results matched, there was no need to instrument
    his code and run it to see where I made mistakes… because we matched.

    Basically I ask for code when I intend to use it. to do a sensitivity test or to make a translation
    in R easier.

    In my case those working in climate science have a better track record than skeptics.
    Ah, there was also some Tamino code… I can’t recall if he just posted it or gave it when I asked.
    I included his code in my packages; he writes good stuff, as does Nick Stokes.

  178. Speaking of Nick’s good stuff:

    In this note, I will show that the constancy, perversely, depends on the dynamics, and is a result of the near exponential increase in CO2 emissions. This effect is mostly independent of the actual mechanism for the sinks. It is really a consequence of linearity with exponential increase.

    http://moyhu.blogspot.com/2015/11/why-is-cumulative-co2-airborne-fraction.html

    I like it when Nick talks naughty.
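    A toy version of Nick’s point, for anyone who wants to poke at it: with a linear sink (uptake proportional to the excess CO2 above equilibrium) and exponentially growing emissions, the airborne fraction settles toward a constant, whatever the sink mechanism. The rates below are illustrative, not taken from his note:

```python
import math

# Emissions grow as exp(k*t); the sink removes s * (airborne excess).
# Analytically the late-time airborne fraction tends to k / (k + s).
k = 0.02          # emissions growth rate, per year (~2%/yr)
s = 0.02          # sink uptake rate, per year
dt = 0.1          # Euler time step, years

excess = 0.0      # airborne excess CO2 (arbitrary units)
cumulative = 0.0  # cumulative emissions
t = 0.0
while t < 300:
    e = math.exp(k * t) * dt                 # emissions this step
    cumulative += e
    excess += e - s * excess * dt            # source minus linear sink
    t += dt

print(f"late-time airborne fraction ~ {excess / cumulative:.2f}")
# k / (k + s) = 0.5 for these rates, independent of the sink details.
```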

  179. Joshua says:

    I stand corrected on the code-requesting point…at least w/r/t you if not others. I kinda recall you offering some lame excuse for why Stevie-Mac’s interest in auditing is so focused on “consensus” scientists’ work…could be that I ‘mis-remembered,’ however.

  180. Joshua says:

    BTW – did Willis ever provide that code that you were requesting of him a while back?

  181. Joshua says:

    willard –

    I think it’s fair to say that you associate what I did with what contrarians often do and that you condemned what contrarians do; a conclusion not too unsimilar to the one I wrote earlier follows somewhat directly.

    At a surface level I was making an association, and suggesting that to avoid association at a more important level, more clarifying language would be less sub-optimal. I have no doubt that often I’ve read “skeptics” at WUWT labeling appropriate quantification of uncertainty as “weasel words” for rhetorical benefit. When I see them do that, I see that they are “skeptics” and not skeptics. Believing that you are not a “skeptic,” I’m offering the opinion that for this reader, more clarifying language would help to reinforce that distinction. Of course, as always, my opinion is worth exactly what you’ve paid for it.

    There’s another aspect, also. As I read the term, “weasel words” carries a connotation of judging intent (to deceive). Perhaps I’ll change my perception of connotations of the term on the basis of editors’ parlance or the result of Google searches, but I doubt it. Given that judging intent w/o sufficient evidence is something that I associate with “skepticism,” I am also saying that I find it “inconvenient” when I read someone I don’t think is “skeptical” using that term (unless accompanied by sufficient evidence to support an assessment of intent). All caveats apply: I may be an outlier, it’s an observation about my reaction as a reader (and ultimately, it is my responsibility to deal with my reaction), not your intent as a writer (and it is up to you as to whether you want to assume any responsibility as a writer for my reaction as a reader), and it’s an observation that is worth exactly what you paid for it.

    (I hope having packed this sentence with enough weasel words to show that I’m not afraid of using them!)

    See – again there, that confuses me. I think that complex hierarchical and conditional structures in language so as to properly convey uncertainty are quite important, although they are very difficult to manage effectively, and even though they provide people with loopholes through which to get lost in one’s own uncontrolled motivated reasoning. There is certainly the potential for the use of such syntax to enable an intent to deceive (weaselly behavior), but IMO, it is important to clarify when such syntax is overly complex, or unclear, or insufficiently considered, and when it is laying cover for an intent to deceive (when we have sufficient evidence to judge).

    So does that mean that I am or am not afraid of using weasel words, just as you aren’t afraid to use them? Seems to me that probably I’m also not afraid to use what you are calling weasel words (btw, as I write that I think the notion of “fear” or lack thereof associates with an unfortunate connotation – as I might disagree about the usage of what you call “weasel words” w/o being “afraid” of using them) but that I think that “weasel words” connotes something different than what you’re attempting to convey through using the term.

    Since I consider “weasel word” quite OK and you don’t, Joshua,

    See above. I think that is confusing because I think that we define the term differently.

    please tell me which synonym I should use. How about hedging?

    For me, “hedging” has a less pejorative connotation, generally. It can, also, imply some measure of intent to deceive, but such a connotation doesn’t jump out at me like it does with “weasel words.” “Hedging,” IMO, can be entirely appropriate within a good faith communicative context, e.g., “I am saying X as a hedge against the possibility that my theory of Y turns out to be unsubstantiated.” Whereas it is hard for me to imagine when “weasel words” can ever be appropriate within a good faith context.

    Anyway, is the horse dead yet?

  182. Willard says:

    > I think that complex hierarchical and conditional structures in language so as to properly convey uncertainty are quite important, although they are very difficult to manage effectively, and even though they provide people with loopholes through which to get lost in one’s own uncontrolled motivated reasoning.

    Hence why weasel words are not always bad, and why I hinted at the fact that I am not afraid to use them.

    It could be perfectly fine to say:

    (N1) These results are pretty much consistent with most of the lichurchur.

    It does not say much, but it does say something, and saying something in the discussion part of the paper is better than saying nothing, like Nic did incidentally.

    More importantly, N1 says something that pertains to results, which is what researchers usually do with the consistency trope. To say:

    (N2) These numbers are consistent with the datasets where I selected them.

    is hollower than the usual cliché. It stretches the notion of consistency beyond credulity. It omits the fact that Nic does not say if his results are consistent with the lichurchur. Nic rubs in the consistency trope and yet omits the most important information this cliché is supposed to carry.

    How are Nic’s results consistent with the lichurchur?

    ***

    > At a surface level I was making an association, and suggesting that to avoid association at a more important level, more clarifying language would be less sub-optimal.

    Clarity is a property of speech patterns, not words. Raising concerns about “weasel word” mainly shows that it’s easy to raise concerns about words. If we allow ourselves to project intentions into them, it even becomes trivial.

    I’ve seen contrarians raise concerns over many words. I’m always thankful for the concerns they raise.

  183. Joshua says:

    BTW – I posted that gif before I read your 4:27, as a joke on myself, not a comment on your comment.

    There’s a front page article in the Sunday Times today [ http://www.thesundaytimes.co.uk/sto/news/uk_news/Environment/article1641906.ece ] which is headlined “Revealed: greenhouse gasses to fall”. The story is about the decoupling of emissions and global GDP, but to the uninitiated who aren’t aware that it’s total emissions that matter, it could be read as suggesting that the danger of global warming is receding. It’s phrased rather like all the ‘pause’ talk, in that Jonathan Leake writes “Last year emissions stalled at 0.6%…” [increase per year], which produces the tagline “emissions decline”.

    Given that total emissions continue increasing year-on-year, the whole thing—if not blatant denial—looks suspiciously like whitewash of the lukewarmer kind.

  185. Sam taylor says:

    I very rarely see estimates of global emissions with error bars on them, despite the fact that such error bars would be massive and would probably completely void the recent narrative that emissions have decreased and that we have somehow decoupled. I have my doubts over whether the two recent data points represent a statistically significant change in the trend. Maybe emissions have just ‘paused’ for a while.

  186. John,
    That is indeed a little misleading, in that we need emissions to actually start dropping, not simply have the rate of increase reduce.

    Somewhat relevant, though, is that Nick Stokes has a post in which he suggests that the airborne fraction has been constant because, when you combine emissions that are increasing exponentially with a decay profile that has a fixed time constant (i.e., each pulse of CO2 decays – I think – exponentially), you get a constant airborne fraction. Hence, if we do decrease the rate at which we’re emitting CO2, it could allow the airborne fraction to reduce. On the other hand, my understanding is that most future emission pathways have an increasing airborne fraction, but I don’t know if that’s because it’s actually increasing, or because the other non-CO2 GHGs account for the extra CO2eq ppm.
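    Stokes’ argument can be checked with a toy calculation: if emissions grow as e^(rt) and each pulse decays with e-folding time τ, the airborne burden tends to E(t)/(r + 1/τ), so the airborne fraction approaches the constant r/(r + 1/τ). Here is a minimal numerical sketch of that idea (my own illustration, not Stokes’ actual code; the growth rate r = 2%/yr and decay time τ = 50 yr are assumed round numbers, not values from his post):

```python
import numpy as np

def airborne_fraction(r=0.02, tau=50.0, years=300.0, dt=0.1):
    """Toy model: emissions grow exponentially at rate r (per year);
    each emitted pulse decays with e-folding time tau (years).
    Returns the time axis and the airborne fraction A(t)/C(t)."""
    t = np.arange(dt, years, dt)
    emissions = np.exp(r * t)               # arbitrary units per year
    cumulative = np.cumsum(emissions) * dt  # total emitted so far, C(t)
    atmosphere = np.zeros_like(t)           # excess CO2 still airborne, A(t)
    for i in range(1, len(t)):
        # Euler step: add this step's emissions, subtract exponential decay
        atmosphere[i] = atmosphere[i - 1] + (
            emissions[i - 1] - atmosphere[i - 1] / tau
        ) * dt
    return t, atmosphere / cumulative

t, af = airborne_fraction()
# The fraction settles towards the analytic limit r / (r + 1/tau)
print(f"airborne fraction near t=150 y: {af[np.searchsorted(t, 150.0)]:.3f}")
print(f"airborne fraction near t=300 y: {af[-1]:.3f}")
```

    With these assumed parameters the fraction settles near r/(r + 1/τ) = 0.5; lowering the emissions growth rate r lowers that asymptotic fraction, which is the point about slowing emissions.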

  187. Willard says:

    It’s cool, Joshua.

  188. BBD says:

    John Russell

    Yes, I read that front page ST article with much the same reaction as you.

  189. BBD says:

    It’s a classic. The Murdoch press publishes a misleading story; scientists object but are not widely heard; plus one for misinformation.

  190. anoilman says:

    Joshua, I always saw internet arguments this way:

  191. Pingback: 5000 GtC | …and Then There's Physics

  192. Pingback: The TCR-to-ECS ratio | …and Then There's Physics

  193. Pingback: Beyond equilibrium climate sensitivity | …and Then There's Physics

  194. Pingback: New ocean heat content analysis | …and Then There's Physics
