Judith Curry confuses laypeople about climate models

Judith Curry has written a report for the Global Warming Policy Foundation called Climate Models for the layman. As you can imagine, the key conclusion is that climate models are not fit for the purpose of justifying political policies to fundamentally alter world social, economic and energy systems. I thought I would comment on the key points.

  • GCMs have not been subject to the rigorous verification and validation that is
    the norm for engineering and regulatory science.

Well, yes, this is probably true. However, it’s primarily because we only have one planet and haven’t yet invented a time machine. We can’t run additional planetary-scale experiments and we can’t go back in time to collect more data from the past.

  • There are valid concerns about a fundamental lack of predictability in the complex
    nonlinear climate system.

This appears to relate to the fact that the system is non-linear and, hence, chaotic. However, that it is chaotic does not mean that it can vary wildly; it’s still largely constrained by energy balance. It will tend towards a state in which the energy coming in matches the energy going out. This is set by the amount of energy from the Sun, the amount reflected, and the composition of the atmosphere. It doesn’t have to exactly match this state, but given the heat capacity of the various parts of the system, it is largely constrained to remain fairly close to it. Also, for the kind of changes we might expect in the coming decades, the response is expected to be roughly linear. This doesn’t mean that something unexpected can’t happen, simply that it is unlikely. Also, the possibility that some non-linearity might trigger an unexpected, and substantial, change doesn’t somehow reduce the risks.
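As a rough illustration of the energy-balance constraint described above, here is the textbook zero-dimensional calculation. The inputs are standard round numbers, it is purely illustrative, and it is obviously nothing like a GCM:

```python
# A minimal zero-dimensional energy-balance sketch: the planet tends toward a
# temperature at which absorbed solar energy matches outgoing thermal radiation.
S = 1361.0        # solar constant, W/m2
albedo = 0.3      # fraction of incoming sunlight reflected
sigma = 5.670e-8  # Stefan-Boltzmann constant, W/m2/K4

absorbed = S * (1 - albedo) / 4.0    # absorbed flux, averaged over the sphere
T_eff = (absorbed / sigma) ** 0.25   # effective emission temperature
print(round(T_eff, 1))               # ~255 K; the ~33 K warmer actual surface
                                     # reflects the greenhouse effect
```

Changing the solar input, the albedo, or (via the greenhouse effect) the effective emissivity shifts this equilibrium, which is why the chaotic system still can’t wander arbitrarily far from it.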

  • There are numerous arguments supporting the conclusion that climate models
    are not fit for the purpose of identifying with high confidence the proportion
    of the 20th century warming that was human-caused as opposed to natural.

This seems like a strawman argument. There isn’t really a claim that climate models can identify with high confidence the proportion of the 20th century warming that was human-caused as opposed to natural. However, they can be used to estimate attribution, and the conclusion is that it is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together (e.g., here). Additionally, the best estimate of the human-induced contribution to warming is similar to the observed warming over this period (e.g., here). One reason for this is that it is very difficult to construct a physically plausible, and consistent, scenario under which more than 50% of the warming is not anthropogenic.

  • There is growing evidence that climate models predict too much warming from
    increased atmospheric carbon dioxide.

This is mainly based on results from energy balance models. I think these are very interesting calculations, but they don’t rule out – with high confidence – equilibrium climate sensitivity values above 3K, and there are reasons to be somewhat cautious about these energy balance results. There are also indications that we can reconcile these estimates with estimates from climate models.

  • The climate model simulation results for the 21st century reported by the Intergovernmental Panel on Climate Change (IPCC) do not include key elements of climate variability, and hence are not useful as projections for how the 21st century climate will actually evolve.

This seems to be complaining that these models can’t predict things like volcanic activity and solar variability. Well, unless we somehow significantly reduce our emissions, the volcanic forcing will probably be small compared to anthropogenic forcings. Also, even if we went into another Grand Solar Minimum, the reduction in solar forcing will probably only compensate for increasing anthropogenic forcings for a decade or so, and this change will not persist. Again, unless we reduce our emissions, these factors will almost certainly be small compared to anthropogenic influences, so this doesn’t seem like a particularly significant issue.

The real problem with this report is not that it’s fundamentally flawed; it’s that it’s simplistic, misrepresents what most scientists who work with these models actually think, and ignores caveats about alternative analyses while amplifying possible problems with climate models. Climate models are not perfect; they can’t model all aspects of the system at all scales, and clearly such a non-linear system could respond to perturbations in unexpected ways. However, this doesn’t mean that they don’t provide relevant information. They’re scientific tools that are mainly used to try to understand how the system will evolve. No one claims that reality will definitely lie within the range presented by the model results; it’s simply regarded as unlikely that it will fall outside that range. No one claims that the models couldn’t be improved; it’s just difficult to do so with current resources, both the people needed to develop and update the codes and the required computing power. They’re also not the only source of information, so no one is suggesting that they should dominate our decision making.

Something to consider is what our understanding would be if we did not have these climate models. Broadly, our understanding would be largely unchanged. We’d be aware that the world would warm as atmospheric CO2 increased, and we’d still have estimates for climate sensitivity that would not be very different to what we have now. We’d be aware that sea levels would rise, and we’d be able to make reasonable estimates for how much. We’d be aware that the hydrological cycle would intensify, and would be able to make estimates for changes in precipitation. It would, probably, mainly be some of the details that would be less clear. If anything, without climate models the argument for mitigation (reducing emissions) would probably be stronger because we’d be somewhat less sure of the consequences of increasing our emissions.

I think it would actually be very good if laypeople had a better understanding of climate models; their strengths, their weaknesses, and the role they play in policy-making. This report, however, does little to help public understanding; well, unless the goal is to confuse public understanding of climate models so as to undermine our ability to make informed decisions. If this is the goal, this report might be quite effective.

This entry was posted in Climate sensitivity, ClimateBall, Judith Curry, Science, The scientific method. Bookmark the permalink.

110 Responses to Judith Curry confuses laypeople about climate models

  1. Judith’s report also includes a quote from Isaac Held

    ‘It’s fair to say all models have tuned it,’
    says Isaac Held, a scientist at the Geophysical Fluid Dynamics Laboratory, another prominent modelling center, in Princeton, New Jersey.

    Isaac Held actually discussed this quote in this post and says:

    I was interviewed recently for a news article on climate model tuning, which said: … nearly every model has been calibrated precisely to the 20th century climate records—otherwise it would have ended up in the trash. “It’s fair to say all models have tuned it,” says Isaac Held . The word “precisely” changes the flavor of this sentence a lot, raising the spectre of overfitting. (I have no memory of using that word.) But I don’t doubt that I did say the part inside the quotes. I am not very good at provided sound bites. Consistent with this post, a more accurate and long-winded sound bite would have been something like — in light of the continuing uncertainty in aerosol forcing and climate sensitivity, I think it’s reasonable to assume that there has been some tuning, implicit if not explicit, in models that fit the GMT evolution well.

  2. Christian Moe says:

    Good points. But I’m not quite clear about get your distinction between “can identify with high confidence the proportion of the 20th century warming that was human-caused as opposed to natural” and “can be used to estimate attribution.” Attribution *is* about estimating how much was man-made as opposed to natural, I think. So what’s the strawman, precisely? Talking about “the proportion”, implying a spuriously precise point estimate, when what *has* be stated with high confidence is a lower bound on the man-made contribution?

  3. Christian Moe says:

    Aargh. “About get your” –> “about your”. “Has be stated” –> “has been stated”. Sorry.

  4. Christian,
The distinction I’m making is that the formal attribution doesn’t determine with high confidence the proportion that is anthropogenic, at least not in the sense of being able to produce a precise proportion. It produces a distribution that indicates that it is extremely likely to be more than 50% since 1950 and that the best estimate is similar to the observed warming. However, it doesn’t produce a precise estimate of the proportion. The first figure in this Realclimate post illustrates it.

    If Judith is claiming that it can’t even do this, then I would argue that she’s wrong – it’s been done. On the other hand, if she’s claiming that it can’t determine a precise proportion (as I took her to mean) then that’s something that’s never really been claimed, so it seems like a strawman argument.

  5. Joshua says:

    Anders –

    I got hung up on a similar question to Christian’s (but was kind of embarrassed to ask and he opened the door). I’ll try asking a clarifying question and then let the technical discussion proceed.

=={ It produces a distribution that indicates that it is extremely likely to be more than 50% since 1950 and that indicates that the best estimate is similar to the observed warming. }==

Are you saying that the models might show a likelihood distribution and a degree of similarity with observations, but that the “confidence” about those results of the model is a judgement of the observer of the models, not a product of the models themselves?

  6. Joshua,
    I’m suggesting that Judith seems to be claiming that climate models cannot produce a precise estimate for the proportion that is anthropogenic. That is probably true, but is not what is being claimed. What’s being claimed is that it is very probably not less than 50% and that the best estimate is similar to what was observed. The only thing that is presented with high confidence is that it is more than 50%. A precise proportion is not presented with high confidence.

    It’s possible that I’ve misunderstood what Judith is suggesting and that she is actually disputing that climate models can even determine, with high confidence, that it is more than 50%. If so, then I would argue that she is simply wrong. If, on the other hand, she’s suggesting that they can’t estimate a precise proportion, with high confidence, then she’s probably correct, but such a claim has never really been made – not formally, at least.

  7. Christian Moe says:

    ATTP,
    Thanks, that’s clear and makes good sense to me.

  8. Joshua says:

    Anders –

Thanks. I understand what you’re saying now.

  9. paulski0 says:

    There are valid concerns about a fundamental lack of predictability in the complex
    nonlinear climate system.

    There is growing evidence that climate models predict too much warming from
    increased atmospheric carbon dioxide. (i.e. from simple linear EBM sensitivity estimates)

    Does she realise that these beliefs are basically contradictory? Results from such simplistic models should be meaningless to someone who believes climate is fundamentally unpredictable.

    I note that she quotes Bjorn Stevens as arguing for an ECS upper-bound of 3.5C, but appears to have skipped mention that his equivalent lower bound is 2C. An accident, I’m sure.

  10. brandonrgates says:

    Along those same lines, I love this little bit of magical thinking:

    Whether or not human caused global warming is dangerous or not depends critically on whether the ECS value is closer to 1.5°C or 4.5°C.

  11. Steven Mosher says:

    “I’m suggesting that Judith seems to be claiming that climate models cannot produce a precise estimate for the proportion that is anthropogenic.”

    Let me see if I can elucidate her argument a bit more clearly than she has

    The attribution argument as made by the IPCC rests on comparing two series

    A) The predicted temperature with natural forcings only
    B) The temperature series with both natural and anthro forcing

    Schematically you’d look at B-A.

    However, this presupposes that the models can represent the climate well when the forcings are only natural,

    But

The period 1910-1940 cannot be explained by natural and anthro forcing, therefore there must be some unexplained unicorn (A`) lurking about to close the gap. (gap arguments are fun)

    This unicorn has to be found before we can trust models or attribution studies..

  12. Steven Mosher says:

Willard will correct me if I have misdiagnosed her argument, but it appears to be a species of the god-of-the-gaps genus.

  13. Michael 2 says:

    “it’s just simplistic, misrepresents what most scientists who work with these models actually think”

    Maybe, perhaps even likely. What I hope to learn someday is why, not what, this subset of scientists think what they think. Why do you think this but Judith Curry does not? As it is as unlikely that I will see your reply as it is that you will post this comment there’s no need to spend much time on it but that is a question on my mind.

    “unless the goal is to confuse public understanding of climate models so as to undermine our ability to make informed decisions.”

    I do not consider myself confused. I might be in error but that is quite a different thing than confused. It is more likely “the public” has no understanding of climate models thus not reaching a level of awareness that could be confused in the first place.

  14. Szilard says:

    SM: I think “god of gaps” is a bit mis-aimed here – since it doesn’t seem she’s positing some factor at work outside what science might discover.

    But in any case, the argument as you present it seems to be too substantial to be dismissed as just a rhetorical solecism. Is the notion of a 1910-1940 “gap” well-supported enough to require something to fill it, or not? (I wouldn’t have a clue.)

  15. Chubbs says:

    Ironic that someone who recently predicted that the hiatus would last till the 2020s thinks climate models are not fit for use.

  16. JCH says:

Also, she saw one of the “ice recoveries” as a sign of the commencement of the stadium wave, which subsequently petered out. But was it 2020 or 2025? I think 2025.

  17. Michael E Fayette says:

    ATTP: Aren’t you possibly misinterpreting Judith’s statement that:

    “The climate model simulation results for the 21st century reported by the Intergovernmental Panel on Climate Change (IPCC) do not include key elements of climate variability, and hence are not useful as projections for how the 21st century climate will actually evolve.”

    to mean she is referring to truly random natural events like volcanoes? (Which are actually probably not random but are not reasonably part of a climate model)

I read her statement to mean the models fail to predict observed changes such as La Niña, El Niño and other cyclical climate “events” (like ice ages or the PDO). If you interpret her objection in this manner, she is correct, isn’t she?

  18. Steven Mosher says:

    “SM: I think “god of gaps” is a bit mis-aimed here – since it doesn’t seem she’s positing some factor at work outside what science might discover.”

    thats why I called it a species of a genus. basically arguing that what we know is undermined by what we dont know.

    As for the “warming” in 1910 -1940

    Start here

    https://skepticalscience.com/global-warming-early-20th-century-advanced.htm

Personally I am not seeing any knock-down arguments from either side, but the conclusion she draws is unwarranted… that is, she concludes models can’t be used.

The issue is that when you make policy decisions you can’t avoid using models of some sort, no matter how flawed they may be. In fact some of her suggestions include the use of models, heuristics, worst-case scenarios and a multitude of tools.

  19. izen says:

    @-SM
    “Let me see if I can elucidate her argument a bit more clearly than she has”

I am greatly impressed with your ability to extract by divination some coherent and marginally cogent argument from JC’s prose. But I suggest such an exegesis is of merely academic value.
In polemics the meaning is not in what is written, but in what is read.

    The majority of her approving readers will have no difficulty in translating her statements into a scientifically impregnable rejection of any climate modelling for any purpose.

    The ambiguity is a feature, not a flaw.

  20. Michael,

I read her statement to mean the models fail to predict observed changes such as La Niña, El Niño and other cyclical climate “events” (like ice ages or the PDO). If you interpret her objection in this manner, she is correct, isn’t she?

    I think that was more to do with the issue of it being a non-linear system. The report certainly has comments about future volcanic and solar forcings.

  21. Szilard says:

    SM: I guess she’s saying that GCM’s as they stand currently aren’t fit for purpose, not that models in general are useless.

    Tks for the SS piece – useful. But does it hit JC’s specific argument here – eg why aren’t the various factors reflected in the model outputs (if in fact they’re not); would including them widen uncertainties or whatever for the future outlook.

    As a document, reading it as a clueless “layperson”, I would take her piece as reflecting one person’s rather partisan view & I would want to know what other people in the field think about the issues she raises. This is my main problem with how I’ve seen JC present other things in the past: there’s rhetoric about uncertainty & the need to avoid partisanship, but the content often gives short-shrift to opposing views and arguments.

    ATTP’s comment is probably fair: “The real problem with this report is not that it’s fundamentally flawed; it’s just simplistic, misrepresents what most scientists who work with these models actually think, and ignores caveats about alternative analyses while amplifying possible problems with climate models.”

  22. verytallguy says:

    The attribution issue is interesting.

    The IPCC is actually very conservative in the skill it allows to GCMs.

    My summary of their attribution process was:

1) Models produce a good simulation of natural variability. AR5 section 9.5.3 concludes “Nevertheless, the lines of evidence above suggest with high confidence that models reproduce global and NH temperature variability on a wide range of time scales.”

    2)Model spread of natural variability in the 1950-2010 timeframe is ca zero +/- 0.1 degC

    3)Therefore the rest must be anthro

4) To allow for “structural uncertainties” (is this effectively unknown unknowns?) the spread of natural is increased by an arbitrary amount determined by the judgment of the panel; this actually makes the attribution conservative compared to the direct model output.

    This came out of a discussion with Gavin Schmidt following an earlier foray of Judith’s into the fray.

    You can see his response here:

    http://www.realclimate.org/index.php/archives/2014/08/ipcc-attribution-statements-redux-a-response-to-judith-curry/comment-page-2/#comment-589707

    And Gavin’s view of Judith addressing the issues here:
    http://www.realclimate.org/index.php/archives/2014/08/ipcc-attribution-statements-redux-a-response-to-judith-curry/comment-page-2/#comment-589705

    Judith’s latest missive for the GWPF is unlikely to be interesting. She seems intent on promoting anything which puts the climate mainstream science into disrepute; the actual content appears almost irrelevant to her.

  23. vtg,
    The attribution issue is interesting. Technically, it potentially suffers from the prosecutor’s fallacy, which was discussed on James Annan’s blog. What I found interesting was this comment where James points out that

    But when you think about it carefully it doesn’t quite add up, and it certainly doesn’t add up when you look at a small subset of the evidence and argue as a result that we have low confidence of an anthropogenic effect…as I argue with respect to ocean warming in the linked paper. Apparently we only have moderate confidence that we have warmed the ocean, even though we have measured it both warming and expanding (similarly to expected from models), and we are very confident that we have warmed the atmosphere directly above it! That’s just logically incoherent.

  24. verytallguy says:

    Hmmm. The prosecutor’s fallacy argument is making my head spin. I don’t think I’ve properly understood it in this context.

The other point James makes is that the IPCC are not internally consistent. I think that’s unsurprising, given the complexity of the subject and the sheer number of people involved. Ironically, it seems driven by an attempt to be highly conservative. Arguing that it’s sufficiently inconsistent to matter is perhaps another example of my current favourite logical fallacy.

    https://en.wikipedia.org/wiki/Nirvana_fallacy

  25. verytallguy says:

    Anyway. One wonders if Judith has a cousin named Albrecht.

    THE DUCK.

  26. vtg,

    The prosecutor’s fallacy argument is making my head spin. I don’t think I’ve properly understood it in this context.

It’s something I’ve struggled with too, but I think the point is that the attribution studies technically reject, at the 95% level, the hypothesis that more than 50% of the warming is non-anthropogenic. In other words, if the warming were mostly natural, observations like ours would be very unlikely. However, this is not technically the same as there being a 95% chance that it is anthropogenic. Given that we only have two suspects (natural, anthropogenic), though, this seems more a technicality than a suggestion that it could be mostly natural (since that has been rejected).
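To make the distinction concrete, here is a toy Bayes calculation. All the numbers are invented purely for illustration; they are not drawn from any attribution study:

```python
# Toy illustration of why "less than 5% chance of this warming if mostly natural"
# is not the same statement as "95% chance the warming is mostly anthropogenic".
p_obs_given_natural = 0.04   # P(observed warming | mostly natural) - invented
p_obs_given_anthro = 0.90    # P(observed warming | mostly anthropogenic) - invented
prior_natural = 0.5          # even-handed prior over the two suspects

# Bayes' theorem: P(mostly natural | observed warming)
p_natural = (p_obs_given_natural * prior_natural) / (
    p_obs_given_natural * prior_natural
    + p_obs_given_anthro * (1 - prior_natural)
)
print(round(p_natural, 3))   # ~0.043: close to, but not the same thing as, 0.04

# With a prior that strongly favours "natural", the posterior differs a lot
prior_natural = 0.9
p_natural = (p_obs_given_natural * prior_natural) / (
    p_obs_given_natural * prior_natural
    + p_obs_given_anthro * (1 - prior_natural)
)
print(round(p_natural, 2))   # ~0.29: same 4% likelihood, very different conclusion
```

The gap between likelihood and posterior is exactly the prosecutor’s fallacy; with only two suspects and no plausible mechanism for the natural one, the practical conclusion is much the same either way.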

  27. David says:

    “…unless the goal is to confuse public understanding of climate models so as to undermine our ability to make informed decisions.”

    I thought that was the GWPF’s motto?

  28. Chubbs says:

The case for a natural component to the warming would be much stronger if the natural component or components could be identified and linked in a credible manner with the warming. These natural factors would also have to explain the current 0.8 W/m2 energy imbalance. The fact is that man-made forcing can explain all of the warming, and natural variability very little.

  29. paulski0 says:

    Szilard,

    The basis of the AR5 attribution statement is pretty simple. They get estimates for climate response from anthropogenic forcing and natural forcing for 1951-2010, then an estimate for potential influence of internal variability over a 60-year period. Those were, respectively:

    ANT = 0.7K +/- 0.1 (~17-83% range)
    NAT = 0K +/- 0.1 (~17-83% range)
    IntVar = 0K +/- 0.1 (~17-83% range)

These are then compared to the observational estimate of 0.65 +/- 0.06 (5-95% range) using HadCRUT4. So we can see trivially that the lower ~2.5% bound of the anthropogenic estimate would be 0.5K, which is more than half the warming. Going the other way, we can see that the combined upper bound (97.5%) for natural causation (NAT+IntVar) = 0.2K. This is less than half the observed warming, hence again suggesting that anthropogenic factors caused more than half of it.

    It actually wouldn’t be a problem to go through the same exercise for the 1910-1940 period, using the same analysis parameters. I’ll attempt a crude approximation here.

Firstly an anthropogenic estimate. I’ll obtain this using a simple linear energy balance method, applying the 1951-2010 ANT ratio deltaTemp/deltaForcing to the 1910-1940 deltaForcing. For forcing I’m using the AR5 history. I make it that 1951-2010 anthropogenic forcing = about 1.65 W/m2, so the scaling factor = 0.7/1.65 = 0.4242. 1910-1940 forcing is about 0.3 W/m2, so the best estimate response = 0.3*0.4242 = 0.13K. For simplicity I’ll adopt the same uncertainty, so 0.13K +/- 0.1 (17-83% range).

    Now a natural forced estimate. Unlike for 1951-2010 there is good evidence that both solar and volcanic responses were positive over 1910-1940. Using a plot of CMIP5 historicalNat I make it that there is a best estimate 0.15K warming due to natural forcing. Again, I’ll adopt the same uncertainty so 0.15K +/- 0.1 (17-83% range).

    Now an estimate for internal variability. In this case we’re not really trying to estimate what was, but rather what potentially could happen over any 30-year period. Generally models indicate greater likelihood of larger trends over 30-year periods than 60-year periods, so I adopt a greater uncertainty of +/- 0.15K (17-83% range) for 1910-1940, centered on zero.

    Laying that all out, we have

    ANT = 0.13K +/- 0.1 (~17-83% range)
    NAT = 0.15K +/- 0.1 (~17-83% range)
    IntVar = 0K +/- 0.15 (~17-83% range)

    Observed warming, per HadCRUT4 is 0.4K. Uncertainty in annual anomalies is larger in that period due to poorer sampling, so I will extend 5-95% trend uncertainty to +/-0.1K.

    So, we can see a few things from this. The best estimate of all components comes to 0.28K, which is outside the lower bound for observed warming. However, this should not be surprising. 1910-1940 has not been chosen randomly. It has been chosen because it has an unusually large trend in relation to the wider period – it’s cherry-picked. Therefore we should a priori expect it to test the limits of our uncertainty range. Summing uncertainty bounds for all components indicates a ~97.5% estimate of 0.46K.

    We can also see that anthropogenic forcing is extremely unlikely to explain the full warming trend by itself. In terms of percentage of observed warming the 95% confidence range spans -17.5% to 82.5%. It appears likely that anthropogenic forcing contributed some warming in this period, which I think is what AR5 concluded, but at higher confidence levels there is uncertainty about the sign of trend.

Taking the sum of natural causes, the upper 97.5% bound is 0.4K. That this only just reaches the observed trend again supports a likely, but not definitive, positive anthropogenic trend.

    In summary, it appears that applying the AR5 attribution approach to the 1910-1940 period can explain the observed warming over that time as a combination of all factors. Strong confidence in even the sign of anthropogenic contribution does not appear to be available here, and this is completely consistent with strong confidence in anthropogenic contribution >50% for 1951-2010.
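The back-of-envelope arithmetic above can be sketched in a few lines. The sketch assumes each quoted ~17-83% range can be treated as roughly ±1 standard deviation of a Gaussian (so the ~2.5%/97.5% bounds sit at mean ∓/± 2 sigma), and uses only the figures quoted in the comment, not official AR5 numbers:

```python
# Rough reproduction of the attribution arithmetic above. Each quoted
# ~17-83% range (e.g. +/- 0.1 K) is treated as roughly +/- 1 sigma of a
# Gaussian, so approximate 2.5%/97.5% bounds are mean -/+ 2 sigma.

def bounds_95(mean, sigma):
    """Approximate (2.5%, 97.5%) bounds for a Gaussian component."""
    return mean - 2 * sigma, mean + 2 * sigma

# --- 1951-2010: anthropogenic component 0.7 K +/- 0.1 (taken as 1 sigma) ---
ant_lo, ant_hi = bounds_95(0.7, 0.1)
print(round(ant_lo, 2))              # 0.5 K: more than half of the observed 0.65 K

# --- 1910-1940: scale the anthropogenic response by the forcing ratio ---
scale = 0.7 / 1.65                   # K per (W/m2) from the 1951-2010 response
ant_early = round(0.3 * scale, 2)    # ~0.3 W/m2 of early-century forcing -> 0.13 K

nat_early, intvar_early = 0.15, 0.0  # natural-forced and internal-variability best estimates
total = ant_early + nat_early + intvar_early
print(round(total, 2))               # 0.28 K, below the 0.4 K observed best estimate

# Anthropogenic share of the observed 0.4 K, as an approximate 95% range
lo, hi = bounds_95(ant_early, 0.1)
print(round(100 * lo / 0.4, 1), round(100 * hi / 0.4, 1))   # -17.5 to 82.5 (%)
```

A proper treatment would combine the component uncertainties (e.g. in quadrature) rather than simply summing bounds, which is part of why the formal attribution statements are conservative.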

  30. Magma says:

    Personally, I very much like the concept of ‘consilience’ wherein multiple independent lines of evidence converge on a single coherent hypothesis.

Have human activities directly and indirectly sharply raised the atmospheric concentrations of CO2, CH4, N2O and synthetic polyatomic gases relative to their pre-industrial levels? Certainly.

    Do those gases have absorption bands in the near to mid-infrared? Certainly. Does the ‘greenhouse gas effect’ exist and warm the surface and near-surface of the Earth? To a very high degree of certainty, yes.

    Are there multiple geological, geophysical, oceanographic, atmospheric and ecological indications that warming and other warmth- and CO2-driven global changes are occurring at exceptionally rapid rates, geologically-speaking? Certainly.

    Simple physics, multiple lines of very solid evidence vs. a very small number of contrarians, many of whom have known financial interests or ideological biases and none of whom have offered a coherent alternate hypothesis for observed climatic changes.

    This is not a difficult call. The tragedy is that elements of the fossil fuel industry have played their weak hand (PR, bought politicians, and a handful of second-rate scientists) as long and successfully as they have.

  31. Chubbs says:

With the hiatus in the rear view mirror, the attribution section for the next IPCC report is going to be pretty easy to write.

  32. The Very Reverend Jebediah Hypotenuse says:

    “Let me see if I can elucidate her argument a bit more clearly than she has”

    Really?

    There is an entire on-line cottage industry directed at elucidating Judith Curry’s ‘arguments’ more clearly than she has…

    Personally, I have never understood why.

    Curryball(™) – The only winning move is to get outside and walk the dog.

  33. Pingback: Grandi progressi, ma... - Ocasapiens - Blog - Repubblica.it

  34. Willard says:

    > Willard will correct me

    I haven’t read Judy’s latest in her SpeedoScience series.

    It would be possible to convince me otherwise.

  35. Susan Anderson says:

    “Ocean warming has not been measured” is wrong. Here: https://phys.org/news/2017-01-steady-oceans-years.html

I see it revisits the hot-button topic of ships vs. buoys, which is irritating to tidy-minded people who want to throw out anything that smacks of acting like human beings who can think (walk and chew gum) by calibrating different kinds of temperature measurements in the light of what we know.

    Meanwhile, back at the ranch, this has been proposed in our US Congress. “The Environmental Protection Agency shall terminate on December 31, 2018.” (screen shot) It appears to be unlikely to pass, but the Republican argument appears to be that since they won the opposition is disloyal to speak up at all.

    Above there are references to RealClimate from 2014, but the original Curry embrace of Montford and victim-bullying goes back much further. I’m referencing not the extended evasions she provides in RealClimate comments, but the follow-up interview with Dr. Schmidt at Collide-a-Scope because it illustrates the difference between Dr. Curry’s attack dog/victim bullying and Dr. Schmidt’s extreme courtesy: http://blogs.discovermagazine.com/collideascape/2010/08/04/gavins-perspective/

    Gavin Schmidt has won kudos from skeptics in the comments below, who appreciate his participation in the thread and his responses to their questions.

    Drive-by postings are not conducive to a nuanced discussion because too much gets said in-between times. We can always improve moderation – we deleted many comments that went too far in criticising posters (including Judy) rather than their arguments, but this is always hard when there is a lot of traffic, and over-moderation gets criticised just as much. If I can offer one observation that might help, it would be this – once you start to have an online presence in a field like this, it is inevitable that people will misunderstand and misrepresent you. You will be accused of thinking things you would actually find abhorrent and acting in ways that would be anathema. But it is important to remember that this has very little to do with you. You will end up as a some kind of symbol, and while people might talk about someone with your name and your place of work, it helps to think of them as an internet doppelganger.

    It’s a convenient argument for some people to claim we don’t tolerate dissent. They don’t even need to try to engage. But it doesn’t stack up if you actually read any of the threads – lot’s of people disagree with us on many issues. Where we draw the line is with comments that turn methodological issues into personal ones, misrepresent us or insist that we or scientific colleagues are frauds, or that just bring up tired old contrarian talking points over and again.

The final sentence summarizes years and reams of unanswerable falsehood that has held sway in the court of public opinion. Dr. Curry is grossly culpable for fanning those flames.

  36. Keith McClary says:

    Her garbled ref. 27 might be to one of these:

    Click to access grand-minimum-of-the-total-solar-irradiance-leads-to-the-little-ice-age-2329-6755.1000113.pdf

    Click to access 0354-98361500018A.pdf

    Click to access grand_minimum.pdf

    If she was writing a review article it would be reasonable to mention this sort of thing, but since she is basing her argument on it we have to assume she takes this stuff seriously.

  37. As far as a Grand Solar Minimum is concerned, this paper is worth reading.

    They found that a descent into MM-like conditions over the next ∼70 years would only decrease global mean surface temperatures by up to ∼0.2 K, with some uncertainty depending on the assumed reconstruction of past TSI. Feulner and Rahmstorf [2010] reached similar conclusions about the impact on global surface temperature using an intermediate complexity model and two scenarios for a decline in TSI of 0.08% and 0.25% relative to 1950 levels. These results make clear that even a large reduction in solar output would only offset a small fraction of the projected global warming due to anthropogenic activities. This has been further emphasized by Meehl et al. [2013], who used a comprehensive climate model to show that a 0.25% decrease in TSI in the mid-21st century would only offset the projected anthropogenic global warming trend by a few tenths of a degree.

    However, there are potentially large regional impacts.
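The magnitudes quoted above are easy to sanity-check with a back-of-envelope calculation. A rough sketch, assuming a planetary albedo of 0.3 and a round-number transient sensitivity of ~0.5 K per W/m² (illustrative values, not taken from the papers):

```python
# Back-of-envelope estimate of the forcing from a Maunder-Minimum-like
# drop in total solar irradiance (TSI), and the implied temperature change.

TSI = 1361.0         # W/m^2, present-day total solar irradiance
ALBEDO = 0.3         # planetary albedo (fraction of sunlight reflected)
SENSITIVITY = 0.5    # K per (W/m^2), rough transient sensitivity (assumed)

def solar_forcing(frac_drop):
    """Signed top-of-atmosphere forcing (W/m^2) from a fractional TSI drop:
    divide by 4 for sphere/disc geometry, multiply by (1 - albedo)."""
    return -frac_drop * TSI / 4.0 * (1.0 - ALBEDO)

def delta_T(frac_drop):
    """Implied global temperature change for a fractional TSI drop."""
    return SENSITIVITY * solar_forcing(frac_drop)

for drop in (0.0008, 0.0025):  # the 0.08% and 0.25% scenarios quoted above
    print(f"TSI drop {drop:.2%}: forcing {solar_forcing(drop):+.2f} W/m^2, "
          f"dT ~ {delta_T(drop):+.2f} K")
```

The 0.25% case comes out at roughly -0.3 K, consistent with the "few tenths of a degree" quoted from Meehl et al.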

  38. Willard says:

    > If this is the goal, this report might be quite effective.

There are all kinds of goals in ClimateBall ™. Don’t you all want a history lesson by David P. Young [the last part of that sentence has been elided to parry legal threats – W]? Since you are now processing the question, I’m sure that you do:

    Andy, There is another view here that is more philosophical. It is expressed in Bertrand Russells History of Western Philosophy. Basically, Russell argues that starting with Rousseau and the romantics, the devaluation of reason and the elevation of feelings led to the disasters of the 20th Century, Fascism and Communism. It is impossible to be truly skeptical if you don’t recognize a concept of reality and truth independent of human ‘feelings.” There is a case to be made that this is really what is behind our current political divisions. Even though, this is an over generalization, conservatism tends to elevate reason and argumentation, while Leftist ideology elevates feelings and “safe spaces” which simply shut down argumentation.

    Source: Judy’s, in a thread where AndyW tries to justify the contrarian stance by linking it to some kind of innate skepticism, a module which allegedly is right next to Freud’s unconscious.

    Have you ever tried to make up this kind of thing?

    I could not either.

    Where can you find Russell’s blockbuster online, readers may ask?

    Here.

    Will readers be able to find “leftism” or an adequation between the Left and both communism and fascism? Good luck with that.

  39. Keith McClary says:

    @Joshua
    Her quoted title is that of the second link and the journal name is wrong.

  40. JCH says:

    Russell thought Lee Harvey Oswald was innocent. Whatever.

  41. Joshua says:

    Keith –

    Thanks.

    willard –

    The intersection of David Young’s objective analysis with Andy’s objective hypothesis is such a fortuitous synergy

    (minor edits added)…

    Joshua | February 22, 2017 at 2:30 pm | Reply
    Your comment is awaiting moderation.
    Andy –

    =={ … conservatism tends to elevate reason and argumentation, while Leftist ideology elevates feelings and “safe spaces” which simply shut down argumentation. }==

    There you go. Another subject for your RCT I suggested above. Of course, we all know that David’s own political ideology is purely coincidental to his assertion, but even still it wouldn’t hurt to demonstrate how the data show that he’s overwhelmingly correct about his corollary to your ideas about innate skepticism.

    and

    Joshua | February 22, 2017 at 2:40 pm |
    Your comment is awaiting moderation.
    =={ What group is that?? }==

    Climate skeptics.

    =={ I would agree. You did read the head post, right? }==

    Well, I tried – but admittedly, much of it went over my head…

That said, I think that there is much implied in your arguments along the lines that I’ve described. Although you don’t state it explicitly, and so I could well be wrong, it seems clear to me that you find all kinds of comparatively positive attributes to “skeptics” as a group – including that their opinions re: climate change reflect some kind of “innate” characteristic that distinguishes them from those who have different opinions.

    See my comment in response to David Young below.

    =={ But we all have similar capacity for innate skepticism (I only presume not identically simply because no individuals are identical), and indeed it is *not* a function of any particular group of culture. }==

    Not only capacity, but also manifestation.

=={ It will be expressed where collective deception is correctly detected,… }==

Ah, but the determination that the detection of a “correct” form of “collective deception” is in the eye of the beholders – in contrast to your view that it is disproportionately “correct[ly]” detected among those who agree with you on a particular topic.

    =={ but this is a function of his / her values, not the lack of capability for innate skepticism generally. }==

I disagree. That looks entirely self-serving to me. It isn’t dependent on “values.” It is dependent on how a given individual is aligned ideologically in relation to a particular topic/issue. Those “values” that you describe are not uniform, but shape-shift as suited, in order to protect a sense of identity. Thus, we have Republicans shifting, vis-a-vis their “values”, w/r/t “leaks” or the “insurance mandate” or any number of issues depending on context, and Democrats doing likewise.

  42. Steven Mosher says:

    My brain has been Hi-JAQed.

    I will have to search through some of my old writing on Phenomenology and questions.. sorry not published..

    but there is this

    https://www.jstor.org/stable/2103005?seq=1#page_scan_tab_contents

  43. Steven Mosher says:

    paulskio

    Thanks that was really clear.

  44. Joshua says:

Certainly when teaching, asking questions can distract a student from the task at hand. There is frequently a moment when you can see, from looking at a student’s eyes, that asking them a question serves as a distraction.

    For example:

    T: How do you divide fractions?
S1: Jeez. That is such a simple question. I know that we learned the answer to that when we were in 5th grade. When we were 11 fucking years old. I’m 14 now and I’m sure that all of my classmates remember the answer to that question. I’m clearly an idiot. I’m going to get a bad grade. My parents will be pissed and will dock my allowance. Too bad, because I was going to use that money to make a down payment on that car… so that I could rebuild that engine and add a supercharger to pressurize air intake above normal atmospheric levels to get more air into the engine.

    or:

    T: How do you divide fractions?
S2: Jeez. Such a simple question. It’s amazing that so many of my classmates don’t even know the answer. I’ll raise my hand now so that the teacher will give me a good grade so that I can keep getting A’s so that I can go to Harvard as planned and get with some rich chick who will be able to give me money to buy dope.

  45. Harry Twinotter says:

I am probably stating the obvious. Dr Curry is pushing FUD again. And again she makes a lot of claims without providing any evidence to back up her position. It is disingenuous to criticize a piece of science for being shabby by using an argument that is even shabbier.

  46. Szilard says:

    Thanks, paulskio – nice & succinct on why the “gap” is probably illusory/not important.

I guess my question was narrower & less important: why is there then a gap (to the extent there is) in the outputs of GCMs as they currently stand? I’m thinking of the chart JC reproduces from AR5 as Fig 3 on p10, with the commentary, “However, the climate models do not capture the large warming from 1910 to 1940 …”

  47. Steven Mosher says:

Sz. The current gap is gone. Judith failed to use Hawkins’ latest.

  48. angech says:

    paulski0 says: February 22, 2017 at 12:55 pm

“Summing uncertainty bounds for all components indicates a ~97.5% estimate of 0.46K. The sum of natural causes’ upper 97.5% bound is 0.4K. ANT = 0.13K +/- 0.1 (~17-83% range).”
Does not compute or add up. 0.54 perhaps.

    “applying the AR5 attribution approach to the 1910-1940 period can explain the observed warming over that time as a combination of all factors. 1910-1940 has been chosen because it has an unusually large trend in relation to the wider period – it’s cherry-picked.”

    “Now an estimate for internal variability. Generally models indicate greater likelihood of larger trends over 30-year periods than 60-year periods, so I adopt a greater uncertainty of +/- 0.15K (17-83% range
    1910-1940, IntVar = 0K +/- 0.15 (~17-83% range)
    1951-2010 IntVar = 0K +/- 0.1 (~17-83% range)”

    A trend of 0 over 30 and 60 years is still always a trend of 0. Your definition of a trend in Internal variability is wrong. You are basically stating no trend exists. You are merely claiming a wider margin of error can occur.

    Your estimate of natural variability is equally intriguing.
    You claim a trend, which does not exist, only a variance remember, which is bigger than the variance standard deviations you have assigned to it.
    1951-2010, NAT = 0K +/- 0.1 (~17-83% range)
    -1910-1940, NAT = 0.15K +/- 0.1 (~17-83% range)
Your own comment [tweaked] “Generally models indicate greater likelihood of greater uncertainty over 30-year periods than 60-year periods” should have been applied to give a 0.15 range here as well.
    If you wish to put a standard deviation it must normally be bigger than the deviations that naturally occur.
    You just do not get variations 50% greater than allowed on a one off estimate.

    With the benefit of hindsight you are claiming a positive bias in Natural variability.
    When you know what caused the temperature increase then it is no longer natural variability.
    This term only applies to unknown natural variations.
    Known ones are included in the anomaly baseline, remember, as known forcings.

Basically you are admitting that natural variance and internal variability are trendless but in your figures add up to 0.4K in a positive manner and -0.25K in a negative manner: a range of 0.65K in just 30 years, for what we call causes we do not understand.

    ” and this is completely consistent with strong confidence in anthropogenic contribution >50% for 1951-2010.”
    No, I think your argument has just shot anthropogenic contribution down in 0.65K flames.

  49. angech,

Basically you are admitting that natural variance and internal variability are trendless but in your figures add up to 0.4K in a positive manner and -0.25K in a negative manner: a range of 0.65K in just 30 years, for what we call causes we do not understand.

    I think you’ve done your error propagation incorrectly.

  50. There are valid concerns about a fundamental lack of predictability in the complex
    nonlinear climate system.

    This appears to relate to the fact that the system is non-linear and, hence, chaotic. Well, that it is chaotic does not mean that it can vary wildly; it’s still largely constrained by energy balance.

    I think you’re talking past one another by considering two different aspects.
At the tropopause, in the global mean, I do believe the general warming of the atmosphere, implied by radiative forcing (RF), is predictable. That’s because the increase in RF occurs largely independently of the variations of clouds and water vapor below. So the RF is (mostly) independent of climate fluctuation, as I found by running a radiative model on a sample atmosphere across seasons.

However, the chaotic fluctuation of fluid flow does mean that it ( climate, meaning, temperature, precipitation, cloudiness, storminess, winds, etc. ) can ( and does ) vary wildly internally. The physics of fluid flow, expressed in the variation of model results, indicates this:

The distinction is between global mean temperature, which is largely predictable, and climate for a point, region, or continent, which is largely unpredictable.

    Now, within that statement, there is wiggle room.

    Global mean temperature is largely predictable, but some amount of global energy balance may change from changes in fluid flow ( cloudiness, sea ice, etc. ). And, of course, volcanoes, asteroids, solar farts, etc. may also intervene unpredictably.

Climate is largely unpredictable, but some things are likely to remain predictable. The Namib desert is thought to have existed for 60 million years. That includes all ilk of glacial, interglacial and many other influences. But the geology seems to have dominated the fact of the Namib, and so the Namib will likely remain predictably a desert for the next 100 years. The effect of orbits, oceans, and mountains will not change significantly, so those effects will also likely persist.

    But the chaotic fluid flow can and will persist, regardless of global mean temperature, and impose unpredictable variation for the next century ( and to perpetuity ). It’s in the physics.

  51. TE,

    However, the chaotic fluctuation of fluid flow does mean that it ( climate, meaning, temperature, precipitation, cloudiness, storminess, winds, etc. ) can ( and does ) vary wildly internally.

Yes, but if we’re thinking of global averages, then it can’t vary wildly. The whole point is that we think that we can constrain how the climate (averaged over a reasonable timescale and region) will respond to changes in external forcing. We can’t, however, predict the weather. You should read Chris Colose’s comment here

    At the regional level, there is more noise, depending on the statistic and variable considered. It turns out that if you run a model 50 times with just a 0.00000000000001 degree change in the initial conditions, you can produce a different 50yr temperature trend at a single gridbox (which might be ~200 km on a side for a global model), such that maybe 43 such “ensemble members” produce warming at that location, 5 produce very little change, and 2 produce some cooling. The signal will better emerge from the noise for greater CO2 forcing or as one zooms out to start covering more grid boxes.

The distinction is between global mean temperature, which is largely predictable, and climate for a point, region, or continent, which is largely unpredictable.

You’re confusing climate and weather, IMO. There is, as far as I’m aware, no formal claim that we can use GCMs to predict the precise climate state in a small region at some point in the future, given a certain change in external forcing. The claim is that if you consider a large enough region and a sufficiently long timescale (decades) you can determine how some change in forcing will probably influence the typical climate state.

    But the chaotic fluid flow can and will persist, regardless of global mean temperature, and impose unpredictable variation for the next century ( and to perpetuity ). It’s in the physics.

Yes, no one says it won’t. Again, you need to define what region and timescale you mean. That we can’t predict the climate in a small region of the globe in 2100, doesn’t mean we can’t make an estimate for how some change in forcing will influence the climate state, given a suitable region/timescale over which to average.

    Seriously, if you think that we shouldn’t use GCMs to inform policy because they are incapable of predicting the precise climate state in some small region, and a specific time in the future, then you’re essentially arguing for perfect knowledge before we can make any decisions. Since that would seem to be impossible, we should never make any decisions.
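Chris Colose’s point about a forced signal emerging from gridbox noise can be illustrated with a toy Monte Carlo sketch. The trend and noise magnitudes below are invented for illustration, and the noise is crudely treated as independent between gridboxes (real spatial correlation would slow the averaging-out):

```python
# Toy illustration (not a GCM): every "ensemble member" shares the same
# forced trend, plus internal variability that is large at a single gridbox
# but partially averages away over many gridboxes.
import random

random.seed(0)
FORCED_TREND = 0.15     # K/decade, assumed forced signal (illustrative)
GRIDBOX_NOISE = 0.25    # K/decade, internal-variability spread at one gridbox
N_MEMBERS = 50

def member_trend(n_boxes):
    """Trend averaged over n_boxes gridboxes for one ensemble member.
    Noise is (crudely) independent between boxes, so it shrinks ~ 1/sqrt(n)."""
    noise = sum(random.gauss(0.0, GRIDBOX_NOISE) for _ in range(n_boxes)) / n_boxes
    return FORCED_TREND + noise

for n_boxes in (1, 100):
    trends = [member_trend(n_boxes) for _ in range(N_MEMBERS)]
    cooling = sum(t < 0 for t in trends)
    print(f"{n_boxes:4d} gridboxes: {cooling}/{N_MEMBERS} members show cooling")
```

At a single gridbox a fair fraction of members show cooling despite the identical forced trend; averaged over many gridboxes, essentially none do.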

  52. TE- of what relevance is the display of regional charts when you’re talking global?

  53. Willard says:

    > There is, as far as I’m aware, no formal claim that we can use GCMs to predict the precise climate state in a small region at some point in the future, given a certain change in external forcing.

Depends what you mean by “formal”, AT. Senior claims something like this all the time. To the point that I’ve started to call it the meteorological fallacy.

    Even my SwiftKey autocorrect knows it.

  54. Formal was just meant to suggest that there are not, for example, peer-reviewed papers making such claims.

  55. The Very Reverend Jebediah Hypotenuse says:

    TE:

    Climate is largely unpredictable, but some things are likely to remain predictable.

    Exactly backwards.

    http://ipcc.ch/report/graphics/index.php?t=Assessment%20Reports&r=AR5%20-%20Synthesis%20Report&f=Topic%202


    There is, as far as I’m aware, no formal claim that we can use GCMs to predict the precise climate state in a small region at some point in the future, given a certain change in external forcing.

    There is no formal claim that we can use probability calculus to predict the precise winnings at one of the craps tables in a casino.

    And yet casinos almost always make money.
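The casino point is just the law of large numbers. A minimal sketch, using a simplified even-money bet with the pass-line win probability rather than full craps rules: any single bet is a coin flip, but the per-bet average over many bets converges on the house edge:

```python
# One bet is unpredictable, but the average over many bets converges on
# the house edge. Simplified even-money bet at the pass-line win
# probability (244/495); full craps rules are not simulated.
import random

random.seed(42)
P_WIN = 244.0 / 495.0   # ~0.4929, so the house edge is ~1.41%

def player_result(n_bets):
    """Net player winnings, in betting units, after n_bets even-money bets."""
    return sum(1 if random.random() < P_WIN else -1 for _ in range(n_bets))

print("one bet:        ", player_result(1))            # +1 or -1: a coin flip
print("a million bets: ", player_result(10**6) / 1e6)  # per-bet mean ~ -0.014
```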

  56. You’re confusing climate and weather, IMO.
What is depicted by the NCAR runs above is larger than continental scale and for half of a century, so the analysis of unpredictability is of climate, not weather ( though climate is weather over a duration ).

    What may be more important than temperature ( the color shadings ) in the model runs, is the pressure field anomaly ( line contours in black ). The pressure field determines and is a reflection of low and high pressure cells passing ( bringing storms or fair weather ) over the climatic period indicated.

    There is, as far as I’m aware, no formal claim that we can use GCMs to predict the precise climate state in a small region at some point in the future, given a certain change in external forcing.

    Unfortunately, the IPCC and other organizations did try to make ‘regional assessments’ of what they thought would happen with increased GHGs. Things such as precipitation ( and the lack of precipitation expressed as drought ), storms, cloudiness, and even temperature, are probably not predictable.

    As I indicated above, however, things aren’t absolute. Some aspects may be predictable to an extent. But probably not the aspects determined by fluid motion, which includes most of climate.

  57. Willard says:

    I always presumed that when Senior claims modulz should get local stuff right, they can:

    For regional downscaling (and global) models to add value (beyond what is available to the impacts community via the historical, recent paleorecord and a worst-case sequence of days), they must be able to skillfully predict changes in regional weather statistics in response to human climate forcings.

    http://onlinelibrary.wiley.com/doi/10.1029/2012EO050008/pdf

  58. paulski0 says:

    angech,

    Does not compute or add up. 0.54 perhaps.

    It’s possible I’ve got sums wrong. How did you get 0.54?

    A trend of 0 over 30 and 60 years is still always a trend of 0. Your definition of a trend in Internal variability is wrong. You are basically stating no trend exists. You are merely claiming a wider margin of error can occur.

    You appear to have missed the introduction to that part: “Now an estimate for internal variability. In this case we’re not really trying to estimate what was, but rather what potentially could happen over any 30-year period.”

    The best estimate is zero because, on average, the influence of internal variability will be zero. The AR5 authors assume that we don’t know the magnitude or sign of internal variability over a given period so the potential influence of internal variability is represented as an uncertainty range around zero.

    In the case of 1910-1940 there are actually some decent arguments that internal variability was likely a warming factor. However, the exercise here was to replicate the AR5 attribution approach so I’ve retained their method of representing internal variability.

    You are merely claiming a wider margin of error can occur.

    Yes, that’s how you represent a greater likelihood of higher trend when the best estimate is zero.

    You claim a trend, which does not exist, only a variance remember, which is bigger than the variance standard deviations you have assigned to it.
    1951-2010, NAT = 0K +/- 0.1 (~17-83% range)
    -1910-1940, NAT = 0.15K +/- 0.1 (~17-83% range)

    You seem a bit confused here. NAT and IntVar are entirely independent. NAT is the response from natural forcing (i.e. solar + volcanic), IntVar is Earth system unforced variability.

    What you’re showing there is the NAT – natural forced response – estimates. Different periods will have different NAT estimates because of the timing of solar and volcanic variability. NAT over 1951-2010 has a zero best estimate because there is no solar or volcanic trend over that particular period. 1910-1940 has a best estimate of 0.15K warming because there is a positive trend in both solar and volcanic terms over that particular period.

    With the benefit of hindsight you are claiming a positive bias in Natural variability.

    It’s with the benefit of reconstructions of historical solar and volcanic activity.

    When you know what caused the temperature increase then it is no longer natural variability.

    What?

    Known ones are included in the anomaly baseline, remember, as known forcings.

    I have no idea what this is meant to mean.

  59. Eli Rabett says:

Ah yes, we pass into “Engineering Level Reports”. Thought Bob Grumbine had killed that crap four years ago.

  60. TE,

What is depicted by the NCAR runs above is larger than continental scale and for half of a century, so the analysis of unpredictability is of climate, not weather ( though climate is weather over a duration ).

Except that the largest variability is on scales smaller than continental. As far as I’m aware, that variability is potentially real (i.e., they are all plausible outcomes given the range of possible initial conditions and the imposed change in external forcing). Clearly on small scales there can be large variability; that’s not in dispute. However, as you increase both the region considered and the timescale over which you average, you would expect this variability to reduce. Also, as I understand it, one of the suggestions in the paper you’re using is that one should use this type of ensemble to try and understand when you would expect a forced signal to emerge.

    Unfortunately, the IPCC and other organizations did try to make ‘regional assessments’ of what they thought would happen with increased GHGs. Things such as precipitation ( and the lack of precipitation expressed as drought ), storms, cloudiness, and even temperature, are probably not predictable.

    There’s nothing wrong with trying (the alternative is ignorance). Did they claim that they could produce definitive predictions, or did they (as one would expect) present a range of possible outcomes?

  61. Something else to bear in mind about TE’s figure (or the one he’s included in an earlier comment) is that it is only December-January-February.

  62. The Very Reverend Jebediah Hypotenuse says:

    Willard,
    Most of the issues raised by both Judy’s and Senior’s claims have NOT been scrutinized.
    Perhaps your concerns are just the beginning of some truly epic scrutiny.

  63. izen says:

    @-TE
    “…are probably not predictable.”

    How probable? Better than 50-50 that a future prediction will be wrong?
    What odds do you think you would get for a bet that the 2030s in the CONUS49 will be warmer than now, or cooler?

    @-“Some aspects may be predictable to an extent. But probably not the aspects determined by fluid motion, which includes most of climate.”

    Wrong.
Fluid motion is determined by the thermodynamics driving it. It determines HOW that energy flow is expressed in terms of weather: local, regional and short-term variation. Specific conditions are determined by fluid flow, boundary conditions by thermo.

    Fluid motion includes most of the weather, the local and temporal variation in the climate. But it is a dependent variable on the climate driven thermodynamics. That is evident from paleo changes, Greenland shows big local variation, but within the boundary of the glacial cycles.

  64. izen says:

@-“Something else to bear in mind about TE’s figure is that it is only December-January-February.”

So TE picked the 25% of the year which is known to have much greater variability than the rest of the year. Annual range for the US49 is around 5F. Winter range is around 10F.

But (ironically?) the instrumental record shows that it is also the seasonal record that, despite the larger variation, most clearly shows the underlying trend: the climate forcing emerging from the weather variation.

  65. Hyperactive Hydrologist says:

This paper explores rainfall projections in CMIP5 models. The results show a pretty consistent picture: it is going to get drier.

    http://link.springer.com/article/10.1007/s10584-015-1575-z

    Water scarcity is critical in both Portugal and Spain; therefore, assessing future changes in rainfall for this region is vital. We analyse rainfall projections from climate models in the CMIP5 ensemble for the transnational basins of the Douro, Tagus and Guadiana with the aim of estimating future impacts on water resources. Two downscaling methods (change factor and a variation of empirical quantile mapping) and two ways of analysing future rainfall changes (differences between 30 years periods and trends in transient rainfall) are used. For the 2050s, most models project a reduction in rainfall for all months and for both methods, but there is significant spread between models. Almost all significant seasonal trends identified from 1961 to 2100 are negative. For annual rainfall, only 3 (2) models show no significant trends for the Douro/Tagus (Guadiana), while the rest show negative trends up to −6 % per decade. Reductions in rainfall are projected for spring and autumn by almost all models, both downscaling methods and both ways of analysing changes. This increases the confidence in the projection of the lengthening of the dry season which could have serious impacts for agriculture, water supply and forest fires in the region. A considerable part of the climate model disagreement in the projection of future rainfall changes for the 2050s is shown to be due to the use of 30 year intervals, leading to the conclusion that such intervals are too short to be used under conditions of high inter-annual variability as found in the Iberian Peninsula.

This paper explores rainfall projections in CMIP5 models. The results show a pretty consistent picture: it is going to get drier.

    So, why would you believe that to be true?

    Precipitation occurs from discrete events, typically involving low level convergence which creates lift, which creates condensation which precedes precipitation.

    But these discrete events are not predictable beyond seven days. Longer term means of these events are similarly unpredictable.

As the graphic above indicates, there is an infinite array of equally valid circulation patterns that can occur, regardless of global temperature, which change precipitation.

Drier in Mediterranean climates, but globally more energy is available for evaporation and what goes up must come down.

  68. TE,

    So, why would you believe that to be true?

That wasn’t really the point.

As the graphic above indicates, there is an infinite array of equally valid circulation patterns that can occur, regardless of global temperature, which change precipitation.

    There may be an infinite array, but they can’t occupy all regions of parameter space. It’s the latter aspect that you seem to be ignoring.

  69. “There is, as far as I’m aware, no formal claim that we can use GCMs to predict the precise climate state in a small region at some point in the future, given a certain change in external forcing.”

    my sense is most folks recognize the OPPOSITE to be the case.

See the good reverend’s casino analogy.

  70. Willard says:

> there is an infinite array of equally valid circulation patterns that can occur

    Just wait until GCMs get powered by improbability drive motors:

  71. Willard says:

    > Most of the issues raised by both Judy’s and Senior’s claims have NOT been scrutinized.

One issue that seems NOT to have been scrutinized is Do regional climate models add value compared to global models?

  72. angech says:

    Thanks for reply
paulski0 says February 23, 2017 at 5:36 pm: “How did you get 0.54?”
“The sum of natural causes’ upper 97.5% bound is 0.4K. ANT = 0.13K +/- 0.1.”
    Mea culpa. Misread the 0.1 as 0.01, perhaps should be 0.63K.

    “NAT and IntVar are entirely independent.”
    Defined independently.
    but internal variability is the unknown way that the system reacts to all external and internal forcings plus any unknown internal and external forcings.
    An increase or decrease in Nat must have a definite link to IntVar even if we do not have it yet.

“NAT is the response from natural forcing (i.e. solar + volcanic), IntVar is Earth system unforced variability.” Agree.
But you give a definition for increasing the IntVar range when you shorten the time interval:
“an estimate for internal variability. Generally models indicate greater likelihood of larger trends over 30-year periods than 60-year periods, so I adopt a greater uncertainty of +/- 0.15K (17-83% range) for 1910-1940, centered on zero.”
By this logic it must also apply to the NAT over the shorter time period and also, I just realized, for the ANT = 0.13K +/- 0.1 [increase to +/-0.13].
    It is simply not good enough to say
    “For simplicity I’ll adopt the same uncertainty, so 0.13K +/- 0.1 (17-83% range).”
Mind you, the increased ranges probably both help and hinder your argument.

  73. angech says:

    …and Then There’s Physics says:
“Basically you are admitting that natural variance and internal variability are trendless but in your figures add up to 0.4K in a positive manner and -0.25K in a negative manner: a range of 0.65K in just 30 years, for what we call causes we do not understand.”
    “I think you’ve done your error propagation incorrectly.”
    I hate writing down figures.
    NAT = 0.15K +/- 0.1 (~17-83% range)
    IntVar = 0K +/- 0.15 (~17-83% range)
Generally models indicate greater likelihood of larger trends over 30-year periods than 60-year periods, giving greater uncertainty of +/- 0.15K (17-83% range) for 1910-1940, centered on zero.
Paulskio gave an uncertainty range of +/-0.25K for the combined unknowns when he should have given +/-0.30K.
This means the error propagation is 0.50K by his figures and 0.60K by what he should have used.
The 0.65K was derived by adding the 0.15K of attributed warming, which was not part of the uncertainty range, to the Paulskio figure. Removing 0.15K does not mean the uncertainty in the NAT is changed. It does not affect the conclusion, which is that these are very large error ranges over a very short time.
Which weakens the conclusion that it is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together. Also meaning a physically plausible, and consistent, scenario under which more than 50% of the warming is not anthropogenic might be conceivable.

  74. anoilman says:

    Ben Santer… ROGUE SCIENTIST!

  75. angech,

    Paulskio gave an uncertainty range of +/_0.25K for the combined unknowns when he should have give +/_0.30K .

    I don’t see where Paul did this. How are you doing your error propagation? I get Nat + IntVar to be 0.15 +/- 0.18K.

  76. AoM,
    Thanks, I watched that. It’s a good interview.

  77. anoilman says:

    Willard: I wouldn’t mention the Improbability Drive. Pretty much all technology today requires a lot of random guesses. I’ve mentioned this before, but we use simulated annealing to solve incalculable (no proof completeness) problems, like designing circuit boards and silicone wafers. So pretty much everything we own is probably just a ‘good guess’.
    https://en.wikipedia.org/wiki/Simulated_annealing

    I do find it funny that the rather large denial community gets all upset at the science used in global warming, but doesn’t seem to be concerned about all the other things their lives depend on.

    No one seems to question whether nuclear submarines work… which is just plain hilarious.
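    To make the annealing point concrete, here is a toy sketch of the technique anoilman describes: a simulated-annealing minimiser applied to a bumpy one-dimensional objective. All names and parameters are illustrative, not taken from any real EDA tool.

    ```python
    import math
    import random

    def anneal(f, x0, steps=20000, t0=2.0, seed=0):
        """Minimise f by simulated annealing: always accept improvements,
        accept worse moves with probability exp(-delta/T), cool T geometrically."""
        rng = random.Random(seed)
        x, fx = x0, f(x0)
        best, fbest = x, fx
        for i in range(steps):
            t = t0 * (0.999 ** i)            # geometric cooling schedule
            cand = x + rng.gauss(0, 0.5)     # random neighbouring state
            fc = f(cand)
            if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
                x, fx = cand, fc             # move accepted
            if fx < fbest:
                best, fbest = x, fx          # track best state seen so far
        return best, fbest

    # Bumpy 1-D objective: global minimum near x = 3, many local minima
    f = lambda x: (x - 3) ** 2 + 0.3 * math.sin(10 * x)
    x_best, f_best = anneal(f, x0=-5.0)
    ```

    The random acceptance of worse moves early on (high temperature) is what lets the search escape local minima; as the temperature cools, it behaves more and more like greedy descent. That is the sense in which the result is a "good guess" rather than a proof.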

  78. “Simulated annealing to solve incalculable (no proof completeness) problems, like designing circuit boards and silicone wafers. So pretty much everything we own is probably just a ‘good guess’.”

    Thanks for reminding me of those examples.

  79. paulski0 says:

    angech,

    “NAT and IntVar are entirely independent.”
    Defined independently.

    And that’s how AR5 defined them, which is the only thing that matters when evaluating the AR5 attribution process. The point here is to apply the AR5 attribution process to a different period as a robustness check. It would therefore be very wrong to adopt a different definition for the terms used.

    internal variability is the unknown way that the system reacts to all external and internal forcings plus any unknown internal and external forcings.

    No, the AR5 estimate for internal variability is purely an estimate of internal variability. Potential for unknown external forcings is represented by the authors extending uncertainty ranges for ANT and NAT.

    An increase or decrease in Nat must have a definite link to IntVar even if we do not have it yet.

    Maybe, but they are treated as entirely separate matters by the AR5 attribution process so this is an irrelevant argument in this context.

    By this logic it must also apply to the NAT over the shorter time period

    No, they’re independent. There’s no reason why one would affect the other.

    It is simply not good enough to say
    “For simplicity I’ll adopt the same uncertainty, so 0.13K +/- 0.1 (17-83% range).”

    I adopted that range for simple consistency with AR5. A more thorough analysis might also scale the ANT uncertainty relevant for the time. My guess would be that the 1910-1940 uncertainty is more likely to be a bit narrower than +/-0.1 due to smaller forcing meaning less spread in response.

    Mind you the increased ranges probably helps and hinders your argument

    I think you’re confused about the argument here. This is about whether the 1951-2010 attribution process could also explain the 1910-1940 warming. What I’ve done doesn’t, and cannot by definition, change anything about the 1951-2010 attribution numbers.

  80. angech says:

    “How are you doing your error propagation. I get that Nat + IntVar to be 0.15 +- 0.18K.”
    If using Paulskio’s baseline Nat as 0.15 you have to use a Nat variation of +/- 0.10 and an IntVar of +/- 0.15 combined. That equals 0.25 variance either way, for a 0.50K sum.
    I still feel that “Generally models indicate greater likelihood of larger trends over 30-year periods than 60-year periods, giving* greater uncertainty” means he should apply this rule to all uncertainties in the shorter range.
    Happy to leave it at that and stop my carping.

  81. angech,

    If using Paulskio’s baseline Nat as 0.15 you have to use a Nat variation of +/- 0.10 and an IntVar of +/- 0.15 combined. That equals 0.25 variance either way, for a 0.50K sum.

    If they’re independent, you combine them in quadrature. The combined error is the square root of the sum of the squares of the independent errors, not simply the sum of the individual errors.
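    With the numbers in play in this thread, adding in quadrature rather than linearly gives (a quick stdlib check):

    ```python
    import math

    # Independent 17-83% uncertainty ranges quoted in the thread (in K)
    nat_err = 0.10     # uncertainty on NAT
    intvar_err = 0.15  # uncertainty on internal variability

    # Quadrature: square root of the sum of squares, not a straight sum
    combined = math.sqrt(nat_err ** 2 + intvar_err ** 2)
    print(round(combined, 2))  # 0.18, not 0.10 + 0.15 = 0.25
    ```

    which reproduces the 0.15 +/- 0.18K figure above.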

  82. The Very Reverend Jebediah Hypotenuse says:

    Actually – We could employ the Infinite Improbability Drive to NOT NOT scrutinize all of Judy’s and Senior’s claims simultaneously.

    However, if the Drive runs intuitionist logic, the double negation could possibly result in a long layover at the House Committee on Science, Space, and Technology – or maybe InfoWars.


    No one seems to question whether nuclear submarines work… which is just plain hilarious.

    Forget annealing and nuclear reactor physics – No one even really understands how the double-slit experiment works…

  83. “If they’re independent, you combine them in quadrature.”

    First year at university, first trimester: that is all it takes to be able to do some simple physics lab work like this. These are the people claiming to be convinced that scientists are wrong and stupid. The Donald would say: Sad.

  84. angech says:

    Thanks ATTP.
    One learns something every day.
    I doubt internal variability affects natural variation, but I am pretty sure natural variation strongly affects internal variability, so I would argue that they are not independent as you assert.
    Thanks Victor for the encouragement to thinking.

  85. angech,

    I doubt internal variability affects natural variation, but I am pretty sure natural variation strongly affects internal variability, so I would argue that they are not independent as you assert.

    That sounds very confused to me. In this context, natural variation is things like volcanoes and solar, while internal variability is things like ENSO/PDO/AMO. If you’re suggesting that variations in volcanoes and solar strongly influence internal variability, then you’re making a remarkable assertion that I think many regard as not correct.

  86. JCH says:

    Natural variation that can bend spoons… that’s on the table.

  87. angech says:

    “If you’re suggesting that variations in volcanoes and solar strongly influence internal variability, then you’re making a remarkable assertion that I think many regard as not correct.”
    All forcings feed into the weather patterns. When and how they occur, and to what extent, depends entirely on how much forcing is being supplied at the critical times. Happy to argue this point till the cows come home. Would expect support from some of the climatologists here.
    If this did not occur then there would be no problem from AGW in the first place.
    Will investigate further.

  88. angech,

    All forcings feed into the weather patterns. When and how they occur, and to what extent, depends entirely on how much forcing is being supplied at the critical times.

    Yes, but there is a difference between influencing the pattern and influencing the magnitude of some other process. We don’t expect relatively small changes in solar/volcanic forcing to substantially influence the magnitude of internally-driven cycles.

  89. Curry writes: “There is growing evidence that climate models predict too much warming from increased atmospheric carbon dioxide.” Hmmm.

    Why does Curry ignore:
    * The growing evidence that even modest warming creates far more weather chaos and cryosphere melting than projected?
    * What about assessing our society’s general unpreparedness in the face of any warming and its accompanying climate change? (Infrastructurally as well as intellectually.)

    I keep getting the image of a bunch of leaders hunkered down, heads huddled around the table, arguing over the models and data streams, sweating every tiny detail and squiggle – obsessing over the map.
    All the while, outside, the storm is gathering and nothing is being done to prepare
    (… our infrastructure, or our intellectual appreciation for down-to-earth physical realities)
    because cowards are hiding behind unrealistic expectations of exactitude. So sad.

    Moral of the story: when will “we” learn to appreciate that the map is not the territory?
    Regarding manmade global warming driven climate change, the territory is telling us plenty.

  90. Pingback: The feedback paradox | …and Then There's Physics

  91. Pingback: Lamar Smith’s Show Trial for Climate Models – Alternative Facts Wetware™

  92. Pingback: A Report on the State of the Arctic in 2017 | The Great White Con

  93. Pingback: Political activism | …and Then There's Physics

  94. Pingback: Altro consenso sul clima - Ocasapiens - Blog - Repubblica.it

  95. Pingback: Le origini dell'AGW per far prevalere il nucleare sul carbone? | NoGeoingegneria

  96. Martin Zumstein says:

    “Simulated annealing to solve incalculable (no proof completeness) problems, like designing circuit boards and silicone wafers. … So pretty much everything we own is probably just a ‘good guess’.”
    This is an interesting remark. But consider this:
    1. Simulated annealing hopefully terminates close to a global minimum (or maximum) if it is well tuned and tested on similar cases. So it may not be perfect, but nearly so.
    2. The devices produced using such calculations are hopefully well verified and tested before they are used.
    I doubt whether 1 and 2 apply to global climate models and their application in public policies.

  97. Martin Zumstein says:

    @citizenschallenge.
    You write: “All the while, outside the storm is gathering and nothing is being done to prepare.”

    I completely agree. The question is WHAT SHOULD BE DONE? I am afraid a lot is being done, but perhaps a lot of it is wrong. For example, look at Germany’s “Energiewende.” They have increasing reliance on coal (since the nuclear power decision) and increasing carbon emissions, if that is the right criterion. Meanwhile, German electricity costs three times more than in other countries, which hurts poor people most.

  98. Martin Zumstein says:

    @paulskio.

    You write:
    “Does she realise that these beliefs are basically contradictory? Results from such simplistic models should be meaningless to someone who believes climate is fundamentally unpredictable.”

    I think she does believe in the predictability, because she has started a business predicting climate.

    She just criticises the current state of the art in climate modelling. I hope she does hers better…

  99. Martin,
    This thread has been largely inactive since February 2017.

  100. Martin Zumstein says:

    Here is my general comment about modelling. I would like to draw attention to the work of Prof. Lüdecke in Germany. He just fits the temperature series by Fourier analysis. No climate modelling, no physics at all, at least not much. Just Fourier analysis.

    So, as I said, this is very far from any climate modelling, but it very nicely demonstrates how well you can fit ANY DATA if you take ANY MODEL and use ENOUGH PARAMETERS. And surprisingly, he does not use many parameters. You might argue that these are ocean cycles or solar activity cycles, but it could be anything.

    Only the future will prove who is right…

    Correlation is no proof of causality.
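    Martin’s point that enough sinusoids will fit ANY DATA can be made concrete: with as many Fourier coefficients as data points, the fit is exact no matter what the data are, which is exactly why goodness of fit alone proves nothing about the underlying physics. A stdlib sketch, with purely illustrative data:

    ```python
    import cmath
    import random

    random.seed(1)
    y = [random.uniform(-1, 1) for _ in range(16)]  # arbitrary "temperature" series
    n = len(y)

    # One Fourier coefficient per data point (a discrete Fourier transform)
    coef = [sum(y[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n)) / n
            for k in range(n)]

    # The "model": a sum of sinusoids rebuilt from those coefficients
    fit = [sum(coef[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real
           for t in range(n)]

    # The fit reproduces the random data essentially exactly
    max_err = max(abs(a - b) for a, b in zip(y, fit))
    ```

    The input here is pure noise, yet the sinusoidal "model" matches it to floating-point precision; fit quality says nothing about whether the cycles are real.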

  101. Martin,

    Correlation is no proof of causality.

    Yes, obviously. What is your point?

  102. dikranmarsupial says:

    Martin, so why is it that no climate skeptic has taken the code for a GCM and tuned the parameters (without them taking on values that are physically implausible) and demonstrated that past climate can be explained without CO2 being a greenhouse gas? It should be possible if any model can fit any data given sufficient parameters.

    I am a statistician, and so much prefer physics to statistics.

  103. izen says:

    At the risk of reanimating a zombie thread…

    There is a discussion on YT between Dr. Michael Mann, Dr. David Titley, Dr. Patrick Moore and Dr. Judith Curry at Charleston, West Virginia, June 12, 2018.

    One ‘skeptical’ comment was that it seemed a little unfair, 3 against 1, but at least they let the ‘skeptic’ go last. That was Dr Patrick Moore. Apparently JC admitting that 50% of the warming could be man-made puts her on the wrong side for some lay people.

  104. Martin Zumstein says:

    [Your necromancy has reached diminishing returns, Martin. Thank you for your concerns. -W]

  105. Reading this old thread come back to life, amusing to see reference to “silicone wafers”.

  106. Harry Twinotter says:

    Was someone testing a new robot reply program? They work better on more recent posts 🙂

  107. Pingback: Grandi progressi, ma… – L'archivio di Oca Sapiens

  108. Pingback: Altro consenso sul clima – L'archivio di Oca Sapiens
