Doubling down?

I wrote a post a little while ago commenting on a Sabine Hossenfelder video suggesting that she was now worried about climate change because the Equilibrium Climate Sensitivity (ECS) could be much higher than most estimates have suggested. I wasn’t too taken with Sabine’s arguments, and there were others who were also somewhat critical.

Sabine has since posted a response to the various reactions. I think this response is rather unfortunate and doesn’t really engage with the criticisms of her earlier video. She suggests that Andrew Dessler and Zeke Hausfather have lost touch with reality because they say:

Arguments over ECS are distractions. Whether it’s 3C or 5C is a bit like whether a firing squad has 6 riflemen or 10.

It might be a bit flippant, but I think they’re probably just being realistic. Whatever the ECS, the goal will be to rapidly decarbonise our societies, and the rate at which we do so will probably be determined more by societal and political factors than by whether the ECS is 3°C or 5°C.

Sabine then goes on to criticise those who highlight that there are many lines of evidence and that we shouldn’t focus too much on individual studies. Sabine argues that she is making a different point and suggests that climate scientists are suffering from confirmation bias: the high-ECS ‘hot’ models have already been used in IPCC reports, and arguing now that climate sensitivity should be used to screen out models implies an unjustified bias against the possibility that the ECS could actually be as high as these models suggest.

Essentially, once we’ve started collecting data and doing some analysis, we shouldn’t then change how the data is used, or modify the analysis, simply because the results aren’t consistent with previous expectations. However, it isn’t quite this simple. This is an ensemble of models developed to try to understand the physical climate system.

We can look at how well these models compare with observations. The ‘hot’ models tend to agree poorly with historical temperatures and struggle to reproduce the last glacial maximum. If we select models based on their transient climate response (TCR), the remaining ensemble does a better job of matching observations. So, the argument for screening models isn’t simply that they have a higher ECS than might be expected.
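To make the screening idea concrete, here’s a minimal sketch of what selecting on TCR might look like. The model names and TCR values below are made up for illustration; the only real number is the AR6 assessed likely range for TCR of roughly 1.4°C to 2.2°C. In practice, studies tend to weight models rather than simply discard them, but the principle is the same.

```python
# Toy sketch of screening a model ensemble by TCR. The ensemble is
# hypothetical; only the assessed likely range is taken from AR6.
tcr_values = {          # transient climate response, degrees C
    "model_a": 1.6,
    "model_b": 1.9,
    "model_c": 2.7,     # a 'hot' model
    "model_d": 1.3,
}

TCR_LIKELY = (1.4, 2.2)  # AR6 assessed likely range for TCR

screened = {name: tcr for name, tcr in tcr_values.items()
            if TCR_LIKELY[0] <= tcr <= TCR_LIKELY[1]}

print(screened)  # {'model_a': 1.6, 'model_b': 1.9}
```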

Of course, Sabine is correct that we can’t actually rule out high ECS values. The latest IPCC report says that the “best estimate of ECS is 3°C, the likely range is 2.5°C to 4°C, and the very likely range is 2°C to 5°C”. This certainly doesn’t rule out an ECS between 4°C and 5°C and doesn’t even entirely rule out values of 5°C and above, even if it suggests these are very unlikely.
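Just to illustrate what such an assessment can imply, here’s a rough sketch that fits a lognormal distribution to these numbers and asks how much probability sits in the high tail. The lognormal form is my assumption, not anything the IPCC specifies, and the fit is crude (it won’t match both ends of the very likely range exactly).

```python
# Fit a lognormal to the AR6 ECS assessment (best estimate 3°C,
# very likely range 2-5°C, read here as a central 90% interval)
# and compute tail probabilities. The distributional form is an
# assumption; the IPCC does not specify one.
import numpy as np
from scipy import stats

median, low, high = 3.0, 2.0, 5.0                   # degrees C
sigma = (np.log(high) - np.log(low)) / (2 * 1.645)  # match the 90% span
ecs = stats.lognorm(s=sigma, scale=median)

print(f"P(ECS > 4°C) ~ {1 - ecs.cdf(4.0):.2f}")     # roughly 0.15
print(f"P(ECS > 5°C) ~ {1 - ecs.cdf(5.0):.2f}")     # a few percent
```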

Given that the highest risk is from the low-probability high-impact events, it seems entirely reasonable to be particularly concerned about the possibility that the ECS is something like 5°C, or higher. None of the information presented by climate scientists has ever really suggested that people shouldn’t do so. However, in general, the broader societal response has not been focussed on this possibility. I doubt that this is going to change anytime soon, and it’s certainly not because climate scientists have failed to highlight the potential risks associated with global warming and climate change.


76 Responses to Doubling down?

  1. A point I was going to make in the post, but then forgot about, is that the ECS is formally defined in terms of fast feedbacks only. There are other slower feedbacks that can also amplify warming, but are generally regarded as operating on timescales that aren’t all that relevant when it comes to how we should respond. However, this could be wrong and it may be that we end up warming more than typical ECS estimates imply, even if the ECS is in line with expectations.

  2. “the highest risk is from the low-probability high-impact events” seems to be the right yardstick to help talk to the general public about climate change. The arguments about ECS or zero or net zero, etc. just create confusion for a lot of readers who are trying to understand the pressure to do something to reduce the emission load being carried in the atmosphere (and, by association, the emission load accumulating in the oceans).

    As the total impact of low probability, high-impact events builds, it should be possible to organize that overall impact and present it in a manner that demonstrates the need to address the driver of these events. The driver of these increases is global warming. If people want to argue whether that is the case, it is not a good expenditure of time to join that argument imo.

    I think Sabine is wobbling a bit.

  3. Chubbs says:

    Agree with the points made in the blog, but also think Sabine is raising a valid point, just not hitting the target. Climate scientists haven’t ruled out high climate sensitivity, but society has. Sabine mentions the true bogeyman at the start of her first video. Her climate videos, no matter what is said, get a poor response. Most people don’t want to think about climate change, much less prepare for the worst case. Hence, the wry comment by Hausfather and Dessler. Our current climate plans are inadequate for 3C. We are betting the house on a favorable dice roll.

  4. Just Dean says:

    Most people don’t want to think about climate change and very few probably understand what ECS is or stands for – even Google thinks ECS means “Elastic Container Service.” At the end of the day people want to or need to appreciate how climate change is going to affect their lives and/or the lives of future generations. Even then it is hard to convince some people to care. As an example, at Noah Smith’s blog I tried to get a skeptic’s attention by pointing to the sea level rise the last time our atmospheric CO2 level was this high, i.e., 3 million years ago – between 16 and 82 feet higher according to climate.gov, https://www.climate.gov/news-features/understanding-climate/climate-change-atmospheric-carbon-dioxide , or 56 feet (17 m) according to this paper by Dr. Jessica Tierney et al., https://www.science.org/doi/10.1126/science.aay3701 .

    Here was his parting shot, “Even if an 82 foot increase in sea level happens by the year 3000, no one going about their lives between now and then will notice or care about any change that happens during their lifetime.”

    At that point, I took my Climate Ball and went home.

  5. Is anybody anywhere putting percentages on ‘low probability’ and descriptions of ‘high impact?’

  6. Tom,
    Rather than asking an obviously leading question, maybe you could make a point.

  7. • most climate activists’ theory of change, c. 2010: “we must get the public/decision-makers to accept the scientific consensus on climate change, so they’ll respond effectively”

    • many climate activists’ theory of change, c. 2024: “we must get the public/decision-makers to reject the scientific consensus on climate change, so they’ll respond effectively”

    🙄

    And, interestingly, “have non-domain experts appeal directly to the public to undermine confidence in the scientific consensus!” is *exactly* the playbook strategy the denialists/delayists used.

    🙄🙄

    And as you say, what we need to *do* to mitigate and adapt to climate change – what we’ve needed to do for quite some time and still need to do! – is scarcely changed at all at this point whether the eventually revealed/observed climate sensitivity is closer to an ECS of 2.5, 3, 4, whatever.

    One thing that always frustrates me about this blamestorming as to who exactly bears responsibility for the insufficient response to date – the media! lobbying in politics! the oil companies! conservatives! concern trolls! our form of government and our laws! our electoral system! (and now, apparently!) the incompetent, biased scientists who failed to model accurately and warn us sufficiently! – is one party who is almost never mentioned and is seemingly largely blameless and simply powerless to have done anything… And that’s the public, specifically the electorate, at large.🤷

    The idea that the public and decision-makers don’t like dealing with uncertainty – and *very specifically* *REALLY* don’t like dealing with *subjective* uncertainty, where it’s not just, say, observational uncertainty or probabilistic uncertainty, but requires them to rely on expert judgement which is itself uncertain – almost never seems to be identified as part of the problem. Because the public’s and decision-makers’ go-to strategy for dealing with subjective uncertainty *has always been* (and continues to be!) to prefer to wait until uncertainty can be reduced! And people try to dance around this by looking to blame *this* tendency entirely on “other” seemingly omnipotent “bad actors”.

    And I – sadly! – don’t find it surprising at all that at this late stage, the blamestorming for our present dilemma is expanding to “the climate economists!”, “the climate *scientists*!”, etc.🤷 Anyone, really, so long as it absolves “the public”.🤷

    Martin Weitzman and Ken Caldeira (and I am sure others) used to muse about whether climate catastrophe was “endogenous” or inevitable. By which they were referring to the possibility that *the public* wasn’t properly wired to respond to a slow-developing, stock problem like climate change and ghg emissions, and hence would not really respond until it was already immediately apparent, and by which time you’d already be at, oh, I don’t know, say 2.5 trillion tonnes of cumulative CO₂ emissions. Largely too late to avoid much of the damage.

    What a coincidence! That appears to have actually happened!

    And, as I said, unsurprisingly and unfortunately, another human tendency is looking for scapegoats. And avoiding any mirrors.🤷

  8. I think I italicized and bolded three separate words and my post is in purgatory

  9. RNS, I wrote a decade ago that we needed to move now regardless of eventual sensitivity calculations because the first steps were the same regardless of whether ECS is 1.5C or 4.5C.

    I’m proud to report that the world (well, part of it) listened and started to work on it, putting up thousands of wind turbines, millions of solar panels and buying 10s of millions of electric vehicles. Carbon pricing covers a significant portion of emissions and even (some) oil companies are looking at methane emissions (in some places).

    Thanks in part to the world’s efforts we no longer have to worry about RCP 8.5. Now we just need to keep going and work harder. Now is also the time to start investing in pre-adaptation, I might add…

  10. we’re saved! thank you. congratulations.

  11. Encomiums are nice, but cash donations are also cheerfully accepted.

  12. Here we go!👇

    *Another* fine example of the growing verbiage that is telling us that it is in fact *the climate scientists* who have miserably failed us. Who knew?

    🙄

    You say you doubt that’s what he actually says? Or hope that he doesn’t? Sorry to disappoint you! From just the 212-word executive summary:

    The science-based institutions on which we depend to address this crisis have comprehensively failed us… By not calling out these incontrovertible realities, mainstream scientists are at risk of becoming the new climate deniers.

    🙄

    Now, my sense is that most of these nouveau-consensus-rejecters are overwhelmingly “johnny-come-latelies” to any considered personal assessment of our climate dilemma. I would put Sabine in that category and many others (though obviously Poirrott and others are proof of many exceptions to the rule).

    And frankly, I find it enormously galling to have people appear on the scene many, many, *MANY* 100’s of gigatonnes of CO₂ emissions late relative to when the climate scientists were making it *abundantly* clear we needed to drastically reduce emissions. And are now (i) publicly scolding those very scientists, (ii) asserting that their personal extensive review – over the entire recent long weekend!, or somesuch self-flattering description – of the literature has revealed the definitive truth to them, and (iii) that in light of this new clarity they have just generously shared, *YOU* dear reader, need to demand much faster mitigation.

    It’s tedious.🥱

    But it’s also pernicious, corrosive, insidious – at a point where we need decades-long, economy-wide transition efforts. And what’s the first order of business? Undermining confidence in the scientists themselves.🙄

  13. Rust,

    I find it enormously galling to have people appear on the scene many, many, *MANY* 100’s of gigatonnes of CO₂ emissions late relative to when the climate scientists were making it *abundantly* clear we needed to drastically reduce emissions. And are now (i) publicly scolding those very scientists, (ii) asserting that their personal extensive review – over the entire recent long weekend!

    Indeed, and it is very frustrating.

    I think I may have suggested before that this kind of outcome is likely. It’s always the scientists’ fault and they really can’t win. Just bear in mind that the current dominant narratives seem to be criticising climate scientists for using RCP8.5 while at the same time others are criticising them for not highlighting enough that climate sensitivity could be high.

  14. verytallguy says:

    Is anybody anywhere putting percentages on ‘low probability’ and descriptions of ‘high impact?’

    No Tom, nobody carefully defined probability definitions well over a decade ago.

    Click to access AR5_Uncertainty_Guidance_Note.pdf

    And nobody has ever summarised the impacts in a concise way.

    Click to access IPCC_AR6_WGII_SummaryForPolicymakers.pdf

    As it’s virtually certain you are already aware of this, the real question is why you are JAQing.

  15. Chubbs says:

    While I don’t condone bashing scientists, perhaps it’s a sign that climate change is becoming more difficult to ignore. There is the saying that “there is no such thing as bad publicity”. Hopefully thinking about climate, even in a non-constructive manner, is the first step towards taking action.

  16. Very Tall Guy, thanks for the links. I remember I used to post the WG2 SPMs on projected impacts and get yelled at online because they weren’t drastic enough–AR6 is more of the same.

    But your two links don’t answer my question–they don’t identify low probability or high impact. The first link just provides a method for doing so. The AR6 WG2 SPM would really piss off those ‘johnny come latelies’ as it doesn’t read as ‘doomy’ enough.

    Probably most readers here would agree that AR6 WG2 impacts are severe enough and approaching quickly enough to merit a more robust response than we have generated to date. But it’s fuzzy.

  17. Just Dean says:

    Ultimately and unfortunately the seriousness of climate change ends up being a value judgement. We can agree that the science is real, but communicating the seriousness is a hard job. E.g., 17 m of sea level rise is acceptable to some people as long as it happens slowly enough, and maybe that is true for citizens of rich countries that can adapt, but it is not true for poor countries and/or the animal kingdom.

    I have argued in the past that estimating the temperature rise is the easy part but that assessing and predicting the impacts is the harder one. Why? Because this experiment we are doing is unprecedented. My question is why even go there.

    Yes, maybe we have moved too slowly in the past but that is history – energy transitions take time. What matters now is the rate at which we decarbonize going forward and, as ATTP points out, that is more dependent on societal and political factors than the value of the ECS.

    To that end I believe we are reaching a tipping point where most of the new energy sources being added are renewable.

    One of the big political decisions facing the U.S. with regard to climate change progress is the 2024 election. Carbonbrief.org has put together an analysis of the negative impact electing Trump would have on progress towards reducing emissions, https://www.carbonbrief.org/analysis-trump-election-win-could-add-4bn-tonnes-to-us-emissions-by-2030/ . Besides personal efforts to reduce our individual carbon footprint, voting is probably the most effective way to influence our collective efforts.

  18. Tom,

    they don’t identify low probability or high impact.

    Maybe you’re using odd terminology, but it doesn’t take long to find figures in the WG2 SPM that illustrate pretty much what VTG was suggesting.

  19. ATTP, I regard those as tools with which someone might be able to specify what is the probability percentage and to quantify the impacts, rather than actual percentages and quantifications. Perhaps I’m wrong.

  20. Willard says:

    Why use these tools when one can rely on the Bingo:

    1. 8.5 is bollocks
    2. 8.5 is not BAU
    3. The IPCC calls 8.5 “BAU”
    4. The IPCC uses it as such
    5. Centuries or millennia separate 8.5 and when we might see 8.5W/m2
    6. We never were on an 8.5 path
    7. Only using 8.5 is bad science
    8. Without 8.5, there is no huge alarm
    9. Don’t present a < 1% scenario like the IPCC does
    10. It is not about blame

    Source: https://andthentheresphysics.wordpress.com/2020/02/09/but-rcps/

    Why are contrarians still at step 1?

    So many questions.

  21. Tom,
    Maybe you can clarify what you actually mean. It’s clear that WG2 quantifies risk/impacts in terms of “Very high”, “High”, “Moderate”, and “Undetectable”, which might not be an actual quantity, but is certainly indicative. Are you expecting something more than that? How would you quantify “Very high”, for example?

  22. I think it makes best sense to let all the back and forth go and just keep presenting the most accurate and least confusing message possible to the general public. Getting irritated over how scientists are treated by the public or policy makers, or how various groups of scientists are treated by various other groups of scientists is a distraction. Just keep presenting the most accurate and least confusing message possible and be ready to repeat that over and over in various, consistent iterations if you want your message to be heard and understood.

    I think the message to general public and policy makers should now be: We are now beginning to feel the impacts of global warming that has been caused by our past emissions. The impacts will only grow over the coming years until we reach the point where our emissions are no longer accumulating in the atmosphere and oceans. At that point, we will need to continue the work to reduce the accumulations in the atmosphere and oceans to reduce future harm. We should increase our actions to reduce accumulation as fast as possible. Some harms that we may trigger could turn out to be essentially irreversible. The cost of prevention on this global project will be much less than the cost of harm remediation. We need to live smart now on this beautiful planet and take care of it and each other.

  23. Hi ATTP,

    Well, it’s been done for SLR–Tol and Yohe among others. The US EPA did a good study back in the 80s–we talked about it here six or seven years ago. But I haven’t seen it done for other likely impacts and never anything done for differing levels of warming/sea level rise/storm frequency and strength.

  24. Tom,
    I still don’t know what you mean by “it’s been done”? What has been done? I’m asking you to explain what it is you think should be done. If you think it’s something that should be done, presumably you can actually explain what it is that you think should be done.

  25. Okayyyyy… Let’s start with sea level rise. Let’s calculate the impacts for net SLR (including subsidence) of 30 cm, 50 cm, 75 cm and 159 cm between now and the end of the century (taking into account the fact that people are still building in, and people are still moving to, threatened areas). How many people will have to move? What are the costs of relocating, rebuilding, propping up the insurance companies, etc.?

    Then, looking at SLR for the past century, let’s assign probabilities to each. Straight line trend extension, additive increase, exponential growth, stall or decrease.
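    A toy sketch of this trend-extension idea, using a synthetic sea-level series (the numbers below are made up, not real tide-gauge data), just to show how far the straight-line and accelerating extensions diverge by 2100:

    ```python
    # Fit linear and quadratic trends to a synthetic sea-level record
    # and extrapolate both to 2100. Illustrative only: the 'record'
    # is generated, not observed.
    import numpy as np

    years = np.arange(1924, 2024)
    t = years - years[0]
    rng = np.random.default_rng(0)
    slr_mm = 1.5 * t + 0.01 * t**2 + rng.normal(0, 5, t.size)  # accelerating + noise

    for deg, label in [(1, "straight-line"), (2, "quadratic")]:
        coeffs = np.polyfit(years, slr_mm, deg)
        extra = np.polyval(coeffs, 2100) - np.polyval(coeffs, 2024)
        print(f"{label} extension: ~{extra / 10:.0f} cm more by 2100")
    ```

    The same fitted record gives roughly 19 cm (straight line) versus 32 cm (quadratic) of further rise, which is why the choice among these trend assumptions matters more than the fit to the past.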

  26. Tom,
    Again, WG2 appears to have lots of discussion about the impact of sea level rise. Some of it is framed in terms of risks/impacts at different levels of warming, but there is a relationship between warming and sea level rise.

    let’s assign probabilities to each. Straight line trend extension, additive increase, exponential growth, stall or decrease.

    Okay, but what does this probability represent? Future sea level rise will depend on warming, which depends on both how much is emitted and on climate sensitivity. We can assign some kind of probability to the latter, but how do we assign a probability to the former?

    This – to me – is the key issue. How do we assign a robust probability to emission pathways? Not only is it intrinsically challenging, but it’s also a moving target. The probability we might assign to an emission pathway today probably won’t be the same as we would have assigned in the past, or will assign in the future.

  27. Look at the front end of emissions. We have pretty good forecasts of energy consumption by fuel. It’s not going to look good for the next 50 years, but the mix should shift away from coal (except for China) pretty steadily. We know the emissions per tonne of the different fuels. Throw in a calculation for rice paddies and melting peat and we should be able to get fairly close.

    That’s actually do-able.

  28. So, you’re actually suggesting that we could assign a probability to emission pathways? I think there are very good arguments against doing so, especially as one reason for doing this kind of work is to actually influence the emission pathway that is followed. How can we then assign probabilities if one of the reasons for doing so is to influence the probability?

  29. Coal with a carbon content of 78 percent and a heating value of 14,000 Btu per pound emits about 204.3 pounds of carbon dioxide per million Btu when completely burned. Complete combustion of 1 short ton (2,000 pounds) of this coal will generate about 5,720 pounds (2.86 short tons) of carbon dioxide.

    In 2022 the world burnt 8.42 billion tonnes. The IEA writes “By 2025, global coal demand is forecast to flatten out at around 7.4 billion tonnes.” Not sure if I buy that but it’s a specific figure.

    Do that for gas and oil and you can get a pretty good picture.
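    The arithmetic behind those figures is just the CO2-to-carbon mass ratio of 44/12. A quick sketch reproducing them and scaling up to the quoted 2022 figure (crudely treating all coal as this type, which overstates things, since average coal has a lower carbon content):

    ```python
    # Check the coal CO2 figures quoted above. The only inputs are the
    # numbers from the comment plus the 44/12 CO2-to-carbon mass ratio.
    carbon_fraction = 0.78
    heating_value = 14_000           # Btu per pound
    co2_per_c = 44.0 / 12.0          # mass of CO2 per mass of carbon

    print(f"{2000 * carbon_fraction * co2_per_c:,.0f} lb CO2 per short ton")            # ~5,720
    print(f"{carbon_fraction * co2_per_c / heating_value * 1e6:.1f} lb CO2 per MMBtu")  # ~204.3

    gt_coal_2022 = 8.42              # billion tonnes burnt in 2022 (quoted above)
    print(f"~{gt_coal_2022 * carbon_fraction * co2_per_c:.0f} Gt CO2 if it were all this coal")
    ```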

  30. Well, ATTP–we can reasonably project demand for energy–lots of organizations and companies do this. We can see what’s being built and what transitions are taking place. And my point is that we have (God I hate to use the term) the ability to create scenarios with probabilities of occurrence and estimates of impacts, including the Golden Path to zero emissions.

  31. Tom,
    You seem to be entirely missing my point, which is really not a huge surprise. What is the point of doing what you suggest if you don’t think that doing so will then influence future emission pathways? If you do think that this work could influence future emission pathways, what do the probabilities then represent?

  32. I do think understanding what our actions can lead to can influence our actions. Right now what we are being told is way too vague or completely impractical/impossible.

    If we can tell people that putting up 1,500 nuclear reactors will lower projected sea level rise by six inches, it is at least concrete.

    Contrast that with ‘We must achieve net zero by 2030.’ First, it’s impossible. Second, there is no mention of how to do it. Third, there is no discussion of consequences. Fourth, there is no fall-back alternative.

  33. Tom,

    If we can tell people that putting up 1,500 nuclear reactors will lower projected sea level rise by six inches, it is at least concrete.

    Yes, this might be concrete, but it’s not what you’ve been suggesting and it is already essentially possible to do this.

    Contrast that with ‘We must achieve net zero by 2030.’ First, it’s impossible. Second, there is no mention of how to do it. Third, there is no discussion of consequences. Fourth, there is no fall-back alternative.

    None of this is really true. Of course there is a fall-back alternative. We don’t meet the net-zero targets, we emit more than we would have done if we had met them, we warm more, and the impacts are more severe than they might have been. This isn’t all that complicated.

  34. I think the Bannon playbook has been employed: flood the field with bullshit

  35. Stevie’s going to jail and I will tap dance when he does. No, ATTP. That’s not what the ‘messaging’ is. Your points are obvious–but not made in public–well, outside the blogosphere.

    And it is what I’m suggesting. After you have estimates of probability and impact, you show what energy decisions–not emissions decisions–we can make to go from Scenario 4 to Scenario 3.

    CO2 emissions are colorless, odorless, invisible–they don’t even have a taste. Energy consumption produces most of those emissions and they are real, tangible and far from odorless. Why talk about emissions at all?

  36. Tom,
    Oh come on, of course different emission pathways will involve different decisions about energy. I thought I was having a discussion with someone who understood this obvious link.

    Look at this the other way. The WG2 report is full of assessments of the risks associated with different levels of global warming. It is straightforward to link these levels of global warming to emissions, with probabilities. If society decides that it would like to reduce the risk of certain impacts emerging, then this would involve doing things to influence the emission pathway that is followed, which will include making decisions about sources of energy.

    There doesn’t seem to be all that much of a difference between this and what you’re suggesting, and considering it from the perspective of what would need to be done to reduce the risk of certain impacts at least avoids assigning probabilities to outcomes we might like to influence.

  37. Willard says:

    > After you have estimates of probability

    Which “you,” whoever that might be, don’t.

  38. Just Dean says:

    A couple things.

    Circling back to the thesis of the post, I don’t find it useful/helpful for Sabine to be accusing climate scientists of confirmation bias. That is basically the same argument used by contrarians to explain the consensus of climate science, and so it undermines climate scientists’ credibility by casting uncertainty and doubt. What is it with theoretical physicists playing for either side, e.g. Steven Koonin? Maybe it is like Sheldon from The Big Bang Theory says: they feel they “have a working knowledge of the universe and everything in it.”

    I’ve encountered a serial denier over at Hannah Ritchie’s Substack, https://www.sustainabilitybynumbers.com/p/energy-security-minerals/ – see newest comments. His name is Nick Schroeder. He just recently showed up over there. I did a search and he has been at this for close to ten years, spouting the same nonsense. I think his basic problem is with understanding the ability of greenhouse gases to absorb and reradiate infrared radiation.

    He is giving engineers from Colorado a bad name – CSU vs CU. I have not engaged as it seems pointless. What is the best approach? Do you ignore these guys, do you engage, or do you report his nature/history to Hannah and hope she bans the guy?

  39. Just Dean says:

    Hot off the press! I just saw this X post/repost by Zeke about a new paper on ECS from Kyle Armour et al., https://twitter.com/hausfath/status/1767867900793409768?cn=ZmxleGlibGVfcmVjcw%3D%3D&refsrc=email .

    More to chew on ATTP!

  40. If it’s so straightforward why is nobody doing it? Really. What we get are decade-old jeremiads about the Greenland Ice Cap disappearing, Florida falling into the ocean and wildfires burning the planet down.

    Why don’t you do it, ATTP? Or willard–you’d have to put down your sniper’s rifle, but it might be a welcome change of pace.

  41. Tom,

    What we get are decade-old jeremiads about the Greenland Ice Cap disappearing, Florida falling into the ocean and wildfires burning the planet down.

    Huh?

    Why don’t you do it, ATTP? Or willard–you’d have to put down your sniper’s rifle, but it might be a welcome change of pace.

    Why would I do something that I don’t think should be done? Especially as I think all the information you’re actually looking for is already available; it’s just not being presented as you think it should be presented. Why don’t you do it?

  42. Willard says:

    The Contrarian Two-Step is really simple. Step 1 is Denial, e.g.

    “Contrast that with ‘We must achieve net zero by 2030.’ First, it’s impossible. Second, there is no mention of how to do it. Third, there is no discussion of consequences. Fourth, there is no fall-back alternative. ”

    Step 2 is the famous Sammich Request, e.g.:

    “Is anybody anywhere putting percentages on ‘low probability’ and descriptions of ‘high impact?’”

    There is a facultative Step 3 – Saying Stuff. To that effect, pick any line of that comment.

    Cranks and luckwarmers alike keep using this two-step. It’s tedious. It’s also abusive. There is no reason to tolerate it.

  43. [Playing the ref. -W]

  44. [More playing the ref. -W]

  45. russellseitz says:

    Willard, “Why are contrarians still at step 1?” is in praxis structurally akin to

    “decade-old jeremiads about the Greenland Ice Cap disappearing, Florida falling into the ocean and wildfires burning the planet down.”

    or, as a famous social entrepreneur remarked for the 290th time last week:

    “Sweden in particular is very good at greenwashing and framing themselves as a climate leader, when we have very high emissions per capita if we include all our emissions, including consumption based and biogenic emissions etc and especially if we look at historic emissions. So we are not a climate leader at all.”

    because repetition is the first principle of advertising dubious products – agencies don’t much care how much boredom campaigns inflict once funded and set in motion.

  46. Willard says:

    > because repetition is the first principle of advertising

    “But Greta” fits that bill, Russell. And this time you did not even find a hook to Sabine’s video. It’s not that hard to see the problem that AT underlines:

    If someone claims that sensitivity matters, then the onus is on them to argue for it. This applies whether one believes it’s low, high, or in between.

    If a luckwarmer claims that sensitivity does not matter but should be low anyway, then that luckwarmer faces issues of coherence.

    If that person then transfers a decade of branding efforts toward guesstimating impacts right after they just admitted sensitivity did not matter, then there’s something more than incoherence at work.

    Punching doomers should be taken elsewhere.

  47. russellseitz says:

    What do you commend philosophically when, in dealing with canonically complex problems, climate being a first order one, some variables are observed to converge with thousands of man-years of research, while others do not?

    I fear that to the degree that that failure renders model intercomparison risky, it renders risk assessment problematic as well.

  48. Joshua says:

    RNS, I wrote a decade ago that we needed to move now […]

    I’m proud to report that the world (well, part of it) listened…

    Never could I have imagined that Tom has such influence.

  49. Joshua,
    I think we’re going to see a lot of people claiming that what they said ages ago is actually what we’ve done and, therefore, they were right. Few, however, will consider that, even if they did correctly predict what we would do, this may still not have been the best thing we could have done.

  50. Joshua says:

    No doubt many people see the world through a very self-reflecting, individualized lens. I suspect it’s particularly prevalent among people who comment online.

  51. Chubbs says:

    Here’s a recent Nature paper which estimates the future cost of heat waves. Costs are substantial, even in a mitigation scenario. Africa is hard hit, but costs are spread across the world in part due to global supply chains. Have no idea whether the methodology is sound.

    https://www.nature.com/articles/s41586-024-07147-z

  52. Just Dean says:

    Has technology, e.g., the internet, AI, increased people’s tendency to claim expertise in areas outside their own specialization? This seems to be especially true of climate science. The field is evolving quite rapidly and unless you have time and knowledge in the field it is hard to keep up.

    In a previous comment I pointed to a recent paper, https://www.pnas.org/doi/10.1073/pnas.2312093121 , that appears to suggest that shortcomings in climate models may have biased ECS and TCR estimates on the low side, and that the models don’t help constrain future warming.

    Today, I saw an X post by the same scientist, Kyle Armour, that says recent modeling work that includes spatial patterns of temperature change regarding the LGM helps tighten the constraint on LGM, https://twitter.com/karmour_uw/status/1767749520526647707 .

    Bottom line: People should stay in their lane or at least have some humility about their claims.

  53. Just Dean says:

    *”helps tighten the constraint on ECS”

  54. “Bottom line: People should stay in their lane or at least have some humility about their claims.”

    Old Man Yells at Cloud. There is nothing one can do about the behemoths Google, NVIDIA, and Huawei investing millions in machine learning of climate and weather patterns. We should have learned that lesson when IBM applied “expertise in areas outside their own area of specialization” by developing Deep Blue and beating chess grandmaster Garry Kasparov in 1997.

    It was last November when Google claimed its DeepMind model was beating conventional forecasts produced by ECMWF. This will likely continue to be a monotonic progression in skill.

  55. Just Dean says:

    I don’t care what the tool is, it will still take someone with expertise to judge whether the result is correct and meaningful.

  56. Willard says:

    > by developing Deep Blue

    Which has nothing to do with current AI.

  57. Steven Mosher says:

    > by developing Deep Blue

    Which has nothing to do with current AI.

    if you want a cogent discussion of AI i suggest chatting with Google’s Gemini.

    i talked with it the other day and it’s not too shabby.

    BTW YouTube has been feeding me stuff on univalent foundations.

    i used to think set theory was abstract, but type theory and category theory seem utterly devoid of empirical content.

    https://en.wikipedia.org/wiki/Univalent_foundations

  58. russellseitz says:

    Thanks for reminding us that language is larger than chess.

  59. My two points were the monotonic progression of skill, and questioning “staying in your lane”. First, embedded knowledge is like a ratchet in that it tends not to backtrack; instead it always moves forward (with exceptions, of course). Second, consider that the IBM Deep Blue team was composed of three of the four CompSci students who built Deep Thought while in grad school at CMU. All four wrote this in a 1990 Sci Am article:

    “Deep Thought had a rather unusual history. First, it was developed by a team of graduate students who had no official sponsorship or direct faculty supervision.” …

    “It may seem strange that our machine can incorporate relatively little knowledge of chess and yet outplay excellent human players … sometimes produces insights that are overlooked by even top grandmasters” 

    Note that they weren’t supervised by CMU prof Geoffrey Hinton, who is considered the godfather of NN-based deep learning. Their classmate Peter Brown, who started working at IBM, was however a student of Hinton’s and was able to convince an IBM VP to consider the chess project while they were both taking a leak at a urinal. It’s an interesting history that I related to because I was working at IBM Watson at the same time. That kind of stuff happened because management did take risks in what research to fund.

  60. Willard says:

    Web,

    Computer Chess is as old as computer science. In fact, Alan Turing designed such a program on pen and paper in the 40s. This pet project turned into a program called Turochamp. To name two famous names, Max Euwe and Mikhail Botvinnik already predicted that machines would beat humans before year 2000. You should recognize those names, even if they’re kinda vintage.

    Chess has an evaluation function. Does climate have one? If not, then it’s not the same kind of problem. Simple as that. And no – Hinton’s neural networks have nothing to do with Watson. It’s built with case-based reasoning, in the GOFAI way.

    This is what is usually meant by staying in one’s lane.

  61. Just Dean says:

    A plumber, a theoretical physicist, and a climate scientist walk into a bar. Who should you ask about ECS?

  62. It’s a good discussion topic, as there are still many unsolved problems in climate science. And that doesn’t include ECS, which is at least narrowed down. So what path will the next breakthrough follow? It could be based on a neural network trained on data, it could be a symbolic regression tool also trained on the data, it could be a conventional GCM modified to include some new factor, it could be an algorithmic breakthrough in solving nonlinear fluid dynamics, or it could be some other mathematical physics breakthrough applied to a specific topology of the Earth’s climate system. 

    It’s interesting to follow the latter, where the physics researchers are active:

    https://mastodon.online/@bradmarston/112118740282468526

  63. verytallguy says:

    @willard,
    I remember reading about Turing’s chess program.
    To be able not merely to conceive of such a thing, but actually code it, before a machine was invented that could run it – what an astonishing feat of intellect and imagination!

  64. Just Dean says:

    Paul,

    I’m not sure why you chose to interpret my “stay in your lane” comment as criticism or cynicism of the ability and need for multidisciplinary teams to accomplish big science or make important contributions to science. Having worked in fusion research for 34 years, I’m well aware of the need for multidisciplinary teams for getting things done that sometimes lead to breakthroughs.

    My one and only reference was related to people outside their field of expertise, e.g., Sabine, in this case a theoretical physicist, taking things out of context to try and get people more concerned/excited about climate change. In the end, she is almost as guilty as Steven Koonin of cherry-picking data and papers, e.g., the latest paper by James Hansen, to support her narrative, albeit at the opposite end of the spectrum to Koonin.

    To that point, if you look back at my comments, you will find links to two recently published papers about ECS. I doubt that Sabine could tell you what their relevance is in the context of the hundreds of papers that have been written about ECS. Again, I would look to a climate scientist for that guidance.

    If you really want to know how I feel about this whole business with Sabine, my feelings and thoughts closely align with that of Climate Adam, https://www.youtube.com/watch?v=q4EuvpDzlUY .

    Regards,

    Dean

  65. David B Benson says:

    Steven Mosher — Re: yours of March 21. Actually both type theory and category theory have empirical origins in the classification of our (mathematical) experience. For category theory, see the original 1944 paper by Eilenberg & Mac Lane. Type theory is the older, with Bertram Russel forced to introduce simple types to save his foundations work.

    But this seems far removed from the usual interests of this blog.

  66. Steven Mosher says:

    Tom

    “RNS, I wrote a decade ago that we needed to move now regardless of eventual sensitivity calculations because the first steps were the same regardless of whether ECS is 1.5C or 4.5C.”

    yes i remember. also, no amount of “I told you so” will wake them up.

    simply put, any opposition whatsoever to the party line has to be characterized as some form of MAGA-inspired denialism.

  67. David B Benson says:

    Tom & Willard, thank you for the corrections. Bertrand Russell indeed.

  68. Chubbs says:

    Machine learning is a tool which is being used by meteorologists and climate scientists to develop improved models. There has been progress in the past year or so as machine learning tools have improved, and forecasts from machine learning models are now available to the public on a few websites. However, my limited experience indicates we are likely to see incremental improvement vs a breakthrough.

    In a similar vein, John Kennedy has a cautionary tale on a machine learning application:

  69. dikranmarsupial says:

    From John Kennedy’s article “OK, there’s a third alarm bell: “Deep learning”. When I hear that “deep learning” or neural networks were used to forecast ENSO, my spidey senses start tingling”

    very wise! I recently saw a paper on a topic I looked at 20 years ago, but using DNN, rather than the Multinomial Logistic Regression that I used. The paper did a good job of explaining its data sources and methods, so I redid the experiments in the same way using MLR instead, and it worked better than the DNNs (despite effectively being just the topmost layer of the DNN). I did try neural networks myself at the time, and I found the same result back then as well. 

    My research interests are in ML, and skepticism is well warranted, especially as DNNs are just another cycle of hype and disappointment that has happened repeatedly in ML – ironically we don’t seem to be able to learn from the data ;o)

    Don’t get me wrong, I am not against DNNs; they have achieved some stunning results, but that doesn’t mean they are the solution to every ML problem.
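    A minimal sketch of the kind of comparison described above – multinomial logistic regression versus a small neural network on the same task, scored by cross-validation. Synthetic data stands in for the real problem, so this shows the method, not the result:

    ```python
    # Compare multinomial logistic regression with a small neural net
    # via 5-fold cross-validation on synthetic data (illustrative only).
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=500, n_features=20,
                               n_informative=8, n_classes=3, random_state=0)

    models = {
        "MLR": LogisticRegression(max_iter=5000),
        "small NN": MLPClassifier(hidden_layer_sizes=(32, 16),
                                  max_iter=2000, random_state=0),
    }

    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.3f}")
    ```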

  70. russellseitz says:

    ATTP:

    “Arguments over ECS are distractions. Whether it’s 3C or 5C is a bit like whether a firing squad has 6 riflemen or 10.

    It might be a bit flippant, but I think they’re probably just being realistic.”

    OTOH, the outcome depends in great measure on whether the six or ten face inwards or outwards, and whether or not they are at liberty to take their firing instructions from the world at large.

  71. Two aspects of NNs that I’ve wrestled with. One is using layered connections to infer relationships a la hidden Markov models in a probabilistic sense. Some of the weather predictions based on historical records benefit from this pattern-based approach. The second is using connections as a nonlinear mixing mechanism to emulate potentially nonlinear mathematical principles, such as with fluid dynamics. The math physics-based weather predictions benefit from this spatio-temporal learning, which is fundamentally different from historical pattern-based learning. It’s probabilistic forecasting vs physics-informed dynamic simulations – completely different approaches to applying NNs to weather forecasting. When prompted, ChatGPT responds that some of the projects are combining the two approaches, but it wouldn’t say which ones.

  72. “category theory seem utterly devoid of empirical content”

    I always think of category theory as a way of showing how a specific mathematical construction (say geometry or flow diagram) or formulation (say a set of equations) can be morphed into one another. Thus the importance of the “morphism” and “functor” in CT speak which are used to compose something. The issue in terms of practicality is that many of these kinds of applications have already been built independent of the help of category theory.

    Thus a tool like Simulink from Matlab could have been envisioned from Category Theory if it had not already been developed as a means of transforming mathematical equations to a flow diagram, with the original Matlab developers never having to apply CT.

    That said, some languages such as Julia have CT libraries for building software, such as AlgebraicJulia. And that may not be that different from the abstract templates that developers already design for libraries in other languages. I only know some of this stuff because I follow John Carlos Baez’s blog posts on Category Theory, and he seems to think it is something new when he describes a data flow diagram, where I see he is often reinventing the wheel. See e.g. https://johncarlosbaez.wordpress.com/2020/10/19/epidemiological-modeling-with-structured-cospans/. In this post he does say that he is using CT to put the compositional approaches that scientists and engineers use “on a firm mathematical footing”. Another interesting discussion topic to me nonetheless.

  73. Just Dean says:

    The paper by Vincent Cooper, Kyle Armour, et al. that uses modeling of the LGM to constrain ECS has been published, https://www.science.org/doi/10.1126/sciadv.adk9461 .

    “Accounting for LGM pattern effects yields a median modern-day ECS of 2.4°C, 66% range 1.7° to 3.5°C (1.4° to 5.0°C, 5 to 95%), from LGM evidence alone. Combining the LGM with other lines of evidence, the best estimate becomes 2.9°C, 66% range 2.4° to 3.5°C (2.1° to 4.1°C, 5 to 95%), substantially narrowing uncertainty compared to recent assessments.”

    From Armour’s X post, https://twitter.com/karmour_uw/status/1780793537195618350

    “LGM temperature patterns were influenced by major northern hemisphere ice sheet changes, while future warming patterns will not be. This new study is the first to consider these differences when using the LGM to estimate the climate sensitivity to rising greenhouse gases today.”
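    The basic mechanics of combining independent lines of evidence is to multiply their likelihoods on a common ECS grid and renormalise. A sketch with two illustrative Gaussians standing in for the paper’s actual likelihoods:

    ```python
    # Combine two illustrative evidence lines for ECS by multiplying
    # likelihoods on a grid. Stand-in numbers, not the paper's.
    import numpy as np

    ecs = np.linspace(0.5, 8.0, 2000)
    dx = ecs[1] - ecs[0]

    def gaussian(x, mu, sd):
        return np.exp(-0.5 * ((x - mu) / sd) ** 2)

    lgm = gaussian(ecs, 2.4, 1.1)    # stand-in for the LGM constraint
    other = gaussian(ecs, 3.1, 0.8)  # stand-in for other evidence lines

    posterior = lgm * other
    posterior /= posterior.sum() * dx  # normalise to a density

    cdf = np.cumsum(posterior) * dx
    for q in (0.05, 0.50, 0.95):
        print(f"{q:.0%} point: {ecs[np.searchsorted(cdf, q)]:.2f} C")
    ```

    The combined estimate comes out narrower than either input, which is the qualitative point of combining evidence lines.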

  74. Chubbs says:

    Per this recent Nature study, the cost of climate change dwarfs mitigation costs.

    https://www.nature.com/articles/s41586-024-07219-0
