low-probability, high-impact outcomes

Credit: Rowan T. Sutton (2018)

There’s an interesting Earth System Dynamics Discussion paper presenting a simple proposal to improve the contribution of IPCC WG1 to the assessment and communication of climate change risks. Essentially, one can estimate the risk of some outcome as the likelihood of that outcome multiplied by its impact. This is illustrated by the figure on the right (which I will discuss a bit more later).

So, the message in this paper is that we should not only discuss the likely outcomes, but also the low-probability, high-impact outcomes. I think this makes a lot of sense. Climate change is probably irreversible on human timescales and so I think we really do want to avoid these low-probability, high-impact outcomes. Hence, it’s important that we discuss this publicly.

Credit: Michael Tobis

However, this isn’t really the first time that this has been suggested. Michael Tobis generated a figure illustrating a similar point. The public discussion seems to involve people who think climate change will be beneficial, or have minimal impact, and others who think there could be substantial costs but who avoid discussing the possibility of catastrophe. The latter may be very unlikely, but the impact would be so great that it is an outcome that we should probably not ignore.

One more thing about the first figure I included in the post. I realise it’s probably just meant to be illustrative, but I don’t think it’s really correct. The impact depends on how much we warm, not on climate sensitivity alone, and how much we warm depends on both climate sensitivity and how much we emit. The latter makes this quite complicated, because we can’t assign a simple probability to how much we will emit in future (it depends on choices we have yet to make). The paper does mention the transient response to cumulative emissions, and emission pathways, so it’s not completely ignoring this. However, I do wonder whether doing this rigorously would actually be quite difficult. That doesn’t, however, mean that the suggestion is unreasonable.
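To make the likelihood-times-impact idea concrete, here is a minimal sketch (my own illustration, not the paper’s method; the warming distribution and damage function are made-up choices purely for demonstration) that integrates a probability distribution for warming against a nonlinear impact function to get an expected risk:

```python
import numpy as np
from scipy import stats

# Illustrative only: a lognormal distribution for warming by 2100 under
# some fixed emissions pathway (made-up parameters, centred near 3C).
dist = stats.lognorm(s=0.4, scale=3.0)
warming = np.linspace(0.1, 10, 1000)            # degrees C
pdf = dist.pdf(warming)

# Illustrative convex damage function: impacts grow super-linearly.
impact = warming ** 2                           # arbitrary units

# Risk density = likelihood x impact; expected risk is its integral.
risk_density = pdf * impact
expected_risk = np.trapz(risk_density, warming)
print(f"Expected risk: {expected_risk:.2f} (arbitrary units)")

# The tail matters: how much of the expected risk comes from outcomes
# above the 95th percentile of warming?
p95 = dist.ppf(0.95)
tail = warming > p95
tail_share = np.trapz(risk_density[tail], warming[tail]) / expected_risk
print(f"Share of risk from warming above {p95:.1f}C: {tail_share:.1%}")
```

The product in the figure’s third panel is essentially this risk density: with these made-up numbers, the least likely 5% of outcomes carries roughly a fifth of the expected risk, which is the paper’s point about not ignoring the tail.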

This entry was posted in Climate change, Climate sensitivity, IPCC, Scientists, Severe Events. Bookmark the permalink.

276 Responses to low-probability, high-impact outcomes

  1. David Wallace-Wells posted the following amusing tweet yesterday.

    Is it more irresponsible to report that climate change has been solved, when it hasn’t, or to risk scaring people by examining the scientifically sound worst-case scenarios? Asking for a friend.— David Wallace-Wells (@dwallacewells) June 10, 2018

    The context (in case some don’t know) is that he took quite a lot of flak for writing an article about worst-case scenarios. The “climate change has been solved” refers (I think) to recent articles about direct air capture that have probably presented a somewhat more optimistic picture than is realistic.

  2. I wonder, is one of the most substantial costs all the time wasted by people speculating about things that will never happen?

  3. TE,
    It’s your time to waste.

  4. It is so hard to have an original idea nowadays.

    Bart Verheggen on the same paper: “Climate change as a matter of risk management requires different choices in communication”.
    https://ourchangingclimate.wordpress.com/2018/06/11/climate-change-as-a-matter-of-risk-management-requires-different-choices-in-communication/

    I fully agree that we should focus more on the high-end tail and less on the average. That is not just a communication problem. The IPCC reviews the literature, and quite often the results are only described in terms of the mean, standard deviation or confidence intervals. That makes it hard for the IPCC to deliver more detailed information on the tails (for example, higher percentiles).
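    As an illustration of this point (a sketch added for illustration, not Victor’s; the distributions and numbers are made up), two distributions with the same reported mean and standard deviation can have very different upper percentiles:

    ```python
    import numpy as np
    from scipy import stats

    mean, sd = 3.0, 1.5  # made-up "reported" summary statistics, in C

    # A normal distribution with those moments.
    normal = stats.norm(loc=mean, scale=sd)

    # A lognormal with (approximately) the same mean and standard deviation.
    sigma2 = np.log(1 + (sd / mean) ** 2)
    lognormal = stats.lognorm(s=np.sqrt(sigma2), scale=mean * np.exp(-sigma2 / 2))

    for p in (0.50, 0.90, 0.95, 0.99):
        print(f"{p:.0%}: normal {normal.ppf(p):.2f}C, lognormal {lognormal.ppf(p):.2f}C")
    # Same mean and sd, but the lognormal's 99th percentile sits well above
    # the normal's: summary moments alone say little about the tail.
    ```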

    In my post from 2015 I already argued that the high-end tail is where the problems are, and that more uncertainty thus means higher risks: “Fans of Judith Curry: the uncertainty monster is not your friend”.
    http://variable-variability.blogspot.com/2015/12/judith-curry-uncertainty-monster-high-risk.html

    The uncertainties in the emissions are likely skipped because they are impossible to quantify. That would be the job of economists and political scientists, and the gap is probably not just because some of them are distracted by attacking climate scientists. It is a hard problem.

  5. Is it more irresponsible to report that climate change has been solved, when it hasn’t, or to risk scaring people by examining the scientifically sound worst-case scenarios? Asking for a friend.— David Wallace-Wells (@dwallacewells)

    At the time, I already wrote that it is fine (even important) to focus on the high-end tail. The problem with Wallace-Wells’ doomsday piece was the inaccuracies, not the doom.
    http://variable-variability.blogspot.com/2017/07/how-to-talk-about-climate-doomsday-warnings.html

    (I like how WordPress now includes tweets without telling Twitter which blog posts everyone is reading. Consumer-friendly laws work.)

  6. Turbulent Eddie: “I wonder, is one of the most substantial costs all the time wasted by people speculating about things that will never happen?”

    Interesting to hear someone from your circles advocate eliminating the military and the police.

  7. Richard Arrett says:

    Nova. Very high impact.
    Asteroid strike. Varies, but could be very high.

  8. Richard,
    But we can do the same kind of assessment for those. If by Nova, you mean supernova, then the risk is vanishingly small (there aren’t any massive stars close enough for this to be an issue). We now track a large fraction of near-Earth asteroids bigger than a few hundred metres. In the latter case I think we’d almost certainly have advance warning of any global-level asteroid strike.

  9. Richard Arrett says:

    ATTP:

    I was actually referring to our sun blowing up.

    Vanishingly small – yes. High impact – yes.

    Just saying.

  10. Richard,
    The Sun going supernova is virtually impossible. A (core-collapse) supernova happens when a star’s core fills with iron, which can undergo neither fusion nor fission; hence, the core collapses. The Sun can’t generate iron in its core, so it will never go supernova.

  11. verytallguy says:

    Ahem. The sun is in fact made of iron, another finding only revealed by climate “sceptics” and published in a prestigious peer-reviewed journal.

    https://www.sourcewatch.org/index.php/Energy_and_Environment

    [On topic, agree in concept but the tails of the distributions of emissions, sensitivity and impact are all poorly constrained. Of these, sensitivity is, I opine, far and away the least uncertain]

  12. vtg,
    Ahhh, yes, I’d forgotten about that.

  13. ATTP, I think that the discussion needs to widen from WGI to WGII, which considers the risk not only from the perspective of emissions, but also of ‘exposure’ and ‘vulnerability’, as illustrated in Figure SPM.1 of the AR5 WGII report

    http://www.ipcc.ch/report/graphics/index.php?t=Assessment%20Reports&r=AR5%20-%20WG2&f=SPM

    There was a great article in the NYT, reproduced in The Independent, giving examples of 5 species ‘utterly confused’ (impacted) by climate change

    https://www.independent.co.uk/environment/five-plants-and-animals-utterly-confused-by-climate-change-a8297881.html

    … such as

    “The European pied flycatcher runs on a tight schedule each spring. From its wintering grounds in Africa, the bird flies thousands of miles north to Europe to lay eggs in time for the emergence of winter moth caterpillars, which appear for a few weeks each spring to munch on young oak leaves.

    … In the parts of the Netherlands where peak caterpillar season had advanced the fastest, the scientists later found, flycatcher populations dwindled sharply. “That was the big discovery that suggested this mismatch could have real consequences for populations,” said Christiaan Both, an ecologist at the University of Groningen.”

    In other words, a non-linearity in the vulnerability given just a 1C GMST rise. There are probably thousands of similar phenomena waiting to be unearthed – or already published but rarely discussed – with unexpected and surprising impacts on ecological systems as a whole.

    We don’t have to wait for a Thwaites collapse or similar risks, although I am not diminishing the need to study this and quantify it (a large team from BAS is engaged in precisely this activity).

    But it is the in-progress, incremental, and cumulative impact of thousands of ecological examples of Hazard x Exposure x Vulnerability that worries me right now.

    Humanity is inextricably linked to ecology for many reasons, not least food, so Ecomodernism’s illusory decoupling won’t isolate people from these ecological impacts.

    Perhaps helped by the ‘shifting baseline syndrome’, we paradoxically find it harder to discuss the creeping impacts happening already in ecology (which is much more messy and complex than the physics of course), than physics-oriented ‘catastrophic’ black swan events of the future, like a Thwaites collapse.

    It’s not a question of either/or, but we do need a broader spectrum discussion on risks.

  14. how many lukewarmers and deniers are buying fire insurance on their property even though there is little chance of their houses catching on fire? Sensible folks do manage risk against high impact events, but I don’t know anyone selling or buying SunGoesNova insurance. That one seems to be going nowhere. Swing and a miss.

  15. Richard Arrett says:

    Ok, fine. Not all low probability high impact events are to be considered (if the probability is too low).

    How about a super volcano eruption (say in Yellowstone)?

    Surely the odds of that are higher than the sun going nova.

  16. Steven Mosher says:

    I like the charts, in theory. Problems:

    The PDFs are really educated guesswork. Nothing wrong with that, but you do want to consider alternative guesswork, and you definitely need a feedback mechanism: a way to update the guesswork and track your success in guessing.

    Further, the guesswork can help you frame what is important to investigate. It is VITAL to investigate whether or not we can chop off the tail of ECS. That suggests, perhaps, more research on paleo. More research to see if any models with ECS above 4.5 can show skill. Right now the most sensitive model is 4.4. Also, better satellite observations could cut off the high end.

    Next: impacts. These non-linear curves should concern us. I haven’t read much in the area, but if I am a planner of anything, I want to push back on this. What makes us believe that? If we start to see non-linear damage, what will we do? What can we do now in terms of adaptation to cut off these outcomes? Is it smooth, or will we hit a point where the damage jumps up abruptly?

    Emissions: Although I like the RCP approach from an analytical standpoint, I do hanker for the old SRES ground-up approach, for reasons I can explain.

    The conversation:

    I do risk every day. And rational people think about risk in very different ways.
    It is not all simply down to the math of things.
    Suppose I have to make a decision where there is a 5% chance of total disaster.
    I mean total. Generally speaking we don’t spend 100% of our time discussing this.
    But then this is a conversation among people who accept that there is a 5% chance
    of total disaster.

  17. Michael Hauber says:

    Humans hate ambiguity. So some of us pretend the worst-case catastrophic scenarios can’t possibly happen. And others pretend that they are guaranteed to happen – the most extreme example I am aware of being a thread on a peak oil forum that started maybe 10 years ago stating that a runaway methane clathrate explosion had commenced.

    Any discussion of these risks has to deal with the minefield of those attempting to milk such discussions as evidence of a fear-mongering conspiracy – e.g. the prediction that the Arctic may have been ice-free by 2016. Or it may not. Or a few years later. And by just a few scientists – but Al Gore quoted it, so that is evidence that all scientists were certain of doom and cannot be trusted.

    So maybe discuss, but only with people you know are capable of rational discussion?

  18. Steven Mosher says:

    “how many lukewarmers and deniers are buying fire insurance on their property even though there is little chance of their houses catching on fire? ”

    The purchase is under duress, same with drivers insurance. In many places you are forced to buy it. Fire insurance? When I rented? Never. If you are concerned about losing everything, you own too much. I have a suitcase and a fireproof safe.

    Get a quote on insurance, save that money in a sinking fund. On average you’ll do better.

  19. Steven Mosher says:

    “So maybe discuss, but only with people you know are capable of rational discussion?”

    Yup,

  20. Richard,

    Not all low probability high impact events are to be considered (if the probability is too low).

    Yes, that’s sort of the point. If the probability is low enough, then we wouldn’t really worry too much about it, even if the impact would be enormous. I have no idea about the risk of a supervolcano.

  21. dikranmarsupial says:

    TE wrote “I wonder, is one of the most substantial costs all the time wasted by people speculating about things that will never happen?”

    No. Talk is cheap (but some of it has value, even on blogs, however YMMV).

  22. dikranmarsupial says:

    Richard wrote “Nova. Very high impact.”

    what policy options do we currently have on that one? ;o)

  23. dikranmarsupial says:

    Richard wrote “How about a super volcano eruption (say in Yellowstone)?”

    Still not sure of our policy options.

  24. dikranmarsupial says:

    SM wrote “The PDFs are really educated guesswork. Nothing wrong with that, but you do want to consider alternative guesswork, …”

    For me that is the key point. Lukewarmers need to do more than show they think that ECS is probably low, they need to set out their PDFs for ECS (subjectivist Bayes is fine) and their impact function and show that it argues against action on mitigation. Of course they need to be able to make a more persuasive case than mainstream science has done (e.g. the IPCC reports, not just WG1), however they have the advantage of arguing for what people already want to do (i.e. nothing). AFAICS this hasn’t been done.
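    A sketch of what that exercise might look like (entirely made-up priors and damage function, shown only to illustrate the mechanics): even a prior weighted toward low ECS may not reduce the expected damages enough to settle the argument, because a convex impact function weights the upper tail.

    ```python
    import numpy as np
    from scipy import stats

    ecs = np.linspace(0.5, 8, 1000)  # ECS in C per CO2 doubling

    # Two made-up subjective priors for ECS:
    mainstream = stats.lognorm(s=0.35, scale=3.0).pdf(ecs)   # centred near 3C
    lukewarmer = stats.lognorm(s=0.35, scale=1.8).pdf(ecs)   # centred near 1.8C

    def expected_damage(pdf, exponent=2.5):
        """Expected damage under an illustrative convex impact function."""
        return np.trapz(pdf * ecs ** exponent, ecs)

    for name, pdf in (("mainstream", mainstream), ("lukewarmer", lukewarmer)):
        print(f"{name}: expected damage {expected_damage(pdf):.1f} (arbitrary units)")
    # The low-ECS prior reduces, but does not eliminate, the expected damage;
    # "ECS is probably low" by itself doesn't finish the cost-benefit argument.
    ```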

  25. angech says:

    Michael Hauber says:
    ” So some of us pretend the worst case catastrophic scenarios can’t possibly happen.”
    smallbluemike says:
    “how many lukewarmers and deniers are buying fire insurance on their property even though there is little chance of their houses catching on fire? Sensible folks do manage risk against high impact events, but I don’t know anyone selling or buying SunGoesNova insurance.”
    Steven Mosher “I do risk every day. And rational people think about risk in very different ways.
    It is not all simply down to the math of things.”
    “low-probability, high-impact outcomes.
    I think we really do want to avoid these low-probability, high-impact outcomes.”
    I know what you are saying, or trying to say, but I struggle with the concept, and with the time frame, and with that of insurance.
    How to word it.
    High impact is bad: death, stroke, car accident, fire, living in Syria.
    Low probability is good. The lower the better.
    But do we really need to insure against really, really low probability? Probably not.
    Can we insure against really high impact events? No.
    Say Yellowstone or that recent volcano in Venezuela.
    Hence with climate change, if it is catastrophic in the future but of low probability, and “climate change is probably irreversible on human timescales”, there would seem to be less of an imperative to act now.
    We, or rather the future generations, are all going to die; or, the alternative, it won’t be so bad, they will adapt, and I can have my overseas holiday to Italy next year.
    God willing, the plane does not fall out of the sky and there is no landing on a North Korean nuclear bomb site at Malpensa.
    Perhaps I should take out some travel insurance?

  26. Steven Mosher says:

    “For me that is the key point. Lukewarmers need to do more than show they think that ECS is probably low, they need to set out their PDFs for ECS (subjectivist Bayes is fine) ”

    Yes, I have no issue working through a problem with a subjectivist Bayesian, provided they actually commit to the process of documenting their approach and assumptions, and provided they actually update their assessments. We do this all the time in business. I sit here today with a model that has 2 key assumptions, ranges for that data, and made-up PDFs, shit, even uniform if you want to represent lack of knowledge in that manner. But as data comes in, people are responsible enough to update and reconsider and recalculate and own their educated guesswork. Even when things are fundamentally “unknowable” we have methods and approaches for moving forward, documenting our uncertainty, correcting it where we can, and plowing ahead.

    When lost in the woods
    https://answers.yahoo.com/question/index?qid=20091105140334AAYFSqY

    interesting answers

    My vote when lost in the woods: emit less, don’t hurt the poor.

  27. JCH says:

    My vote when lost in the woods: emit less, don’t hurt the poor.

    These are probably mutually exclusive. Arriving in the 21st century, we suddenly cannot hurt the poor. New rule for a machine that does not have that part in it. Push comes to shove, everybody knows what is going to happen to the poor.

  28. dikranmarsupial says:

    angech, you are being ridiculous. The impacts don’t go immediately from “O.K.” to “catastrophic”, nor do the pdfs go immediately from “high likelihood” to “very unlikely”. Just before you get to “very low likelihood – very high impact”, there is a region with slightly higher likelihood and slightly lower impact, and before you get to that … That is why you need to consider the whole distribution of risk, rather than just concentrating on the low end (“all O.K., no need to take action”) and the very high end (“can’t do anything anyway”). It is almost as if you were trying to find justification for not doing anything, rather than looking to do a rational cost-benefit analysis. (note lack of “;o)”)
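    This point can also be put in numbers (same illustrative caveats as the earlier sketches: the ECS distribution and damage curve are made up). Most of the expected risk typically comes from the middle of the distribution, not from either end:

    ```python
    import numpy as np
    from scipy import stats

    ecs = np.linspace(0.5, 8, 2000)
    pdf = stats.lognorm(s=0.35, scale=3.0).pdf(ecs)  # made-up ECS distribution
    risk_density = pdf * ecs ** 2.5                  # made-up convex damages
    total = np.trapz(risk_density, ecs)

    # Share of expected risk contributed by each band of the distribution.
    for lo, hi in [(0.5, 2.0), (2.0, 3.0), (3.0, 4.5), (4.5, 8.0)]:
        band = (ecs >= lo) & (ecs < hi)
        share = np.trapz(risk_density[band], ecs[band]) / total
        print(f"ECS {lo}-{hi}C: {share:.0%} of expected risk")
    # Neither "all O.K." (the low end) nor "can't do anything anyway" (the
    # extreme high end) is where the bulk of the risk actually sits.
    ```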

  29. Joshua says:

    JCH –

    Perhaps not. What happens to “the poor” is not necessarily a function of energy policy. IOW, “the rich” could compensate “the poor” to offset more costly energy.

    Won’t happen, of course, but I don’t think we have to accept “skeptics'” framing that making fossil fuels more expensive necessarily comes at the expense of “the poor.”

    After all, the poor are poor because they failed at primate dominance hierarchy competition, evolution proves that.

  30. on supernovas and supervolcanos, etc.:

    Reductio ad Absurdum

    (also known as: reduce to absurdity)

    Description: A mode of argumentation or a form of argument in which a proposition is disproven by following its implications logically to an absurd conclusion.

    There is a reasonable discussion to be had in the realm of the low probability, high impact events. That discussion will not take place if absurdities are continuously presented.

    There are events that are of such low probability that we don’t need to address them because they are absurd on their face. These include the sun going supernova or the moon losing its orbit stability and crashing into the earth.

    There are events of such high impact that we don’t need to address them because we have no meaningful policy responses to them. These include things like a large asteroid impact or supervolcano eruption. Laws of survival and natural selection will take over in these circumstances.

    A discussion of the high impact cost of the higher end of climate sensitivity and what protections we might choose to pursue in terms of policy is sensible and overdue and should not be derailed by questions that are more useful when writing scripts for sci-fi action movies.

  31. I live on a subduction earthquake zone in the Pac NW. I read that we are overdue for a significant earthquake. I live on a hillside that will probably move a lot in a quake above 7.0 and I am in a large house that was built over 100 years ago. This house stood up very well to the quake of 1949 that broke foundations on many structures in the neighborhood, so I know this place has some inherent structural stability against quake damage, but the 1949 quake was nothing like the large quake that is discussed as a distinct possibility for this region.

    A major quake of magnitude 7.0 and above is a low probability event. It may not happen for 100 years. It may not happen for 1000 years. But a quake of 7.0 and above is clearly a high impact event, in terms of the destruction that it would create in my community, and it is apparent that the State is not interested in committing financial resources to this possible event, because the State is having trouble funding education adequately and the State Constitution dictates that education is the primary responsibility of state government.

    Given all of this data and risk analysis, it was pretty easy for me to commit to improving the earthquake bracing in the house. It was a lot of work in the crawl space, drilling into the concrete foundation and updating and improving the attachment of the building to the foundation, but only a few hundred dollars in earthquake hardware. There is a good chance that I will not live to see the big earthquake that is due here, but this is just a sensible improvement to the house that I feel good about as a person who attempts to manage risk in an intelligent manner. I carry fire insurance on this place because it just makes sense. I also pay for auto and health insurance because I think it makes sense.

    All of my risk management gut sense says that our species should be working much, much harder to reduce the CO2 accumulation in the atmosphere. It’s amazing to watch the lukewarmers and the deniers work so hard to prevent meaningful action that would reduce the risk of a low probability, high impact event. The ideology and decision-making are pretty hard for me to fathom.

  32. Dave_Geologist says:

    How about a super volcano eruption (say in Yellowstone)?
    Surely the odds of that are higher than the sun going nova.

    Yes, but you have to specify a time frame. Next year? No. Next decade? No. Next century? Probably not, but I’d need to do the math. Next millennium? Now we’re maybe talking, but we’d have warning. And still very unlikely. The USGS estimates about once per million years, but with large uncertainty because that’s based on a sample of two.

    Yellowstone has been pretty extensively surveyed seismically, so we’d know if a supervolcano-sized magma chamber had already formed. And there would be surface expressions (bulging, earthquake acceleration, fumaroles, precursor eruptions). And the mantle plume feeder has recently been imaged. We can monitor both if we’re worried. Plume volcanoes are subject to the same laws of physics as the earth’s climate. To build a supervolcano-sized magma chamber you need an energy imbalance. Rather like the GH effect, energy coming from the mantle has to be trapped in the crust so it can build up enough stored energy to go BANG, not bang. The magma comes from decompression melting and then migrates to shallower levels. To make a lot more of it, you either need a slowdown in magma escape (like blowing up a balloon), in which case you can watch the magma chamber growing and Yellowstone swelling. Or you need variations in plume ascent rate or volume so that the rate of melting increases and eventually a new, more eruptive equilibrium is reached. Volume is more likely: ascent rate is driven by temperature differences, which are unlikely to change on timescales less than tens of millions of years, but convectional instabilities can make the rising plume ‘lumpy’. Plumes are VERY big and move VERY slowly; a supertanker is a go-kart by comparison. The Iceland plume pulses are the best studied and occur on million-year timescales. They caused substantial dynamic uplift as they reached shallow depths, on the order of a kilometre. We’ll notice.
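    The “specify a time frame” point can be put in rough numbers (a sketch added for illustration, treating eruptions, very crudely, as a Poisson process at the ~1-per-million-year recurrence rate mentioned above):

    ```python
    import math

    rate = 1.0 / 1_000_000  # eruptions per year (rough recurrence estimate)

    for years in (1, 10, 100, 1_000, 50_000):
        # Exponential waiting time: P(at least one eruption within t years).
        p = 1 - math.exp(-rate * years)
        print(f"next {years:>6} years: P ~ {p:.1e}")
    # Next century: about 1 in 10,000 -- before even counting the monitoring
    # that would almost certainly give warning first.
    ```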

  33. Steven Mosher: “These non-linear curves should concern us. I haven’t read much in the area, but if I am a planner of anything, I want to push back on this. What makes us believe that? If we start to see non-linear damage, what will we do?”

    My post “Fans of Judith Curry: the uncertainty monster is not your friend” gives some reasons why we expect the curve to be super-linear and examples of specific impacts where the curve was found to be super-linear.

    As long as climate change is still small, most systems will be adapted to it, because the climate also fluctuates naturally. The larger the change relative to those natural variations, the less prepared systems will be, the more damage will be done, and the more costly the necessary adaptation will be. The exact function will naturally be different for every impact.

    I would not stop talking about (working to avoid) a 5% chance of total collapse. Hard to think of anything more important.
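    The super-linearity point also explains why wider uncertainty raises, rather than lowers, the expected damage: for a convex damage curve, Jensen’s inequality gives E[f(T)] > f(E[T]). A toy check (a sketch added for illustration; made-up numbers and damage exponent):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def expected_damage(mean_warming, spread, exponent=2.5, n=1_000_000):
        """Monte Carlo expected damage for an illustrative convex damage curve."""
        warming = np.clip(rng.normal(mean_warming, spread, n), 0.0, None)
        return (warming ** exponent).mean()

    for spread in (0.5, 1.0, 1.5):
        print(f"sd = {spread}C: expected damage {expected_damage(3.0, spread):.1f}")
    # Same central estimate (3C) in every case, but the wider the
    # distribution, the larger the expected damage: the uncertainty
    # monster is not your friend.
    ```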

  34. I’m a student of low-probability, high-impact phenomena. These kinds of events can work both ways. One of the lowest-probability, highest-impact events is discovering a super-giant crude oil reservoir somewhere in the world. It’s nature’s version of the lottery.

  35. PP says: “One of the lowest-probability, highest-impact events is discovering a super-giant crude oil reservoir somewhere in the world. It’s nature’s version of the lottery.” Oil reserves have proven to be a bit of a reverse lottery for many countries. https://www.theatlantic.com/international/archive/2012/04/why-natural-resources-are-a-curse-on-developing-countries-and-how-to-fix-it/256508/

  36. Steven Mosher says:

    Victor, if exclusively talking about the 5 percent case for 30 years is not getting the job done, at what point will you modify your behavior?

    You see, that’s the thing. I get that stupid lay people won’t modify their science beliefs in the face of new evidence. They think CO2 can’t be bad and no new evidence will sway them.

  37. SmallBlueMike, Sure, that’s what happens on the tail end of resource exploitation.

  38. Ken Fabian says:

    “The purchase is under duress, same with drivers insurance. In many places you are forced to buy it. Fire insurance? When I rented? Never. If you are concerned about losing everything, you own too much. I have a suitcase and a fireproof safe.”

    Really? You don’t have insurance other than what you are regulated to have – and with a self-funded emergency account in place of the insurance you aren’t required to have?

    Whilst we are talking about low probability outcomes here, surely some of the high probability climate ones are like 5% chances that your house WON’T burn down, with the low probability ones more like the sort that will leave you holding a suitcase with nowhere better to go, next to zero chance of recovering what is in your safe, and an emergency savings account that can’t help. Clearly insurance isn’t the answer for even the likely outcomes, let alone the unlikely. But some kind of more direct response is needed – “insurance” in this is a metaphor.

  39. angech says:

    “There are events of such high impact that we don’t need to address them because we have no meaningful policy responses to them. These include things like a large asteroid impact or supervolcano eruption. Laws of survival and natural selection will take over in these circumstances.”
    Dave_Geologist says:
    “How about a super volcano eruption (say in Yellowstone)? Surely the odds of that are higher than the sun going nova. Yes, but you have to specify a time frame. Next year? No. Next decade? No. Next century? Probably not, but I’d need to do the math”
    SMB “There are events that are of such low probability that we don’t need to address them because they are absurd on their face.”

    Your answer is mistaken, Dave.
    The Hitchhiker’s Guide to the Galaxy clearly states that, using the improbability device, the answer is yes to all three time intervals stated. Or, as the banks say, prior performance is no guide to future performance. Why do they say that, I wonder?
    Oh, that’s right. The odds of your odds being right only increase if the event does not happen in the immediate future.

  40. jacksmith4tx says:

    Insurance? The global debt-to-GDP ratio is 225%! We couldn’t afford the premiums.

  41. YES! Loss functions are a staple of statistical decision analysis, e.g., J. O. Berger’s fine textbook. But they have some issues, not theoretical ones, but in statistical practice.

    One, a characterization I owe to statistician Dr John C Cook, is the Woodshed Problem: everyone agrees we need to build a woodshed, but we don’t agree on what it should look like. Substitute Loss Function for shed.

    Another … if truly high impact, low probability events are considered, there’s the greater-impact game: for any event you come up with, I can come up with another which potentially has a much bigger loss, if less likely.

    Trouble is, like car accidents and gun deaths, the mean impact of climate change is what will kill us, and it’s significant. Trouble is, people at large don’t notice anything but singular spectacular events.

  42. Jacksmith says: “Insurance? The global debt-to-GDP ratio is 225%! We couldn’t afford the premiums.”
    mike says: that’s right, that is like the global resource overshoot thing, where each year we now use up the earth’s sustainable productivity by an earlier date. Like pumping groundwater out of Florida, causing land subsidence at the same time as we see sea level rise related to the burning of millennia of fossil fuel accumulation, it starts to become apparent to anyone capable of connecting the dots that maybe we are just consuming a little (or way) too much, and/or there are just too many of us human beings driving/flying/consuming our way around the planet. What exactly is our species’ end game or exit strategy, if it is not to wildly rein in consumption just as fast as we can, if not faster?

  43. HG says: “Trouble is, people at large don’t notice anything but singular spectacular events.” Even bigger trouble is that more and more singular, spectacular events are already in the pipeline as we let more CO2 (proxy term) accumulate in the atmosphere and oceans every year. You want to see spectacular events? Stay tuned. They are queued up.

  44. Ragnaar says:

    The image top right:
    Credit: Rowan T. Sutton (2018)

    For impact, replace that with profit. Figure the expected return, which is panel c).

    Panel c) does seem to follow from the prior two panels. So then we need to decide if we’re going to sell panel c) as a graduate of business school would. I wouldn’t. So in the original case as presented, the risk is the risk to the seller of the advice to use the chart. If you blow this one, trillions of dollars are wasted.

    In my example using profit, I am not taking the risk of stating that something I learned in business school overvalues the returns.

  45. angech says:

    smallbluemike says:
    “What exactly is our species’ end game or exit strategy if it is not to wildly rein in consumption just as fast as we can, if not faster?”
    The Petri dish, SMB.
    One agar plate: the planet.
    One little group of bacteria [people] find a way to use the plate’s resources, sugar [oil], and start to grow. New colonies sprout up like mushroom bubbles around it, or those bubbles colonists use on Mars. The central hub grows bigger, richer, more complex, with the bacteria driving cars and dyeing their tendrils [hair] green. You can see them from the moon, or at least on the Petri dish, as a big white central spot.
    Then, climate change: the oil [sugar] runs out and the colony starts to decay and crumble. The lights go out.
    But we [they] had a good time.
    The epitaph and the answer, SMB. There was and never will be an end-game exit strategy of consequence for the Petri dish.
    Please feel OK to use a variation of this theme [TM angech and a lot of biologists].

  46. Jeff Harvey says:

    One of the salient facts ignored in evaluating the risks of calamitous events here is the role our species is playing in driving them. The eruption of a supervolcano or a massive earthquake are geological events that have no, or at the very most exceedingly small, anthropogenic inputs. The current warming is almost exclusively down to us. And of course this is not the only massive assault being inflicted by mankind on complex adaptive systems. We are simplifying ecosystems across the biosphere at an alarming rate, with a massive concomitant loss of biodiversity. Species and genetically distinct populations are the working parts of our ecological life support systems. The signs of ecological collapse are all around us, yet in supine arrogance we act as if it is nothing really to worry about because technology will come to our rescue. Unfortunately, most critical ecosystem services have no technological substitutes, and those which do are often extremely costly or ineffective. The collapse in insect populations, including pollinators, is a very worrying symptom. There are many others. If we continue along the current trajectory there is little doubt that the consequences will be dire. In other words, the chance of societal collapse at some point is virtually 100%. The problem that ecologists like myself face in predicting exactly when this collapse will occur comes down to several factors. First, some new technologies extend our ability to plunder systems across the planet and to delay or buffer the costs; second, our understanding of the relationship between biodiversity and ecosystem functioning is still relatively poor, given the immense complexity of these systems and the fact that they function non-linearly. So we really don’t know how much we can simplify these systems before they collapse or fail to deliver vital services and conditions upon which our species depends.

    The thrust of my argument here is that the risk factor of maintaining a business-as-usual approach for the welfare of future human populations is incredibly high. Yet we have known this for 30 years or even longer and have done little to change course. The recent updated World Scientists Warning to Humanity in Bioscience explained in detail exactly what little mankind has done since the first document was written in 1992 coinciding with the Global Summit on Biodiversity in Rio De Janeiro.

  47. Dave_Geologist says:

    Suppose I have to make a decision where there is a 5% chance of total disaster.
    I mean total. Generally speaking we don’t spend 100% of our time discussing this.

    But Steven, nobody, literally nobody, spends 100% of their time discussing the 5% chance. In the real world, as opposed to your straw-man world, the noise is all about ignoring the 5% chance. As with RCP8.5, for example. Which absent a suite of models between 6 and 8.5, means ignoring everything above 6, not just ignoring 8.5.

  48. Dave_Geologist says:

    I have no idea about the risk of a supervolcano.

    In the next hundred years? At least a thousand times smaller than the risk of ending up on RCP8.5, or of ECS turning out to be greater than 4.5C. And as has been pointed out, there’s nothing we can do about a supervolcano or about ECS, whereas there is something we can do about whether or not Trump, Putin et al. get us back onto RCP8.5.

  49. Dave_Geologist says:

    Or, as the banks say, prior performance is no guide to future performance. Why do they say that, I wonder?
    Oh, that’s right. The odds of your odds being right only increase if the event does not happen in the immediate future.

    I’m sure you’ve had this pointed out to you before, angech, but it bears repetition. The banking world is totally unlike the real, material world, because the former doesn’t have energy conservation laws. The latter does. It’s why random walks* are OK in finance but abject stupidity in bounded systems subject to conservation laws. They’re functionally equivalent to perpetual motion machines.

    To get a Yellowstone supervolcano eruption any time soon would violate the laws of thermodynamics, or require the violation of other laws of physics, like instantaneously changing the viscosity of the mantle lithosphere, or the melting point or latent heat of fusion of andesite magma. Ain’t. Gonna. Happen. Absent divine intervention, leprechauns, unicorns, skydragons or malevolent aliens. And far too complicated for them. Much easier to nuke us from orbit, or drop an asteroid on us.

    Or does the HHGG reference mean I should have read it as a Poe? Hard to tell sometimes.

    * Bounded random walks are OK. I’ve used them myself in fracture modelling. But you need to have a stopping rule for when all the strain energy is used up, and a rule for what to do at the boundaries (stop and leave some strain energy for next time; or reflect back; or start again at some random or specific location). And there’s generally a rule that says you can’t revisit where you’ve been before, or somewhere nearby. So in the end, not very random. Kinda like the difference between genetic drift and natural selection. Controlled randomness, if that’s not a contradiction in terms.

  50. Richard Arrett says:

    DK asked (I think as a joke) “what policy options do we currently have on that one? ;o)”

    In all seriousness, I think some people are talking about the desirability of getting all of our eggs out of a single basket by setting up a moon colony, or even better a Mars colony. I would argue that this is a policy option for planet-killing low probability events like the dino-killing asteroid strike.

    But DK is right of course – a Mars colony won’t help with a nova.

    We would have to get our eggs out of the solar system or really far out in the solar system.

    Still, in 500 years who knows where we will have colonies!

  51. Richard,
    The issue with something like that is that it could protect us as a species (send some people to a colony on another Solar System body) but doesn’t necessarily do much to protect us as individuals. The latter does have value, in my opinion.

  52. Joshua says:

    Jeff –

    One of the salient facts ignored in evaluating the risks of calamitous events here is the role our species is playing in driving them. The eruption of a supervolcano or a massive earthquake are geological events that have no, or at the very most exceedingly small, anthropogenic inputs.

    This. Plus, another way of looking at the same aspect is that it is a calamitous event that we can mitigate. That obviously distinguishes it from something like a supervolcano. I notice that the cohort that advocates doing nothing about the low-probability, high-damage risk of climate change overlaps quite a bit with a cohort that says we should consider military intervention against North Korea, so as to prevent the low-probability, high-damage risk of North Korea launching a nuclear strike.

  53. Martin Weitzman (and Gernot Wagner; and others) have been writing for years about how low-probability, high-impact scenarios can come to dominate economic analysis, leading to seemingly nonsensical implications for investing in mitigation and rendering traditional cost-benefit analysis practically useless.

  54. Ragnaar says:

    This image top right:
    Credit: Rowan T. Sutton (2018)

    Yes, I read the reservations about it being real, and my own comment above. Eyeball it and start multiplying the 0.5 C slices (likelihood times impact) starting at about 6.5 C. We do not get a lowering of Risk as shown in panel c). Yet panel c) shows what I’ll call a reversal of Risk value at about the 6.5 C point. This indicates the model breaking down at the extremes. There is some tiny probability Antarctica will surprise us in the extreme. Will that influence the expected value number from our model that we use for policy? In my above comment I threw out high values when looking at profit, which I can defend easily. Then, when we flip things over and instead use cost, tell us what to do with the extreme high values.

    http://smallbusiness.chron.com/estimate-expected-profit-53031.html

    It seems to me, and my math is rusty, that the image should show increasing Risk all the way to the right border given the range used.

  55. Dave_Geologist says:

    Well, well, I find myself agreeing with you, Ragnaar. Although, as you say, it is diagrammatic. To get a peak with those sorts of curves you’d need to have a maximum limit to CS as well as a minimum limit. Perfectly feasible in principle (although IIRC to get a strong upper bound you need to rely on modulz). Or have a peak or plateau in the impact curve. For example, if you’re looking at NPV and the CS range encompasses civilisation collapsing, there’s very little remaining value to lose at higher CS. If you’re using fatalities as your yardstick, the law of diminishing returns applies past about four billion. If you’re using some sort of quality-of-life measure, we might actually be better off in a post-Mad-Max, hunter-gatherer world than in a Mad Max world. If it’s just a cartoon it doesn’t imply the model breaking down at the extremes – it’s just arithmetic after all. Obviously quantifying the input values is hard at the extremes.

    I disagree about Antarctica. I expect it to surprise us. It’s already started to, AFAICS. And not in a reassuring way.

    For the very high impacts I tend to take the view I outlined as oil industry practice. On top of the numerical risk calculation you have an overlay that says “don’t let that happen under any circumstances”. Or only take that risk with very high level (e.g. company president or vice-president) approval. A one-in-ten chance of going bankrupt is more scary than the certain loss of 10% of the company’s value. You can do the same with cost, setting an unacceptable level, but for a fair comparison you shouldn’t be arguing about whether that’s 1%, 2% or 5% of GDP. That approach should be reserved for catastrophic costs and impacts. As a benchmark I would take USA government spending during WWII, which rose from 30% of GDP in 1941 to 79% of GDP in 1944. Obviously that was a different set of circumstances, where re-armament was underway before Pearl Harbour, and there was a degree of certainty that it was time-limited (i.e. the Allies would either win in a few years or lose in a few years; it wouldn’t be a Hundred Years War). The war-footing approach might be right for a short-term push like a full-on build-out of CCS infrastructure or a Hail-Mary deployment of high-atmosphere or orbital geoengineering to forestall imminent disaster. But for something like multi-decadal investments you should be debating how far above 10% your tolerance limit lies. IOW, at what level of personal or national cost do you say “that’s just too much cost/effort/social change: if saying no means that my grandchildren die young or are pauperised, so be it”.
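    The “one-in-ten chance of going bankrupt is more scary than the certain loss of 10%” overlay is what concave utility formalises. A sketch with log utility and made-up numbers (both options lose roughly 10% in expectation):

    ```python
    import math

    wealth = 100.0

    # Option A: certain loss of 10% of value.
    utility_certain = math.log(wealth * 0.90)

    # Option B: 10% chance of near-total ruin (keep 1%), 90% chance of no loss.
    utility_gamble = 0.9 * math.log(wealth) + 0.1 * math.log(wealth * 0.01)

    print(f"expected log-utility, certain 10% loss: {utility_certain:.3f}")
    print(f"expected log-utility, 10% chance of ruin: {utility_gamble:.3f}")
    # Near-identical expected losses, but under concave utility the ruin
    # gamble scores far worse -- hence overlay rules like "don't let that
    # happen under any circumstances".
    ```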

  56. Steven Mosher says:

    dave g

    How much time is spent discussing the impacts of RCP 8.5?

  57. Ken Fabian says:

    My 2c – the thing about space colonies is that they are only going to be possible (if at all) with a wealthy, healthy Earth economy supporting them, and I think they will be far more at risk of extinction (individually and possibly collectively too) than Earth, both during the long development of a healthy, wealthy space economy and after, when, perhaps, there is enough economy in space to be self-reliant. As a primary motivation for space activities, backing up humanity isn’t going to cut it; developing space activities have to be based on economics. I don’t see any opportunity for running away from our problems, nor for running away to something we can rely on to be better.

    Battery and EV technologies have been edging towards economic viability, and a major push by a cashed-up entrepreneur to bring economies of scale into play can lift them over the line; the technologies for viable space colonies, I think, still have such a very large gap as to be suitable only for giant leaps.

    I don’t see significant parallels with our climate problem except that it does intersect with economics – but as long as the externalities of our energy choices enjoy an enduring amnesty economic analyses and comparisons of the costs of those choices will have major distortions built in.

    Dealing with the existential level extreme risks may have economic constraints but – where we can bring ourselves to admit them and face them – they tend not to be free market economic constraints; nation states invoke States of Emergency. But for climate change here we are with people living lives of extravagant wealth and wastefulness being urged and encouraged to states of outrage at the possibility they might face proportionally quite small cost impositions to regain climate stability for the planet we depend on.

  58. The question has been asked: “How much time is spent discussing the impacts of RCP 8.5?”

    Answer: not enough to extinguish the risk that we will incur the impacts of RCP 8.5.

    Thanks for asking. I do think we could put off risk management of supervolcanoes and the sun going nova until we have handled the considerably higher probability that we could stumble into the world of 8.5.

  59. Solving a currently unsolvable problem in physics or math is a low probability, high impact outcome. Or is it? Being able to solve something is a binary operation. To put odds on it implies that you have a hint that it is solvable.

  60. Richard Arrett says:

    Ken:

    Yeah – space will have to make money. I agree.

    Still lots of ways to make money in space.

    1. Space mining. Just drop the finished metal anywhere you want on planet Earth.
    2. Space based solar – not intermittent.
    3. Move heavy polluting industry up to space and grow crops on the land freed up.
    4. Space hotels and moon resorts. Sex in 1/6 gravity will probably really sell (probably better than zero G). Plus – remember that Asimov book about flying on the moon. Could be fun!
    5. Space based nuclear (shoot the waste to the sun).
    6. Could grow lots of food in space (plenty of light).

    Anyway – it is all pie in the sky. But lots of rich people are working on making money in space, so it is going to happen. Eventually a backup human colony is going to get carried along, and then problem solved; it could even help Earth along the way.

  61. angech says:

    “To get a Yellowstone supervolcano eruption any time soon would violate the laws of thermodynamics, or require the violation of other laws of physics, like instantaneously changing the viscosity of the mantle lithosphere, or the melting point or latent heat of fusion of andesite magma. Ain’t. Gonna. Happen.”
    Sorry.
    No violation.
    Simple physics and thermodynamics.
    “In geology, the places known as hotspots or hot spots are volcanic regions thought to be fed by underlying mantle that is anomalously hot compared with the surrounding mantle. Their position on the Earth’s surface is independent of tectonic plate boundaries. There are two hypotheses that attempt to explain their origins. One suggests that hotspots are due to mantle plumes that rise as thermal diapirs from the core–mantle boundary.[1] The other hypothesis is that lithospheric extension permits the passive rising of melt from shallow depths”
    The crust is exceedingly thin under Yellowstone, with a mantle plume that has been there for 2-million-plus years. 3 eruptions of size in the last 2.1 million years. Overdue for the next. Could happen in the next 50,000 years or in the next 4 weeks.
    You know that. Why you wish to deny that it could [not would] happen soon is anyone’s guess, but probably a knee-jerk reaction. If I had not said it could, you would be out there arguing my case.

  62. angech says:

    By the way there has been a discussion of sorts. I thought the lady made some very pertinent points in her written presentation. Have not seen the other side yet. Are you putting up any of the points for discussion, ATTP?

  63. verytallguy says:

    lots of rich people are working on making money in space, so it is going to happen

    Yeah, baby.

    Nothing *ever* goes wrong when narcissists throw money at high profile business opportunities.

  64. Steven Mosher says:

    “Answer: not enough to extinguish the risk that we will incur the impacts of RCP 8.5.

    Thanks for asking. I do think we could put off risk management of supervolcanoes and the sun going nova until we have handled the considerably higher probability that we could stumble into the world of 8.5.”

    Weird that suddenly folks think they can put probabilities on pathways… shrugs.

    Simple question: given an over/under bet at a forcing of 6 additional watts by the end of the century, do you take the over bet or the under bet, and why?

    As ATTP notes, the one thing missing from this risk analysis is the emissions. But it would seem that emissions pathways are the thing people are most reticent to place probabilities on. Without this, it’s hard to calculate the actual risk.

    One approach we typically use is to estimate a baseline. Since the whole world (except Trump) agrees to Paris, Paris is a good place to start as BAU.

    Paris gets us to 3C of warming: will you take the over bet or the under bet? Will we see more than 3C or less than 3C?

  65. dikranmarsupial says:

    Richard wrote “DK asked (I think as a joke) “what policy options do we currently have on that one? ;o)””

    It was a “ha ha, only serious” kind of joke, the point being that it is not remotely analogous: unlike climate change (where we have our foot on the gas pedal, so we do have the policy option to at least lift off a bit), in those other cases there really is very little we can actually do about it.

    “In all seriousness, I think some people are talking about the desirability of getting all of our eggs out of a single basket by setting up a moon colony, or even better a Mars colony. I would argue that this is a policy option for planet-killing low probability events like the dino-killing asteroid strike.”

    Yes, the reason why people are only talking about it is that we currently don’t have the technological capability (never mind the political will) to do anything.

  66. dikranmarsupial says:

    Angech wrote:

    “To get a Yellowstone supervolcano eruption any time soon would violate the laws of thermodynamics, or require the violation of other laws of physics, like instantaneously changing the viscosity of the mantle lithosphere, or the melting point or latent heat of fusion of andesite magma. Ain’t. Gonna. Happen.”
    Sorry.
    No violation.
    Simple physics and thermodynamics.

    You were responding to Dave_Geologist. Did it occur to you that perhaps, just perhaps, he might know more about this than you do (or could find out by doing a bit of googling to quote-mine something off the web that supported your position)? Sorry angech, hubristic bullshit does not impress.

    Note the “any time soon” in Dave_Geologist’s comment. I don’t think he is suggesting there will never be another Yellowstone supervolcano, just that there isn’t going to be one on a timescale that would justify taking action about it at the current time.

    “You know that. Why you wish to deny that it could [not would] happen soon is anyone’s guess, but probably a knee-jerk reaction. If I had not said it could, you would be out there arguing my case.”

    Pathetic trolling.

  67. dikranmarsupial says:

    “How much time is spent discussing the impacts of RCP 8.5?”

    How many presidents of the USA seem to think climate change is a hoax and that business as usual would be a good idea? While there are countries (especially high emitters) with politicians who would be happy to see fossil fuels exploited as much as possible, RCP 8.5 is relevant AFAICS (we may not be on that track at the moment, but we are capable of getting back onto it).

  68. Jeffh says:

    Richard seems to think that we will be around in 500 years. The way we are going, we will be lucky to make it intact out of this century. This thread talks about low-probability, high-impact; what about high-probability, high-impact scenarios? As I said yesterday, the empirical evidence is staring us in the face. For some ludicrous reason we as a species just choose to ignore it.

  69. Jeff,
    Have you seen this article by Adam Frank? If so, what do you think of it?

  70. Dave_Geologist says:

    @Steven: how much time is spent discussing the impacts of RCP 8.5?

    Nowhere near 100%. And the recent RCP8.5 thread was full of comments saying it shouldn’t be as little as 5%, it should be zero.

  71. Dave_Geologist says:

    Paul: the thread isn’t about putting odds on whether or not the problem is solvable. It’s about putting odds on whether the problem will occur, and multiplying them by the impact if it does occur. A PETM outcome would indeed be unsolvable. But we can still put odds on whether or not it happens, given particular combinations of emissions pathway and ECS.

  72. izen says:

    I apologise for this … quibble, but while I think I understand what low/high probability is (DK?), I am struggling with ‘event’ and ‘impact’.

    How many high probability, low impact events add up to a low probability, high impact event?
    Impact could be measured in fatalities or money. But research indicating extreme events are getting cheaper in wealthy nations might mean that if we get rich enough we make a profit from disasters.

    Speculation about the impact of events might be better informed by historical examples. Extreme heatwaves, floods and storms have killed hundreds of thousands in S E Asia over the last decade due to a measurable increase in extreme events.
    Linear extrapolation might be simplistic, but it seems hard to claim it would undoubtedly be an underestimate.

    Are medium impact events that become higher probability a problem? If last year’s Atlantic hurricane season became the ‘norm’ every two years, would Houston adapt to the flooding? Would Puerto Rico ever regain a power grid?

  73. Dave said:

    “Paul: the thread isn’t about putting odds on whether or not the problem is solvable. “

    I just meant in general. For example, what are the odds that an analytical solution to the Navier-Stokes equations is even possible? The impact would be significant, since natural climate variations would then become predictable.

    Izen said:

    “How many high probability, low impact events add up to a low probability, high impact event?”

    This is a feature of fat-tail statistics. To take crude oil as an example, there are enough low-volume stripper wells in the world that they essentially equal the production of super-giants.

    Fat-tail odds go as 1/X, where X is the payoff, so that (1/X)·X ~ 1 and the overall payoff is evenly distributed across scales.
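    That 1/X scaling can be checked numerically (a sketch added for illustration, not Paul’s method: a truncated Pareto tail with exponent alpha = 1, cut off so the mean exists). Each decade of payoff size then contributes a roughly equal share of the total:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Truncated Pareto with tail exponent alpha = 1: density ~ 1/x^2 on
    # [x_min, x_max], so the exceedance probability P(X > x) falls off ~ 1/x.
    x_min, x_max, n = 1.0, 1e6, 10_000_000
    u = rng.uniform(size=n)
    x = 1.0 / (1.0 / x_min - u * (1.0 / x_min - 1.0 / x_max))  # inverse CDF

    total = x.sum()
    for lo in 10.0 ** np.arange(6):
        band = (x >= lo) & (x < 10 * lo)
        print(f"payoffs in [{lo:.0e}, {10*lo:.0e}): {x[band].sum() / total:.1%}")
    # Roughly equal shares per decade: many small "stripper wells" add up
    # to about as much as the rare super-giants.
    ```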

  74. Dave_Geologist says:

    angech, re Yellowstone.

    1) We know Yellowstone is primarily hot-spot not extension (although that is/has been going on – I’ve published (tangentially) on Basin and Range extension). The mantle plume has been imaged using seismic tomography.

    2) In the context of existential threats, we can ignore large eruptions like the one 70,000 years ago. To be on a par with GRBs, asteroids, or PETM-scale warming, we’re talking super-eruptions like the ones 2 million, 1.3 million, and 630,000 years ago. And BTW even they were only locally devastating. No mass extinctions, no threat to human existence. Toba, 70,000 years ago, was on a similar scale and has been suggested as the cause of a human genetic bottleneck, i.e. we almost went extinct. But no global mass extinctions like those associated with past warming events. Because of Yellowstone’s location, in the continental USA, it would cause great economic disruption, probably on a par with the Great Depression. And if a lot of ash is injected into the stratosphere, great agricultural disruption. So probably a Syrian refugee crisis squared, at a time when we can least afford it. But not an existential threat, unless it prompts some idiot to push the nuclear button.

    3) The mantle plume under Yellowstone is complex because it interacts with and is deflected by a subducting slab before it penetrates the crust. Partial melt from the plume rises through the crust in a diffuse way to form the Snake River basalts (you can find a non-paywalled version using Google Scholar). A basaltic partial melt reservoir is ponded in the lower crust, and overlain by a rhyolite partial melt reservoir formed by fractionation of the basalt and partial melting of the crust. The rhyolite reservoir underlies the caldera at 5 km to 16 km depth, with the shallowest part to the east of the caldera, under the current hydrothermal activity. Using the seismically imaged reservoir volumes, and a melt fraction based on the seismic velocity anomaly, the melt content of each reservoir is estimated at ~900 cubic km. Only the shallow one would be expected to participate in an eruption. Neither is a pond of lava. The melt fraction of the upper one has been estimated at 5% to 32%, with this study saying 9%. To erupt a supervolcano, that melt has to migrate shallower and pond at much higher melt fractions (>0.5). The last Yellowstone supervolcano erupted 1000 cubic km of lava, the biggest one 2 My ago erupted 2500 cubic km. To make a similar one you’d have to migrate all the melt to a shallower reservoir (ain’t gonna happen) and erupt all of that in one go (ain’t gonna happen). What you really need to do is get a lot more melt from the plume into the crust, which requires a hotter or larger plume (a pulsing plume – we’d notice), or wait a really long time for the existing reservoirs to get a lot bigger (we’d notice).

    4) There is some modelling which suggests we’re past peak Yellowstone. Intriguingly, modelled peak activity roughly coincides with the recorded supervolcano eruptions. So maybe there will never be another one, although their hottest model shows a gentle decline from about 3 My ago, so maybe we can expect the next one to be a bit smaller than the last one.

    5) That’s not to say smaller eruptions won’t be damaging. Which is why there’s ongoing research and monitoring. Mount St Helens only erupted 1 or 2 cubic km but was locally devastating. Tambora erupted about 30 cubic km and had a global impact (the year without a summer). Several hundred thousand deaths have been attributed to Tambora. Probably more than AGW-caused climate change has killed so far, but probably less than an order of magnitude more. So a long way from existential.

    But you’re right to some extent. I wouldn’t have agreed with you, but I might have ignored it if it wasn’t such an obvious squirrel. Dog-whistle version: “why worry about RCP8.5 when we don’t worry about supervolcanoes or GRBs?”. The answers have been given already: (a) one is under our control, the others are not and (b) we can still get an RCP8.5-type climate outcome from RCP6 if ECS is at the high end. That unfriendly uncertainty monster again.

  75. Magma says:

    @ dikranmarsupial & Dave_Geologist

    As you may know, the commenter who wrote

    3 eruptions of size in last 2.1 million years. Overdue for the next, Could happen in the next 50,000 years or in the next 4 weeks. You know that. Why you wish to deny that it could [not would] happen soon is anyone’s guess but probably a knee jerk reaction

    has a history of Just Asking Questions and sea lioning.

  76. Dave_Geologist says:

    @rustneversleeps, I don’t see Weitzman and Wagner as arguing that we over-respond to low-probability high-impact events (and by inference should neglect them in assessing the risks from AGW?). Quite the reverse. To quote Wagner:

    We know enough to act now. What we don’t know should prompt us to even more decisive action. … What is scarier still is the uncertainty about the truly extreme outcomes. Our own calculations estimate that there is a roughly 5 percent to 10 percent chance that the eventual average temperature could be 6 degrees Celsius higher, rather than 3. What this would mean is outside anyone’s imagination, perhaps even Dante’s. We can obsess about all of these scenarios. A rise of three degrees would be bad enough. But when you factor in the uncertainty, there is even more reason to put global warming on an even more sharply decreasing path.

    IOW the uncertainty monster is not our friend.

  77. Magma says:

    Dave_Geologist — Dog-whistle version: “why worry about RCP8.5 when we don’t worry about supervolcanoes or GRBs?”.

    Not to mention asteroid impact or — my favorite on the Cosmic Surprise! apocalypse scale — a wandering black hole or cold neutron star passing through the inner solar system.

  78. dikranmarsupial says:

    Indeed, the hubris was quite astonishing/amusing in this case though!

    One wonders how a medical doctor would react to a (say) geologist telling them that

    “Vaccination causes autism. You know that. Why you wish to deny that it could [not would] happen is anyone’s guess but probably a knee jerk reaction”

    Or some other such medical myth.

  79. I don’t approach global warming as a gambling opportunity. I won’t be around at the end of the century to claim any winnings. I would like to do everything in my power to make the world a good place for my children and grandchildren. Some of them may be around at the end of the century. I don’t think our situation is a gambling opportunity; I think it is a sustainability problem, or predicament.

  80. Dave_Geologist says:

    Just Asking Questions and sea lioning – I know Magma. But having made the claim, based on memory, I decided to check it out. Don’t want to make a mistake when arguing from authority. And it gives me an excuse to write about migma 😉 . Not the fusion reactor (I’d never even heard of that until I googled). It’s an informal term for a crystal-rich melt or crystal mush, usually quite close to its site of formation. From “migmatite”.

    And it was interesting to follow up on. I already had the papers I linked to, but found a few more of interest. It also prompted me to dig out my PhD thesis, as one thing I worked on was migmatite formation by melt extraction from a partially melted matrix. I did some material balance calculations using major elements and trace-element and refractory mineral markers. The discrete veins or pods where melt had ponded (that’s the stuff that could potentially be squeezed out into a shallow magma chamber) comprised about 85% melt and 15% entrained restite (solid, unmelted material, mostly refractory grains). The local residual material (the part that stayed solid and in place) was about 70% restite, 30% melt. Which suggests that pretty much all the melt that could be squeezed out, had been squeezed out. The strength of a crystal mush increases by several orders of magnitude below about 30% melt, and the remaining melt freezes in place much faster than it could rise by buoyancy alone.

    On the hundreds of metre scale, the outcrop was about 5/6 host rock, 1/6 veins, so the bulk composition was about 60% restite, 40% melt. An additional, unquantified amount of melt had of course left the building. But it shows that quite a lot stays behind, so getting the full 900 cubic km out of the shallow Yellowstone magma chamber is a non-starter. On that basis the 9% estimate from the Science paper looks small, but they’re only able to resolve things on a kilometre scale. Rather than one big pool of magma, I’d envisage multiple, interconnected small pools, dykes and sills. I eyeballed a photo from another part of my field area as >80% melt. That was in an area where the country rocks had undergone a much greater degree of melting (<10% intact rock left, the rest melt and rafts of partially melted country rock and restite). Based on P,T estimates I made, this was going on at about 15-20 km depth, comparable to the base of the Yellowstone upper magma chamber. So I reckon I've got a pretty good idea of what it would look like down there 🙂 .

  81. thank you to D the G for the thumbnail on Yellowstone! I am blessed to hang out in spots where I occasionally get to read this kind of stuff. I love Yellowstone except for the traffic and tourism impacts. That said, I do think the YellowStoners/Yosemitites are a better class of tourist than what you might encounter in Bali or Cabo.

  82. Dave_Geologist says:

    Sooo, dragging myself on topic. The other thing we did in my industry days, in addition to using (a) R = P x I, and (b) overlaying that with red lines or the need for very high-level approval for certain risks, regardless of probability, was (c) identifying mitigations and repeating the process for the mitigated risk. They had to be real mitigations, and you still had to show before and after so people could see what would happen if the mitigation failed. And generally you’d have to cancel the job if you were unable to put the mitigation in place. For example, you’re allowed to exceed the normal operating pressure of an injection well for the duration of a test. Mitigations would include things like having a temporary meter on the wellhead, not at a manifold some distance away or at the pump outlet, with wellhead pressure based on a frictional-loss calculation. Having 24/7 eyes-on for injection rate and pressure, with instructions for what to do if they go outside certain bounds or exhibit erratic behaviour. Having the right experts on-call. If any of those things fall through on the day, the test is postponed.

    Arm-waving stuff like “oh we’ll cope”, “we’ll keep an eye on it”, or “we’ll just build dykes or levees” (unless you have a feasibility study and costing) wouldn’t cut it. And for financial-loss risks, you’d have a with- and without-mitigation EMV calculation that incorporates the cost of the mitigation.
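
    A minimal sketch of that bookkeeping (Python; every number, name and threshold below is hypothetical, purely to show the before/after arithmetic, not anyone’s actual workflow):

    RED_LINE_IMPACT = 100e6  # impacts above this need top-level sign-off, whatever P is

    def emv(probability, impact, mitigation_cost=0.0):
        """Expected monetary value of a risk, plus any mitigation spend."""
        return probability * impact + mitigation_cost

    impact = 50e6                                      # assumed loss if the test goes wrong
    before = emv(0.05, impact)                         # unmitigated: 5% odds -> $2.50M exposure
    after = emv(0.005, impact, mitigation_cost=0.2e6)  # metering + 24/7 eyes-on -> $0.45M

    print(f"EMV before mitigation: ${before / 1e6:.2f}M")
    print(f"EMV after mitigation:  ${after / 1e6:.2f}M")
    if impact > RED_LINE_IMPACT:
        print("red line: needs very high-level approval regardless of probability")

    Showing both rows is the point Dave makes: everyone sees what is at stake if the mitigation falls through, and the job is postponed if it does.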

  83. Dave_Geologist says:

    And on the question of which tail risk we must avoid: for me it’s a PETM. The hydrological cycle went crazy. Mass hunger, masses of refugees and wars would be the consequence, not just financial losses from flooding or drought-provoked fires. From my reading of the geological literature, the world was pretty normal at 3-4°C above pre-industrial, but the climate went crazy at 7-8°C above. The transition happened so fast we don’t know where the tipping point was, just that it was between about 4°C and 7°C. So to be confident of avoiding a repeat, we need to stay below 4°C. A 5% chance of going above would be unacceptable to me, so RCP8.5 is out. Even the central estimate is at 4°C by 2100. RCP6 is OK to 2100 (1.4°C to 3.1°C), but we need to be careful after 2100 because temperatures are still rising then. But I think it would be fair to say we can let our grandchildren modify the policy as we get further down the road and can narrow the uncertainty range on emissions and (perhaps) ECS.

    To be clear, I’m not advocating RCP6 as a policy. Far from it. I’m asserting that it’s the most emitting pathway we can get away with without incurring an unacceptable risk of catastrophic climate change, as opposed to damaging or severely damaging climate change.

  84. Hyperactive Hydrologist says:

    Dave,

    The baseline for the RCP 6 range (1.4°C to 3.1°C) is 1986–2005, so you probably need to add about a degree to get to a pre-industrial baseline. Therefore RCP 6 is closer to 2.4°C to 4.1°C. Also, this is the average over the 2080–2100 period, so by 2100 it might be ~0.2°C higher.
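
    The same adjustment as straight arithmetic (a sketch; the ~1°C baseline offset and ~0.2°C end-of-century increment are the approximations above):

    # Shift the RCP6 warming range from a 1986-2005 baseline to pre-industrial.
    rcp6_lo, rcp6_hi = 1.4, 3.1  # 2080-2100 mean vs 1986-2005, deg C
    baseline_offset = 1.0        # approx. warming, pre-industrial to 1986-2005
    by_2100_extra = 0.2          # still warming, so 2100 itself sits a bit higher

    print(f"2080-2100 vs pre-industrial: {rcp6_lo + baseline_offset:.1f}-{rcp6_hi + baseline_offset:.1f} C")
    print(f"by 2100, top end: ~{rcp6_hi + baseline_offset + by_2100_extra:.1f} C")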

  85. Dave_Geologist says:

    A bit more on Yellowstone (I was skimming, and saving some of the other references I found today). This paper identifies buoyancy as a key trigger for a supervolcano. The magma chamber under Yellowstone is thick enough (4-8 km, depending on volatiles), but

    At present, the Yellowstone magma contains a limited fraction of partial melt (about 10–32%; refs 28, 29), resulting in a buoyancy overpressure below the minimal critical overpressure to initiate an eruption, despite the large magma chamber thickness (Fig. 3b, red area). However, petrographic evidence indicates that a rejuvenation of the mush, that is, an increase in temperature and melt fraction, occurred before several past super-eruptions (30) and a similar rejuvenation of the Yellowstone magma chamber would strongly increase the buoyancy overpressure and potentially lead to a super-eruption.

    So you need to add heat (to melt more of the mush), or add more melt from below. Both are subject to thermodynamic and material-balance constraints and can’t happen overnight. If you got the melt fraction above 50%, which also makes it orders of magnitude less viscous, you’d also get to 5000 cubic km of magma, which is enough to supply a supervolcano. But that requires a doubling to a quintupling of the current liquid load. The velocity of seismic waves passing through the magma chamber would be detectably changed as melt fraction increased, and the ground surface would be elevated by thermal expansion, melting and material addition. Like I said, we’d notice, especially now it has our attention.

  86. Dave_Geologist says:

    And thinking a bit further about that last paper (in the shower, not quite Archimedes but hey 🙂 ). My money would be on temperature as the rejuvenator. As I said, it’s hard to get the final 20-30% of melt out of a crystal mush. Other than by explosively disrupting the mush framework. But that would eject crystals as well as melt (which is fine, they’re called phenocrysts). So to get down to 9% melt, you have to have some of the solid as frozen melt rather than country rock. I.e. the magma chamber has cooled down in the last 630,000 years.

    So you’re probably looking at a pulsing-plume type cause rather than slowly inflating a balloon. But the interaction of the plume with a rolling-back subducting slab adds another mechanism: dynamic disturbance of the plume by interaction with the slab. For example, an increase in roll-back rate, perhaps due to detachment of its deepest part, would “suck” the plume upwards and towards the upper part of the slab, imposing hotter mantle onto the base of the crust. But that sort of thing happens VERY SLOWLY.

  87. izen says:

    Is there something odd about the discussion of low-probability, high-impact events from climate change having drifted onto a Yellowstone eruption?

    Something with a MUCH lower risk and LESS impact on the climate, at least in the medium to long term.

  88. Dave_Geologist says:

    I couldn’t resist izen 😉 . It was just too cool, and an opportunity to revisit my twenties 🙂 . And the outcrop with melting in action is one of my all-time favourite photos. To a deterministic, process-oriented person like myself, seeing a process frozen in mid-step is just awesome 🙂 . Up there with a photomicrograph I took of the same thing happening on a mm scale. The geological equivalent of photography showing that tens of thousands of years of art was wrong: galloping horses don’t leave the ground with all four legs outstretched, but with them gathered beneath the body.

    And it can perhaps be used to show the it’s-all-too-difficult brigade that even ‘acts of God’ are amenable to rigorous scientific investigation. People have looked at asteroid statistics, for example, so we do have a good idea of the risk of being hit next year by an asteroid above a certain size.

  89. Dave_Geologist says:

    HH, ah, the old starting-point point. I shouldn’t have said pre-industrial and should have just left it vague. In practice, when we’re looking at the palaeo data, I don’t think we have the accuracy to worry about whether we use 1850 or 1950 or 1990 as our starting point. Or should that be precision 😉? We can tell that it got x°C warmer during an event with a lot more confidence than we can tell that the pre-event temperature was y or y+1 °C warmer than today.

    For a proper analysis of the tipping point you probably have to use some sort of Bayesian approach and assume a prior for the tipping-point PDF in the gap. The tipping point is probably not 4.1°C, for example, because that’s so close to the pre-PETM temperature that natural variability would probably have kicked it off earlier. And the impact was so obvious that now we know what to look for, it would have been noticed. And it’s probably not 6.9°C, because AFAICS the whole world changed, and you’d have expected some areas to be unaffected. Remember there were no ice caps back then so we can’t appeal to albedo-driven bistability. It has to be CO2 and/or CH4, so roughly linear in forcings. You’d probably end up with a peak somewhere in the middle, but broader and with sharper cutoffs than a normal distribution. Perhaps a scaled and shifted beta distribution with cutoffs (4,7), (3,7), (4,8) or (3,8). That comes to mind because stretched betas were quite popular in some geological applications where you wanted something vaguely normal but knew you had a fixed floor and roof.
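
    A sketch of what such a prior could look like (Python; the shape parameters a = b = 2 are an arbitrary choice, just to give a broad hump with hard cutoffs at the (4,7) bounds):

    # A beta distribution stretched and shifted onto the (4C, 7C) gap, as a
    # candidate prior for the PETM tipping point: vaguely normal in the middle,
    # identically zero outside the floor and roof.
    from scipy import stats

    floor, roof = 4.0, 7.0
    prior = stats.beta(a=2.0, b=2.0, loc=floor, scale=roof - floor)

    print(f"P(tipping point < 5C) = {prior.cdf(5.0):.2f}")  # ~0.26
    print(f"P(tipping point < 6C) = {prior.cdf(6.0):.2f}")  # ~0.74
    print(f"median ~ {prior.ppf(0.5):.1f}C")                # 5.5C

    Swapping (4,7) for (3,7), (4,8) or (3,8) just changes loc and scale; larger a and b would sharpen the peak.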

    We should be able to narrow down the tipping point by looking at the less extreme, later ETMs, but I’ve not seen sedimentological work related to them. The run-up to the PETM has been well studied, but not AFAICS the follow-on once things got back to normal. Maybe that means they were OK hydrologically, but maybe people just haven’t looked.

    We’d probably get away with RCP6 (as long as we stabilised temperatures post-2100, and remember we’re talking catastrophic as opposed to “just” severe impacts), but we’d be skating on thin ice and we wouldn’t know how thin the ice was until we broke it. So very much not my preferred option. If the Genie Of The Lamp granted me a 200-year lifespan and we followed RCP6, I’d be very worried about conditions during my later years.

  90. Jeffh says:

    ATTP, yes, excellent article by Frank. There is little doubt that we are living in the Anthropocene now, and Frank nails it when he says that life on Earth will persist irrespective of the combined effects of the human assault. The main point that I have long stressed in discussions and lectures is that humans can (and already do) greatly reduce biodiversity in ways that are significant enough to cause considerable damage and impair the functioning of ecosystems across the biosphere. There will be severe consequences for many species, but the biggest victim of our malfeasance may well be ourselves. No species actually depends on nature more than we do; indeed, natural systems generate a suite of conditions that permit us to exist and persist.

    After we are gone, the damage we have inflicted will generate a huge array of vacant niches that will generate immense evolutionary innovation, much as occurred at the Cretaceous-Tertiary boundary when dinosaurs disappeared. Within 5 to 10 million years diversity will be restored, minus the rapacious bipedal primate that precipitated the latest mass extinction in the first place.

    As I said above, I see the current assault on nature as an example of a high risk-high cost scenario. High risk in that we know we are destroying our own ecological life support systems, and high cost in that our longer term survival is at stake. Yet we seem intent on continuing to fiddle while Rome burns. It is collective madness.

  91. Jeff,
    Thanks, that was roughly how I interpreted it. I saw some criticism, though. I think some felt that it essentially suggested that worrying about the extinction of species was counter-productive and that it absolved us of making moral judgements other than those associated with ourselves.

  92. @DaveGeologist

    I am not citing Weitzman, Wagner and others in the way you are implying. My point was, somewhat quizzically: why is this paper’s highlighting of probability × impact = risk (especially at the tails) notable?
    It’s been clear for quite a while – and, apparently, it’s increasingly been borne out in the research – that if the damage function (impacts) rises with temperature increase (as a proxy metric) faster than the likelihood of those increases falls, then we have, let’s say, an inconvenient risk function from an economics point of view (and more)…
    Isn’t that the core insight of the paper in the original post? Or is there something else… ?

  93. izen says:

    @-Jeffh
    “No species actually depends on nature more than we do; indeed, natural systems generate a suite of conditions that permit us to exist and persist.”

    The ‘Natural’ systems in place when hom sap was a minor twig on the hominid branch >100,000 years ago would not permit us to persist and exist in our present numbers.
    It was only by eliminating most of the mammalian mega-fauna, and burning down forests then replacing them with a far greater number (but fewer species) of domesticated animals and plants that we reached our present position. We are highly dependent, but on a man-made ecology.

    If you have some Rousseauian romantic attachment to a ‘Nature’ from before domestication and agriculture then this is viewed as a Bad Thing.

    The trade-off is a vastly increased food supply from a reduced biological diversity. That carries the risk of crop or animal disease having a significant impact because of the lack of variety, although in practice the variety among domesticated animals and plants has been sufficient to avoid a significant pandemic in any food species.
    It is humans, with the smallest genetic-diversity-to-population-size ratio, who risk, and have suffered, major plagues.

    Obviously there are challenges with shifting growing regions and seasons, along with shifting weather patterns from AGW. They are more likely to be logistical rather than existential.
    Total ecological collapse is a rare feature in Earth’s history. Climate change can drive rapid evolutionary diversification, as seen with hominids over 3 million years of glacial cycles, but those transitions did not destroy the ecosystem. That AGW could do so seems a risk somewhat smaller than Yellowstone blowing its top.

  94. angech says:

    dikranmarsupial says: June 14, 2018 at 1:49 pm
    Indeed, the hubris was quite astonishing/amusing in this case though! One wonders how a medical doctor would react to a (say) geologist telling them that”
    like?
    Dave_Geologist says: June 8, 2018 at 5:33 pm
    “dpy, are you claiming that dietary fat does not contribute to CHD and only sugar does, or that dietary fat and sugar both contribute? ”
    Funnily enough I am happy that a geologist cares about other subjects enough to make a contribution, and if I can help improve his knowledge (or, even better, he can improve mine) I would do so. Denigrating people’s knowledge and opinions purely because their specialty is in one field is not very nice, but feel free to keep doing so.

  95. Richard Arrett says:

    I was not aware we were limiting the low probability high impact events to climate change.

    I thought I was on topic, as the events I brought up were in fact low probability and high impact events.

    I thought it was interesting to think about lower-probability but higher-impact events, which may have the same area or even more under the curve than some of the higher-probability, lower-impact events. I mean, what do you do when the impact is so high that the area under the curve is higher than all the areas from any human-made climate impact? Just curious I guess.

  96. angech says:

    SM “My vote when lost in the woods, emit less, don’t hurt the poor”
    JC 15 What makes most sense to me is Climate Pragmatism, which has been formulated by the Hartwell group. Climate pragmatism has 3 pillars:
    Accelerate energy innovation
    Build resilience to extreme weather
    No regrets pollution reduction

  97. angech says:

    izen says: June 14, 2018 at 7:11 pm Is there something odd about the discussion of low-risk, high impact events from climate change having drifted onto a Yellowstone eruption.? Something with a MUCH lower risk and LESS impact on the climate, at least in the medium to long term.
    Steven Mosher says: June 14, 2018 at 5:50 am re “smallbluemike I do think we could put off risk management of supervolcanoes and sun going nova until we have handled the considerably higher probability that we could stumble into the world of 8.5.”
    “weird that suddenly folks think they can put probabilities on pathways.. shrugs
    Simple Question: Given an over under bet at a forcing of 6 additonal watts by the end of the century, do you take the over bet or under bet and why? As ATTP notes the one think missing from this risk analysis is the emissions. But it would seem that emissions pathways are the thing people are most reticent to place probabilties on. Without this its hard to calculate the actual risk. Paris gets us to 3C of warming: Will you take the over bet or under bet? will we see more than 3C or less than 3C?”
    At the moment I need a lot more proof; without that, natural variation plus a little CO2 gives < 3.0 C.
    DG Nothing you have said excludes a very large Yellowstone eruption in the next few months though the overall risk is low the factual,very serious high impact risk exists. Thank you for thoughts on why it cannot happen. If I said that you would be busy using the same facts to prove it could happen. Sad.

  98. izen says:

    @-Richard Arrett
    “I was not aware we were limiting the low probability high impact events to climate change.”

    Neither am I
    But a Yellowstone explosive eruption is a type of event (like an asteroid impact) that is a one-off, instant, regional impact. While the US (or whatever hosts the impact site) would suffer, the climate disruption would be short-lived and small compared to what AGW is doing.
    And it has a much lower probability than AGW, which is already having an effect.

    I think it would be useful to compare how we prepare, insure and adapt to low probability high impact events in the past and present that are similar, if less frequent or severe, to the type of events that are possible with AGW.

    Covering the US Midwest in 3 ft of ash in a day and causing a few cold winters is not a likely outcome of climate change. Drought, floods, storms and the consequences of sea level rise seem more probable as low probability high impact AGW influenced events.

  99. Richard Arrett says:

    izen:

    I understand your opinion.

    But both a super volcano eruption and a large asteroid strike will change the climate, and while of lower probability, I was wondering if the area under the curve (probability times impact) would be higher than some of the more common climate change impacts, like say 11 inches of SLR in 100 years. If we are sorting by area (probability times impact) I wonder whether some of the very very unlikely events times impact (area) will float to the top because of a larger area than some of the more likely but much less impactful events.

    As I said – I was just curious.

  100. dikranmarsupial says:

    “I was not aware we were limiting the low probability high impact events to climate change.”

    Fair enough, but the point remains that if you perform a cost-benefit analysis then that needs to take into consideration the costs of the action and whether any action is technically feasible. There isn’t (unlike climate change) any action we can feasibly take to prevent a Yellowstone super-volcano eruption, so the cost of action is essentially infinite.

  101. Richard,
    As far as asteroids go, we are tracking them. In other words, we are doing something so as to be aware of the risks.

  102. dikranmarsupial says:

    angech wrote: “Funnily enough I am happy that a geologist cares about other subjects enough to make a contribution and if I can help improve his knowledge or even better he can improve mine] I would do so.”

    more laughable hubris.

    “Denigrating people’s knowledge and opinions purely because their specialty is in one field is not very nice but feel free to keep doing so.”

    Nobody is denigrating anybody’s lack of knowledge, I was denigrating your hubris in telling a Geologist he was wrong on an issue in geology, and doing so in fairly insulting terms:

    “You know that. Why you wish to deny that it could [not would] happen soon is anyone’s guess but probably a knee jerk reaction”

    Suggesting that someone is being disingenuous and disagreeing with you as a “knee jerk reaction” is not very nice either, so your tone trolling is transparent.

    BTW have you admitted that your assertion was wrong? Dave_Geologist has explained why that is the case.

  103. dikranmarsupial says:

    ah, this was in moderation while I wrote my previous post

    “DG Nothing you have said excludes a very large Yellowstone eruption in the next few months though the overall risk is low the factual,very serious high impact risk exists. Thank you for thoughts on why it cannot happen. If I said that you would be busy using the same facts to prove it could happen. Sad.”

    so no, angech doubled down, and again accuses Dave_Geologist of being disingenuous (which is, to use angech’s words “not very nice”).

  104. Dave_Geologist says:

    DG Nothing you have said excludes a very large Yellowstone eruption in the next few months though the overall risk is low the factual,very serious high impact risk exists. Thank you for thoughts on why it cannot happen. If I said that you would be busy using the same facts to prove it could happen. Sad.

    The low melt fraction excludes it, angech. As attested to by multiple sources. And it is not remotely as serious in terms of impact (outside the USA) as following RCP8.5. And many orders of magnitude less serious than blundering into a PETM world. Both of which are many orders of magnitude more probable than a Yellowstone supervolcano in the next few hundred years. How much attention should we devote to it, outside of Hollywood? About a million times less than we should devote to AGW.

    If you’re worried about volcanoes, worry more about the tsunami that would form if a large chunk of the Canary Islands fell into the sea. That is a critical-state phenomenon where the stored energy is already in place. Not one where it has to be added.

    On your final sentence; no I categorically would not. If I appear to be picking on you, it’s only because you persistently get the science, statistics and logic wrong. Change that, and I’ll be your firmest supporter.

  105. Dave_Geologist says:

    Richard, the answers to your questions about other low-probability events are simple and have already been pointed out. I’ll summarise.

    1) Split them into events we can directly influence and/or prevent, and those we can’t but can only mitigate or adapt to. AGW is in the first category, supervolcanoes, GRBs and asteroids are in the second. Why? Because the available policy responses are different.

    2) The second category can be split into those where monitoring can give us some lead time to prepare. GRBs: no. Supervolcanoes and asteroids: yes, and we’re already doing it. Unlike AGW, there’s no need to chivvy recalcitrant policymakers or counter propaganda campaigns claiming that they don’t exist or are a Chinese hoax. In the case of Yellowstone, recent research has given increasing confidence that the risk is not imminent, probably won’t be for many thousands of years, and we already have the technology to recognise an increasing risk.

    3) Crunch the numbers. It’s been done for AGW, and even contrarian and lukewarmer economists agree that we’re past the point of net benefit and into the realm of net cost, at least in terms of committed warming from the current CO2 load. Probably why the contrarians have given up on extolling the benefits of a greener world and moved on to “it’s all too difficult”, “it’s not fair on the USA” and “what about the poor in Africa?”. Demonstrate your assertion that the area under the P x I curve is greater than that for (say) RCP8.5. I don’t believe it.

    Supervolcanoes? Not even close. And no, they don’t change climate, only weather. The few years or even decades while the dust rains out is too short to count as climate. After that it’s back to normal. And Yellowstone is probably too far north to affect both hemispheres, so not even global. Are there any major or lesser mass extinctions associated with supervolcanoes? Nary a one. With global warming? Pretty much every one. Even the K/T. Left to its own devices, the Deccan Traps LIP might have done the job on its own. You will find nonsense on the internet conflating supervolcanoes with the Siberian Traps LIP and the Great Dying. Again, not even close. Basalt not rhyolite, effusive not explosive, thousands of times bigger, and it was the CO2 and other gases that did the killing (via global warming and maybe some poisoning), not the ash cloud. You’re left with a 1 in 100,000 chance of an event like Toba that maybe killed about half of the human population but had no other long-term ecological impact (so on a par with the Black Death in Europe but smaller than the spread of smallpox in the Americas). Area under the P x I curve? Minuscule. Orders of magnitude less than AGW under anything approaching BAU. Even I is probably bigger for AGW, let alone P x I, because I includes duration (thousands of years of committed AGW, vs. years or decades of ash). P x I is probably less than we’re already committed to, despite Paris. Because the odds of that are more like 50/50 than 1 in 100,000.
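
    To put rough numbers on that comparison (Python; the probabilities echo the ones above, while the impact scores are invented placeholders on an arbitrary 0–1 scale, so only the orders of magnitude mean anything):

    # Crude P x I comparison: a Toba-scale eruption vs committed AGW damage.
    toba_p = 1e-5  # ~1 in 100,000 odds, as above
    toba_i = 0.5   # kills ~half of humanity (placeholder impact score)

    agw_p = 0.5    # "more like 50/50" odds of the committed outcome
    agw_i = 0.05   # placeholder: even granting a 10x smaller impact...

    print(f"Toba-scale P x I:    {toba_p * toba_i:.1e}")  # 5.0e-06
    print(f"committed AGW P x I: {agw_p * agw_i:.1e}")    # 2.5e-02, ~5000x larger

    However you tune the impact column, the factor of 50,000 between the probabilities does almost all the work.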

    Asteroid impact? Once in the last billion years. And even that hit a world already primed by warming due to the Deccan Traps. And landed in a reservoir of sulphur, which can’t have been more than a one-in-a-hundred chance. Devastating yes, but there’s nothing we can do about one that size. Smaller ones that could destroy cities? Well worth looking out for. And we are.

    GRB? Leave that one to Hollywood.

    In summary: not even a real squirrel. Just some leaves rustling in the wind.

  106. izen says: “If you have some Rousseauian romantic attachment to a ‘Nature’ from before domestication and agriculture then this is viewed as a Bad Thing.”

    I don’t think you have to have a romantic attachment to nature and biodiversity to recognize that our species’ conversion and consumption of the global ecosystem’s productivity can and may have strayed into the territory of “too much of a good thing.”

    It seems you think that a broad, resilient planetary biodiversity is not very important. Do I read you right on that?

    Cheers

    Mike

  107. Dave_Geologist says:

    Ah, the perils of your native tongue 😦 , rustneversleeps. At first glance I thought you were saying we are over-reacting to the low-probability, high-impact tails. Then I realised you couldn’t have meant that because it would be out of character. So I rephrased it, but not enough. I should have started with something like “I’m sure you didn’t mean it that way, but this reads like,…”. Mea Culpa.

  108. Everett F Sargent says:

    I don’t claim to understand risk at all, but this figure displays a feature that I would call Bounded Risk …

    a = two parameter gamma distribution (with exponential decaying fat tail)
    https://en.wikipedia.org/wiki/Gamma_distribution
    https://upload.wikimedia.org/wikipedia/commons/e/e6/Gamma_distribution_pdf.svg
    (if the image shows up, the green curve is the closest visual match)
    b = exponential function (e.g. y = a·exp(b·x); I think most people understand the exponential function).
    https://en.wikipedia.org/wiki/Exponential_function
    c = (or a very close approximation to) generalized gamma distribution (three parameter)
    https://en.wikipedia.org/wiki/Generalized_gamma_distribution

    Those are the choices used by the author. Problem is that (a) decays exponentially faster than the exponential growth of (b), so that at some point to the right on the ECS-axis (c) approaches y=0 asymptotically; the risk curve is bounded (meaning it has finite area, and large values of ECS are assigned ~zero risk!). You can just start to see the downturn in (c) at about 6C through to 8C (that downtrend decays exponentially, reaching a CDF value of ~0.97 at 20C).

    I’m kind of thinking that risk must be unbounded and that it has infinite area. Otherwise we would end up with a situation where high x-values are by definition assigned ~zero risk.

    The other issue I have is with the 1.5–4.5 ECS range: the CDF shows p~0.12 for ECS < 1.5C and p~0.65 for 1.5C < ECS < 4.5C. I don’t think that, to date, the IPCC has defined anything other than the most likely range of 0.66 for 1.5C < ECS < 4.5C (with no actual written statement of numerical asymmetry AFAIK).

    Note to self: (a) is a given and has unit area, so no need to know the actual y-axis values; (b), in its current exponential form, will eventually grow at a rate less than the exponential decay rate of (a). Super-exponential growth would be required for the product (a)·(b) to have at least a finite +y-axis asymptote (risk unbounded, continuing to increase the further one goes along the x-axis), AFAIK.
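
    A numerical version of that boundedness argument (Python; the gamma shape and scale and the impact growth rate b are my guesses, chosen only to put the likelihood peak near 3C):

    # likelihood = gamma pdf (tail ~ exp(-x/theta)); impact = exp(b*x).
    # The product is integrable iff b < 1/theta: case (c)'s bounded risk.
    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    k, theta = 7.5, 0.4  # gamma peaking near ECS ~ 3C (assumed)
    b = 1.0              # impact growth rate; 1/theta = 2.5, so bounded

    risk = lambda x: stats.gamma.pdf(x, k, scale=theta) * np.exp(b * x)

    area, _ = quad(risk, 0, np.inf)
    print(f"area under the risk curve: {area:.1f} (finite)")
    for x in (3, 6, 8, 12, 20):
        print(f"ECS = {x:4.1f}C: risk density {risk(x):.2e}")

    Push b up to or past 1/theta and the integral diverges, which is the unbounded-risk case suspected above.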

    I would like some feedback on the risk = likelihood times impact model (or risk matrix if you prefer). Basically, to know if this is indeed a good idea, some basics of risk analysis would appear to be a minimum requirement.

    I should make a somewhat more general comment in a later post, but at present, I am doing some rather major head scratching on this topic.

  109. Michael 2 says:

    ATTP writes “I think we’d almost certainly have advance warning of any global-level asteroid strike.”

    Maybe, but a lot depends on who is “we”. Some astronomers would certainly know. Whether it would be made public knowledge is less certain.

    hypergeometric writes: “Trouble is, people at large don’t notice anything but singular spectacular events.”

    To an extent, yes. How can I “notice” something that is not in the news? I can notice my local and regional climate if I have a means of comparing over long time spans. Farmers do this religiously in my opinion; they have a vested interest in noticing change.

    Jeff Harvey writes: “the only massive assault being inflicted by mankind on complex adaptive systems.”

    Four legs good, two legs bad! Point to be made in case not obvious: It’s like ignoring a street preacher; the moment you have identified someone AS a street preacher you (probably) ignore what he has to say. The moment I sense “humans bad” I might continue to read for its entertainment value but don’t count on it.

    Jeffh writes: “There is little doubt that we are living in the Anthropocene now”

    Create a scene, name it, live it. I am living in the Mikoscene and so are you.

    Dave_Geologist writes [The most interesting analysis of Yellowstone volcano I’ve seen perhaps ever] “Probably why the contrarians have given up on extolling the benefits of a greener world and moved on to ‘it’s all too difficult’, ‘it’s not fair on the USA’ and ‘what about the poor in Africa?’ ”

    It is an Alinsky tactic; the right has been slow to adopt some of these strategies but they seem to be effective. Rules for Radicals; rule #4. So, when talking to people easily moved by “poverty porn”, mention the poor in Africa. When talking to Red State Americans, mention that it’s not fair to Americans; mindful that the word “fair” is a dog-whistle (Blue recognizes it as “I want your stuff”, Red recognizes it as “you want my stuff”). When talking to Millennials, the “it is too hard” meme is good. If it takes more than a finger-touch on a smartphone then don’t count on anything happening.

    The key to reaching me is to convince me that you believe your own words, and that your words mean what I think you mean by them. That opens the door.

  110. izen says:

    @-Michael 2
    “The key to reaching me is to convince me that you believe your own words, and that your words mean what I think you mean by them.”

    That might fit a lot of manic street Preachers.
    They are often strong on sincerity and simple consistency of the message.

  111. BBD says:

    The key to reaching me is to convince me that you believe your own words, and that your words mean what I think you mean by them. That opens the door.

    Yeah, right.

  112. Michael 2 says:

    izen responds: “That might fit a lot of manic street Preachers. They are often strong on sincerity and simple consistency of the message.”

    Indeed; and that is why anyone stops to listen (when not simply blocking the way). If you are NOT sincere in your belief, you might still be correct, but your motive in preaching will typically (IMO) be self-serving.

    Consider Leonardo Di Caprio. He is building a large expensive resort in Belize barely above sea level and on a sand bar if I remember right. So his commentary about sea level rise is particularly odd. Perhaps he has taken out a huge insurance policy and expects sea level rise to destroy the resort and then he can claim the insurance benefits. Perhaps he does not actually believe in sea level rise at all; or that it is imminent or dangerous. Why then go round the world preaching it? It brings him followers and fame (F-squared).

    “Fool me once, shame on you; fool me twice, shame on me”

    I suspect many people have reached the second part of this old saying and inspect claims closely. Since the actual truth of AGW claims cannot easily be demonstrated, what remains is whether your behavior is consistent with your declarations.

    If you take a thing seriously, then I ought to at least take a look at it (and here I am, looking at it), but if you do not, there’s a million (plus or minus some) other things demanding my attention.

  113. Richard Arrett says:

    It is my understanding that NASA only tracks asteroids over 1 km in diameter.

    Here is a link to a story about a privately funded telescope, not yet launched, which will locate and track asteroids less than 1 km across:

    https://spectrum.ieee.org/aerospace/satellites/sentinels-mission-to-find-500000-nearearth-asteroids

    So we have no idea what the risk is of a less than 1 km asteroid hitting Earth in the next 100 or 1000 years, because we haven’t looked yet. One could be headed for us in the next 40 years and we wouldn’t know (yet).

    There is actually a lot we could do if we knew one was headed for us years out (the more years the better). Splash white paint on it, or station a mass just a few meters away for a fairly long period of time – both of those would change the trajectory. A little change could make a big, big difference over 40 years (or 10 or 5).

    DG says 1 billion years for the last asteroid strike? Maybe I read that wrong.

    But I thought one hit near the Yucatan peninsula 66 million years ago.

    What about Tunguska – flattened 770 sq. miles of forest in 1908. Move that event to New York or Mexico City or some of the other large cities and that would be a bad day for a lot of people.

    Again – I am not sure what the area would be under the curve – but I am not so sure it is smaller than the climate change stuff. And there is probably just as much we could do to head those off as to reduce emissions. All we have to do to reduce emissions is build a bunch of nuclear power plants (300 for the United States), and we could be generating 80% of our electricity with very low emissions. Problem solved – 5 to 10 years (if we are really serious about solving it, of course). Other countries will have to decide what they want to do themselves.

    The next ice age used to be a big problem – but I think we solved that one.

  114. Richard,

    It is my understanding that NASA only tracks asteroids over 1 km in diameter.

    Pan-STARRS, which is already operating, aims to be almost complete down to about 300m.

    So we have no idea what the risk is of a less than 1 km asteroid hitting Earth in the next 100 or 1000 years, because we haven’t looked yet. One could be headed for us in the next 40 years and we wouldn’t know (yet).

    We know that the possibility of this is small, given that such impacts are rare.

    The next ice age used to be a big problem – but I think we solved that one.

    Probably already delayed by at least 50000 years.

  115. izen says:

    @-Michael 2
    “Consider Leonardo Di Caprio. He is building a large expensive resort in Belize barely above sea level and on a sand bar if I remember right. So his commentary about sea level rise is particularly odd. Perhaps he has taken out a huge insurance policy and expects sea level rise to destroy the resort and then he can claim the insurance benefits. Perhaps he does not actually believe in sea level rise at all; ”

    You may not remember right.
    Perhaps he is leveraging ridiculously expensive holiday accommodation to finance a facility that is designed to adapt to rising sea levels and includes measures to improve the ‘sand bar’ he is building on by restoring the mangrove cover that protects the island by retaining sediment to offset the erosion of rising sea level.

    @-“Since the actual truth of AGW claims cannot easily be demonstrated,”

    But you agree that it is an actual truth and can be demonstrated, just not easily.
    What do you think makes it difficult ?
    Can you describe what would make it easier, or do you have a reason for thinking AGW is inherently unverifiable ?

    @-“…what remains is whether your behavior is consistent with your declarations.”

    I suspect the judgement of consistency is skewed in favour of personal self-denial and against systemic change.
    Because the commercial business of energy generation is outside my personal control, the only option I have, to reduce my carbon footprint by altering behaviour, is by reducing consumption.
    I appear to have to forgo the full benefits available from society to conform to this judgement of consistency.

    Or is there some metric linking asceticism with consistency, how much is enough?
    Living like the Amish is not a reasonable test of behavioural constancy.
    It is an even worse test of the veracity of the science a person declares.

  116. izen says:

    @-Richard Arrett
    “So we have no idea what the risk is of a less than 1 km asteroid hitting Earth in the next 100 or 1000 years, because we haven’t looked yet.”

    We have a good idea of the size of the hole a 1km asteroid would make and a good idea of the rate of erosion that would eventually obliterate it.
    That gives a measure of how often it has happened in the past.

    If it happens around every 1000 yrs then we would expect to see at least 3 holes made since the last ice age. This can give a strong limit to the likelihood of a 1km strike.

    Dating the meteor craters, in terms of size and incidence, on the Moon and other planets can also provide a PDF for the rate and size of asteroid impacts.

    We do not, as yet, have the technology to do more than improve detection and tracking of asteroids. That is relatively cheap and sufficient given the low risk and the capabilities we have.
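
    The crater-counting logic in miniature (Python; the crater count and time window are made-up placeholders, not real data):

    # If N craters of a given size survive from a well-dated window of T years,
    # treat strikes as a Poisson process and read off rate and near-term odds.
    import math

    N, T = 3, 1_000_000  # hypothetical: 3 preserved craters per million years
    rate = N / T         # strikes per year
    p_century = 1 - math.exp(-rate * 100)

    print(f"rate ~ {rate:.0e} per year")
    print(f"P(at least one strike in the next 100 yr) ~ {p_century:.2%}")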

    @-“All we have to do to reduce emissions is build a bunch of nuclear power plants (300 for the United States), and we could be generating 80% of our electricity with very low emissions. ”

    The French model.
    It has problems. Is the same solution open to Iran and North Korea?
    While it would provide a transition to a low-emission grid, it also requires centralised state control by stable governance, or strict oversight by a global authority.

    There is a transition in energy production in progress and, more significantly, where demand is growing it is increasingly met by the cheapest options: solar in the lower latitudes, wind in the higher latitudes. That will require significant modernisation and change in the way energy grids work and are commercialised.
    That may pose a greater obstacle than deploying the technology. The resistance is likely to match the opposition that building 300 nuclear plants in the US would attract.

  117. Joshua says:

    I’m not a believer in left/right personality asymmetries, but if I were…


    Denial of climate change may exacerbate the problem by impeding pro-environmental behavior and policy. An online survey of 219 US citizens assessed tolerance of ambiguity, political orientation, and beliefs about climate change. Results indicated that low tolerance of ambiguity predicted beliefs that: 1) climate change is not a problem; 2) is not the result of human activity, and; 3) cannot be corrected through individual behavior. Analyses also supported that these relationships were mediated by conservative political orientation. The link between conservative politics and refusal to accept climate science may reflect a proximate relationship; personality characteristics, like tolerance of ambiguity, may ultimately be responsible for drawing people to both political and environmental ideologies.

    https://www.researchgate.net/publication/324973023_Personality_politics_and_denial_Tolerance_of_ambiguity_political_orientation_and_disbelief_in_climate_change

  118. angech says:

    izen says:
    “If it happens around every 1000 yrs then we would expect to see at least 3 holes made since the last ice age, This can give a strong limit to the likelihood of a 1km strike. Dating the meteor craters in terms of size and incidence on the moon and other planets can also provide PDF for the rate and size of asteroid impacts.”
    This is better, thanks Izen.
    ATTP in my naive way I seem to recall 2 reported events of large meteorites passing close to earth in the last 2 years [closer than the moon], one expected and one only 2 months ago that was detected late by an amateur astronomer. Is there a record of how many largish ones do go by at this distance in the last 40-50 years when we have had better observational mechanisms?

  119. verytallguy says:

    . All we have to do to reduce emissions is build a bunch of nuclear power plants (300 for the United States), and we could be generating 80% of our electricity with very low emissions. Problem solved – 5 to 10 years (if we are really serious about solving it, of course).

    This massively understates how difficult decarbonisation is.

  120. angech,

    ATTP in my naive way I seem to recall 2 reported events of large meteorites passing close to earth in the last 2 years [closer than the moon]

    Yes, but the Moon is 60 Earth radii away. Hence, the probability of passing inside the distance to the Moon is vastly greater than the probability of something actually hitting us.
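
    To make the scaling explicit (a sketch; pure geometry, ignoring gravitational focusing, which somewhat raises the true hit rate for slow objects):

    # For randomly aimed objects, the chance of crossing a target disc scales
    # with its area, so "inside the lunar distance" beats "hits the Earth" by
    # roughly a factor of 60 squared.
    moon_distance_in_earth_radii = 60
    ratio = moon_distance_in_earth_radii ** 2
    print(f"a pass inside the lunar distance is ~{ratio}x likelier than a hit")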

  121. Dave_Geologist says:

    Did you read my comment Richard? I wrote about smaller strikes and how they could cause locally devastating, but not existential damage. The K/T is the only strike in the last billion years that was devastating enough to cause a mass extinction. That is the benchmark to use when comparing ‘acts of God’ with the AGW tail risk. Whether it happened 900 My ago, 66 My ago or last week is irrelevant to the risk estimation. AFAIK astronomers don’t believe that the risk of a hit has increased recently. One hit in the last billion years, so one-in-a-billion odds for a particular year. With a large uncertainty range of course, but the risk in the next few years is miniscule.

    If you shift the goalposts and talk about sub-km objects, you’re not talking about existential risks any more. P goes up but I goes down. How much damage did Tunguska do to the planet? A lot less than we’ve already done by global warming. P x I for a hit on New York? Remember, your mythical meteorite doesn’t just have to hit the planet, it has to hit the 1/500,000th of the planet occupied by NY. Benchmark for a small one? Let’s say ten times the Kobe earthquake, which was 6,000 deaths and $100 billion. So 60,000 deaths and $1 trillion. Fifty times bigger than Kobe? About the same as the Iraq war. Bad enough, but compared to what blundering into a PETM world would bring, well down in Meh… territory.

    You say you’re not sure whether your preferred events (which unlike AGW, are outside our control) are bigger in P x I terms than high-end AGW risks. I’m sure they’re not and I’ve shared my reasoning. Why don’t you share yours, instead of just sharing concerns? The onus is on you not just to show that your squirrel is real rather than leaves rustling in the wind, but to show that it’s a giant, rabid, man-eating squirrel.

  122. Dave_Geologist says:

    Richard, are you sure you want to build 300 Hinkley C’s? At $25 billion a pop, that’s $7.5 trillion. Over a third of a year’s US GDP. You could do a lot of other stuff with that money.

    And BTW, since I presume it’s meant as a plank-in-your-own-eye taunt at librulz who block nuclear power: not everyone. Provided it’s well regulated and the costs are not totally ridiculous, I’m pro-nuclear.

  123. Ken Fabian says:

    I should know better but I’ll bite –

    Richard Arrett, global warming is happening right now. Are you seriously asking whether we should be doing meteorite defense systems first? Achieving a meteorite defense system, e.g. for one-in-one-hundred-thousand-year impacts, is something for a healthy, wealthy global economy of the future. Ensuring there is a healthy, wealthy future global economy that may one day be capable of that is our present-day job. It makes good sense – careful, considered, educated and informed good sense – to face the energy/emissions/climate problem now. I’m not sure 3 decades (since the IPCC was established) is rushing things.

    A 300-plant pre-emptive nuclear strike on coal and gas, when US Conservatives – who like nuclear – can’t even admit there is a climate problem to pre-emptively fix, seems… unlikely. So unlikely as to be a waste of time to discuss.

    There is no “just do x” solution on offer in this; we are just going to have to wing it as best we can. Right now the best we can do is add lots and lots of cheap solar and wind – who believed that could ever happen? – into fossil-fuel-heavy electricity grids, as much as they can take, until the intermittency overlaps need on-demand backup; then we’ll see how we go adding that backup. None of the technologies are staying still enough for anyone to know how this will ultimately play out, or how the zero-emissions endgame is achieved, but we will all get to play.

  124. “ATTP in my naive way I seem to recall 2 reported events of large meteorites passing close to earth in the last 2 years [closer than the moon]

    Yes, but the Moon is 60 Earth radii away. Hence, the probability of passing inside the distance to the Moon is vastly greater than the probability of something actually hitting us.”

    the less astute lukewarmers/deniers often make the scaling error. You see it in this thread with Angech and Richard Arrett, who don’t grasp the difference between a near miss and a hit, floating ideas like a direct hit on NYC without understanding the scale difference between a miss at the Moon’s distance and a direct hit on a small municipal area.

    But at the end of the day, I think the motivations are generally ideological. It’s really hard to account for the intransigence of some posters except by considering ideological reasons why those same posters continue to move the goalposts or recycle/re-present bad science.

    I think I will refer to some of this as Brooksing from now on, to commemorate Rep. Mo Brooks’ concern about rockfall on coastlines raising sea level.

  125. “Yes, but the Moon is 60 Earth radii away. Hence, the probability of passing inside the distance to the Moon is vastly greater than the probability of something actually hitting us.”

    If an asteroid passed by the earth close enough to have the same radially projected size as the moon or sun, it would create a transient tidal forcing roughly equivalent to that of the moon or sun (given a similar density). It would scare the crap out of everyone but it would also provide a treasure trove of data to analyze. We would get a perfect controlled experiment for a gravitational impulse response.

    If the projected size was twice that of the moon, the localized tidal forcing would be about 8 times that of the moon or sun, since mass grows as the cube of diameter and tidal acceleration goes as mass over distance cubed. Who knows what kind of tsunami effect that would set off? It’s probably worth at least thinking about.

    https://physicsworld.com/a/nasa-hits-back-at-millionaire-in-asteroid-spat/

    A couple months ago we had a “near miss” * with a Tunguska-sized asteroid that wasn’t detected until the day before it was within the Moon’s orbit. The only video I have seen of the fly-by was taken by amateur astronomer Michael Jager.

    * should be classified as a “near hit” 🙂

  126. Paul,
    Unless I’ve done my sums wrong, for a ~1km-sized asteroid to have the same gravitational influence as the Moon, it would need to pass within the radius of the Earth anyway.
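
    A sketch of those sums, treating “gravitational influence” as tidal acceleration (~ 2GM/r³) and assuming a rocky density; both assumptions are mine:

    # Distance at which a 1 km asteroid raises Moon-equal tides:
    # equal M/r^3 implies r = R_moon * (m_ast / M_moon)**(1/3).
    import math

    R_MOON = 3.844e8         # mean lunar distance, m
    M_MOON = 7.35e22         # lunar mass, kg
    rho, d = 3000.0, 1000.0  # assumed rocky density (kg/m^3) and diameter (m)

    m_ast = rho * math.pi / 6 * d**3
    r_equal = R_MOON * (m_ast / M_MOON) ** (1 / 3)

    print(f"asteroid mass ~ {m_ast:.1e} kg")
    print(f"Moon-equal tides at r ~ {r_equal / 1e3:.0f} km")  # ~110 km
    print("Earth's radius is 6371 km, so it would have to be inside the planet")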

  127. dikranmarsupial says:

    “ATTP in my naive way I seem to recall 2 reported events of large meteorites passing close to earth in the last 2 years [closer than the moon], one expected and one only 2 months ago that was detected late by an amateur astronomer. Is there a record of how many largish ones do go by at this distance in the last 40-50 years when we have had better observational mechanisms?”

According to Wikipedia, the largest asteroid that came within the lunar distance in 2017 was 7–28 m in diameter. The largest in 2016 was 35–86 m. I don't think they would be considered large on a "greater problem than climate change" standard. Wikipedia doesn't seem to list anything above 50 m predicted to come close to the earth in the next few years.

    You can find lists of this nature by performing the obvious Google search (and follow the links on Wikipedia for their sources).

  128. Yea, more of a thought experiment as the majority of asteroids are not that big.

  129. Steven Mosher says:

    On the costs of adaptation versus temperature

    http://www.indiaenvironmentportal.org.in/files/file/Coastal%20flood%20damage.pdf

    Hmm, figure 3

Linear? shrugs

  130. Dave_Geologist says:

Well, obviously some costs are more or less linear, building higher dykes being an obvious example. But what about the 22nd century, when the ice keeps on melting even if we stabilise global temperature? The 23rd century? At some point we'll have to relocate coastal megacities. Some might trust to 30m high dykes, but that won't work for cities like Miami with its porous underburden, or Dhaka with some of the world's biggest rivers backfilling the dykes.

And what about air-conditioning European cities which currently don't have domestic air-conditioning? What about outdoor activity and agriculture when the wet-bulb temperature goes above 35°C for large parts of the world for much of the year? Maybe once every decade or two at first, but eventually every other year. What about this (lots of references quoted where you can investigate non-linearity, which appears to have been neglected in some studies):

    What about a PETM world where California and Spain get no rain for 30 years? What about an early Triassic world where the tropics were too hot for fish and reptiles? Mammals? No chance.

  131. Dave_Geologist says:

    And you forgot to mention Figures 1 and 2 Steven.

    The top half of Figure 1 looks pretty non-linear, until a law of diminishing returns presumably kicks in. There are strong grounds for believing, based on the track record to date, the widespread active opposition to action, and the propaganda attempting to minimise or deny the problem, that even spending at the linear rate of Figure 3 is a pipe-dream. Who’s going to build higher dykes when the State legislature has forbidden any consideration of climate change or SLR in coastal planning? Their Enhanced Protection case looks totally unrealistic. Why haven’t we already implemented the advanced protection? It looks like there would be at least an order of magnitude improvement even if we held the global temperature anomaly at 1°C. Why are we waiting?

    And Figure 2 (bottom) is strongly non-linear in flood costs past 2-3°C. So their linear increase in dyke spending not only fails to hold flooding costs constant, it even fails to hold them linear. So presumably they’re doing a cost/benefit tradeoff, in which case holding flood costs constant would require a super-linear increase in spending. Or there are flood risks which increase non-linearly with global warming and which can’t be mitigated by building (higher) dykes.

  132. Dave_Geologist says:

    Oh, and BTW Steven, did you see that the y-axis is annual investment and maintenance cost (basically investment – maintenance is modelled as tiny). So each year, the cost of standing still gets bigger. Even if they were standing still, which they’re not. If you integrated Fig. 3 to get a cumulative cost this century, it would of course be a quadratic.
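To spell the integration step out (the symbols a and b are my placeholders): if the annual cost rises linearly, c(t) = a + bt, then the cumulative cost out to year T is

$$\int_0^T (a + bt)\,dt = aT + \tfrac{1}{2}bT^2,$$

which is indeed quadratic in the horizon T.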

    And the unmitigated damage functions in Figures 1 and 2 are clearly a lot steeper than linear.

    Without adaptation, 0.2–4.6% of global population is expected to be flooded annually in 2100 under 25–123 cm of global mean sea-level rise, with expected annual losses of 0.3–9.3% of global gross domestic product. Damages of this magnitude are very unlikely to be tolerated by society and adaptation will be widespread.

    I see they did perform a (financial) cost/benefit tradeoff. Trouble is, as I alluded to above, we’re not walking the walk today. We currently appear to be under-investing in flood defences, based on their criterion. Otherwise, if we regard the present crop of defences as a response to a 20th-century steady-state, the bottom curves in Figure 1 should be fairly flat. Which is a bit strange as they say it is calibrated on real present-day flood protection decisions.

I can think of a couple of reasons for that. One is that flood defences are typically based on the 100-year storm. It looks to me that they're not waiting for that storm to hit a particular location before upgrading its defences. Otherwise the damage would be much greater, more like today. And the benefits of the improved flood defences would only be seen decades out, when the next outlier storm hits. So I presume that, if you're running on a particular RCP scenario, you say on a running basis "what do I need to do to mitigate today's 100-year storm?" Or rather, the 100-year storm within the 20-30-year life-before-upgrade of the new defences. Because you don't know where it will hit, you upgrade flood defences everywhere they might be overtopped in the design period. The unlucky 1% of those sites gets hit by a 100-year storm and scrapes through by the skin of its teeth. The rest are over-defended for years or decades to come, which means lesser storms which would have caused some damage, cause none. That would be a programme with a lot of pre-investment. I'm not convinced (by a long way!) that the world is up for that, even without rising temperatures and flood risk. In practice, I think most locations will take at least one hit before major investment is approved, even in non-AGW-denying countries.

    The other is that for Germany (a rich country, obviously) the optimal level of protection under their model would be against a 1,400-year storm. I’ll bet they don’t spend as much as that today. IOW are we really walking the walk? It’s a pity they don’t give a poor-country example. You might find that they’d have to starve half the population to meet their optimum annual spend.

    Setting my personal judgement of human nature against their clever math, I think we’ll end up spending less than they recommend, later than they recommend, and struggle to keep the mitigated damage functions flat. I doubt that will save us very much money, because the mitigation required will still be heroic. Observe that the damage axes in Figures 1 and 2 are orders of magnitude higher for the unmitigated than for the mitigated case. The mitigation required to protect 196 million out of 200 million people is unlikely to be much less than that required to protect 199 million.

  133. Richard Arrett says:

    Ken asked “Richard Arrat, Global Warming is happening right now. Are you seriously asking should we be doing meteorite defense systems first?”

    I am not advocating anything.

    I am simply asking what if the P * I area under the curve for some low probability high impact event like an asteroid strike or large meteor strike or super volcano eruption (to name just a few) is HIGHER than the area of P * I of some of the standard climate change events, such as sea level rise or another 1C temperature increase?

Using this post as a guide, if the area were higher (I have no idea personally), maybe we would look at detecting objects under 1 km that might hit Earth. I thought I read somewhere that NASA was looking at doing something about Yellowstone. Maybe they have evaluated the P * I and thought it merited taking a look at efforts to head it off? Here is a link:

    http://www.iflscience.com/environment/nasa-drill-yellowstone-supervolcano-order-save-planet/

    But again, I am not saying do A or B or do A instead of B – I am just asking what are the various areas of various events and what is the largest area P * I event – and is that the one we should tackle first?

  134. Dave_Geologist says:

    But again, I am not saying do A or B or do A instead of B – I am just asking what are the various areas of various events and what is the largest area P * I event – and is that the one we should tackle first?

    Read the posts by me and others Richard. Your question has been answered. Multiple times. Your squirrels have a much smaller P x I than AGW. So we should tackle AGW first.

    The only way you can reverse that is by claiming either that AGW isn’t real, that we can’t (as opposed to won’t) do anything about it, or that its impacts will be benign or at least minor. Any or all of which require descent into science denial and a fantasy view of the world. Mother Nature doesn’t do fantasy, so if that’s where you’re coming from, your P and I are both meaningless and your P x I can be ignored. Not that you’ve even tried to calculate one yet!

    Oh, and the Serenity Prayer

    God, grant me the serenity to accept the things I cannot change,
    Courage to change the things I can,
    And wisdom to know the difference.

    We can change AGW. We can’t change supervolcanoes or asteroids.

  135. Richard,

    I am simply asking what if the P * I area under the curve for some low probability high impact event like an asteroid strike or large meteor strike or super volcano eruption (to name just a few) is HIGHER than the area of P * I of some of the standard climate change events, such as sea level rise or another 1C temperature increase?

We are already investing in some of these areas – tracking asteroids, for example. Also, that there is a chance of one catastrophic event happening doesn't mean that we should not consider other possible catastrophic events. There is also, in my view, a difference between an outcome over which we have little control, and one over which we have a lot of control. We can actually do something about the latter.

  136. Dave_Geologist says:

Richard, look at my posts above on the Hinkel et al. paper (which has Richard T as an author, so surely can't be accused of scaremongering 🙂 ). Looking at Table 1, and the RCP8.5 figures, you can't pair high cost with low GDP and stay below 10% of global GDP by 2100. So the $$ spread must be primarily driven by how much GDP grows. They must all be approaching 10% of annual GDP in annual flood damage costs (that's flood damage alone, with no allowance for consequent lost production and no account of other events such as droughts or forest fires). We can still stay on RCP8.5 if we don't do something about emissions. Paris is not in the bag yet. The paper uses undiscounted GDP, but we can use today's approximate global GDP of $100 trillion as a guide. So about $10 trillion per year in today's terms just from flooding if we do nothing (remember my requirement above to show both unmitigated and mitigated damages, as the justification for mitigation measures). A hundred Harveys, Katrinas, Sandys or Kobe earthquakes. Every year. Not one or two per decade.

    How about a bigger earthquake (or one somewhere less earthquake-proofed than Japan)? The Great Lisbon Earthquake is reckoned to have cost Portugal four to six months’ GDP. The King put the Marquis of Pombal in charge of civil defence, cleanup and restoration, which he did with ruthless efficiency (conscripted labour, seized assets, looters publicly hanged). He’s highly regarded in Portugal, but hated in the former colonies because his next job was to squeeze the colonies to restore the State coffers, a commission which he conducted with the same ruthlessness. Portugal had a big empire at the time so let’s assume it represented 20% of global GDP. So 6-10% of global GDP, about the same as unmitigated RCP8.5 by 2100, before we add non-flood costs. But it only happened once. Almost 300 years ago. And not for at least a millennium before that. So P x I is not 10% of global GDP, it’s 0.01%.

    Let’s say we get hit by a dinosaur-killer and humanity gets knocked back to a Toba-scale group of survivors. We have to rebuild a modern civilisation from scratch. Assume it takes a bit less time, because we’ve cancelled the next glaciation and once the ash has fallen out, they’ll have a fairly stable climate. We’ll have lost about 50,000 years of GDP. Let’s maximise your damage function and not discount future GDP (inconsistent I know, because I bet you’d discount future AGW damage much more heavily than Stern did). If we revise our estimate of dinosaur-killer frequency downwards to once per 50 million years, P x I is 50k/50M or 0.1% of global GDP. A thousand times smaller than unmitigated RCP8.5. On a par with a single Katrina or Kobe. Orders of magnitude smaller than a single Lisbon.
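For anyone who wants to fiddle with the inputs, here is the back-of-envelope comparison above as a minimal sketch; all the figures are the round numbers quoted in this thread, and are heroically uncertain:

```python
# Annualised P x I for the three cases discussed, in units of annual
# global GDP.  Inputs are the thread's round numbers, nothing more.

def annualised_risk(impact_in_gdp_years, return_period_years):
    """Expected loss per year, as a fraction of annual global GDP."""
    return impact_in_gdp_years / return_period_years

events = {
    #                     (impact: years of GDP lost, return period: years)
    "Lisbon-scale quake": (0.08, 1_000),
    "dinosaur-killer":    (50_000, 50_000_000),
    "unmitigated RCP8.5": (0.10, 1),      # ~10% of GDP, annually, by 2100
}

for name, (impact, period) in events.items():
    print(f"{name:>20}: {100 * annualised_risk(impact, period):.3f}% of GDP per year")
```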

Richard, not only is your squirrel not a giant, rabid, man-eating squirrel. It's not even a living, breathing squirrel. Just some leaves rustling in the wind.

    So far you’ve been all hat and no trousers. Show your mettle. Present an alternative calculation.

  137. Dave_Geologist says:

    Oops, metaphor mix-up. All mouth and no trousers. Or all hat and no cattle. Either works 🙂 .

  138. dikranmarsupial says:

    Richard wrote “But again, I am not saying do A or B or do A instead of B – I am just asking what are the various areas of various events and what is the largest area P * I event – and is that the one we should tackle first?”

    It has already been explained to you on this thread that action is not determined only by risk, but by the cost of the action (i.e. a cost-benefit analysis). Climate change is a relatively high-P, moderate-I problem, but the costs of doing something useful about it are comparatively low. A Yellowstone super-volcano in the near future is a very low-P but very high-I event, but the costs of preventing it are essentially infinite, there is just nothing we can do.
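A toy decision rule along those lines, just to make the logic explicit (every number below is a placeholder, purely for illustration):

```python
# Risk alone doesn't decide anything: what matters is the expected damage
# an action avoids, versus the cost and feasibility of that action.

def worth_acting(p, impact, action_cost, risk_reduction):
    """Act if expected damage avoided exceeds the cost of acting."""
    return p * impact * risk_reduction > action_cost

# Climate: high-ish P, large I, and actions that genuinely cut the risk.
print(worth_acting(p=0.8, impact=100.0, action_cost=5.0, risk_reduction=0.5))  # True

# Supervolcano: tiny P, huge I, but nothing we do reduces the risk.
print(worth_acting(p=1e-6, impact=1e4, action_cost=1.0, risk_reduction=0.0))   # False
```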

    And that is even before talking about whether there is the political will to act.

  139. Ken Fabian says:

Richard – If we have to wait a century before we establish a meteorite defence system, we will not have raised our risk of a 1 in 100,000 year impact by much at all. If we wait a century to address the energy/emissions/climate problem, it will already have had world-changing consequences that can't be reversed – consequences that could be economically damaging enough that any possibility of a meteorite defence system is lost.

A globally significant problem that is continuing and cumulative, currently accumulating at record rates, and that we can expect to leave us ongoing irreversible consequences is surely a problem that deserves our attention now. Climate change is a case of human-initiated cause and effect, of actions and their consequences which may have begun in ignorance, but they are now knowing actions; I'm not convinced there is or has ever been any real equivalency with the risks of meteorites or supervolcanoes.

  140. Richard Arrett says:

    Ken:

    I understand your opinion.

    I think we already know what to do to lower CO2 emissions and if we really wanted to do it, we could, in five or 10 years. I am speaking about building lots of nuclear power plants to generate the majority of our electricity. Only political issues preclude this, not technological issues.

    We have dithered for 30 years and not really accomplished much at all on lowering emissions worldwide (I think it was flat a couple of years during the economic downturn).

If climate change were really a globally significant problem, then a majority of people should be willing to adopt the only really practical solution – nukes and lots of them.

    Maybe in a few more years people will be more reasonable.

  141. verytallguy says:

I am speaking about building lots of nuclear power plants to generate the majority of our electricity. Only political issues preclude this, not technological issues.

    It may surprise you to hear this, but electricity is not the only source of energy. Indeed sectors such as transportation are almost entirely fossil fuel based and a significant proportion of total energy demand.

    https://www.eia.gov/energyexplained/?page=us_energy_transportation

    the only really practical solution – nukes and lots of them.

Well, I agree that more nukes would be a Good Thing. But they are not the simple silver bullet you present them as, and you offer this assertion without evidence.

  142. verytallguy says:

And maybe in a few years you will have become more reasonable…

  143. Ken Fabian says:

    Richard, you are proposing something that requires State of Emergency levels of government intervention. That level of commitment can’t even exist whilst climate science denial is deeply wound through mainstream politics all the way to the top. The support for nuclear there does not extend to using it for displacing fossil fuels, just for stopping solar and wind. Climate science denial is very bad for nuclear; renewables are surviving despite it but nuclear goes nowhere whilst denial has such a strong hold on the imaginations of powerful people who don’t want to even imagine a world where climate responsibility is a real thing.

    We can and will take wind and solar as far and as fast as we can – because it is becoming increasingly cost effective to use now and we can. We can use it despite the mire of conflicted politics; RE doesn’t have to wait on strong, enduring bipartisan commitment and the end of climate science denial. Nuclear on the other hand does absolutely require that. It requires levels of commitment that can’t exist whilst climate science denial grips the imagination of whole mainstream political parties, Prime Ministers and Presidents.

Bipartisan commitment would be great – lots of discussion on sites like this is about how to get past the climate science denial that prevents it. I haven't noticed much success, but I think that is because the deniers have dug in deep, not because the arguments have no merit. Putting a hold on wind and solar while we work on an agreement for massive expansion of nuclear looks like a good way to end up with none of them. A lot like putting a hold on the climate problem while we get a meteorite defence system established – we end up fixing neither.

    We are building wind and solar at compounding rates and they will be a serious problem for future nuclear; if new nuclear can’t compete commercially running 24/7 now it will struggle when solar owns the electricity market whenever and wherever the sun is shining or wind is blowing. It isn’t going to compete directly with them but must compete with batteries, hydro, fast response gas and demand management for what is left when the sun isn’t shining and wind isn’t blowing. It isn’t a case of “just do wind and solar” and all is fixed but these are useful building blocks, ones we can use now while we work on end run solutions.

We aren't where we are in this because of any master plan by anti-nuclear environmentalists, but because innovative businesses made solar and wind cost-competitive. And because the combination of an enduring absence of effective mainstream leadership and the presence of effective, influential opposition has prevented planning and forethought from being used to make appropriate policy.

  144. I appreciate and respect the energy that has gone in to educating Richard Arrett about the situation. I have to ask, how many of you think he will get the education that is being provided?

  145. Dave_Geologist says:

    Well, he does appear to have Gish-galloped off into bad-greenies-thwarting-nukes territory rather than the asteroid/supervolcano silliness. So maybe some of it got through.

    Still waiting to see his trousers though. And the cattle are so far off I can’t even hear a distant rumble of hooves 🙂 .

  146. Richard Arrett says:

    smallbluemike:

While I appreciate the back and forth and always learn something new by engaging, I don't think that wind and solar are going to be able to supply more than about 35% of the electricity without substantial new inventions in storage (and modifications to the grid). We need to invent stuff we don't have yet.

    So while I appreciate the education, I am not so sure DG can solve the problem with just wind and solar. Nuclear should and will play a large role in whatever solution we end up with (in my opinion).

    Nuclear is baseload and can work with the existing grid and supply as much electricity as we need, which can then be used for electric cars and so forth. For people using heat pumps, it can even provide heat. Wind and solar, while certainly part of the solution, are not going to be able to provide 80 or 60 or even 40% of the electricity we need without some new inventions we don’t have yet (power storage).

Nuclear is more expensive than fossil fuels, but COULD be cheaper than it is now. If we (USA) picked one design, 4th generation passive cooling, and just built 2 or 4 plants per state, we could very easily triple our nuclear share of electricity generation (from 20% to 60%). It is just politics, not technology, that stands in the way (and a little bit of fear).

With wind and solar, it is technology that stands in the way of increasing their share – namely how to store power for when it is dark and not windy.

    So while I like solar and wind, and appreciate the cost reductions that have occurred, there is a limit to how far we can push this technology. Backup power currently is fossil fuel and should be nuclear, and we should be using nuclear when it is dark and not windy.

    That is what I have learned.

  147. Hi Richard,

    did you learn anything about meteorite impacts and how the risk from those events is qualitatively and quantitatively in a different class from AGW?

As DG noted, you appear to have jumped directly to the nuclear discussion without any apparent closure, or acknowledgment on your part, regarding the risk of AGW. I think a lot/most of us would be happy to discuss and evaluate nuclear options as a response to AGW, but that requires that lukewarmers and deniers demonstrate they have learned and agree that AGW is the big risk.
Absent that kind of acknowledgment, I assume you will just continue to move the metrics and topic for culture war purposes.

I am off to Hood Canal for a couple of days to cool off. It's too hot at home right now. Won't be able to respond quickly to anything.
    Cheers
    Mike

  148. Richard Arrett says:

    smallbluemike:

    No, our discussion did not teach me whether the probability of a meteorite impact was qualitatively and/or quantitatively in a different class from AGW.

    I really have no idea what the P * I area of any particular AGW event is compared to the P * I area of a meteor strike or a super volcano. I very much doubt a meteor strike P * I area would be smaller than an AGW event (although DG seems to think so).

From our discussion, I only know DG thinks a large meteorite impact is far less likely than AGW, but not the likelihood of a smaller (less than 1 km) strike, or how that compares to the probability of various AGW events.

The Tunguska event happened in 1908 and was an airburst which flattened some 80 million trees over 770 sq. miles; the object was estimated to be between 60 and 190 meters wide, with an energy of between 10 and 30 megatons of TNT (15 megatons would be 1000 times the energy of the Hiroshima nuclear weapon).

According to Wikipedia (https://en.wikipedia.org/wiki/Impact_event#Frequency_and_risk), an 85 meter diameter object, which would create an airburst of 29 Mt (megatons) of TNT, happens every 3300 years. If one of those hit over a city it would be much worse than any AGW impact (in my opinion), as we can easily move a city one inch of sea level rise at a time, to higher ground, over a century or so. AGW is very very slow – while a meteor strike is very very fast, and this seems to make a difference to me.

    A 100 meter object, which would leave an impact crater 1.2 km in diameter and release 47 Mt of energy at atmospheric entry and 3.4 Mt of energy at impact, happens every 5200 years. Again, if one of those hit a city or the ocean near a city, it would do far more damage than any AGW event (in my opinion).

The frequencies of 3300 and 5200 years, while much lower than for the dinosaur-killing impact of 66 million years ago (a 15 km wide object), seem pretty frequent to me. Obviously this is a matter of opinion and reasonable minds could disagree – but 1 inch of sea level rise per decade is a lot less worrisome to me than an 85 meter air-burst over a city.

Recall that the sea has risen 120 meters over 20,000 years, which is 60 cm per 100 years, and nobody even noticed until the last 30 years or so. Civilization (or primitive peoples, if you prefer) has been moving to higher ground all along and never even noticed it was doing so.

I am in favor of trying to track objects smaller than 1 km, and the more time we have before a strike, the more options we have to deal with it. Maybe a task for the Trump space force (grin).

  149. Richard Arrett says:

Sorry – events every 3300 and 5200 years are much higher in frequency, but much smaller in impact, than the 66-million-year-ago object. I should have written my comment in Word and edited it, but alas, I did not.

  150. izen says:

    @-RA
    “Obviously this is a matter of opinion and reasonable minds could disagree – but 1 inch of sea level rise per decade is a lot less worrisome to me than an 85 meter air-burst over a city.”

    One reason opinions may differ is that the 1 inch per decade sea level rise is a global impact.
The big airburst every ~4000 years is most likely to occur over ocean, because there is rather more of it than there are cities.
So the odds of an asteroid hitting near a city are much lower than of it hitting the ocean. Compare how likely it is that one of these roughly-every-3,500-years events happens over, say, Miami.
    While the odds of Miami NOT flooding by the end of the century are 100/1 against.
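That geometry is easy to quantify roughly (assuming the ~3,300-year Tunguska-class return period quoted above, oceans at 70% of the surface, and urban areas at ~2-3%):

```python
GLOBAL_RETURN_PERIOD = 3300        # years, for an airburst anywhere on Earth
OCEAN_FRACTION = 0.70
URBAN_FRACTION = 0.025             # assumed midpoint of 2-3%

print(f"Over ocean:  every ~{GLOBAL_RETURN_PERIOD / OCEAN_FRACTION:,.0f} years")
print(f"Over a city: every ~{GLOBAL_RETURN_PERIOD / URBAN_FRACTION:,.0f} years")
# ~4,700 and ~130,000 years respectively -- the latter is the same order of
# magnitude as the "one in 200,000" figure given further down the thread.
```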

  151. Ken Fabian says:

    Richard, I see a currently cumulative problem that gets worse with time and delay, that leaves irreversible consequences – world changing consequences – as a problem that should have priority. I don’t understand how you could conclude that a meteorite defence system should have greater priority. You can campaign for that if you choose, and for State of Emergency level nuclear construction programs, if you see climate change as that urgently important (after meteorites) and see renewable energy or other options as inadequate. I will leave you to it.

    Smallbluemike – ” I have to ask, how many of you think he (Richard) will get the education that is being provided?”
    Not me, not really – there were some clear indicators embedded in Richard’s earliest comments of how this would go. Perhaps some other readers will get something from this. As an exercise for practicing my communications skills it was not entirely wasted effort.

  152. Steven Mosher says:

    “AGW is very very slow – while a meteor strike is very very fast, and this seems to make a difference to me.”

1. Assumes no tipping points for AGW – bad skeptic, to have unacknowledged assumptions.
2. AGW is global and not easy to reverse; some might call it irreversible, and irreversibility has a non-zero probability. City destroyed? Been there, done that, rebuild. Hiroshima.

In the end, the assessment of risk and potential damage is an uncertain business. There is of course room for rational disagreement and no 100% certain method of calculating the optimum response. Practically it's a political decision, and currently folks agree to spend more resources on AGW rather than asteroid defense. Don't agree? Do some science, make a case, convince others. I've seen nothing here that amounts to a compelling argument that we are not allocating funds in a reasonable way. Not perfect, of course.

  153. Joshua says:

    No, our discussion did not teach me whether the probability of a meteorite impact was qualitatively and/or quantitatively in a different class from AGW.

    We can prevent AGW = a categorical difference.

  154. Richard,
    Bear in mind that 70% of the planet is ocean, so the probability of a Tunguska-like event over a city is lower than once every few thousand years.

  155. dikranmarsupial says:

    “We can prevent AGW = a categorical difference.”

    If only someone had pointed that out earlier in the thread, perhaps repeatedly! ;o)

  156. Hyperactive Hydrologist says:

Classic example of perceived vs actual risk. Perhaps it's just a lack of imagination.

  157. Dave_Geologist says:

    I am not so sure DG can solve the problem with just wind and solar

Beware of pigeonholing people, Richard. I'm pro-nuclear, subject to adequate regulation, safety and cost. And that's not just a figleaf, I mean it. I'd build those 300 power stations under the right conditions, just not all at once. And at the moment, here in the UK at least, the third category is the blocker. Exacerbated by Toshiba's financial problems, which are down to concealed losses at Westinghouse, which it bought. When your flagship manufacturer turns out to be a turkey, it rather suggests that the US nuclear industry is in no position to deliver 300 units in a hurry. Nuclear costs appear to be going up, whereas wind and solar are coming down so fast that US solar manufacturers will soon go the way of European ones – priced out of the market, not absence of a market.

I spent most of the thread shooting squirrels, not offering or objecting to solutions.

  158. Dave_Geologist says:

    I very much doubt a meteor strike P * I area would be smaller than an AGW event (although DG seems to think so).

    Wrong Richard. I know so. I’ve done the math. And shown my working. I put on my trousers, showed you my cattle and let you inspect their hooves and teeth. You, AFAICS, are still in bed and your alleged cattle are up in the clouds with My Little Pony.

If one of those hit over a city it would be much worse than any AGW impact (in my opinion)

Utterly, utterly, utterly wrong Richard. Devastating for the city. A blip for the rest of the world. And you forgot to scale P by the percentage of the earth's surface area which is occupied by cities. It should be about one in 200,000 years when you allow for the 2-3% of the Earth's surface which is urbanised. And that 2-3% is not just cities; it includes small towns and villages too, where the total losses would be much smaller. List ten other 1:200,000 risks you advocate we defend against.

    Did you skip math at school? Of course not. You know better. But you just ignored it for rhetorical effect. Cluestick: rhetoric is not math. Or science. Or indeed risk management. And when it’s abused, it’s unbecoming.

    Continuing to defend the indefensible, after you’ve been shown step by simple step that it’s indefensible, is just trolling. Of course I knew that all along, but thought it useful to put some realistic numbers out there for the rest of the readers. There would be nothing left to do now but repeat myself, which I hate doing. So no more troll-feeding from me about meteorites. And no responses to further laps of the Gish Gallop until you admit your errors over meteorites and supervolcanoes.

    we can easily move a city one inch of sea level rise at a time, to higher ground, over a century or so

    Ha, ha, ha, ha, ha. So hilariously wrong it doesn’t bear responding to. Worse than flat-earthism.

Come clean Richard. Isn't the real reason you're wasting our time with ridiculously low P x I events, which serve only to make you look like an unteachable fool, that you attach a very low P to global warming being real, a very low P to it being our fault, and a very low I because CO2 is plant food and Minnesota could do with warmer winters? But you know that if you do that, you'll be dismissed even more as a modern-day flat-earther and science-denier and consigned to the crank box, where you'd have much less fun.

  159. Joshua says:

    If only someone had pointed that out earlier in the thread, perhaps repeatedly! ;o)

    I’m sure that would have made a huge difference.

  161. Steven Mosher says:

" low probability high damage risk of North Korea launching a nuclear strike."

There are a certain class of events that it doesn't really make sense to ascribe a probability to, even from a subjectivist standpoint. I live in Seoul. I'd say the probability is non-zero.

    “We can prevent AGW = a categorical difference.”

Not really.
A) AGW has already happened, warming is in the pipeline, and there is talk about what can be done.
B) Meteor strike: hmm, probably theoretically preventable in certain cases, provided you invested enough.

The categorical difference isn't preventability.

We cause AGW = a categorical difference.

Additional AGW is more preventable than a meteor strike is.

  162. Joshua says:

    Not really.
    A) AGW has already happened, warming is in the pipeline, and there is talk about what can
    be done.

    Given that AGW is not a discrete event, preventing AGW is not mutually exclusive with it having already happened or with it being in the pipeline.

The categorical difference isn't preventability.

I would argue with that, but I learned a long time ago that since you get to determine "ze issue" it would be futile to do so.

There are a certain class of events that it doesn't really make sense to ascribe a probability to.

Of course, I also learned long ago that like certain others who shall remain nameless, you get to determine what actually does and doesn't make sense, but I happen to think it absolutely does make sense to evaluate the probability of an event like NK launching a nuclear attack. But my point is that many, many other likewise foolish people do so also, irrespective of your superior insight and wisdom making it obvious that they shouldn't, and that not an insignificant # of them dismiss other low probability high damage function events – AGW in particular. Which speaks to complicating factors in how people approach risk.

That doesn't mean that the probabilities are easily quantified, but a risk is certain even if the level of risk is uncertain, and there is a wide space between certain and non-zero.

    At any rate,… There are a certain class of events

    Anywho, out of curiosity, how would you describe that “certain class of events?”

As Joe Pesci said in My Cousin Vinny, "I'm finished with this guy!"

  164. dikranmarsupial says:

    “I’m sure that would have made a huge difference.”

You're right, it made no difference at all. Plus ça change… ;o)

  165. Steven Mosher says:

Not really.
A) AGW has already happened, warming is in the pipeline, and there is talk about what can be done.

Given that AGW is not a discrete event, preventing AGW is not mutually exclusive with it having already happened or with it being in the pipeline."

    If you had said “possibly preventing more” I would have no quibble.

The categorical difference isn't preventability.

I would argue with that, but I learned a long time ago that since you get to determine "ze issue" it would be futile to do so.

Don't argue with me, argue with NASA:
    https://www.space.com/40943-nasa-asteroid-defense-plan.html

I would say the more salient feature is that we cause AGW and we don't cause meteor impacts. I would argue that's more salient because we may not (practically speaking) be able to prevent more AGW, even though it is theoretically preventable, and we are theoretically able to prevent meteor strikes even if we cannot practically do so today. So the categorical difference is related to the cause: we cause one; we don't cause the other. Seems simple, but maybe you control asteroids, who knew. I would rank preventability as a quantitative difference, not a categorical (0/1) difference.

There are a certain class of events that it doesn't really make sense to ascribe a probability to.

Of course, I also learned long ago that like certain others who shall remain nameless, you get to determine what actually does and doesn't make sense, but I happen to think it absolutely does make sense to evaluate the probability of an event like NK launching a nuclear attack. But my point is that many, many other likewise foolish people do so also, irrespective of your superior insight and wisdom making it obvious that they shouldn't, and that not an insignificant # of them dismiss other low probability high damage function events – AGW in particular. Which speaks to complicating factors in how people approach risk.

A) Never said I get to determine this, shrugs.
B) Not sure how you even begin to evaluate the probability unless it is just a Bayesian guess. But go ahead, show your guess/calculations for Kim. In typical war scenarios you are not assigning probabilities. Take the scenario we were given in the mid 80s: a two-front war in the Fulda Gap and on the Korean peninsula. Nobody assigned a probability; there was no place to start to justify it. It was just posited as a stressful scenario that you adopted as a truth for the purposes of force planning: "we want to be prepared for X". Nobody thought it would happen, but you're not paid to evaluate whether it is probable or not. You just use it. Same with planning in ROC: the scenario was a blockade and then attack by the PRC. No probability is assigned. It's just accepted as a scenario you use in planning. If you asked the probability you would be laughed out of the room. It's a scenario, selected for the stress it induces.
C) Of course people ascribe "probabilities"; they use words like low or high. Can't stop them; not sure it makes any pragmatic sense.

That doesn't mean that the probabilities are easily quantified, but a risk is certain even if the level of risk is uncertain, and there is a wide space between certain and non-zero.

I'm not at all sure what it means to say a risk is certain. But yes, there is a non-zero risk that aliens will invade and suck your brain out of your skull, and then move along to better meals.

    At any rate,… There are a certain class of events …

    Anywho, out of curiosity, how would you describe that “certain class of events?”

A non-definitive list of attributes: sure. Events where you have zero cases in the past. Hmm, the probability that aliens will visit and suck your brains out. Oh wait, that presupposes you have them. You might say "low", and you might argue that since it has not happened before (maybe it explains Trump… na, forget it) the probability is low; maybe you put a subjective number on it, 1 in a million. Not sure that it makes sense to do it. Many war scenarios: it might (as I said) not make sense to apply a probability. You can do it, obviously; not sure it makes sense – meaning not sure you can make a compelling case, and not sure whether a "low" probability means you do nothing or something. And this is just another way of saying that I'm not sure math is always the best tool for every problem.

Other classes of events: you don't know the probability of X occurring, but whether or not X occurs is unimportant to you. It probably doesn't make much sense to apply a probability to things that are unimportant. I don't know the probability that your left shoe is untied right now. Not sure it makes sense to apply a probability. OK, low. Shrugs. Theoretically possible, pragmatically unimportant.

So I don't know whether NK will attack us or not. It's hard to know even where to begin to estimate a probability. Low one day, high the next? Really low on Chuseok? And if I apply one subjectively, it's hard to know whether it just represents my hopes or fears. And when another day goes by, how do I update this prior? Dunno. It doesn't make sense to apply a probability to it, but you go ahead. Do I even care? So when people ask me, you live in Seoul, are you scared? Nope. Oh, you think war is a low probability? Nope: don't know, don't care, doesn't make any sense to apply a probability. Today I'm on the right side of the dirt; it doesn't get any better than that.

In climate science? Let's see: for the SRES, people would not assign probabilities, and nobody complains. It's hard to apply a probability, since the scenario has so many different elements, and more importantly you are not choosing it because of its probability – you are choosing it to illustrate a range of futures. So folks might reasonably argue that it makes no sense to apply a probability; you are picking ranges to make different points.

So, can we apply probabilities? Always: Drake-like equations are cool and can be used in many ways, even in war games. But few people take the end numbers seriously, as numbers. It's like guesswork with the appearance of math.

So, like I said, there are some cases where it probably doesn't make sense to apply a probability. Or rather, there are cases where other tools may be better: lots of scenario analysis.

Questions?

  166. Steven Mosher says:

So to summarize for you Joshua.

Events where it may not make sense to assign a probability – the ways it doesn't make sense can also differ:

A) where the various outcomes don't matter to you;
B) cases where collecting data to update your prior is impossible or hard to define;
C) cases where you are doing scenario analysis to establish a range of possible futures;
D) cases where you have a non-countable sample space and someone asks you to predict the probability of a particular outcome (it has a probability, but it's rather paradoxical). Silly example: the exact spot a dart thrown at a dartboard will hit.

And of course you are always free to assign probabilities; it may not be pragmatically important. I just decided that there's a 10% probability that god exists. No, seriously. Lukeatheist. That makes sense, right?

Related to D, here is your quiz: can zero-probability events occur?
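For the record, the standard answer to the quiz is yes, and a two-line sketch demonstrates it:

```python
# In a continuous sample space every exact outcome has probability zero,
# yet some exact outcome is realised on every single draw.
import random

x = random.uniform(0.0, 1.0)   # P(X == x) is 0 for every particular x...
print(x)                       # ...and yet here one is.
```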

  167. tedpress says:

    Who are the hottest K-Pop stars and where do they stand on meteor defenses?

  168. Dave_Geologist says:

"No probability is assigned. It's just accepted as a scenario you use in planning."

But Steven, how do you know a probability wasn't assigned higher up the food chain, and the military were just told to war-game the ones whose probability was high enough that the Chief of Staff, Defence Secretary, President or whoever decided it was worth the effort? I'm sure the corporal in a trench or the sailor in an engine room wasn't involved in probability discussions, other than as coffee/water-cooler talk (e.g. why are we wasting our time on this?). Someone decided to war-game those scenarios, and not 20,000 Russian tanks crossing the northern border after they'd been smuggled in by a secretly-Quisling Canadian government.

Even if it was a rough translation of a spoken-word, non-parametric ranking, e.g.
    100% Certain
    93% (give or take about 6%) Almost certain
    75% (give or take about 12%) Probable
    50% (give or take about 10%) Chances about even
    30% (give or take about 10%) Probably not
    7% (give or take about 5%) Almost certainly not,
    0% Impossible
    you can still quantify the end points to get a feel for whether P x I is big enough to worry about. And have an over-ride that says an existential risk (e.g. 30,000 cannon letting loose on Seoul) is something to worry about whatever its probability. With another over-ride for when we can’t stop it but there are a few things we can do to prepare (e.g. asteroid-watch, but don’t try yet to design a rocket that can divert a dinosaur-killer because we don’t have the technology or wealth to succeed and it diverts resources we could apply to mitigate other risks).
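That words-to-numbers mapping is easy to make machine-readable, so the P x I bounds come out explicitly. A minimal sketch (the bands are the ones listed above; the impact figure is a placeholder):

```python
# Verbal likelihood bands -> numeric bounds -> P x I with explicit uncertainty.
LIKELIHOOD_BANDS = {                 # term: (low, high) probability
    "almost certain":       (0.87, 0.99),
    "probable":             (0.63, 0.87),
    "chances about even":   (0.40, 0.60),
    "probably not":         (0.20, 0.40),
    "almost certainly not": (0.02, 0.12),
}

def risk_bounds(term, impact):
    lo, hi = LIKELIHOOD_BANDS[term]
    return lo * impact, hi * impact

lo, hi = risk_bounds("probably not", impact=1_000)   # impact in, say, $M
print(f"P x I between {lo:.0f} and {hi:.0f}")        # 200 to 400
```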

    I’ve mentioned before the risk matrix we used in my oil-industry experience. The lower-probability risk categories each spanned an order of magnitude, so the financial or loss-of-life P x I automatically got bigger and bigger uncertainty bounds (in absolute terms) as you moved to lower probabilities. And the numerical risks were assigned based on qualitative guidelines. E.g. “happened this year”, “has happened in this field”, “has happened in this basin”, “has happened within the company”, “has happened within the industry”, “has never happened within the industry but happened in a related industry”, “was attempted in the company but failed (e.g. large-scale fraud, diverting a $10Ms payment)”, “has never happened but is an existential risk so we’ll cover it anyway (that part is not covered by P x I, but is an over-ride when I is big enough)”.

    People do actually think about this stuff. They may not always get it right, but you don’t have to just throw up your hands and say “it’s too difficult”.

  169. Dave_Geologist says:

    silly example: the exact spot a dart thrown at a dartboard will hit.

    Particularly given “A) where the various outcomes don’t matter to you”. You don’t care where in the bulls-eye the dart hits, or where in the double-twenty, or whatever the target is for that particular throw. The number of targets on a dartboard is countable…

  170. dikranmarsupial says:

"D) cases where you have a non-countable sample space and someone asks you to predict the probability of a particular outcome (it has a probability, but it's rather paradoxical). Silly example: the exact spot a dart thrown at a dartboard will hit."

    You can still provide a probability density function over the sample space, which is what someone probably meant if they talked of the probability of a particular outcome in such a situation. It is important not to get overly concerned with the mathematical rigour if it gets in the way of understanding what the person is trying to say.
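A toy version of that, with assumptions of my own choosing (throws scatter about the centre as an isotropic 2D Gaussian with sigma = 50 mm; inner-bull radius 6.35 mm): any exact point has probability zero, but a region like the bullseye has a perfectly well-defined probability.

```python
import math, random

SIGMA_MM = 50.0          # assumed throw scatter
BULL_RADIUS_MM = 6.35    # inner bull

# For an isotropic Gaussian the radial CDF is Rayleigh: P(r <= R) in closed form.
p_bull = 1.0 - math.exp(-BULL_RADIUS_MM**2 / (2.0 * SIGMA_MM**2))
print(f"P(bullseye) = {p_bull:.4f}")             # ~0.0080

# Monte Carlo check: count throws landing inside the bull.
n, hits = 100_000, 0
for _ in range(n):
    x, y = random.gauss(0.0, SIGMA_MM), random.gauss(0.0, SIGMA_MM)
    hits += (x * x + y * y) <= BULL_RADIUS_MM**2
print(f"MC estimate  = {hits / n:.4f}")          # close to the closed form
```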

  171. Everett F Sargent says:

Discuss …

[embedded plots missing]

(I'm considering changing "Noodles" to "Pretzel Logic")

  172. Everett F Sargent says:

    Pretzel Logic (song)
    https://en.wikipedia.org/wiki/Pretzel_Logic_(song)

    “Steely Dan FAQ author Anthony Robustelli describes “Pretzel Logic” as a bluesy shuffle about time travel.[5] Fagen has stated that the lyrics, including anachronistic references to Napoleon and minstrel shows, are about time travel.[6][5] According to Steely Dan FAQ author Anthony Robustelli, the “platform” referred to in the song’s bridge is the time travel machine.[5]”

  173. Everett F Sargent says:

    Interactive comment on “Ideas: a simple proposal to improve the contribution of IPCC WG1 to the assessment and communication of climate change risks” by Rowan T. Sutton
    https://editor.copernicus.org/index.php/esd-2018-36-AC2.pdf?_mdl=msover_md&_jrl=430&_lcm=oc108lcm109w&_acm=get_comm_file&_ms=69231&c=144237&salt=1105116537584030391
"It is very unlikely that ECS is greater than 6C but this value may be considered a Physically Plausible High Impact Scenario (PPHIS). If realised, such a value for ECS would very likely result in an increase in global mean surface temperature by 2100 well above 2C relative to 1850-1900 under all RCP scenarios except RCP2.6 (high confidence)."

    I’m pretty happy with this being published as is. Although the IPCC AR6 WG1 might change …
    IF(ECS.GE.6)THEN(P.LE.0.1) to
    IF(ECS.GE.6)THEN(P.LE.0.05)

    Which would sort of throw a spanner in the works, so to speak.

  174. Everett,
    The blow-up observed (re risk-vs-probability) will always occur with a fat-tail probability, since the exponential impact will always overcome the slow fat-tail decline. None of the Gaussian sharp-tails will blow-up.
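A numerical illustration of that tail argument (the distributions and the impact constant k are illustrative choices of mine): with an impact function growing exponentially in sensitivity, a power-law tail keeps the risk density p(s)·I(s) growing, while a Gaussian tail eventually kills it.

```python
import math

def gaussian_pdf(s, mu=3.0, sigma=1.0):
    return math.exp(-(s - mu) ** 2 / (2.0 * sigma**2)) / (sigma * math.sqrt(2.0 * math.pi))

def pareto_tail_pdf(s, alpha=3.0, s_min=1.0):
    return alpha * s_min**alpha / s ** (alpha + 1) if s >= s_min else 0.0

def impact(s, k=2.0):                 # exponential damage in sensitivity s
    return math.exp(k * s)

for s in [3.0, 5.0, 7.0, 9.0]:
    print(f"s={s}: Gaussian risk {gaussian_pdf(s) * impact(s):10.3e}   "
          f"fat-tail risk {pareto_tail_pdf(s) * impact(s):10.3e}")
# Gaussian: exp(k*s - (s-mu)^2/2) peaks and then decays to zero.
# Pareto:   exp(k*s) / s^(alpha+1) grows without bound.
```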

    Is that the discussion you desired?

  175. Steven Mosher says:

Hottest K-pop? BTS, hands down. Hard-working kids.

Meteor defense? Dunno. Personally I think it's silly. I'm still on the right side of the dirt and loving it. My suggestion?

Tedpress: Is that you, Tom? If yes, can you please provide the details about your donations to Gore campaigns? I can't evaluate anything you post here until I can confirm the claim that you made.

  177. Steven Mosher says:

dk,
Joshua's question was what kinds of events I was talking about. As you know, in a non-countable sample space the probability of a single event is technically zero, so it doesn't make much sense to talk about that, and it makes MORE sense to talk about it the way you did.

But that wasn't his question. For some people it seems paradoxical to refer to these as zero-probability events, since they happen. It makes technical sense to folks in the field, but again, that wasn't his question, and I took his question to be sincere. So I answered it.

  178. Steven Mosher says:

Dave, the question wasn't which target. The question was the exact location, as in x and y.

  179. Dave_Geologist says:

    Everett,
    Other than that the “exponential” at the high end is probably a step-function (EAIS collapses, clathrates or tundra belch gigatonnes of methane, we lock into an ice-free attractor, most of the taiga burns, seasonal climates switch to decadality rather than seasonality (as appeared to happen in the PETM*)), the risk curve is surely what we’d expect. The non-existential risks cluster close to the ECS PDFs but are right-biased because of the increasing impact function. The impacts get worse at an increasing rate but are the same category (TM) of risk. Above a certain temperature threshold (for which ECS is a proxy here), new, perhaps existential impacts appear, which have essentially zero probability below that threshold.

    * The decadal-drought-broken-by-megaflood observations tend to be on the scale of part of a European country or of a US state. On a continental scale, it may still be seasonal in that there is a dry season and a wet season. But if all the rainfall is concentrated into one superstorm which doesn’t move, the return period for a particular patch of territory will be multi-annual.

  180. Everett F Sargent says:

    Technical Summary IPCC AR5 WG1 (p. 81)
    https://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_TS_FINAL.pdf
    TS.5.3 Quantification of Climate System Response (1st full paragraph)

    Estimates of the equilibrium climate sensitivity (ECS) based on observed climate change, climate models and feedback analysis, as well as paleoclimate evidence indicate that ECS is positive, likely in the range 1.5°C to 4.5°C with high confidence, extremely unlikely less than 1°C (high confidence) and very unlikely greater than 6°C (medium confidence). Earth system sensitivity over millennia time scales including long-term feedbacks not typically included in models could be significantly higher than ECS (see TFE.6 for further details). {5.3.1, 10.8; Box 12.2}

    Earth system sensitivity = ESS.GT.ECS

    I tend to think/believe that the likely ECS range (say either 90% or even 95%) reported from paleo studies may be a mashup of ESS and ECS due to poor temporal resolutions. But then again I need to do more reading of those studies (regardless of method) to better understand high ECS values (ECS.GE.6), specifically those studies which show a mean/median/mode where (ECS.GE.6).

    Box TS.1 | Treatment of Uncertainty (p. 36)

Term*                      Likelihood of the outcome
Virtually certain          99–100% probability
Very likely                90–100% probability
Likely                     66–100% probability
About as likely as not     33–66% probability
Unlikely                   0–33% probability
Very unlikely              0–10% probability
Exceptionally unlikely     0–1% probability

*Additional terms (extremely likely: 95–100% probability; more likely than not: >50–100% probability; extremely unlikely: 0–5% probability) may also be used when appropriate.

    The IPCC ECS statement has three conditionals as follows:
    (1) IF(ECS.LE.1)THEN(P.LE.0.01)
    (2) IF(ECS.GE.1.5.AND.ECS.LE.4.5)THEN(P.GE.0.66.AND.P.LE.1.00)
    (3) IF(ECS.GE.6)THEN(P.LE.0.1)

Note: I could have screwed up the Boolean logic, but I think/believe I have it right.

    Those three conditionals are represented in the “Likelihood CDF Noodles” plot (as vertical black lines at 1, 1.5, 4.5 and 6). also included are two horizontal black lines, for P.LE.0.01 (ECS.LE.1) and P.GE.0.9 (ECS.DE.6). Finally a “floater box” (with a green dashed horizontal lines) represents the P.GE.0.66 minimum change required in the likely statement for 1.5.LE.4.5. I pin the lower left corner of the then current PDF/CDF to visually determine if I have met the 2nd conditional (likewise for the 1st and 3rd conditionals, there are also a set of cells that show the numerical values).

I designed eleven PDF/CDFs to meet all three conditionals (some hit all three exactly, some hit two exactly and some hit one exactly, but all eleven pass all three conditionals). The author's example, strictly speaking, does not meet two of the three conditionals (1st and 2nd), and the "floater box" is set to that PDF/CDF (a two-parameter gamma distribution); it is the 1st "Gamma" shown in the PDF and CDF plots and is labeled "Risk A" in the "Risk Noodles" plot.

I will continue, but I wanted others to review what I've written so far and check that there are not some obvious errors that are not yet obvious to me. :/

  181. Everett F Sargent says:

    OK, I just found my 1st error, I used exceptionally unlikely (0.01) where I should have used extremely unlikely (0.05). Author’s PDF passes two conditionals and barely misses the 2nd conditional. So I’m really OK with the author’s choice.

    So, for the moment, never mind. Back to the olde tyme drawing board. But I’m still thinking that the Risk Noodles will be even more so all over the place (with single, double and triple Y values that are the same).
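For anyone who wants to replicate the check, here is a minimal sketch (assuming scipy is available) using the corrected thresholds: extremely unlikely below 1 °C (P ≤ 0.05), likely in 1.5–4.5 °C (P ≥ 0.66), very unlikely above 6 °C (P ≤ 0.10). The gamma parameters below are illustrative, not the fit discussed above.

```python
from scipy import stats

def passes_ipcc_conditionals(cdf):
    return (cdf(1.0) <= 0.05                    # extremely unlikely ECS < 1
            and cdf(4.5) - cdf(1.5) >= 0.66     # likely 1.5 <= ECS <= 4.5
            and 1.0 - cdf(6.0) <= 0.10)         # very unlikely ECS > 6

# Illustrative two-parameter gamma candidate (mode ~2.4 C, mean 3.0 C).
candidate = stats.gamma(a=5.0, scale=0.6)
print(passes_ipcc_conditionals(candidate.cdf))  # True for these parameters
```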

  182. Everett F Sargent says:

    Paul Pukite (@WHUT)

    “Is that the discussion you desired?”

    Kind of. Some risk curves collapse to zero (double Y values), some explode monotonically (one Y value) and some wiggle before exploding (triple Y values). I even have one that goes through a 2nd wiggle (two triple Y values).

    Whatever I didn’t understand about risk (which is a lot) before now, I’ve now come to the conclusion that I have unlearned even more about risk (this exercise does not comport with my thoughts on increasing monotonic risk).

  183. Everett F Sargent says:

    Note, I’m using Knutti17 “Beyond equilibrium climate sensitivity” as it is a rather recent review paper on most reported TCR/ECS papers to date. There was one recent paper that claimed an ECS of 10-11 degrees C. So I’m looking for high ECS papers as NL has given us dozens of low ECS papers (with one even including paleo data).

  184. Everett F Sargent says:

    This …
    “Those three conditionals are represented in the “Likelihood CDF Noodles” plot (as vertical black lines at 1, 1.5, 4.5 and 6). also included are two horizontal black lines, for P.LE.0.01 (ECS.LE.1) and P.GE.0.9 (ECS.DE.6). Finally a “floater box” (with a green dashed horizontal lines) represents the P.GE.0.66 minimum change required in the likely statement for 1.5.LE.4.5. I pin the lower left corner of the then current PDF/CDF to visually determine if I have met the 2nd conditional (likewise for the 1st and 3rd conditionals, there are also a set of cells that show the numerical values).”

    Should read …
    “Those three conditionals are represented in the “Likelihood CDF Noodles” plot (as vertical black lines at 1, 1.5, 4.5 and 6). also included are two horizontal black lines, for P.LE.0.01 (ECS.LE.1) and P.GE.0.9 (ECS.GE.6). Finally a “floater box” (with a green dashed horizontal lines) represents the P.GE.0.66 minimum change required in the likely statement. I pin the lower left corner of the then current PDF/CDF to visually determine if I have met the 2nd conditional (likewise for the 1st and 3rd conditionals, there are also a set of cells that show the numerical values).

    (still includes my newer error for P.LE.0.01 which should be P.LE.0.05)

  185. Dave_Geologist says:

    I tend to think/believe that the likely ECS range (say either 90% or even 95%) reported from paleo studies may be a mashup of ESS and ECS due to poor temporal resolutions.

    See the PALAEOSENS review Everett, which attempts to split out fast from slow feedbacks and more-or-less reconciles palaeo ECS estimates with the IPCC (a bit bigger, 2.2–4.8 K per doubling of atmospheric CO2). No comfort for lukewarmers you’ll note.

    We shouldn’t be complacent about the slow feedbacks though.
    1) Slow in some cases might mean hundreds not thousands of years, so not the distant future.
    2) It’s in the nature of ESS that it’s “baked in”. When we talk about ESS/ECS ratios mattering after 2100, we’re not talking about things we can influence after 2100. Rather about how much in-the-pipeline warming we’re committed to, even if we stabilise or reduce CO2.
    3) This part is speculative, but we’ve never done it this fast before, not even in the PETM (at least an order of magnitude slower). We tend to assume that will bring out lags in the system, so that we’ll approach ESS equilibrium at a pace slower than our CO2 addition rate but faster than PETM warming. But AFAICs there’s no a priori reason why our unprecedented rapidity should not, for example, result in an overshoot and rebound. So transient conditions en route to ESS may be worse, not better, than the final ESS outcome. I have come across a few papers analysing the impulse response from multi-model ensembles, which you could then convolve with candidate forcing paths, but the two I could quickly find are not definitive in my mind. One is very long-term (100 ky) and deals only with CO2 drawdown. The other puts in only a fairly small pulse of CO2 and may be too small to trigger potential tipping points.

  186. I have been told numerous times that if we stopped CO2 emissions, we would see temps stop rising in ten years or less. That seems very fast to me. I have also read 25 years and have sometimes suggested that time frame, but I usually get corrected to the ten yr or less time frame.

    I think the ten year or less time frame is very good news, but there is a bit of bad news as well. One item of bad news is what would happen if/when we go from adding emissions to flat in a big hurry (food supplies? political upheaval, large human migrations/die-offs, etc.). Another bit of bad news is that we are already so hot that we have probably bought a lot of food and ecosystem collapse and disruption, and a lot of that just hasn’t landed on our doorstep yet. Finally, there is the bad news that we don’t appear to be capable of making reductions in emissions on a time frame that is meaningful in terms of AGW and stabilizing the global ecosystem. I just don’t think we have the ability to decarbonize the human global economy, so we are going to experiment with triggering the fastest major extinction event to ever happen on this planet. Not a great idea, but it seems to be the path we are on.
    Cheers,
    Mike

  187. Everett F Sargent says:

    Dave_Geologist,

    I did stumble over the PALAEOSENS paper in my searches. That one is dated 2012, so I’m thinking it is included in AR5 WG1. I’m mostly looking for post-AR5 papers, as those are the ones that will likely change the AR6 ECS conditionals.

    But I’ve included the paper you have referenced above. It will form an anchor paper for newer paleo papers. So thanks.

  188. JCH says:

    OT:

    Climate sensitivity estimates – sensitivity to radiative forcing time series and observational data

    Abstract.

    Inferred effective climate sensitivity (ECSinf) is estimated using a method combining radiative forcing (RF) time series and several series of observed ocean heat content (OHC) and near-surface temperature change in a Bayesian framework using a simple energy balance model and a stochastic model. The model is updated compared to our previous analysis by using recent forcing estimates from IPCC, including OHC data for the deep ocean, and extending the time series to 2014. In our main analysis, the mean value of the estimated ECSinf is 2.0 °C, with a median value of 1.9 °C and a 90 % credible interval (CI) of 1.2–3.1 °C. The mean estimate has recently been shown to be consistent with the higher values for the equilibrium climate sensitivity estimated by climate models. The transient climate response (TCR) is estimated to have a mean value of 1.4 °C (90 % CI 0.9–2.0 °C), and in our main analysis the posterior aerosol effective radiative forcing is similar to the range provided by the IPCC. We show a strong sensitivity of the estimated ECSinf to the choice of a priori RF time series, excluding pre-1950 data and the treatment of OHC data. Sensitivity analysis performed by merging the upper (0–700 m) and the deep-ocean OHC or using only one OHC dataset (instead of four in the main analysis) both give an enhancement of the mean ECSinf by about 50 % from our best estimate.

  189. Everett F Sargent says:

    JCH,

    Yeah, I tripped over that paper last night, another observationally constrained paper with NL-like low ECS values. I’m much more interested in the high end of ECS right now (anything with mean/mode/median at or above 4C).

    There is a method to my madness; others have often called it the scientific method (which is what I thought WG1 was, you know, The Scientific Basis or some such). Take the AR5 WG1 three conditionals, a single exponential impact (it would also be useful to use a polynomial with all positive coefficients, often called a power series (see below)), use those three ECS conditionals to their broadest/widest extent possible … and you end up with noodle plots, or pretzel-logic plots?

    https://en.wikipedia.org/wiki/Exponential_function#Formal_definition

  190. Joshua says:

    “If you had said ‘possibly preventing more’ I would have no quibble.”

    Earlier I spoke of mitigating.

    I’m not terribly bright, but even I know that the AGW which has already occurred, and that which is already in the pipeline, cannot be prevented. Interesting that you quibble with an argument that I wouldn’t even consider making.

    “The categorical difference isn’t preventability. I would say the more salient feature is we cause AGW and we don’t cause meteor impacts.”

    I would question WHY that feature is salient. We could say it’s salient because of accountability at some level, but I don’t particularly care about that. I’m focusing on what’s more practical. And at that level, I think that the extent to which the anthropogenic feature is important is in direct connection to the extent to which that means it is preventable.

    “I would argue that’s more salient because we may not (practically speaking) be able to prevent more AGW,”

    Just as we may not be able to prevent monkeys from flying out of your butt.

    I think what distinguishes AGW from certain other (one might even say “natural”) low probability high damage events is that AGW MAY manifest in PREVENTABLE damage.

    “even though it is theoretically preventable, and we are theoretically able to prevent meteor strikes even if we cannot practically do so today.”

    I see a relevant distinction in terms of present day “prevent” capability. Which is, effectively, IMO, a categorical difference.

    “So the categorical difference is related to the cause. we cause one; we don’t cause the other. seems simple but maybe you control asteroids,”

    I don’t agree. IMO, whether we “cause” the damage is irrelevant in a practical sense (unless you’re focusing on guilt or accountability). IMO, what matters is the extent to which we have the potential to protect against the risk. In that sense, in a perfect world, if we could completely adapt to AGW (including our “natural” environment), I don’t know that it would matter to me that we “cause” AGW.

    “I would rank preventability as a quantitative difference, not a categorical (0/1) difference.”

    In some abstract sense, sure. In a practical sense, I don’t agree. We can clearly take action, now, to prevent the risk of AGW. That doesn’t necessarily mean that we SHOULD. It doesn’t tell us where to set our sights w/r/t the degree of risk we’re trying to prevent. Tough questions, those.

    “There are a certain class of events that it doesn’t really make sense to ascribe a probability to. But go ahead, go ahead and show your guess/calculations for Kim.”

    My guess is that there is a real threat. Not so much of a deliberate direct first strike (ensuring their own destruction), but of an unintended strike or an escalation that gets out of control. I wouldn’t know how to begin with making a quantification (although some people smarter than I do so). But my point wasn’t with respect to my beliefs; not sure why you keep missing that.

    “In typical war scenarios you are not assigning probabilities. Take the scenario we were given in the mid 80s: a two front war in the Fulda Gap and the Korean peninsula. Nobody assigned a probability. there was no place to start to justify it. It was just posited as a stressful scenario that you adopted as a truth for the purposes of force planning. ‘we want to be prepared for X’ Nobody thought it would happen, but you’re not paid to evaluate whether it is probable or not. You just use it. Same with planning in ROC. the scenario was a blockade and then attack by the PRC. no probability is assigned. Its just accepted as a scenario you use in planning. If you asked the probability you would be laughed out of the room. Its a scenario, selected for the stress it induces.
    C) Of course people ascribe ‘probabilities’ they use words like low or high. Can’t stop them, not sure it makes any pragmatic sense.”

    Of course there are assumptions that delineate probabilities to some extent. Probabilities of the likelihood of an event occurring, and probabilities of the resulting damage. If no one made ANY assessment of probabilities, then no one would plan for the potential for it to happen. We see estimates of damage levels all the time. I’m not assuming any particular degree of precision. You work with what you’ve got. You make evaluations on the basis of subjective evaluations to some extent.

    Anyway, this is getting boring…. so moving on….

    As near as I can tell, the rest of what you wrote are criteria that would indicate reasons that overly specific estimates wouldn’t be compelling or wouldn’t inspire high confidence.

    OK. Sure. That doesn’t lay out a case, that I can see, for a “certain class of events” for which I would not ascribe probabilities, just a list of factors I would take into consideration when determining the range of probabilities or my level of confidence in specific estimates.

    But I’m kind of lost as to what we are even discussing at this point. Again, my point was that there is a reason why taking action to “prevent” anthropogenically caused climate change is more compelling than taking action on other, low-probability, high-damage events such as an (extinction-type) meteor strike (obviously, the scale of the meteor is relevant) or events like a supervolcano or the sun blowing up – and part of the basis on which I make my determination as to what makes sense in that regard is whether the event is, in any practical sense, preventable.

    Going to the end…

    “so like I said there are some cases where it probably doesn’t make sense to apply a probability. or rather there are cases where other tools may be better. lots of scenario analysis.

    questions?”

    Why would you split off scenario analysis from probability assessment? They seem inextricably linked, to me.

    As it happens, I have lived in Seoul. I have worked (and still do work) extensively with Koreans. I never got a sense that they EVER stopped evaluating the probabilities of NK launching a nuclear strike. That doesn’t mean that I think they have a compelling, exact quantification of the probabilities – nor that I would say that quantifying the risk “makes sense” in the meaning that their estimations justify a confidence that is necessarily more robust than a random guess. Sure, context matters. A lot. Yeah, that’s all part of what complicates how humans approach risk.

    Again, that was one of my points. People don’t approach evaluating risk in a consistent fashion. As an example, there is quite a bit of overlap between a set of Americans who dismiss the risk from AGW because they see it as low probability and low damage, and a set of Americans who advocated invading NK to prevent what they saw as a (I guess high probability?) high damage risk. And then Trump met with Kim, and magically their evaluation of the probabilities changed, practically overnight. Others would (and do) approach the relative risk from those two threats differently.

    It is what it is.

  191. Joshua says:

    On my phone. Messed up the formatting horribly. Losing interest fast and so won’t try again, but I think you can figure it out.

  192. Joshua says:

    On another note. Just when I thought irony was dead, someone kills it all over again:

    Jordan Peterson sues Wilfrid Laurier University for defamation.

    I’m hoping that someone can show me how this is a joke.

    https://www.theglobeandmail.com/canada/article-jordan-peterson-sues-wilfrid-laurier-university-for-defamation/

  193. Joshua says:

    The same people who think that mitigating risk of AGW is too costly:

  194. JCH says:

    Everett – I suppose you’ve read this one, but just in case:

    Potentially large equilibrium climate sensitivity tail uncertainty

    …Tail behavior is often postulated rather than empirically derived, because oftentimes it is statistically very difficult, and sometimes even impossible, to estimate the probabilities of extreme values when there are so few extreme values of rare tail-events in the existing data. This is overwhelmingly true for estimates of ECS tail-probability distributions. Pending other climate-econometric challenges, Cox et al. (2018) may have found a useful new way of measuring the ‘best estimate’ of ECS. In doing so, however, they have effectively assumed something close to a Normal distribution around the best estimate. While this analysis may be used to justify statements around the ‘best estimate’ of ECS, it does not justify statements concerning its tail behavior and, in particular, cannot rule out the fat tails that characterize many physical processes. …

  195. izen says:

    @-“Sensitivity analysis performed by merging the upper (0–700 m) and the deep-ocean OHC or using only one OHC dataset (instead of four in the main analysis) both give an enhancement of the mean ECSinf by about 50 % from our best estimate.”

    So small differences in the model of energy distribution in the oceans can give an ECS of 2.8C and a TCR of 2.1C.

    Whenever climate sensitivity has been discussed on previous threads I have expressed my doubts about the usefulness of the metric or at least the supposed relevance of small differences in the estimates.
    This thread will be no exception.

    Climate sensitivity, ECS/TCR, is a derived metric from modelling. It refers to the average global temperature for a year. It contains no information about how that temperature rise is distributed, either regionally or over the day/night, summer/winter cycles. As Held has discussed in the past, the distribution of temperature change can alter the global average and therefore the apparent ECS. If climate patterns were to change in a way that kept the Arctic ice-free year round, the extra energy lost during winter from the 30C warmer sea surface, compared to ice, would result in a lower global average temperature rise. More of the extra energy from the CO2 effect would have been lost more efficiently to space.

    I doubt this lower ECS would reflect a smaller disruption to the inhabited environment in the N Hemisphere with an ice-free Arctic.

    TCR/ECS are secondary metrics, derived from the underlying, and important, change: the amount of extra energy absorbed, in Joules. Estimated ECS is just an estimate of how that energy is distributed into the 3 reservoirs: land, air and sea.
    Land has a very low thermal capacity, so shows a large temperature response to extra energy, but has little ability to transport or store much energy.
    Air has a small thermal capacity so if more energy ends up in the atmosphere it will warm much more than if that same energy enters the oceans. But it can transport that extra energy rapidly at a global scale, so offsets its high sensitivity with efficient distribution.

    Oceans have a very large thermal capacity; they can absorb 90% of the extra energy with far less temperature change. If ALL the extra CO2 Joules could be distributed into the deep oceans, a doubling of CO2 would probably raise temperatures by less than half a degree. But while oceans can transport and distribute energy, there are timescale constraints on how much, how fast, how far.
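
    As a back-of-envelope check, with round textbook numbers (my assumptions, not anyone’s dataset), this sketch puts the same energy into the air and into the full ocean:

        # Same Joules, very different temperature responses.
        E = 2.5e23                       # J, roughly recent multi-decadal ocean heat uptake
        m_atm, cp_air = 5.1e18, 1005.0   # kg and J/(kg K) for the atmosphere
        m_sea, cp_sea = 1.4e21, 3990.0   # kg and J/(kg K) for the ocean

        print(f"All into the air:   dT ~ {E / (m_atm * cp_air):.0f} K")   # ~50 K
        print(f"All into the ocean: dT ~ {E / (m_sea * cp_sea):.2f} K")   # ~0.04 K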

    So estimates of climate sensitivity are based on underlying assumptions and constraints on how the energy is shared out between the land, sea and air. Put more of the Joules into the ocean, the ECS is lower.

    My objection is that the impacts, especially low incidence, high cost impacts, are unlikely to scale in any meaningful way with ECS. It is much more likely that they will scale with Joules added to the system.
    It is the amount of extra energy absorbed that is the basic measure of impacts. It is the spatial and temporal distribution of the climate in response to that energy that will drive impacts. From that, as a secondary derivative, can be estimated the annual global average temperature change.

    Arguing over the decimal place value of ECS is quibbling over where the energy has gone, oceans or air.
    What matters, and is likely to be a better indicator of the probability of high impact extremes, is the amount of Joules added to the climate system.
    Not the annual average global temperature derived from that pattern of distribution.

  196. Everett F Sargent says:

    JCH,

    I saw that one too last night but decided against it, based on it being in Economics Letters. Too many papers, too little time; plus I did not expect them to use actual data, and they don’t. They critique Cox18 though. If Cox18 did use a Normal PDF, then no fat tail.

    The paper is mixing a Normal PDF with either a Pareto (very poor for a complete PDF, it has always ranked very low in my analyses) or a Lognormal (OK for a complete continuous PDF). Problem is, neither of their compound PDFs are continuous PDFs (continuous derivatives and what all); both are combinations or compounds of two different PDFs, sort of like a FrankenECS. PDFs don’t have to be continuous, but then that opens up even more doors (of the dozens of PDFs that are capable of fat tails and meet the three AR5 conditionals, let’s go ahead and take all possible combinations of 2, 3, 4, …). :/
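
    To see how much the tail choice matters, here is a minimal sketch (Python/scipy) comparing a thin-tailed Normal with a fatter-tailed Lognormal, both with median 3C; the parameters are invented for illustration, not taken from that paper:

        from scipy.stats import norm, lognorm

        thin = norm(loc=3.0, scale=1.0)    # Normal, median 3
        fat  = lognorm(s=0.4, scale=3.0)   # Lognormal, median = scale = 3

        for name, d in (("Normal", thin), ("Lognormal", fat)):
            print(f"{name:9s} P(ECS > 6) = {1.0 - d.cdf(6.0):.4f}")

        # The Normal puts ~0.001 of its mass above 6C; the Lognormal puts ~0.04
        # there, roughly thirty times more, with much the same central behavior.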

  197. Dave_Geologist says:

    smallbluemike

    “I have been told numerous times that if we stopped CO2 emissions, we would see temps stop rising in ten years or less.”

    The ones I’ve seen either go flat or go a small fraction of a degree higher over a decade or two, then decline very gently. But that’s unrealistic. It doesn’t mean stabilise emissions, it means stop all our emissions, and IIRC it’s CO2e, so it also means stopping fugitive emissions from the gas wells and pipelines we’ve shut in, methane belching from cattle, and deforestation. Total cold turkey, instantly: not only economically and politically infeasible, but it would also result in at least hundreds of millions of deaths, maybe billions of deaths IMO.

    It’s useful as a thought experiment, as a way to understand how the models work, as a counter to “it’s impossible, blame the Kochs, we just have to live with the consequences”, and as a reminder that if we do take drastic action, we will see short-term improvements. More realistic scenarios take many decades to feed through because the changes are implemented over decades. See the Caldeira & Myhrvold discussion at the bottom of the earlier thread.

    https://andthentheresphysics.wordpress.com/2018/06/04/science-and-skepticism/

    Superficially the 2012 and 2013 papers seem mutually contradictory, but they’re not. The difference is all about how fast we decarbonise.

  198. Hyperactive Hydrologist says:

    izen,

    Great post. I totally agree.

    I have often asked for evidence that low ECS = low/minimal impacts but none has been forthcoming. It seems to be an assumption that very few people seem to challenge. If you make the claim that low ECS means there is nothing to worry about you need to provide evidence for this in relation to impacts.

  199. Dave_Geologist says:

    “I don’t agree. IMO, whether we “cause” the damage is irrelevant in a practical sense (unless you’re focusing on guilt or accountability).”

    That’s a very important point Joshua. I would rank the quality of lessons-learned and pre-emptive interventions like this:

    1) Aviation, which has perhaps the best reputation for open, blame-free investigation and lesson-learning.

    2) Oil companies behind the scenes are actually pretty good as long as it’s among the professionals. And in some countries, the HSE regulator. It deteriorates internally the higher the level of management involved, and once it goes public everything goes through lawyers and practically nothing is admitted, and at least visibly, nothing is learned. Deepwater Horizon is a case in point. Everyone busy blaming the “greedy Brits” and engaging in CYA “of course it could never happen to us”, when all of the failure points were Made In America and were common across the industry. The only saving grace is that enough of the raw data and enquiry findings were made public that those who know what they’re doing can draw the correct conclusions. It’s all buried in reports the size of AR5, so the press, public and politicians won’t go near it. But hopefully the Gulf of Mexico industry has taken them on board behind the scenes, and quietly implemented the things they’ll publicly say were already implemented, or not needed.

    3) Anything public is really hard to learn lessons from, because it degenerates into point-scoring, finger-wagging and blame-dodging. Which is a shame, because there’s an obvious benefit in making things public, both in terms of accountability and in educating those who want to learn. But it very much gets in the way of determining what happened and stopping it happening next time. Well-executed inquiries like Hutton in the UK, and reasonably well-executed* ones like DWH in the US, can do good. But you have to read the reports, not the headlines and political speeches.

    4) And of course anything that attracts the ire or interest of politicians or media editors sits at the bottom. I probably need say no more on that one.

    * IMHO professional investigators in the USA have a harder time of it because of the practice of putting political appointees in charge of agencies, rather than civil servants. Also, the US has a reputation for box-ticking regulation, which sometimes seems to be designed so you can unambiguously prosecute someone for breaking the rules, rather than to maximise the beneficial impact of the rules. The UK leans towards giving more discretion to do the right thing, but with the caveat that you’re more likely to go to jail, or your boss is, if you do the wrong thing. You might say the US is more inputs-focused and the UK is more outputs-focused.

    But the UK still has problems, e.g. the latest NHS case which led to the minister calling for more blame-free investigation. That’s a more difficult area, because health scandals, legal scandals, church scandals etc. tend to involve individuals who were incompetent, criminal, or who exceeded their authority in a bad way. The problem is failure to listen to and protect whistleblowers or failure to stop someone who’s behaving unprofessionally, rather than not having the right operating procedures in place. The blame-free culture can be applied to those surrounding the unfit practitioner, but ultimately he or she is indeed unfit to practise and has to be struck off. Even if (s)he’s not to blame, because incompetence can’t be tolerated any more than criminality. There should, however, be room for rehabilitation and retraining where appropriate.

  200. Dave_Geologist says:

    “I would rank preventability as a quantitative difference, not a categorical (0/1) difference.”

    I agree here too Joshua. Preventability has nothing whatsoever to do with defining risk.

    Speaking as someone who’s walked-the-walk, on some occasions with lives and hundreds of millions or billions of dollars at stake on the low-probability, high-impact end, we always defined risk independent of preventability. Preventability (or mitigation) comes later. You assume the prevention or mitigation is in place, then redo the risk assessment with the mitigation as part of the environment. Then compare the two. If the mitigation* stops someone getting killed, the choice is a no-brainer. If it’s a pure dollar risk, the choice of whether or not to mitigate becomes a cost/benefit analysis.

    I’ve taken part in many risk assessments where we can’t prevent the risk, and all the mitigations are already in place because it’s a common risk or a foreseeable risk, and they’re legal or company-policy requirements. The only mitigation is not to go ahead. But sometimes that falls into the same category as “the only way I can mitigate not being killed in a plane crash is not to fly”.

    * We used mitigation both for preemptive actions and for the response if those mitigations don’t work. And for a response that stops the impact happening and one that reduces its impact. That may differ from everyday use.

    A real example. A few years ago we were conducting a series of min-fracs to fully characterise the fracture initiation, propagation and closure pressure in a particular formation. There was a very narrow pore pressure/fracture gradient window, and normally the procedure would have been deemed too risky. But in this case the information would mitigate another, larger risk and it was the only way to get it. The operation would take up to a day, and during each measurement cycle we’d have the drill-string stationary for a few hours, with inflatable packers sealing the test interval from the rest of the hole. That introduces several risks. A financial risk of getting stuck (and a collateral HSE risk of swabbing in gas in the process of getting unstuck). A blowout risk if something unforeseen happened in the hole, or if the drilling mud misbehaved and its weighting agent settled out. Or if we got stuck and/or weather-delayed and the weighting agent settled out as-per-spec. An information deficit because the tool was on the end of the drill-string and all the usual pressure and flow sensors were isolated from the lower part of the hole. A response deficit because some of the measures the drillers would take in the event of a kick (potential precursor to a blowout) require an ability to rotate and move the pipe and to circulate out dissolved gas or inject heavier mud (with the narrow PPFG window the preferred option would have been to circulate it out if possible).

    It took a long time to get it approved, with very high referrals and with the drilling department going right to the top and retaining a veto. One of the mitigations was routine, although sadly, not always implemented. Prior to DWH, it was accepted that the shear rams on the blowout preventer couldn’t cut through a drill-collar or pipe-joint. So we always had a clean piece of pipe across the shear rams when the drill-string was stationary or we were conducting potentially risky operations. Shutting in the BOP is post-event mitigation in the sense of minimising the impact once the risk has become reality, but having the right sort of drill-pipe across the shear rams is preemptive mitigation in case something goes wrong. We also added another mitigation in discussion with the service company conducting the test (of course they were fully informed and had a parallel risk assessment process, because they’d have people on the rig). It proved possible to configure the tool below the packers with a couple of extra pressure sensors. They’re much better precision than the drilling ones, better than 0.01 psi. That enabled us to monitor the hole for any misbehaviour, and even let us report a real-time pressure gradient between the sensors, giving early warning of mud settlement. IIRC that was the clincher in going ahead – a mitigation that provided enhanced information which made it much more likely that we’d see problems coming and nip them in the bud. That’s preventive mitigation.

    In the end all went safely and the operation was a success. But I presume there were still other mitigations behind the scenes, as would be done for other potentially risky operations like a well-test, e.g. ensuring that our Emergency Control Centre was aware and doing the same with shared offshore services like our helicopter and rescue boat operators.

  201. izen says:

    @-HH
    “If you make the claim that low ECS means there is nothing to worry about you need to provide evidence for this in relation to impacts.”

    What seems to be omitted is any consideration of WHY ECS may be low.

    If in one scenario ECS is high and another low when the SAME amount of extra energy has been added to the system it is because of where the energy went. If more goes into water with its much higher thermal capacity the temperature rise will be lower.

    If water were an inert thermal sink with little influence on the climate, this might be a benign outcome. Unfortunately, with its ability to transfer enormous amounts of energy without the driver of a large temperature difference, by reason of its phase change in the latent ‘heat’ of melting and evaporation, and the extra energy-trapping effect of water in vapour form in the atmosphere, it seems unlikely that a lower ECS from increased ocean absorption is inconsequential.

  202. Everett F Sargent says:

    It took a while, but I did figure out what was wrong with this figure …

    Basically likelihood should be a CDF not a PDF (per ISO definitions)
    https://www.earth-syst-dynam-discuss.net/esd-2018-36/#discussion

    You can now ignore my comment SC7 over there. Comments SC8 and SC9 are correct.

    I’m old and rusty now; it’s been like 40 years since I had my formal book learnin’ :/

  203. Everett,
    I don’t think I agree. Surely the risk is the probability of warming by some amount times the impact of that warming? Why do you think it should be a CDF rather than a PDF?

  204. A CDF would describe the likelihood of something occurring of a certain level or beyond. The beyond indicates the cumulative part. Then the likelihood of not occurring would be 1-CDF.

  205. Paul,
    I get that, but you can also get that from the PDF.

  206. I smiled when I saw the rhetoric (“Is anyone here really paying attention to the very fundamentals of risk? How could this happen? I am embarrassed for the author and Professor Ed Hawkins (acknowledgments)”) in your question; this really is not a good idea in a scientific discussion, unless you don’t actually want to be taken seriously. As it happens, the definition of risk used in the paper is correct, so I’m afraid your hubris has had the usual outcome.

    From Wikipedia (quoting the OED), risk can be defined as:

    3. The probability of something happening multiplied by the resulting cost or benefit if it does. (This concept is more properly known as the ‘Expectation Value’ or ‘Risk Factor’ and is used to compare levels of risk)

    This would be a more apt definition for a set of discrete outcomes, but the way that you would generalise that to a continuous outcome is to use a PDF.

    “We do not claim any subject matter expert (SME) status in risk or impacts (well except
    for 30+ years as a research coastal engineer where we deal with this stuff all the time),”

    LOL.

  207. Everett F Sargent says:

    See my most recent comment PP, see SC10. I’m way ahead of you (e. g. 1-CDF). I did get a bit rude though. About time too.

    ATTP, no you can’t. Because the PDF has units (the inverse of the x-axis units).

    DK, have a nice day. 🙂

  208. Everett F Sargent says:

    ATTP, I book-learned this stuff like 40 years ago. I KNOW that I am right.

    I posted the ISO standard definitions. I do know what I am doing even though I will admit to being a ‘bit’ rusty.

    I am coming to think I’m the only one here with both formal and 24/7 on-the-job professional training though.

    That is all.

  209. Everett,
    Okay, you’re saying that the problem is that the PDF has units of probability per x interval?

  210. “I did get a bit rude though. About time too.”

    well done. That way you probably won’t get a reply and you can claim victory.

    “DK, have a nice day.”

    So if you are going to ignore people pointing out that you are wrong, why do you expect a more substantive reply from the authors of the paper? All you have done is demonstrate that you have no counter-argument and cannot admit you made an error. That is the problem with hubris (or being a bit rude), it puts you in a position where you can’t back down without making an utter (and I mean “utter”) fool of yourself, so people double down instead, or engage in evasion.

  211. I must admit that I had taken the figure to be more illustrative than exact.

  212. Everett F Sargent says:

    DK, don’t care what the OED sez, look into the author’s own reference 1 (which I cited). Then please go here …
    Cumulative frequency analysis
    https://en.wikipedia.org/wiki/Cumulative_frequency_analysis

    And no, the author scotched it up quite fine in their Likelihood times Impact equals risk graph. Of that I am CERTAIN!!! :/

  213. EFS how about looking for a definition of RISK rather than pdf/cdf.

    ” Of that I am CERTAIN!!! ”

    yes, despite the fact the OED contradicts you. How about Wikipedia’s page on probabilistic risk management, which says “The total risk is the expected loss: the sum of the products of the consequences multiplied by their probabilities.” Anybody with a sufficient grasp of statistics to know what an expectation is will see that this means the risk is the product of probabilities and risks. Note the CDF does not give you the probability of a continuous outcome.

    Of course I am wasting my breath, because you have made it very clear that your prior probability I am wrong is one. ;o)

  214. Everett F Sargent says:

    ATTP, it’s all about tales and storylines and CliFi, don’cha know. AR6 should no longer carry the subtitle “The Scientific Basis”

    I Wasn’t Born Three Days Before The Day After Tomorrow!

    DK, could you please stop doubling down when you don’t have a clue? :/

    ATTP, please look at the (ISO) definition of likelihood …
    https://www.iso.org/obp/ui/#iso:std:iso:guide:73:ed-1:v1:en
    (0 to 1, no units)

  215. Everett F Sargent says:

    DK, now you are tripling down? 😦

  216. Everett F Sargent says:

    DK, please stop saying I am wrong when clearly that is not the case. The author of said idea made a very simple mistake in the example plot (using the PDF when they should have used (1-CDF)).

    R-LI, the L stands for likelihood it is totally unitless and goes from 0 to 1.

    That is all. :/

  217. Everett F Sargent says:

    R=LI (whatever)

  218. EFS “DK, could you please stop doubling down when you don’t have a clue?”

    yawn.

    As it happens, my research interests are in machine learning, a branch of statistics. I teach basic risk analysis/decision theory to undergraduates every year. I am also fairly expert in identifying discussions where my interlocutor is either trolling or incapable of admitting they are wrong, where there is little point in continuing.

  219. Everett F Sargent says:

    DK,

    Are you sure about this …

    “Anybody with a sufficient grasp of statistics to know what an expectation is will see that this means the risk is the product of probabilities and risks.”

    risk != risk times probabilities. Canceling out risk, we have nothing = probabilities.

    So here is what really happened: Sutton passes off the graph to Hawkins. Based on that incorrect graph (using a gamma PDF, which is clearly wrong; the PDF is never, nor can it ever be, likelihood per the ISO standards) I spent a great deal of (now wasted) time on their misdirect, hmm err, mistake. Nothing made sense based on that one incorrect graph. I did read the article, so there.

    So I looked at all the author’s references, starting with the 1st. I quickly scanned through that reference looking for even one PDF; then I saw likelihood in several figures, 0% to 100% always, and a light bulb went off in my head. It’s supposed to be a CDF! B-I-N-G-O. End of story. Bye.

  220. “R-LI, the L stands for likelihood it is totally unitless and goes from 0 to 1.”

    O.K., say I roll a die (numbered 1-6 as usual) and I have an impact function that says how much I lose for each outcome, such that I(1) = 1, I(2) = 2, I(3) = 4, I(4) = 8, I(5) = 16, I(6) = 32.

    Now if we assume that this is a fair die, then the probability distribution function (the discrete equivalent of a probability density function) P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6, such that they sum to one.

    The cumulative distribution function is C(n) = n/6.

    So what is the risk associated with rolling a four?

    Using the PDF it is R(4) = P(4)*I(4) = 8*1/6 = 4/3

    Using the CDF it is R(4) = C(4)*I(4) = (1/6 + 1/6 + 1/6 + 1/6)*I(4) = 8*4/6 = 16/3

    So the question is, why does the risk associated with rolling a four depend on the probabilities of rolling a 1, rolling a 2 and rolling a 3 as well as the risk of rolling a 4?
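
    To make the bookkeeping explicit, here is the same example in a few lines of Python (nothing beyond the arithmetic above):

        impact = {1: 1, 2: 2, 3: 4, 4: 8, 5: 16, 6: 32}
        p   = {n: 1/6 for n in impact}   # PMF of a fair die
        cdf = {n: n/6 for n in impact}   # C(n) = n/6

        print(p[4] * impact[4])          # PMF version: 4/3
        print(cdf[4] * impact[4])        # CDF version: 16/3, drags in P(1), P(2), P(3)

        # The total risk (expected loss) is a sum over the PMF: 63/6 = 10.5
        print(sum(p[n] * impact[n] for n in impact))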

  221. “Anybody with a sufficient grasp of statistics to know what an expectation is will see that this means the risk is the product of probabilities and risks.”

    Yes, you are right, it should of course have been “risk is the product of probabilities and impacts”. The point remains that the Wikipedia page on probabilistic risk analysis shows you are wrong about the definition of risk.

  222. Everett F Sargent says:

    DK,

    Saying I am wrong is not the same thing as formally proving that I am wrong. You are wrong. Am not. Am too. Am not. Am too. …

    It should be a very simple matter to formally prove that I am indeed wrong. But you can only use equations in your proof. :/

    Your pedantic bickering has gotten very boring at this point. What was your point again? That you can’t read the words that I have posted? Those words are pretty straightforward. Kind of hard not to understand those words properly.

    Bye.

  223. Likelihood (https://en.wikipedia.org/wiki/Likelihood_function) does not have the same meaning as probability in statistics. Now there are at least two possibilities: the authors of the paper have got it wrong, or you are failing to understand they are using the statistical meaning of likelihood. It is the latter.

  224. Everett F Sargent says:

    DK,

    Please remember the author’s definition which was Risk = Likelihood * Impact (R=LI). Are you here and now saying that Hawkins and Sutton are WRONG? After all it is their definition, not mine.

    Please explain their error then. Thanks.

  225. There were equations in my previous post; I note you have not answered the question at the end.

  226. EFS go find out what likelihood actually means in statistics.

  227. Everett F Sargent says:

    “Now there are at least two possibilities: the authors of the paper have got it wrong, or you are failing to understand they are using the statistical meaning of likelihood. It is the latter.”

    They pointed to the ISO standards, not I. 0.LE.Likelihood.LE.1. You do understand dimensional analysis? Degrees centigrade carry through to their risk metric, yes? They would fail the first week of a course in hydrodynamics! But then you don’t know hydrodynamics and I do.

  228. Everett F Sargent says:

    DK, I am now quite sure that I would never take a course from you in anything. I could care less about dice rolls; that isn’t a PDF/CDF or our current discussion of risk = likelihood times impact. Please stay on topic. Why are you moving the goal posts? :/

  229. EFS. Actually the thing you really need to look up is probability density function and why you can’t talk about the probability of a continuous random variable taking on a particular value. The previous example I gave shows why it isn’t the CDF, when you’ve answered my question about it, maybe we can continue with the discussion, but not until then.

  230. “Dkj, I am now quite sure that I would never take a course from you in anything. I could care less about dice rolls, that isn’t a PDF/CDF or our current discussion of risk = likelihood times impact. Please stay on topic. Why are you moving the goal posts?”

    Ah I see you have run away from the question and just used rhetorical evasion. Sorry I am not interested.

  231. Everett F Sargent says:

    Oh DK,

    The topic is Sutton’s idea and his fundamental error in defining likelihood; his graphic proves such, yet he does not appear to have read his own reference 1.

    Here’s the bottom line: you just sort of jumped into it (posting here) without much thought. Go ahead and post a comment yourself on that discussion paper. I am through with my comments over there, that is for sure. Most people will admit their own error, so it is just, let us wait and see.

    Kind of like WTFUWT? never wrong, never right, up is down and left is right.

  232. Everett F Sargent says:

    Your dice question? 😦

    I did not move the goal posts you did. Please explain your dice question in the current context of R=LI. Oh and stop changing the topic of discussion. Your dice question is not comparable with the author’s current standard definition of R=LI (per the ISO guidance on risk) …
    ISO 31000
    https://en.wikipedia.org/wiki/ISO_31000

    The definitions are right there, you are free to not follow the ISO risk standards though. Me I like standards that hold scientists accountable.

  233. Everett F Sargent says:

    [Enough of that. -W]

  234. Willard says:

    > The definitions are right there

    Here’s the “definitions” section:

    One of the key paradigm shifts proposed in ISO 31000 is a controversial change in how risk is conceptualised and defined. Under both ISO 31000:2009 and ISO Guide 73, the definition of “risk” is no longer “chance or probability of loss”, but “effect of uncertainty on objectives” … thus causing the word “risk” to refer to positive consequences of uncertainty, as well as negative ones.

    A similar definition was adopted in ISO 9001:2015 (Quality Management System Standard), in which risk is defined as, “effect of uncertainty.” Additionally, a new risk related requirement, “risk-based thinking” was introduced there.

    Likewise, a broad new definition for stakeholder was established in ISO 31000, “Person or persons that can affect, be affected by, or perceive themselves to be affected by a decision or activity.” It is the verbatim definition given for the term “interested party” as defined in ISO 9001:2015.

    https://en.wikipedia.org/wiki/ISO_31000

    Some definitions seem to be missing.

    ***

    > Me I like standards that hold scientists accountable.

    Me I prefer when standards are not an excuse for more ClimateBall.

  235. That’s the nice thing about standards [or definitions], there are always so many to choose from. (Andrew Tanenbaum) ;o)

  236. Can we try to make this discussion pleasant? I think I see what EFS is saying (it’s meant to be probability, not probability density) but I also think he’s missing what Dikran is saying. I also think the paper was trying to illustrate this, rather than presenting an exact calculation.

  237. There could be situations involving a piecewise constant impact function where you could get the same result with the CDF. I would have mentioned that earlier, but it would likely distract from the key point, which is that what the author of the paper did is entirely standard. If you have a continuous outcome, you can’t talk about the probability of a particular outcome, which is why probability density functions are used instead (and they don’t go from 0 to 1). However, if you have a continuous outcome and a continuous impact function, then that is the best way of dealing with it. Discretising the impact function and the outcome so you can use standard probabilities introduces approximation error.

  238. “where you could get the same result with the CDF.” – but not by just multiplying with the impact function – which is an important point. My dice example shows why multiplying the CDF and the impact doesn’t make any sense.

  239. Everett F Sargent says:

    This is really kind of very funny …
    “So what is the risk associated with rolling a four?

    Using the PDF it is R(4) = P(4)*I(4) = 8*1/6 = 4/3

    Using the CDF it is R(4) = C(4)*I(4) = (1/6 + 1/6 + 1/6 + 1/6)*I(4) = 8*4/6 = 16/3”

    Actually, after many rolls the 1,2,4,8,16,32 impacts will fall through in proportion to those values for a uniform distribution, I=I. In your die example, a CDF makes absolutely no sense whatsoever for a single roll of the die. In your example, a six would have a CDF of one; should I always expect to get a six, or a number between zero and seven?

    You can start to do a CDF for multiple roll of the die

    I need to see a continuous PDF and a continuous Impact example to do R=LI properly (discrete will also work as long as it has a beginning and an end and say N>25).

  240. Everett F Sargent says:

    “Discretising the impact function and the outcome so you can use standard probabilities introduces approximation error.”

    I have never encountered that problem. Perhaps because I have always coded in double precision, or over the past few years quad precision (assuming it isn’t time critical).

  241. Everett F Sargent says:

    ATTP,

    “I also think he’s missing what Dikran is saying”

    That is true, but then again I don’t know what DK is saying. Perhaps you could translate?

  242. Everett F Sargent says:

    “You can start to do a CDF for multiple roll of the die”

    Is wrong. It will still produce a uniform distribution for a single die (I guess I was thinking dice as in more than one).

  243. EFS “In your die example, a CDF makes absolutely no sense whatsoever for a single roll of the die.”

    Yes that is the point.

    “You can start to do a CDF for multiple roll of the die”

    with respect to climate risks we only have “one roll of the die”; we can only run the “climate experiment” once.

    “I need to see a continuous PDF and a continuous Impact example to do R=LI properly (discrete will also work as long as it has a beginning and an end and say N>25).”

    Fine, do the same example with a d100 (a hundred-sided die); the outcome will be the same: using the CDF won’t make sense in that case either.

    The PDF is essentially the limiting case as the discretisation of the outcome goes to zero width bins, so the CDF isn’t any better there.

  244. Everett F Sargent says:

    “Likelihood does not have the same meaning as probability in statistics. Now there are at least two possibilities: the authors of the paper have got it wrong, or you are failing to understand they are using the statistical meaning of likelihood. It is the latter.”

    No kidding.

    Their PDF has a y-axis with units of 1/C; the author’s Impacts could be economic ($) or deaths or human years lost. The product (R=LI) is then (dollars or deaths or years lost)/C. To make any sense of that specific risk graph one must integrate under the curve to remove the 1/C. So clearly one can do that if one so chooses. But it does not communicate in the easiest form, directly showing only dollars or deaths or years lost (which is what R=LI does if done properly).

    Other than that, they still have the issues of weird asymptotic CDF behaviors when coupled with exponential-like growth.

  245. Everett F Sargent says:

    “Fine, do the same example with a d100 (a hundred-sided die); the outcome will be the same: using the CDF won’t make sense in that case either.”

    I already did that thought experiment; no go for a single roll and making up a fictitious CDF.

    We are clearly talking past each other. I can only cite the Sutton paper and their dimensional error (plus the weird asymptotic risk behaviors of several candidate distributions).

  246. Everett F Sargent says:

    Probability density function
    https://en.wikipedia.org/wiki/Probability_density_function

    “The terms “probability distribution function”[2] and “probability function”[3] have also sometimes been used to denote the probability density function. However, this use is not standard among probabilists and statisticians. In other sources, “probability distribution function” may be used when the probability distribution is defined as a function over general sets of values, or it may refer to the cumulative distribution function, or it may be a probability mass function (PMF) rather than the density. “Density function” itself is also used for the probability mass function, leading to further confusion.[4] In general though, the PMF is used in the context of discrete random variables (random variables that take values on a discrete set), while PDF is used in the context of continuous random variables.”

    Note: “or it may refer to the cumulative distribution function” which should comport with Sutton’s definition if he had used the correct likelihood function (1-CDF (no units) not PDF (units of 1/C)).

    DK, can you help me find CDF here (thanks in advance) …
    Probability mass function
    https://en.wikipedia.org/wiki/Probability_mass_function

    I’d like to see a peer-reviewed paper on a single-die, single-throw CDF. I seriously doubt that one exists, although this is your chance to write that paper.

  247. Everett F Sargent says:

    OK, here it is …
    Discrete uniform distribution
    https://en.wikipedia.org/wiki/Discrete_uniform_distribution
    (They do mention dice in passing)
    There is a CDF, so using likelihood as (1-CDF)*Impact=Risk makes all the sense in the world to me.

    So I am still looking for some actual documentation on CDFs for a single roll of a multi-sided object.

  248. Everett F Sargent says:

    I’ve worked through the N=6 dice pdf/cdf and N=100 dice pdf/cdf and R=LI works like a champ. 🙂
    I can show my work if anyone is interested (guessing not).

  249. Everett F Sargent says:

    This is the correct answer to DK’s original question (you can pretty much do it all in your head).
    Die,pdf,cdf,(1-cdf),Impact,R=LI
    0,0,0,1,0,0
    1,0.166666667,0.166666667,0.833333333,1,0.833333333
    2,0.166666667,0.333333333,0.666666667,2,1.333333333
    3,0.166666667,0.5,0.5,4,2
    4,0.166666667,0.666666667,0.333333333,8,2.666666667
    5,0.166666667,0.833333333,0.166666667,16,2.666666667
    6,0.166666667,1,0,32,0

    Likelihood = (1-cdf)
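
    For anyone who wants to check it, this sketch reproduces the table above (plain Python; it is just the arithmetic already shown):

        impact = [0, 1, 2, 4, 8, 16, 32]           # row 0 plus the six faces
        for n in range(7):
            pdf = 0.0 if n == 0 else 1/6
            cdf = n / 6
            likelihood = 1.0 - cdf                 # exceedance: P(roll > n)
            risk = likelihood * impact[n]
            print(f"{n}, {pdf:.4f}, {cdf:.4f}, {likelihood:.4f}, {impact[n]}, {risk:.4f}")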

  250. Everett F Sargent says:

    Longhand for the number four: (1) pick four unique numbers from 1,2,3,4,5,6; (2) roll the die; (3) you have a one-third chance of being incorrect (and a 2/3 chance of being correct on average); (4) the one-third chance of being wrong is your likelihood of being wrong; (5) the impact factor of being wrong is eight; (6) R = LI = 8/3 = 2.666666667.

    You don’t get rewarded for being right but you do get penalized for being wrong. Probability of exceedance (wrong guess) times Impact = risk …
    https://en.wikipedia.org/wiki/Frequency_of_exceedance#Time_and_probability_of_exceedance

  251. Everett said:
    “Probability of exceedance.”

    I think that’s the key to understanding how to apply the CDF.

  252. Everett F Sargent says:

    I think that I have answered both of DK’s questions correctly, one 6-sided die (answers shown above, btw DK had the wrong answer) and the 100-sided die (answers available to anyone, all you need to do is ask, I used a quadratic impact factor).

    So I stand behind my analysis 110%, R=LI is correct if used properly.

    Sutton did not use R=LI properly (OK wait, DK said they did, so they must have, because). But then you have to integrate under their curve to figure out the corresponding risks; the correct use of R=LI gives you the actual numerical risk values w/o integration. So in this case I guess climate scientists are poor communicators; never heard of KISS.

    Sutton was also wrong about ECS.GE.6. Yes, you can make up stuff and create some wicked CliFi, but using the current three IPCC AR5 WG1 ECS conditionals, their own conditionals mind you, it is simply not numerically possible to have an ECS curve with a median at, or above, 6C. Not numerically possible if one stays within those three IPCC conditionals.

    As to risk peaking above 6C, you can do that if you pick just the right distribution (see my correct risk figure as shown in note SC10). The problem there is that the risk curve must collapse to zero in the practical sense (so say 15 > ECS > 100, and risk equal to infinity, is utter nonsense). Right now, numerically speaking, those risk curves are still all over the place with one, two or even three roots. That will be my next task for myself: figuring out those distributions coupled to exponential impacts (also lower order polynomials, which are a subset of the power series) …
    https://en.wikipedia.org/wiki/Exponential_function#Formal_definition

    Thanks for all the input, it was actually very helpful. 🙂

  253. Everett F Sargent says:

    “15 > ECS > 100” above should be “15 < ECS < 100" (sorry, my bad)

  254. EFS “I think that I have answered both of DK’s questions correctly, one 6-sided die (answers shown above, btw DK had the wrong answer).

    No, you didn’t answer the question at all, the question was:

    So the question is, why does the risk associated with rolling a four depend on the probabilities of rolling a 1, rolling a 2 and rolling a 3 as well as the risk of rolling a 4?

    If you use 1-cdf instead of just cdf, that means the risk associated with rolling a 2 depends on the probability of rolling a three, a four, a five and a six, as well as the probability of rolling a 2, so that is no better. Sure, you get a numeric answer using your procedure, but that doesn’t mean it is the right answer. I’ve pointed out why it doesn’t make sense, and you have just ignored it.

    You say that my risk calculation was wrong, but you cannot point out why. I have pointed out why your method is wrong.

    I think the reason you are getting confused about the dimensional analysis is that in the continuous case, the risk is actually something like a “risk density”. As I keep saying, if you have a continuous outcome, then the probability of any particular outcome is essentially zero (e.g. what is the probability that the temperature change is exactly pi?), which means that the risk associated with a particular outcome is also zero. Thus you multiply the pdf and the impact function to get the “risk density function” and you then integrate that over some interval to get the risk associated with all outcomes in that interval (a temperature range). The advantage of that (as I pointed out already) over discretising the outcome space is that there is no approximation error due to the quantisation.
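
    Here is a minimal sketch of that construction (Python/scipy); the gamma pdf and the exponential impact function are purely illustrative assumptions, not the paper’s numbers:

        import numpy as np
        from scipy.stats import gamma
        from scipy.integrate import quad

        ecs = gamma(a=4.0, scale=0.8)             # pdf carries units of 1/C
        impact = lambda x: np.expm1(0.5 * x)      # impact in arbitrary units

        risk_density = lambda x: ecs.pdf(x) * impact(x)   # (impact units)/C

        # Integrating over a temperature interval removes the 1/C,
        # leaving a risk in impact units:
        print(quad(risk_density, 4.0, 6.0)[0])      # risk from outcomes in 4-6 C
        print(quad(risk_density, 0.0, np.inf)[0])   # total expected impact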

    “Sutton did not use R=LI properly (OK wait, DK said they did, so they must have, because)”

    O.K., well, if you are going to start with the sniping again, even after ATTP’s request for a pleasant discussion, I think it is better to leave it there. I have better things to do.

  255. Everett F Sargent says:

    Sorry DK, but you are now the only one here who is dead wrong. I even feel sad for you. 😦

  256. Everett F Sargent says:

    DK,

    These two comments are my answer to you, they are 100% correct …
    https://andthentheresphysics.wordpress.com/2018/06/11/low-probability-high-impact-outcomes/#comment-126699
    https://andthentheresphysics.wordpress.com/2018/06/11/low-probability-high-impact-outcomes/#comment-126700

    That 2nd comment is a longhand walk through. Do I need to do all SEVEN for you in longhand?

    Show the error in my math in those two comments, you made the original math error, not I.

    Here’s the deal, you don’t get to demand anything from anyone here. I’m willing to have constructive comments and constructive feedback. So far, I have found your comments destructive, without merit and lacking in rigor. 😦

    I understand PDF’s and CDF’s and Risk = Likelihood times Impact.

    Anyone else here who is watching this dog and pony show, I can show you all my work, all my math, all my spreadsheets and all my graphs. I am absolutely certain of my work.

  257. Everett F Sargent says:

    “You say that my risk calculation was wrong, but you cannot point out why. I have pointed out why your method is wrong.”

    Where did you do that? I must have missed it. Probably because you have not presented any equations that would falsify my math. 😦

    I did a walk-through for the number 4, not literally picking the number 4, but picking 4 unique numbers before the die was thrown; that is part of the CDF, which goes 0,1,2,3,4,5,6 (0 means you don’t play and 6 is a sure winner, as you pick all possible numbers and there are only 6 on the die, so 0 and 6 carry zero risk). I even used your 0,1,2,4,8,16,32 cascade. I did exactly as you asked; the correct answer is really very simple if you try to think about it for a bit. 😦

    Oh, and which part of R = LI don’t you get? Perhaps I could do another walk-through for you. You are starting to embarrass yourself now. I am actually trying to help you out. Seriously.

  258. Just to show the trolling: “I did a walk-through for the number 4, not literally picking the number 4, but picking 4 unique numbers before the die was thrown; that is part of the CDF”. Picking 4 unique numbers obviously has nothing whatsoever to do with the original problem (the risk associated with the true value of ECS, of which there is only one). Thus it clearly is not doing “exactly as you asked”; it is evading the question by not-so-subtly substituting your own question, one that bears no relation to reality, so that you can continue arguing for the CDF instead of the PDF.

  259. I think this discussion should stop. I don’t think it’s going anywhere constructive.

  260. My own criticism of the paper would be that the pdf should be over the rise in temperature rather than over ECS, so that policy choices are modelled in the pdf rather than in the impact function. We might rationally opt for a “minimum risk” policy choice, i.e. one that minimises the total risk (the area under the risk curve). Obviously the risk depends on policy (e.g. reductions in emissions -v- business as usual), and as the pdf of ECS does not depend in any way on emissions, we would have to have a different impact function for each emissions scenario/policy.

    A far better approach would be to have a pdf over the temperature rise (which depends on both emissions and the pdf of ECS), and a single impact function that depends only on the temperature rise.
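    A minimal sketch of what that might look like (assuming scipy; the two warming pdfs and the polynomial impact are invented purely for illustration):

        from scipy import stats
        from scipy.integrate import quad

        # Hypothetical pdfs over the temperature rise under two policies.
        warming_pdfs = {
            "mitigation":        stats.norm(loc=2.0, scale=0.6).pdf,
            "business_as_usual": stats.norm(loc=4.5, scale=1.2).pdf,
        }

        # One impact function, depending only on the temperature rise T.
        impact = lambda T: max(T, 0.0) ** 2

        # Total risk for each policy: the integral of pdf(T) * impact(T),
        # i.e. the area under that policy's risk curve.
        for policy, pdf in warming_pdfs.items():
            total, _ = quad(lambda T: pdf(T) * impact(T), 0.0, 12.0)
            print(policy, round(total, 2))

        # A "minimum risk" policy is then the one with the smallest total.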

    The paper makes a good point though, which is that we need to properly consider the interaction between likelihood and impacts. Statistical decision theory provides a reasonable means of discussing this rationally (although not without problems when there can be infinite impacts with non-zero probability, which isn’t the case here).

  261. Okay, not that I want to get too involved in this discussion, but I agree that an issue is that the paper presents it as depending on ECS, rather than on actual warming. To address EFS’s point: I see no reason why one couldn’t use the PDF to estimate the probability of warming by between x and x + dx, and then multiply that by the impact of warming by between x and x + dx to get the risk factor for that level of warming.

  262. Everett F Sargent says:

    “I think the reason you are getting confused about the dimensional analysis is that in the continuous case, the risk is actually something like a “risk density”.”

    I presented the original dimensionless number, call it the Sargent Number, for a stiff hinge between two floating bodies at the 2007 Army Science Conference. You can find the paper online. I majored in hydraulics and minored in structures. I beat three structural engineers who were on our team on that one. I was in the Coastal and Hydraulics Laboratory at that time. I was the math nerd; for anything to do with floating bodies in an ocean environment, they all came running to yours truly. Frank: is it EI or EA? I’d go, it depends; if you look at it like this then it is EA, and if you look at it like this it is EI (E = modulus or specific modulus, then there is specific strength, I is inertia and A is area). Then I told them the answer was a 4th-order differential equation, called the Euler–Bernoulli beam equation (I figured it out myself and only later found it in an ME textbook).

    I built the very 1st physical model in the world that satisfied the three similitude laws AND the correctly scaled design breaking strength. No one in the world, at that time, had included properly scaled strength in any physical scale model. Then there was RIBS, another Army project; after I showed up I told the lead scientist (an SES’er), you’ve got a bunch of idiots in your laboratory who don’t have an effin’ clue about building proper scale models for water wave testing (that one only had the standard three similitude laws: geometric, kinematic and dynamic similitude). I also have two breakwaters in the POLB (Pier J), plus harbor resonance and ship motion work; I learned it all myself and I did it all (with a lot of help in building stuff to test).

    So if there is anyone who really understands dimensional analysis, then that person would be yours truly (WMC got s-o-o-o-o-o-o-o- upset over that one a few years ago).

    So what was it you were saying about dimensional analysis? Because I got orders of magnitude on you.

  263. ATTP I am not continuing the discussion further, but I agree with what you have just written; if you did that and took the limit as dx->0 you would get the same answer as if you just multiplied the pdf and the impact function and integrated from x to x+dx. That was basically what I was saying when talking about quantising a continuous outcome, but it’s only as dx->0 that you get the right answer, rather than an approximation.

  264. Dikran,
    Yes, I agree that we would want to take the limit dx -> 0.

  265. Of course you can use the CDF to work out the integral between x and x+dx, to get a quantised approximation, but it isn’t as simple as multiplying (1-cdf)*impact.
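    To illustrate (with the same kind of invented pdf and impact as before), the quantised sum of CDF differences converges on the integral of the risk density as dx shrinks:

        import numpy as np
        from scipy import stats
        from scipy.integrate import quad

        dist = stats.lognorm(s=0.4, scale=3.0)   # illustrative warming pdf
        impact = lambda T: T ** 2                # illustrative impact function

        # Exact risk: integrate the risk density pdf(T) * impact(T).
        exact, _ = quad(lambda T: dist.pdf(T) * impact(T), 0.0, 30.0)

        # Quantised risk: P(x < T < x+dx) * impact(x), using a CDF
        # *difference* over each bin (not 1-cdf at a point), summed up.
        for dx in (1.0, 0.1, 0.01):
            x = np.arange(0.0, 30.0, dx)
            approx = np.sum((dist.cdf(x + dx) - dist.cdf(x)) * impact(x))
            print(dx, approx)                    # approaches `exact` as dx -> 0
        print("exact:", exact)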

  266. Everett F Sargent says:

    DK and ATTP,

    You two should read my note (SC6); I suggested GMST/OHC/GMSL (but I forgot and left out total emissions (GtC) and CO2eq).

    ATTP, do you want a direct plot of risk ($, deaths, years lost on the y-axis), or do you want a plot of that divided by temperature, because then you have to integrate to compute something that you can quite easily put there directly? It is called KISS for a reason and makes for much easier communication. (Aren’t you the one always going on about climate scientists communicating better? Case in point.) :/

  267. EFS wrote “ATTP, do you want a direct plot of risk ($, deaths, years lost on the y-axis), or do you want a plot of that divided by temperature, because then you have to integrate to compute something that you can quite easily put there directly?”

    It can’t “quite easily be put there directly” – as I have pointed out the probability of a continuous random variable having any particular value is essentially zero, which means the risk associated with any particular value is also zero (as it is a scalar times zero). That is why we have to have a density function and integrate.

  268. ATTP feel free to delete that last one. I thought it might be worth having one last try, but you did say the discussion should stop, and I agree that would be wise.

  269. Everett F Sargent says:

    DK, have you ever taken a formal course in numerical methods?

    I 1st used a dt=0.001C and went to an ECS of 100 (trapezoidal rule, because I’m old now, though I did RK4 and FD and FEA/FEM back in the late 70s). I had to back off that one though, as that spreadsheet was over 100MB. dt=0.01C is the spreadsheet I submitted with my comment.

    BASIC, self-taught in fall 1972, and Fortran in fall 1975. Took 8 graduate-level courses as an undergrad. In every course I fully used both my numerical and programming skills.

  270. Everett F Sargent says:

    Oh wait, that was early on: I did all the CDFs as integrals, but now all are closed form (no integration). Sorry, my bad.

  271. Everett F Sargent says:

    I’d say something, but nah. :/

    Oh wait, when all of my CDFs were numerical integrals of their respective PDFs, I replaced them one at a time with their closed forms. Side-by-side comparison each and every time; my numerical integrals were spot on to double digits. I do this stuff in my sleep.

  272. Okay, I really do think it’s time to bring this “discussion” to a close.

  273. Since this discussion has been about impacts and risk, I thought I would highlight an old post of mine that discusses some of the IPCC AR5 impacts, risks and adaptation potential. There are also a couple of other figures that I hadn’t seen before that consider this globally, rather than regionally (H/T EFS).

  274. Hyperactive Hydrologist says:

    The better half’s latest paper shows intense rainfall is increasing at a faster rate than expected. The models don’t show this rate of increase in rainfall – highlighting another risk that isn’t accounted for.

    Detection of continental-scale intensification of hourly rainfall extremes

    Temperature scaling studies suggest that hourly rainfall magnitudes might increase beyond thermodynamic expectations with global warming [1–3]; that is, above the Clausius–Clapeyron (CC) rate of ~6.5% °C⁻¹. However, there is limited evidence of such increases in long-term observations. Here, we calculate continental-average changes in the magnitude and frequency of extreme hourly and daily rainfall observations from Australia over the years 1990–2013 and 1966–1989. Observed changes are compared with the uncertainty from natural variability and expected changes from CC scaling as a result of global mean surface temperature change. We show that increases in daily rainfall extremes are consistent with CC scaling, but are within the range of natural variability. In contrast, changes in the magnitude of hourly rainfall extremes are close to or exceed double the expected CC scaling, and are above the range of natural variability, exceeding CC × 3 in the tropical region (north of 23° S). These continental-scale changes in extreme rainfall are not explained by changes in the El Niño–Southern Oscillation or changes in the seasonality of extremes. Our results indicate that CC scaling on temperature provides a severe underestimate of observed changes in hourly rainfall extremes in Australia, with implications for assessing the impacts of extreme rainfall.
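    As a back-of-the-envelope check (a trivial sketch; the rates are taken from the abstract above):

        # The Clausius-Clapeyron rate is ~6.5% per degree C; the paper reports
        # hourly extremes scaling at roughly 2x, and up to 3x, that rate.
        cc_rate = 0.065
        for multiple in (1, 2, 3):
            print(f"{multiple}x CC: ~{multiple * cc_rate:.1%} per degree C")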
