Disasters and Climate Change

Roger Pielke Jr emailed me an advance copy of the 2nd edition of his book on Disasters and Climate Change. Roger’s email ended with “I welcome your reactions, comments, critique.” Past experience makes me slightly dubious, but I will take Roger at his word.

I’ve read the book, twice in fact, and am finding it quite hard to know what to say. Even though there are a number of things that I disagree with, I think it mostly presents information that is defensible. However, what I think many will conclude from reading this book is not really consistent with our best understanding of this topic.

Since I’m still recovering from having organised a conference that ran all of last week, I’m going to try and keep this short and just make some general comments. There are some specific issues that I may discuss in a later post.

The book discusses how unpleasant and difficult the public climate change debate can be. I think it definitely can be, but I also think it’s worth reflecting on how one’s style of engagement might have influenced how one’s views were received. There is plenty of discussion on whether or not disasters [have] become costlier because of human-caused climate change. The answer is no, the data don’t support claims that the rising costs of climate disasters are due in any part to a human influence on climate. There is even an argument that the lack of a detectable signal should be taken as the signal not existing (I don’t agree with this, but will leave this for another post).

There is also a discussion of detection and attribution, which I may return to in another post. The book concludes with a discussion about policy and highlights the Kaya identity (emissions are basically a function of GDP, population, how we get our energy, and how we use our energy). It also highlights an iron law: GDP growth is essentially sacrosanct; any climate policy that will significantly impact GDP growth will never be accepted. It also discusses how difficult it’s going to be to reduce emissions sufficiently. Interestingly, it seems to mostly argue against a carbon tax (one that would have any significant effect, at least).
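For reference, the Kaya identity (this is just the standard decomposition, not a formula quoted from the book) writes CO2 emissions F as the product of exactly those four factors, with P the population, G the GDP and E the primary energy consumption:

```latex
F = P \times \frac{G}{P} \times \frac{E}{G} \times \frac{F}{E}
```

The four levers are therefore population, GDP per capita, the energy intensity of the economy (E/G) and the carbon intensity of energy (F/E); roughly speaking, the iron law then amounts to saying that climate policy can only really work on the last two.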

As I said at the beginning, there are many things I disagree with, but I think a lot of what is presented is probably broadly correct, or at least defensible. We may not yet have demonstrated that climate change has caused disasters to become more costly, it may indeed be difficult to develop effective policy, and getting emissions to reduce sufficiently is going to be very challenging. My biggest issue with the book is that, despite it containing all the necessary caveats, I think it will be used by those who oppose climate policy to argue that there is no evidence that anthropogenically-driven climate change is having any impact on us and that, if it is, doing anything ambitious about this will simply not work. If that is the message that was intended, then it’s worked.

If anything, it’s hard to really interpret the intention in any other way. The final chapter seems to explicitly argue against anything too ambitious. The problem, in my view, is that there are indications that unless we get emissions to reduce soon, the resulting climate change could be severely disruptive. I think the book mostly ignores this possibility and seems to present an argument for a policy pathway that we may well be able to achieve, but that may fail to effectively address anthropogenically-driven climate change. I don’t see this as particularly helpful. Others may, of course, disagree.

Links:

Gavin Cawley has a couple of Twitter threads about the book.

110 Responses to Disasters and Climate Change

  1. Just before my regulars jump in, let me clarify something. I suspect it may be true that if you do formal detection and attribution, we may not yet have demonstrated that anthropogenically-driven climate change has caused disasters to become more costly. However, I think the book is wrong to argue against single event attribution. Anthropogenically-driven climate change is clearly changing the environment in which these extreme events are occurring and it is clearly possible to study, in some cases, how it has changed the underlying conditions and, hence, how it has potentially impacted an event. I may expand on this a little in a later post.

  2. jamesannan says:

    The iron law is interesting in the context of Brexit. Not climate related, but it’s clear that a large proportion of the UK electorate is quite happy to see a GDP hit if it means fewer nasty foreigners. “It wouldn’t be the end of the world”, from the Prime Minister no less.

  3. Joshua says:

    Anders –

    You say:

    It also highlights an iron law: GDP growth is essentially sacrosanct; any climate policy that will significantly impact GDP growth will never be accepted. It also discusses how difficult it’s going to be to reduce emissions sufficiently. Interestingly, it seems to mostly argue against a carbon tax (one that would have any significant effect, at least).

    Francis Fukuyama has some thoughts:

    The world is littered with optimal policies that don’t have a snowball’s chance in hell of being adopted. Take for example a carbon tax, which a wide range of economists and policy analysts will tell you is the most efficient way to abate carbon emissions, reduce fossil fuel dependence, and achieve a host of other desired objectives. A carbon tax has been a nonstarter for years due to the protestations of a range of interest groups, from oil and chemical companies to truckers and cabbies and ordinary drivers who do not want to pay more for the gas they use to commute to work, or as inputs to their industrial processes. Implementing a carbon tax would require a complex strategy bringing together a coalition of groups that are willing to support it, figuring out how to neutralize the die-hard opponents, and convincing those on the fence that the policy would be a good, or at least a tolerable, thing. How to organize such a coalition, how to communicate a winning message, and how to manage the politics on a state and federal level would all be part of a necessary implementation strategy.

    https://www.the-american-interest.com/2018/08/01/whats-wrong-with-public-policy-education/

    One problem is, IMO, how people define “significantly impact GDP growth,” and whether they account for the full range of uncertainties when doing so. IMO, there is enough uncertainty that people can basically draw any conclusions that they want, in that regard.

    Another is whether people are willing to go along, rather slavishly IMO, with how important or meaningful GDP growth is taken to be as a measure. In reality, IMO, GDP growth is actually not a factor that figures directly into how the average Jill and Joe go about making decisions in their lives, including voting decisions. Neither is GDP something that factors directly into how the more powerful stakeholders among us go about decision-making, although it certainly does correlate to some extent with the economic status of those powerful stakeholders.

  4. James,
    I agree. Seems that it can indeed be violated. Of course, Roger may argue that people believed that we could leave the EU without negatively impacting GDP growth, but that now seems clearly to not be true.

  5. Joshua says:

    james –

    …but it’s clear that a large proportion of the UK electorate is quite happy to see a GDP hit if it means fewer nasty foreigners.

    Agreed – the point being that GDP growth, or lack thereof, is an indirect effect of decisions that are made on the basis of other, more proximal factors (see my comment above).

    I think that this is a major problem with Roger’s “Iron Law” line of thought. I have seen him defend GDP as a metric to use when thinking about these issues…basically offering an “it’s better than anything else” line of thinking. Personally, I consider that to be an important shortcoming in his perspective.

  6. Joshua says:

    Anders –

    Roger may argue that people believed that we could leave the EU without negatively impacting GDP growth,

    I think that would be a hard argument to support. IMO, a much easier argument to support is that GDP growth does not factor very heavily into how people go about decision-making in their voting behaviors – in particular as compared to identity-associated factors.

    Of course, I could be wrong…but …

    People don’t vote for what they want. They vote for who they are

    https://www.washingtonpost.com/outlook/people-dont-vote-for-want-they-want-they-vote-for-who-they-are/2018/08/30/fb5b7e44-abd7-11e8-8a0c-70b618c98d3c_story.html?utm_term=.8a87550f8542

    IMO, views on GDP growth, rather as Kahan argues w/r/t views on climate change, tell you about who someone is, not how they analyze economic data.

  7. Joshua,
    That’s a fair point. I have heard arguments that some people feel that they have benefited so little from economic growth that they don’t really care if something like Brexit damages the economy. They feel that they would be no worse off than they are now.

  8. Joshua says:

    Anders –

    They feel that they would be no worse off than they are now.

    Maybe, but I think a slightly different angle might apply…

    IOW, I’ve seen some fairly convincing evidence that the Brexit vote doesn’t track very well with economic or employment status per se, so much as with other demographic factors. That is, the areas that were most strongly “leave” supportive were, in fact, not those that were worst off economically, or those whose employment was most affected by immigration – but those where economic status was most stagnated relative to other groups whose status improved.

    There are a lot of parallel arguments being presented by social scientists on the profile of Trump supporters vs. non-Trump supporters … for example:

  9. Joshua,
    Interesting, I hadn’t seen that argument. So, it’s possibly those who feel that they have not benefited sufficiently, rather than those who feel completely disenfranchised?

  10. Joshua says:

    A big factor may be the sense that many people have that they won’t be able to provide a better life for their children than their parents provided for them. For, say, factory workers, that may be something that is largely independent of GDP growth. GDP can grow very quickly even as the manufacturing sector declines.

    One test of how much voting behavior is a direct function of macro-level measures like GDP growth, particularly when, as with GDP, they capture only a limited slice of the economy, may be the upcoming mid-term elections in the States. It’s not clear that a strong economy on measures such as GDP has translated to a lessening of economic discontent for many Trump voters. Polling shows him dropping in support most in those areas where he got votes from voters who previously voted for Obama.

    Of course, there are many confounds, such as antipathy towards Clinton and tribalistic loyalty for Trump among those who voted for him… but if Demz do perform better relative to 2016, it could mean that GDP growth as a broad measure does not translate very directly to voting behavior. I would certainly expect that wage growth would be much more explanatory… although GDP growth and wage growth would likely have some degree of positive association.

    But this gets back to, IMO, RPJr.’s rather single-minded and problematic focus on GDP growth as explanatory of large-scale social behaviors. It’s also why I think his rather singular focus on measuring the impact of severe weather as a function of GDP is likewise problematic. Looking at the “costs” of severe weather merely as a function of GDP would necessarily hide the specifics related to disparate impact on specific demographic or geographical communities.

  11. Joshua,
    I wrote this post a few years ago about a reanalysis I did of one of Roger’s papers. It was about the emergence timescale of Tropical Cyclone (TC) damage trends. The key result that Roger would highlight was that it would take maybe 200 years before a trend would emerge from the noise. I got the same result, in the sense that this is the timescale over which it would be almost certain to emerge. However, there was a 50% chance of it emerging before 2100 and a 15% chance of it emerging before 2045.

    However, the most interesting thing (I think) was that even if it took 200 years, you would see a difference between damages due to category 4 and 5 TCs and category 3 or lower TCs emerging much earlier. So, even if the overall damage remains close to the historical trend, there is clearly a difference between this damage being due to a mixture of category 3, 4 and 5, and it becoming predominantly 4s and 5s. The latter means that a smaller fraction of people are being impacted, but in a very substantial way.
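    To give a flavour of the kind of calculation involved (a minimal sketch of the general idea only; the trend, noise level, time window and significance threshold below are made-up illustrative numbers, not the values from Roger’s paper or from my reanalysis):

    ```python
    # Sketch of an "emergence timescale" Monte Carlo with made-up numbers.
    import numpy as np

    rng = np.random.default_rng(0)

    def emergence_year(start=2020, end=2300, trend=0.005, noise=0.2, z=2.0):
        """Return the first year in which the fitted trend exceeds z standard errors."""
        years = np.arange(start, end + 1)
        series = trend * (years - start) + rng.normal(0.0, noise, size=years.size)
        for n in range(10, years.size + 1):        # need a few points before testing
            x, y = years[:n], series[:n]
            slope, intercept = np.polyfit(x, y, 1)
            resid = y - (slope * x + intercept)
            se = np.sqrt(resid.var(ddof=2) / np.sum((x - x.mean()) ** 2))
            if slope / se > z:
                return years[n - 1]
        return None                                 # no emergence within the window

    runs = [emergence_year() for _ in range(500)]
    emerged = [r for r in runs if r is not None]
    print("median emergence year:", np.median(emerged))
    print("fraction emerging before 2100:",
          np.mean([(r is not None) and (r < 2100) for r in runs]))
    ```

    The broader point is that “emergence” is a distribution rather than a single date, so quoting only the year by which emergence is essentially certain hides the non-trivial probability of it happening much earlier.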

  12. Joshua says:

    FWIW, an alternative perspective:

    https://bloggingheads.tv/videos/52739

  13. Joshua says:

    Anders –

    The latter means that a smaller fraction of people are being impacted, but in a very substantial way.

    Sure. More people could die, and in particular more poor people could die, even as economic damage as a % of GDP doesn’t increase.

    I once raised pretty much that issue with him, without much productive coming out of it.

    Suppose we said that there is a given and equal increase of deaths and economic damage due to extreme weather in two neighboring countries. In one country, there is significant GDP growth such that economic damage as a function of GDP doesn’t increase. In the neighboring country, there is exactly the same increase in damages but there is no GDP growth. Are the damages more meaningful in the country with no GDP growth? Seems to me that using a ratio of damages to GDP is of some value, but that the limitations should be explicitly stated and put up front. I don’t recall seeing RPJr. doing that.
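    Putting made-up numbers on that thought experiment (purely illustrative):

    ```python
    # Two hypothetical neighbouring countries with identical absolute losses.
    damages_start, damages_end = 10.0, 20.0      # $bn of extreme-weather losses
    gdp_a_start, gdp_a_end = 500.0, 1000.0       # country A: GDP doubles
    gdp_b_start, gdp_b_end = 500.0, 500.0        # country B: no GDP growth

    print("A: losses/GDP", damages_start / gdp_a_start, "->", damages_end / gdp_a_end)
    print("B: losses/GDP", damages_start / gdp_b_start, "->", damages_end / gdp_b_end)
    # A: 0.02 -> 0.02 (normalised losses look flat); B: 0.02 -> 0.04 (they double),
    # even though people in both countries suffered the same absolute losses.
    ```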

  14. ecoquant says:

    @ATTP,

    Yeah, it basically comes down to how an economic individual, policymaker or investor assesses the price of an improbable event which, by many assessments, could cost a great deal, and, in comparison, whether or not the cost of preventative measures to protect against or forestall such events is less than that assessed price. I think, and that’s clear to anyone who has followed what I’ve written, that Pielke Jr’s and others’ assessments of that future cost are underestimates. But what I find interesting, perhaps even fascinating, is why people might think this.

    First, there’s the discount rate thing, but I judge, from reading around it, that this is overplayed. In order for it to work there needs to be a dominating trust in the power of continuing wealth creation, like either a trust in technological development or in extrapolation of how things have worked across centuries. I’m not sure investors’/policymakers’/individuals’ trust runs that deep. It could be that this just makes a good argument for principled inaction because, basically, they aren’t going to be around when the most damage occurs.

    Second, while the idea that the measured and reported history of quantitatively measurable events is more solid than future projections carries some common sense appeal, a careful look at it suggests it’s less grounded than one might think. Summarizing and assessing past climate responses and weather consequences, say, itself demands application of models. Moreover, if one buys that Earth’s climate is a stochastic system, as I do, the trajectory it travelled in response to a set of forcings is only one realization of a large set of possible realizations it might have taken, and hindsight doesn’t really tell a lot about the variance in the set of possible realizations. Indeed, hindsight is likely to underestimate that variance. So, I’d say, the uncertainty about the bullets dodged in the past is comparable to the uncertainty of what might happen in the future.

    Third, I’m disappointed, per Pielke Jr, that we really haven’t incorporated the lessons of the Taleb Black Swan insight. In particular, the key lessons there are that economic and natural systems are fundamentally non-linear and that there are certain forcings of them which result in irreversible structural changes. In the case of the climate system, there is not sufficient wealth in a century of Gross World Product to fix some of the changes, such as the take-up of heat by the oceans.

  16. If all people cared about were GDP, they would be reducing CO2 emissions.

  17. Greg Robie says:

    Doing not too much
    But talking ’bout such doing
    Usual business

    More of the same is
    Our good faith’s perfect practice
    Our demise insured

    Against such a loss
    No insurance can cover
    But who will believe

    Belied the belief
    That physics defines knowledge
    Other than action

    =)

    sNAILmALEnotHAIL …but pace’n myself

    https://m.youtube.com/channel/UCeDkezgoyyZAlN7nW1tlfeA

    life is for learning so all my failures must mean that I’m wicked smart

  18. Steven Mosher says:

    Hmm
    https://link.springer.com/article/10.1007/BF00138862

    I’m looking for the citation, but during one climate talk by a climate economist, he cited a standard number of about 1% of GDP as being a threshold of pain. That is, your policy change better not result in a decline of 1%, or you gots some troubles. Maybe Tol knows this work better.

    But leave the exact number aside. Clearly regardless of what economics says, regardless of what social sciences tell you, regardless of what policy experts tell you, you can always employ the tools of doubt against those sciences and experts and demand your green agenda.

  19. Marco says:

    “There is plenty of discussion on whether or not disasters [have] become costlier because of human-caused climate change.”

    Should they become costlier? And how do we calculate “costs”?

    If I take the number of flat tires and associated costs I’ve had over the last ten years as a proxy, the number of sharp objects on my road to work has decreased a lot in the past three years. Could be true (but allow me to doubt it, nonetheless, considering the reconstruction going on (*))! But the reduction may also be because I bought tires that were designed to reduce penetration by sharp objects. I would have expected a reduced cost regardless of whether the number of sharp objects has changed or not.

    (*) that reconstruction is, somewhat ironically considering the subject, to upgrade the sewage system to handle the recent observed increase in extreme rainfall events

  20. “The book concludes with a discussion about policy and highlights the Kaya identity (emissions are basically a function of GDP, population, how we get our energy, and how we use our energy). It also highlights an iron law: GDP growth is essentially sacrosanct; any climate policy that will significantly impact GDP growth will never be accepted.”

    The arguable point is that economic pressures caused by oil depletion and oil politics have always played a greater role in GDP than climate policy has. James Hamilton of UCSD and the EconBrowser blog has documented how all major recessions were associated with oil shocks.

    If it wasn’t for miraculous (but inevitably short term) advances such as LTO fracking and absurd practices such as mountain-top coal removal, who knows where the American economy would be right now?

    Since we live in an energy-based economy, GDP growth is only sacrosanct as long as cheap(er) energy is available. The fact is that contrarians such as Jr are more contrarian toward peak oil issues than they are to climate change issues, and find it easier to argue the uncertainty around correlating weather extremes to AGW than the absolute certainty of the depletion of finite, non-renewable crude oil.

  21. Dave_Geologist says:

    you can always employ the tools of doubt against those sciences and experts and demand your green agenda.

    The irony is strong in this one.

  22. dikranmarsupial says:

    Prof Pielke Jr sent a rather odd tweet, apparently questioning the lack of comment about the new edition of his book.

    Roger was kind enough to send me a copy of the book, and I thought I’d provide some comment (a task for which Twitter is not ideally suited), starting here:

    I’ve got as far as chapter 5 so far; a TL;DR summary will follow once I have finished the book. A (lightly edited) transcript is below:

    I thought as @RogerPielkeJr had been kind enough to send me the book, I ought to promote it up [my reading] queue, especially as it is so short. I’m afraid that I’m not too impressed by the first chapter, which is not so much “Climate’s Legitimacy Wars” as “Roger’s Legitimacy Wars”. The problem with personal accounts is that they are so much more open to (unintentional) bias than a broad, impersonal study. The discussion of the mystery graph on page 25 didn’t help (wish now I’d kept my 2007 WG1). The book claims the axes had been “jiggered to make them appear to increase in lock step”, which seems to imply a disingenuous intent. However the book doesn’t provide evidence that it is not a reasonable presentation of the data. I suspect MATLAB would have performed a similar scaling automatically. This does not give confidence that the book is presenting a balanced account. Similarly, a White House essay “An analysis of statements by Roger Pielke Jr” is described as being part of a “delegitimization effort”, but there is no evidence given that it is not simply what it purports to be (or that any of the analysis is factually incorrect). How can I distinguish that from an effort to delegitimise the White House science advisor (other than my usual Hanlon’s razor)? Anyway, I will continue…

    On to chapter 2. I really like the first two pages: if more people would explicitly and unambiguously set out their positions on the issues, the public “debate” on climate might be more productive. I should point out that these are just my observations as a reader, and it’s always possible that I have misunderstood, or that there are things I don’t know (it isn’t a topic that I have any special expertise in), so caveat lector. Twitter is hardly ideal for a proper Fisking anyway. I found the material on normalisation interesting, except I didn’t see a mention of possible observational biases in records of weather events, which would be an obvious problem. The section on proving a negative was rather missing the point for me: science can’t *prove* *anything*. I’m afraid likening Russell’s quote about arguments for the existence of God to the current climate debate went down rather badly with me. It really is the most egregious hyperbole, even if intended as humour. I know @RogerPielkeJr has had some unpleasant experiences, but that really is misrepresenting the debate in general. There is a false dilemma on page 34. The “yes” answer is O.K., but if there is insufficient evidence for a link between disasters and CC, that doesn’t mean there isn’t a link. This is a common misunderstanding of hypothesis tests (https://www.skepticalscience.com/statisticalsignificance.html) so that doesn’t justify the answer “no”. In reality there are three answers (if discretisation is necessary): “Yes”, “No” and “Insufficient evidence either way”. In reality, we should do as Hume suggests and apportion our belief according to the evidence. I think there may be something missing in the penultimate para. I make the BEST trend over the last 100 years to be 0.089 +/- 0.011 C per decade (using the SkS trend calculator). That pretty much does rule out the possibility it hasn’t warmed over the “last century or so”. While we cannot absolutely prove anything, there comes a point where we have to use a bit of common (statistical) sense. I’ll end with a question for @RogerPielkeJr : Would you agree that the “apparent hiatus” in GMSTs since around 1998 is entirely explainable by ENSO and solar/volcanic activity (http://iopscience.iop.org/article/10.1088/1748-9326/6/4/044022/meta)?

    [Turns out the answer is “yes”, but the quote from the book about the hiatus suggests it is fine to talk about the existence of a hiatus even though it is entirely explainable by these factors, while this chapter of the book seems critical of those who argue for a link between climate change and the cost of extreme weather events on the grounds that it is entirely explainable by other factors (such as increasing wealth). Thus this seems to me to be err… “somewhat inconsistent”.]

    Right onto chapter 3, which seems pretty reasonable to me … until page 44. Most of the material is about the IPCC’s detection and attribution framework. I think it is unreasonable to expect the public or scientific discourse to adhere narrowly to that framework as the IPCC reports have a very specific purpose, where definitions must be made very clear (and the reports fairly circumspect). I don’t see a real problem in taking a less formal approach, especially in scicomm, where we need to consider the background of the audience. Describing the use of “counterfactual model-based studies” as “abandoning the IPCC framework” is hyperbole, which suggests an agenda. Such studies are a *complementary* approach, and do allow us to explore what is consistent with our knowledge of the physics, even though we know we have too little data to expect anything from the formal detection and attribution framework. The “cynical” evaluation of this starting at the top of page 44 is not suggestive of an objective evaluation, and AFAICS there is no evidence supplied to support the criticism of the models (e.g. “malleable methods”), or sufficient detail to allow a rebuttal (which is obviously unfair). The middle para of p. 42 suggests that associating extreme events with GHG emissions is “just wrong”. That seems to be at odds with what we previously agreed about chapter 2, i.e. that it is acceptable to suggest such associations unless there is significant evidence against them. This reinforces my confusion about the apparent difference between what is in the book and @RogerPielkeJr’s tweets. It is a shame the “cynical” stuff about “political gambit[s]” rather spoiled what was a more promising chapter.

    Chapter 4. Most enjoyable chapter so far (unfortunately *much* better than listening to the cricket after Cook got out! [I’m pleased to report things improved later, at least in the cricket ;o)]). Basically a literature survey, results apparently consistent with the IPCC position (more than happy to take it on trust that it is). This is strictly within the formal IPCC detection and attribution framework though. The last paragraph lets it down a bit: “These conclusions are very strong”. I tend to disagree; it seems to me the case that we would not *expect* to be able to make a detection at this point (just as we wouldn’t expect to see a significant trend in GMSTs over a period as short as 15 years). This isn’t a strong conclusion, nothing ruled out, nothing ruled in, so we should keep an open mind. Physics tells us that we should expect an association, so it is fine to claim one on that basis, as long as you are clear about that (or at least willing to answer questions and/or clarify). Not having proven a negative is not at all a meaningless assertion (especially if it involves NHSTs, which are not generally symmetric in H0 and H1).

    [The point is that if theory tells you to expect a large effect size and you perform a study large enough to confidently expect to see it if it is there (i.e. high statistical power), then the lack of a statistically significant effect is interesting (and probably means something). On the other hand, if you expect a small effect size (as in this case, at this point) and perform a small study (with low statistical power) then you would pretty much expect not to see a statistically significant result even if the research hypothesis is true. In that case, the “negative” result tells you nothing you don’t already know. In the case of the climate-change -> cost-of-extreme-weather-events link, the lack of a statistically significant effect says very little about whether there actually is an effect, either way.]
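    [An illustrative toy calculation of the power point, with made-up effect sizes and sample sizes that have nothing to do with any specific disaster-loss dataset, is sketched below:

    ```python
    # Toy power calculation (made-up numbers): with a small effect and a small
    # sample, a non-significant result is the expected outcome even when the
    # effect is real, so it tells you very little.
    from scipy.stats import norm

    def power_one_sided(effect_size, n, alpha=0.05):
        """Approximate power of a one-sided z-test for a standardised effect size."""
        z_crit = norm.ppf(1 - alpha)
        return 1 - norm.cdf(z_crit - effect_size * n ** 0.5)

    print(power_one_sided(0.8, 50))   # large effect, modest sample: power ~ 1.00
    print(power_one_sided(0.1, 50))   # small effect, same sample:   power ~ 0.17
    ```

    With the small effect a “negative” result is roughly 83% likely even when the effect is real, which is the sense in which it adds almost no information.]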

    Chapter 5, best chapter so far, probably because, like it, I am “rather dry and academic” ;o) Again, an interesting survey of work, although I am not able to judge its specific accuracy, I am happy to take it on trust that it reflects the @IPCC_CH position, which is good enough for me on most topics that go beyond the very basics. I would suggest, however, that in a book that suggests that others have politically motivated interpretations of the science (or words to that effect), it would be best to acknowledge where the sources that agree with you have links to political think tanks. Not a big deal for me, what matters is whether the science is right, but it invites rhetorical criticism in a political debate (which I am more than happy to leave to others!). Just a suggestion for the 3rd edition. Nearly forgot, the bits I liked best were about the methods used to check the results (I like that kind of thing). Skepticism is good, but it needs to start with self-skepticism (of that kind).

    [to be continued…]

  23. Dave_Geologist says:

    Oil price shocks. You’re surely not suggesting those are all down to oil being a finite resource, Paul? No wars, no OPEC, no strikes, no nationalisation, no sub-prime mortgages and CDOs? Globally, not just in the USA? Even Hamilton (who seems to be a bit of a one-trick pony on this, see Hamilton 1985) attributes most of them to political actions, strikes or other human rather than natural resource factors. And channelling SM: “Now comes the question, did the modelling study use the latest best data?” A 2010 study, updated in 2011 and not submitted to (or at least not accepted by) a peer-reviewed journal is pretty thin gruel. Oil supply is a just-in-time business with each step in the chain holding only a few days’ inventory, and lead-times of years to bring new production onstream. Of course the threat of a human-induced supply interruption causes a price spike. That graph would look exactly the same if oil was an infinite resource. From Hamilton (1985): “While the magnitude and violence of recent oil price changes are unique in postwar experience, the phenomenon of political instability producing disruptions in petroleum supply is not.”

    Of course oil will run out eventually (or rather, it will become too expensive to burn as fuel and be reserved for chemical feedstocks etc, as was the case in the 19th Century). And before that happens there will be plenty of notice as the price rises. So it won’t be a shock like in Hamilton’s graph, but rather a drag on GDP growth as fuel gets more expensive. Or not, if we decarbonise and renewables become cheaper than the GDP-and-inflation-adjusted oil price. Hence the argument from some that taking climate action now will have a positive impact on long-term GDP growth, not a negative impact.

  24. Dave_Geologist says:

    I’ve sent dozens of copies of my new book on disasters & climate change to a range of scientists and journalists on the climate beat. What can be in the book that is so powerful that it has led to complete silence from people rarely silent?

    I’ve commented previously on the paradox that really bad scientific publications tend not to attract Comments or to be withdrawn. Only ones that meet a certain merit threshold attract Comments, and only ones guilty of ethical violations get withdrawn. When the work is obviously wrong (or Not Even Wrong) to a “person having ordinary skill in the art”, its fate is to be ignored, not corrected. Not worth the effort, and Editors don’t like to waste space. Maybe it’s different in political science.

  25. Thx for the Douglas Beal TED Talk link, Joshua. Beal’s summary of SEDA’s expansion of/relationship to GDP, as a social metric regarding CapitalismFail, is concise relative to other discussions I’ve run across.

    The dynamic modeling of the 50,000 data points from different perspectives is cool. My mind’s eye wants to add variable fourth and fifth dimensions to what is presented. These relate to both population growth and collapse, and our Anthropocene’s unfolding abrupt climate change affect constraints on the probable range and direction of the planet’s future population.

    The current propensity to discount future costs is, within the economically constrained thinking engendered within CapitalismFail, our socially trusted motivated reasoning. Systemically, it discounts our progeny, and, in doing so, discounts our aging selves whose comfort in old age is inexorably linked to the well being of subsequent generations.

    RPJr’s effort at intellectual brilliance seems to constitute a ‘theological’-like, if vain, diatribe attempting to justify a failed status quo: CapitalismFail … and its handmaiden (within the constraints defined by physics’ defining knowledge as action): AcademiaFail. Or, the substance of my emailed set of haiku that WordPress’ programming [unhelpfully] reformatted.

    In my mind’s eye such a fourth and fifth dimension would, in a forward rotating visitation dynamic, conflate all those 50,000 data points into a spaceship earth singularity: limited liability law enabled, and its oil era debt-slave dependent, anthropogenic credit created CapitalismFail’s vanishing point.

  26. Joshua says:

    Steven –

    –snip–
    The main result of this paper is that in countries and time periods with a high propensity of government collapse, growth is significantly lower than otherwise.
    –snip–

    Given the question of direction of causality, how do you see relevance between that abstract and RPJr.’s iron law?

    he cited a standard number of about 1% of GDP as being a threshold of pain.

    Relative, or absolute?

  27. Dikran,
    Thanks. I linked to some of those tweets at the end of the post. They highlight a number of the issues that I didn’t have time to cover in this post.

  28. Dave,
    Hamilton is not really a one-trick pony in that regard, since the economy is an energy-based economy. The 70’s & 80’s shocks were related to the realization that conventional crude oil production was peaking in the USA. This resulted in the first indications of global gamesmanship over oil supplies, as illustrated by events such as the OPEC oil embargo. We are not that naive about the reality of finite resources. Do you really want to argue this graph?

  29. ecoquant says:

    @Dave_Geologist, @WHUT,

    I know nothing about this area, but I wanted to contribute a remark. It is decidedly untrue that markets, with regulations or not, will automatically produce a needed item if the price is high enough. Markets and their interactions among one another are too complicated and interconnected for that to be a simple proposition.

    There are three examples from recent business history.

    First, apparently, in the market for large hardware for telecommunications, ranging from cell and microwave towers to converters and transducers (I’m not sure of their technical EE names) for fiber optic cables, the supply chain has sufficient variability that supplies of key parts start and stop at various times, depending upon impacts of demands for semiconductor chips and other components and weather-related outages. Accordingly, there is a business model springing up which identifies such key components and either manufactures or stocks ones they can make money on closer to home. Supply is not complete, and principals in these businesses are always looking for new components to stock, and existing inventory to stop carrying, because they judge their margins to be too low to make these worthwhile. One way margins can be too low is that the events which interrupt supply are too infrequent for sufficient margin, even if the companies needing the components are stranded when this happens.

    Second, there is an intermittent supply of sterile saline for hospitals in bags, so much so that a network of hospitals, which are normally competitors, are creating a shadow manufacturing and distribution operation to provide saline in a pinch. Apparently, the trouble is that medical suppliers only can afford so much capacity and when they choose to market certain items, they make the judgment, again, on margins and rates of return. Whether, in this case, the rates of return are constrained because of price controls (e.g., Medicare), or labor and other costs are just too high to make fractional rates of return for these products favorable, I do not know.

    Third, lags in market response can stop supply in critical times. Setting up new supply chains, and normal contracting, only happens with delays. Accordingly if there is an interruption, whatever the cause, while the market might, in the long term, respond, in the short term when there is a need, the end manufacturer or deliverer cannot wait for that response and pursues faster more expensive options. Demands can dissipate so the price signal dissipates as well, and the market response to these needs never materializes. A case in point was the price spike for electricity, of some 80% on Labor Day this year on the ISO-NE grid, when a demand forecast by ISO-NE underestimated actual demand, which caused the RTO to call up peaking gas plants, and then several of these mechanically failed, without backup. ISO-NE then bought out-of-region electricity on the spot market to fill the need. The point is that it is not sufficient to have a market mechanism. That market needs to be agile enough to adapt to critical outages before price signals evaporate. If its response is too slow, it’ll never see the signal and there’ll be nothing done to stave off this problem.

  30. izen says:

    @-ATTP
    ” My biggest issue with the book is that, despite it containing all the necessary caveats, I think it will be used by those who oppose climate policy to argue that there is no evidence that anthropogenically-driven climate change is having any impact on us”

    It is worse than that.
    Before I discovered you had made a new post, I encountered a climateskeptic reddit thread proclaiming that FINALLY a climate scientist had ADMITTED that AGW did not cause extreme events or disasters!
    They weren’t referring to RPJr. The quote they gave was from a review of his book:

    “There is plenty of discussion on whether or not disasters [have] become costlier because of human-caused climate change. The answer is no, the data don’t support claims that the rising costs of climate disasters are due in any part to a human influence on climate.”

    I think it was pointed out that it might be an astronomer rather than a climate scientist, but that did not diminish the apparent glee with which this admission was greeted.

  31. izen,
    Oh dear, that’s not very good. I actually didn’t consider that that might happen. Of course, the latter bit that they quote is actually a quote from the book.

  32. The one geophysicist that takes a systems approach to the current situation is Raymond Pierrehumbert.

    For one, he wrote the book on the principles of planetary climate.

    Yet, he also wrote a well-regarded article on “The Myth of Saudi America”, describing the trajectory of USA oil production
    http://www.slate.com/articles/health_and_science/science/2013/02/u_s_shale_oil_are_we_headed_to_a_new_era_of_oil_abundance.html

    Dave, is Pierrehumbert also a one-trick pony?

  33. angech says:

    I am impressed that you and Gavin were sent copies of the book for review and comment.
    Furthermore, that you both find some good aspects in his arguments and are prepared to say so.
    Well done.

  34. Everett F Sargent says:

    Google …
    “Do natural disasters stimulate economic growth?”
    “Are Natural Disasters Good for Economic Growth?”
    “Can Natural Disasters Help Stimulate the Economy?”
    “How Natural Disasters Affect U.S. GDP”
    “Hurricanes, Disasters, and GDP”
    “GDP and Natural Disasters”
    “How do natural disasters affect the economy?”

    Note that none of those searches explicitly includes climate change. Most of the 1st page hits don’t even mention climate change.

    In other words, how much of GDP, as a percentage, is spent on all forms of maintenance (including, for the moment, natural disasters)?

    We have the parable of the broken window fallacy, but so what.

    Does RPJr even address the real possibility that GDP may be a poor metric for natural disasters?

    [I’ll snip this paragraph. Next time it’s the whole comment. – W]

    I’m thinking that natural disasters would have to become chronic/very frequent and widespread.

  35. Willard says:

    You may also like:

  36. Eli Rabett says:

    Not to get personal about it, but Roger did not send Eli a copy. Perhaps he will, but Eli is not taking bets.

    Dikran is doing a good job, but the problem with Roger’s work is that he has never confronted how hardening of sites and better forecasting has affected damages. Eli has pointed this out many times to no response. Perhaps some other bunny might try.

    If damages scale with GDP in spite of better adaptations then obviously things are getting worse. You can say it in an old style tweet.

    As an example, we have the sad case of Haiti on Hurricane Alley. Haiti has constant population and GDP, and indeed, killer hurricanes are much worse in this century than last.

    http://rabett.blogspot.com/2013/06/rabett-does-hurricanes.html

  37. angech says:

    Eli,
    I ask you to recast two of your comments.
    How does better forecasting affect damages?
    That last hurricane deviated from its forecast line and did most of its damage in flooding.
    I understand taking shelter in a tornado might save [valuable] lives. People could try to flee hurricanes in cars, but the GDP side of the damages is consequent on the wind strength, storm surge, rain and flooding. No forecast is going to stop that.
    “Haiti has constant population and GDP,”
    Look, you may be right, you would not write it if you were wrong, but Haiti has had an increasing population over the last 100 years and I would feel [don’t know] that it would have increased a lot even in the last 10 years.
    The problem with GDP etc. is that as more people and more structures go up, the same intensity hurricane 30 years later must do 27 times more economic damage.

  38. angech says:

    “Haiti has constant population and GDP”
    Population growing:
    1965 4,271,133
    1995 7,819,806
    2005 9,263,404
    2018 11,112,945
    Recent Gross Domestic Product (GDP) in Haiti was worth 8.41 billion US dollars in 2017.
    It has been stagnant the last 5 years but has grown a lot over the last 25 years.
    GDP in Haiti averaged 5.06 USD Billion from 1991 until 2017, reaching an all time high of 8.78 USD Billion in 2014 and a record low of 1.88 USD Billion in 1993.

  39. Dave_Geologist says:

    The 70’s & 80’s shocks were related to the realization that the conventional crude oil production was peaking in the USA

    Funny, from this side of the pond it looked like the Arab oil embargo and successive threats from war or civil unrest to Gulf and Middle East production, and to transport via the Straits of Hormuz and the Suez Canal.

    But, whatever, we’ve been round this loop before. Let’s not derail this thread.

  40. Dave_Geologist says:

    Final word to Paul: “Do you really want to argue this graph?” Of course I don’t. It’s a no-brainer. I only want to argue its relevance. It falls down where all previous Peak Oil analyses have fallen down. It models production from existing plays using existing techniques, and ignores new plays and new techniques. Which of course are also finite, but are not yet exhausted. Not even close, judging by the number of $100/bbl-breakeven investments I’m aware of which were put on hold when the price crashed in 2008. You won’t see them in SEC Reserves statements because they don’t count until you’ve committed the capital funds and have government approval to proceed.

  41. dikranmarsupial says:

    Thanks ATTP/Eli, I thought I’d gather it together to make an Amazon review. Hopefully I’ll finish the rest of the book this week.

    izen wrote “Before I discovered you had made a new post, I encountered a climateskeptic reddit thread proclaiming that FINALLY a climate scientist had ADMITTED that AGW did not cause extreme events or disasters!”

    Well, they obviously hadn’t read the book then, as it repeatedly states how it is consistent with what the IPCC have been saying since 2007 (assuming that is true, but probably only within their strict detection and attribution framework and discounting what the models tell us).

    Well done me for not rising to angech’s trolling…

  42. Dave_Geologist says:

    ecoquant, I certainly wasn’t arguing that markets are perfect, with or without regulation. I was using just-in-time in the same way the car industry does. Obviously no-one sensible runs their inventory so tight that the production line has to shut down for six hours because a delivery truck got a puncture. Although KFC does appear to be like that, judging by the chaos when they changed delivery supplier a few months ago. My cousin works in IT and was recently involved in a number of major logistics projects, and gave me some additional insight from what he’d heard on the grapevine. They were delivering fresh each morning for the lunchtime rush, and even if the delivery only arrived an hour or two late, rather than wait the customers went elsewhere. So they had a double whammy of customers but no chicken, then chicken but no customers.

    Routine O&G is like your first example. There is a spot market where prices fluctuate well above and well below the headline price, often quite local because if you need it on the day, you can’t ship it from the other side of the world. Some small petrol stations run entirely on spot-market fuel, bought on days when the price is low. The downside is that they sometimes run out of some grades for a few days. They’re generally small and have a cadre of loyal customers driven by price, who are prepared to wait a few days until supplies are re-stocked. A supermajor with a busy filling station in a motorway service stop can’t do that. The reputation damage caused by stranding hundreds of cars and trucks would be too great.

    Prescription drugs have a similar issue to your second one. Some manufacturers run out a batch of 5mg, then a batch of 10mg, etc. Some make one size in the UK, the other in France, etc. Some retailers like to stick to one wholesaler. I don’t have a good O&G analogue for that.

    The third example is equivalent to the price shocks when supplies are threatened by war or unrest, or there is a risk (or actuality) of a major player like Kuwait being taken out of the game. The supply chain can cope with interruptions of a few days, but not a few weeks or months. Oil and gas are low-value bulk commodities, cheaper than bottled water, but unlike coal or iron ore are very expensive to store because of the fire or explosion risk. People store enough for business as usual. A major supply outage breaks the system. A gradual decline, for example the North Sea running dry, is fine. That’s a slow process, running over decades, and people plan for and invest in new sources of supply. The cost of storing a barrel of oil is in the order of $5-10/year. At $50/bbl gross, $10/bbl profit if you’re really lucky, plus the cost-of-capital incurred by storing not selling, no-one but governments can afford to store enough to compensate for something big like a hot war between Saudi Arabia and Iran. And they choose not to.

  43. Dave,
    What exactly does “fallen down” mean? The reason that areas like the Bakken are exploited is because all the conventional crude oil reservoirs are depleting. If the price of oil doubles that means that the energy expended during extraction essentially doubles. Money=Energy in an energy-based economy.

    “It models production from existing plays using existing techniques, and ignores new plays and new techniques. “

    Oh, is that why Trump’s DOI guy Zinke is opening up national monuments and national parks in Utah and elsewhere to oil companies? Are those the new plays?

    It’s wild that a new book coming out by Bethany McLean is called “Saudi America” and that it is so close in content and name to Pierrehumbert’s article “The Myth of Saudi America”.

    BTW, thanks for your final word. Our own book on such topics will be out in a few months, and one of my co-authors, who helps run the popular Peak Oil Barrel blog, is happy to keep the discussion going there on both fossil fuel depletion and climate change.

  44. ecoquant,
    What Dave seems to be missing with his post-finalWord comment is essentially this:

    — Oil inventory and supply interruptions are analogous to weather variability.
    — Monotonic oil depletion is analogous to climate change.

    Based on our experiences, we all know which one of these is more important in the long term, yet Dave seems to want to argue that the former is more important.

  45. dikranmarsupial says:

    “If damages scale with GDP in spite of better adaptations then obviously things are getting worse. You can say it in an old style tweet.”

    or alternatively the adaptation isn’t working very well, so perhaps mitigation might be better? Good point, well made.

  46. Dave_Geologist says:

    Dave, is Pierrehumbert also a one-trick pony?
    Not in the slightest. For one thing he’s moved from his original speciality, and put in decades of hard yards to make himself an expert in his new speciality. As I did when I went from my billion-year-old-hard-rock background into the oil industry. Whereas Hamilton 2010 looks like a warmed up version of his PhD thesis from 30 years ago. I’m used to a narrower definition of “geophysicist” however. The AGU encompasses a broad range, including astronomy and exoplanets. To say every geophysicist or AGU member is an expert on all aspects of that very broad definition of geophysics would be a false argument from authority. To say that each one is an expert in his or her own sub-discipline is an argument from genuine authority.

    As we discussed previously, his Slate article is like the curate’s egg, good in parts. Unsurprising, as he’s writing outside of his academic speciality. As I would be if I wrote about meteorites and the outer planets, but Monica Grady would not, even though we’re both geologists. As Jane Plant (RIP – I knew her and worked with/for her briefly decades ago) was when she wrote about the distribution of potential carcinogens in geological deposits, landfills, tailings piles etc. But not when she wrote about beating cancer with a fad diet (no I’m not discounting the role of diet in cancer as demonstrated by epidemiologists, just the validity of simplistic “miracle cures”). Sadly, even very smart people can fool themselves with motivated reasoning, especially when they venture outside their area of expertise: “now that I got a recurrence and am again in remission proves that my diet works”. In Ray’s case, as I pointed out previously, he appeared to be unaware that the industry in general is not considering steam-flooding to extract oil from immature oil shales. The smart money is on the “Shell method”, i.e. controlled, in-situ partial combustion.

  47. Shorter Wotts: I know I disagree, I just can’t find anything to disagree with.

  48. verytallguy says:

    Shorter Wotts: I know I disagree, I just can’t find anything to disagree with

    Oh my aching sides!

    “There is no doubt in my mind that the literature on climate change overwhelmingly supports the hypothesis that climate change is caused by humans. I have very little reason to doubt that the consensus is indeed correct.”

    R. Tol

  49. dikranmarsupial says:

    Pity an economist can’t come up with something to say about the book and makes do with a bit of lame trolling 😦

  50. Dave_Geologist says:

    Based on our experiences, we all know which one of these is more important in the long term, yet Dave seems to want to argue that the former is more important

    OK because this one is at least tangentially related to climate change 😉 . I don’t. I want to argue that both are important. It’s the weather that kills people and wrecks their livelihoods, not the climate. But a changing climate drives changing weather. Arguing about whether Peak Oil estimates do or don’t include all the commercially exploitable plays is like arguing about whether ECS is 1.5K or 4.5K. Saying the resource is finite is true but trivial. It’s like saying ECS is positive not negative, because otherwise we couldn’t have Ice Ages. There is a lot of importance in knowing whether it’s 1.5K or 4.5K. Just as there is in knowing whether the availability of oil will peak in 2030 or 2100, as opposed to its production peaking because demand has been reduced by renewables competition.

    My initial reaction against the Slate article was to its complacency. A lukewarmer would read it as saying there’s no need to implement taxes and regulation to save the planet: the oil will run out anyway before we can do serious damage. I disagree. There was a shedload of stuff in the US onshore waiting to come onstream when the price crashed in 2008. It hasn’t gone away. BP’s assets are now profitable at $50/bbl. That’s why they bought BHP’s assets, because they think they can turn them around to be as profitable. When the price goes up, rig-rates will also go up, so the costs will go up. But they’ll be profitable at less than $150/bbl this time, Because Learning. The people who lost their shirts didn’t lose them because the oil is inherently uneconomic to extract. They lost them because they paid land and drilling prices and made commercial decisions on the basis of $150 oil. Had the price stayed at $150, they’d have made a profit. Because their product price fell by 70-80%, they made a loss.

    West Africa north of the traditional Gabon-Angola arc will come before Utah, then East Africa. A return to the sub-salt play in West Africa has already started, now that Brazil has led the way in producing from microbialites. Maybe the Tarim Basin in China. We had a look when CNPC were marketing it in the early 90s, but decided it was too much like the Lower 48 assets we were divesting. Maybe its time has come. Hey, it’s currently a carbon sink! Maybe we should add irrigated agriculture to our list of NETs (it’s not very surprising at first glance, same process that forms calcretes, but looks a bit slow, thousands of years).

    Even if I’m wrong, the precautionary principle should compel us to act as if there is enough commercially producible oil and gas to wreck the planet (for humans). If you’re right, imposing restrictions now, by market means or fiat, will ensure that there is enough left for our grandchildren to use in speciality chemical manufacture. If I’m right, relying on it running out, rather than making tough decisions now, is a recipe for disaster.

  51. That’s what Pierrehumbert is referring to. If the Green River oil shale is partially combusted to add enough energy to convert the kerogen to something usable, then the game is over. The amount of CO2 emitted will skyrocket as N*X amount of energy will be lost as waste for every X amount that is usable. For every new find the EROEI keeps going down, meaning that more and more waste CO2 is being produced. That’s explained in Pierrehumbert’s article.
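    To make the arithmetic behind that concrete (a rough sketch of the general EROEI point in my own framing, not a calculation taken from Pierrehumbert’s article, and assuming the energy invested in extraction is itself supplied by fossil fuels): delivering N units of net energy from a source with a given EROEI requires gross extraction of

    ```latex
    G = \frac{N}{1 - 1/\mathrm{EROEI}} = N \cdot \frac{\mathrm{EROEI}}{\mathrm{EROEI} - 1}
    ```

    so the CO2 emitted per unit of net energy delivered scales roughly as EROEI/(EROEI - 1): about an 11% overhead at an EROEI of 10, 50% at 3, 100% at 2, and growing without bound as the EROEI falls towards 1.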

  52. Dave_Geologist says:

    The long-run impact of bombing Vietnam (my bold).

    This finding indicates that even the most intense bombing in human history did not generate local poverty traps in Vietnam.

    That’s a somewhat narrow and possibly biased (in the technical sense) definition. For example, how much was the overall economy of Vietnam adversely impacted? How much effort was spent by central government in rehabilitating the worst-hit areas? How much GDP growth was lost due to that diversion of resources? How much did the psychological impact of the bombing and other brutality drive Vietnam for decades down an economic path that it should have seen wasn’t working, by inspection of China and the USSR? Is that part of the reason why Vietnam’s GDP per head is a quarter of Malaysia’s and less than half that of Thailand’s (yes Malaysia has oil and gas, but so has Vietnam)?

  53. Dave said:

    “My initial reaction against the Slate article was to its complacency. A lukewarmer would read it as saying there’s no need to implement taxes and regulation to save the planet: the oil will run out anyway before we can do serious damage. I disagree.”

    Many that read that article would disagree. This is what Pierrehumbert said:

    “In his talk at the AGU session, Charles A.S. Hall pointed out that the energy return on investment—the amount of energy you get out of a well vs. the energy needed to produce the oil—has been getting steadily worse over time. As long as there is some net energy gain and some profit to be made, drilling may go ahead, but the benefits to the energy supply deteriorate at the same time as the collateral damage to climate (in the form of increased carbon dioxide emissions per barrel of oil produced) goes up.”

    The clear warning that Pierrehumbert made (and will be echoed in the forthcoming MacLean book) is that the depletion of conventional crude oil supplies is leading to these third-rate hydrocarbons that are much worse to extract in an environmental sense.

    It’ll be fun when we present on LTO at the AGU in a few months 🙂

  54. Dave_Geologist says:

    The Green River is an irrelevant distraction, Paul. It’s small beer in global and even US onshore terms. The EROEI would be high from the Operator’s viewpoint. About the same as existing shale oil. You’re burning part of the resource underground. A resource which is otherwise of no economic value, so every barrel that comes out is a barrel more added to the world’s supply. That’s not what Ray was talking about. He was talking about steam-flooding, where you use energy at surface, taking a barrel from the world’s current supply to get a barrel or two back. That’s a completely different kettle of fish and would be silly, which is why the industry has rejected that method. As I said on the other thread, in situ combustion could potentially be done in a relatively low-emissions way. Most of the CO2 will stay underground, or dissolve in the oil where it can be removed in the processing plant and potentially recycled or sequestrated much more economically than if you have to extract it from flue gases.

    “For every new find the EROEI keeps going down”. Simply untrue. Unless you make a direct equation of $ to bbl. Of course there is a general relationship, but an iPhone costs twenty times what it costs me to fill my tank. It doesn’t represent twenty times as much energy consumption. Most of the onshore is produced by depletion drive, where the only energy input after drilling and fraccing is pumping to market. Most offshore is produced by waterflooding, with huge gas turbines, large enough to power a small city, running for decades. Bet their EROEI is lower. Why not just run them on depletion then? Because $ ≠ bbl. When you’ve invested $10Bn on infrastructure before the first bbl flows, it makes sense to burn a bbl as fuel if you get 10 bbl back. Why not waterflood unconventionals then? Because the reservoir properties are unsuited to waterflooding. Technical issues, not $ or EROEI.

    Oh dear, I’ve got dragged in again 😦 .

  55. You say that the Green River is irrelevant after you have been describing how to partially combust oil shale underground.

    Green River oil shale = the biggest oil shale deposit known in the world

    If you want to disagree, then you can change the Wikipedia entry
    https://en.wikipedia.org/wiki/Green_River_Formation#Oil_shale

  56. ecoquant says:

    @Angech, @Eli, @ATTP,

    The relationship of losses to GDP and such has been repeatedly reported (e.g., in Science, noting their Figure 2; there’s also some focus upon Europe). The idea that increases in losses are due solely to increased economic development may have popular appeal, and is a handy claim for deniers or minimizers, but is, like most of what else they spout, at least disingenuous if not downright false. What do they think, that the Munich and Swiss Res of the world try to retain profitability by pushing PR distraction in the manner of the present White House in the USA? This is not to say that there aren’t investments and preparations which can be made to help contain damage, whether in industries or protecting the public. But merely putting up Evacuation Route This Way signs is demonstrably an abject abandonment of government responsibility for doing just that. Ultimately, government is the one that’s going to have to contain the moral hazard of putting assets repeatedly in harm’s way and, if abandoning or hardening assets is unworkable, figure out how to protect them, at whatever increased taxation and cost it takes.

    The trouble with deniers and their ilk is, to me, not a matter of engagement or “listening to the other side”. It’s just that if a source (any source, including people like Guy McPherson or Allan Savory) has repeatedly been found to disregard and distort facts, whatever their motivation for doing so, my prior on their findings is going to weight them really low.

  57. Willard says:

    > The Green River is an irrelevant distraction Paul.

    In fairness, the whole discussion is.

    ***

    You know my policy, Paul. One drive-by per thread. You did yours.

  58. Lerpo says:

    Does Roger give any indication when we should expect (or should have expected) the signal to rise above the noise and become detectable? Is it surprising that this isn’t already detectable?

  59. Lerpo,
    No, not that I can recall. I don’t think it’s surprising that it is difficult to detect a signal. I don’t think that there is complete agreement that one hasn’t yet emerged. Of course, if you consider individual events, we have already shown that the conditions in which these events occur have been impacted by anthropogenically-driven climate change and that the changes to the underlying conditions have almost certainly influenced these events.

  60. Dave_Geologist says:

    Green River oil shale = the biggest oil shale deposit known in the world

    Wiki conflates in-place with reserves. Reserves are, by definition, economically recoverable. And beware of statistics that only say “up to” and give no other numbers. “Up to” is synonymous with “less than”. To quote the USGS:

    No attempt was made to estimate the amount of oil that is economically recoverable, largely because there has yet to be a process developed to recover oil economically from Green River oil shale.

    Recovery factors of about 50% are spoken of for underground room-and-pillar mining, and up to 80% for open-cast (but most of it is thousands of feet down, and it looks like only a hundred square miles or so is shallow enough for open-cast). Of course they both suffer from the EROEI drawback pointed out by Pierrehumbert, because you’d be trucking it to surface retorts powered by imported energy, or by burning some of the oil shale. Not worth the effort; I’d drill another thousand Bakken wells instead. For subsurface combustion, you’re basically turning an immature oil shale into a mature oil shale, so I’d expect recovery factors similar to the Bakken’s. BTW my memory played me false 😦 , that’s the Chevron method; the Shell method is subsurface electric heating, which also suffers from the EROEI problem. This paper suggests 7% recovery, which seems pretty fair. Primary recovery for oil is usually around 5-10%, because you’re relying entirely on depressurisation in the near-wellbore and a degree of gas exsolution in the wider reservoir for your driving force. Depletion drive = expansion drive. But you’ll need to burn half of it underground, so let’s say 3%.

    At 0.05 mD permeability you’d need a very high well density to get to that value. 2000 ft spacing seems to be the order of the day in the Bakken, so say one-and-a-half wells per sq km for the Green River, or about 25,000 wells. I can think of a lot of things I’d do with 25,000 wells before I’d go to the Green River. Let’s assume $10M per well, because we’ll be learning in a new play and will probably have to do lots of restimulations and sidetracks because the sweet spots won’t be where we expected them to be. It will depend on how fractures link the injectors to the producers, and the combustion zones to the retorting zones. Say $250B (the Chevron method as described in Wiki uses vertical wells, but I’d expect a future implementation to learn from the Bakken etc. and use multi-fracced horizontal wells).

    But there are about half a dozen shale members, and the method just retorts the central levels of each. So let’s say 150,000 wells and $1.5T. And probably halve the recovery factor again, so maybe 50 Bbbl. Spread over 50+ years, that’s about 10% of US consumption. Not chicken-feed, but not a game-changer either.

    And oh dear, 1.3% w/w H2S in the generated oil. Expensive pipes and jewellery and clean-up costs. Best to burn it locally in thermal power stations. And yes, best to retort it underground, not at surface.
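
    For anyone who wants to check those sums, here they are in a few lines of R. The inputs are the guesses above; the US consumption figure (roughly 20 million bbl/day) is my own round number, not from any of the sources discussed.

    wells.per.member <- 25e3        # ~1.5 wells per sq km over the play, one shale member
    members          <- 6           # half a dozen shale members
    cost.per.well    <- 10e6        # USD, allowing for learning, sidetracks and restimulations
    total.wells      <- wells.per.member * members      # 150,000 wells
    capex            <- total.wells * cost.per.well     # ~1.5e12, i.e. ~$1.5T
    recovered.bbl    <- 50e9        # ~50 Bbbl after halving the recovery factor again
    years            <- 50          # "50+ years"; a longer horizon lowers the share further
    us.bbl.per.year  <- 20e6 * 365  # ~7.3 billion bbl/yr
    c(total.wells = total.wells, capex.USD = capex,
      share.of.US.consumption = (recovered.bbl / years) / us.bbl.per.year)   # ~0.1-0.15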

  61. Lerpo says:

    Even if we can’t take the absence of proof of its occurrence as positive proof of its non-occurrence, I suppose he at least shows that with some bad luck things could have been as costly even without global warming?

  62. Dave_Geologist says:

    To continue the thought experiment … Shell and Chevron both seem to have put their pilot studies on the back burner. Looks like they see better prospects in the likes of the Bakken. Or offshore or overseas. So let’s say the technology gets reactivated and worked up, and permissions are granted for subsurface combustion (with an EROEI of at best 2, external-energy methods like Shell’s are a non-starter for fuel production IMO – might as well just mine). The Chevron method has obviously been thought through. Start with a seed pad injecting hot CO2 rather than hot air (less risk of a runaway fire), recycle the CO2, frac connections to the next pad and introduce air to burn the residual oil and kerogen at the first site, rinse and repeat. A nice, steady, learn-as-you-go-along approach, but not one that lends itself to a rapid production roll-out. The seed pad will have the same EROEI as Shell’s method, so ideally you want a small proportion of seed pads: one in ten would be good, one in a hundred excellent.

    If Paul is right about Peak Oil, I could see us doing that by 2050. If I’m right, maybe by 2100. It’s as big a step from the Bakken as the Bakken is from the invention of fracced and horizontal wells, and fraccing has been around since the 1950s and horizontal wells since the 1970s. 30-50 years lead time sounds about right. However, to have a chance of meeting Paris we need to already be cutting down on oil consumption by 2050, and have weaned ourselves off of it as a fuel by 2100. That’s what I mean when I say the Green River is irrelevant. We can’t afford to be using it as a fuel by the time I think its day will come, and we probably can’t afford to by the time Paul thinks its day will come.

    But it could absolutely keep us on course to RCP 8.5 if we abandon the clever technological stuff I’ve been talking about and mine it like coal, burning half of it at surface to retort the rest. That could end up with us burning a trillion barrels, of which only half is turned into usable product. We have the technology to do that today. It would be expensive, dirty, involve going back to underground mining, thousands of feet below the surface in shafts and adits. The sort of thing my grandparents’ generation did. Of course I’m not advocating it. Not because it’s difficult, or even particularly unsafe, but because the CO2 emissions would be horrendous. I agree with Pierrehumbert on that. The point I was making, previously and in this thread, is that the oil industry isn’t thinking of doing it that way. The coal miners and electricity utilities are the ones who have expertise in that sort of thing. O&G refiners would become the miners’ customers, and power utilities would install CHP plants at the retorts. The O&G industry would instead go down the underground retorting route, selling it as having a much better EROEI. The Green River alone would not add a huge burden (but could be the last straw). But it would be astonishing if it were the only such deposit in the world. I’d be astonished if there are not dozens or hundreds more. So once a technology was developed, it would spread across the world, and could be enough to put us on an RCP6 path which we also can’t afford.

  63. Dave_Geologist says:

    Sorry Willard, let myself get carried away there, once I started into the thought experiment. 😦 .

  64. Dave_Geologist says:

    Back to GDP, do I remember reading somewhere that catastrophe losses are not subtracted from a country’s GDP? And that the cost of rebuilding counts as additional GDP? So it’s like a corporation that overpays for a takeover target or has a cost over-run on a capital project – despite representing an erosion of net value, it shows up in the accounts as a positive. The more you overpay, the more positive it looks. Until the auditors tell you to take a write-down. But GDP is a magic-money-tree with no boundary conditions. So you can’t account for the impact the way a company would; you can only compare actual GDP growth with projected GDP growth absent the disaster.

  65. dikranmarsupial says:

    Here is my TL;DR summary: If you want to know what the science says, read the relevant sections of the @IPCC_CH reports (which apparently say the same thing the book does). If you want to know why the debate is the way it is, watch Rashomon (which doesn’t exclude me, by any means).

    Will transcribe the rest tomorrow.

  66. Willard says:

    > let myself get carried away there

    Once per thread is fine, Dave. Just make sure you leave breathing space for otters.

  67. Joshua says:

    Eli –

    but the problem with Roger’s work is that he has never confronted how hardening of sites and better forecasting has affected damages. Perhaps some other bunny might try.

    I would suspect that in RPJr.’s view, he has confronted that issue (and discounted it as having much explanatory power) – although others (including myself) might think he hasn’t done a very thorough job. Medoubts efforts from other bunnies would result in many carrots..

  68. Richard S J Tol says:

    @Dikran
    I have not read Pielke’s second edition. I did read the first one. I broadly agree with him on the role of science and scientists in policy advice, although his thoughts are not nearly as original as he would like us to believe. Pielke’s work on disaster damage normalization is flawed, as we show in our Nature Geoscience paper.

    @vtg
    If you read beyond those sentences, you find very specific critique of Cook’s nonsensus.

  69. dikranmarsupial says:

    Prof Tol, there didn’t seem to be much there about the role of science, other than that the real problems are more social, economic and political, and I think Mike Hulme did a much better job of setting them out in his book (perhaps because there was less hyperbole and partisan sniping). I will look up the Nature Geoscience paper when I have a moment; a link would be nice.

    I don’t think you want to talk critiques of “Cook’s nonsensus” unless you want to discuss (on a more appropriate thread) the fundamental flaws in your critique of the paper, starting with the fact that it is based on a statistical (marginal) assumption that is clearly incorrect a priori, and on which the conclusions are predicated, and moving on to repeated use of mindless “null ritual” hypothesis tests (I suspect I have pointed this out more than once).

  70. izen says:

    It is difficult to know if the timing of this discussion of RPJr and the uncertain impact of climate disasters is a feature or a flaw.

    With Florence heading inexorably towards N Carolina as a cat4 hurricane, it certainly provides a topical boost for a book on such issues.

    With some news sources warning this will be bigger than Hugo, the last strong hurricane (in 1954?) to impact this far north on the US East coast, and with far more people and houses in the way, predictions are apocalyptic. Meanwhile WUWT reassures that it is a nothing-burger being over-hyped.

    I understand that because of the rarity of such an event there is no PDF that can be used to show this confirms a change in the odds of such a northerly hurricane. The lack of much precedent also means it may be difficult to claim warming has made it much stronger than it would have been in past decades/centuries.

    How much property damage would it have to cause to put a dent in the apparent independence of cost and GDP from climate-related extreme events?
    Or is GDP inherently decoupled from damages by Wind, Fire, and Flood?

  71. verytallguy says:

    If you read beyond those sentences, you find very specific critique of Cook’s nonsensus.

    Long version

    I’m sure you can go the rest of your career in this manner, but please take a moment to reflect. You’re far from retirement. Do you really want to spend two more decades doing substandard work, just because you can? You have an impressive knack for working on important problems and getting things published. Lots of researchers go through their entire careers without these valuable attributes. It’s not too late to get a bit more serious with the work itself.

    https://andrewgelman.com/2014/05/27/whole-fleet-gremlins-looking-carefully-richard-tols-twice-corrected-paper-economic-effects-climate-change/#comment-167776

    Short version: #freethetol300

  72. angech says:

    There do seem to be 3 potential hurricanes on the way. Hope the other 2 fizzle.
    See the Total Precipitable Water graph at the Arctic Sea Ice blog’s daily graphs page (bottom graph).
    There may be a 4th currently reaching Mexico?

  73. ecoquant says:

    @Lerpo,

    Does Roger give any indication when we should expect (or should have expected) the signal to rise above the noise and become detectable? Is it surprising that this isn’t already detectable?

    This doesn’t specifically address your question of what Doc Pielke Jr has or has not said about the timing of detectability. It tries to bound that by noting what the literature has already said on the subject.

    Note detectability depends, in part, on what you want to detect, which channel you want to use to detect it, and what methods you find acceptable for assessing the result. Some scholars seek evidence of changes in mean surface temperature attributable to greenhouse gas forcing. Some seek evidence of attributable heat content changes, for instance, in oceans. Some seek evidence of abrupt changes in Sea Level Rise rates. Some seek evidence of changes in precipitation rates. Some want a compendium of evidence that the biosphere is responding to forcing. Some want to know not only whether or not there is change, but when the phenomena will become dangerous.

    Here are some pertinent references, from most to least favorite of mine, obviously a subjective ranking:

    M. Oppenheimer, “When will Global Warming become dangerous?”, 5th assessment report, IPCC, 2014.
    R. S. Nerem, B. D. Beckley, J. T. Fasullo, B. D. Hamlington, D. Masters, and G. T. Mitchum, “Climate-change–driven accelerated sea-level rise detected in the altimeter era”, Proceedings of the National Academy of Sciences, February 27, 2018, 115(9), 2022-2025.
    J. Imbers, A. Lopez, C. Huntingford, M. R. Allen, “Testing the robustness of the anthropogenic climate change detection statements using different empirical models”, JGR Atmospheres, 2013, 118, 3192-3199.
    I. D. Haigh, T. Wahl, E. J. Rohling, R. M. Price, C. B. Pattiaratchi, F. M. Calafat, S. Dangendorf, “Timescales for detecting a significant acceleration in sea level rise”, Nature Communications, 5:3635, 2014.

    To me, the method is as important as the channel, although I know policymakers and the public probably consider this the least interesting aspect. Viability and robustness of methods for detection have been addressed with the support of and sometimes exclusively by statisticians. Part of the problem has been that traditionally the means and thresholds for detection of warming-related events have been left to applications of old statistical methods.

    There was nothing wrong with teaching these methods in their day, when datasets were smaller, computers slower, and ambitions for modeling modest. We can do so much better now (given adequate funding). It’s time, in my opinion, that practitioners in all fields showed more respect and circumspection for the statistical methods they employ. Rather than, for instance, treating a p-value as a trophy, they really need to understand p-values are random variables. The roles of variability, bias, shrinkage, and expected total squared error need to be better appreciated. Bias can be your friend.
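
    Since the p-value-as-trophy point comes up so often, here is a two-line illustration (simulated data only): under a true null hypothesis the p-value is itself a random variable, approximately uniform on [0, 1], not a fixed property of the world.

    set.seed(42)
    # 10,000 two-sample t-tests where the null is true by construction
    pvals <- replicate(10000, t.test(rnorm(20), rnorm(20))$p.value)
    hist(pvals, breaks = 20, main = "p-values under a true null", xlab = "p-value")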

    In any case, here are three references on methods. I much prefer the first two.

    D. Hammerling, “Climate Change detection and attribution: Letting go of the Null?”, CHANCE, 2017, 30(4), 26-29.
    P. Guttorp, “How we know that the Earth is warming”, CHANCE, 2017, 30(4), 6-11.
    T. C. K. Lee, F. W. Zwiers, G. C. Hegerl, X. Zhang, M. Tsao, “A Bayesian climate change detection and attribution assessment”, Journal of Climate, July 2005, 18, 2429-2440.

    I am being a little unfair to Lee, Zwiers, Hegerl, Zhang, and Tsao. That paper barely stands upon its own, relying upon:

    L. M. Berliner, R. A. Levine, D. J. Shea, “Bayesian climate change assessment”, Journal of Climate, 2000, 13(21), 3805-3820.

    to justify why they do what they do. Lee, et al note their assessment is an incomplete Bayesian approach. Berliner, et al explain:

    A fully Bayesian analysis would involve formulation of prior probability models for the quantities a, \mathbf{D}, \boldsymbol\Sigma, and perhaps \mathbf{g}. In view of the high dimensionality of the problem, this is a daunting task. For purposes of illustration, we follow simpler strategies of (i) formulating plausible estimates of \mathbf{D} and \boldsymbol\Sigma, and (ii) selecting fixed fingerprints. We focus on modeling and inference for the amplitude a here and describe our estimation of \mathbf{D} and \boldsymbol\Sigma in appendix A.

    It may well have been daunting then. I do not really know. (I surely have not done this. It’s not my field.) But I suspect it may not have been as daunting if strict comparisons of final and intermediate results with those from non-Bayesian approaches were wanted. Berliner, et al were very much defending and justifying Bayesian practice. Berliner, Levine, and Shea wrote in an era where Bayesian methods in atmospheric geophysics were suspect and, for their audience, they still may be. Consider:

    N. Lewis, J. A. Curry, “The implications for climate sensitivity of AR5 forcing and heat uptake estimates”, Climate Dynamics, August 2015, 45(3-4), 1009-1023

    namely, and quoting:

    Moreover, Carslaw et al. [2013] use a subjective Bayesian statistical approach, which may give unrealistic uncertainty estimation when (as with aerosol forcing) strongly non-linear functional relationships are involved (Lewis 2013).

    The key there is Lewis 2013, or:

    N. Lewis, “An objective Bayesian improved approach for applying optimal fingerprint techniques to estimate climate sensitivity”, Journal of Climate, October 2013, 26, 7414-7429.

    Carslaw, et al [2013], by the way, is:

    K. S. Carslaw, L. A. Lee, C. L. Reddington, K. J. Pringle, A. Rap, P. M. Forster, G. W. Mann, D. V. Spracklen, M. T. Woodhouse, L. A. Regayre, J. R. Pierce, “Large contribution of natural aerosols to uncertainty in indirect forcing”, Nature, 7 November 2013, 503, 67ff.

    I go to this trouble, because:

    There’s little “objective” to Lewis 2013, apart from Lewis’ claim (rant?) that the use of uninformative priors on parameters is the only ‘scientific’ way of doing Bayesian calculations. The claim is disingenuous: All priors are weakly informative. You simply cannot search the entire Real line. In today’s literature, an “objective Bayesian technique” does not mean what Lewis means. (In today’s literature it means embodying some mechanism for deriving priors automatically. And, you guessed it, these are weakly informative.) Lewis also ignores model specification error.
    Lewis and Curry complain about Carslaw, et al using a “subjective Bayesian statistical approach”, something I find insulting to them. I know very little about aerosols, but an examination of Carslaw, et al’s methods shows nothing subjective apart from introducing informed priors. Their paper continues to be cited in their more recent work on aerosols, including a paper in PNAS from 2016.

    Fortunately, it’s good to see:

    M. E. Mann, E. A. Lloyd, N. Oreskes, “Assessing climate change impacts on extreme weather events: the case for an alternative (Bayesian) approach”, Climatic Change, September 2017, 144(2), 131-142.

    Given the 20-year gap between Berliner, et al and Lee, et al and today, it also might be less daunting if Gaussian assumptions were relaxed and if high dimensionality were embraced. It would have been difficult for Berliner, Levine, and Shea to know that. The Bayesian computational revolution was just getting started.

    I think that’s also true for their blame of high dimensionality. Even for non-Bayesian approaches today, there are many more algorithms and options for dealing with it. It can be useful instead of an impediment.

  74. ecoquant says:

    @izen,

    Potential damage and erosion on coast, assuredly, but I’d say watch the water. In particular, given the density of population and infrastructure in the region, if the storm stalls in place or nearby for days, it could get really nasty.

  75. angech says:

    Prophetic Posting I put up elsewhere on August 26, 2018 at 12:24 am

    “Based on the overall expectations for low Atlantic hurricane activity in 2018, combined with forecasts of a U.S. landfall ranging from 50% to 100%, we can expect 2018 to be a year with smaller economic loss from landfalling hurricanes relative to the average.”
    Hope, not expect.
    It would only take one medium hurricane hitting a vital center like Florida to create massive economic loss.
    30 years seems to be the average time for repeat strikes. Why I am not sure, perhaps bandwidth to number of possible hurricanes. Judith might explain.
    So a 3% chance of severe damage at one site. 5 possible sites. 15% chance per year of an average economic loss. 2 hurricanes in the year put it above average damage, risk is 7 1/2% per year.”

  76. dikranmarsupial says:

    FWIW:

    Chapter 6 (this may take some time, sorry). This seems to me an opportunity to set out some interesting material about the Kaya identity and the nature of the economic, social and political realities, spoiled by hyperbole, rhetoric and uncharitable caricatures too numerous to mention. This makes it difficult to see it as an objective assessment of the situation in which we find ourselves.

    The chapter opens on the topic of “so what” if climate extremes & climate change are exaggerated, but the book provides no clear examples (at least AFAICS). Straying outside the strict IPCC detection and attribution framework, as I have explained, is not necessarily exaggeration.

    P.78 talks of “apocalyptic visions”, which is obvious hyperbole and rather ironic given the complaint about scientific exaggeration! Likewise “efforts to scare people”.

    Are the trends shown on figure 6.1 statistically significant?

    I don’t think it is true that the “one ideological commitment that unites nations and people around the world” is that “GDP growth is non-negotiable”. There clearly are those who would be willing to forgo GDP growth in the short term to solve a substantial world problem, if only because it was in their own long-term benefit. They may be a minority, but they do exist. Of course the response to this is an ad hom about “comfortable academics” in “posh university towns” – which IMHO is totally unconvincing (content-free) rhetoric. This is especially ironic as later in the chapter there is the complaint that things are “obscured by the public war over who gets to be a legitimate voice in the climate debate”, when @RogerPielkeJr appears to have just delegitimized the opinion of these “comfortable academics”!

    “Advocates fight over more trivial things like who should be allowed to speak on climate”. I actually don’t see much of that; what I mostly see is people arguing about the science, but this is often portrayed as an attempt to “delegitimise” people, which of course it isn’t.

    Referring to “incentives” as “economic pain” is indicative of political bias; one could equally say it is the removal of an unjustified subsidy, and that we should pay the full economic cost of our actions. See, I can do it as well, the difference is that I am happy with the more moderate term as it doesn’t feed pre-existing biases. Referring to the strengthening of the incentives as “turning the screws” is just more emotive hyperbole. Equating the term “deniers” with Holocaust denial is also hyperbole, IMHO.

    The chapter is critical of the “deficit model”, however, this misses the point that there is no “one true” communication strategy, some want to be well informed (e.g. me before I got interested in this issue), others need a “values” based approach. It also misses the point that encouraging action on climate change is not the only reason for communicating the science, it has its own inherent value.

    Ironically, for a book called “The Rightful Place of Science: Disasters & Climate Change”, the book seems to have very little to say about the role of science, other than that it is largely irrelevant due to the political difficulties involved (a view with which I have some sympathy – arguments about the science are all too often largely a way of avoiding discussion of the social, economic and political problems). The rightful place of science is to help politicians understand the likely consequences of various courses of action and if possible to come up with some new technical/scientific options. Of course scientists should voice their opinions about the politics (for those that have them), but it should be done in a way that makes it clear what is based on science, and what is personal opinion. It is the responsibility of science to contribute to public understanding of science and to address scientific misunderstandings that propagate in the public discourse on important topics like climate change. I’m sure skeptics would be very happy for scientists to stop pointing out their misconception of the second law of thermodynamics etc. But if being told you are wrong polarizes someone, that doesn’t mean that they should stop being told and allow the error to propagate further. As the book says, “someone must take responsibility for scientific accuracy”.

    “Make no mistake, fighting skeptics has its benefits – it reinforces a simplistic good-versus-evil view of the world” is an uncharitable caricature (more delegitimising of opponents). It isn’t about good-versus-evil, it is about trying to get to the truth (or at least the closest approximation science can manage). It isn’t evil to be misinformed, that is ridiculous hyperbole, and the book would be more convincing if it left out this sort of cheap rhetoric.

  77. Dave_Geologist says:

    Moreover, Carslaw et al. [2013] use a subjective Bayesian statistical approach, which may give unrealistic uncertainty estimation

    Colour me cynical, but I’ve always considered that terminology more a rhetorical tool than anything else.

    The audience is supposed to read “subjective” as “influenced by the prejudices of the analyst” and “objective” as “independent of the prejudices of the analyst”. In a world where a large part of the population has been indoctrinated into believing the false accusation that climate scientists are letting political biases influence their results, that’s not so much a dog-whistle as a steam-powered klaxon.

    In the real world “subjective” means “taking into account independent sources of information, e.g. physics or palaeoclimate” and “objective” means “ignoring all independent sources of information, e.g. physics or palaeoclimate”. Which, by a remarkable coincidence, makes it easier to get the lukewarm answer you like. In the real world the personal subjectivity is reversed. The “subjective” analyst is using all the data and letting the chips fall where they may. The “objective” analyst is studiously ignoring inconvenient truths. And, IMHO, throwing away one of the arguments for/advantages of the Bayesian approach. The ability to use informed priors to improve your estimate beyond what the data in isolation can tell you. In a sense, it’s a version of consilience.

    Of course, the veracity of the informed prior should be demonstrated and can be challenged. But, for example, a prior ECS of zero can’t be justified without denying hard science. If you’re not a science-denier, you must know that you’re starting from the wrong answer. Both from physics and the fact that we had Ice Ages. At minimum you should test whether that perverse prior has biased your result low, for example by re-running with a perversely high prior, such as 6K, as a sensitivity check.

  78. Joshua says:

    30 years seems to be the average time for repeat strikes.

    Oy.

    Apophenia

    https://goo.gl/images/jNG8Kp

  79. Joshua says:

    Does this populate the image?:

  80. Dave_Geologist says:

    Thanks for the references ecoquant. I had some but not all already. I didn’t mention above what I would see as the other advantage of a Bayesian approach, that it tends to converge more quickly on the “true” answer in datasets which are regularly updated, such as the synthetic examples in MLO17. Especially with relatively small effect sizes (their θ₀ = 0.6 example). Obviously the practical usefulness of that depends on whether we care about the small effect sizes. Per angech’s one-in-thirty-year event, I really don’t want to wait sixty years to find out that it’s changed to one-in-twenty or one-in-ten. Especially if I could have done something now to mitigate or even prevent that change. I want to find out as soon as possible. Waiting for a frequentist 95% confidence threshold before acknowledging that something bad is happening is the opposite of the precautionary principle. Ironically, Bayesian approaches have some popularity in oil and gas exploration for precisely that reason. You’re spending money like water and want to learn as soon as possible if you’ve backed the wrong horse.

    You could argue that MLO17 use an objective or non-informative prior in that it is centred on 0.5 for all three runs, but I think this is a different situation from L13. MLO17 are using a distribution which is naturally bounded between zero and one (you can’t have more than 100% active years, or less than 0%, but we’re uncertain where the peak of the PDF lies). That’s analogous to knowing that ECS can’t be less than zero, can’t be less than the TCS, and very probably can’t be less than the CO2-only forcing, otherwise stuff like Ice Ages couldn’t have happened. That should set a hard lower bound to the prior, like MLO17’s zero. L13 also differs in explicitly avoiding Bayesian updating, but I think that is less of an issue with EBM ECS estimates than with extreme events. You’re dealing with a parameter (ECS) which you expect to be stationary across the time-series, so you don’t expect the last few years to be more informative than the previous fifty. Whereas with extreme events, we expect from climate models and basic physics that at least some will be non-stationary and become more common (or indeed less common) with time. So an efficient approach is one that identifies that non-stationarity earlier rather than later.
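
    A toy numerical version of that point (an illustration only, not MLO17’s actual procedure): simulate Bernoulli “active” years with θ = 0.6 and watch a flat-prior Beta posterior mean close in on 0.6, while in most runs a one-sided binomial test against 0.5 is still short of a conventional 95% threshold even at 64 years.

    set.seed(1)
    theta.true <- 0.6                       # assumed "real world" event probability
    years <- rbinom(64, size = 1, prob = theta.true)
    for (n in c(8, 16, 32, 64)) {
      k <- sum(years[1:n])                  # active years observed so far
      post.mean <- (1 + k) / (2 + n)        # Beta(1,1) prior, conjugate update
      p.val <- binom.test(k, n, p = 0.5, alternative = "greater")$p.value
      cat(sprintf("n=%2d  active=%2d  posterior mean=%.2f  one-sided p=%.3f\n",
                  n, k, post.mean, p.val))
    }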

  81. angech says:

    Apophenia, thanks Joshua. Fits perfectly.
    When I play the pokies I can sit for hours looking at little patterns that might lead to a win in the next 2 spins.
    Note to all. Does not work. Fun trying and limits bet size.
    Re hurricanes and 30 years. Well known with cyclones Southern Hemisphere.
    Would be a simple reason.
    Number of hurricanes per year, width of path of a cyclone on average means that at a particular latitude there would be a specific percentage chance of being in the zone.
    Note to all professional hurricane watchers out there.
    Can use this if attributed to ATTP and me.
    Sure it has already been done.

  82. Dave_Geologist says:

    Interestingly angech, it’s been done in the offshore Gulf of Mexico. It’s why production platforms are shut down and evacuated when a hurricane threatens. As opposed to the North Sea, where they just batten down the hatches during a storm. Back in the 90s, various Americans came over and told us our Southern North Sea shallow-water platforms were over-built compared to the GoM. When we asked why theirs were not stronger than ours, “because hurricanes”, they said they don’t hurricane-proof because you can’t. You evacuate, and rebuild if it gets hit. But the hurricane track is so narrow, most platforms serve out their entire 20-50 year life and never get hit. Whereas (east) Atlantic storms come in on a front hundreds of miles wide, so every platform gets hit, multiple times, every year. Although the wind forces are less, the problem is metal fatigue from repeated stressing. We learned that early with the Sea Gem. And were perhaps primed by the Comet crashes, which were attributed to metal fatigue from repeated depressurisation and pressurisation, compounded by the squarish window frames. Air travel must have been really scary 50 years ago. Despite the Comet being famous for those crashes, almost ten times as many were lost due to things like pilot error, takeoff or landing accidents*, instrument failure or bad weather.

    With something like hurricanes, it’s return times that matter, for the ones that are so bad the only safe place is elsewhere. For something like Atlantic storms, it may be sustained wind force that matters. The consequence may not be sudden failure, but ten years knocked off a structure’s life with increased risk of “premature” failure. For many of us, less frequent, more intense rainstorms will probably be the most obvious sign of climate change. Something for which the UK is not prepared, judging from recent events. Or indeed Texas.

    Regulars will be used to me saying just how drastically far down that enhanced-hydrological-cycle path the climate went during the PETM. Guess what, it happened before when there was a massive injection of carbon into the atmosphere and a 3-4°C global temperature increase. Third time’s a charm 😦 .

    * In the 90s I worked with someone who was also a private pilot, and he was always more nervous than me during airliner takeoffs and landings. When I asked why, he said “if you’re not a pilot, you don’t realise just how dangerous each takeoff and landing is”.

  83. Magma says:

    There’s a lot of fuss right now about Hurricane Florence, so I thought I’d channel my inner contrarian. Normalized by growth in population, GDP, wealth, and near-shore construction, adjusting for hardening of infrastructure and amortizing over a century at a 1% discount rate, the Carolinas and Virginia will be just fine. And nobody lives forever, anyway.

  84. Dave_Geologist says:

    Another GoM thought. You shouldn’t take offshore-installation hurricane return times as a guide to onshore climate risk. Offshore you don’t worry about the rain, only about the wind, and the swell if it’s big enough. The rain just goes into the sea. Many platforms are built with open sides and grating floors to prevent gas pockets forming, so the water just runs through them like a sieve. Onshore you can still get hit by flooding, miles away from the track of the highest winds.

  85. Richard S J Tol says:

    “Does Roger give any indication when we should expect (or should have expected) the signal to rise above the noise and become detectable? Is it surprising that this isn’t already detectable?”

    Yes, he has at least one paper on that, which puts the date somewhere in the second half of the millennium.

    I think it’s the paper which I discuss in this post. It does depend on the models that are used, but the conclusion seems to be that it will almost certainly have emerged by the mid 2200s. However, there are a few other things to consider. I redid the analysis and there would seem to be a 50% chance of it emerging before 2100 and about a 15% chance of it emerging before 2045. Also, it seems likely that even if the signal doesn’t emerge for a couple of hundred years, we would detect a shift from the damage being due to a combination of category 3s, 4s and 5s to most of it being due to category 4s and 5s much sooner than that.

  87. dikranmarsupial says:

    This is pretty much the problem with the argument in Roger’s book. The failure to detect a link, when we don’t expect to detect one anyway, tells us almost nothing about whether there is or is not a link. Even without the observations, physics (as implemented by the models) tells us to expect there to be a link that will become evident eventually.

    Let A represent the proposition that there is a link between climate change and losses due to climate extremes, and ~A represent the proposition that there is no link. Our understanding of the physics suggests that it is likely that such a link exists, say P(A) = 0.9 (=> P(~A) = 0.1). Next let B represent the event that a detection exercise identifies the link in some set of observations and ~B represents the event that no such link is detected. Now if there is no link, we can be quite sure of not finding one, so P(~B|~A) = 1 and P(B|~A) = 0. If there is a link, however, the chance of detecting it is small as the observation period is so short, say P(B|A) = 0.001 and P(~B|A) = 0.999. We can then use Bayes rule to find the posterior probability of a link, given that none was detected:

    P(A|~B) = P(~B|A)P(A)/[P(~B|A)P(A) + P(~B|~A)P(~A)] = 0.999*0.9/(0.999*0.9 + 1*0.1) = 0.8999

    so observing no detection, when we didn’t expect to detect a link even if it existed, has hardly changed our belief in the existence of a link. P(A) \approx P(A|~B).

    Fill in your own estimates of the various probabilities, but the basic fact is that the outcome of the detection and attribution study should not substantially change our prior belief about the existence of a link obtained from physics and model simulations.

  88. dikranmarsupial says:

    Or looking at it another way, the Bayes factor, which tells you by how much the posterior odds differ from the prior odds is P(~B|A)/P(~B|~A) = 0.999/1 \approx 1. In other words, the detection and attribution exercise doesn’t change your prior beliefs.
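
    For anyone who wants to plug in their own numbers, the whole calculation is a couple of lines of R (the probabilities are the illustrative values above, not estimates):

    p.A          <- 0.9      # P(A): physics and models make a link likely
    p.notB.A     <- 0.999    # P(~B|A): short record, so detection unlikely even if a link exists
    p.notB.notA  <- 1.0      # P(~B|~A): certain non-detection if there is no link
    posterior    <- p.notB.A * p.A / (p.notB.A * p.A + p.notB.notA * (1 - p.A))
    bayes.factor <- p.notB.A / p.notB.notA
    c(posterior = posterior, bayes.factor = bayes.factor)   # ~0.8999 and ~1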

  89. ecoquant says:

    @Dave_Geologist, and regarding MLO2017,

    As I wrote, it’s good to see a Bayesian approach enlisted in such an analysis of risk. However, I think it instructive to point out that their Bayesian setup is a tad old-fashioned. In particular, for their experiment, they tried priors for \theta in their Binomial event model of \theta_{0} of 0.5, 0.6, and 0.75. Why? Why not say, instead, \theta \sim \text{Beta}(2,2) and produce a full-fledged Beta-Binomial posterior? It even has a closed form. And if \text{Beta}(2,2) is not to one’s taste, why not \text{Beta}(\alpha_{j}, \beta_{j}) as a prior for \theta and allow \alpha_{j}, \beta_{j} to range, perhaps even setting up a hyperprior where \alpha_{j} \sim \text{Gamma}(\kappa,\phi) and \beta_{j} \sim \text{Gamma}(\kappa,\phi) where \kappa = 2 and \phi = 1? The js could denote different kinds of events, one being a pool of storm events, another a pool of drought events, another an excessive rain event, another a heat event.

    This kind of hierarchical model is nicely illustrated in another context, and is taught by Professor Kruschke in his Doing Bayesian Data Analysis.
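
    As a concrete (and entirely illustrative) sketch of what the conjugate form buys you: with a \text{Beta}(2, 2) prior on \theta and k active years out of n observed, the posterior is simply \text{Beta}(2 + k, 2 + n - k), so the full posterior density, not just a point estimate, comes for free.

    alpha0 <- 2; beta0 <- 2      # weakly informative Beta(2,2) prior, mean 0.5
    n <- 32; k <- 21             # made-up data: 21 active years out of 32 observed
    alpha1 <- alpha0 + k         # conjugate Beta-Binomial update
    beta1  <- beta0 + n - k
    post.mean <- alpha1 / (alpha1 + beta1)
    cred.int  <- qbeta(c(0.025, 0.975), alpha1, beta1)   # 95% credible interval for theta
    c(mean = post.mean, lower = cred.int[1], upper = cred.int[2])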

  90. Magma says:

    “…we estimate the time that it would take for anthropogenic signals to emerge in a time series of normalized US tropical cyclone losses. Depending on the global climate model(s) underpinning the projection, emergence timescales range between 120 and 550 years, reflecting a large uncertainty. It takes 260 years for an 18-model ensemble-based signal to emerge” Crompton et al. (2011)

    Without having the time or inclination to go through the details of an eight-year-old paper, it seems to me that this fails the basic order-of-magnitude “sanity check” that’s advisable at various stages of any quantitative modeling analysis. Either that, or the normalization methods employed inadvertently remove much of the anthropogenic signal sought.

  91. dikranmarsupial says:

    [pedant]
    The title “Doing Bayesian Data Analysis” has never sounded right to my eyes, shouldn’t it be “Performing Bayesian Data Analysis”?
    [/pedant]

    I like hierarchical Bayesian models though.

  92. Magma,
    I think that one factor is that the reduction in the weaker storms almost balances the increase in damages due to the stronger ones. In this case, the total damage signal doesn’t emerge for quite a long time, but we would almost certainly (I think) notice a difference in the distribution of the damages. A world in which the damage is due to a combination of categories 3, 4 and 5 is quite different to one in which it is mostly 4s and 5s (even if the total is roughly the same).

  93. Dave_Geologist says:

    they tried priors for θ in their Binomial event model of θ₀ of 0.5, 0.6, and 0.75

    Is that what they did ecoquant? I thought that 0.5, 0.6 and 0.75 represented realisations of the “real world”, i.e. objective reality, and they compared how long it would take to demonstrate (or not), at a chosen confidence limit, that the frequentist null hypothesis of 0.5 was wrong, vs. how long it would take a Bayesian approach, with a prior centred on 0.5, to converge on the “real-world” value.

  94. Magma says:

    I think that one factor is that the reduction in the weaker storms almost balances the increase in damages due to the stronger ones — ATTP

    See, that’s where my intuition would kick in and question why there would be fewer weak hurricanes rather than an overall increase in all categories, and wonder if there is a problem with the model or basic underlying hypotheses. But of course intuition is only a guide.

  95. Joshua says:

    angech –

    Re hurricanes and 30 years. Well known with cyclones Southern Hemisphere.

    Do you have a link? In particular, one that controls for the impact of AGW, aerosols, and other exogenous variables?

    Do you have a link that speaks to 30-year averages for repeat strikes?

    What do you think the odds are that you’ll discover a clear periodicity from idle speculation that just happened to slip past people who study this shit day in and day out for decades?

    Methinks there might be another pattern in play here.

  96. ecoquant says:

    @Dave_Geologist,

    The exact quote is:

    For the purpose of the analysis procedure, we generated via Monte Carlo simulations of a binary-valued process of length N_{\text{max}} = 64 for both (a) the unbiased case \theta_{0} = 0.5, (b) the modestly biased case \theta_{0} = 0.6, and (c) the strongly biased case \theta_{0} = 0.75. The latter two cases correspond to a 20% higher and 50% higher likelihood, respectively, of active (B+) years vs. inactive (B−) years. Given, for example, that the rate of record-breaking warmth has doubled (i.e., exhibited a 100% increase) over the past half century (Meehl et al. 2007), our use of a 20 and even 50% increase is, at least for some extreme weather phenomena, conservative.

    For each experiment, we performed parallel frequentist and Bayesian estimation of expected values for increasingly large subsets of the data series of length N_{\text{tot}}, iteratively refining our estimates of the posterior distribution and bias (b = \theta_{0} - 0.5). We performed six sub-experiments that consist of using the first N_{\text{tot}} = 2, 4, 8, 16, 32, and 64 years of the total of N_{\text{max}} = 64 years of data for each site. These six experiments introduce, sequentially N = 2, 2, 4, 8, 16, and 32 new years of data, respectively. For each set, we computed the expected (Ñ+) number of active years based on updated estimates of \theta_{0} derived from the posterior distribution of the previous experiment.

    I might need to raise my Pedant Flag, as @dikranmarsupial did earlier, but, first off, the odds of extreme weather with \theta = 0.6 are 3::2 and with \theta = 0.75 they are 3::1. I don’t know what they mean by “The latter two cases correspond to a 20% higher and 50% higher likelihood.” Second, my point was, as I wrote, instructive. MLO2017 are playing into the claims of “Bayesian subjectivity” per Lewis 2013 by using a non-informative prior. Third, to your question, I do not know exactly what they did. MLO2017 claim \frac{P(B|A) P(A)}{P(B)} is the likelihood function. It is not. The likelihood is just P(B|A). MLO2017 can be forgiven because the terminology is confusing. That’s why, in fact, P(B|A) is sometimes called, instead, the sampling density. In general, the rule is

    \text{posterior} \propto \text{likelihood} \times \text{prior}

    While P(A) is, indeed, the prior, I have no idea how they arrived at P(B). They don’t say. So I don’t know what they did.

    Normally, P(B) is called the evidence and, as a normalization constant, it can be tough to calculate. Fortunately, if a compound posterior with a conjugate prior is used, that’s not necessary. It’s also not necessary if something like Markov Chain Monte Carlo, or relative likelihood, or Bayes factors are used because, in the first case, regions of high probability mass of the posterior are just all scaled by the same P(B), so it doesn’t matter, and in the latter two cases it cancels out.

    Sure,

    P(B) = P(B|A) P(A) + P(B|\bar{A}) P(\bar{A})

    but how would they calculate P(B|\bar{A})?

    It’s a jumble. In fact, my treatment of MLO2017 above was rather charitable. Since you questioned, I ceased being charitable.

    You can’t really set a prior like \theta_{0} = 0.6 without stating how much weight ought to be associated with it. So, in the case of a Beta-Binomial posterior a prior of \theta_{0} = 0.6 might be set having \alpha = 3, \beta = 2 or \alpha = 18, \beta = 12. The prior is sharper in the latter case. So, when they write they tested a point as a prior, like \theta_{0} = 0.5, I don’t really understand what they mean.

    I’m guessing that what was done was that they iterated Bayes Theorem as

    P(A|B)_{n+1} = \frac{P(B|A) P(A|B)_{n}}{P(B)},

    where P(A|B)_{0} = P(A), and they did this for each of the M = 100 sites. I also presume that when MLO2017 say they tried a uniform prior P(A), what they mean is that

    P(A|B)_{1} = \frac{P(B|A) P(A|B)_{0}}{P(B)} = \frac{P(B|A) P(A)}{P(B)} = \frac{P(B|A)}{P(B)}

    But, since the Beta Binomial is available in closed form, and the process generating events is a Binomial with known \theta values, it’s possible to calculate what the Beta Binomial will estimate for each of their N \times M cases, where N \in \{2, 4, 8, 16, 32, 64\} as they seem to do.

    One thing is clear, though: While MLO2017 do use Bayes Theorem in an iterative way to obtain an estimate of probability of a Binomial event, it is highly misleading, in my opinion, to label their approach a Bayesian decision analysis of their synthetic data. What’s missing is that \theta should be estimated from whatever data there is and, prior to the analysis, choice points set (using some kind of loss function, perhaps an asymmetric one) for what \theta needs to be in order for change to be “detected”, fully considering the variance in the estimate of \theta so obtained. Doing Bayesian work means putting criteria for acceptance on parameters. Parameters are the random variables. It isn’t some distance or metric placed upon the observed number of events versus predicted. That’s squarely Frequentist.

    Not sure if this means anything to anyone, but I tried it out to see how Beta-Binomial might compare to MLO2017’s Figure 2. I used the following code with amendments to it for appropriate cases:


    library(xtable)   # used at the end to format the results table

    # Beta prior on the per-year probability of an "active" year at a site.
    # alpha = 300, beta = 200 is a fairly sharp prior centred on 0.6.
    alpha <- 300
    beta  <- 200

    M <- 100   # number of sites

    thetaSim <- c(0.5, 0.6, 0.75)        # true "active year" probabilities, as in MLO2017
    Ntot     <- c(2, 4, 8, 16, 32, 64)   # numbers of years of data used
    N.c      <- length(thetaSim) * length(Ntot)

    tabulation <- data.frame(
      N = rep(NA, N.c),
      b = rep(NA, N.c),
      N.events.per.site = rep(NA, N.c),
      posterior.mean.number.of.events.per.site = rep(NA, N.c),
      posterior.SD.number.of.events.per.site = rep(NA, N.c),
      posterior.mean.event.rate.overall = rep(NA, N.c),
      posterior.SD.number.of.events.overall.as.fraction.of.Ntot = rep(NA, N.c),
      stringsAsFactors = FALSE
    )

    K.t <- 0
    for (thetaS in thetaSim) {
      for (N in Ntot) {
        # Per-site case: one year's worth of events across the M sites
        N.events    <- round(thetaS * M)
        N.nonevents <- M - N.events
        alphaPrime  <- 1 + N.events + alpha
        betaPrime   <- 1 + N.nonevents + beta
        # Beta-Binomial posterior predictive mean and SD of the number of events per site
        posteriorMean.numberOfEvents <- M * alphaPrime / (alphaPrime + betaPrime)
        posteriorVariance.numberOfEvents <-
          M * alphaPrime * betaPrime * (alphaPrime + betaPrime + M) /
            ((alphaPrime + betaPrime)^2 * (alphaPrime + betaPrime + 1))
        posteriorSD.numberOfEvents <- sqrt(posteriorVariance.numberOfEvents)

        # Overall case: pool all N years across the M sites
        N.events.overall    <- round(thetaS * M * N)
        N.nonevents.overall <- (M * N) - N.events.overall
        alphaPrimeOverall   <- 1 + N.events.overall + alpha
        betaPrimeOverall    <- 1 + N.nonevents.overall + beta
        posteriorMean.event.overall <- alphaPrimeOverall / (alphaPrimeOverall + betaPrimeOverall)
        posteriorVariance.numberOfEvents.overall <-
          N * M * alphaPrimeOverall * betaPrimeOverall * (alphaPrimeOverall + betaPrimeOverall + M * N) /
            ((alphaPrimeOverall + betaPrimeOverall)^2 * (alphaPrimeOverall + betaPrimeOverall + 1))
        posteriorSD.numberOfEvents.overall <- sqrt(posteriorVariance.numberOfEvents.overall)

        K.t <- K.t + 1
        tabulation$N[K.t] <- N
        tabulation$b[K.t] <- thetaS
        tabulation$N.events.per.site[K.t] <- N.events
        tabulation$posterior.mean.number.of.events.per.site[K.t] <- posteriorMean.numberOfEvents
        tabulation$posterior.SD.number.of.events.per.site[K.t]   <- posteriorSD.numberOfEvents
        tabulation$posterior.mean.event.rate.overall[K.t]        <- posteriorMean.event.overall
        # SD (not variance) as a fraction of the N*M site-years, to match the column name
        tabulation$posterior.SD.number.of.events.overall.as.fraction.of.Ntot[K.t] <-
          posteriorSD.numberOfEvents.overall / (N * M)
      }
    }

    # Not in the original comment: one way to format the tabulation (xtable was loaded above)
    print(xtable(tabulation, digits = 3))

    I also don’t know if I reproduced the intent of MLO2017 accurately:

  97. ecoquant says:

    @Dave_Geologist,

    Oops, while I tried that all out before posting it, I forgot that the routine for posting images is different in WP comments here than in the HTML of a blog post. Here’s the end of that post above:

  98. Willard says:

    If comments could be comments and not overly long blog posts, that’d be great.

    ***

    Meanwhile, to connect the topic of AT’s post with what’s going on in the world:

  99. [You’ve been warned. -W]

  100. Dave_Geologist says:

    Thanks ecoquant. I’m not a statistician, let alone a Bayesian one, so I was getting a bit tangled up in terminology and expression. I had misread you as saying they’d used an informative prior of 0.6 or 0.75, which is (potentially) an unrealistic version of the real world because it requires us to know the answer already. I say potentially because for some extreme events, for example heatwaves, we know the real-world answer pretty well already from the local inter-annual distribution of temperatures, the increase in mean global temperature, and the magnitude of its perturbation by internal variability such as ENSO. “MLO2017 are playing into the claims of “Bayesian subjectivity” per Lewis 2013 by using a non-informative prior” is equivalent to what I was trying to say.

    I look at it as playing with loaded dice (perhaps a frequentist approach though 😦 ). If you know the die is numbered 2 through 7 or 3 through 8, how long would it take you to detect (a) that it’s not numbered 1 through 6 and (b) what the actual numbers are (or at least the mean) using frequentist vs. Bayesian updating. I would favour using informative priors where there is solid physics behind it. If it’s meant to be relevant in the real world, and not a Martin Gardner-style mathematical diversion, why tie one hand behind your back? But then lukewarmers would jump down my throat and accuse me of being subjective. I would retort that they’re only championing non-informative priors because it allows them to ignore all the physical evidence which refutes their case. And calling them “objective” for polemical reasons. You could test an informed prior in a similar way to MLO17 (or do it better/right) by using priors centred on 0.6 and 0.75 and seeing how quickly it converged on the right value, including in the 0.5 case. Which is what L13 should have done IMO. Even though he was quite explicit about ignoring all evidence not contained in his observational dataset, if it is to have any meaning in the real world (and if he thinks it should just be an academic parlour game, why all the social media activity?), he owes it to the readership to show what the consequences of that decision are.

    The 20% and 50% is, I think, just another way of saying the same thing as 3:2 or 3:1. If you take a baseline of 50% active years, a 20% increase gives 0.5 x 1.2 = 0.6 (odds of 0.6:0.4, i.e. 3:2), and a 50% increase gives 0.5 x 1.5 = 0.75 (odds of 0.75:0.25, i.e. 3:1).
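
    To put rough numbers on the loaded-die question, here is a minimal sketch in R (my own toy, not MLO2017’s actual code) of how quickly Beta-Binomial updating converges under a flat prior versus informative priors centred on 0.6 and 0.75. The true active-year fraction of 0.6 and the pseudo-sample weight of 20 are illustrative assumptions only:

    # Toy Beta-Binomial updating: flat prior vs informative priors centred on
    # 0.6 and 0.75, for an assumed true active-year fraction of 0.6.
    set.seed(42)
    trueRate <- 0.6                                  # assumed, for illustration
    active   <- rbinom(100, size = 1, prob = trueRate)

    # Beta(a, b) priors; the informative ones carry a pseudo-sample of 20 "years"
    priors <- list(flat       = c(1, 1),
                   centred.60 = c(0.60*20, 0.40*20),
                   centred.75 = c(0.75*20, 0.25*20))

    for (nm in names(priors)) {
      a <- priors[[nm]][1] + cumsum(active)          # sequential posterior updates
      b <- priors[[nm]][2] + seq_along(active) - cumsum(active)
      postMean <- a/(a + b)
      cat(sprintf("%-10s posterior mean after 10, 50, 100 years: %.3f %.3f %.3f\n",
                  nm, postMean[10], postMean[50], postMean[100]))
    }

    Re-running with trueRate set to 0.5 shows how quickly (or slowly) each prior recovers when its centre is wrong, which is the kind of test I’d want reported.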

  101. ecoquant says:

    @Dave_Geologist,

    The trick is, what do you mean by detect? The idea of a test against a significance level of 0.05 or 0.01 or, for that matter, 0.001 is completely indefensible. I tried to make the point, through an extended exposition, that throwing “subjectivity” at Bayesian analyses is an old and tiresome red herring, as long as the analyses are properly done. My point about MLO2017 was that not every study invoking Bayes’ Theorem or Rule is automatically Bayesian.

    Making decisions using Bayesian or frequentist methods is nothing new. But it can’t be done properly unless the costs of being wrong in either direction are included as weights. People have different assessments of those costs, and these are almost intrinsically political (in the good sense of the word, per Aristotle). Accordingly, the Bayesian approach is to calculate and publish the entire posterior density from a calculation, and let the audience figure out, using their own loss functions, what it means for them. The old Frequentist approach, among its many other faults, calls the question too early, and short-circuits people’s ability to apply different losses.
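
    As a minimal sketch of what I mean (the posterior, the costs and the scenario below are all made up for illustration), the same published posterior can lead two audiences with different loss functions to opposite decisions:

    # One posterior, two loss functions, two decisions (all numbers illustrative).
    set.seed(1)
    thetaPost <- rbeta(10000, 30, 20)   # stand-in posterior for an event rate

    # Acting costs a fixed amount; doing nothing costs something proportional
    # to the event rate. Expected losses are averaged over the posterior.
    expectedLoss <- function(costAct, costPerEventIfIdle, theta) {
      c(act = costAct, idle = mean(costPerEventIfIdle*theta))
    }

    print(expectedLoss(costAct = 10, costPerEventIfIdle = 100, theta = thetaPost))  # acting has the lower expected loss
    print(expectedLoss(costAct = 10, costPerEventIfIdle = 12,  theta = thetaPost))  # doing nothing has the lower expected loss

    Which of those two rows describes you is a political question, not a statistical one, which is exactly why calling the question early with a fixed significance threshold throws information away.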

    I call it the “old Frequentist approach” because the comparison in MLO2017 with the pseudo-Bayesian approach is done using an early 20th-century framework. A lot of great work has been done in the non-Bayesian world since then, too, and thinking that hypothesis tests and ROC curves are where the story of stats ends means people have really not been paying attention to the literature. I mean, few people write as if they know about the James-Stein phenomenon (see also), which was published in 1961. I was shocked to realize that Prof James Berger’s book on Bayesian decision analysis (which, incidentally, I was taught from, but I did not know about James-Stein at the time) dismisses James-Stein as essentially unimportant. For people who think pursuing the link is TL;DR, the James-Stein theorem says maximum likelihood estimation (“MLE”) does not give the least-error estimate, at least for systems having 3 dimensions or greater.
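
    For anyone who would rather see it than read the proof, here is a minimal simulation sketch (my own, with an arbitrary 10-dimensional mean) of the MLE being beaten by James-Stein shrinkage on total squared error:

    # James-Stein vs MLE for a d-dimensional normal mean, d >= 3.
    set.seed(7)
    d    <- 10
    mu   <- rnorm(d)          # arbitrary "true" mean vector
    reps <- 5000

    sqErrMLE <- sqErrJS <- numeric(reps)
    for (r in 1:reps) {
      x  <- rnorm(d, mean = mu, sd = 1)      # one noisy observation per component
      js <- (1 - (d - 2)/sum(x^2))*x         # shrink the MLE towards zero
      sqErrMLE[r] <- sum((x - mu)^2)
      sqErrJS[r]  <- sum((js - mu)^2)
    }
    cat("mean total squared error, MLE:        ", mean(sqErrMLE), "\n")
    cat("mean total squared error, James-Stein:", mean(sqErrJS), "\n")

    The shrinkage estimator comes in below the MLE (whose average error is around d = 10), and the theorem says it does so for every mu, although the gap shrinks as mu moves away from the shrinkage target. That is the whole surprise.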

    James-Stein turned statistics on its head. Arguing “objective” NPHT vs “subjective” Bayes theorem is really rinky dink.

    Enough from me. I’m unsubscribing to this comment thread.

  102. Magma says:

    Enough from me. I’m unsubscribing to this comment thread. — ecoquant

    That’s a shame. I found your posts densely written but/and very interesting.

  103. dikranmarsupial says:

    Magma, seconded.

  104. Dave_Geologist says:

    Thirded. I was trying to learn, not to criticise.

    And I do like “Making decisions using Bayesian or frequentist methods is nothing new. But it can’t be done properly unless the costs of being wrong in either direction are included as weights”. To me it’s equivalent to what I was taught decades ago: the first question you should ask in any analysis is “what’s it for?”. Not in the sense of consciously biasing your results to get the answer you want, but in the sense that questions like threshold effect size and significance/confidence/attribution/whatever-you-want-to-call-it should influence how you pose the question and interpret the answer. The reason planes don’t crash and oil rigs don’t blow out every other day is that people spend their entire careers taking steps to ensure that really, really unlikely things don’t happen. They don’t say “I’ll wait until I’m very confident the bad thing will happen before I change things”, far less “I’ll wait until I’m really certain”. The Precautionary Principle is a version of that. As is buying fire insurance. But not at a premium of £50k per year, thank you very much. And yes, people will have different views. Some will argue that taxes and government controls are such a bad thing that we shouldn’t do that, even if it forestalls the possibility that a million people will die or that the world economy will collapse. To them, the cost of needlessly interfering with the free market is greater than the cost of a million deaths. They’ll probably tell themselves they’re not bad people because the free market has dragged billions out of poverty and thus indirectly saved millions of lives. I, OTOH, would say we can’t afford to wait hundreds of years to be confident (for your preferred definition of confidence, or preferred alternative word) that global warming won’t supercharge the hydrological cycle in ways that make much of the world unlivable and unfarmable.

    I think it is difficult in practice, however, to say: just print the PDF and let each member of the audience make their own decision. Many will make the wrong decision because they don’t understand what they’re looking at. Expert guidance is required, perhaps from the person who did the calculation.

  105. ecoquant says:

    @Dave_Geologist, @dikranmarsupial, @Magma,

    Thanks much for the voices of support.

    I think it is difficult in practice, however, to say: just print the PDF and let each member of the audience make their own decision. Many will make the wrong decision because they don’t understand what they’re looking at. Expert guidance is required, perhaps from the person who did the calculation.

    I thought that’s what representative democracy was all about …. 😉

  106. Dave_Geologist says:

    Trump, Brexit…. I rest my case 😦 .

  107. hyper,
    Are you interested in putting together a post about what you’ve been discussing?

  108. Pingback: Disasters and Climate Change – part 2 | …and Then There's Physics
