Disasters and Climate Change – part 2

Since I have a bit of free time, I thought I would expand a little on my review of Roger Pielke Jr’s book about disasters and climate change. As I mentioned in my earlier post, there were a number of things I disagreed with, so I thought I would elaborate on those here.

One thing that should be stressed is that what the book was mostly highlighting is that there is no indication that trends in disaster losses are due to human-caused climate change. This does not mean that we have not been able to attribute changes in some extreme events to human-caused climate change, because we have; the book is focusing on trends in disasters, not trends in the extreme events themselves.

Of course, that we may not be able to detect a trend in disaster losses that is due to human-caused climate change, does not mean that there is no such trend. However, the book argues that a signal that may exist, but which cannot be detected, is indistinguishable from a signal that does not exist. The book points out that God, aliens, and celestial teapots are also examples of things for which we have no evidence, but that we might want to believe do exist.

The problem, though, is that climate change is clearly happening and is predominantly being driven by our emission of greenhouse gases into the atmosphere. It’s already changing the conditions associated with extreme events and, in some cases, we’ve even detected a climate-change-related influence on these events. It may well be that other factors are dominating trends in disaster losses, but it would be remarkable if climate change was having no impact at all. I don’t think that this means that we should assume that human-caused climate change has contributed to some of the trend in disaster losses, but it does – in my view – mean that we should be cautious of assuming that the lack of a detectable trend means that there is no trend. Even if we can’t detect something now, it seems very likely that we will in the future if we continue to dump CO2 into the atmosphere.

The other thing I was going to discuss was the argument against single event attribution. The suggestion is that this abandons the IPCC framework, which involves detecting trends over climatologically relevant timescales, and then trying to establish if anthropogenically-driven climate change was a cause of this trend. The IPCC, however, is simply an organisation that produces synthesis reports; it doesn’t – as far as I’m aware – have any mandate to specify appropriate scientific methodology. Also, single event attribution is an entirely reasonable thing to do. You consider the conditions associated with an extreme event, try to determine if these conditions could have been influenced by human-caused climate change and, hence, how this may have influenced the extreme event itself. Arguing against this is essentially arguing against doing physics. Patrick Brown has a nice post that briefly discusses this and presents an illustrative video.

As usual, this post is now too long. I wanted to finish by highlighting an earlier post in which I discuss extreme events and anthropogenic emissions and argue that formal attribution is not all that relevant; we don’t really need a formal attribution study to be quite confident that our emission of greenhouse gases into the atmosphere will probably influence extreme events. Understanding how it will do so, and the likely impact, clearly is important, but that’s somewhat distinct from demonstrating an anthropogenic cause.

Links:
Signal, Noise and Global Warming’s Influence on Weather – post by Patrick Brown.
Extreme events and anthropogenic emissions – earlier post about attributing trends in disaster losses to anthropogenic emissions.
Economic losses from US hurricanes consistent with an influence from climate change – a paper by Estrada, Wouter Botzen and Tol, estimating that, in 2005, US$2 to US$14 billion of the recorded annual losses could be attributable to climate change, 2 to 12% of that year’s normalized losses.


75 Responses to Disasters and Climate Change – part 2

  1. Steve Forden has pointed out on Twitter that the last IPCC report actually discussed single event attribution in Chapter 5 of WGI (page 914 onwards and the conclusion on page 917).

  2. Allowing the discussion to be framed in cherrypicked formats or time frames is not smart. When denialists/lukewarmers/merchants of doubt open the conversation without referencing the big picture (global warming is disastrous, human activity is the primary cause, and we are way late in responding to limit damage), should we argue about arcane aspects of AGW in a cherry-picked or limited frame? Are we doing the right thing?

    Pielke seems to be a merchant of doubt. What is his motivation? Who funds his work?

    Don’t mudwrestle with a pig, you both get dirty and the pig enjoys it.

    Call people out for who they are and why they take the positions they take. Insist that discussion start with the big picture before you agree to have a discussion out in the weeds (single event attribution, cost calculations, etc).

  3. small,

    What is his motivation? Who funds his work?

    I don’t know his motivation and don’t care to speculate. I don’t think there are any funding issues here. I very much doubt that Roger is getting funded to specifically promote a particular narrative.

  4. does he start in agreement with you on first principles regarding AGW? If not, and his science is somewhat sound, then he would appear to be a merchant of doubt. Motivation is interesting when we know it, but not essential. Big tobacco did a great job of publicizing/funding the scientific work of the merchants of doubt. It’s the same process and routine. Surely there is a way to make the parallels clear if you are unable to insist that the discussion start with the big picture, right? Do you want to help the merchants of doubt peddle their wares to the general population?

  5. Joshua says:

    small –

    Maybe his motivation is to correctly interpret the science. If so, do we want to help him peddle his wares to the general public?

    These are unanswerable questions, and (IMO) we have no real power to affect RPJr.’s ability to peddle his wares. It is my belief, FWIW, that operating as if we do have such power is just a waste of time.

    I’d rather waste my time trying to learn something from what Anders writes.

  6. Magma says:

    An early classic (von Bortkiewicz, 1898) on the Poisson distribution of small number random discrete events was the study of the numbers of Prussian cavalrymen killed by horse kicks over a twenty-year period.

    Many climate change ‘skeptics’ are at the point of grudgingly acknowledging that horses may exist, but it will take another forty or fifty years to be reasonably sure, and in the meantime let’s not do anything hasty.
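
    For anyone curious, the horse-kick fit is easy to reproduce. A minimal sketch in Python, using the commonly cited 10-corps subset of von Bortkiewicz’s data (figures quoted from secondary sources, so treat them as illustrative):

        # Classic horse-kick counts vs. a Poisson fit with the same mean.
        from math import exp, factorial

        observed = {0: 109, 1: 65, 2: 22, 3: 3, 4: 1}       # deaths per corps-year
        n = sum(observed.values())                          # 200 corps-years
        mean = sum(k * v for k, v in observed.items()) / n  # ~0.61 per corps-year

        for k, obs in observed.items():
            expected = n * exp(-mean) * mean**k / factorial(k)
            print(f"{k} deaths: observed {obs:3d}, Poisson expects {expected:5.1f}")

    The expected counts land within a few units of the observed ones, which is why this became the textbook example for small-number random discrete events.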

  7. izen says:

    At the risk (or certainty!) of prompting a post by Paul/WHUT…

    The incidence of extreme events is clearly modulated by ENSO. From S. American floods, typhoons in Taiwan, and failed Indian monsoons, the correlation and thermodynamic teleconnections are evident. The intensity, duration and position of the ENSO can be tracked in archaeological and geological records. Floods, drought, and famine from ENSO-enhanced events have clearly imposed costs on various civilisations around the world, or contributed to their collapse.

    Does the cost of extreme events track the ENSO cycle? Are there, in any specific locality, more and costlier events in one ENSO state than in another?

    If so then any influence that AGW has on the cycle will have an impact on the cost.

  8. Steven Mosher says:

    if climate change brought no damages and no additional costs, it would not be important.

    is talking about the costs of extreme events verboten now?

    if events increase but costs decrease, does looking at that data make you a merchant of doubt? i think if you find that trends in storm damage cost decrease, this data should be destroyed, since it might be misused and undermines the blue mike agenda.

    sarc of course

  9. This paper that appears to be accepted for inclusion in Physical Review Letters (!) says that global temperature variability has a pink noise spectrum, and that “the recent phenomenon referred to as the ‘global warming hiatus’ may reflect a coupling to an intrinsic, pre-industrial, multi-decadal variability process”.

  10. dikranmarsupial says:

    “does he start in agreement with you on first principles regarding AGW?”

    I’d say he was in accord with mainstream science on the basics, set out in pages 25 and 26 of his book. His problem IMHO lies with being too attached to the IPCC’s detection framework, and with a fairly poor grasp of the fundamentals of statistics (it is quite common in the sciences for people to be very good at applying recipes from the statistics cookbook to the kinds of statistical problems they face, without really understanding the underlying principles).

    “a signal that may exist, but which cannot be detected, is indistinguishable from a signal that does not exist.” However, just because you can’t distinguish between the two situations doesn’t justify asserting the signal doesn’t exist. While Occam’s razor would argue against its existence, we are not in a situation where we have no prior knowledge – there is plenty of physics that tells us to expect a signal. Also, self-skepticism means that if you are arguing against a link then you can’t just rely on Occam’s razor (as that is assuming you are right a priori). I recently finished reading Max Tegmark’s “Mathematical Universe”, which makes a similar argument to say that the universe actually is a mathematical structure, because anything that is isomorphic to a mathematical structure cannot be distinguished from that mathematical structure, and hence *is* that mathematical structure. It is good to have imagination in science, as long as you don’t let it run amok! ;o)

    As I showed on the previous thread, if you are unable to detect a signal with a test that you would not expect to detect the signal even if it were there, that tells you almost nothing about the existence of the signal. This is simply a failure of skepticism/reasoning on Prof. Pielke Jr’s part.

    Q. When is an absence of evidence evidence of absence?
    A. When the Bayes factor is not approximately one.
    A. When you confidently expect to find evidence when you look* for it.

    * in the way that you looked for it.
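
    To make the power point concrete, here is a minimal simulation sketch (all numbers hypothetical, not calibrated to any real loss series): a weak but real trend, a noisy 30-year record, and a standard regression test.

        # How often does a 5%-level trend test detect a real but weak trend?
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        years = np.arange(30)                        # a 30-year record
        true_slope, noise_sd, trials = 0.05, 5.0, 10_000

        detections = 0
        for _ in range(trials):
            y = true_slope * years + rng.normal(0, noise_sd, years.size)
            detections += stats.linregress(years, y).pvalue < 0.05

        print(f"power = {detections / trials:.2f}")  # well under 0.5 here

    With power that low, “no significant trend” is the expected outcome whether or not the trend exists, i.e. the Bayes factor is close to one and the non-detection carries almost no evidential weight.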

  11. angech says:

    Gavin Cawley
    in general. There is a false dilemma on page 34. The “yes” answer is O.K., but if there is insufficient evidence for a link between disasters and CC, that doesn’t mean there isn’t a link. This is a common misunderstanding of hypothesis tests https://www.skepticalscience.com/statisticalsignificance.html
    so that doesn’t justify the answer “no”. In reality there are three answers (if discretisation is necessary): “Yes”, “No” and “Insufficient evidence either way”. In reality, we should do as Hume suggests and apportion our belief according to the evidence.
    While we cannot absolutely prove anything, there comes a point where we have to use a bit of common (statistical) sense. I’ll end with a question for @RogerPielkeJr: Would you agree that the “apparent hiatus” in GMSTs since around 1998 is entirely explainable by ENSO?

  12. This is the precursor paper to the recent PRL paper linked to. The model is a combination of applying phase-synchronized seasonal forcing with stochastic elements to understand natural climate variability.
    A unified nonlinear stochastic time series analysis for climate science

    This research is really walking a tightrope in distinguishing signal from noise. Based on their model, they are interpreting how the climate varies as a “competition between the orbitally enforced monthly stability and the fluctuations/noise induced by weather.”

    Lots of neat math included that touches on Mathieu equations (although they do not recognize it) and on Ornstein-Uhlenbeck random walk.
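
    For readers unfamiliar with the Ornstein-Uhlenbeck ingredient, a minimal sketch (toy parameters, purely illustrative): dx = -lam*x dt + sigma dW, whose autocorrelation should decay as exp(-lam*tau).

        # Euler-Maruyama simulation of an Ornstein-Uhlenbeck process.
        import numpy as np

        rng = np.random.default_rng(0)
        lam, sigma, dt, n = 1.0, 0.5, 0.01, 200_000

        x = np.zeros(n)
        for i in range(n - 1):
            x[i + 1] = x[i] - lam * x[i] * dt + sigma * np.sqrt(dt) * rng.normal()

        lag = int(1.0 / dt)                          # tau = 1
        acf = np.corrcoef(x[:-lag], x[lag:])[0, 1]
        print(f"ACF at tau=1: {acf:.2f}, theory {np.exp(-lam):.2f}")

    Moon and Wettlaufer’s models are essentially this building block with a seasonally modulated lam(t), which is where the Mathieu/Hill structure comes in.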

  13. dikranmarsupial says:

    @angech please do not involve me in your trolling. Troll other people if you absolutely must, but leave me out of it.

  14. Hyperactive Hydrologist says:

    What happens if, when the signal appears from the noise, the potential impacts are worse than we thought?

    https://www.nature.com/articles/s41558-018-0245-3

    In contrast, changes in the magnitude of hourly rainfall extremes are close to or exceed double the expected CC scaling, and are above the range of natural variability, exceeding CC × 3 in the tropical region (north of 23° S). These continental-scale changes in extreme rainfall are not explained by changes in the El Niño–Southern Oscillation or changes in the seasonality of extremes. Our results indicate that CC scaling on temperature provides a severe underestimate of observed changes in hourly rainfall extremes in Australia, with implications for assessing the impacts of extreme rainfall.

  15. ” hourly rainfall extremes are close to or exceed double the expected CC scaling”

    Both rainfall averages and extremes are fat-tail statistics compared to equivalent temperatures. Especially for extremes, that means that years can go by without seeing a consistent amount of extreme rainfall. In other words, the variance may not appear stationary for an assumed narrow-tail distribution, but would appear reasonable for a fat-tailed one. Also, I didn’t see any mention of extreme value theory analysis in the paper.

  16. Lerpo says:

    the book argues that a signal that may exist, but which cannot be detected, is indistinguishable from a signal that does not exist.

    Possibly indistinguishable, but not equivalent. Please correct me if I’m wrong here. Even if we cannot detect it, the signal will have made things worse for some people. Possibly it will have pushed us to the high end of natural variability where we could have enjoyed the low end. This may be a very significant difference as natural variability probably ranges from $0 of weather related disaster in a year to hundreds of billions in a year. Obviously it’s much better to be on the low side.

  17. Hyperactive Hydrologist says:

    Paul,

    The methods used are similar to this Fischer and Knutti paper from a couple of years ago. However, the analysis has been applied not only to daily but also to sub-daily rainfall. Sub-daily rainfall is poorly represented in climate models due to the requirement to parameterise convection, and sub-daily rainfall records tend to be much shorter with poor spatial distribution. As a result, potential changes in sub-daily rainfall due to climate change are poorly understood. This is the first paper to demonstrate changes in sub-daily rainfall greater than previously observed or expected. A 3x CC scaling gives potential increases of about 20% per degree C of warming.

  18. My general issue with the Pielke approach is that he takes an existing trend that shows damages from extreme events are growing, shows that they are not growing faster than GDP growth, and concludes therefore that there is no space for climate change.

    Of course, if there are detectable trends in some of the events (e.g., extreme precipitation events), and if these events cause damages, then it would be very odd for the trend in these events NOT to cause a trend in damages.

    The obvious potential explanation is that we shouldn’t expect the trend in damages to grow as fast as GDP: whether because of better building codes, or because more of the GDP is going into intangibles that aren’t susceptible to extreme event damage, or whatever.

    One obvious way to explore this would be to normalize extreme event damages not against GDP, but rather against geophysical damages like earthquakes, where one might presume that factors such as improvements in building codes or shifting from tangible to non-tangible wealth would have similar effects on earthquake damage as on hurricane damage. But despite publishing on earthquakes and extreme events separately, I see no evidence that Pielke has tried to do this: he gets his “no climate signal” answer and stops. In contrast, Figure 5 of https://www.sciencedirect.com/science/article/pii/S2212094715300347 seems to show that there is a marked difference in the trend in losses from climatological extremes vs. geophysical events.

  19. The Guardian is reporting on Hurricane Florence. Here are a couple of quotes from that piece:

    “The primary fuel for hurricanes is a warm sea surface, which is getting warmer with climate change,” said Dr Kelly McCusker, a climate scientist at the independent economic research firm Rhodium Group. “While we can’t attribute this hurricane solely to climate change, we do expect these types of intense hurricanes to happen more often as the world warms.”

    and:

    Hurricane Florence has developed into a major storm over extremely warm water, Ginis said

    “That’s not necessarily connected to global warming, but that’s an indication of what we might see in the future more often,” he said.

    https://www.theguardian.com/world/2018/sep/13/monster-storm-hurricane-florence-is-a-rare-threat-in-an-unusual-location?utm_source=esp&utm_medium=Email&utm_campaign=GU+Today+USA+-+Collections+2017&utm_term=285532&subid=11249832&CMP=GT_US_collection

    In both cases, the experts used defensive language framing dictated by the merchants of doubt who will attack unequivocal language about climate disasters and AGW.

    I know that scientists with a lot of knowledge and integrity have incorporated this defensive framework for discussion of climate disasters, and their quotes are scientifically and mathematically accurate, but non-scientists/Joe SixPack will read these statements and their takeaway will be that the scientists aren’t sure about global warming.

    We will get Trump types elected to office until scientists learn not to play the game dictated by the merchants of doubt. There is another frame available for these discussions and it starts with an unequivocal: “This storm is what climate change looks like. This is the kind of storm that climate change science predicted. The risk of this kind of storm has increased and will continue to increase until we lower the CO2 in the atmosphere and oceans.”

    When the merchants of doubt show up with the causation question, brush it off with: “Causation questions are the wrong questions to ask when trying to understand global warming and climate disasters, the correct question is about the increased risk of this kind of storm. Climate change increases the risk of this kind of storm. Hurricane Florence is what global warming looks like and we are driving global warming with our CO2 emissions.”

    Keep it simple so Joe and Jane SixPack get the message. Don’t play the game established by the merchants of doubt. It is designed to create confusion in the general population and the design is very effective.

  20. Marco says:

    Thanks, climatemusings! You made the point I wanted to make, but couldn’t. I kept on deleting my incoherent rambling, but now I no longer need to do so :-).

  21. HH said:
    ” This is the first paper to demonstrate changes in sub-daily rainfall greater than previously observed or expected.”

    Our forthcoming book will describe a model of rainfall distribution that follows a BesselK function, which derives from first-principle Maximum Entropy considerations. This is fat-tail, with the extreme events well into low-probability territory. Having something like a 3x scaling appear by chance is entirely possible given the rarity of the events.
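
    As a hedged sketch of how a BesselK fat-tail can arise (one standard construction, not necessarily the book’s actual derivation): if the rainfall in an interval is the product of two independent exponential variates, say an intensity factor and a duration factor, its density is 2*K0(2*sqrt(x)) for unit means.

        # Monte Carlo check that a product of exponentials has a K0 density.
        import numpy as np
        from scipy.special import k0

        rng = np.random.default_rng(1)
        z = rng.exponential(1.0, 1_000_000) * rng.exponential(1.0, 1_000_000)

        hist, edges = np.histogram(z, bins=np.linspace(0.1, 8, 40), density=True)
        mids = 0.5 * (edges[:-1] + edges[1:])
        theory = 2 * k0(2 * np.sqrt(mids))
        print(np.round(np.abs(hist - theory).max(), 3))   # small residual

    The tail of this density falls off far more slowly than a Gaussian, so occasional extreme values well beyond the bulk are expected rather than anomalous.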

  22. Okay, maybe we can not pile on. Angech, I don’t understand your previous comment. It seems to be a quote from Dikran, but you didn’t make that clear. Please try to indicate what is your comment and what is a quote from someone else (it would also be nice if you didn’t continually repeat things that have been debunked many times).

  23. Hyperactive Hydrologist says:

    Paul,

    Maybe you should actually read the paper before you comment.

    Also, how have you merged the hourly rainfall data for Iowa? Have you used a pooling method? And what is the resolution of the raw data?

  24. I’ve read it and am in complete agreement with the researchers that a Clausius-Clapeyron activation energy acting on at most a 1C change won’t change the water-holding capacity much. So a plausible alternative is the high variance of fat-tail statistics for extreme events explaining the outliers. I’m just not that impressed with the certainty that they are implying.

  25. izen says:

    A significant component of the impact of AGW on extreme events will be indirect. It will be the way it changes the incidence, duration and intensity of the ENSO quasi-cycle.

    http://www.aoml.noaa.gov/hrd/Landsea/lanina/

  26. Hyperactive Hydrologist says:

    They are using spatial aggregation at continental scales so the high variance is unlikely to explain it. They are also sampling the full tails of the observed data and bootstrap the time series and re-sample it to determine whether it can result from internal variability. The grey area in the figure below is the 99% CI and the observed hourly (b) is well outside this range.

  27. Hyper,
    What’s the difference between the left and right figures?

  28. Is that 99% CI for normal statistics or for fat-tail statistics?
    Taleb says that for the latter, the “very definition of inference and confidence interval goes out of the window.”

    What is more interesting about monsoon rainfalls is the strong biennial cycle between Australian and Indian monsoons. The biennial pattern has modulated over the years, so what impact does this have on their model of extreme events?

    A biennial factor is also key to ENSO.

  29. Hyperactive Hydrologist says:

    aTTP,

    a is daily and b hourly. The figure description:

    Fig. 1 | Changes in the magnitude of daily and hourly rainfall for different definitions of extreme rainfall. a,b, Changes (differences) in the magnitude of daily (a) and hourly (b) rainfall for different K-largest values. Changes are shown for: observations (blue line), expected changes based on CC scaling (red dashed line), double the expected changes based on CC scaling (CC × 2; red dot-dashed line) and expected changes from triple the CC scaling (CC × 3; red dots). The shaded grey area shows the changes expected due to internal variability using a 99% confidence interval (calculated using a bootstrapping technique on the observed dataset; that is, time series were randomly reshuffled, with replacement and retaining spatial correlation, then divided into two sub-periods and the changes were calculated: this procedure was repeated 1,000 times). All changes are shown as spatial means of 107 gauge records across Australia during 1990–2013 and 1966–1989. The labels in the figures show the lower end of the bins (that is, K-largest 20 (K20) shows the mean change of the 20 largest values; K40 shows the mean change of the values K21–K40; and so on).
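
    As a rough sketch of that reshuffling test, on synthetic data rather than the paper’s 107 gauges, something like the following captures the idea: reshuffle whole years with replacement so spatial correlation is retained, split into two sub-periods, and build a null distribution for the change in the K-largest means.

        # Bootstrap null distribution for the change in K-largest rainfall means.
        # Hypothetical gauge array and fake Gumbel data, for illustration only.
        import numpy as np

        rng = np.random.default_rng(2)
        n_years, n_gauges, K = 48, 100, 20
        data = rng.gumbel(20, 5, size=(n_years, n_gauges))   # fake annual maxima

        def k_largest_change(d):
            first, second = d[: n_years // 2], d[n_years // 2 :]
            top = lambda block: np.sort(block, axis=0)[-K:].mean()
            return top(second) - top(first)

        observed = k_largest_change(data)
        null = np.array([
            k_largest_change(data[rng.integers(0, n_years, n_years)])
            for _ in range(1000)          # 1,000 reshuffles, as in the caption
        ])
        lo, hi = np.percentile(null, [0.5, 99.5])            # 99% interval
        print(f"observed {observed:.2f}, 99% null interval [{lo:.2f}, {hi:.2f}]")

    On stationary synthetic data like this the observed change falls inside the interval; the paper’s point is that the real hourly changes fall well outside it.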

  30. hyper,
    Thanks. I did see an argument recently that maybe the CC scaling only really applies on global scales and that we might expect quite large departures from this on smaller scales. Maybe it’s not all that surprising that there is quite a large departure on a continental scale (and at shortish timescales).

  31. Hyperactive Hydrologist says:

    Agreed. I think CC scaling is more applicable to frontal rainfall systems like we get in the UK in winter. Applied to convective rainfall, from thunderstorms or hurricanes, I think other processes are likely to dominate. However, if there is more moisture available in the atmosphere to feed these types of processes, then the potential increase in the magnitude of these types of events could be significantly greater than current projections from climate models.

  32. dikranmarsupial says:

    “Is that 99% CI for normal statistics or for fat-tail statistics? Taleb says that for the latter, the ‘very definition of inference and confidence interval goes out of the window.’”

    only if you are an incompetent statistician who doesn’t know how to deal with them (IMHO)

  33. Fat-tail statistics often come about from ratio distributions, where a quantity is formed as the ratio of two random variates. Rainfall is one such example, e.g. Amount/Time, where each is a random variate. Intuitively, one can imagine that short rainfall periods are controlled by the denominator, and therefore one can expect much greater variance in the numbers there. And add to that the sparsity, in terms of counting statistics, of K bins of 20.

    I have no idea if RP Jr discusses any of this in his book, but this is what concerns me in these kinds of analyses.

  34. dikranmarsupial says:

    that still doesn’t mean you can’t perform inference or construct confidence intervals (although credible intervals are usually what you actually want).

  35. From the linked entry:

    “Often the ratio distributions are heavy-tailed, and it may be difficult to work with such distributions and develop an associated statistical test. A method based on the median has been suggested as a “work-around” [Brody, Williams, Wold (2002)]”

    The issue is that the mean, standard deviation, and higher moments of a fat-tailed ratio distribution such as a Lorentzian (aka Cauchy) are all undefined. That understandably means that getting a moment-based 99% CI out of the numbers is mathematically impossible; the integrals defining those moments diverge. Yet, even though these well-known statistical moments do not exist for a fat-tail distribution, a median value will converge to a finite value.

    The “work-around” described in Brody et al is to rearrange the median values defined by a ratio distribution so that they show a more Normal (Gaussian) distribution. Then the CI numbers will make sense, since a Gaussian has finite, well-defined moments.

    So I have an idea based on the model of rainfall PDF developed. The fat-tailed BesselK PDF for Iowa rainfall above is based on a single median value. Just thinking about it, I would take the Australian rainfall numbers and see how it fits to a BesselK distribution for various time intervals, and then explore the idea of testing the median value for significance. If the median value changes over time, then the statistical significance tests may be more valid.
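
    The mean-versus-median behaviour is easy to demonstrate. A minimal sketch: the ratio of two standard normals is Lorentzian (Cauchy), so its running mean never settles down while its running median converges.

        # Running mean vs. running median for Cauchy (ratio-of-normals) draws.
        import numpy as np

        rng = np.random.default_rng(3)
        z = rng.normal(size=100_000) / rng.normal(size=100_000)

        for n in (100, 1_000, 10_000, 100_000):
            print(f"n={n:6d}  mean={z[:n].mean():10.2f}  "
                  f"median={np.median(z[:n]):7.3f}")

    The mean column wanders arbitrarily (no finite moments), while the median homes in on zero, which is exactly why the median-based work-around is attractive.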

  36. dikranmarsupial says:

    In practical terms what would it imply for the mean and standard deviation of rainfall to be infinite?

  37. In practical terms it wouldn’t be infinite, but for fat-tail mathematical models the moments are non-computable unless some finite constraint is applied. But, how does one select that constraint? That’s why the median (set by cumulative probability = 0.5) is used as the characteristic defining the fit for a particular fat-tailed model. There is always a median value of a CDF.

  38. dikranmarsupial says:

    The air being solid water?

  39. dikranmarsupial says:

    I’m sure physics could set a reasonable constraint that was somewhat lower than that ;o)

  40. Hyperactive Hydrologist says:

    The K20 bins (mean of the largest 20 hourly rainfall totals) are for each rain gauge, and the mean continental change in each bin is assessed between the historic and present periods across all gauges. No need to fit a distribution, which is generally problematic for this type of analysis. The changes can’t be explained by random variability across the period at the 99% CI.

  41. izen said:

    “A significant component of the impact of AGW on extreme events will be indirect. It will be the way it changes the incidence, duration and intensity of the ENSO quasi-cycle.”

    The one aspect of ENSO that is definitely not “quasi” is the stationary aspect of the standing wave, which is fixed in wavenumber and geographical location. And the temporal phase is aligned by a seasonal impulse, as reinforced by this recent paper, “A unified nonlinear stochastic time series analysis for climate science” by Moon and Wettlaufer:

    “There are two important behaviors of the seasonal variability of ENSO; (a) the two extreme phases–El Niño and La Niña–are slaved to a specific time of year, and (b) a vast array of prediction models deviate substantially from observations starting in the spring. Using our method, we found the existence of a positive feedback in the eastern Pacific and then we used the interaction of the seasonal stability with the noise to explain the basic mechanism of these two behaviors.”

    This is an intriguing paper, if for no other reason than their ability to model various temperature time-series with a stochastic forcing on a seasonally modulated equation. See inset (d) in the lower left of the figure below, which I think is quite good in capturing the dynamics.

    The question then is how much could the AGW signal impact this significant controlling seasonal factor?

  42. Everett F Sargent says:

    “In practical terms what would it imply for the mean and standard deviation of rainfall to be infinite?”

    Nothing. Something that is a property of a PDF is not necessarily a property of reality. For example, Tol wrote a paper where ECS varied up to +50C.

  43. Everett F Sargent says:

    Fat-tailed risk about climate change and climate policy (AFAIK paywalled)
    In Chang Hwang, Richard S.J. Tol and Marjan W. Hofkes
    https://www.sciencedirect.com/science/article/pii/S0301421515301907#!

    No, I was wrong, their figures show ECS up to 55C and 1200C!

  44. dikranmarsupial says:

    EFS that was rather my point. Ideally the properties of the PDF should be a good match for the properties of reality that it represents.

  45. entropicman says:

    The existence of the Higgs boson was proposed in 1964 and detected in 2013.
    A number of climate change phenomena are in a similar intermediate state, projected by physics or models but not yet confirmed by observation.
    By Roger Pielke’s logic the Higgs did not exist until 2013 and no climate phenomena should be presumed to exist until proven by observation.
    It seems a rather negative approach.

  46. An additional case to consider: The adjustment time for sequestering of CO2. Is it 50 years, 200 years, 1000 years, more? From basic physics, the random walk has a ~1/sqrt(time) fat-tail damping, which has no computable moments and helps explain why it is so difficult to estimate an adjustment time.

  47. dikranmarsupial says:

    PP I think the reason is more that there is more than one mechanism involved. The 50-200 year estimates are just for the ocean uptake as far as the thermocline (my ultra-simple one-box model gives about 70 years); once that has equilibrated with the atmosphere, further draw-down is limited by the exchange between the upper and lower oceans. The very long tail is caused by the need for the carbon to be returned to the lithosphere by weathering. I don’t think the timescales are actually all that uncertain.

    @entropicman – nice analogy. Oddly enough Prof Pielke doesn’t seem quite so unhappy for people to suggest the existence of a hiatus in GMSTs. Obviously the subtlety involved in the distinction is a little too subtle for me.

  48. Dave_Geologist says:

    helps explain why it is so difficult to estimate an adjustment time.

    It’s not difficult at all Paul. We know it Because Geology. It’s happened before, on more-or-less the same planet, with more-or-less the same rocks and more-or-less the same hydrological cycle, with the same physics and chemistry. The answer is tens to hundreds of thousands of years.

  49. To follow on from Dave’s comment, it is actually quite difficult to estimate an adjustment time, and that’s because there isn’t a single one. You have multiple timescales: mixing with the upper ocean (years), mixing into the deeper ocean (decades to centuries), and then sequestration by the slow carbon sinks (weathering – millennia).

    If you calculate an adjustment time on the basis of observations, as Gavin’s paper does, you get around ~100 years (I think Gavin’s paper gives about 70 years; some others give 100-200 years). However, what this doesn’t indicate is the timescale over which we asymptote back to pre-industrial levels, because there is a residual. Once ocean invasion is complete, about 20-30% of what we’ve emitted will remain in the atmosphere (Revelle factor) and it is this residual that will decay very slowly through weathering. It’s expected that it will only fully recover on a timescale of ~100,000 years.
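
    Just to illustrate the multiple-timescale point, here is a toy sketch; the coefficients are purely illustrative, loosely in the spirit of published multi-model impulse-response fits (e.g. Joos et al. 2013), not an authoritative parameterisation.

        # Airborne fraction of a CO2 pulse: a residual plus three exponentials.
        import numpy as np

        a0, a, tau = 0.22, [0.22, 0.28, 0.28], [394.0, 36.5, 4.3]   # tau in years

        def airborne_fraction(t):
            return a0 + sum(ai * np.exp(-t / ti) for ai, ti in zip(a, tau))

        for t in (1, 10, 70, 300, 1000):
            print(f"t = {t:4d} yr: {airborne_fraction(t):.2f} still airborne")

    No single e-folding time describes a curve like this, and the constant a0 is the residual that only weathering removes, on ~100,000-year timescales.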

  50. entropicman says:

    @dikranmarsupial

    We won’t have statistical “proof” of any trend until it exceeds the uncertainty.
    IIRC the uncertainty in GMST values is about +/-0.1C. At the present rate of warming we only see significant change over at least two decades.
    On the same basis, the hiatus lasted about a decade and, statistically speaking, could have been anything between 0.2C cooling and 0.2C warming.

    Pielke and others do have a selective approach to statistical evidence. They demand a very high standard of statistical confirmation for anything they do not want to admit, while accepting things they want the public to believe on the flimsiest of evidence.

  51. David B. Benson says:

    With regard to global rainfall, please see Chapter 6 of “Principles of Planetary Climate” by Ray Pierrehumbert. There is an analysis which shows a supralinear increase in precipitation with increasing temperature.

  52. dikranmarsupial says:

    Alternatively you could look for statistically significant evidence of a change in the underlying rate of warming. That way you are arguing against the null hypothesis rather than for it, which tends to make life easier as you don’t have such a great need to look at the statistical power.

  53. Dave_Geologist says:

    You’re right ATTP, I was referring to geological drawdown by weathering and erosion. As for the next century or two, surely the CO2 profile depends more on our emissions profile than it does on the rate of ocean mixing? And the temperature profile more on the ECS than on the rate of ocean mixing. BTW the geological record already takes into account the enhanced hydrological cycle associated with warming spikes. Without that it might be more like a million years. You can see its consequences in trace elements, isotopes and clay mineralogy throughout the oceans. And expansion of oceanic anoxia due to a combination of reduced oxygen solubility, changes in ocean currents and an increased influx of terrestrial organic carbon. Which confirms that local evidence for megadroughts punctuated by megastorms doesn’t just represent lucky finds.

    Consistent with modern observations, our results show that the injection of 13C-depleted CO2 into the Carnian atmosphere–ocean system may have been directly responsible for the increase in rainfall by intensifying the Pangaean mega-monsoon activity. The consequent increased continental weathering and erosion led to the transfer of large amounts of siliciclastics into the basins that were rapidly filled up, while the increased nutrient flux triggered the local development of anoxia. The new carbonate petrography data show that these changes also coincided with the demise of platform microbial carbonate factories and their replacement with metazoan driven carbonate deposition. This had the effect of considerably decreasing carbonate deposition in shallow water environments.

    Subheadings of Discussion:
    5.1. Negative carbon isotope excursion at the onset of the CPE (it was the CO2 wot dunnit).
    5.2. 13C-depleted CO2 from Wrangellia LIP volcanism? (No question mark required in our case).
    5.3. Carbon cycle perturbation and increased rainfall (it’s a-coming, whatever RPJr says; it happens every time).
    5.4. Oxygen-depletion and massive siliciclastic sedimentation in marginal basins (the first is tough on fish, the second on whoever or whatever was living in the places those siliciclastic sediments were derived from).
    5.5. The crisis of microbial carbonate platforms (the reefs died; and didn’t really come back until the type of corals we have today evolved into reef-builders).

    What strikes me as a geologist whenever I read about the deposits from these events (which typically only involved 3-4 °C warming, albeit in an already-warmer world) is just how different they are. It’s like the landscape changed overnight into something from thousands of miles away. Or something that may not have existed anywhere. And, as I said, that it was global. To the extent that it changed ocean chemistry in lots of ways, not just pCO2. In that context, studies that indicate rainfall peaks greater than CC-scaling would imply make perfect sense.

  54. Dave,

    As for the next century or two, surely the CO2 profile depends more on our emissions profile than it does on the rate of ocean mixing? And the temperature profile more on the ECS than on the rate of ocean mixing.

    Yes, this is probably true. There’s some uncertainty associated with the carbon cycle, but you’re probably correct that what will predominantly influence how atmospheric CO2 varies in the next decades/centuries is our emissions.

  55. David B. Benson said:

    “There is an analysis which shows a supralinear increase in precipitation with increasing temperature.”

    I don’t think anyone denies that — in material science, thermo, chemistry, you name the discipline, these are Arrhenius rate laws (aka Clausius-Clapeyron) that scale as exp(-E/kT) for scores of behaviors. This is supralinear (concave up) for any activation energy E with respect to temperature T. The issue is whether this is resolvable with the temperature changes we are seeing. This is 1 part in 300, or 1 degree on a 300K baseline. That’s the ~6% increase that should occur for H2O vapor pressure or holding capacity, but the Australian rainfall researchers say the increase in the extremes is 12% to 18% above nominal, depending how far down the fat-tail they go.

    My rather mild suggestion is that issues of statistical variability are perhaps overriding the possibility that the activation energy is changing by 3X for extreme rainfall. In retrospect, if I were a reviewer for that paper, that’s what I would ask.
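
    For reference, the baseline numbers in this exchange are easy to check with standard constants (L ~ 2.5e6 J/kg and Rv ~ 461.5 J/kg/K; the 288 K baseline is an assumption for illustration):

        # Clausius-Clapeyron fractional scaling: d(ln e_s)/dT = L / (Rv * T^2).
        L, Rv, T = 2.5e6, 461.5, 288.0
        cc_rate = L / (Rv * T**2)
        print(f"CC scaling: {100 * cc_rate:.1f} %/K")   # ~6.5 %/K
        print(f"3x CC:      {300 * cc_rate:.1f} %/K")   # ~20 %/K

    That reproduces both the ~6% per degree nominal figure and the roughly 20% per degree quoted earlier in the thread for a 3x CC scaling.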

  56. Dave_Geologist says:

    For the small temperature changes we might hope to see, relative to a 300K baseline, the difference between linear and exponential should be small. For example, 1.07^3 is 1.23 rather than 1.21. Not worth fussing about, given the other, bigger uncertainties. Particularly as the latent heat of evaporation varies with temperature, so it’s not strictly exponential anyway. Geology is much more fun, taking properties from standard states up to 1000 K and 1000 MPa 🙂 .

    I’d have thought disequilibrium was likely to be important in things like convectional storms, which could easily take you well above or below CC scaling. The hourly, not daily, signal perhaps hints at that: that we’re looking at a process which dumps three hours’ worth of rain in an hour, but depletes the atmospheric water vapour, which is recharged more slowly, so that daily totals follow CC. Off the top of my head, I can think of several generic mechanisms. Getting a package of supersaturated air further past equilibrium before nucleation, perhaps by lifting it higher and faster. Or getting a larger package of supersaturated air to the same state of disequilibrium, through some combination of larger and faster convection. Or keeping big hailstones in the top of the cloud for longer, due to a higher tropopause and stronger updrafts. Like the Renick & Maxwell nomogram, but in a place where the hail melts before it reaches the ground.

    Fig. 1. Nomogram developed by Renick and Maxwell (1977) that relates the maximum observed hail size on the ground to the forecast maximum updraft velocity and the temperature at the updraft maximum. Numbers 1–6 correspond to shot-size through greater-than-golfball-size hail. Adapted from Renick and Maxwell (1977)

  57. entropicman says: “Pielke and others do have a selective approach to statistical evidence. They demand a very high standard of statistical confirmation for anything they do not want to admit, while accepting things they want the public to believe on the flimsiest of evidence.”

    Thank you. There it is. That is the functional and appropriate response to Pielke. Just note that his science is distorted by ideology, that he is doing poor science to further a disastrous ideology. Call the ideologues out and only take part in discussions that ask the right questions. Don’t bite on the troll science even if the mathematical arguments are fascinating to you, because the general population will not understand your solid mathematical/statistical arguments and will conclude that the science and risks of AGW are unclear.

    I can see reviewing a Pielke type book on a site like this, but only if the discussion opens with a rather brutal analysis of the author and the kind and quality of science that he does. Take the gloves off. This is not a dress rehearsal for the sixth extinction event, it is the real thing and we can still influence the trajectory of the extinction event if we are brutally honest about the science, politics and technology of AGW. Or we can be polite and watch the Trump Show roll on. I know polite is very appealing.

  58. Dave_Geologist says:

    Not that I recall. I downloaded the Lin & Emanuel paper in June, so may have been prompted by something. Or may just have found it in a reference list. But since we’re stoking up the heat engine, it wouldn’t surprise me as an outcome. Especially when it’s an outcome modelled by some of the top experts 😉 .

    Geology can’t help much there I’m afraid. We use Storm Wave Base as a reference depth in oceanic sediments, because you get quite different sediments and infauna above and below. I presume bigger storms, other things being equal, will have a deeper wave base. But return times also matter. Essentially, if the storm return time is longer than it takes to bury sediments deeper than the storm mixes the sea-bed sediment, you’ll get alternations of storm-influenced and storm-absent deposits. But there could be other causes of that like relative sea level change, secular changes in storm paths due to multi-decadal oscillations, changes in sediment supply due to changes in deep ocean currents or avulsion (lateral shift) of delta outlets, rainfall changes upstream, etc. It would be very difficult to disentangle a hurricane magnitude signal.

  59. JCH says:

    The sophistication of Professor Curry’s analysis is just amazing; as is her certainty. She and Jr. pulled the same stunt when the Wivenhoe Dam flooded downriver to Brisbane.

    I used to live at Emerald Isle, NC, the barrier island for Wilmington. I stayed in place for a hurricane. We went surfing. Ain’t nobody surfing there this time. The house I lived in was on a hill; we had to walk down a hill to get to the beach. Yesterday that house was surrounded by water. I think it’s backwash from the sound – coastal waterway – side of the island.

  60. izen says:

    @-WHUT
    “The question then is how much could the AGW signal impact this significant controlling seasonal factor?”

    How clear is it that the seasonal factor is controlling, rather than just correlated?
    There seems to be considerable research on how AGW does impact the severity and duration.

    https://journals.ametsoc.org/doi/10.1175/JCLI-D-14-00254.1
    https://www.nature.com/articles/nclimate2743

    Paleo evidence also indicates that ENSO is modulated by the global state.

    https://www.nature.com/articles/ncomms3692

    Past reconstructions of the ENSO quasi-cycle indicate significant variability. Disasters from climate impacts are most obviously and strongly linked to the ENSO state. Longer and stronger El Niños will suppress Atlantic storms.
    It seems obvious that any impact from AGW on the rate and magnitude of climate disasters will be primarily from the changes it can make to ENSO.

    Unless you think that ENSO is inherently independent of, and its strength and duration are impervious to a changing global climate.

  61. izen says:

    I wonder where JC got the figure for vertical land movement at Wilmington of -1.41mm/yr ?

    NOAA gives -0.43mm/yr
    https://tidesandcurrents.noaa.gov/publications/Technical_Report_NOS_CO-OPS_065.pdf

    However that 2013 paper is using a constant rate of Sea Level rise over the century of 1.7mm/yr and gives the trend at Wilmington as 2.1mm/yr over a 72yr record.

    Perhaps the problem arises because the rate of change of both sea level and vertical land movement are assumed to be constant.
    That does not appear to be the case. (Unless you accept Houston and Dean) Sea Level is accelerating. It certainly appears from eye-metrics that the Wilmington tide gauge shows around half of the 9 inches since 1935 has occurred in the last 25 years. So warming may have caused 4 inches (0.1m) in 25 years.
    https://tidesandcurrents.noaa.gov/sltrends/sltrends_station.shtml?id=8658120

    Either Wilmington is sinking faster, or the Sea Level trend is increasing.
    Or both.

  62. izen says:

    “How clear is it that the seasonal factor is controlling, rather than just correlated?”

    The research by Moon & Wettlaufer (Oxbridge!) on ENSO substantiates this. Even though the cycles of ENSO appear erratic, it doesn’t mean that the forcing or response is random or chaotic. What they are finding is that a specific type of non-autonomous differential equation can generate a similar time-series signature as observed in ENSO indices.

    The first term on the RHS is modulated by a seasonal factor. That’s a form of non-linearity called a Mathieu (and more generally Hill) formulation, which has been well known to hydrologists who study sloshing of volumes of water. The confounding part of analyzing Mathieu equations is that the spectral frequency responses do not show the seasonal components even though the time-series model seems to match the observed data. Nonlinear functions will “fold” the frequency response until it becomes hard to disentangle.

    So I think (imho) it’s clear that a seasonal (annual, biennial, etc) factor is in control, with another factor contributing as well. Moon & Wettlaufer suggest that “although the breaking of time-reversal symmetry by Earth’s rotation has been shown to provide a topological origin for equatorially trapped waves …”, which points to the other recent interesting work, by Delplace and Marston.

    After this is sorted out, they may start looking at any AGW contributions — don’t want to put the cart before the horse, i.e. until the fundamental understanding of ENSO is worked out.
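
    A toy demonstration of that folding effect (parameters invented for illustration): drive a linearly damped process with white noise, but modulate the damping annually, and the output spectrum shows no sharp annual line.

        # Seasonally modulated damping: no spectral line appears at 1 cycle/yr.
        import numpy as np

        rng = np.random.default_rng(4)
        dt, n = 1 / 12, 12 * 1000                 # monthly steps, 1000 years
        t = np.arange(n) * dt
        x = np.zeros(n)
        for i in range(n - 1):
            lam = 1.0 - 0.8 * np.cos(2 * np.pi * t[i])   # annual Mathieu-like term
            x[i + 1] = x[i] - lam * x[i] * dt + 0.5 * np.sqrt(dt) * rng.normal()

        # Average periodograms over 10-year segments to beat down estimator noise.
        segs = x.reshape(100, 120)
        spec = (np.abs(np.fft.rfft(segs - segs.mean(axis=1, keepdims=True)))**2).mean(0)
        freq = np.fft.rfftfreq(120, d=dt)         # resolution: 0.1 cycles per year
        i1 = np.argmin(np.abs(freq - 1.0))
        ratio = spec[i1] / spec[[i1 - 2, i1 - 1, i1 + 1, i1 + 2]].mean()
        print(f"power at 1/yr relative to neighbours: {ratio:.2f}")   # ~1: no line

    The seasonal component enters multiplicatively rather than additively, so its energy is smeared across frequencies instead of appearing as a spectral peak (it does show up in the spectrum of x squared, which is one way of seeing the folding).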

  63. Dave_Geologist says:

    Returning to palaeo evidence for increased hurricane intensity. The other way to look for it is storm-surge deposits on land. That will also be hard to disentangle. For example, with Florence the rainfall backing up, unable to flow into the sea fast enough, is likely to be more important than the winds or low-pressure cell in controlling the degree of inundation.

    Hansen et al. have suggested that chevron deposits on land or palaeoland (giant water-laid crescentic dunes) may be due to hurricane storm surges greater than today’s magnitude. But that is controversial and AFAICS still at the preprint stage. Chevrons have traditionally been attributed to tsunamis, but in any case boulders, gravel and sand don’t care what caused the wall of water that’s sweeping them ashore. Only how tall/deep it is and how fast it’s moving. These can be estimated quite accurately, based on scaling relationships that have been known since I was an undergraduate and thoroughly validated in flumes. But independent evidence is needed for what caused the wall of water in the first place. Even regular NE Atlantic storms can cause surprisingly powerful storm surges with the right alignment of conditions. There’s a beach in Ireland where huge boulders had been interpreted as moved by an unidentified tsunami, but careful investigation of historical evidence revealed that they shifted during an Atlantic storm at a time when there were no tsunamis.

  64. Dave said:

    “Hansen et al. have suggested that chevron deposits on land or palaeoland (giant water-laid crescentic dunes) may be due to hurricane storm surges greater than today’s magnitude. “

    So what about the 185 almost periodically placed beach ridges (carbon-dated to have formed about 45 years apart) along the shores of Hudson Bay, researched by Hillaire-Marcel & Fairbridge in 1976?

    I’m curious about that, and I think I can identify these via Google Earth.

    It’s really still a mystery as to what caused the 45-year period on top of the receding shoreline. I really doubt that it is due to double the 22-year sunspot cycle, as has been claimed.

  65. Dave_Geologist says:

    As the paper says Paul, their preservation is due to steady background rate of post-glacial isostatic uplift, which raises each beach above the wave level of the next cycle. It’s then preserved in what is effectively a cold desert. I’d have thought that melting and freezing of ice would be a bit slow for 45 year cycles. And they seem time-symmetric, where I’d have expected them to be asymmetric like the longer glacial cycles. I wonder if it’s enough just to change storm tracks? Or maybe the Jet Stream. In which case it would be interesting to know if the pattern has changed during the AGW period. I’d also wonder how tightly the 45-year cycle is tied down. Radiocarbon dating was expensive in 1977, and needed big samples compared to today. Did they only sparsely sample and are basing it on 20 ridges in 900 years? Maybe it’s only quasi-periodic then. And some cycles may be missed. If it’s shifting storm tracks, maybe SST is implicated. It’s a bit more than half the AMO, but if some cycles are missing you could overestimate the period. Or about double the NAO. That’s the trouble with cycles. Too many candidates, especially when they’re only quasiperiodic and you’re not dating every feature, just counting how many per millennium or whatever 😦 .

    I found a paper on Spain linking a similar 45-year cycle to the NAO, which I believe is tied into the Arctic oscillation, which might provide teleconnections to Hudson Bay. They also link it to the sunspot cycle. The Spain example has the advantage of a continuous record of coastal and shallow-water sediments, which are incredibly sensitive to small changes in relative sea level.

    A simple division gives values of 11.25 years to deposit a ridge and swale unit, 22.5 years to deposit a couplet, and 45 years to form a set of beach ridges (Fig. 4). Thus, both the morphology and the chronology of the prograding beach-ridge and swale system reveal that sedimentation followed a periodical pattern with decadal periodicities: 11.25, 22.5, and 45 years

    They attribute the proximal driver of sea level change to changes in winds and air pressure, related to SST cycles. That seems more reasonable than melting and snowing or freezing out enough water to change global MSL on a 45-year time scale.

  66. Dave_Geologist says:

    Hyperlink failed for some reason, but you can find it through Google Scholar:

    A beach-ridge progradation complex reflecting periodical sea-level and climate variability during the Holocene (Gulf of Almerı́a, Western Mediterranean)

  67. Thanks, 185 ridges * 45 years/ridge = 8000 years of periodic beach ridging is a long time to collect some good statistics. Hope someone looks at this closely again.

  68. Something must be leaving a mark. At this year’s EGU General Assembly, a group from Southampton U found that the long-period tidal energies can vary by as much as 30%
    http://adsabs.harvard.edu/abs/2018EGUGA..20.1925H

    ” However, tidal levels and tidal currents vary over monthly, annual and inter-annual time-scales due to changes in the position and alignment of the Moon and Sun relative to Earth. Over a month, tidal range changes as the Moon moves from its closest (perigee) approach to Earth, to its furthest approach (apogee) and back. Over annual time scales, changes in tidal range occur as the Sun’s position varies north or south of the equator, and as it moves from its closest (perihelion) to furthest approach (aphelion) to Earth and back. Over inter-annual time scales, two precessions (a precession is defined as the rotation of a plane with respect to a reference plane) associated with the orbit of the Moon cause systematic variation of tides; the 8.85-year cycle of lunar perigee, which influences tides as a quasi 4.4-year cycle; and the 18.61-year lunar nodal cycle.”

    Pertaining to the ~45-year period beach-ridge extremes, there is an interesting reinforcement whereby precisely 623 anomalistic (perigee/apogee) lunar months constructively interfere with the annual solar cycle every 47 years. This is trivial to calculate, and it also shows up in the 581-lunation cycle for eclipses, where 581 synodic lunar months align every 46.975 years.

    So there is a strong gravitational pull and a strong alignment of the moon and sun at approximately the same calendar date every 47 years. This would be an example of how the tidal energies can change over the long term, as reported by the Southampton team. But until this has greater discrimination power, it means little for matching to the 45 year cycle.
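
    Since the alignment arithmetic is indeed trivial, here it is spelled out with standard month lengths:

        # Near-coincidence of lunar month multiples with ~47 solar years.
        anomalistic = 27.554550    # days, perigee to perigee
        synodic = 29.530589        # days, new moon to new moon
        year = 365.2422            # tropical year, days

        print(f"623 anomalistic months = {623 * anomalistic / year:.3f} yr")  # ~47.0
        print(f"581 synodic months     = {581 * synodic / year:.3f} yr")      # ~46.975

    The anomalistic multiple comes out within a day of exactly 47 years, and the synodic multiple at the quoted 46.975 years; the near-coincidence of the two is what allows perigee and lunar phase to realign at roughly the same time of year every ~47 years.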

  69. Dave_Geologist says:

    Returning to extreme events Paul, that left me wondering if people are considering the possibility that a new-normal hurricane, downpour or whatever will have a greater chance of coinciding with a normal extreme tide if the weather events become more frequent. I have a vague recollection of seeing a TV programme (!) where a big loss event hundreds of years ago was attributed to all the stars coming together, metaphorically speaking, which is why it was a once-in-a-1000-year event, not once in 100 years. The weather event may have been once in 100 years or less, but it had to happen at exactly the right time in terms of tides etc. to be catastrophic. So Hansen’s “grey swans” might not need a Cat 6 hurricane, just an increase in the frequency of Cat 5s, so that an unhappy coincidence which hasn’t happened in the century or so of modern global records starts happening on a less-than-centennial timescale.

    They are thinking about it at the weather/engineering end of things, don’t know about climate-model projections. Spatial and temporal analysis of extreme sea level and storm surge events around the coastline of the UK, High-Water Alerts from Coinciding High Astronomical Tide and High Mean Sea Level Anomaly in the Pacific Islands Region, Forecasting Extreme Water Levels in Estuaries for Flood Warning.

    This sort of research, along with steady improvements in weather forecasting, satellite monitoring etc., is another confounding factor in analyses like RPJr’s. Better preparedness overall (flood defences, building codes) can mitigate damage from extreme events, but so too can better prediction and better preparedness-on-the-day.

  70. Dave_Geologist says:

    So the good news is people are looking at the impact of climate change on extreme storm surges. Unsurprising really. The bad news is, well, bad. Equally unsurprising.

    Combining peak annual TA [tidal anomaly] with projected sea level rise, the historical (1970–1999) 100-yr peak high water level is exceeded essentially every year by the 2050s. The combination of projected sea level rise and larger floods by the 2080s yields both increased flood inundation area (+ 74%), and increased average water depth (+ 25 cm) in the Skagit floodplain during a 100-year flood. Adding sea level rise to the historical FEMA 100-year flood resulted in a 35% increase in inundation area by the 2040’s, compared to a 57% increase when both SLR and projected changes in river flow were combined.

    Interestingly, the speed of travel of the cyclone also matters, as does where you are relative to the eye of the storm.

    Numerical experiments with three different cyclone translation speeds show that when the surge height is directly forced by the cyclonic wind speed especially within the RMW (Radius of Maximum Wind), faster translation speed produces reduced surge height as the cyclone gets less time to force the water. On the other hand, at locations outside the RMW, surge waves travel as a propagating long wave where higher surges are produced by faster moving cyclones. It is found that surge arrival times are more and more affected by tidal phase when cyclone translation speed is reduced.

    One from Germany, which also shows how flood management has changed to mitigate the impact, and that what you do in one place affects others. “It is estimated that measures of coastal defense led to an increase of 45 cm and deepening the shipping channel to an increase of 15 cm” (in the canalised river HWM). So as long as the water stays in the river (analogous to the levees during Katrina), it’s good. But if it escapes, it won’t just be bad, it will be very bad. Losses to date may well have been held stable or reduced, but how do you measure the opportunity cost of the fraction of GDP that was spent mitigating losses rather than investing for gains?

    The scenarios all point to extreme high waters that are higher than at present both in Cuxhaven and in Hamburg St. Pauli. Until 2030 the possible increases seem less dramatic and to be manageable within presently available tools and strategies. For the later time horizon 2085, however, the possible and plausible changes may require not only much more costly but possibly different adaptation measures.

    And speaking of the unintended consequences of channel deepening*: oops, Wilmington.

    These tidal changes are reproduced by simulating channel depths of 7 m (1888 condition) and 15.5 m (modern condition). Similarly, model sensitivity studies using idealized, parametric tropical cyclones suggest that the storm surge in the worst-case, CAT-5 event may have increased from 3.8 ± 0.25 m to 5.6 ± 0.6 m since the nineteenth century.

    * There was much fuss in the UK among farmers and AGW deniers that the Somerset floods happened because the Environment Agency stopped dredging the Somerset Levels. For those unfamiliar with the Levels, they drain into the Bristol Channel, which has the second largest tidal range in the world. Or at least, they drain at low tide. Talk about being caught between a rock and a hard place!

  71. My friend Jim reviewed the campaign against the WA State carbon tax initiative and reports the following:
    The No on 1631 campaign is off to a roaring start, funded by the oil companies and sponsored by the Western States Petroleum Association.

    “Feel free to share this email with your other climate lists.

    First, their contributors; not a very long list (but when you get checks with two commas in them, you don’t need a very long list). However, the Associated General Contractors’ involvement does mean that there are hundreds of construction businesses that may be mobilized to provide billboard and sign locations, volunteers, and other services. Need a dump truck or a bulldozer? AGC is your go-to group. Need a machine that can sweep down a street collecting OTHER signs? AGC probably has access to one of those too.

    Phillips 66: $3,701,186.54 (cash)
    BP: $3,000,000.00 (cash)
    Andeavor: $1,662,827.17 (cash)
    US Oil and Refining: $308,531.31 (cash)

    Next, their big expenditures:

    $300k to Amplified Strategies. A full-service direct mail advertising agency in Seattle. It appears they have already done $150k of mail, which is about 600,000 pieces of mail. I’ve asked the PDC if they have reported it correctly.

    $8k to Mark Funk Public Affairs. Mark worked for the No on 522 campaign (GMOs) and the No on 1098 campaign (income tax).

    $147k to Moore Information, a Portland-based polling company.

    $80k to Peri Hall and Associates, a Massachusetts-based web services company known for very creative websites.

    $20k to the Clarke Company in Olympia, for campaign accounting and reporting support. Heather Clarke works for many conservative campaigns.

    $140k to Winner and Mandabach Campaigns, a big national consulting firm working on anti-environment campaigns. They led the No on 522 campaign.

    $17k to SalientPoint LLC, the message development consultant that wrote the No on 522 campaign.

    Clearly they are not reinventing the wheel. They are using pretty much the same campaign team as was used for No on 522. I-522 had 66% support in early polling, and ended the election with 45% support.

    The No on 522 campaign raised a total of $37 million in 2013. It was a record at the time. I will not be surprised to see this one be much bigger.

    With combined emissions of about 8 million tons/year, the initial $15/ton carbon fee in I-1631 would cost the refineries about $100 million per year. To put that into perspective, their combined refining capacity is about 700,000 barrels per day. Assuming an 80% capacity utilization factor, that’s about 8.5 billion gallons per year. The cost to them, therefore, is $100 million / 8.5 billion, or about 1.2 cents per gallon. That’s not the carbon fee on the fuel itself, just the fee on the carbon emissions from the refinery; the fee on the fuel will be about $0.15/gallon. Put another way, it takes about 10% of the energy in crude oil to refine it into gasoline, diesel fuel, and jet fuel, though much of the energy consumed at the refineries is natural gas, because it’s cheaper than oil, along with various waste refinery products.

    We should not be surprised to see them spend one year’s potential fee on the campaign. That’s $100 million, which would be an all-time record for the state. See the calculation at the bottom of this email.

    It is a little weird for them to be so upset. It affects ALL of the refineries about the same, and it’s not economical to bring in refined petroleum from elsewhere (California refineries are already paying a carbon fee; BC has little refinery capacity; everything else is far away), so it will not affect them competitively. But they don’t seem to take my advice on this.”
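    Since this is mostly arithmetic, it is easy to sanity-check; a minimal sketch using only the figures quoted in the email, plus the standard 42 gallons/barrel and roughly 8.9 kg CO2 per gallon of gasoline conversions:

    ```python
    # Sanity check of the fee arithmetic in the email above. Inputs are the
    # figures quoted there; 42 gal/bbl and ~8.9 kg CO2 per gallon of gasoline
    # are standard conversion factors.

    fee = 15.0              # $/ton CO2, initial I-1631 fee
    emissions = 8e6         # tons CO2/yr, combined refinery emissions
    capacity_bpd = 700_000  # barrels/day, combined refining capacity
    utilization = 0.80      # assumed capacity utilization
    GAL_PER_BBL = 42

    refinery_cost = fee * emissions  # $120M/yr by straight multiplication;
                                     # the email rounds this to ~$100M/yr
    gallons = capacity_bpd * utilization * 365 * GAL_PER_BBL  # ~8.6e9 gal/yr

    print(f"fee on refinery emissions: ${refinery_cost / 1e6:.0f}M/yr")
    print(f"fuel output: {gallons / 1e9:.1f}B gal/yr")
    print(f"per gallon of output: {100 * refinery_cost / gallons:.1f} cents")
    # The email's rounder $100M figure gives the quoted 1.2 cents/gal.

    fuel_fee = fee * 8.9e-3  # fee on burning the fuel itself, $/gal
    print(f"fee on the fuel itself: ~${fuel_fee:.2f}/gal")  # ~0.13 vs ~0.15 quoted
    ```

    Straight multiplication gives $120M/yr rather than the email’s round $100M (and hence 1.4 rather than 1.2 cents per gallon), but either way the direct cost to the refineries is on the order of a cent per gallon of output.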

    Mike says: Off topic for climate and disasters, but a review of the WA carbon tax initiative might warrant its own thread and analysis. I think WA voters will keep proposing carbon tax initiatives until we get something in place, but the merchants of doubt have so far succeeded in persuading the voters that the proposals are too flawed to pass. It’s a pretty effective tactic: pretend to be in favor of action, but always find that the proposed actions are too flawed to enact. More study needed, I guess.

  72. anoilman says:

    I think the economic arguments against dealing with global warming and against carbon taxes are just plain wrong. I recall all kinds of doom-and-gloom claims about carbon taxes driving up oil prices, yet the world didn’t end when oil hit $100 a barrel. On the plus side, it did spur renewables…

    Most estimates of energy costs are based on today’s prices, which assumes no change in production or engineering. When we start using renewables wholesale, prices will absolutely go down, because companies will invest in a whole lot of D (development… not research) to make what they make better.

    Going forward, there are big issues with switching to renewables. We can and should aim for 30% right now. That’s the current grid baseload and, if we did that, we’d hugely reduce baseload carbon emissions.

    Over 30%, it gets mighty hard, mighty fast, and I think we need to be looking at nuclear.

  73. John Hartz says:

    I suspect that ATTP and all of the regulars posting comments on this website and others like it are under the watchful eye of Welund North America.
