This is a joint post between myself and Eric Winsberg, Professor of Philosophy at the University of South Florida. Eric has just published, together with Naomi Oreskes and Elisabeth Lloyd, a paper called Severe Weather Event Attribution: Why values won’t go away. The paper discusses the issue of how one might assess the anthropogenic influence on an extreme weather event. This post describes what was presented in the paper and tries to justify why there may be value in approaching this issue from more than one perspective.
Extreme weather event attribution
One way that we can gain confidence in our understanding of anthropogenic influences on climate is to carry out detection and attribution studies. The basic idea is to consider an extreme event, or a pattern of extreme events, and to establish the probability of that extreme event occurring under current conditions. This can then be compared to what would be expected had we not undergone greenhouse warming and other anthropogenic changes, to determine some kind of risk ratio.
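To make the risk-ratio idea concrete, here is a minimal sketch in Python. All the numbers – the threshold, the ensemble sizes, the distributions – are invented for illustration; real attribution studies use large model ensembles and careful observational constraints.

```python
# Illustrative probability-ratio calculation for event attribution.
# The threshold, distributions and ensemble sizes are invented for the
# example; real studies use large model ensembles and bias correction.
import numpy as np

rng = np.random.default_rng(42)

threshold = 35.0  # e.g. a heatwave hotter than 35 C (arbitrary)

# "Factual" ensemble: simulated summers under current (warmed) conditions.
factual = rng.normal(loc=30.0, scale=2.5, size=100_000)
# Counterfactual ensemble: the same climate without anthropogenic warming.
counterfactual = rng.normal(loc=29.0, scale=2.5, size=100_000)

p1 = np.mean(factual > threshold)         # P(event | actual climate)
p0 = np.mean(counterfactual > threshold)  # P(event | no anthropogenic change)

risk_ratio = p1 / p0
far = 1.0 - p0 / p1  # fraction of attributable risk

print(f"P1 = {p1:.4f}, P0 = {p0:.4f}")
print(f"risk ratio = {risk_ratio:.2f}, FAR = {far:.2f}")
```

The risk ratio P1/P0 says how much more likely the event has become; the related fraction of attributable risk, FAR = 1 − P0/P1, expresses what fraction of the event's probability is attributable to the anthropogenic change.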
One advantage of this approach is that it largely avoids false positives; it will only assign some probability of an anthropogenic influence if there is some clear detection and if some of this can indeed be attributed to anthropogenic influences. However, this also means that there will almost certainly be circumstances where we do not assign a probability of an anthropogenic influence when, in fact, such an influence does indeed exist. From a scientific perspective this might be fine; we would simply be waiting for a sufficiently strong signal to emerge. However, this could potentially lead us to under-estimate the impact of anthropogenically-driven climate change.
A complementary approach is to consider a storyline. For example, given that an event has occurred, how might climate change have influenced this event? If the air was warmer, then we may expect enhanced precipitation. If sea surface temperatures are high, then we may expect a tropical cyclone to be more intense. The focus here tends to be on the thermodynamics (i.e., the energy) and to take the dynamics as given (i.e., the event happened).
It turns out, though, that the storyline approach has been rather controversial, with many who favour more formal detection and attribution being highly critical. They argue that it could lead to more false positives and that taking the dynamics as given ignores that dynamical factors could actually work to make some events less likely. Essentially, they argue that the storyline approach may over-estimate anthropogenic influences, potentially mistaking natural variability as being anthropogenic.
The problem, though, is that although the two approaches are complementary, they’re not actually quite addressing the same issue. The detection and attribution approach is essentially trying to determine how anthropogenically-driven climate change influences the probability of a specific class of event. The storyline approach, on the other hand, is more looking at how anthropogenically-driven climate change might have influenced an event that has actually occurred. There is no real reason why we should prefer one approach over the other; they can both play an important role in aiding our understanding of how anthropogenic influences impact extreme weather events.
The criticism of the storyline approach seems to have two main strands. One is that the more formal detection and attribution approach avoids the reputational harm that may occur if climate scientists make claims that later turn out to be wrong. The other is that the storyline approach involves decisions that are likely to be influenced by value-judgements. Given that the detection and attribution framework relies on probabilities, it may be somewhat closer to value-neutral than the storyline approach, but it’s not completely value-free. There are always going to be judgements associated with things like model assumptions and how to present the results.
Also, the judgement that detection and attribution is preferable to the storyline approach is fundamentally value-laden. It’s a judgement that avoiding false positives is preferable to potentially presenting false negatives. Just as incorrectly associating climate change with an extreme event could lead us to investing in infrastructure that turns out to be unnecessary, under-estimating the link between climate change and extreme events could lead to harm that could have been avoided.
From a scientific perspective, the detection and attribution approach may well be preferable. However, the storyline approach seems very valuable from a public perspective. It allows us to consider how climate change may have influenced a specific event and also allows us to discuss how it may impact similar events in future. Without it, you run the risk of articles like this one, which uses the lack of a detectable trend to conclude that there’s no solid connection between climate change and the major indicators of extreme weather. The storyline approach would almost certainly not lead to such a conclusion.
Overall, it’s hard to see why we shouldn’t be using both approaches. They’re complementary, address slightly different aspects of the link between climate change and extreme events, and should ultimately tend to be consistent. If an expected influence doesn’t emerge, then we’d have to either re-think the storyline, or double check the detection and attribution analysis. However, the storyline approach can also allow us to stress that the lack of a detectable trend doesn’t necessarily imply no link between anthropogenically-driven climate change and extreme weather events; absence of evidence is not the same as evidence of absence.
We would clearly like to quantify the impact of anthropogenically-driven climate change on extreme events, but we would also like to avoid concluding that there is no link when in fact such a link is expected and will ultimately become evident. Using both detection and attribution and the storyline approach can help to present an overall picture that best represents our understanding of the link between anthropogenically-driven climate change and extreme weather events.
Links:
Severe Weather Event Attribution: Why values won’t go away – Winsberg, Oreskes & Lloyd, Phil Sci Archive, 2019.
Extreme events and anthropogenic emissions – a post I wrote a while ago that – I think – makes a similar point to the one made in Eric’s paper.
Knutson et al. have also just submitted a paper on Tropical Cyclones and Climate Change Assessment: Part I. Detection and Attribution, which also discusses the issue of reducing either Type I or Type II errors.
Quite right. Use both approaches. There is a place and audience for a highly constrained discussion of the evidence, and there may be an even larger audience and place for a narrative/story about the possible impacts of climate change. Here is a thoughtful piece about that subject.
Pingback: Quanto costa? - Ocasapiens - Blog - Repubblica.it
So this paper has not been formally published yet (meaning peer review, acceptance, and appearance in a publication)? Not trying to take anything away from the paper (it does say preprint), just can’t find it (yet) in a formal publication. TIA
EFS,
Eric can clarify further, but I think it’s been submitted to a philosophy archive site. I’m not sure if it’s also being submitted to a journal.
How about also distinguishing between Extreme Weather events versus Extreme climate events? Extreme climate is typically associated with El Nino and La Nina years. So when extreme weather events occur during extreme climate years, then the attribution has to discriminate between (1) AGW-induced, (2) ENSO-induced, and (3) randomly-occurring extreme weather events.
It will eventually be submitted (and maybe with luck accepted) to a peer-reviewed journal. For now it’s just on a preprint archive for philosophy of science. It can still be adapted in response to comments before it is submitted.
“One is that the more formal detection and attribution approach avoids the reputational harm that may occur if climate scientists make claims that later turn out to be wrong”
it goes beyond individual reputational harm.
there is institutional harm, and should you take action on false positives, real financial costs.
both approaches are needed as input to decision makers.
Steven,
Something I had thought of adding was that there is also the potential for individual, and institutional, harm if we fail to take action because of false negatives. My guess is that there will be a stage in the not too distant future when climate scientists will be criticised for not speaking out enough and for underplaying the risks associated with anthropogenically-driven climate change.
Indeed.
Seismologists now have a similar problem. All earthquakes used to be natural, well, some due to filling large reservoirs behind dams. But now there are earthquakes due to so-called fracking.
“the judgement that detection and attribution is preferable to the storyline approach is fundamentally value-laden.”
– How so? Both approaches should lead to the same outcome whatever that is.
You cannot tell a story about expected outcomes over a time frame in which no expected outcomes eventuate. If you do tell a story about expected outcomes that does not eventuate then the storyline approach is proved to be the one that has been fundamentally value-laden.
“ It’s a judgement that avoiding false positives is preferable to potentially presenting false negatives.”
– Surely it is just respecting scientific technique?
As you rightly point out
“From a scientific perspective, the detection and attribution approach may well be preferable.
If an expected influence doesn’t emerge, then we’d have to either re-think the storyline, or double check the detection and attribution analysis.”
–
Storyline
All the good intentions in the world, as shown here, are of no use if the little old lady did not want or need to cross the road.
You are all trying to help her cross the road when you have not sorted out if she wants or needs to go.
on earthquakes. It’s not just the fracking; some of us will live to see evidence accruing that melting of glaciers stimulates tectonic activity that will be most apparent in the form of earthquakes.
DBB said:
A related attribution effect to follow concerning earthquakes is the tidal stress trigger. Every year, there are papers showing a statistically significant relationship for earthquakes to occur along perigean lunisolar orbital paths. But it is comical to watch the director of the USGS, Dr. Susan Hough, explode when anyone brings this up:
https://pubs.geoscienceworld.org/ssa/srl/article-abstract/89/2A/577/525827/do-large-magnitude-8-global-earthquakes-occur-on
Funny but juvenile on her part.
angech,
They should tend to give the same outcome, but the formal D&A approach avoids false positives and so could lead us to under-estimate the link, while the storyline approach avoids false negatives and so could lead us to over-estimate the link. If we had perfect data, they should converge, but we don’t.
“Something I had thought of adding was that there is also the potential for individual, and institutional, harm if we fail to take action because of false negatives. My guess is that there will be a stage in the not too distant future when climate scientists will be criticised for not speaking out enough and for underplaying the risks associated with anthropogenically-driven climate change.”
yes.
https://www.sciencemag.org/news/2015/02/why-italian-earthquake-scientists-were-exonerated
personally I am happy to give both sides and suspend judgement
https://en.wikipedia.org/wiki/Pyrrhonism
One more thing
“One advantage of this approach is that it largely avoids false positives; it will only assign some probability of an anthropogenic influence if there is some clear detection and if some of this can indeed be attributed to anthropogenic influences.”
you talk about clear detection here. If you adopt a policy of “reported” detection probability you don’t have the problem of “missing” a positive. If you apply a strict (let’s say 95%) probability of detection THEN you have the problem.
Steven,
True. However, if the signal is small relative to the noise then you could end up with essentially no detected signal, even if it’s there. Of course, this would only be important if there was some chance of the signal emerging.
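To illustrate that point (with invented numbers): simulate many short series that all contain a weak trend, and count how often a standard 5% significance test actually detects it. A sketch:

```python
# Invented-numbers illustration: a weak trend in noisy 30-year series is
# present in every trial, but a 5% significance test often misses it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_years, trend, noise_sd, trials = 30, 0.02, 0.5, 2000
t = np.arange(n_years)

detections = 0
for _ in range(trials):
    series = trend * t + rng.normal(0.0, noise_sd, n_years)
    slope, intercept, r, p, se = stats.linregress(t, series)
    if p < 0.05 and slope > 0:
        detections += 1

print(f"signal detected in {detections / trials:.0%} of trials, "
      "despite being present in all of them")
```

Every series contains the signal, yet the test misses it a large fraction of the time; those misses are exactly the false negatives being discussed.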
SM I rather prefer Hume’s “A wise man proportions his belief to the evidence.” Judgement doesn’t have to be binary (or “trinary” if you include a “suspend judgement” option), it can be continuous.
“True. However, if the signal is small relative to the noise then you could end up with essentially no detected signal, even if it’s there. Of course, this would only be important if there was some chance of the signal emerging.”
I suppose you could end up with a very small probability of the signal being there, the thing is
if you just give up the ‘95%” confidence nonsense and report the observed probability of signal present, then you shift that “decision” over to the decider.
we are 64% sure there is a signal there and the decider decides this is enough, typically
in view of the storyline work.
I’m trying to recall a specific time I did this (more concrete is better for me)
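Steven’s “report the probability, let the decider decide” suggestion could be sketched as a simple two-model comparison: score a no-signal model against a signal model and hand the decider a posterior probability instead of a yes/no verdict. This is only a toy – synthetic data, a fixed alternative, and equal prior odds assumed:

```python
# Toy version of "report the probability, let the decider decide":
# score a no-signal model against a fixed-trend model and report the
# posterior probability of the signal under equal prior odds.
# The data, trend size and noise level are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t = np.arange(30)
noise_sd = 0.5
data = 0.02 * t + rng.normal(0.0, noise_sd, size=30)  # synthetic obs

loglik_null = stats.norm.logpdf(data, loc=0.0, scale=noise_sd).sum()
loglik_signal = stats.norm.logpdf(data, loc=0.02 * t, scale=noise_sd).sum()

bayes_factor = np.exp(loglik_signal - loglik_null)
p_signal = bayes_factor / (1.0 + bayes_factor)  # equal prior odds
print(f"P(signal | data) = {p_signal:.0%}  # report this, not a yes/no")
```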
As if on cue, here is an article about attribution from the Carbon Brief website.
“Northern hemisphere’s extreme heatwave in 2018 ‘impossible’ without climate change”
https://www.carbonbrief.org/northern-hemispheres-extreme-heatwave-in-2018-impossible-without-climate-change
It was all good till I got to this quote:
“The findings reinforce the need to strengthen efforts to meet the 1.5C target, Vogel says”
We should just stop with the false hope that there is any chance of reducing our carbon footprint enough to hit this target. I just found a study that calculates the reduction in carbon emissions that adapting to meet the 1.5C target would require.
New analysis of adapting to a 1.5C target by 2050:
https://www.aalto.fi/en/department-of-design/15-degrees-lifestyles
“1.5-Degree Lifestyles: Targets and options for reducing lifestyle carbon footprints”
…
“Globally, citizens and society need to aim for per-person consumption-based greenhouse gas emissions targets of 2.5 (tCO2e) in 2030, 1.4 by 2040, and 0.7 by 2050 in order to keep global temperature rise to within 1.5 degrees. The gap analysis reveals that footprints in the developed countries studied (Finland and Japan) must be reduced by 80–93% by 2050, assuming that actions for a 58–76% reduction, necessary to achieve the 2030 target, start immediately. Even for the developing countries studied (China, Brazil, and India), a 23–84% reduction, depending on the country and the scenario, would be required by 2050.”
Which questions should we not ask and not try to answer? This would be so much easier if we just reversed the growth of human populations.
jacksmith4tx – Human population is stabilizing and declining everywhere except Africa and maybe South America.
DBB,
[Embedded UN population projection graphs for the World, Africa, Southern Asia and South America; note relative y-axes and that Asia needs tweaked a bit wrt countries selected.]
https://population.un.org/wpp/Graphs/Probabilistic/POP/TOT/
The correct answer is Africa and Southeast Asia (Pakistan through to China and every island in between that is not Australia or New Zealand).
‘we are 64% sure there is a signal there and the decider decides this is enough, typically‘
Unfortunately, there are those (including politicians) that would require 100% sure, even though that is (i) irrational and (ii) impossible. One of the problems with statistics and probability lies in communicating it to the general public, where just giving them the probability is likely to be misinterpreted. I quite like the approach used in the IPCC reports as a compromise.
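For reference, the calibrated likelihood language used in the IPCC reports maps fixed terms onto probability ranges. A minimal lookup sketch of that (AR5) scale:

```python
# The IPCC (AR5) calibrated likelihood scale as a simple lookup.
IPCC_LIKELIHOOD = {
    "virtually certain":      (0.99, 1.00),
    "very likely":            (0.90, 1.00),
    "likely":                 (0.66, 1.00),
    "about as likely as not": (0.33, 0.66),
    "very unlikely":          (0.00, 0.10),
    "exceptionally unlikely": (0.00, 0.01),
    "unlikely":               (0.00, 0.33),
}

def likelihood_term(p):
    """Map a probability to the most specific IPCC term covering it."""
    # Narrower ranges are checked first, so e.g. 0.95 -> "very likely".
    for term in ("virtually certain", "very likely", "likely",
                 "about as likely as not", "exceptionally unlikely",
                 "very unlikely", "unlikely"):
        lo, hi = IPCC_LIKELIHOOD[term]
        if lo <= p <= hi:
            return term
    return "no calibrated term"

print(likelihood_term(0.64))  # -> "about as likely as not"
```

On this scale, Steven’s “64% sure” example above would be reported as “about as likely as not”.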
Francis E Sargent — I read all the data you provide as stating every region except Africa is projected to go into decline.
Re earthquakes:
David, there were a lot of human-induced earthquakes in the past. Big ones too: Global review of human-induced earthquakes. Not just dam impoundments: Five decades of triggered earthquakes in Koyna-Warna Region, western India – A review, but mine collapses and building construction, water, oil and gas production and waste-water disposal. None so far from fraccing (OK two, out of 2,500,000 wells), unless you count the Swiss geothermal site, which was really a water-injection issue, not a fraccing issue per se. It was the cumulative amount of water pumped that caused the earthquake, not the fracturing that had been occurring progressively for months and years. The actual fraccing process only produces microseisms, usually too small to be felt, at most like a passing truck. The earthquakes you’re thinking of were caused by produced-water disposal wells. That’s produced formation water (saline aquifer), not frac fluid. The flowback frac fluid is generally trucked away, because the well isn’t yet connected to a pipeline. Tight or low permeability reservoirs (strictly, high capillary entry pressure, but that correlates closely with permeability because they’re both controlled by pore-throat size and pore-lining wettability) always have a high water saturation, and unless they have an unfeasibly high oil or gas column, always have a non-zero relative permeability to water.
The problems arising from produced-water disposal have nothing to do with whether or not the wells producing the water had been fracced, and nothing to do with fraccing. They occurred years later, as a result of the cumulative disposal of water produced from hundreds of thousands of wells, of which less than 10% was actual frac fluid (not all comes back initially). This may seem like a pedantic point, but you can’t risk-assess and mitigate a situation if you don’t understand the source of the risk. Ban fraccing tomorrow, keep those wells flowing and never refrac them, and if you keep on disposing of the water the way you’re doing so today, the earthquakes will get worse, not go away. Ban subsurface water disposal, and you can frac to your heart’s content and not have earthquakes (you can dispose of water safely, but the Oklahoma/Alabama geology in the cases I’ve looked at is inherently risky). The disposal sites are near the sea so you could do what is done offshore: clean the water up to an adequate standard (40 ppm in the UK) and dispose of it to sea. Of course that will cost more. Or you could monitor the disposal wells better. I forget which and can’t find the paper ATM, but in the case of either the Oklahoma or Alabama one, they published the pressure-rate plots from the wells and I could see where one fractured out of zone two or three times a few years before the earthquake. They should have recognised that as a rogue well running away from them, and choked it back.
The above is the equivalent of the scientific attribution of extreme events. “We need to stop fraccing because it’s causing earthquakes” is the storyline version. The trouble with the storyline version is that when it’s based on a false premise, mitigation attempts can be ineffective or make matters worse.
Dave_Geologist, it is spelled “fracking”. Thank you for the amplification and clarification.
Paul, we’ve had the discussion before where I pointed out that you were misrepresenting Susan Hough, the evidence, and the work of those in her team whom you claimed were contradicting her but weren’t. Let’s not go back down that rabbit hole.
David, it’s spelled “fraccing” and has been spelled that way since it was invented in the 1950s (in its modern form: they were dropping sticks of dynamite down wells in the 19th Century). “Fracking” is a relatively recent media invention. As I was making a technical contribution, I chose to use “fraccing”, just as I use “tonnes” not “tons”. Although I draw the line at -ize, even though I was an editor at a journal that used it because in the 19th Century, when it was founded, UK English used -ize 😦 . Some Americanisms are archaisms, not neologisms 🙂 . Ironically, some of the professional publishers and societies have debated changing their spelling, because nobody Googles for “fraccing” and so scientific papers don’t come near the top of web searches. I can live with that as I don’t see myself ever getting the hits of a social-media “influencer”, and don’t need them to earn a living 😉 .
“Unfortunately, there are those (including politicians) that would require 100% sure, even though that is (i) irrational and (ii) impossible.”
yes. but I am talking about an approach that avoids the intellectual pitfall of missing a signal because of “95%” purity.
you can’t control deciders. My suggestion to Eric and Dr. R is that a more nuanced version of D&A can avoid some of the shortcomings they identify in it.
It is true however that deciders often lack the skill. there is a funny personal story there.
Short version: we doubled the missiles in a platform and it performed WORSE.
my storyline approaches fell apart and the D&A approach fell apart.
the decider was quite upset. basic issue.. small sample sizes, shit happens
there was never enough funding to increase the N where you could get any decent power
out of the test and they all insisted on metrics that were not normally distributed, and refused to let me transform them because they were “used to” the native units.
frustration
Some deglaciation earthquake links, for those who are interested:
Did deglaciation trigger intraplate seismicity in the New Madrid seismic zone? Yes, the lithosphere really does take that long to recover. The stress recovery is slower than the isostatic rebound.
Impact of glacially induced stress changes on fault-seal integrity offshore Norway. Repeated reactivation of reservoir-bounding faults = small earthquakes; of bigger faults = big earthquakes.
Stress orientation, pore pressure and least principal stress in the Norwegian sector of the North Sea
Along-slope variation in the late Neogene evolution of the mid-Norwegian margin in response to uplift and tectonism
These are due to regional stress changes caused by the loss of an ice sheet. Or rather, the stresses were changed by the ice-sheet’s load, mainly an increase in horizontal stress through the Poisson effect (it’s an effective Poisson’s ratio, because rocks are meta-materials with holes and cracks, and are not strictly elastic on hundred-thousand-year timescales), and the vertical stress has relaxed in 10 ka but the horizontal stresses are still relaxing. We won’t live to see them.
The sort of things that will happen in human lifetimes are analogous to dam impoundment/drainage events, typically localised loads in valleys coming and going and causing very large lateral variations in stress. They’re well capable of triggering fatal magnitude 6 earthquakes.
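A back-of-envelope sketch of the Poisson-effect loading described above. Under laterally confined (uniaxial-strain) elastic loading, an added vertical stress raises the horizontal stress by ν/(1−ν) times as much; the ice thickness and Poisson’s ratio below are illustrative guesses, not values from the linked papers:

```python
# Back-of-envelope Poisson-effect loading under an ice sheet. Under
# laterally confined (uniaxial-strain) elastic loading, added vertical
# stress raises horizontal stress by nu/(1 - nu) times as much.
# Ice thickness and Poisson's ratio are illustrative guesses.
RHO_ICE = 917.0         # kg/m^3
G = 9.81                # m/s^2
ice_thickness = 2000.0  # m (assumed)
nu = 0.25               # Poisson's ratio, typical-ish for crustal rock

delta_sv = RHO_ICE * G * ice_thickness / 1e6  # vertical stress, MPa
delta_sh = nu / (1.0 - nu) * delta_sv         # horizontal stress, MPa

print(f"ice load adds ~{delta_sv:.0f} MPa vertical, "
      f"~{delta_sh:.0f} MPa horizontal stress")
```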
Dave_Geologist, I suppose you enjoy going picniccing.
Incidentally, I have no idea why “fraccing”. There’s only one c in fracture, so it couldn’t be “fracing” as it would be pronounced “fraysing” by some; and frac’ing has I’m sure been used but is awkward if more grammatically correct. I would probably have chosen “fraccing” because of the similarity between “fracture” and “accent”, hence the double-c. But then it should probably be pronounced “frak-sing” 😦 .
It’s not just nations that are divided by a common language!
Precedent David. English is a language famous for its irregularity so we’re allowed inconsistencies. As well as the explanation I proposed above, it’s possible that an odd spelling was chosen deliberately, to signify that it was a Term Of Art. Hydraulic fracturing carried out in a well is a very specific subset of fracturing, and in the profession, “fraccing” is used for that and for nothing else. For example, while I’ve worked on and with fracced wells, most of my professional usage of the word “fracture” has been in the context of naturally fractured reservoirs, which are a quite different beast.
We are all suckers for a good story. Our perception of reality is almost entirely about a story our brains confabulate to rationalize our actions.
The Science of Storytelling – by Will Storr
https://www.goodreads.com/book/show/43183121-the-science-of-storytelling
“if we are to truly understand storytelling in its grandest sense, we must first come to understand the ultimate storyteller – the human brain.”
If we want to change human behavior we will need to tell a compelling story to construct a better reality.
I have normally used the “frack” spelling, but then I say tomato and you say tomato. So who knows? And I do enjoy the occasional picnick.
‘yes. but I am talking about an approach that avoids the intellectual pitfall of missing a signal because of “95%” purity.‘
Ironically, there is no 95% purity, that is part of the “null ritual”. Fisher wrote that the appropriate significance level depends on the nature of the experiment. Even more ironically, that depends on exactly the sort of prior consideration that frequentist methods were intended to eliminate.
But you’ve probably not been using it for three decades, mike 😉 .
I’m curious about what causes the daily/weekly noise in global temperature. Are there studies? Climatereanalyzer is predicting a value of -0.4C for several intervals in the near future. This, during an El Niño.
https://climatereanalyzer.org/wx/fcst/#gfs.arc-lea.t2anom
Pingback: Extremes | Climate Etc.
“A complementary approach is to consider a storyline. For example, given that an event has occurred, how might climate change have influenced this event? If the air was warmer, then we may expect enhanced precipitation. If sea surface temperatures are high, then we may expect a tropical cyclone to be more intense. The focus here tends to be on the thermodynamics (i.e., the energy) and to take the dynamics as given (i.e., the event happened). The storyline approach, on the other hand, is more looking at how anthropogenically-driven climate change might have influenced an event that has actually occurred.”
I’m not sure it’s complementary. It’s also assessing, in a way, the probability of an event which has occurred, occurring, by an informal storyline process of looking only at how thermodynamical changes induced by warming might have made the event more likely and/or more extreme. By ignoring the dynamical circumstances which directly caused the event to occur, your storyline approach dismisses a vital component of attribution analysis, because dynamical influences leading to the occurrence of extreme weather events may be affected by natural and anthropogenic factors and it is only via a thorough examination of past weather events in relation to the specific event in question that one can disentangle (or at least try to disentangle) these various factors.
“The problem, though, is that although the two approaches are complementary, they’re not actually quite addressing the same issue. The detection and attribution approach is essentially trying to determine how anthropogenically-driven climate change influences the probability of a specific class of event. The storyline approach, on the other hand, is more looking at how anthropogenically-driven climate change might have influenced an event that has actually occurred. There is no real reason why we should prefer one approach over the other; they can both play an important role in aiding our understanding of how anthropogenic influences impact extreme weather events.”
In actual fact, extreme weather attribution studies most often look at specific events which have occurred. That is what people are most interested in. That is the rationale behind most extreme weather attribution studies – to quantitatively determine what role (if any) anthropogenic climate change might have played in the extreme event which unfolded. By examining circulation patterns, past similar weather events and basic thermodynamics, an attribution study provides a tentative assessment of the fraction of attributable risk of that specific event happening in a world warmed by GHGs. It’s not perfect by any means, but it’s preferable to forming a truncated storyline based only upon a knowledge of basic thermodynamic physics combined with the simple observation that a specific event happened, with value added judgement thrown in. I think in such circumstances, one indeed should prefer one approach over the other.
Jaime,
That’s why it’s complementary. It’s another way of assessing a possible link.
This is becoming more common, but a lot of D&A involves detecting some kind of trend and then trying to determine what caused that change (attribution).
Me on Twitter, more in response to Curry’s response to this article than to this article directly. Still trying to formulate an accessible public-facing statement! But this is sort of what I’m thinking.
1) The statistical approach to severe events is fundamentally frequentist; as a consequence it is very prone to false negatives. That an effect is undetected doesn’t prove its absence.
2) There is often no obvious Bayesian alternative formulation. How can we determine in a systematic way, given the limited evidence, whether an event was substantially more likely in the changed climate? (Or even, if it happened despite being less likely, which is conceivable.) (One possible formulation is sketched just after this thread.)
3) Since the statistical approach is very conservative, and will yield many false negatives in the event of an effect, coming up with storylines is a reasonable way to apply thinking where math won’t do. It can rarely be decisive, but sometimes it can.
4) For instance, suppose one considers the case of a large airplane flown into the structure of a tall building, which collapses within the hour. Statistically, this is meaningless. You have N=1. But it’s obvious that the collision and the collapse are not merely coincidental.
5) On the other hand, this sort of clarity is not generally available in earth science. It’s notoriously possible to come up with causality chains that indicate contrary effects. e.g., Great Lakes levels rising under climate change vs falling. Storylines are not usually decisive.
6) In general, we have to tolerate the unfortunate fact that we can anticipate a consequence long before we can detect it, and that sometimes that anticipation can be incorrect.
7) I’d point out that there is no guarantee that consequential climate changes will be monotonic. It’s plausible to me, for instance, that Great Lakes levels will attain levels both far below AND far above natural variability in the near future, perhaps a decade or two apart.
8) The more rare an event, the less statistics can inform us about it. We have to resort to physics. In the most extreme cases, the likelihood that physical reasoning will be fruitful increases while the likelihood that statistics will be fruitful declines.
9) Climatology is not mere empiricism. In fact, mere empiricism can’t possibly prove anything about climatology, if you don’t accept models as empirical, because you don’t have multiple earths to experiment upon. N = 1 no matter what.
10) If you reject models and you reject physical reasoning and insist on statistical empiricism, then N = 1 and you have made any scientific statement about climate change impossible. Which for some authors sometimes seems like the point.
11) Nevertheless we are changing the radiative properties of the atmosphere with wild abandon.
So it’s necessary to repeat that uncertainty is not our friend. If we really can know nothing, it seems to me we should stop changing those properties as soon as feasible.
12) But we have other evidence besides observation. We have physics. We have models. It is in the light of these that we must interpret observations.
13) In some cases, the conclusions are obvious. Consider record tidal flooding. We know GHGs -> warming -> melting and thermal expansion -> sea level rise. Even in that case we are informed by physical reasoning, albeit very simple and obvious reasoning.
14) Are extreme events over-represented and over-interpreted in the public discourse on climate change then? My position is that *it depends on the event*.
15) I predict that for every severe event, from now until well after all the participants in this thread shed this mortal coil, we will have the same #climateball scrimmage:
“It’s because climate change.” vs “No it ain’t.”
16) I suggest that neither position is ever really 100% rationally coherent, but in some cases one side or the other will be closer to the truth.
17) Which brings us to the generalization “climate change is already causing severe disruptive events”. Here the affirmative is in a very strong position, because given the breadth of the claim, it is the union of the other claims, while the negative is the intersection.
18) I wish we could find better ways to talk about this, though. /fin
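On point 2 of mt’s thread, the promised sketch: one possible (toy) Bayesian formulation is to put posteriors on the event probabilities in factual and counterfactual ensembles and look at the implied risk ratio. The counts are invented, and this is not Annan’s or anyone else’s published method:

```python
# Toy Bayesian attribution: Beta posteriors on exceedance probabilities
# in factual and counterfactual ensembles, then sample the risk ratio.
# The counts are invented; this is not any published method.
import numpy as np

rng = np.random.default_rng(11)

k1, n1 = 40, 1000  # exceedances / size, factual climate
k0, n0 = 10, 1000  # exceedances / size, counterfactual climate

# Jeffreys prior Beta(0.5, 0.5) on each exceedance probability.
p1 = rng.beta(k1 + 0.5, n1 - k1 + 0.5, size=100_000)
p0 = rng.beta(k0 + 0.5, n0 - k0 + 0.5, size=100_000)

rr = p1 / p0
lo, med, hi = np.percentile(rr, [5, 50, 95])
print(f"risk ratio: median {med:.1f}, 90% interval [{lo:.1f}, {hi:.1f}]")
print(f"P(risk ratio > 1) = {np.mean(rr > 1):.3f}")
```

Instead of a binary detect/no-detect verdict, this yields a full distribution for the risk ratio, which degrades gracefully as the evidence gets thinner.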
Dave said:
Only someone who is the head of the USGS can get away with writing an abstract that contains the single word “No”. That’s childish.
In the greater scheme of things, the big worry is if TPTB start considering fracking shale in the Monterey Formation, which lies within the San Andreas fault region. That’s why I’m glad that California is a progressive state and the citizens will never let it happen.
I don’t agree that a statistical approach has to be frequentist. For a start it is fundamentally impossible to attach a frequentist probability to the likelihood of an anthropogenic effect on a particular extreme event.
The difficulty usually lies in setting out the question.
Ken, yes, detection and attribution of climate change is a very large subject, of which extreme weather attribution is but a small part. There is a subtle distinction between the detection and attribution of extremes on the global scale (SREX) and the attribution (they don’t need to be detected) of specific extreme weather events. So far, only the frequency of heatwaves on a global scale has been detected to be increasing and that trend positively attributed to climate change (basically, on account of the simple fact that the globally averaged temperature has increased, which is generally reflected in a corresponding increase of mean seasonal temperatures on the regional scale).
JJ said:
That is fracking obvious. There is a discipline of statistics devoted to extremes called Extreme Value Theory. The essential analysis treats these “specific events” as probabilities within the cumulative probability of all events, and then makes predictions based on whether the tails of the distribution will be thin or fat.
I think Taleb has been on a recent Twitter rant about the risk of illnesses based on frequency of occurrence, where he is highlighting Ebola. This is an interesting video where he is applying EVT and working out how fat vs thin tail applies:
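For the curious, here is roughly what the EVT machinery looks like in practice, using scipy’s GEV implementation on synthetic block maxima (no real record is implied):

```python
# Sketch of the EVT idea: fit a GEV distribution to synthetic annual
# maxima and compute a return level. No real rainfall record implied.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)
# Synthetic annual maxima (arbitrary units). scipy's shape c is minus
# the usual EVT xi, so c < 0 means a fat (Frechet-type) upper tail.
annual_maxima = genextreme.rvs(c=-0.2, loc=50, scale=10,
                               size=100, random_state=rng)

c, loc, scale = genextreme.fit(annual_maxima)

# 100-year return level: the quantile exceeded with probability 1/100.
rl_100 = genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)
print(f"fitted shape c = {c:.2f}, 100-year return level = {rl_100:.1f}")
```

The fitted shape parameter is what distinguishes thin from fat tails, and hence how alarming the implied return levels are.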
Jaime,
I’m not quite sure what your point is. There are certainly other events for which an anthropogenic influence has been inferred. The point in the paper is that there is merit to also considering how we might have expected climate change to have influenced some event (a storyline) even if we can’t do a formal detection & attribution analysis. The reason for this is not to try and infer an unjustified anthropogenic influence, but to use our understanding of how it would be expected to influence such events to enhance the overall picture.
Taleb talking about EVT seems somewhat ironic.
Ken, my point (which I thought I made fairly clearly) is that if we’re talking about attribution of specific extreme weather events, an analysis based on a ‘storyline’ constructed from a knowledge of how thermodynamics might be expected to influence specific events is no substitute for a more rigorous attribution study which looks at atmospheric dynamics, return times of extreme events compared to the instrumental record and thermodynamics. I fail to see how it might be complementary too. That is all.
Jaime,
But no one is suggesting that it should be a substitute for a study that can consider atmospheric dynamics and thermodynamics. The suggestion is that a storyline approach can complement the more formal D&A approach, not that it replaces it.
The storyline approach may have been used to sell two recent climate research articles to Physical Review Letters. This journal is the most prestigious in physics, and so rarely do you find a climate science paper (as climate physics is rarely cutting-edge), yet two papers on the QBO were published within the last year:
— “Periodicity Disruption of a Model Quasibiennial Oscillation of Equatorial Winds” (2019)
— “Nonlinear saturation of the large scale flow in a laboratory model of the quasibiennial oscillation” (2018)
Both papers discuss the disruption of the regular QBO in 2016, and they also both suggest that climate change may be responsible for the disruption. Would these papers have been published if they didn’t suggest this attribution? Making the connection does create an aura that the research is important, and also may be used to substantiate the idea that global warming is contributing to bifurcations or tipping points in the climate.
“It’s not perfect by any means, but it’s preferable to forming a truncated storyline based only upon a knowledge of basic thermodynamic physics combined with the simple observation that a specific event happened, with value added judgement thrown in. I think in such circumstances, one indeed should prefer one approach over the other.”
huh?
Sorry but you haven’t addressed the fundamental issue that everyone here who actually does these types of analyses understands.
ya need to do both approaches because they each have a shortcoming and you want your decision maker to be fully informed of everything you know and everything you don’t know.
mt says (thanks, by the way, for such a well set out point of view):
“Me on Twitter, more in response to Curry’s response to this article than to this article directly. Still trying to formulate an accessible public-facing statement! But this is sort of what I’m thinking.
1) The statistical approach to severe events is fundamentally frequentist; as a consequence it is very prone to false negatives. That an effect is undetected doesn’t prove its absence.”
–
It would be equally true to say 1) The statistical approach to severe events is fundamentally frequentist; as a consequence it is very prone to false positives. That an effect is detected doesn’t prove its causality.
DM seems to share this view “I don’t agree that a statistical approach has to be frequentist. For a start it is fundamentally impossible to attach a frequentist probability to the likelihood of an anthropogenic effect on a particular extreme event.”
–
Your comment on “That an effect is undetected doesn’t prove its absence.” Echoes ATTP recently and invokes Taleb. Nonetheless absence is a significant factor in probability and the longer something is absent the less likely the probability is likely to be.
–
We have had massive atypical weather events in the past, horrible destruction. Rare events are just that, but in any one part of the globe, in a 5×5 km area, it would be possible to say it has had an unprecedented weather event today. Such events are therefore commonplace. Narratives focus on these and generally should not.
–
By the way you have done it again getting your view out prior to the general narrative appearing on the streets. Well informed and well done.
“ Nonetheless absence is a significant factor in probability and the longer something is absent the less likely the probability is likely to be.“
Not the most correct thing I have ever said, along the lines of I think I will call tails on the next toss since we have had 10 heads in a row as a fallacy?
Or not?
Anders –
The other is that the storyline approach involves decisions that are likely to be influenced by value-judgements.
That seems rather generous. I just read Judith post and the follow-on comments. I’d say that a fair characterization might be: “One other is that the storyline approach is seen by some as being the product of liars who are trying to scare children.”
ok dr. R.
I remember what I did in the storyline analysis.
I was able to break storylines down into discrete events… scenes if you will. Then construct Markov chains with branching and transition probabilities.
hmm in the end it didn’t help because end state events were already rare, but it gave a structure to use in follow-on simulations, and it helped you see how signals got lost in the noise. It helped you see why you might fail to get the detection of a real signal.
climate might not lend itself to the creation of discrete events however. but think of the life of a hurricane and break that down into discrete events..
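Steven’s scene-by-scene idea might look something like this in miniature. The states and transition probabilities below are entirely made up to show the structure, not drawn from any hurricane climatology:

```python
# Toy "scenes" Markov chain. States and transition probabilities are
# invented to show the structure, not taken from hurricane climatology.
import random

TRANSITIONS = {
    "disturbance": [("depression", 0.3), ("dissipated", 0.7)],
    "depression":  [("storm", 0.5), ("dissipated", 0.5)],
    "storm":       [("hurricane", 0.4), ("dissipated", 0.6)],
    "hurricane":   [("major_hurricane", 0.25), ("landfall", 0.35),
                    ("dissipated", 0.40)],
    "major_hurricane": [("landfall", 0.5), ("dissipated", 0.5)],
}
TERMINAL = {"landfall", "dissipated"}

def run_storyline(rng):
    """Walk one storyline from disturbance to a terminal scene."""
    state, path = "disturbance", ["disturbance"]
    while state not in TERMINAL:
        nxt, weights = zip(*TRANSITIONS[state])
        state = rng.choices(nxt, weights=weights)[0]
        path.append(state)
    return path

rng = random.Random(3)
runs = [run_storyline(rng) for _ in range(10_000)]
p_landfall = sum(p[-1] == "landfall" for p in runs) / len(runs)
print(f"P(landfall) ~ {p_landfall:.3f}  # end states are rare")
```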
Ken says:
“Jaime,
But no one is suggesting that it should be a substitute for a study that can consider atmospheric dynamics and thermodynamics. The suggestion is that a storyline approach can complement the more formal D&A approach, not that it replaces it.”
Mosh says:
“huh?
Sorry but you haven’t addressed the fundamental issue that everyone here who actually does these types of analyses understands.
ya need to do both approaches because they each have a shortcoming and you want your decision maker to be fully informed of everything you know and everything you don’t know.”
As everyone here apparently understands these fundamental issues, please demonstrate that an analysis of extreme weather based upon thermodynamic considerations only, with a large chunk of value judgement thrown into the mix, is complementary [i.e. mutually supplying each other’s lack] to a more rigorous scientific analysis which incorporates thermodynamics, atmospheric dynamics and an examination of past weather events – albeit far from perfectly. What does formal attribution lack which the new, complementary ‘storyline’ attribution brings to the table? Huh? As far as I can see, yes, both methods do have their shortcomings, but the one method does not make up for the shortcomings of the other, and vice versa.
That applying frequentist statistics might generate false negatives is the least of its problems.
It overlooks the deeper flaw in the argument which is the naive and incoherent concept of ‘causation’ that floats around undefined in the foundations of this approach.
Every extreme event is caused by that flapping butterfly wing…
And so is every event and ‘non-event’ that happens.
Something always happens, and what happens depends on, or is caused by, the totality of the historical context in which it occurred. This is as true for extreme events as any other. Given the complex web of teleconnections between each aspect of the climate it is impossible for ANY event to be independent of, or uncaused by, the climate change already observed in temperature, humidity, SLR, ice cover and polar vortex changes.
All events, extreme or otherwise, are caused by AGW to the degree to which that warming has changed the climate from what it would have been without the rapid addition of CO2 to the atmosphere.
Record hot days are driven directly by the climbing global temperature, the trend makes attribution obvious.
There may be no sigma-significant trend detectable in rare extreme events. But they are also events within the causative context of AGW history. Different aspects of the climate have different time-scales and sensitivity to past changes and current conditions. The event is contingent on the climate as altered by AGW. A frequentist statistical approach can only ever give an answer that climate change, equal to the degree of difference between the actual and unaltered climate, is an unconditional modifier of any probability of that event occurring. Any estimate of the probability of the event, or an altered magnitude, is doomed by the lack of any link to the underlying process of causation.
The degree to which the new normal has altered the magnitude or probability of an event can only be a narrative. Storytelling based in the physics, and especially the thermodynamics, of the system is the best way we have of estimating the actual influence AGW has on extreme events. The complexity and historical contingency of all weather events make any other approach to attribution or causation pointless.
Jaime,
You seem to be defining complementary in a rather rigid way. The point is that the formal D&A analysis has the potential to present false negatives, while the storyline approach has the potential to present false positives. One should always be aware of this issue. The storyline approach, however, allows one to consider how ACC may have influenced an event that the D&A approach is unable to really consider. Hence, the two different methods can – together – provide more information than each one could do by itself.
Frequentist statistics (applied properly) don’t have a particular false-negative problem. Set the significance level to 1 and there will be no negatives. Better still, adopt the Neyman-Pearson approach.
Having defended frequentist procedures, I need to have a second shower this morning ;o)
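Dikran’s Neyman-Pearson point can be made concrete with a toy one-sided test: fix the false-positive rate α and check the resulting power against a stated alternative, rather than ritually using 5%. All numbers are invented:

```python
# Toy one-sided Neyman-Pearson-style calculation: fix alpha, then check
# the power against a stated alternative. All numbers are invented.
import numpy as np
from scipy import stats

effect, sd, n = 0.3, 1.0, 30      # assumed signal, noise, sample size
se = sd / np.sqrt(n)              # standard error of the sample mean

for alpha in (0.05, 0.2, 0.5):
    crit = stats.norm.ppf(1 - alpha, loc=0.0, scale=se)  # threshold
    power = 1 - stats.norm.cdf(crit, loc=effect, scale=se)
    print(f"alpha = {alpha:.2f}: power = {power:.2f}, "
          f"false-negative rate = {1 - power:.2f}")
```

With this signal-to-noise, α = 0.05 buys a power of only about 0.5; relaxing α trades false negatives for false positives, which is exactly the value-laden choice the post is about.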
Ken, Izen, have you ever considered the fact that these extreme events must be viewed in the context of past events going back hundreds of years (if possible) and that the prior knowledge incorporated via the Bayesian approach (assumed to be complementary to the frequentist approach) itself relies upon a formal detection and attribution of global warming to anthropogenic factors, which only yields a confident attribution since 1950? Is it not faulty logic and somewhat circular reasoning to claim legitimacy and independence (or complementarity) for a Bayesian approach which inherently relies upon the (time limited) frequentist approach for its prior knowledge?
Dave_Geologist, “fraccing” is not to be found in the Oxford English Dictionary.
Doesn’t a mixture of the two approaches help here? Second Law and Clausius-Clapeyron tell us to expect certain things to happen, globally, on average. The geological record tells us that (geologically) short and rapid-onset warming events like the PETM and the Carnian Pluvial Event were associated with a supercharged hydrological cycle with droughts and rainfall intensity that we simply don’t see today, and didn’t see either side of the warm event. So extreme, in the case of the PETM, that the clay budget washed into the ocean measurably changed, globally.
So our prior expectation should be that extreme weather events caused or amplified by AGW are very, very likely. The only questions are how soon, where, and how big.
I’m sure there are lots of specialist words that are not found in the OED David. Practically every Linnean species name, for example. It is, however, found in USPTO patents, which I would regard as a more authoritative source for Terms Of Art. And Google Scholar has a thousand hits, which again I would regard as more authoritative. And impressive, because there must be a thousand uses in proprietary industry documents for every one in the published literature. It’s the spelling used for decades by Those Ordinarily Skilled In The Art. Like me.
Fracking has more than 50,000, although of course that includes more than just peer-reviewed articles. More tellingly, it has only 357 prior to 1990, and 763 prior to 2000. And only 2300 prior to 2010. Despite the fact that it’s been in use in its present form* since the 1950s, and the North Sea’s first gas well was fracced in 1967. Fracking is a neologism, invented by people who neither practised nor understood it. You say tomayto, and I say tomarto. Can we call it a day?
* Shale plays tend to use slickwater fracs, whereas more conventional plays added viscosifiers to prevent the frac fluid running away into their larger pores and fractures. So ironically, the shale frac fluids contain fewer additives.
Since 1910 Jaime. A limited role for unforced internal variability in 20th century warming. Science Moves On.
And, per my previous comment, we’re not just relying on statistical detection at 95%. Because, as the website says, And Then There’s Physics. And Chemistry. And Geology.
And BTW, why do those denying a role not apply the same 95% threshold to their non-attribution claims? Uncertainty cuts both ways.
Jaime,
I think that depends on what question one is considering. One can consider the physical processes associated with such events and, hence, how changes to the underlying conditions may influence them. If there was a period hundreds of years ago where TC intensity was enhanced for some reason, that doesn’t suddenly mean that we shouldn’t consider preparing for more intense TCs in the coming decades. The question isn’t simply “are TCs today, or in the near future, unusual when compared to past centuries”. The question that many would like to know the answer to (I think) is something like “are we going to see an increase in the frequency and intensity of extreme TCs, relative to what we’ve encountered in recent decades” (i.e., relative to what our infrastructure has coped with – or not, in some cases – in the recent past).
Aha, science certainly does move on Dave, but it moves sideways too. Unforced internal variability deemed not to be significant in early 20th century warming. Study finds that AMO is externally forced. Confirms earlier study that AMO is indeed externally forced – a lot more so since the termination of the LIA. Alas, the external forcing turns out to be natural!
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3948066/
Jaime,
But if the AMO is externally forced, then it’s not internal variability.
Ken, quick rewind:
“By ignoring the dynamical circumstances which directly caused the event to occur, your storyline approach dismisses a vital component of attribution analysis, because dynamical influences leading to the occurrence of extreme weather events may be affected by natural and anthropogenic factors and it is only via a thorough examination of past weather events in relation to the specific event in question that one can disentangle (or at least try to disentangle) these various factors.”
Where did I mention ‘internal variability’? Internal variability is a thing. It may be either forced or unforced. I’ve always thought it rather naive to assume that multidecadal internal variability just ticks away happily, all of its own accord, without being externally forced.
Jaime: here
Paul, why are you not worried about the existing Monterey Formation oil production? Google Geertsma or Segall compaction or see Fig. 2 of the linked paper. And Scholar Groningen earthquake or Lapeyrouse Golden Meadow Fault. The Role of Hydrocarbon Production on Land Subsidence and Fault Reactivation in the Louisiana Coastal Zone.
Incidentally this is an example of storylines I suppose. There’s a sample of one, how do you know oil production caused the subsidence and fault reactivation? Normal faults on the GoM coast are slipping all the time. How do we know it wasn’t just its time to slip? Because… And Then There’s Physics. Implemented in a simple boundary element model using software I used to use myself. The same group also run full finite-element models on similar problems with more detailed distributions of properties, generally written in MatLab as an exercise for the student even though Schlumberger’s commercial products come with a free academic licence. They’re black-boxy, and it can sometimes be hard to tell, as is the case with GCMs, whether an unexpected result is an artefact of the discretisation and parameterisation of the model, or just a coding error. I recall an example where I and the really-expert experts thought we could get away with making the basement the lower boundary, essentially making it rigid. We intuited when we saw some really dramatic local stress reductions that that was the problem, and it went away when we made it very stiff instead, tens of GPa. Unfortunately that made the model three or four times bigger.
Poly3D is a simple model where the equations are solved analytically, with the only numerical approximation being the discretisation of the subsurface into elements. If you make a very simple model like a single Somigliani dislocation, there are fully analytical solutions to compare to, worked out before there were computers. Poly3D has the advantage that you can run many models and scenarios quickly on a PC, and you can generally understand at a basic-physics level what is happening where and why, by using the property display and QC tools (or porting it to the likes of Gocad which has better tools), providing guidance on yes/no questions if not on quantitative forward prediction.
See my reply to David B for why produced water disposal would be of far more concern than fraccing. But you’d have to be really stupid to dispose of it into the Monterey. All permeability, no storage. It looks like desalination is being considered, which is a good idea in a drought-prone state. Water cleaned up to North Sea 40ppm standards would be safe to drink if not for its high salinity (100-300‰), and fine for agriculture.
BTW you do know that SAFOD drilled though the San Andreas fault?
Jaime,
If you mean “forced” as in “externally forced”, then internal variability cannot also be externally forced. It’s either some internal mode, or it’s externally forced. It can’t be both.
Indeed, which is why people argue that the long-term warming cannot be due to some internal mode of variability.
Jaime, I assumed that you were quoting the zombie meme that forcing models can’t explain temperature anomalies in the first half of the twentieth century. They can. That was a precondition of picking apart the AMO. You have to know what the forced trend is first. Apologies if you were not.
If it was the case that we did not understand pre-1950 forcings and temperature response, we would indeed have problems defining a stationary baseline to compare to.
Dikran,
Check your link in your 9:49?
[Mod: fixed]
Dave,
I guess my information is old regarding the potential for fracking the Monterey Formation
This now states there is potential for only 21 million barrels of oil from fracking, which is a single day’s worth of USA consumption:
https://www.scpr.org/news/2015/10/06/54867/no-fracking-bonanza-for-california-s-monterey-shal/
This paper confirms the AP story, which lacked a citation:
https://www.onepetro.org/conference-paper/SPE-190035-MS
That means the potential is down from ~2 years’ worth to 30 days’ worth of USA consumption. Still not really worth the risk of triggering an earthquake, which is the accepted storyline among the majority of the politicians in California.
Mixing of the ocean, which is thought to be the primary cause of long-term natural variability, is due to a combination of wind and tidal forcing according to Munk and Wunsch.
The tidal forcing would be considered an external forcing, and some fraction of the wind would be external also if it is caused by atmospheric tides. A remaining internal variability would be due to volcanic and mantle processes.
mt: “There is often no obvious Bayesian alternative formulation. How can we determine in a systematic way, given the limited evidence, whether an event was substantially more likely in the changed climate? (Or even, if it happened despite being less likely, which is conceivable.)”
Bayesian attribution studies exist. My institute did some of it. Here is a paper by James Annan:
Click to access 020ead08e2356887af1b752c807db69d1330.pdf
“A complementary approach is to consider a storyline. For example, given that an event has occurred, how might climate change have influenced this event? If the air was warmer, then we may expect enhanced precipitation. If sea surface temperatures are high, then we may expect a tropical cyclone to be more intense. The focus here tends to be on the thermodynamics (i.e., the energy) and to take the dynamics as given (i.e., the event happened).”
Let’s take the Great Lakes example. One story line could be:
“The water level of the Great Lakes is rising because of more precipitation.”
Another story line:
“The water level of the Great Lakes is falling because of more evaporation.”
Maybe I should read the manuscript first, but just giving a possible relationship that could explain what happened does not make much sense to me. You need to quantify stuff (a relationship can be easy to explain, but very weak in practice; various changes could have opposing influences on the event of interest).
I think I would prefer to say something similar to what we said before we had event attribution: for this single event, the flooding in the middle of the USA, we cannot tell whether it is related to climate change, just like getting two aces can be due to chance, but it is clear that precipitation is increasing due to global warming. We expect precipitation to increase on average by x percent and strong precipitation even by y percent, and in the middle of the USA we expect an increase of strong precipitation of a whopping/mere z percent. Maybe I am getting old, but I would even prefer such claims over event attribution; I like to focus on the biggest picture and not on one event.
If a story means explaining the physics that leads to such events, I am all for it. I love science. If it means cherry-picking one relationship that makes a convincing story to someone who does not know the full system, I am less happy.
I wasn’t recommending fraccing it Paul. It’s the wrong kind of rock, too cherty. And naturally fractured which means it behaves like a much higher permeability conventional reservoir-source system, where the oil readily migrates from the source to a trap such as an anticline rather than staying behind in the source rock, which is where they place shale oil or gas wells. That’s one reason California has so many oil and bitumen seeps at surface.
I wouldn’t be sanguine about the depleted fields. Production- or injection-induced earthquakes (other than microseisms) are invariably due to reactivation of pre-existing faults in shear. That’s a completely different mechanism from the tensile failure in engineered fracs, and can occur under depletion as well as under pressure increase. The two examples I mentioned, Groningen (Mw 3.6) and Lapeyrouse (no earthquake, the sediments are too soft), were both triggered by depletion. Bearing in mind that the earthquake scale is logarithmic, I’d be far more worried about the prospect of an induced Mw 3.5 activating the San Andreas fault than I would about a fraccing Mw 0.5–1.2.
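To put rough numbers on “the earthquake scale is logarithmic”, here is a back-of-envelope sketch using the standard moment–magnitude relation (seismic moment scales as 10^(1.5·Mw)); the magnitudes are simply the ones mentioned above.

```python
# Minimal sketch: seismic moment grows as 10**(1.5 * Mw), so an induced
# Mw 3.5 releases thousands of times more moment than a fraccing
# microseism of Mw ~1.0. Illustrative arithmetic only.
def moment_ratio(mw_big, mw_small):
    """Ratio of seismic moments for two moment magnitudes."""
    return 10 ** (1.5 * (mw_big - mw_small))

print(f"Mw 3.5 vs Mw 1.0: ~{moment_ratio(3.5, 1.0):,.0f}x the moment")
# ~5,600x, which is why depletion-induced events are the bigger worry.
```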
> As everyone here apparently understands these fundamental issues, please demonstrate that an analysis of extreme weather based upon thermodynamic considerations only, with a large chunk of value judgement thrown into the mix, is complementary [i.e. mutually supplying each other’s lack] to a more rigorous scientific analysis which incorporates thermodynamics, atmospheric dynamics and an examination of past weather events – albeit far from perfectly.
Loaded and leading questions that shift the burden of proof are usually shorter than that.
From the paper:
Click to access Attribution%20May%20final%20draft.pdf
We already know AGW. It may be time we update our beliefs accordingly.
No wonder contrarians are up in arms as we speak:
“It’s a judgement that avoiding false positives is preferable to potentially presenting false negatives.”
That is not just true in this case. There seem to be many people who feel they are responsible only for what they do, not for what they fail to do. Accepting responsibility for what you do not do is uncomfortable, because then you unavoidably do bad things. My impression is that these are people who feel they are a bad person when they do something bad, who see the world as a battle between good (typically their tribe) and evil (the others). People who think in terms of making the world a better place are more comfortable with just trying to do more good than bad.
“Instead, the idea is: take the extreme event as a given constraint and ask if thermodynamic factors are involved in such a way as to worsen it.”
It’s a false dichotomy. You can’t conveniently separate out atmospheric dynamics and thermodynamics and analyse an extreme weather event with reference to just thermodynamics, ignoring any possible impact of the dynamics. You can’t do a magnitude attribution in isolation from dynamics, because the dynamics affect the geography and frequency of the event in question, which in turn can influence its magnitude/severity/impacts. If you don’t know the dynamics, then you don’t know; it’s as simple as that, and consequently you have to pronounce low confidence upon attribution, not dream up a storyline which ignores the uncertainty and focuses instead upon conditional knowledge.
Jaime,
You seem to be ignoring that it’s possible to ask different questions, all of which are valid. Understanding if climate change is making certain extreme events more likely would require including the impact of dynamics. Determining if climate change made a specific event more severe does not necessarily require considering the dynamics.
Ever since I was young I have noticed, and liked noticing, records.
Most rainfall in Darwin, Katherine, Alice Springs; the wettest place in each state; the change that might occur in the hottest or coldest day.
I considered records as something that could always happen, somewhere in my lifetime.
Extreme events occur; Cyclone Tracy destroyed my childhood home.
Where we live in dry, hot central Victoria there was a storm 10 years ago that left a 2 km wide, 100 km long swathe of localised destruction, mainly to trees. Not rare in the USA, but very uncommon here.
There is an upward swing in temperatures, more hot records than cold in recent times, but we have had new cold records set in various areas as well.
Over a four-generation time scale of 100 years such increases would normally be taken as mere natural variation; how else to explain the past?
On a longer scale we know that we have come out of the Little Ice Age; it shaped us into what we are today by adaptation.
So natural warming? ACC? Both?
Science predicts a slow warming, on 100-year time scales.
Hard to tell between nature and CO2.
What we do know is that no matter how hard we try to do good, as Victor says, the natural response of people is to seek food, money, shelter and warmth, if available, in short-term gain.
Good luck standing in front of that juggernaut.
Let’s hope there are other negative feedbacks.
When Judith Curry postulated that reduced Arctic sea ice might have contributed to the blocking event that steered Sandy towards New York, she was using a storyline, which she should remember now. Perhaps you should bear in mind that even your political opponents are willing to consider the potential impacts of changing climates…
Ken, what you are doing there is disagreeing with me that you cannot analyse an extreme weather event looking at just the thermodynamics, insisting instead that you can. I disagree. There’s only one question being asked: did anthropogenic climate change play a part in the occurrence of a severe weather event? That can be broken down into: did climate change influence the probability of a severe weather event occurring and/or increase the severity of that event? Both answers necessarily require a knowledge of thermodynamics and atmospheric dynamics.
Jaime,
Clearly, the only question being asked is not simply whether or not climate change played a part in the *occurrence* of a severe event. We’re also interested in how it might be influencing the characteristics of events. Hence, given the existence of an event, it is perfectly valid to consider how the thermodynamic conditions might have influenced that event, without necessarily needing to consider the dynamics that led to its existence.
@-jj
“If you don’t know the dynamics, then you don’t know, it’s as simple as that, and consequently you have to pronounce low confidence upon attribution”
One key feature that we DO know about the dynamics is that they are contingent on the effects of ~50 years of AGW.
They are not the dynamics of a pre-industrial climate.
If you have a good argument for a particular dynamic being independent of all the other climate dynamics (ice cover, jet streams), then you could claim there is little effect or attribution.
But given the web of interconnections between the dynamics of the climate, you have to pronounce high confidence upon attribution. The dynamics will always contribute whatever changes AGW has already imposed upon them.
Dave, I wasn’t the one who originally claimed that the Monterey Formation shale could supply 15 billion barrels of oil via hydraulic fracturing. If the average well would have produced ~1 million barrels cumulative, this would have meant at least 15,000 fracturing events. Whether or not that would have had any impact on the triggering of earthquakes, the storyline was scary. If you live by the storyline (“Just look at all that potential oil!”), you can certainly die by the storyline (“… but the San Andreas fault!”).
“As everyone here apparently understands these fundamental issues, please demonstrate that an analysis of extreme weather based upon thermodynamic considerations only, with a large chunk of value judgement thrown into the mix, is complementary [i.e. mutually supplying each other’s lack]”
Pretty simple. The D&A approach risks false negatives: FAILURE to detect a REAL signal.
The storyline approach risks FALSE POSITIVES: assuming a signal where there is none.
A good analyst will want to look at both methods and consider both.
It’s not that hard.
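To illustrate the false-negative side with a toy example (the numbers are invented, not tuned to any real climate record): simulate a genuine trend buried in noise and count how often a standard 5% significance test misses it.

```python
# Minimal sketch of the false-negative risk: a real trend exists in
# every simulated series, yet a short noisy record often fails the
# conventional 5% detection test. Illustrative numbers only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
years = np.arange(40)            # a 40-year record
true_trend = 0.02                # real signal, units per year
trials, misses = 2000, 0
for _ in range(trials):
    series = true_trend * years + rng.normal(scale=1.0, size=years.size)
    res = stats.linregress(years, series)
    if res.pvalue >= 0.05:       # "no detection" despite a real trend
        misses += 1
print(f"false-negative rate: {misses / trials:.0%}")
# With this signal-to-noise ratio the test misses the real trend most
# of the time -- the D&A failure mode described above.
```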
Izen,
“One key feature that we DO know about the dynamics is that they are contingent on the effects of ~50 years of AGW.
But given the web of interconnections between the dynamics of the climate, then you have to pronounce high confidence upon attribution.”
No, you do not know that. Therefore you cannot pronounce judgement upon attribution. A major component of the dynamics of extreme weather in mid-latitudes, for example, is the polar jet streams, particularly the northern-hemisphere polar jet stream, which has a profound influence upon the weather in the US and Eurasia. Simply saying that 50 years of global warming MUST have affected the jet stream, therefore we MUST pronounce high confidence upon the attribution of climate change to extreme weather events, is obviously wrong.
“No, you do not know that. Therefore you cannot pronounce judgement upon attribution. ”
Jaime, you misunderstand the storyline approach. A storyline approach is like an accident investigation, like a forensic science, where the goal is understanding. There are no probabilities assigned to the storyline; rather, you try to understand the factors that drive the storyline.
Here, read this. It is good on the science and philosophy:
https://link.springer.com/article/10.1007/s40641-016-0033-y
“In general, there seem to be two basic (and at first sight orthogonal) approaches for determining the impact of one factor on an effect involving multiple factors. One is what will be called the ‘risk-based’ approach, where the change in likelihood of the effect arising from the presence of that factor is estimated. It is understood that the attribution is only probabilistic, much as smoking increases the risk of lung cancer but is neither a necessary nor a sufficient cause of lung cancer in any particular individual. This approach to extreme event attribution was introduced to the climate science community by [11] and applied by [12] to the European heat wave of 2003. The second is what will be called the ‘storyline’ approach, where the causal chain of factors leading to the event is identified, and the role of each assessed. This approach is exemplified in [13••]’s study of the 2011 Texas drought/heat wave.”
“If an extreme event was mainly caused by purely thermodynamic processes, then the risk-based analysis using a climate model is probably reliable and a strong attribution statement can be made. If, on the other hand, an extreme event was caused in part by extreme dynamical conditions, then any risk-based analysis using a climate model also has to address the question of whether the simulated change in the likelihood or severity of such conditions is credible. Without attributed observed changes, or a theoretical understanding of what to expect, or a robust prediction from climate models, this would seem to be an extremely challenging prospect. And if plausible uncertainties are placed on those changes, then the result is likely to be ‘no effect detected’. This is indeed what tends to be concluded in event attribution studies of dynamically driven extremes [31]. But absence of evidence is not evidence of absence. Can we do better?”
The Storyline Approach
Since climate change is an accepted fact [15•], it should no longer be necessary to detect climate change; rather, the question (for extreme event attribution) is what is the best estimate of the contribution of climate change to the observed event. In this case, effect size is the more relevant question than statistical significance [32]. Trenberth et al. [16••] argue that a physical investigation of how the event unfolded, and how the different contributing factors might have been affected by known thermodynamic aspects of climate change, is the more effective approach when the risk-based approach yields a highly uncertain outcome. This storyline approach, which is analogous to accident investigation (where multiple contributing factors are generally involved and their roles are assessed in a conditional manner),
###################
To put it bluntly: you might consider the reality of AGW to be in question. However, it is quite reasonable for any researcher to accept it as a given and then explain how it drove a particular event. This is a totally normal approach in other disciplines.
https://www.ncbi.nlm.nih.gov/pubmed/30880852
Haha, it is very similar to the event-based forensic approach:
“As climate change research becomes increasingly applied, the need for actionable information is growing rapidly. A key aspect of this requirement is the representation of uncertainties. The conventional approach to representing uncertainty in physical aspects of climate change is probabilistic, based on ensembles of climate model simulations. In the face of deep uncertainties, the known limitations of this approach are becoming increasingly apparent. An alternative is thus emerging which may be called a ‘storyline’ approach. We define a storyline as a physically self-consistent unfolding of past events, or of plausible future events or pathways. No a priori probability of the storyline is assessed; emphasis is placed instead on understanding the driving factors involved, and the plausibility of those factors. We introduce a typology of four reasons for using storylines to represent uncertainty in physical aspects of climate change: (i) improving risk awareness by framing risk in an event-oriented rather than a probabilistic manner, which corresponds more directly to how people perceive and respond to risk; (ii) strengthening decision-making by allowing one to work backward from a particular vulnerability or decision point, combining climate change information with other relevant factors to address compound risk and develop appropriate stress tests; (iii) providing a physical basis for partitioning uncertainty, thereby allowing the use of more credible regional models in a conditioned manner and (iv) exploring the boundaries of plausibility, thereby guarding against false precision and surprise. Storylines also offer a powerful way of linking physical with human aspects of climate change.”
@-jj
“Simply saying that 50 years of global warming MUST have affected the jet stream, therefore we MUST pronounce high confidence upon the attribution of climate change to extreme weather events is obviously wrong.”
Saying that 50 years of global warming MUST NOT have affected the jet stream, therefore we CAN pronounce high confidence upon the non-attribution of climate change to extreme weather events, is just as obviously wrong.
The jet stream’s behaviour is just one of many interconnected factors that can be said to ’cause’ an extreme event.
When a claim is made that an event cannot be attributed to AGW, what causal process do you understand has taken place? If an extreme event is not attributed to climate change, what IS its causation being attributed to?
There is a mistake in the concept of causation underlying this. It is not meaningful to partition the causation of extreme events into climate processes that are independent of AGW and those that are altered by climate change, and to ascribe a percentage probability to each.
It would be like trying to determine whether the 2 cogs on the pedal crank or the 5 cogs on the back wheel (7 total) confer ten-speed gearing on a bicycle, and attributing a percentage influence to each.
Extreme weather events are emergent phenomena of the dynamics of the climate. What we want to know is whether the climate change caused by AGW is enhancing or suppressing extreme events on balance. Frequentist stats are confined to retroactive analysis and have no predictive skill with sparse, changing, and noisy data. Informed hypotheses about the possible effect of climate change on the dynamic processes that may increase or inhibit extreme events are the only option with predictive power, because they are linked to the actual process of multi-factor causation.
Or you can be like me, Paul, and live by neither storyline, because they’re both fairytales.
I think storyline is a bad choice of words here, as evidenced by some of the other discussions, which conflate storylines in the sense of the OP (arguments from expectations based on the broad sweep of solid science and observations, which can’t be directly tied to specific events) with stories (fairytales, often concocted from nothing for polemical purposes).
“Because Clausius-Clapeyron” is a storyline based on well-understood physics. “Look what happened in the PETM and CPE” are storylines based on well-established geological records, which happen to reinforce the first storyline because what happened is just what you’d expect from physics. “Polar bears flourishing somewhere in Canada so it’s not warming, or if it is things will be fine” is a fairytale, one which like the best fairytales contains a kernel of truth.
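For anyone who wants the “Because Clausius-Clapeyron” storyline in numbers, here is a minimal sketch of the standard scaling d ln e_s/dT = L_v/(R_v T²), using textbook constant values; the temperatures are arbitrary illustrative choices.

```python
# Minimal sketch: the Clausius-Clapeyron "storyline" in numbers.
# Saturation vapour pressure rises roughly 6-7% per kelvin of warming,
# which is the physical basis for expecting heavier extreme rainfall.
L_v = 2.5e6   # latent heat of vaporisation, J/kg (approximate)
R_v = 461.5   # gas constant for water vapour, J/(kg K)

def cc_rate(T):
    """Fractional increase in saturation vapour pressure per kelvin,
    d(ln e_s)/dT = L_v / (R_v * T**2)."""
    return L_v / (R_v * T**2)

for T in (273.15, 288.15, 303.15):
    print(f"T = {T - 273.15:5.1f} C: ~{100 * cc_rate(T):.1f} %/K")
# At 15 C this gives ~6.5 %/K, the familiar Clausius-Clapeyron scaling.
```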
OK Steven, a lot to unpick there, but it’s still not making a lot of sense to me – and the inconsistencies in the argument for the storyline approach are becoming increasingly apparent.
“Jaime you misunderstand the storyline approach. A storyline approach is like an accident investigation, like a forensic science, where the goal is understanding.”
That is a perfectly reasonable motivation. But I don’t think better understanding is the primary motivation. Attribution in cases where a more formal analysis cannot give high confidence appears to be the motive. That is political.
“If an extreme event was mainly caused by purely thermodynamic processes, then the risk-based analysis using a climate model is probably reliable and a strong attribution statement can be made.”
I can’t think of many instances where extreme weather is caused mainly by thermodynamic processes. Can you? Extreme cloudbursts, thunderstorms, and intense rainfall over short periods (hours) spring to mind, but not much else. It’s really hard to separate dynamics from weather.
I totally get the false positives vs. the false negatives, and I get the suggestion that, where formal attribution fails to come up with a ‘result’, ‘absence of evidence is not evidence of absence’ and it might simply be a false negative. But likewise, presence of evidence is not evidence of presence, and false positives are a possibility. In my humble opinion, given the current lack of knowledge about the many different factors contributing to extreme weather, the probability of false positives using the unsophisticated and less robust storyline approach is higher than the probability of false negatives using formal attribution; and in any case, it is largely value judgement which determines the relative importance of either possibility. The authors seem to regard the possibility of false positives as a lesser evil than false negatives, and their value judgement as superior to the consensus attribution scientists’ value judgement for the purposes of “actionable information”. Again: political, not scientific.
“As climate change research becomes increasingly applied, the need for actionable information is growing rapidly. A key aspect of this requirement is the representation of uncertainties. The conventional approach to representing uncertainty in physical aspects of climate change is probabilistic, based on ensembles of climate model simulations. In the face of deep uncertainties, the known limitations of this approach are becoming increasingly apparent. An alternative is thus emerging which may be called a ‘storyline’ approach.”
Kind of gives the game away, doesn’t it? It’s not complementary, it’s not a tool for additional understanding; it’s a tool to use as an “alternative” when formal attribution doesn’t give the ‘right’ result for the purposes of actionable information. Better to err on the side of caution and pronounce that climate change did play a significant role in extreme weather events A and B, rather than make do with a “low confidence” assessment. Thanks, but no thanks. We should stick with the best that science can offer, even if it does risk missing an attribution here or there (and it’s not averse to returning false positives either). Storylines belong in story books – and there’s still the rather knotty issue that the first chapter was written by formal attribution scientists.
Jaime,
No, it’s not.
Why do you object so strongly to this storyline approach? All it’s suggesting is that there are cases where our physical understanding can allow us to say something about an anthropogenic influence even when a formal D&A approach may not detect a signal. It’s not suggesting that we make up stories; it’s simply suggesting that we use all the available information. Why does this seem so objectionable to you?
Ken, formal detection and attribution does use all of the available information. The storyline approach suggests focusing upon only some of the available information when other vital information (dynamics) is not available and you just can’t get a good signal on the old wireless attribution detector – which is not useful when you’re trying to form evidence-based policy and convince the more sceptical elements in society that bad weather is principally due to GHG emissions. That’s what I’m objecting to.
Jaime –
… and convince the more sceptical elements in society that bad weather is principally due to GHG emissions. That’s what I’m objecting to.
The vast majority of the “skeptical” public have no in-depth knowledge about the methodology of either approach, let alone the skills or knowledge necessary to assess the validity of either methodology.
So it seems to me that you’re concerned about a problem that doesn’t exist.
Also, I don’t think anyone is trying to convince anyone that bad weather is “principally due to GHG emissions”; that seems like a distortion of what the goal is for using this method.
I think that this method is being used to show that there is evidence of likely increases in anomalous “bad weather” due to GHG emissions.
Also.
… it’s a tool to use as an “alternative” when formal attribution doesn’t give the ‘right’ result for the purposes of actionable information
It seems your interpretation is that the use of “alternative” there means using one method to replace the other method, rather than as a complement to the other method for assessing probabilities. How do you know which is the intent? What evidence do you use for judging motivation there? I see evidence that the intent is to use both methods in a complementary fashion to assess probabilities. Although clearly, your intent is to use only one of the methods and to exclude the other.
Joshua: I don’t have evidence of intent; it’s a value judgement, just like some people don’t have good evidence of AGW’s involvement in ‘unusual’ extreme regional weather, other than a basic knowledge of thermodynamics applied globally to the entire atmosphere/ocean system. However, the authors are on record as saying that as applied climate change research develops, there is an “increasing need for actionable information”. That hints very strongly at the motivation for this method.
At any rate… it seems to me that the question of whether this method is complementary to other methods is not so much a characteristic of the method itself as a function of how it is used. If someone wants to use one method to the exclusion of another, they can do so. If someone wants to use a method as a complement to another, they can do so. Projecting the quality of being complementary onto the method itself seems a bit Rorschachian, to me.
Joshua,
“The vast majority of the “skeptical” public have no in-depth knowledge about the methodology of either approach, let alone the skills or knowledge necessary to assess the validity of either methodology.
So it seems to me that you’re concerned about a problem that doesn’t exist.”
That’s the whole point. The public listen to the experts, and the experts get their expert judgments published in the popular press. So if another Sandy slams into New York and a formal attribution analysis fails to determine with any confidence whether the storm was ’caused’ by climate change, the alternative validated experts take over and declare that ‘all the evidence suggests climate change played a significant role’, or something similar. If the alternative experts are validated and operating under the umbrella of the ‘climate change consensus’, the public are more likely to take notice, and the media far more likely to air their expert opinions.
Jaime –
It’s a reasonable question whether someone’s intent might be to convince people that extreme weather is (or isn’t) increasing irrespective of the evidence, or to have people look at more evidence that might help inform us about the probabilities. But it’s ultimately, often, unknowable. Just as it’s unknowable to anyone other than yourself (or perhaps those who know you well, personally) whether your intent is to limit the evaluation of probabilities to only evidence that is scientifically valid, or whether you want to exclude any evidence that runs contrary to your preferred narrative.
At some level, you can take people at their word. Their word is some evidence. Although that isn’t dispositive, I’d suggest it’s better than reading intent into their motivations – because one’s “reading comprehension” is obviously influenced by one’s own values and biases.
One way forward is to judge based on actions. Are people promoting the use of one method exclusively? Perhaps not dispositive, but if so, then I think that’s a fairly good indicator of their intent to not use methods in a complementary fashion.
Izen,
“Saying that 50 years of global warming MUST NOT have affected the jet stream, therefore we CAN pronounce high confidence upon the non-attribution of climate change to extreme weather events is obviously simply wrong.”
You seem to be forgetting that we have observations of the real world to guide us. Observations which show that there are no significant long-term trends in the wave amplitude of the jet stream, and that short-term fluctuations are dominated by natural variability. Therefore it is scientifically acceptable to conclude that GHG warming has probably had very little influence on the jet stream pattern and thus on extreme weather driven by dynamics.
https://onlinelibrary.wiley.com/doi/10.1002/wcc.337
Dave said:
As a rule, I don’t live by long involved anecdotal arguments — the “just-so stories” which understandably seem to be favored in earth sciences studies. I like to bring up the earthquake/tidal forcing attribution because it is amenable to law-of-large-numbers statistical studies. There are richly populated databases of recorded earthquake magnitudes, and any deterministic trajectories can be statistically correlated. These studies continue to appear with regularity; for example, this one asserts significant correlations:
Moncayo, Gloria A, Jorge I Zuluaga, and Gaspar Monsalve. “Correlation between Tides and Seismicity in Northwestern South America: The Case of Colombia.” Journal of South American Earth Sciences 89 (2019): 227–45 — arxiv.org/ftp/arxiv/papers/1804/1804.07235.pdf
One can read through this paper and figure out exactly where they are going with it. It’s tempting but I haven’t done any significant earthquake datamining myself yet, as there is so much other low-hanging fruit to look at right now.
Jaime –
the public listen to the experts and the experts get their expert judgments published in the popular press. So if another Sandy slams into New York and a formal attribution analysis fails to determine with any confidence whether the storm was ’caused’ by climate change, the alternative validated experts take over and declare that ‘all the evidence suggests climate change played a significant role’, or something similar.
Although I follow this stuff much more closely than Joe or Jane Public, I’m probably far closer to being one of “the public” than you.
So, FWIW…
My impression is that the frame of a storm being “caused” by climate change isn’t particularly in play. I don’t think that many of my fellow public think that storms are caused by climate change so much as perhaps made worse or more likely by climate change. There are some, of course, who think that GHG emissions “caused” Sandy, just as there are those who think that Katrina was evidence that gay marriage is against god’s will.
If I can serve as a stand-in for “the public,” my sense is that much expert opinion holds that, while no storm can be directly attributed to climate change, there is a strong possibility that storms like Sandy have been made more likely, or worse, by GHG emissions. I’m not sure, actually, that you’d argue that my sense of expert opinion isn’t in line with expert opinion.
As such, I find it useful to know that (as I understand what’s being said – which may well be completely off base) due to signal/noise factors and the long time horizons involved in attribution analysis, a lack of clear signal over relatively short time horizons is of somewhat limited use for assessing the longer term probabilities with respect to the risks of continued BAU GHG emissions. Thus, I find it useful to see that there are other methods that can help us to more fully understand the probabilities, for the purpose of decision-making in the face of high levels of uncertainty.
Yes, I think that there is a fairly large segment of the public who don’t clearly understand what the science says about GHG emissions and extreme weather. I think that probably runs in both directions: There are people who think the evidence of a clear signal is less ambiguous than the evidence suggests, and there are people who think that there is absolutely no evidence that there is a greater risk of extreme weather due to GHG emissions.
So then, in the end, this discussion is about whether this method is of use in helping me (or the public) to understand. My sense is that using this method in a complementary fashion can probably help with understanding. It seems to me that using it to the exclusion of more traditional attribution methodologies would be counterproductive. It also seems to me that using traditional attribution methodologies to the exclusion of this methodology is, likewise, counterproductive.
> That’s what I’m objecting to.
As if “we don’t know everything therefore we can’t know anything” was a sound principle for policy-making.
We’ve been round this loop before, Paul. With regard to Hough, how many of that paper’s 100,000 earthquakes are magnitude 8 or above? Any? How would the correlation look if you removed all those below magnitude 8? That’s why, when I read the papers you linked to in our previous discussion, I pointed out that there was no contradiction between her paper and those of her co-workers. It only appears if you ignore the third and fourth words in her title (Magnitude ≥8). I’ll probably read the paper at some point. A quick skim didn’t reveal something I could take as an effect size. With 100,000 samples, even a very weak correlation with low effect size can be statistically significant (we can be very confident that it explains 6% of the variance, but nevertheless something else explains the other 94%).
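A quick synthetic illustration of that last parenthetical (invented data, standing in for any large catalogue):

```python
# Minimal sketch: with ~100,000 samples, a correlation explaining only
# a few percent of the variance is "highly significant" yet leaves
# almost all of the variance unexplained. Synthetic data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(size=n)                  # e.g. a tidal stress proxy
y = 0.25 * x + rng.normal(size=n)       # weak real effect plus noise

r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, r^2 = {r**2:.3f}, p = {p:.3g}")
# p underflows toward zero (vanishingly small), but r^2 ~ 0.06: the
# correlation is real and detectable, yet ~94% of the variance is
# explained by something else entirely.
```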
Let’s not go down that rabbit-hole again, although it did make me think of an extreme weather analogy.
I marvel at the dynamics of the internet back-and-forth, where some arguments become so personal and entrenched. I think of this as SIWOTI – someone is wrong on the internet. I have been in my share of those, and I will normally offer to agree to disagree (ATD) when they arise now, but that seldom works. I find the endless bickering a little discouraging and think we should be able to come up with a standard means of declaring the ATD truce. I know it can be fun to grind the axe and/or grind your intellectual opponent into the virtual ground, but don’t we have better things to do? How about a paradigm shift where the parties make note of areas of agreement, if any, and then ATD?
[Mod: redacted]
Cheers
Mike
Mike –
But, maybe I am wrong about all that?
I would suggest that if you don’t benefit from reading Jaime’s comments (she’s a she, BTW), don’t read them.
More on topic:
Earthquakes are caused by a buildup of stress eventually exceeding the frictional strength of a fault; in Colombia, the driving forces are primarily plate-tectonic. Storyline: we can’t attribute individual events without knowing more, like whether there was a dam impoundment or mine collapse nearby.
The precise timing of an earthquake is influenced by small, perhaps non-tectonic stresses present when the cumulative stress has almost-but-not-quite reached the critical stress. Depending on the sign, these may trigger (bring forward in time) or delay the earthquake. Storyline: we can’t attribute individual events because, even if the Colombia paper holds up, it found only a correlation with an effect size of less than 100%.
Earthquakes can be triggered by nearby earthquakes, either by dynamic stress changes (induced by “shaking” as the earthquake wave passes) or static stress changes (a permanent change in stress state (until the next earthquake) caused by the changed subsurface geometry and the release of elastic strain which had built up around the fault). Storyline: based on basic physics and the observation that earthquake clusters and after-shocks exist.
Aftershock triggering by complete Coulomb stress changes. “… rupture directivity, which does not affect ΔCFS, creates larger peak ΔCFS(t) values northwest of the main shock. This asymmetry is also observed in seismicity rate changes but not in ΔCFS. This result implies that dynamic stress changes are as effective as static stress changes in triggering aftershocks and may trigger earthquakes long after the waves have passed” (my italics). Attribution: we have a critical observation that unpicks dynamic from static, and it finds a directional asymmetry (upper map in the linked figure) that can only be explained by a dynamic effect. Interestingly, we also find something new and non-intuitive (the italics). The passage of the seismic wave changed the rocks in some way (stress, pore pressure, permeability, material properties of the fault gouge, whatever) that resulted in delayed triggering of the earthquake. Maybe through a time-dependent process like fluid migration or strain-softening during creep, maybe it just left it poised for the right tidal stress*. It’s still not absolutely certain for every after-shock (the black dots). Some are in the overlap area between the two maps, and a few might have happened anyway. But that’s as far as we normally get in extreme weather attribution. “Climate change made this event x% more likely or y% more extreme”, not “the 2007 floods were natural but the 2011 floods were due to AGW”.
* Because tides are cyclical and plate-tectonic loading is monotonic, we can analogise tidal fault-loading with the likes of El Niño, and plate-tectonic loading with AGW forcing.
Joshua, that’s why I enclosed ’caused’ in inverted commas. More often, you will hear in the press that Sandy was made much worse by climate change, that climate change was ‘to blame’ for the damage caused by Sandy, that such severe storms are far more likely to occur with climate change, etc. Most of these claims rely upon thermodynamic considerations applied to a warming world. The fact is, Sandy was not uniquely severe; it wasn’t even a designated hurricane when it struck New York, the storm surge was made more severe by a full-moon high tide, and cold fronts merged two storms to form one “super storm”. So thermodynamics in a warming world appears to have played only a small part in the formation of Sandy, its track, and the severe damage caused mainly by the storm surge associated with this event. But that didn’t stop the press and climate alarmists pinning the blame firmly on climate change for ‘Hurricane’ Sandy. So I can imagine that a storyline narrative based upon thermodynamics could be used to officially endorse an attribution of a Sandy-like storm in the future, where more formal attribution may find little connection. This is what NOAA had to say about Sandy at the time:
“No significant increase in Atlantic hurricanes since the late 1880s has been observed.
The number of hurricanes that make U.S. landfall has not significantly increased or decreased.
There is low confidence on changes in either the number or the intensity of mid-latitude storms, and there is also low confidence on the role played by sea ice forcing.
Scientific understanding remains controversial whether there is an appreciable or detectable impact of Arctic sea ice loss on subarctic weather during Fall and early winter.
The immediate cause for the severe U.S. impacts induced by Hurricane Sandy is the fatal, albeit random, merger of two transitory weather systems. It is very unlikely that either of these weather systems individually was appreciably affected by Arctic sea ice loss. The case of the unusual merger of two weather systems into a single potent and destructive force along the eastern seaboard in late October 2012 thus is most likely an example of a great event having little underlying cause.”
https://www.esrl.noaa.gov/psd/repository/entry/show/PSD+Climate+Data+Repository/Public/Interpreting+Climate+Conditions+-+Case+Studies/Climate+Change+and+Hurricane+Sandy?entryid=98c8065f-d639-496a-a684-fe4762e1d1be#
Winsberg, Lloyd and Oreskes appear to be suggesting that their attribution method should be used to alternatively diagnose an influence from climate change in such uncertain circumstances. I don’t see why. NOAA’s assessment seems fair and balanced to me.
Meant to add: the non-intuitive bit is another reason for doing the formal, detailed, numerical attribution studies. Not just to demonstrate a degree of cause and effect more convincingly, and hopefully to enable us to make better predictions and projections, but because building that level of rigour may throw up something that leads to an improved level of understanding. In this case, that you can’t just assume that every after-shock that happened hours or days after the shaking was over is static, not dynamic. The storyline approach can leave you blind to situations where the whole is greater than the sum of the parts.
Perhaps Jaime could link to a post-2015 study that rejects the Arctic / jet stream relationship.
More recent research seems to suggest otherwise.
https://journals.ametsoc.org/doi/full/10.1175/JCLI-D-17-0299.1
https://www.eurekalert.org/pub_releases/2019-05/awih-awa052819.php
[Playing the ref. -W]
> Winsberg, Lloyd and Oreskes appear to be suggesting that their attribution method should be used to alternatively diagnose an influence from climate change in such uncertain circumstances.
Seemings can be dispelled by reading the article:
Click to access Attribution%20May%20final%20draft.pdf
Running in circles in comment threads to evade the points made in the paper one wishes to criticize should be left as an exercise to ClimateBall players at Paul’s.
@-jj
“You seem to be forgetting that we have observations of the real world to guide us. Observations which show that there are no significant long term trends in the wave amplitude of the jet stream and that short term fluctuations are dominated by natural variability.”
Direct jet stream observation started with satellites in 1979.
The jet stream was not even a recognised ‘thing’ until the Japanese balloon bombs in WW2.
It is possible to derive a proxy pattern of the behaviour of the jet stream from weather observations back to pre-industrial times, but that relies on assumptions about the observed weather and the role of the jet stream that have all been developed from post-1980s data.
https://www.nature.com/articles/s41467-017-02699-3
Long-term records of jet stream variability are thus needed to put recent trends in a historical perspective and to investigate non-linear relationships between jet stream variability, mid-latitude extreme weather events, and anthropogenic climate change17,18.
Here we reconstruct interannual variability in the latitudinal position of the August ‘North Atlantic Jet’ – NAJ back to 1725 CE by combining two summer temperature-sensitive tree-ring records. We find that extreme weather events—including floods, heatwaves, and wildfires—in BRIT and NEMED over the past 300 years have been linked to August NAJ anomalies. Our NAJ reconstruction shows that late twentieth century NAJ positions fall within the range of the preceding centuries, but that a recent increase in the number of NAJ anomalies is unprecedented. This increase in NAJ variance coincides with enhanced variance in the Pacific Basin and points to an increase in interannual meridional jet stream variability since the 1960s.
Jaime –
Given that my comments might be seen as off-topic, I’ll leave off with this comment…
I’ll point to a particular sentence in this OP:
Again, that you don’t want to use this approach as a complement to traditional approaches doesn’t imply that other people don’t want to use a traditional approach to complement this approach.
More often, you will hear in the press that Sandy was made much worse by climate change, that climate change was ‘to blame’ for the damage caused by Sandy, that such severe storms were far more likely to occur with climate change etc. etc.
I hear a variety of things in the press, including that one storm cannot be directly attributed to climate change, and that there is a possibility that any given storm is likely a relatively small % worse due to AGW. I don’t think it’s particularly useful to focus on any one feature of the press, as it might suggest that the press is monolithic. Treating the press as monolithic might encourage counterproductive alarmism among people who identify strongly on either side of the issue of climate change.
I’ve seen quite a range of press coverage about the linkages between extreme weather and GHG emissions. Some people on each side seem to get angsty about press coverage when they see coverage they don’t like. Is that a reflection of the press coverage, or a reflection of their alarmism about press coverage they don’t like?
Again, I don’t see much evidence to support rejecting this approach on the basis of a concern that it will somehow tip the balance towards people inaccurately assessing the linkages between extreme weather and GHG emissions. Such a fear seems rather alarmist to me. And on the other side, I do see a problem with categorically rejecting this methodology out of some fear about press coverage. It’s rather ironic that alarmist charges about Lysenkoism are being bandied about over at Judith’s post about her concern about fear-mongering.
Do you not see that irony?
It seems to me that as long as its proponents speak about using this method as a complement to traditional attribution methods, there is little downside to its use. On the other hand, a blanket rejection of using this method as a complement to traditional methods not only seems to be based on unfounded concerns, it also seems to me to be dubious, scientifically.
@-jj
“Therefore it is scientifically acceptable to conclude that GHG warming has had probably very little influence on the jet stream pattern and thus, on extreme weather driven by dynamics.
https://onlinelibrary.wiley.com/doi/10.1002/wcc.337”
I read the paper you linked.
Interesting that it concedes there is no statistical data capable of determining whether changes in Arctic ice cover (a rather narrow metric) HAVE had an impact on jet speed or position variability. The available data is just too noisy, short, and uncertain. So using the term “probably very little influence” seems a little, well, Bayesian?
Instead it falls back on the ‘storyline’ method of science explanation: constructing a ‘Just-So’ narrative that addresses the consensus from modelling and hindcasts that Arctic climate change will have SOME effect on the jet stream and mid-latitude climate patterns, and manages to dismiss them because they project a variety of results. It also includes the dubious argument that, because no generally accepted dynamic explanation exists for one change impacting another aspect of the climate system, it is legitimate to view these processes as independent.
As if the lack of a story means the variability can be attributed to some vague, undefined and apparently a-causal effect labelled ‘Natural’.
What ‘Natural’ variability is discernible in the jet stream record, and in others for extreme events, was all caused by a configuration of the climate system from which these events emerged. Natural events are also caused; they can warn us of just how far the climate CAN change, even if it has not recently.
Everything is connected.
A storyline promoting the idea that factor A cannot have had a reciprocal influence on factor B because the variation is ‘Natural’ is invoking a reified ghost.
Jaime’s argument seems (from the outside) cynical. All of us might reflect on the factors that may have induced his cynicism. The evil that men do, and all that. The behavior of some in the climate community has been bad, and that has tainted the views of people like Jaime towards the entire community.
I am not as cynical about motives as Jaime is. But I certainly understand why he might adopt such a stance.
IIRC, the climate community eventually decided to label the behavior of those few who produced cynicism in people such as Jaime as ‘sub-optimal.’ The blogosphere is replete with the detection of such ‘sub-optimal’ behavior. Perhaps an attribution study on the hardened and negative attitudes resulting from such behavior is in order.
Dave said:
That’s the point of the exercise. Any statistical significance drawn from the set of hurricanes in the GoM is limited in comparison to a database of potentially millions of seismic events against which one can draw correlations. Exactly which of these two is the “rabbit-hole” when it comes to making a statistically significant finding?
MT AT 6:55
“11. Are extreme events over-represented and over-interpreted in the public discourse on climate change then? My position is that *it depends on the event*….
14. Nevertheless we are changing the radiative properties of the atmosphere with wild abandon.”
My position is that it depends as well on the vocabulary of public discourse, which some are changing with wild abandon:
https://vvattsupwiththat.blogspot.com/2019/06/all-hockey-sticks-thats-fit-to-print.html
Regarding jet stream attribution, one sure way for climate science to make progress is to work forward from the most fundamental behavior, and to understand the mechanisms completely before trying to draw inferences from anomalies and extremes. The most fundamental jet stream is the QBO, as it has the highest symmetry and lowest dimensionality of the well-known atmospheric patterns. The recent attribution problem with the QBO concerns the anomaly of 2016, and whether that is due to AGW, ENSO, or perhaps an SSW event. However, the fundamental behavior is yet to be pinned down, as the models need to be heavily tuned by overfitting parameters until the observed cycle is matched. Matching any perturbation is then an overfitting of an already overfitted parameterization. That’s kind of dodgy imo.
Dave said:
Even aftershocks are amenable to statistical significance testing, as tidal forces have a high degree of specificity regarding timing and locality.
New analysis and data mining techniques can always be applied to archival data, as this study on 40 year-old Apollo data relating to attribution of moonquakes illustrates : https://www.space.com/moonquakes-moon-is-shrinking-apollo-data.html
“Winsberg, Lloyd and Oreskes appear to be suggesting that their attribution method should be used to alternatively diagnose an influence from climate change in such uncertain circumstances. I don’t see why. NOAA’s assessment seems fair and balanced to me.”
“Fair and balanced”? Err, no. The job of the analyst is to present methods and approaches without trying to put a personal thumb on the scale with regard to what is “fair” or “balanced”, two concepts that are hard to quantify. Every approach has assumptions; you want to make sure that you cover as many as practicable.
This is not that hard: you have two approaches, you present them both. There is no need, and perhaps no canonical method, for deciding which one is “more fair”. Unless you have a unit for “fairness”, you are not really making a sound judgment about the various methods.
“But, maybe I am wrong about all that?”
Oh, I enjoy Jaime’s “contribution”. Frankly, on first reading Eric’s paper I wasn’t much of a fan of the storyline approach. Then I read Jaime’s nonsense and Judith’s mischaracterization.
So that pushed me to go read the papers cited. Then it hit me.
Oh, I know this approach. Shit, I even went and found the old AIAA paper where our team laid out a similar approach to understanding extreme events in war simulations. Basic forensics.
[Playing the ref, and no need to pile on. -W]
Tom,
I’m pretty sure Jaime’s not a his. Here’s the problem with the above. Firstly, the climate science community is very large. Finding a couple of rotten apples doesn’t really reflect on the large, multi-disciplinary, global community. Secondly, some of what gets touted as bad behaviour just seems to be people either turning molehills into mountains, or a complete misunderstanding of how science works. Finally, do you really think the community that Jaime may identify with has behaved impeccably?
As a side note, I always find it ironic that some who complain that the climate science community hasn’t called out bad behaviour also criticise it for how some people (Judith Curry, Roger Pielke Jr, etc.) have been treated. It’s almost as if they only want the people they disagree with to be called out. [Edit: I should add that my point here is more that calling people out can be a slippery slope; if everyone agrees with your assessment, then it might be fine, but if they don’t, then who gets to decide?]
I gave up commenting here yesterday because it turned personal, my responses were deleted (but not the insults) and, mostly, the issues I raised failed to be addressed adequately. I see it’s got even more personal in my absence, which is entirely true to form, and Tom really should know better, being a frequent visitor and contributor over at that ‘other site’. If Mosher or any other advocate of the ‘new way’ wants to throw their hat in the ring at John Ridgway’s post, please do, and if you want to continue the character assault upon me personally there, please also do; there I shall be free to respond without the threat of Willard erasing my comments.
aTTP
Can you do anything about the advert that appears after every few comments, please? At least it appears on my laptop, via the internet browser I use. It’s quite distracting.
Thanks.
Jaime,
We try to avoid playing the REF in the comments. If there is anything that you regard as untoward, you can always contact me privately (although it might be good if you can calibrate it according to what might be expected on cliscep). If you think Tom’s comment was overly personal and insulting, maybe you should read it again.
They were – IMO – never going to be addressed to your satisfaction. If you had wanted them to be addressed adequately, maybe you need to consider at least acknowledging some of the points made by others, rather than simply holding what appears to be a rather narrow view of what is acceptable.
Something to consider is that one of the points made in Winsberg et al. is that a preference for one method over the other involves some kind of value judgement, whether explicit or not. It seems pretty clear (as evidenced by the points you were making) that those who largely oppose climate action prefer the detection and attribution method. Those who are comfortable with climate action also seem comfortable with the storyline suggestion.
My own view is that I think we should be willing to consider both, given that they will be complementary (despite your claim that they’re not). Of course, it’s possible that the storyline approach will imply a stronger connection between climate change and extreme weather than is warranted. The D&A method, however, could easily do the reverse. However, since I favour climate action, I think it’s worth considering both, even if there is a chance that we would conclude that there is a stronger link between climate change and extreme weather than is actually the case. Others may have different views. That these are value-free, though, is highly questionable.
Mark,
No, I can only get rid of those by paying for a different type of wordpress account.
[Playing the ref once more, repeating the last comment along the way. -W]
aTTP
OK, thanks for responding so quickly.
Paul, the rabbit hole was (a) the misrepresentation of Hough, using Mw < 8 counterexamples to “disprove” an observation about Mw ≥ 8 events, and (b) conflating the last straw with the load that was already on the camel’s back. I, like most geophysicists, am interested in how the camel’s back got loaded to the point where a single straw could break it. I’ll leave it there. You carry on if you wish.
Jaime,
As I said, if you have a problem, then please contact me privately. There’s no point in complaining about comments in the comments. Also, if someone has been unnecessarily personal/insulting, then I would appreciate you contacting me. I’ve been through the comments and nothing seems all that bad, but maybe I missed something.
A key point to ponder is that Winsberg et al. argue that a preference for one approach over the other is inherently value-laden. Hence, trying to argue in favour of one without at least acknowledging this seems sub-optimal. YMMV, of course.
Mark –
It may indeed be a function of the browser you’re using (or the settings in your browser, or perhaps the virus checker you use?); I get no such ads showing up in mine.
> You carry on if you wish.
Another time, preferably next autumn.
[Peddling. -W]
> the climate community eventually decided to label the behavior of those few who produced cynicism in those such as Jamie as ‘sub-optimal.’
Only if the climate community includes the auditing sciences:
https://climateaudit.org/2011/02/18/limits-to-justified-disingenuousness/
In this post, the Auditor tries to coerce NG to go beyond his declared commitments and evaluate performances from various ClimateBall players.
I duly submit that the concept of suboptimality has become vernacular in the auditing sciences the same way “the Team” did.
My concept of justified disingenuousness comes from the same episode.
Joshua, thanks for the input.
I use DuckDuckGo, precisely because (or so I thought) it was good at weeding out and reducing adverts and other annoying intrusions, but the adverts are still there!
Can you recommend a different one, please?
I’m using Chrome. Again, I’m not sure why I don’t see the ads, and some people don’t like using Google because it’s pretty invasive… but the alt-right hates Google so it can’t be all bad.
Jaime –
Over at Cliscep, you wrote:
One last thing, Ken accused me basically of preferring the formal extreme weather attribution method because I’m not a fan of climate action! This is absurd.
Why is that absurd?
He’s spent enough time on this blog to know that I’m not a fan of formal extreme weather attribution either
You can not be a fan of traditional methods and yet still prefer them to a method which is more likely to return a result you don’t like.
in fact I would go so far as to say I think it’s largely pseudoscience!
Same point…kind of like the enemy of my enemy…
It seems to me that you’re arguing that some people favor adding a different method as a complement (which you seem to re-frame as them wanting to replace the traditional method – a re-framing which seems largely unfounded to me) because they prefer the outcome it returns.
Maybe that’s so. Kind of unknowable.
But wouldn’t it be unknowable whether a similar dynamic is in play with you? Or do you have some way of assuring me that your motivational mechanism works differently than those you disagree with about attribution methods?
“He’s spent enough time on this blog to know that I’m not a fan of formal extreme weather attribution either”
So Jaime is complaining that ATTP’s description of her position is not sufficiently extremist? This must be peak ClimateBall ™.
It might be best if we returned to the topic of the post.
Mark,
I do all my browsing with either Chrome or Firefox. I have the Adblock Plus extension installed on both. As long as it’s running, I very rarely see advertisements. Some sites do require adblockers to be disabled, however.
I was fascinated by this article. I had better state my credentials: I have a BSc in Zoology, and my statistical knowledge does not extend much beyond the normal distribution and the chi-square test. I am over 70, so perhaps rather slower than I used to be. I am a climate sceptic (denier?) but open to argument. Comparing the two approaches, detection and attribution and “storyline”, I was initially very suspicious of the latter, but it struck me that both may have a place, the storyline approach encouraging future research and debate.
The problem is how these two approaches are interpreted. What I see happening is that storyline approaches are seized upon by the alarmist media and accepted as scientific fact. This is perhaps understandable; what I find less acceptable is the deafening silence of scientists who ought to know better when these alarmist narratives are promulgated. My apologies if I am repeating someone else.
You might want to consider selective enabling or disabling of ABP (per site, there’s a button in the icon bar). Depending on how annoying it is. As ATTP says, you have to pay more for an ad-free site. One user won’t make a difference, but the discount WordPress gives for ad-supported sites does rely on some users clicking through to ads. I have it turned on, but I never click through to ads, so they’re not losing any traffic from me.
I’ve just tested it on Chromium (the open-source Chrome, on Linux) and I don’t get the inline ads either way. Nor on Chrome. I do on Firefox, so maybe it depends on targeting based on cookies, browser history etc. Although I do most of my purchasing from Chromium, so I’d have thought it was more fertile soil. I believe Chrome/Chromium has tighter sandboxing between websites, one reason it tends to use more memory, so perhaps there’s more information leakage in Firefox. I did turn on the extra sandboxing against the CPU security breach a year or so back, but I’m pretty sure I turned it off when the new microcode came out. It was blocking things like opening print windows from https sites.
On a phone you can use Firefox Focus, which is stripped-down, has privacy features always on and default wiping of each session. Obviously there’s a trade off there between syncing across devices, reopening old sessions, and security. There’s no such thing as a free lunch, I’m afraid.
PaulC,
Essentially both can happen. The lack of a convincing detection and/or attribution can lead to some concluding that climate change is having no impact on extreme weather events. Similarly, some of the reporting on a link between climate change and extreme weather events can be based on a fairly weak argument about how it might have influenced the event. Neither of these is ideal, but I don’t think they’re an argument against using either method.
Scientists do often speak out against alarmist narratives. They do have limited time and don’t always have the ability to promote this in the same way as the media might be able to promote such a narrative.
However, in some (many) cases what people regard as alarmist narratives are not actually alarmist narratives. Climate change is happening, it is predominantly due to our emission of GHGs into the atmosphere, and this is almost certainly having an impact on some extreme weather events (heatwaves and extreme precipitation being two for which there is strong evidence). Sometimes scientists aren’t speaking out because there is no reason to do so.
PaulC,
I think a good example of this was Hurricane Harvey in Texas in 2017, where many rainfall records were broken.
A “storyline” approach to this was taken by Michael Mann:
https://www.theguardian.com/commentisfree/2017/aug/28/climate-change-hurricane-harvey-more-deadly
PaulC “This is perhaps understandable; what I find less acceptable is the deafening silence of scientists who ought to know better when these alarmist narratives are promulgated.”
Perhaps you are not looking in the right places. Try ClimateFeedback; they even have a tag for alarmist media articles.
It isn’t really the job of scientists to police the media. Those of us (and I would include you) who have background in science can do that and leave the scientists to get on with the science.
PaulC, since your knowledge extends to the chi-square test, perhaps you ought to ask yourself this:
Do you think all public policy decisions should wait until scientists are 95% confident, and can demonstrate that to a rigorous level of numerical attribution? And then actually wait five or ten years after that, to allow a consilient literature to emerge and be filtered through the likes of the IPCC and government science advisers?
Would you apply that reasoning to threats of war or disease? Or to a fire risk in your house? Or to the risk of your pension provider going belly-up? Or to buying a house in a flood plain? To pick an example, if you lived in California, and there was brush growing right into your garden, would you clear it now or would you wait until a year when CalFire could give you a 95%-confidence prediction that the next big fire would be in your county, not three counties away?
Or would you say: CO2 is a GHG (known for 150 years), we’re increasing it (known for decades from isotopes, and for centuries from coal/oil/gas sales records), temperatures are rising just as predicted from the basic science (allowing for natural variability like El Nino), every 1°C means air can carry about 7% more water vapour and we’re about there already (past it on land, where the temperature increase is larger), so, other things being equal, as well as it getting hotter we can expect wet places to get wetter, dry places drier, wet seasons wetter, dry seasons drier, wet days wetter and dry days drier (already observed, but attributing specific events with a less than 1:20 risk of a false positive is hard, so it has only been done a few times so far). And when we look back at past rapid, short-lived temperature rises of a few °C, like the Paleocene-Eocene Thermal Maximum or the Carnian Pluvial Event, we see clear geological evidence of an enhanced hydrological cycle, with storms like nothing we see today, multi-decade droughts in places which had a Mediterranean climate before and after, and 1000-km migrations of climatic zones. During the PETM the composition of ocean-floor muds across the world changed dramatically, in a way that is consistent with vastly increased weathering and run-off, worldwide. And conclude that, while we can’t be precise about what specific form our own version of the PETM will take, we can be very confident that the weather and climatic changes will be hugely disruptive, at a minimum to agriculture and to much more unless we move rapidly to a Jetsons society. So maybe we should do something about it. Nothing drastic, just the gradual decarbonisation agreed to in Paris. After all, it’s a small price to pay, especially if you have children or grandchildren.
Note: The first sentence in the preceding paragraph is all known and published to beyond the requisite 95% confidence. The second is more qualitative, but in courtroom terms would qualify as “beyond a reasonable doubt”. Agreeing or disagreeing with those statements is a litmus test that distinguishes sceptics from deniers.
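For anyone who wants to check the “7% per 1°C” figure in the comment above, here is a minimal Python sketch using the Clausius-Clapeyron relation (the constants are standard textbook values; this is an illustration, not the commenter’s calculation):

```python
# Fractional increase in saturation vapour pressure per kelvin,
# from the Clausius-Clapeyron relation: d(ln e_s)/dT = L_v / (R_v * T^2).
L_v = 2.5e6    # latent heat of vaporisation of water, J/kg (approximate)
R_v = 461.5    # specific gas constant for water vapour, J/(kg K)

for T_celsius in (0, 15, 30):
    T = T_celsius + 273.15
    frac_per_K = L_v / (R_v * T**2)
    print(f"T = {T_celsius:2d} C: +{100 * frac_per_K:.1f}% per K")
# Prints roughly 7.3%, 6.5% and 5.9% per K, so "about 7% per degree"
# is a reasonable round number for near-surface temperatures.
```

The exact number depends a little on temperature (warmer air gains fractionally less), which is why the figure is usually quoted as “about 7%”.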
“I think a good example of this was hurricane Harvey in Texas 2017, where many rainfall records were broken.”
The storyline is that higher surface temperatures in the Gulf of Mexico will fuel more intense hurricanes. Yet the ocean is also such a good heat sink that it throttles rising temperature. So the related storyline is that Miami has only reached 100F once (barely, next highest is 98F) according to the met records, but Minneapolis has hit that 66 times, with a high of 106F.
So it could be that the Texas corner of the GoM is the danger zone. Like the northern Persian Gulf, the combination of water and land create a zone prone to a high heat index.
This remarkable Greenland photo highlights extreme Arctic melting
h/t ATTP via twitter
https://mashable.com/article/greenland-melted-sea-ice-dogs-sledding-photo/
Many thanks to all who assisted. I’ve downloaded adblock plus, and it seems to be working.
From the link Francis provided above.
“We see now that it’s melting faster than at any point in at least the last three and a half centuries, and likely the last seven or eight millennia,” Luke Trusel, a geologist at Rowan University, told Mashable in December.
How, exactly, should an “extreme weather event” be delineated? Could anomalous warming be reasonably considered an “extreme weather event?” I’m not really suggesting that the current warming would merit that classification, but should there be some kind of standard deviation boundary to determine what is or isn’t “extreme?”
That’s a beautiful pic, btw.
PaulC:
Well, the scientific facts are sufficiently alarming as to make hyperbole superfluous. Do you think the consensus of climate specialists is alarmist? Some of them are alarmed, to be sure. A couple of weeks ago, the lead Editorial in Science, the flagship journal of the US scientific establishment, was titled A call to climate action:
…it is also abundantly clear that absent climate change mitigation, adaptation strategies will in many cases become overwhelmed, leading to unacceptable costs to both human and natural systems. The top priority must remain the elimination of the greenhouse gas emissions that are driving climate change…The climate crisis requires societal transformation of a scale and rapidity that has rarely been achieved. Indeed, the last time such a change took place was sparked by global economic depression and World War II. What enabled action then was a perceived existential threat and broad support in society. Today, we are faced with such a threat, but widening wealth disparities and special interests impede the needed change
Do you call that alarmism, PaulC?
PaulC complains that the media is ‘alarmist’. Since when? It isn’t remotely alarmist enough in my opinion as a scientist, given the gravity of the predicament currently facing humanity. One of the main myths that never seems to die is that most of the mainstream media is ‘left wing’. This is pure nonsense, as websites like Media Lens (and their excellent books) demonstrate constantly by analyzing output from the allegedly left-leaning Guardian and BBC. They are both establishment to the core. Edward Herman and Noam Chomsky dissected US mainstream media performance in their groundbreaking 1988 book, ‘Manufacturing Consent’, and through their propaganda model they exposed the true driving forces behind media output: corporate ownership and advertising. I will soon have a student examining mainstream and social media coverage of climate change and related themes, using the propaganda model as their starting point.

This leads to PaulC’s second point, about the ‘deafening silence’ of scientists speaking out to counter or debunk ‘alarmist narratives’. The role of scientists is of course to tell the truth as they see it, but I see that truth as being extremely alarming. Whenever I speak out, it is to argue the exact opposite of what PaulC believes.
Francis –
I just read, albeit at WUWT, that the photo is misleading. Do you know anything about its provenance?
Joshua,
I don’t know if the photo is misleading, but what some have pointed out is that this year’s summer melt is not unprecedented. I think it may still be slightly higher than has occurred before, but there have been previous years with quite substantial early summer melts.
Anders –
Yes, except the quote from the article indicates that while large melts this time of year aren’t unusual, this year falls into an “extreme” category.
So to go on a bit of a rant… again… I wonder how “extreme” is defined? Shouldn’t there be a clear delineation (e.g., two standard deviations)? And how is “extreme weather event” defined? Can a somewhat extended period of very warm weather, causing (some degree of?) unprecedented melting, qualify?
I suspect just like with “catastrophic” or “existential” or “pause in global warming,” or even “skeptic” or “consensus” or “alarmist,” people are engaging in these discussions without agreed upon and clear definitions of terms. That’s problematic, IMO.
There is widespread disagreement, but often people don’t even take the time to clarify what they’re disagreeing about. That could suggest that the point is often to disagree more than to actually debate the issues in play.
Joshua,
Yes, I’m also a fan of at least being clear about what one is discussing and – ideally – actually defining terms. On the other hand, even if we did so, it probably wouldn’t make much difference to the quality of the dialogue.
Does this mean the dogs have not adapted to walk on water?
Joshua,
AFAIK the photo(s) are real … here’s another one …
https://www.bt.dk/nyheder/dette-billede-siger-alt-det-er-meget-unaturligt
(use Google Translate)
The ice melts every year. The only thing that changes is the time and the place. The photo is just beautiful, but it would be surprising if it were new. Melt ponds on sea ice are a known factor one needs to take into account when computing the albedo of sea ice. If it were unprecedented, people would not be doing something which would then be quite reckless.
I would be surprised if there were a generally agreed-upon definition of when a phenomenon becomes extreme. As long as it is not close to the average (temperature, rain rate, wind speed, etc.), you can call it extreme.
When it comes to precipitation it is quite common to distinguish between severe and extreme precipitation, but I do not think there are generally accepted definitions; every study will define it for that study. Also, within the category of extremes, people sometimes distinguish between moderate extremes (happening a few times to once a year) and true extremes (happening every 100 years). But again, I do not think there are fixed definitions for this; the numbers I gave are just for illustration.
” two standard deviations”
Have to be careful here. Many extreme events, such as precipitation, have fatter tails than normal (Gaussian) statistics would suggest, so a standard deviation doesn’t make as much sense. There is a timely paper in Nature Comms this month that covers attribution of causation:
J. Runge et al. (19 authors!), “Inferring causation from time series in Earth system sciences,” Nature Communications, vol. 10, no. 1, p. 2553, Jun. 2019.
This is related to other discussions in this thread.
”two standard deviations”
That is also not particularly extreme. It happens about 5% of the time, so for daily data on average more than once a month.
Still, many studies on changes in extremes in station data do use such definitions, because you need enough data to estimate changes. Using extreme value theory you can study changes in 100-year extremes with a few decades of data, but the confidence intervals would be huge.
With which we are back to the original topic of this post. Even if you are very sure things are changing, it can take a very long time until you can show this statistically for rare events.
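To make both of Victor’s points concrete, here is a small Python sketch (entirely synthetic data; the GEV parameters are made up for illustration): the first part shows how often a “two standard deviation” value occurs under normality, and the second fits a generalised extreme value (GEV) distribution to 40 years of simulated annual maxima to show how wide the uncertainty on a 100-year return level can be:

```python
import numpy as np
from scipy import stats

# Part 1: a two-sided "2 sigma" event under a normal distribution.
p = 2 * stats.norm.sf(2)  # ~0.046
print(f"P(|z| > 2) = {p:.3f}, i.e. roughly {p * 30:.1f} days per month")

# Part 2: 100-year return level from 40 years of synthetic annual maxima.
rng = np.random.default_rng(0)
truth = stats.genextreme(c=-0.1, loc=50.0, scale=10.0)  # hypothetical climate
annual_max = truth.rvs(size=40, random_state=rng)

def return_level_100yr(sample):
    c, loc, scale = stats.genextreme.fit(sample)
    return stats.genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)

estimate = return_level_100yr(annual_max)
# Bootstrap resampling to get a rough confidence interval.
boot = [return_level_100yr(rng.choice(annual_max, size=annual_max.size))
        for _ in range(500)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"100-yr return level: {estimate:.1f} (95% bootstrap CI {lo:.1f}-{hi:.1f})")
```

The confidence interval that comes out is typically very wide, which is exactly the point: with a few decades of data you can estimate a 100-year extreme, but only with large uncertainty.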
Paul and Victor –
Thanks. That helps.
Yes, your answers do fit with the post… As does (it seems to me) this article re tropical cyclones that Judith ~~excerpted~~ cherry-picked and Anders mentioned above:

“The relatively low confidence in TC change detection results from several factors, including: observational limitations, the smallness of the expected human-caused change (signal) relative to the expected natural variability (noise), or the lack of confident estimates of the expected signal and noise levels.”

https://journals.ametsoc.org/doi/pdf/10.1175/BAMS-D-18-0189.1
Anders –
it probably wouldn’t make much different to the quality of the dialogue.
Indeed. Chickens vs. eggs and all that.
Perhaps I was unfair in saying Judith cherry-picked?
Well, consider that…
Judith referenced and excerpted the Knutson article, but left out this part from her excerpts:
Which frames a very similar argument (IMO) to that presented by Anders in this post. Yet, in reference to Anders’ post, Judith says: “
Is it just an oversight that Judith would highlight specific parts of the Knutson paper but leave out the part where they explain the rationale for a “storyline” approach? Hmmm. I suspect perhaps not.
And it’s also interesting that by her logic, we should add the names of Thomas Knutson, Suzana J. Camargo, Johnny C. L. Chan, Kerry Emanuel, Chang-Hoi Ho, James Kossin, Mrutyunjay Mohapatra, Masaki Satoh, Masato Sugi, Kevin Walsh, and Liguang Wu to the list of scientists that could (even implicitly!) be considered part of the AGW consensus – since they apparently see a policy (and scientific) relevance to a “storyline” type of approach. Unless, of course, you think that their logic is that a “scientifically dishonest” methodology has substantial potential for informing policy-makers.
Precipitation leans to fat-tail statistics because a model of that phenomenon requires a ratio distribution. Consider that a deluge featuring a very slow-moving front may linger over a location for a while. The ratio in this case is (moisture volume / speed of front). But it’s well known from stochastic math that any uncertainty in the denominator will lead to fat tails in the distribution.
Thus we find a situation like the following, and understandably have arguments over attributions to extreme events:
The speed of the front slowed to a crawl and thus dumped all that rain. In other words, the speed goes to zero, so the ratio blows up.
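A quick Monte Carlo sketch of that argument in Python (the Gamma and Normal choices below are illustrative assumptions, not a fitted rainfall model): divide a well-behaved moisture variable by a front speed that can approach zero, and the upper quantiles of the ratio blow up:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
moisture = rng.gamma(shape=4.0, scale=1.0, size=n)      # hypothetical moisture volume
speed = np.abs(rng.normal(loc=1.0, scale=0.4, size=n))  # front speed; can stall near zero
rain = moisture / speed                                 # rainfall proxy at a fixed point

for q in (0.9, 0.99, 0.999):
    print(f"q = {q}: ratio quantile = {np.quantile(rain, q):8.1f}, "
          f"numerator quantile = {np.quantile(moisture, q):5.1f}")
# The ratio's upper quantiles grow far faster than the numerator's:
# the rare near-zero speeds (stalled fronts) dominate the tail.
```

Neither input is fat-tailed on its own; the heavy tail comes entirely from the division, which is the point about slow-moving fronts.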
A Bernoulli-Gamma model worked quite well when I had a go at downscaling extreme precipitation – you can get a probability of an extreme event by integrating the upper tail of the distribution. I’d quite like to get back to working on that again some day. Conference paper with method here, journal paper with results here. IIRC, gamma is for frontal, rather than convective precipitation.
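For readers who haven’t met it, here is a minimal sketch of the Bernoulli-Gamma idea in Python (the parameters and threshold are hypothetical, and this is a simplification of, not the method in, the papers linked above): occurrence is a Bernoulli draw (wet or dry day), wet-day amounts are Gamma-distributed, and the probability of an extreme day is the wet-day probability times the Gamma upper-tail probability:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_days = 10_000

# Hypothetical daily record: ~40% wet days, Gamma-distributed amounts in mm.
wet = rng.random(n_days) < 0.4
amounts = np.where(wet, rng.gamma(shape=0.8, scale=12.0, size=n_days), 0.0)

p_wet = wet.mean()
# Fit a Gamma to the wet-day amounts, with the location fixed at zero.
shape, _, scale = stats.gamma.fit(amounts[wet], floc=0.0)

threshold_mm = 50.0  # an "extreme" day for this hypothetical record
p_extreme = p_wet * stats.gamma.sf(threshold_mm, shape, scale=scale)
print(f"P(daily total > {threshold_mm} mm) = {p_extreme:.4%}")
```

Integrating the fitted upper tail (the survival function) is what gives the probability of an extreme event, as described in the comment above.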
At the recent climate CIML conference there was a presentation by Dr. Claire Monteleoni (University of Colorado Boulder) which did a deep dive into how AI and ML are being used today to identify, measure and predict a wide range of climate extremes. To view the talk and slides for this session, go to the third video session at the 32 min. mark. Use the drop-down index to jump to topic #17, “Machine learning can shed light on climate change”.
https://slideslive.com/38917144/climate-change-how-can-ai-help-iii
That’s definitely more fat-tailed than a normal distribution.
Joshua: “I just read, albeit at WUWT, that the photo is misleading.” Surprise, surprise! It’s WUWT that’s misleading. Unless someone was dumb enough to think it was meant to show dogs walking on water, as opposed to dogs walking on meltwater on top of ice.
Article naming photographer and location.
Researcher’s Twitter thread.
ATTP: “this year’s summer melt is not unprecedented”. Perhaps not unprecedented. But certainly unusual. Double the P90.
https://pbs.twimg.com/media/D9LAgvuWkAEhj8S.jpg:large
Of course the photo is a misleading storyline. That happens often. I took this pic a few months ago — went X-C skiing one last time and discovered a lake instead. Where is the water supposed to go?
JCH — Having known Richard Feynman, I can assure you that he would not have been taken in.
Pingback: 2019: A year in review | …and Then There's Physics
Pingback: Extreme event attribution and the nature-culture duality | …and Then There's Physics
Pingback: Moral models | …and Then There's Physics