There’s an interesting post on RealClimate that discusses fallacies in discussing extreme weather events. It’s well worth a read. It discusses what one might expect in terms of trends in extreme weather events, and it points out – quite correctly in my view – that physics also tells us something about what we might expect. It also makes an analogy about a loaded die. The basic point is that even a loaded die will give what appears to be the expected result if only a small number of tosses is considered. So, just because you don’t see anything suspicious doesn’t immediately mean the die isn’t loaded.
Roger Pielke Jr, who works in this general research area, commented to say,
Hi Guys, Thanks for your interest in our work. We went into that smokey bar and did some math on those dice. You can see it here:
which is a little odd in that, apart from the update a couple of days ago, the original post doesn’t appear to mention Roger Pielke Jr’s work. The “see it here” refers to a paper, Emergence timescales for detection of anthropogenic climate change in US tropical cyclone loss data, by Ryan P. Crompton, Roger A. Pielke Jr and K. John McAneney. The paper concludes with
This study has investigated the impact of the Bender et al. Atlantic storm projections on US tropical cyclone economic losses. The emergence timescale of these anthropogenic climate change signals in normalized losses was found to be between 120 and 550 years. The 18-model ensemble-based signal emerges in 260 years.
My understanding of what Roger is suggesting, with his comment, is that this paper shows that even if the dice are loaded, an anthropogenic signal will only be seen in US tropical cyclone losses in about 200 years’ time. Roger finishes his comment with,
The math is easy, and we’d be happy to re-run with other assumptions. Or you can replicate easy enough.
So, I did. I spent my weekend painting the entrance hall in my house and working out how to replicate this study.
The basic data is in Table 1 from Crompton et al. (2011), which I include below. It shows the different Tropical Cyclone categories, the number of landfalling storms in each category between 1900 and 2005, the percentage of total damage associated with each category, and the projected change in each category over the next 80 years, as determined by different models. Here, I’m going to consider only the CMIP3 ensemble.
The model in Crompton et al. (and which I’m trying to replicate here) works by determining, for each year starting in 2005, whether or not a Tropical Cyclone in each category occurs. The likelihood of an event occurring changes with time, following a linear trend based on the projected changes from the models (in what I’m presenting here, I’m only considering the CMIP3 model ensemble). The biggest – and maybe only – difference between what I’ve done and what’s done in Crompton et al. (2011) is that they use a random number generator based on a Poisson distribution to determine the number of events in each category every year, while I’m simply using a basic uniform random number generator. Their approach allows for more than one event in a given category per year, while I simply assume that a landfalling event occurs if the random number is less than the expected count per year for that category.
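Since the math is easy, here’s a rough Python sketch of this step. The per-category rates and projected changes below are illustrative placeholders, not the actual Table 1 values, and the function implements my simpler Bernoulli-style version (at most one event per category per year), not the Poisson version used in the paper.

```python
import random

# Placeholder values, NOT the Crompton et al. (2011) Table 1 numbers:
# landfalls per year for each category, and the projected fractional
# change in frequency over the next 80 years (CMIP3-style).
RATES = {"cat3_and_weaker": 0.6, "cat4_5": 0.2}
PROJECTED_CHANGE = {"cat3_and_weaker": -0.10, "cat4_5": 0.40}

def landfall_prob(cat, year, start=2005, horizon=80):
    """Occurrence probability for one year, trending linearly from the
    historical rate towards the projected change over `horizon` years."""
    frac = min(max(year - start, 0) / horizon, 1.0)
    return min(RATES[cat] * (1.0 + PROJECTED_CHANGE[cat] * frac), 1.0)

def events_in_year(year, rng=random):
    """At most one landfall per category: an event occurs if a uniform
    random number is below that category's probability for the year."""
    return {cat: int(rng.random() < landfall_prob(cat, year)) for cat in RATES}
```

By 2085 the category 4 and 5 probability has risen by the full projected 40%, while the weaker categories have dropped by 10%.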
The next step is to associate a loss with each landfalling event. This comes from the tables in the Appendix of this paper. There is a loss associated with each event during the period 1900-2005, normalised to 2005 values. One of these losses is then randomly selected whenever an event of the same category occurs in the model run. This way we can build up a time series of possible future losses, which I’ve done as an accumulated loss. This differs slightly from Crompton et al. (2011), since their time series was (I think) annual loss, but the two should be essentially consistent. The figure below is an example of a time series from one of my model runs. The 1900 – 2005 data is actual, normalised data plotted in decade intervals. From 2005 onwards, the values are generated as described above.
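The loss-drawing step might look something like the sketch below. The historical losses here are again made-up placeholders standing in for the Appendix tables, and the event-generating function is passed in as an argument so any sampler (Bernoulli or Poisson) could be used.

```python
import random

# Placeholder normalised losses (2005 US$ billions) for historical events;
# stand-ins for the Appendix tables, not the real values.
HIST_LOSSES = {"cat3_and_weaker": [0.5, 0.8, 1.2, 3.0],
               "cat4_5": [5.0, 12.0, 30.0]}

def simulate_accumulated_losses(years, events_fn, rng=random):
    """One model run: each simulated landfall is assigned a loss drawn at
    random from the historical losses in the same category, and the
    losses are accumulated year by year."""
    total, series = 0.0, []
    for year in years:
        for cat, n_events in events_fn(year, rng).items():
            for _ in range(n_events):
                total += rng.choice(HIST_LOSSES[cat])
        series.append(total)
    return series
```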
The process is repeated 10000 times so as to produce 10000 possible future loss timeseries. The next step is to determine when the trend will emerge. Crompton et al. (2011) do this by determining at what time 95% of the models have a positive trend. Since my timeseries is accumulated loss, what I’ve done is determine the time at which 95% of the models have a trend that is greater than the 1900-2005 trend.
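The emergence test can be sketched as follows: fit a linear trend to each accumulated-loss run up to each year, and report the first year at which 95% of the runs have a fitted trend exceeding the 1900-2005 trend. The least-squares fitting details here are my own choice; the paper’s exact test may differ.

```python
import numpy as np

def emergence_year(runs, years, baseline_trend, frac=0.95):
    """`runs` is an (n_runs, n_years) array of accumulated losses.
    Returns the first year by which at least `frac` of the runs have a
    least-squares trend (fitted from the start) above `baseline_trend`."""
    runs = np.asarray(runs, dtype=float)
    for j in range(2, len(years) + 1):
        x = np.asarray(years[:j], dtype=float)
        slopes = np.polyfit(x, runs[:, :j].T, 1)[0]  # one slope per run
        if np.mean(slopes > baseline_trend) >= frac:
            return years[j - 1]
    return None  # never emerges within the simulated period
```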
Crompton et al.’s analysis gives 2265 (i.e., 260 years from 2005); I get 2195. A little different, but broadly consistent, and the difference may well simply be because I didn’t use random numbers from a Poisson distribution to determine the number of events per year. So, given that the analyses are slightly different and I haven’t spent a great deal of time on this, I would argue that the results broadly agree. The emergence timescale of anthropogenic climate change signals in normalised losses will be around 200 years from now. So, Roger Pielke Jr is right.
Or is he? What this analysis is illustrating, I think, is the time at which virtually all models show an increased trend. So, yes, this is the time at which we would almost certainly see an increased trend (assuming the assumptions are appropriate), but it doesn’t tell you how likely it is that we’d see an increased trend at an earlier time. Clearly, if 95% of models show an increased trend by 2200, some of them should show an increased trend before 2200. This is actually fairly straightforward to determine. To do this, I take my 10000 models and determine – for each model – the time at which the trend is greater than the 1900-2005 trend and is statistically inconsistent with this trend (at the 2σ level). I also insist that this persists (i.e., it can’t just be some blip in the time series). For each model I therefore have a year at which a statistically significant increased trend emerges, and I can average these and determine the standard deviation. I get 2102 ± 56. This seems consistent with the earlier analysis. From this, the time at which 95% of models would show a statistically significant increased trend would be 2102 + 2×56 = 2214.
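For the per-run test, something like the sketch below is what I have in mind. The 2σ comparison here uses an assumed standard error for the baseline trend, and I’ve taken “persists” to mean a few consecutive years – both of those details are my own choices rather than anything fixed by the paper.

```python
import numpy as np

def first_significant_year(series, years, base_trend, base_se, persist=3):
    """First year at which this run's fitted trend exceeds the 1900-2005
    trend by more than 2 standard errors, and keeps doing so for
    `persist` consecutive years (so a single blip doesn't count)."""
    flags = []
    for j in range(2, len(years) + 1):
        slope = np.polyfit(np.asarray(years[:j], float), series[:j], 1)[0]
        flags.append(slope > base_trend + 2.0 * base_se)
        if len(flags) >= persist and all(flags[-persist:]):
            return years[j - persist]  # first year of the persistent run
    return None
```

Applying this to each of the 10000 runs gives the distribution of emergence years whose mean and standard deviation are quoted above.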
This result seems much more nuanced than that presented in Crompton et al. (2011). It may well be more than 200 years before we’re virtually certain to be able to detect an increased trend in normalised losses, but there’s a 50% chance that it will occur before 2100 and about a 15% chance that it will occur before 2045. I would certainly argue (assuming I haven’t made some silly mistake) that this is relevant. Surely it’s not simply when we’re virtually certain to detect an increased trend, but also how likely such an increased trend is in the coming decades.
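Those probabilities follow directly from the 2102 ± 56 result if the per-run emergence years are roughly normally distributed (an assumption on my part):

```python
from statistics import NormalDist

# Fitted distribution of per-run emergence years: mean 2102, sd 56.
emergence = NormalDist(mu=2102, sigma=56)

p_before_2100 = emergence.cdf(2100)  # roughly a 50% chance before 2100
p_before_2045 = emergence.cdf(2045)  # roughly a 15% chance before 2045
year_2sigma = emergence.mean + 2 * emergence.stdev  # the 2102 + 2×56 point
```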
There’s also more we can do. The analysis here considers total losses. For the period 1900-2005, about half the losses come from category 4 and 5 storms, and the other half from category 3, and weaker, storms. What if we repeat the analysis, but only consider certain categories? The figure below shows an example of a simulation where I’ve considered only category 4 and 5 storms (diamonds) and one where I’ve considered only category 3 and weaker storms (crosses). The 2005 datapoint is the actual losses for each subset at that time. This illustrates something that should be fairly obvious from the table I included above. The models suggest that category 4 and 5 storms should increase in frequency, while the weaker storms reduce in frequency. Therefore, with time, more of the losses will come from the more extreme storms.
If I repeat the analysis but consider only certain categories, then we would be virtually certain to detect a change in trend for the category 3 and weaker storms by 2136, and for the category 4 and 5 storms by 2130. If I consider the time for each model at which the trend will be statistically inconsistent with the 1900 – 2005 trend, I get 2036 ± 40 for the category 3 and weaker events, and 2071 ± 33 for the category 4 and 5 events. This would seem to suggest that even if we can’t detect a change in the total loss trend for 200 years or more, we could detect a change in how the losses are distributed almost 100 years earlier than this. There is also a 50% chance that we could detect such a change by the mid 21st century.
I should say that I’ve found this quite interesting. I think I’ve roughly reproduced the Crompton et al. (2011) results but, as usual, if I’ve made some kind of blunder, feel free to let me know. All the math was certainly straightforward and all the information/data was easy enough to access. I agree with the basic result that, given the assumptions, an increased trend in normalised losses will only be virtually certain to appear by sometime after 2200. However, I do think that the same analysis suggests that there is a 50% chance that a statistically significant increased trend in total normalised losses will be detectable within the next 100 years. I also think that the same analysis suggests that we will be able to detect changes in the distribution of the losses within the 21st century. Even if the total normalised losses are essentially unchanged for the next 200 years, I do think that a world in which the losses are roughly equally distributed between weaker (category 3 and less) and extreme events (as in the 20th century), is not the same as one in which most of the losses will be due to the more extreme events (category 4 and 5).
I should add that I mainly did this out of interest and because if I am going to comment on someone else’s work, I should at least try and understand it. Although I can see why understanding possible future trends in normalised losses might be policy relevant, if you really want to understand how tropical cyclones are likely to evolve, one should consider data associated with the actual events themselves, and not focus on loss or damage. Given that this has got rather long, and possibly somewhat convoluted, I would recommend reading Kerry Emanuel’s articles in 538 and a very good one on understanding the tail risk.