Record Warmth

Michael Mann, Stefan Rahmstorf and colleagues have a new paper on the likelihood of the recent record warmth. What they’re investigating is the run of warm years we’ve seen recently – 13 of the 15 warmest years, and 9 of the 10 warmest, have happened since 2000. They want to determine how likely this is from internal variability alone, and how likely it is once anthropogenic and natural forcings are included.

Essentially, they generate a large number of time series and then test the likelihood of observing these runs of warmest years. For time series intended to represent internal variability only (estimated using the residuals after the CMIP5-estimated forced response is subtracted from the observed temperatures) the likelihood is 1-in-10000 for 13 of the 15 warmest years, and 1-in-770 for 9 of the 10 warmest. When anthropogenic and natural forcings are included, these become 72% and 83%. They also considered a scenario in which internal variability was assumed to have much more persistence than is considered likely, which increases the likelihoods from internal variability alone to 1-in-100 and 1-in-80. However, as the paper says
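To get a rough feel for the kind of calculation involved, here is a toy Monte Carlo sketch (mine, not the paper’s actual method – the AR(1) noise model, series length and parameter values below are all assumptions):

```python
import random

def run_probability(rho=0.5, n_years=135, top=15, recent=15, need=13,
                    n_trials=5000, seed=42):
    """Toy Monte Carlo: fraction of AR(1) noise series in which at least
    `need` of the `top` warmest years fall in the last `recent` years.
    All parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    innov = (1.0 - rho * rho) ** 0.5   # keeps the marginal variance near 1
    hits = 0
    for _ in range(n_trials):
        x, series = 0.0, []
        for _ in range(n_years):
            x = rho * x + innov * rng.gauss(0.0, 1.0)
            series.append(x)
        # indices of the `top` warmest years, counted in the recent window
        warmest = sorted(range(n_years), key=series.__getitem__)[-top:]
        if sum(i >= n_years - recent for i in warmest) >= need:
            hits += 1
    return hits / n_trials

print(run_probability())   # essentially zero: modestly persistent noise
                           # almost never produces such a run by itself
```

Increasing `rho` (more persistence) makes such runs more likely, which is the effect of the paper’s “persistent red noise” sensitivity test.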

even using a too-conservative null hypothesis of persistent red noise, the recent observed record warmth is still unlikely to have occurred from natural variability alone.

and they conclude that

the recent record temperature years are roughly 600 to 130,000 times more likely to have occurred under conditions of anthropogenic forcing than in its absence.

I should probably add that they also considered individual years and found that the likelihood of these warm individual years occurring due to internal variability alone is much smaller than the likelihood of the runs of warmest years. This is because an individual year has to actually cross some warming threshold, rather than simply be part of a run of warmest years in the record.

As far as I can tell, this all seems pretty obvious. Judith Curry, on the other hand, seems less than impressed, and appears to be suggesting that we should simply assume that we don’t know anything. Nic Lewis, surprise surprise, seems to think that

[i]t is a paper that would be of very little scientific value even if it were 100% correct.

You might imagine that this is because Nic also thinks that it’s a pretty obvious result. You might also be wrong. He then goes on to list various criticisms of the paper, including that the record is too short to determine internal variability, that a detailed attribution study should have been performed, that they should have considered models with lower sensitivity (as if only a few hundred to a few hundred thousand times more likely would change the overall conclusion significantly), and that there are problems with their assumptions about long memory noise.

The latter issue – which I’ll comment on briefly – is essentially whether or not internal variability could drive long-term warming or cooling. The answer to this is almost certainly “no”. You could try reading this Realclimate post. Richard Telford has written about this in the context of Doug Keenan’s claims. I’ve written about it too.

The basic issue is very simple. If you want internal variability to drive, for example, long-term warming, then the energy has to come from somewhere. It could come from the oceans, but you can’t extract energy from the oceans indefinitely and, if the temperature exceeds the equilibrium temperature, it would radiate away quite rapidly (the heat capacity of the land and atmosphere is low relative to the oceans). Alternatively, maybe some internal warming could drive a radiative response that sustains a planetary energy imbalance. The problem here is that the physical processes involved would essentially be the same as those that act as feedbacks to forced warming. So, if you want to argue for high sensitivity to internally-forced warming, you’re essentially arguing for high climate sensitivity overall, and most of our observed warming would be anthropogenic anyway – which is essentially what this paper is illustrating.

Anyway, it’s been a long day and that’s about all I can think of saying. If anyone has anything to add, feel free to do so through the comments.

This entry was posted in Climate change, Climate sensitivity, Global warming, Judith Curry, Science. Bookmark the permalink.

62 Responses to Record Warmth

  1. semyorka says:

    “The basic issue is very simple. If you want internal variability to drive, for example, long-term warming, then the energy has to come from somewhere. It could come from the oceans, but you can’t extract energy from the oceans indefinitely ”
    Cannot have thermal expansion of oceans and oceans causing the warming.

  2. Cannot have thermal expansion of oceans and oceans causing the warming.

    Indeed, although that doesn’t preclude an internally-driven radiative response producing a planetary energy imbalance. What makes that unlikely is that it seems highly unlikely that we could be very sensitive to internally-driven warming, while not being sensitive to forced warming.

  3. Tom Curtis says:

    Do you have a link for Lewis’ comments on “The Likelihood of Recent Record Warmth”. The link above is to your comments on Lewis on another topic.

  4. So, if I am understanding the results correctly…the % likelihood ranges from 0.00001(ish)% without natural forcings and anthro. contributions, all the way to 72-83% when included?

  5. Windchaser says:

    Hey Tom,

    See Judith’s post at http://judithcurry.com/2016/01/26/on-the-likelihood-of-recent-record-warmth/ and scroll down to “Nic Lewis’ contribution”.

  6. John Hartz says:

    ATTP: Michael Mann has just posted a link to your OP on his Facebook web page.

    https://www.facebook.com/MichaelMannScientist/?fref=nf

    Watch out for an attack of the denier drones.

  7. Tom Curtis says:

    Coincidentally, I recently calculated the probability of recent temperatures occurring in any given year, based on Marcott et al. To do so I took the variance from the mean (representing the range of the individual realizations of the reconstruction) plus the 0.13 C additional variation introduced as noise to increase the variance to match historical precedents (see figure 3 plus discussion in Marcott et al). The total variance, expressed as a standard deviation, represents (if I am correct) the probability across all realizations of a year having the particular level of warmth. It does not represent the probability within any particular realization, where autocorrelation must be taken into account, and for which we do not know the mean. Here are the results:

    In a way, this just presents the information in Marcott 2013 Figure 3 in a different format. It is, however, interesting to note just how low the probability of a year as warm as 2015 was in recent millennia prior to industrialization.

    Of more interest is that “…the probability of 1990-2009 twenty year average temperatures of the 950-1900 preindustrial baseline is 0.6%. The probability of 1996-2015 twenty year average temperatures of the 950-1900 baseline is just 0.03%.” (emphasis added) For comparison, Mann et al (2016) calculate a 1 in 10,000 (0.01%) chance of 13 of the last 15 years being the warmest globally. The probabilities are of different things, and not directly comparable. Nevertheless, the two methods agree in showing recent temperatures to be very improbable assuming natural variability alone, and the use of the Marcott reconstruction addresses Lewis’ concern that the record is too short to quantify natural variability.

  8. JCH says:

    Professor Curry’s Mann obsession is now beyond Pavlovian.

  9. @aTTP “If you want internal variability to drive, for example, long-term warming, then the energy has to come from somewhere.”

    You’d also need to explain how the additional concentration of atmospheric CO2 due to humans burning fossils fuels is not retaining the sun’s energy in the system, like very basic physics tells us it must be. I know it’s an obvious point but it needs saying. 🙂

  10. Oale says:

    Have you taken a screen capture or an archival copy of the statements? Of course there’s always a remote possibility of their computers being hacked…

    On to OT question, are there any studies of how much of the warm blob in North Pacific is attributable to AGW?

  11. Harry Twinotter says:

    JHC.

    I agree. Dr Curry is obsessed with Dr Michael Mann. It is all quite disturbing.

  12. Harry Twinotter says:

    Dr Curry knows how to use weasel words.

    “i) Errors and uncertainty in the temperature record, and reconciling the surface temperature record (which shows some warming in the recent decades) against the global satellite record (which shows essentially no warming for the past 18 years).”

  13. Michael Hauber says:

    The paper obviously has no scientific value, and is certainly subject to a range of uncertainties which can be attacked by those who wish to do so. But whether the real answer is 1 in 60,000, or 1 in 3,000, or 1 in 5,000,000, the general answer should be obvious to anyone with a scientific clue, and I can’t imagine that anyone with a scientific clue would have been doubting global warming several years ago, and then be convinced that because such a series of warm years occurred that global warming must be true after all.

    It’s kind of like expecting a photo of the earth from space to change anyone’s opinion on whether the earth is flat…

  14. Marco says:

    Of major relevance and worth a link in the OP:
    http://www.realclimate.org/index.php/archives/2016/01/how-likely-is-the-observed-recent-warmth/#comment-641555
    With a response from both Mike Mann and Stefan Rahmstorf to Curry’s criticism. Very brief, by necessity, but I especially like Stefan pointing out that one of the things she claimed was not taken into account *was* actually taken into account, and due to the way it was handled would result in an overestimation of internal variability.

  15. Marco says:

    Regarding this criticism: “that they should have considered models with lower sensitivity”
    I wonder whether this would really change anything. Models with lower sensitivity also match the historical record, and I don’t think they automatically do so by having a larger internal variability. I will gladly admit that my knowledge of GCMs is very limited, so please correct me if I am wrong.

  16. Tom,

    Do you have a link for Lewis’ comments on “The Likelihood of Recent Record Warmth”. The link above is to your comments on Lewis on another topic.

    That’s bad design on my part. Nic Lewis says that in Judith’s post (or, it’s what Judith claims he says in an email).

    Kerry,

    So, if I am understanding the results correctly…the % likelihood ranges from 0.00001(ish)% without natural forcings and anthro. contributions, all the way to 72-83% when included?

    Yes, that sounds about right. Depends which run you mean, though; the 9 out of 10 warmest years was more like 1-in-770 for internal variability only.

  17. Marco,
    Thanks, I just saw those comments on Realclimate. As far as lower climate sensitivity models are concerned, I’m not sure, but I think you may have a point. Presumably the higher climate sensitivity models that do match the historical record have higher internal variability to mask some of the forced warming.

  18. Michael,

    The paper obviously has no scientific value

    Huh?

  19. jsam says:

    So, if 2016 follows a similar pattern to 1983 and 1998, then it will be much warmer at the surface than the already record year of 2015. The Met Office 2016 annual forecast seems to suggest this, expecting a similar (but perhaps slightly smaller) jump to that seen from 1982 to 1983 and from 1997 to 1998.

    http://www.climate-lab-book.ac.uk/2016/expectations-for-2016-global-temperatures/

  20. angech says:

    To paraphrase Michael
    “But whether the real answer is 1 in 3,000, or 1 in 60,000, or 1 in 5,000,000 the general answer should be obvious to anyone with a scientific clue,”
    and hence Prof Mann who actually said
    “roughly 600 to 130,000 times more likely to have occurred under conditions of anthropogenic than in its absence”.
    The problem is this paper is assigning a probability to a cause as opposed to an occurrence.
    The occurrence is a run of years of rising temperatures.
    We can assign a probability to this. We have past temperature records.
    Easily. Ask Tamino or Lucia.
    and the probability will be in a very narrow range specific to the temperature records we use and the length of time we specify.
    The answer might be 1 in 600 or it might be 1 in 130,000, but the answer will be highly specific to whichever number is chosen, if the paper is to be of any scientific value.
    Now when you try to specify the probability of a causation you are on much shakier ground.
    You have to make specific assumptions about the causes of warming as well as assess the temperature records.
    Which set of temperature records to use. What climate sensitivity to CO2 to use. Effects of aerosols. How much extra CO2 are humans producing. How big is natural variability. Are we fully aware of changes in heat from the sun, volcanic events, the degree of cloud cover and the El Nino effect in this particular case.
    If we assert we know all these things we can come up with a very specific figure again and get Michael’s general answer right as well.
    If we have a high degree of doubt as to climate sensitivity and natural variability etc then we get a blow out or spread of probabilities like Professor Mann has given us.
    “The paper obviously has no scientific value” said Michael.
    I disagree.
    The paper clearly shows exactly how little is known about the major issues affecting climate change by a major proponent of the effects of Climate Change.
    I applaud Prof Mann’s willingness to show the large range of uncertainty that dogs Climate Science today.

  21. The occurrence is a run of years of rising temperatures.

    No, I don’t think that is what it’s doing. It’s determining the probability of a run of warmest years at the end of a time series (i.e., 13 of the 15 warmest years occurring after 2000).

    We can assign a probability to this. We have past temperature records.
    Easily. Ask Tamino or Lucia.

    Firstly, the only temperature record with sufficient temporal resolution is the instrumental record and we only have one of those. How can we use that to determine the likelihood of 13 out of the 15 warmest years happening after 2000? It actually happened.

    You have to make specific assumptions about the causes of warming as well as assess the temperature records.

    Yes, of course. However, they used a time series that was meant to be a realistic representation of internal variability and one that assumed much more persistence than is considered realistic. Even in the latter case, there was only about a 1-in-100 chance.

    Which set of temperature records to use.

    Yes, but that’s why they said which record they used.

    What climate sensitivity to CO2 to use. Effects of aerosols.

    I don’t think this makes much difference. If climate sensitivity is lower, that would suggest less internal variability and, hence, it being even less likely that it would occur through internal variability alone.

    How much CO2 extra are humans producing

    I think we essentially know this.

    The paper clearly shows exactly how little is known about the major issues affecting climate change by a major proponent of the effects of Climate Change.

    I think you’re spending a bit too much time at Climate Etc.

    As far as I can see it, the only way you could significantly increase the likelihood of a run of warm years as we have seen, is to make assumptions about internal variability that are probably unphysical, or logically inconsistent.

  22. chris says:

    angech

    “The problem is this paper is assigning a probability to a cause as opposed to an occurrence.”

    That’s not right… the probabilities are assigned to occurrences within the context of specific scenarios. A parallel might be assigning the probability that an individual is over 6.5 feet tall: a. in the general population and b. in the population of college basketball players (probabilities in two separate scenarios). The probability is much lower in a. compared with b., and we wouldn’t be surprised at this since we have a decent understanding of heights in the general population and in college basketball players.

    In much the same way, we aren’t surprised that the probability (or likelihood) of record warmth as specified in the paper is very much higher when taking the effects of anthropogenic greenhouse gas release into account compared to the scenario where this factor is removed, since we have a pretty good handle on the contribution of natural forcings during the last hundred or so years and a pretty decent understanding of the effects of augmenting the greenhouse effect.

  23. The Very Reverend Jebediah Hypotenuse says:

    Curry:

    The Mann et al. paper is assuming that all of the warming has been caused by humans, which given our current state of knowledge is an unwarranted assumption.

    Question:
    Did Judith Curry even read the paper?

    Better question:
    Can intelligent and well-informed people just stop paying attention to Judith Curry?

    The “arguments” of the ‘defender-of-integrity’ of climate science do not deserve the implicit legitimization that critical analysis gives them.

    This isn’t even climate-ball, it’s pigeon chess.

  24. tlsmith says:

    Does anyone understand the role of correlation in the results? I mean, can anyone explain it to me please? 🙂
    This year’s weather is going to be much like last year’s weather, and so on. If it remains highly correlated for a long time then, inevitably, 10 or 15 previous years will also be much the same. It means that it is almost impossible to determine a trend without a very long time series indeed.

  25. tlsmith,
    I’m no expert at the details of the correlation in the statistical models, but from a physics perspective, the idea is – I think – that we expect the temperature this year to depend somewhat on the temperature last year. However, given the heat capacity of the land/atmosphere, you don’t really expect it to correlate over very long time intervals. In other words, if we have a slightly warm/cool year, the system should cool/warm back towards equilibrium within a year or so; you can’t sustain an anomalous temperature for a long time; well, unless something else (like our emissions, for example) is forcing the system to have a tendency to warm.
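To put a rough number on that relaxation argument, the e-folding timescale of a temperature anomaly is roughly the heat capacity divided by the radiative response. The values below are my own illustrative assumptions, not numbers from the paper:

```python
# tau = C / lambda: roughly how long a temperature anomaly persists
# before it radiates away.
SECONDS_PER_YEAR = 3.156e7
C = 1.0e7     # J m^-2 K^-1: atmosphere plus a shallow land layer (assumed)
lam = 1.2     # W m^-2 K^-1: net radiative response to warming (assumed)
tau_years = C / lam / SECONDS_PER_YEAR
print(f"relaxation timescale ~ {tau_years:.2f} years")  # a few months
```

Swap in an ocean mixed layer (C nearer 10^9 J m^-2 K^-1) and tau stretches to decades, which is why a sustained anomaly needs either the ocean or a forcing.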

  26. tlsmith,
    Actually, I should probably add that the point of the correlations in the analysis is that you would get a very different result if you assumed that the temperature each year in an internal-variability-only time series was entirely random, but bounded (in other words, randomly selected to be near zero, but with no dependence on previous years) compared to what you would get if the temperatures are correlated (this year depends on last year, for example). In the latter, you might expect a run of warm years to be more likely than in the former.
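To see how big a difference the independence assumption makes, here is the naive counting calculation for exchangeable (uncorrelated) years – a sketch with assumed numbers, not anything from the paper:

```python
from math import comb

def naive_prob(n_years=135, top=15, recent=15, need=13):
    """If all orderings of the years were equally likely (no correlation),
    the chance that at least `need` of the `top` warmest fall in the last
    `recent` years is a hypergeometric tail."""
    favourable = sum(comb(recent, k) * comb(n_years - recent, top - k)
                     for k in range(need, top + 1))
    return favourable / comb(n_years, top)

print(naive_prob())   # ~2e-14: vastly smaller than the paper's 1-in-10,000,
                      # because real annual temperatures are correlated
```

This kind of naive combinatorics is what produced the 1-in-67-million type claims mentioned later in the thread.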

  27. anoilman says:

    How does this look on the usual IPCC temperature projection graphs?

  28. BBD says:

    Like this:

    Source: Greg Laden

  29. BBD says:

    Notice that if you use updated forcings for CMIP5, instead of the obsolete AR5 runs, 2015 is bang in the middle of the ensemble.

  30. BBD says:

    Record warmth happens when records are broken. Obvs. So yes, record warmth.

    I think the relatively modest contribution of EN to the 2015 record has been discussed elsewhere.

  31. BBD says:

    From your Zhang et al. link, JCH:

    The effective radiative forcing (ERF), as newly defined in the Intergovernmental Panel on Climate Change’s Fifth Assessment Report (IPCC AR5), of three anthropogenic aerosols [sulphate (SF), black carbon (BC), and organic carbon (OC)] and their comprehensive climatic effects were simulated and discussed

    […]

    Experiments based on the Representative Concentration Pathway (RCP) 4.5 given in IPCC AR5 shows the dramatic decrease in three anthropogenic aerosols in 2100 will lead to an increase of ∼2.06 K and 0.16 mm day−1 in global annual mean surface temperature and precipitation, respectively, compared with those in 2010.

    Let’s hope plenty of the world’s forests and peat deposits are burning merrily by the end of the century or we could be in for record warming again.

  32. anoilman says:

    BBD: I would think that an El Nino year would be above the mean from the simulations.

  33. BBD says:

    NOAA/NCDC is above the mean, Oily.

    And you need to be patient. Let’s see what the next few years bring.

  34. JCH says:

    Yes, record warmth, and there are, really… two pipelines.

  35. Michael Hauber says:

    No scientific value – I doubt the paper will be used to advance scientific knowledge in any way. I don’t expect anyone to write a research paper that will build on this result. I can’t see it making our projections of future climate change any better, or helping us understand the mechanisms of climate change any better. Perhaps someone might write a research paper to try and dispute the result, but I don’t expect that to have any scientific value either.

  36. No scientific value – I doubt the paper will be used to advance scientific knowledge in any way.

    Why, because you don’t like what it suggests? I actually find your suggestion a little bizarre. Firstly, there are many papers published that are never, or rarely, cited. It would be wonderful if we all only published papers that had impact, but that’s a simplistic idea of how science should work. That’s not to say that it couldn’t be improved, but the idea that a paper with no obvious impact has no value is odd (it may only have value to those who worked on it, but they will still be influenced by it and may go on to other important research). Also, I fail to see how this paper is likely to fall into that category. It’s already had impact. Not liking it, or whatever reason you have for claiming it has no scientific value, doesn’t change that.

  37. Harry Twinotter says:

    Michael Hauber.

    I am confused. Are you saying the authors should not have done the study?

    It seems a reasonable question to me: what is the probability the run of warm years was due to internal variability? It is a verification question.

    I suspect you are trying to build a straw man here.

  38. angech says:

    “Firstly, the only temperature record with sufficient temporal resolution is the instrumental record and we only have one of those. How can we use that to determine the likelihood of 13 out of the 15 warmest years happening after 2000? It actually happened.”
    – More than one instrumental record. [Yes, but that’s why they said which record they used.]
    two issues
    CO2 has gone up. We have produced more CO2. A CO2 increase “should” put the global temperature up. Hence the world should be warmer.
    No argument. No probabilities needed.
    Has it gone up? How much has it gone up? How important is this?
    The concerns of most people at your site.
    These questions are not addressed by this paper.

    In response to the likelihood of 13 out of 15 warmest years occurring after 2000: this is a fact, it happened, and you know the question is specious.
    The reason is that the world has been slowly heating up anyway.
    If you looked at records in 1990 you could say the same thing. If you looked at them in 1935 you could say the same thing. If you looked at them in 1880 you could say the same thing.
    You get the drift. In a warming world there will always be multiple segments where 13 out of 15 years are the warmest up to that point on a rising line.
    Whether there is anthropogenic CO2 or none at all.
    Misquoting Stan Lee.
    “To link the two is a more extraordinary claim and requires extraordinary proof.”
    The range of probabilities quoted seems less than even ordinary proof.

  39. JCH says:

    You get the drift. In a warming world there will always be multiple segments where 13 out of 15 years are the warmest up to that point on a rising line. … – angech

    Should be simple to confirm.

  40. pbjamm says:

    No fair JCH, you are asking ang(t)ech to support his claim with data.

  41. BBD says on January 28, 2016 at 7:49 pm,

    “Let’s see what the next few years bring.”

    A few years might not be long enough. Please don’t forget the NMO of Steinman, Mann, and Miller (2015) (anyone reading this can see

    http://www.realclimate.org/index.php/archives/2015/02/climate-oscillations-and-the-global-warming-faux-pause/

    and elsewhere online for a nice and interesting graph of the PDO and the NMO). The NMO is more general than the PDO in that the NMO covers the entire Northern Hemisphere while the PDO does not.

    As I understand it, the PDO may have just recently started another multidecadal positive phase, but the NMO may not have. Therefore, to avoid a higher probability of accidentally giving even a little potential ammunition to those who don’t accept mainstream climate science, I think it best to wait until we can verify that the NMO has entered its next multidecadal positive phase, during which we should expect to see a rather strong multidecadal acceleration that could send the average global temperature strongly above the ensemble mean. Even though this verification might take a little more than just a few years from now, let’s please go with better safe than sorry.

    (We should of course expect that this rather strong increase in question should be nonmonotonic and that, unfortunately, those who don’t accept mainstream climate science will do what they’ve done so many times, which is to try to exploit a nonmonotonic nature of a short term or even a long term increase to try to make their case against the mainstream science.)

  42. anoilman says:

    BBD: I know…

  43. Marco says:

    Angech:
    “In a warming world there will always be multiple segments where 13 out of 15 years are the warmest up to that point on a rising line.
    Whether there is anthropogenic CO2 or none at all.”

    And then you manage to say
    “To link the two is a more extraordinary claim and requires extraordinary proof.”

    Let’s just say that people with an understanding of basic physics will ask you to provide ‘proof’ for your extraordinary claim that the earth can just be warming without any known physical forcing (other than the increase in greenhouse gases) going in a direction that could lead to such warming.

    In other words, you don’t seem to understand that it is *you* who makes the extraordinary claim!

  44. BBD says:

    Keef&Amanda

    A few years might not be long enough. Please don’t forget the NMO of Steinman, Mann, and Miller (2015)

    Sure, I don’t dispute this. It was a quick aside to OilMan, not a terribly well thought-out comment.

    Therefore, to avoid a higher probability of accidentally giving even a little potential ammunition to those who don’t accept mainstream climate science

    This desperate contrarian nitpickery has become a real problem over the years, hasn’t it? It’s a shame they don’t have any decent scientific arguments to talk about instead.

  45. paulski0 says:

    The updated effective radiative forcing of major anthropogenic aerosols and their effects on global climate at present and in the future

    Of course, the 2.5K historical cooling produced by that aerosol forcing would imply pretty much zero net anthropogenic influence on observed historical warming.

  46. BBD says:

    paulski0

    I did think the estimate seemed rather high, Faustian bargain notwithstanding.

  47. Michael Hauber says:

    My last comment is that I did not intend ‘no scientific value’ to mean no value for other purposes such as communicating to the public.

  48. JCH says:

    Michael – I don’t know if you noticed it, but Peter Thorne and some others did a somewhat similar regional analysis… I think lower 48.

  49. TonyL says:

    The claim that the odds of 13 out of 15 years warming without ACO2 is 1 in 10,000 allows us to compute the implied odds that any given year will warm without ACO2.

    P = choose(M,N) * Pw^N * Pc^(M-N)

    where P is the probability of getting N out of M events, Pw is the implied odds of a year being warmer than the previous, and Pc is the probability of a year being cooler than the previous.

    Setting up the equation:

    0.0001 = choose(15,13) * Pw^13 * Pc^2

    Since Pc is also 1-Pw, we can substitute

    0.0001 = choose(15,13) * Pw^13 * (1-Pw)^2

    This becomes:

    0.0001 = 105 * Pw^13 * (1-Pw)^2

    Solving for Pw, I compute the odds that a year is warmer is only about 0.37, with the odds of cooling being the remainder of 0.63.

    Is it reasonable to arrive at only a 37% chance that a given year will warm? In examining HadCRUT4, 27 of the 50 years from 1900 to 1950 were warmer than the previous. In the fifty years from 1950 to 2000, also 27 of those years were warmer than the previous. It seems the odds of a year being warmer than the previous is a steady 0.54, and this value is markedly larger than the implied odds from this claim.
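TonyL’s implied-odds equation can be checked numerically with a simple bisection. This just reproduces the calculation above; it doesn’t address whether the underlying assumptions hold:

```python
from math import comb

def implied_prob(target=0.0001, n=15, k=13, lo=1e-6, hi=0.8):
    """Bisect comb(n, k) * p**k * (1 - p)**(n - k) = target for the smaller
    root; the left-hand side is increasing on [0, k/n], so bisection is safe."""
    f = lambda p: comb(n, k) * p**k * (1.0 - p)**(n - k) - target
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(implied_prob(), 2))  # 0.37, matching the figure quoted above
```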

  50. MartinM says:

    You’re assuming that annual temperatures are independent. They aren’t.

  51. You’re assuming that annual temperatures are independent. They aren’t.

    And, I think, that is one of the reasons why this paper was published. It was partly to address the claim that the chance of this being due to internal variability was something like 1 in 67 million. I think those calculations ignored the correlations and hence produced a much smaller probability than is actually likely. It is still small (1 in 10000 to about 1 in 1000) but much bigger than the 1 in 67 million that some were claiming.

  52. MartinM says:

    …also, it’s not 13 of the last 15 years warming, it’s 13 of the last 15 being records. Your calculation wouldn’t give you the probability of a year being warmer than the previous even if temperatures were independent.

  53. MartinM says:

    And, I think, that is one of the reasons why this paper was published.

    Right, people were just doing naive combinatorics which, in this case, vastly overstated the rarity of such record runs.

  54. Actually, I’m not even sure this is correct

    13 of the last 15 being records.

    I think it was simply that 13 of the 15 warmest years have occurred since 2000. It doesn’t require that each of those 13 years was a record.

  55. MartinM says:

    Yes, you’re right; I managed to confuse myself over what the 15 was. 15 warmest years, not 15 most recent.

  56. I was similarly confused initially 🙂

  57. BBD says:

    [First Witch:] When shall we three meet again
    In thunder, lightning, or in rain?

    [Second Witch:] When the hurlyburly’s done,
    When the battle’s lost and won.

    And when somebody explains where all the energy is coming from.

  58. TonyL says:

    Martin, your claim of dependence is interesting, and seems credible to me. I guess I have some research to do. Thanks!

  59. Bernard J. says:

    I think those calculations ignored the correlations and hence produced a much smaller probability than is actually likely. It is still small (1 in 10000 to about 1 in 1000) but much bigger than the 1 in 67 million that some were claiming.

    Autocorrelation aside, and similar to BBD’s observation, there’s another reason why the chances of the contemporary warming being a random fluctuate are vanishingly low. That reason is simple physics.

    Thomas Bayes would have had rather a lot to say about this.

  60. Bernard J. says:

    …fluctuation
