Come on Andrew, you can get this.

Andrew Montford, an influential climate blogger in the UK, is – once again – suggesting that man-made climate change is not clear because,

we are unable to demonstrate a statistically significant change in surface temperatures because of the difficulty in defining a statistical model that would describe the normal behaviour of surface temperatures

This seems to be based on work by someone called Doug Keenan, who argues that the Met Office admits that claims of a significant temperature rise are untenable. Doug Keenan argues that significance means that the temperature rise could not reasonably be attributed to natural random variation. He then seems to argue that significance can only be determined using a statistical model. Furthermore, he suggests that there is a statistical model (a driftless ARIMA(3,1,0) process) that would allow us to conclude that the surface temperature record could indeed be simply some random natural variation. Finally, he suggests that the Met Office have admitted that their statistical model is inadequate.

Well, here is the Met Office’s response and I don’t think it is quite saying what Doug Keenan or Andrew Montford are claiming. I also don’t think it’s all that complicated, so I don’t see why someone with a science degree, who runs a science blog, can’t get this. Essentially – as I understand it – the Met Office’s statistical model is indeed, in some sense, inadequate. This, however, does not mean that there is a statistical model that is adequate. It means that there are no statistical models that are adequate.

Why is this? Well, statistical models are used to determine the properties of a dataset. For example: what is the trend? What is the uncertainty in the trend? However, they cannot – by themselves – tell you why a dataset has those properties. For that you need to use the appropriate physics or chemistry. So, for the surface temperature dataset, we can ask the question: are the temperatures higher today than they were in 1880? The answer, using a statistical model, is yes. However, if we want an answer to the question of why the temperatures are higher today than they were in 1880, then there is no statistical model that – alone – can answer this question. You need to consider the physical processes that could drive this warming. The answer is that a dominant factor is anthropogenic forcings that are due to increased atmospheric greenhouse gas concentrations; a direct consequence of our own emissions.
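As a concrete illustration of the first kind of question, here is a minimal sketch of estimating a trend and its uncertainty with a statistical model. The data are synthetic (an invented 0.8 C/century trend plus white noise), standing in as a hypothetical proxy for a real surface-temperature series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a surface-temperature anomaly series:
# a 0.8 C/century trend plus white noise (the real record is more
# complicated; this is purely illustrative).
years = np.arange(1880, 2014)
true_trend = 0.008  # degrees C per year (invented)
temps = true_trend * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# A statistical model (here, ordinary least squares) answers
# "what is the trend, and what is its uncertainty?"
X = np.column_stack([np.ones(years.size), (years - years[0]).astype(float)])
coef = np.linalg.lstsq(X, temps, rcond=None)[0]
resid = temps - X @ coef
sigma2 = resid @ resid / (temps.size - 2)
cov = sigma2 * np.linalg.inv(X.T @ X)
trend, trend_se = coef[1], float(np.sqrt(cov[1, 1]))

print(f"trend = {trend * 100:.2f} +/- {2 * trend_se * 100:.2f} C/century")
# Nothing in this calculation says *why* the trend is non-zero.
```

The fit recovers the trend and an error bar, but, as the post argues, attributing that trend to a physical cause is outside the scope of the statistical model.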

So, Andrew, I really do think you can get this. It’s not that tricky. I appreciate that it may be embarrassing to have to admit that you’ve misunderstood something so simple. The alternative, however, is that some may think that you’re explicitly trying to mislead people, and that – I assume – is not your intent. Of course, in the interest of true skepticism, I’m happy to be convinced that I’m wrong or have misunderstood something.

This entry was posted in Climate change, Climate sensitivity, Global warming, IPCC, Satire, Science. Bookmark the permalink.

229 Responses to Come on Andrew, you can get this.

  1. This post was getting rather long, so I’ll add a further comment. There are two simple errors in Doug Keenan’s assumptions. Firstly, he says

    The Answer claimed that “the temperature rise since about 1880 is statistically significant”. This means that the temperature rise could not be reasonably attributed to natural random variation

    Well, this is wrong. Statistical significance is normally relative to some null. For example, is the trend statistically different from zero? Is the temperature today statistically different from what it was in 1880? The answer does not depend on the reason. Secondly, he then says

    In statistics, significance can only be determined via a statistical model.

    In a sense this is correct, but if you’ve defined significance as being statistically different from what you’d expect from natural random variations, then it’s wrong. To determine that, you’d also need to include some physical model. This – I think – would be what is normally called an attribution study.

    On that note, Ed Hawkins’s recent post about Signal emergence is worth a read – although, as I discovered, this is not quite the same as an attribution study.
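The point that significance is relative to a null can be illustrated with a toy calculation; the decade samples below are synthetic, not real station data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical decade samples (synthetic, for illustration only):
# annual anomalies for 1880-1889 and 2004-2013, differing in mean by ~0.8 C.
early = rng.normal(0.0, 0.15, 10)
late = rng.normal(0.8, 0.15, 10)

# Welch's t statistic against the null "the two means are equal".
mean_diff = late.mean() - early.mean()
se = np.sqrt(early.var(ddof=1) / early.size + late.var(ddof=1) / late.size)
t_stat = mean_diff / se

print(f"difference = {mean_diff:.2f} C, t = {t_stat:.1f}")
# |t| >> 2: the difference is significant *relative to this null*.
# The test says nothing about what caused the difference.
```

The test rejects the null "no difference in means"; whether the difference is natural or anthropogenic is simply not a question it addresses.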

  2. An earlier post of Doug Keenan at Bishop Hill was a badly mistaken criticism of radiocarbon dating. (That led Nic Lewis to write a post at Climate Audit, where he didn’t accept fully Doug Keenan’s claims, but that was also erroneous or at least misleading.)

    Doug Keenan seems to have problems in understanding statistical methods.

  3. johnrussell40 says:

    Picking away at one line of supporting evidence in order to ‘prove’ there’s no human-caused warming is a typical denial MO. But what about the fact that atmospheric CO2 has increased from 270 to 400 ppm since 1850 due to fossil fuel burning? Clearly, unless they also deny the greenhouse effect—which the more serious pseudo-sceptics don’t do because they know it will undermine their credibility—however much they quibble about statistics, how could the observed warming be anything other than largely man-made? It’s the multiple lines of evidence which result in a rock-solid scientific consensus.

  4. john,
    That is a standard problem. This one is particularly irritating because it’s essentially a silly circular argument. “Yes, we’ve warmed. However, we can’t tell, using a statistical model, if that warming was natural or not. Also, the only way we can tell if it is statistically significant is using a statistical model. Therefore we don’t know if it is natural or not.” You’d like to think that people might eventually be embarrassed to have made such an argument. You’d also probably be wrong.

  5. An earlier post of A at X was a badly mistaken criticism of radiocarbon dating. (That led B to write a post at Y, where he didn’t accept fully A’s claims, but that was also erroneous or at least misleading.) A seems to have problems in understanding statistical methods.

    What a beautiful summary of the climate “debate” outside of the scientific literature. Thanks, made my day.

  6. Or more general: An earlier post of A at X was a badly mistaken criticism of K. (That led B to write a post at Y, where he didn’t accept fully A’s claims, but that was also erroneous or at least misleading.) A seems to have problems in understanding L.

  7. Victor,
    I wrote that in the expectation that many readers can identify those posts, and, through that, recall where the name of Doug Keenan has come up before.

    I may have been misled by my own case. I thought I had seen the name before, and checked where that had actually taken place.

  8. John Mashey says:

    ATTP: don’t hold your breath.
    1) Recall that Montford’s “The Hockey Stick Illusion” has a horizontal line on the cover to represent the shaft of the hockey stick.
    2) And Montford was quite happy to quote un-evidenced hearsay found in my favored dog astrology journal, used for a key point, and possibly rising to defamation of Jon Overpeck, since there was never the slightest proof of his involvement.

    See Stoat’s The battle of the graphs, which also bears on this. On p.25, Montford displays yet another image that was not quite the same as IPCC (1990) Fig. 7.1(c) (legend at bottom, capitalization, (c) deleted, dashed line at left, AD’s).

  9. John,

    don’t hold your breath.

    Don’t worry, I’m not 🙂

  10. > Doug Keenan seems to have problems in understanding statistical methods.

    Richard Muller reached a different conclusion:

    What he is saying is that statistical methods are unable to be used to show that there is global warming or cooling or anything else. That is a very strong conclusion, and it reflects, in my mind, his exaggerated pedantry for statistical methods. He can and will criticize every paper published in the past and the future on the same grounds. We might as well give up in our attempts to evaluate global warming until we find a “model” that Keenan will approve — but he offers no help in doing that.

    http://neverendingaudit.tumblr.com/post/11763136868

  11. Willard,
    So, he’s actually published some papers. That is a bit of a surprise. I guess he has produced a moderately clever way to ignore global warming. Insist on an impossible standard and then point out that everything everyone else has done must, therefore, be wrong. Maybe one could call it the “naysayers’ scientific method”?

  12. > Maybe one could call it the “naysayers’ scientific method”?

    No. I call this “auditing”.

  13. Willard,
    Yes, although doesn’t “auditing” allow for the possibility that you won’t find a problem?

  14. > [D]oesn’t “auditing” allow for the possibility that you won’t find a problem?

    No, because not finding a problem is also a problem. It’s an even worse problem. Someone, somewhere, is hiding something.

  15. Yes, I forgot about the “neverending” aspect to auditing.

  16. Oh, and I notified our beloved Bishop of your post:

    > As I have mentioned previously, I have put it to Walport that we are unable to demonstrate a statistically significant change in surface temperatures because of the difficulty in defining a statistical model that would describe the normal behaviour of surface temperatures, a claim that seems to have the support of the Met Office.

    The problem, of course, is to show the relevance of that (and such) claim in the grand scheme of scientific things:

    https://andthentheresphysics.wordpress.com/2014/07/02/come-one-andrew-you-can-get-this

    Comments and corrections are welcome. Concerns are also appreciated, as always.

    Thank you for your concerns.

    Jul 2, 2014 at 9:56 PM | willard

    A response will appear as soon as our beloved Bishop finds the round Tuit, no doubt. I could have tried to notify him via Twitter, but our beloved Bishop blocked me.

  17. Willard,
    Thank you. I did have a lengthy Twitter exchange with Andrew about this topic a few days ago. We didn’t really reach agreement then (although I think he may have thought we had). It was that exchange and his most recent post that motivated this. In my view, this is not really something about which there should be much disagreement, whatever an over-confident mathematician might think.

  18. > In my view, this is not really something about which there should be much disagreement, whatever an over-confident mathematician might think.

    Perhaps, AT, but you should not be overconfident about your lack of creativity, e.g. Charlie here:

    Factors which influence climate could be
    1. Cosmic radiation.
    2. Energy output of the sun.
    3. Frequency of output from sun.
    4. Variation of spin if earth’s axis.
    5. Distance from Sun.

    […] Surely Walport needs to disprove items 1 to 5 as well as prove CO2 is having an effect. […] I would suggest that no statistical model is suitable until we have defined the actual temperature record for at least the last 2 M years..

    I think there should be a third dot at the end of the emphasized bit.

  19. Willard,
    Indeed, but I was just referring to the ability of statistical models to determine physical causes (i.e., they can’t). That – I would think – should be indisputable. Of course, the actual physical cause could well be disputed, but that would seem to be a different issue. Of course, you are right though. I really shouldn’t be over-confident.

  20. Andy Skuce says:

    Willard: No, because not finding a problem is also a problem. It’s an even worse problem. Someone, somewhere, is hiding something.

    Anecdote: a friend was the finance manager for an American company operating a joint venture with a Russian partner. His Russian counterpart was assiduous in asking for every receipt and every bank statement. After one such meeting, over vodka, the Russian confided: “You know, you are the best; I just can’t figure out how you do it.” “Do what?” “Take money and leave no trace.” “But I am not stealing any money!” “Everybody steals money; why else would you do this job?”

    Are there any cases in which The Auditor audited The Team, found nothing wrong and reported the good news? (That’s not entirely rhetorical, I don’t follow CA closely any more and have missed many episodes.)

  21. guthrie says:

    Bishop Hill has a science degree? Which science? Zoology?

  22. guthrie,
    Chemistry, I think.

  23. Joshua says:

    ==> “Are there any cases in which The Auditor audited The Team, found nothing wrong and reported the good news?”

    My guess is that has happened at least as frequently as The Auditor has audited “skeptics” (although probably a smidgeon less frequently than he has found “skeptics” engaging in deception and behaviors analogous to child molestation and/or Nazism).

  24. guthrie says:

    Chemistry? Chemistry?!
    If ever there was an example of when someone should be stripped of their degree for showing no sign of having learnt anything from it, this is one.

  25. johnrussell40 says:

    According to Wikipedia [ http://en.wikipedia.org/wiki/Andrew_Montford ], it’s in chemistry; although after his studies he went on to work in accountancy and then as a writer. His science education seems to play little part in his understanding of climate.

  26. guthrie says:

    Whereas my understanding of climate science is enhanced and informed by my chemistry degree, which at least meant I had a base level of knowledge of physical science such that I could relatively easily digest what I read and compare it to how I know the world works.

    Oh dear, that’s where I was going wrong, relying on the coherence of science. No, I should treat each area as a separate entity with no relation with any other, that way I can rant on about how climate change isn’t our fault because X. Or Y. And certainly Z.

  27. John Mashey says:

    Regarding AGW, there is substantial evidence that other factors can totally overpower any science background, from nothing up to well-published Members of the US National Academy of Sciences or the Royal Society. See Another Silly Climate Petition Exposed (2009), which got at least one Member of the NAS (Will Happer) displeased with me. That analyzed 200 members of the American Physical Society, most with PhDs in physics, including at least one physics Nobelist. Of course, the signers were less than 0.5% of the membership, as most physicists know better. The demographics were interesting … heavily skewed to old guys.

    Anyway, even a strong science background and career in it is insufficient to guarantee anything, although it certainly helps.

  28. Montford’s remark should be read in context, viz. the recent papers by Beenstock and Hendry and the older debate between Perron and Watson, all of which suggest that our statistical understanding of the instrumental record is less clear-cut than commonly believed.

  29. Oh come on, Richard, you can get this. There’s no need to read Montford’s remarks in any kind of context. There is no statistical model that can explain the increase in surface temperature. There is no statistical model that would allow us to conclude that we haven’t warmed. What Keenan (and Montford) are suggesting is nonsense. Either they don’t understand physics (or basic science, for that matter), or they’re trying to mislead. There isn’t really a third option, and your trying to excuse this mumbo-jumbo by pretending there’s some kind of context is equally nonsensical. You could of course choose to help Andrew to understand this, so I wonder why you seem to be choosing not to. It could be that you also don’t understand basic physics (well, sure, you probably don’t, but you should be able to understand the limitations of statistical models) or you’re happy to see Montford (and Keenan) present a nonsensical argument that will mislead those who have no real interest in understanding this in any detail.

    How are you going with trying to find the 300 missing rejected abstracts? #FreeTheTol300

  30. jsam says:

    Beenstock? Shurley shume mishtake here. “Econometrics is a hammer which econometricians apply to all objects, but it’s ALWAYS the science that rules.”
    http://rabett.blogspot.co.uk/2013/02/open-for-comment.html

    For more commentary on the paper see http://www.earth-syst-dynam.net/4/375/2013/esd-4-375-2013.html.

    Call me a Beenstock sceptic but his affiliation with the IEA seems unhealthy too. http://www.iea.org.uk/events/climate-change-are-we-building-bad-economic-policy-on-bad-science

  31. jsam,
    Interesting. I guess Richard has some sympathy with those who need to submit a paper to multiple journals in order to get it published.

  32. The real point is that there is actually little unaccounted-for variability in the global temperature record. To the untrained or naive eye, a GISS or HadCRUT time series certainly looks noisy, but within that noise we can extract volcanic signals and the ENSO signal, amongst others. The residual then has much-reduced noise, and hence it becomes statistically impossible to achieve the secular warming trend by simulating a random walk.

    Doug Keenan must be really torturing the data.

    The crew at Lucia’s Blackboard gave up on trying to do that after they figured out how difficult it was to generate an AR model that looked like a ramp. And they are now engaging in legal banter and plotting against Mann to pass the time.
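The difficulty of getting a driftless random walk to “look like a ramp” can be sketched numerically. Everything below is synthetic and illustrative (the innovation scale and trend are invented, not fitted to any real series); the comparison uses correlation with the time index as a crude measure of “ramp-ness”:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 134, 2000  # series length and number of simulated walks

def corr_with_time(series):
    """Pearson correlation of a series with its time index."""
    t = np.arange(series.size)
    return np.corrcoef(t, series)[0, 1]

# A trend-plus-noise series hugs a ramp...
ramp = 0.008 * np.arange(n) + rng.normal(0.0, 0.1, n)

# ...while driftless random walks (innovation scale matched to the
# year-to-year scatter) wander with no preferred direction.
walks = rng.normal(0.0, 0.1, (trials, n)).cumsum(axis=1)
walk_corrs = np.array([abs(corr_with_time(w)) for w in walks])

print(f"ramp correlation with time: {corr_with_time(ramp):.3f}")
print(f"median |correlation| for random walks: {np.median(walk_corrs):.3f}")
```

Individual walks can drift a long way from zero, which is why spurious-trend arguments look superficially plausible, but a steady near-linear ramp of the observed kind is a much rarer outcome.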

  33. John Mashey says:

    “affiliation with the IEA seems unhealthy too. ”

    (If the following seems strong, search the IEA website and the LTDL. I have.)
    From Familiar Think Tanks Fight For E-cigarettes

    “In the UK, the GWPF is climate-focussed, but well-connected to other think tanks, including the UK’s Institute for Economic Affairs, searchable for “tobacco”, “global warming” or “e-cigarette”

    Try this search at LTDL, i.e. select BAT, and then:
    institute for economic affairs

    Put another way, like many US think tanks, the IEA has a looong history of taking money from tobacco companies, to help them prosper, which they can only accomplish by addicting youth to something that will kill many, slowly, agonizingly. (But that’s the NHS’s problem, not the IEA’s and BAT’s.)

    Clearly, this bothers no one affiliated with the IEA, and by comparison, confusing people about climate is ethical child’s play.

  34. John Mashey says:

    Oops, typo: Institute of Economic Affairs, not “for” …
    and Brits are lucky, just like us in the US:
    http://www.charitycommission.gov.uk/find-charities/ (look up Institute of Economic Affairs)

  35. lapogus says:

    I’d say that, given the GISP2 and Vostok ice core data, Andrew Montford and Doug Keenan make a fair point that the late 20th century warming (or, to be more accurate, run of mild winters) is not statistically or historically significant – http://snag.gy/tJ7z6.jpg

    Let me know when all this extra CO2 in the atmosphere can mean that I don’t have to put the heating on in my (well insulated) house, in JULY.

  36. If it were really the case that the recent warming was unexpected and without physical explanation, it would make sense to do a statistical analysis of the type where the properties of the time series are inferred from the earlier history and the recent rise is tested for consistency with that history. That is, however, not at all the situation. Warming caused by CO2 releases was predicted before it was visible. Many other features of the time series are understood. To take the prior understanding into account, the significance of the warming signal must be studied based on the rules of Bayesian inference. That doesn’t result in objectively unique, well-defined confidence levels, but it leaves little uncertainty about the significance of the signal.

    Another way of expressing the same thing is that it’s more reasonable to ask how strong the warming signal is than to ask whether its existence is proven by the data. What’s known from other sources must again be taken into account in making that estimate. In this approach, the certainty that the alternative of no signal can be excluded is equal to the certainty that the coefficient is positive.
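Pekka’s framing, asking how strong the signal is and with what certainty the coefficient is positive, can be sketched as a toy Bayesian calculation. The data are synthetic and the prior is deliberately flat (uninformative); his point is that in practice one would use a physically informed prior, which would sharpen the result further:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic anomaly series (an illustrative stand-in for the real record).
n = 134
t = np.arange(n)
y = 0.008 * t + rng.normal(0.0, 0.1, n)

# Model: y = a + b*t + Gaussian noise with known sigma.  With flat priors
# on a and b, evaluate the posterior for the warming coefficient b on a
# grid (the intercept a integrates out to its profiled value).
sigma = 0.1
b_grid = np.linspace(-0.02, 0.03, 2001)
log_post = np.empty_like(b_grid)
for i, b in enumerate(b_grid):
    a_hat = (y - b * t).mean()      # marginalising a flat prior on a
    r = y - a_hat - b * t
    log_post[i] = -0.5 * (r @ r) / sigma**2
post = np.exp(log_post - log_post.max())
post /= post.sum()

p_positive = post[b_grid > 0].sum()
print(f"P(b > 0 | data) = {p_positive:.4f}")
```

The posterior concentrates tightly around the true slope, so the probability that the coefficient is positive is effectively one; the interesting question becomes its magnitude, not its existence.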

  37. BBD says:

    Oho. The IEA. The grandfather of Think Tanks. Antony Fisher (a name to research). There is a *must read* back history for aficionados of the secret history which I commend to you all, but especially John Mashey and ATTP. A teaser:

    The Think Tank that Antony Fisher set up was very different. It had no interest in thinking up new ideas because it already knew the “truth”. It already had all the ideas it needed laid out in Professor Hayek’s books. Its aim instead was to influence public opinion – through promoting those ideas.

    It was a big shift away from the RAND model – you gave up being the manufacturing dept for ideas and instead became the sales and promotion dept for what Hayek had bluntly called “second-hand ideas”.

    To do this Fisher and Smedley knew they had to disguise what they were really up to. In 1955 Smedley wrote to Fisher – telling him bluntly that the new Institute had to be “cagey” about what its real function was. It should pretend to be non-political and neutral, but in reality they both knew that would be a front.

    The IEA would masquerade as a “scholarly institute”, as Hayek had suggested to Fisher, while behind that it would really function as an ideologically motivated PR organisation. It was, Smedley wrote:

    “Imperative that we should give no indication in our literature that we are working to educate the Public along certain lines which might be interpreted as having a political bias. In other words, if we said openly that we were re-teaching the economics of the free-market, it might enable our enemies to question the charitableness of our motives. That is why the first draft (of the Institute’s aims) is written in rather cagey terms.”

    And:

    Back in 1947 [Antony] Fisher had read an article in Reader’s Digest by an Austrian economist called Friederich Hayek. It was a summary of a book Hayek had written called The Road to Serfdom and it set out to prove scientifically that any attempt by politicians to plan and organise society so people could be free and have a better life would inevitably produce the opposite – the destruction of freedom and democracy.

    So one day Fisher plucked up courage and went to see Hayek at the LSE in London where Hayek was a professor. Fisher asked Hayek for advice – should he go into politics to try and stop the oncoming disaster?

    Hayek told Fisher bluntly that this would be useless because politicians are trapped by the prevailing public opinion. Instead, Hayek said, Fisher should try and do something much more ambitious – he should try and change the very way politicians think – and the way to do that was to alter the climate of opinion that surrounded the political class. Fisher wrote down what Hayek said to him.

    “He explained his view that the decisive influence in the battle of ideas and policy was wielded by intellectuals whom he characterised as the ‘second-hand dealer in ideas’.”

    Hayek told Fisher to set up what he called a “scholarly institute” that would operate as a dealer in second-hand ideas. Its sole aim should be to persuade journalists and opinion-formers that state planning was leading to a totalitarian nightmare, and that the only way to rescue Britain was by bringing back the free market. If they did this successfully – that would put pressure on the politicians, and Fisher would change the course of history.

    Antony Fisher was gripped by this vision. But then all his cattle died of Foot and Mouth. He got compensation from the government though (which unkind people might say was a subsidy) and went off on a trip to America.

    In New York Fisher met another right-wing economist called “Baldy” Harper who introduced him to two new ideas. One was the concept of the “think tank”, the other was broiler chicken farming.

    Stranger than fiction.

  38. @Wotts
    I’m with Perron on this (and thus with you), but it would be foolish to ignore Beenstock, Hendry and Watson.

  39. Richard,

    and thus with you

    Okay, good.

    it would be foolish to ignore Beenstock, Hendry and Watson.

    Indeed, it would typically be foolish to ignore things that may be relevant. That doesn’t really excuse ignoring physics, though.

  40. BBD says:

    Richard

    I can happily ignore Beenstock because he is spouting arrant crap:

    The failure of climate models to explain the halting in global warming since 1995 begs the question whether they are sufficiently reliable to justify carbon abatement policies, which harm economic growth.

    The 5th IPCC Review breaks new ground by attributing sea level rise to global warming despite the fact that there is no scientific evidence for this assertion. Moreover, tide gauge measurements show that sea levels are not rising globally.

    You don’t pay attention to politicised rhetoric misrepresenting science. Remember what the IEA is:

    The IEA would masquerade as a “scholarly institute”, as Hayek had suggested to Fisher, while behind that it would really function as an ideologically motivated PR organisation.

    Remember.

  41. BBD says:

    More about Beenstock:

    His interest in climate change began after using the Stern Review as teaching material. This led to publication of research showing that global warming in the 20th century is unrelated to carbon emissions. In subsequent research he shows that claims that global sea levels are rising are incorrect.

    Yup, he’s a flat-out physics denier. So he’s an ideologue. Best ignored. Foolish *not* to ignore such people.

  42. Eli Rabett says:

    There are two seminal discussions of this matter: one, at Bart’s, on curve-fitting exercises, including Beenstock (there are other posts over there on the same issue); the other, more recent, at James’ about Nic Lewis and C14 dating. Tom Fiddaman, in the last comment, after everyone had agreed on why Lewis’s method was bollocks, put the wood to the ball

    Even though Nic’s argument strikes me as technically correct as far as it goes, . . . the prior . . . is intuitively repugnant.

    Translation: a statistical analysis which does not take the physical boundaries of a system and its measurement into account is useless at best, usually misleading and often wrong. Beenstock hits the trifecta.

  43. Martin A says:

    Why is this? Well, statistical models are used to determine the properties of a dataset. For example: what is the trend? What is the uncertainty in the trend? However, they cannot – by themselves – tell you why a dataset has those properties. For that you need to use the appropriate physics or chemistry. So, for the surface temperature dataset, we can ask the question: are the temperatures higher today than they were in 1880? The answer, using a statistical model, is yes. However, if we want an answer to the question of why the temperatures are higher today than they were in 1880, then there is no statistical model that – alone – can answer this question.

    I think there may be some misunderstanding of a key point here. If you have a statistical model for the process that generates a random signal, you can use this to decide whether what you are observing is:

    – simply the signal fluctuating as normal
    – the signal is no longer fluctuating as normal because something has changed significantly and what you are now seeing is unlikely to have been generated by a process described by your model.

    This is standard stuff in (for example) detecting the seismic signature of a nuclear detonation against background seismic activity, or the noise of a nuclear submarine against the background of normal undersea noise. When your tests show that something no longer has the normal statistical characteristics, you can infer that something has changed.

    My understanding of Andrew Montford’s point is that, in the absence of a statistical model for climate variations in the absence of anthropogenic effects, you can’t say whether the recently observed changes are outside the normal range of statistical variability. I think you agree with him on the nonexistence of adequate statistical models.
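The detection scenario described in this comment (testing new observations against a known null model of “normal behaviour”) can be sketched as follows. The AR(1) dynamics, the imposed shift, and all the numbers are hypothetical; the wider dispute is precisely about whether any such null model exists for surface temperatures:

```python
import numpy as np

rng = np.random.default_rng(4)
phi, noise = 0.6, 0.1  # invented AR(1) parameters

def ar1(n, mean, start):
    """Simulate an AR(1) process fluctuating around a given mean."""
    out, prev = np.empty(n), start
    for i in range(n):
        prev = mean + phi * (prev - mean) + rng.normal(0.0, noise)
        out[i] = prev
    return out

# "Normal behaviour": a long baseline with mean zero.
base = ar1(500, 0.0, 0.0)

# Fit the null model (AR coefficient, innovation scale) to the baseline.
phi_hat = base[1:] @ base[:-1] / (base[:-1] @ base[:-1])
innov_sd = np.std(base[1:] - phi_hat * base[:-1], ddof=1)

# New data: same dynamics, but the level has shifted upward by 0.5.
new = ar1(50, 0.5, base[-1])

# One-step innovations of the new data under the *null* model should
# average zero; a systematic offset signals a departure from "normal".
innov = new[1:] - phi_hat * new[:-1]
z = innov.mean() / (innov_sd / np.sqrt(innov.size))
print(f"z = {z:.1f}  (|z| > 3 suggests a signal against this null)")
```

This works because the null model is known by construction; the host’s counter-argument below is that for the climate no adequate null model of this kind exists, so the physics cannot be left out.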

  44. @BBD
    I know you’re fond of smearing people’s names (while hiding your own identity). Beenstock is a fine scholar. Watson is better. Hendry is better still. If these gents put a question mark to something, you’d better sit up and pay attention.

  45. The discussion of C14 dating included three ways of looking at the prior. Both Keenan and Lewis argued for the Jeffreys prior, but in different ways: Keenan wanted to use it in full, Nic only to the limits of the confidence interval, accepting that the full PDF that results from the Jeffreys prior didn’t make sense. Radford Neal, James Annan, and I argued against all use of the Jeffreys prior in this application. The choice of prior is not strictly a technical issue, but choosing a prior that gives obviously nonsensical results cannot be justified.

    In one way of thinking, the position of Nic Lewis was understandable, as the alternative that I consider correct implies that one value should be excluded from the confidence range while another that’s equally, and possibly less, compatible with the empirical data is included. This somewhat paradoxical outcome results from the requirement of connectedness of the confidence range (i.e. we give only one upper and one lower limit for the range).

  46. Eli Rabett says:

    Pekka, what you said is exactly the point. Bart destroyed Beenstock by pointing out that

    The earth’s energy imbalance as measured from space and as deduced from adding up atmospheric and ocean heat content is actually positive: More energy is coming in than radiating back into space (***). This directly contradicts that the increase in global average temperature would be random (since in that case we would expect a negative energy imbalance)

    In other words, the assumption that the global temp anomaly was a random walk was garbage in.

  47. BBD says:

    Richard

    I know you’re fond of smearing people’s names

    I point out facts. I know this is inconvenient for your narrative, but you will have to learn to deal with it without making false claims, e.g. that I “smear”. Facts are not “smears”.

  48. Martin A.,

    in the absence of a statistical model for climate variations in the absence of anthropogenic effects, you can’t say whether the recently observed changes are outside the normal range of statistical variability. I think you agree with him on the nonexistence of adequate statistical models.

    No, I don’t really agree with him on this. The fundamental point is that there is no statistical model that – alone – can tell you whether or not the observed changes are outside the normal range of variability. You cannot determine this without some kind of physical model. My point is that there are no adequate statistical models (they don’t exist); therefore the argument is fundamentally circular. If you want to understand whether the variations are outside the range of natural variability, you need to do an attribution study, which combines statistical analysis with physical models. You can’t do this with a statistical model alone.

  49. Richard,

    This led to publication of research showing that global warming in the 20th century is unrelated to carbon emissions.

    If Beenstock really said this (or did research claiming this) then you really shouldn’t sit up and listen, you should sit down and ignore.

  50. BBD says:

    If even I can understand that the claim that C20th warming is a random walk violates conservation of energy, how is it that these supposedly outstanding scholars cannot?

  51. BBD says:

    Beenstock et al. (2012)

    Abstract. We use statistical methods for nonstationary time series to test the anthropogenic interpretation of global warming (AGW), according to which an increase in atmospheric greenhouse gas concentrations raised global temperature in the 20th century. Specifically, the methodology of polynomial cointegration is used to test AGW since during the observation period (1880–2007) global temperature and solar irradiance are stationary in 1st differences, whereas greenhouse gas and aerosol forcings are stationary in 2nd differences. We show that although these anthropogenic forcings share a common stochastic trend, this trend is empirically independent of the stochastic trend in temperature and solar irradiance. Therefore, greenhouse gas forcing, aerosols, solar irradiance and global temperature are not polynomially cointegrated, and the perceived relationship between these variables is a spurious regression phenomenon. On the other hand, we find that greenhouse gas forcings might have had a temporary effect on global temperature.

  52. ATTP,

    I don’t think that you can be that categorical. In many cases a long time series can be obtained and its statistical properties found to vary little up to a moment where the time series suddenly is inconsistent with that historical behavior. That allows for concluding that the new values are exceptional.

    The problem with the temperature time series is that it’s not long enough for that, taking into account all the autocorrelations that have been present over all time scales of the instrumental data. For this reason the valid arguments involve physical understanding and are dependent on that.

  53. Pekka,
    I didn’t think I was being all that categorical.

    In many cases a long time series can be obtained and its statistical properties found to vary little up to a moment where the time series suddenly is inconsistent with that historical behavior. That allows for concluding that the new values are exceptional.

    Sure, but that is a distinct issue. You can certainly use statistical models to try and see if one period is statistically similar to another or – as you say – to determine if there is a change in behaviour. The point I would make, though, is even that does not tell you why they’re similar or why there is a change. For that you need to include some kind of physical model.

    For this reason the valid arguments involve physical understanding and are dependent on that.

    I agree, and I believe that that is really all I’ve been suggesting.

  54. @Wotts
    Ignoring someone like Beenstock would be an act of ideology rather than scholarship.

  55. jsam says:

    Bishop Shill is all a bit Mrs Mertonish. Publish a load of old tosh and declare “let’s have a heated debate”. https://www.youtube.com/watch?v=Lj-9lSEBBm0&feature=kp

  56. Richard,
    Really? Why? To be clear, I was simply suggesting that if someone has really said or done something as silly as Beenstock appears to have done, then ignoring them may be sensible. There are lots of really clever people in the world. Ignoring those who don’t appear to understand what they’re talking about seems entirely reasonable. Also, if I were to – unfairly – ignore someone like Beenstock, I’m sure there’s someone else equally clever that I could pay attention to.

  57. Your blog post says “here is the Met Office’s response” and links to a PR piece. The PR piece, however, is not the Met Office response; rather, it is a PR piece about the Met Office response. The actual Met Office response is a briefing paper written by the Met Office Chief Scientist, Julia Slingo (in consultation with five other Met Office scientists):
    http://www.metoffice.gov.uk/media/pdf/2/3/Statistical_Models_Climate_Change_May_2013.pdf
    This makes it clear that Andrew Montford and I have been representing things accurately. The PR piece is misrepresenting the briefing paper.

    Additionally, your penultimate paragraph is confused on the issue of statistical significance. The concept of significance is easy to understand. If you would like to read about this, one place to start is here:
    http://www.informath.org/AR5stat.pdf

    I hope that in the future you will research issues more thoroughly before taking a strong position on them.

  58. Doug,
    Well, that was a constructive contribution.

    I hope that in the future you will research issues more thoroughly before taking a strong position on them.

    Ditto. And, maybe think about learning some physics. Seriously, what you’re suggesting is nonsensical and the sooner you realise that, the better. You really can’t use statistical models to understand why the data is behaving the way it is. Simply suggesting that it could be a random walk (when it really cannot) is silly (and it would really help if someone from the Met Office would state this more clearly). This is not a complicated concept. Maybe think about it a bit more before you choose to make condescending comments on blogs.

  59. Doug,
    I’ll quote paragraph 2 of the document to which you link.

    Statistical models seek to assess the statistical properties of a specific set of data, in this case the global mean surface temperature timeseries. The models mentioned in the recent Parliamentary Questions are mathematical constructs that are not rooted in the fundamental laws of physics. In comparison with global scientifically based climate models, they are too crude to capture the complexity and non-linearity of the holistic climate system, its internal variability and its physical response to external forcing agents. It should be noted that the Met Office does not rely solely on statistical models in its detection and attribution of climate change.

    Hmmm, so far the PR piece seems pretty consistent with this document. Are you sure you’ve read it properly and understood it? You should probably bear in mind that the Met Office is likely to understate things somewhat. They probably won’t actually say “Doug Keenan doesn’t really know what he’s talking about”. Try reading between the lines a little.

  60. The Bishop Hill blog post is based on a statement made by the Met Office: that the rise in global temperatures is “significant”. The statement is untenable, I pointed that out, and the Met Office has effectively acknowledged that I was right.

    You are now using a rhetorical technique to misdirect people from that issue. Your technique is to cite some other evidence (based on physical simulations) for the global-warming hypothesis. Those simulations are indeed evidence for the hypothesis. But that is of negligible relevance to the issue of the BH post, which is that the statement made by the Met Office about significance is false.

    I see that there is now a post about this at Bishop Hill:
    http://www.bishop-hill.net/blog/2014/7/3/where-there-is-harmony-let-us-create-discord.html
    Follow on from there.

  61. jsam says:

    Hypothesis? Seriously?

    “Some scientific conclusions or theories have been so thoroughly examined and tested, and supported by so many independent observations and results, that their likelihood of subsequently being found to be wrong is vanishingly small.
    Such conclusions and theories are then regarded as settled facts.
    This is the case for the conclusions that the Earth system is warming and that much of this warming is very likely due to human activities.”
    http://www.nap.edu/catalog.php
    And note that the above National Academies paper is available for free download after a free registration. No purchase necessary. And the quote is from pages 44 & 45.

  62. Rob Nicholls says:

    I’ve been thoroughly entertained by this article and the comments.

    (although it’s a shame it’s such a high stakes game. Also, it must be constantly infuriating and draining for anyone trying to inject sanity into the proceedings. In those respects, I like Bill Watterson’s original, CalvinBall, better.)

    I particularly enjoyed the quote from Andrew Montford at the top of the article. It made my day. Thank you ATTP for highlighting this.

  63. Bishop,
    This is an attempt at a response to Andrew’s post

    Andrew, you say,

    I think Anders’ mistake is to assume that Doug is going down a “global warming isn’t happening” path. In fact the thrust of his work has been to determine what the empirical evidence for global warming is

    Well, one piece of empirical evidence is that all components of our climate system are gaining energy. If you want to consider surface temperatures only, then even that shows increasing energy (temperatures increasing). That’s the evidence. Nice and simple. Playing statistical games with a dataset doesn’t tell us anything about what’s happening or why. It allows us to answer various questions about that dataset. What is the trend? What is the uncertainty? Is the trend statistically different from zero? Are the temperatures higher today than they were 100 years ago? Statistical models can answer those questions. They cannot tell you why the data has these properties. That’s why you need to include physics and chemistry.

    In some sense I don’t really understand how you can conclude that Doug Keenan is simply trying to find empirical evidence for global warming. Even statistical models tell us that we’re warmer today than we were in 1880. So, if that’s all he’s trying to do, the answer is clearly “yes, global warming is happening”. As I understand it, he’s actually trying to determine if the surface temperature dataset is statistically different from a random walk. It may well not be statistically different from a random walk, but since the evolution of our surface temperature is not a random process, this is a rather silly question to ask, and showing that the dataset is not statistically different from a random walk does not allow us to conclude that the warming might be natural.
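    To illustrate why (a toy sketch of my own, with arbitrary step sizes; it is not anyone’s actual analysis): a driftless random walk has no restoring force, so its expected spread grows without limit, which is exactly what a planet constrained by its energy balance cannot do.

```python
import random
import statistics

random.seed(0)

def random_walk_end(n, sigma=0.1):
    """End point of a driftless random walk of n independent Gaussian steps."""
    x = 0.0
    for _ in range(n):
        x += random.gauss(0.0, sigma)
    return x

# The spread of the end points grows like sigma * sqrt(n), without bound:
# run the walk ten times longer and the spread grows by about sqrt(10).
spread_100 = statistics.stdev(random_walk_end(100) for _ in range(2000))
spread_1000 = statistics.stdev(random_walk_end(1000) for _ in range(2000))
print(round(spread_100, 2), round(spread_1000, 2))  # roughly 1.0 and 3.2
```

    A surface temperature that behaved like this would wander arbitrarily far from equilibrium, which conservation of energy forbids; that is the physical objection to the driftless ARIMA(3,1,0) model, quite apart from any question of statistical fit.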

  64. Doug,

    The Bishop Hill blog post is based on a statement made by the Met Office: that the rise in global temperatures is “significant”.

    Well, I disagree and I fail to see how this is not an entirely reasonable statement to make. Your definition of “significant” appears to be quite different to the standard definition – in a statistical rather than rhetorical sense. Also, maybe you could actually point out where the Met Office has agreed with this, because that appears to be a rather generous interpretation of what they’ve said. All I can find is

    Sophisticated statistics are used to demonstrate the significance of recent changes in the climate system.

    which does not appear consistent with your claim that they’ve acknowledged that you’re right. Have you heard of the term “quote mining”?

    I’m not playing rhetorical games. If you want the truth, I’m ticked off that people who don’t seem to understand basic physics think they can use statistical games to argue that something that is clearly happening may not be happening (or that there’s no evidence for it happening). It’s incredibly frustrating, and it always amazes me that organisations like the Met Office put up with this kind of nonsense in the way that they do. This is truly a very simple issue and if you don’t get it, you really should consider taking a step back and thinking about what you’re doing.

  65. @Wotts
    Instead of dissing Beenstock, you may try and engage with his work.

    Hendry did, but his best effort to refute Beenstock created a bigger headache.

    You could of course also rely on Tol & de Vos’ pre-emption of Beenstock.

  66. Richard,
    I’m not specifically trying to diss Beenstock. Maybe I will engage with his work. I was simply pointing out that some people say silly things and, if so, ignoring them is an entirely reasonable choice to make.

  67. verytallguy says:

    Bishop and Keenan are, of course, playing climateball.

    In this case it manifests itself in the form of a self-fulfilling prophecy:
    (1) The temperature record is insufficiently long pre-AGW to fully define expected natural variation.
    (2) The temperature record alone must be used to attribute global warming.
    (3) Therefore (drum roll…) ‘of course manmade climate change is not “clear”.’

    They also implicitly insist on a familiar fallacy, that only modern era temperature records constitute “empirical” evidence,

    …his work has been to determine what the empirical evidence for global warming …

    This is at best cherrypicking – insisting on looking at only one measure – and at worst simple denial – denying that (for instance) proxies of past climate are “empirical”.

    We then get the punchline

    I think Anders’ mistake is to assume that Doug is going down a “global warming isn’t happening” path.

    Of course not. Perish the thought.

    Publicising a model which requires the climate system to break the law of conservation of energy to claim a lack of significance of the warming signal absolutely does not constitute implying that warming isn’t happening.

  68. BBD says:

    I’d like to repeat VTG because it is the crux of the matter (my emphasis):

    Publicising a model which requires the climate system to break the law of conservation of energy to claim a lack of significance of the warming signal absolutely does not constitute implying that warming isn’t happening.

    It’s physics denial.

    This needs to be kept front and centre. Never mind all the diversionary verbiage emanating from the pseudosceptics. It’s just waffle. It is more confected non-argument designed to create the false impression of uncertainty.

    Aside from being nonsense, it is intellectually dishonest.

    You cannot get around conservation of energy with bullshit.

  69. BBD said:


    I point out facts. I know this is inconvenient for your narrative, but you will have to learn to deal with it without making false claims, eg. that I “smear”. Facts are not “smears”.

    In economic models, fiction is as important as the truth. Lies are part of game theory, and game theory is what runs many financial markets.

    In the earth sciences, nature does not lie, so all these econometric models are rubbish when used in an improper context.

    That is where these people like Tol and Beenstock are coming from. They really believe that you can manipulate the truth.


  70. As I understand it, he’s actually trying to determine if the surface temperature dataset is statistically different from a random walk.

    Here is a bit of information that may be of some help to destroy Keenan and Beenstock type arguments. One type of random walk that has some clear physical constraints is the Ornstein-Uhlenbeck random walk. This is also known as red noise. The characteristic of an O-U random walk is that it can be modeled as a random walk sitting inside a potential well, where the height of the potential well provides resistance to extreme departures from the mean. A red noise process is therefore not a martingale and cannot depart very far from the mean, depending on the choice of drag factor.

    A natural process like ENSO can be debated as to whether it best fits a red noise model or some quasi-periodic deterministic model. ENSO when measured via its SOI or some proxy characteristic clearly has bounds and has historically reverted to a mean of 0 (ZERO) on scales of 100+ years. This means that at the worst, ENSO is a red noise process, like water sloshing randomly in a bucket, with a forcing not quite enough to let it leave the container, or probably more likely (from what I have been finding, see my handle) that it is actually a Mathieu-equation type of quasi-periodic motion that is described in texts on hydrodynamic sloshing.
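    To make the red-noise idea concrete, here is a minimal sketch of my own (an Euler discretisation with arbitrary, illustrative parameters; nothing here is fitted to ENSO or temperature data):

```python
import math
import random
import statistics

random.seed(1)

def ou_path(n, theta=0.5, sigma=1.0, dt=0.1):
    """Euler discretisation of an Ornstein-Uhlenbeck process:
    dx = -theta * x * dt + sigma * dW.  The -theta * x drag term is the
    'potential well' that resists large departures from the mean."""
    x, path = 0.0, []
    for _ in range(n):
        x += -theta * x * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        path.append(x)
    return path

# Unlike a pure random walk, the O-U path keeps reverting to zero: its
# long-run variance settles near sigma**2 / (2 * theta) instead of
# growing without bound as the series gets longer.
path = ou_path(20000)
print(round(statistics.variance(path[2000:]), 2))  # roughly 1.0
```

    The drag factor theta plays the role of the height of the potential well described above: increase it and the excursions shrink; set it to zero and you recover the unbounded random walk.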

    Would Beenstock, Keenan, or Tol ever consider these kinds of statistical physics/deterministic models? Probably not, as they are economists and don’t believe in physics. They are taught game theory whether they realize it or not, and in game theory, deception is as important as the truth.

  71. There’s no need to claim anything about what Richard thinks, Web.

    ***

    I tried to post this at our beloved Bishop’s:

    > Seems to me that what the Bish said is completely in line with the absence of any adequate statistical model for surface temperature.

    Rejecting all models as inadequate is not the same as accepting that no adequate models exists. In the former case, it’s a rejection based on an appeal to perfection. In the latter case, it’s an acceptance based on common sense.

    To claim that both stances are “completely in line” with one another seems far-fetched. In fact, Richard Muller characterizes the first endeavour as “exaggerated pedantry for statistical methods”:

    > What he is saying is that statistical methods are unable to be used to show that there is global warming or cooling or anything else. That is a very strong conclusion, and it reflects, in my mind, his exaggerated pedantry for statistical methods. He can and will criticize every paper published in the past and the future on the same grounds. We might as well give up in our attempts to evaluate global warming until we find a “model” that Keenan will approve — but he offers no help in doing that.

    http://neverendingaudit.tumblr.com/post/11763136868

    Interestingly, this excerpt comes from an email correspondence that Douglas released. We still don’t know if he asked Richard before releasing that correspondence.

    I think that same point applies to Martin A’s comment.

    ***

    In any case, thank you for your comments, Micky H and Martin A. I’ll forward them to AT, who is not me.

    First rule of ClimateBall ™: make sure you identify properly who you’re trying to hit.

    Mysteriously, it failed to appear. I’m confident this will soon be corrected. But not overconfident, for that would be bad.

    You have comments over there for you, AT.

    ***

    But I see that Douglas is here. Have you asked Richard before releasing your correspondence with him, Douglas? Also, if you have any comments about his accusation of “statistical pedantry,” that would be nice.

  72. @WHT
    O-U models have been widely used in economics for a number of decades.

    Above, I referred to Watson but I of course meant Stock.

  73. Willard,

    You have comments over there for you, AT.

    I’ve finally gone over there. Going fine so far, but it’s only been a few minutes 🙂

  74. “It may well not be statistically different from a random walk, but since the evolution of our surface temperature is not a random process, this is a rather silly question to ask”

    It’s called a statistical MODEL. As Doug says, you need to understand what this means before giving your opinion on it and dismissing it as ‘silly’. Doug says he hopes you will research things more thoroughly – he’s probably not familiar with the style of your blog.

  75. jsam says:

    Is the use of O-U models in economics supposed to be a recommendation?

    http://en.wikipedia.org/wiki/Ornstein%E2%80%93Uhlenbeck_process

  76. Paul,
    Yes, I know what a statistical model is. It’s a model based on statistics. Such a model cannot be used to understand the underlying processes that are represented by a data set (or do you dispute this – I really am happy to be corrected if you or Doug were actually willing to put some effort in). This doesn’t seem all that complicated, so either I’m missing something obvious, or you are.

    dismissing it as ‘silly’

    Well, there are a number of ways to engage in this topic. Try to be constructive and hope others do the same. Hasn’t worked particularly well. Try to be more dismissive and hope that others will convince you that you’re wrong. Also, hasn’t worked very well. So, I just do what seems right at the time.

    Doug says he hopes you will research things more thoroughly

    He can certainly hope. Of course, there’s only so much time in the day, so once it appears that something is indeed silly (and no one’s really convinced me otherwise yet) it seems easier just to proceed as though it is and hope that if I’m wrong someone will put the effort into convincing me that I am.

  77. > You are now using a rhetorical technique, to misdirect people from that issue.

    The issue is that what is conveyed by Douglas and our beloved Bishop with their “the statistical models are inadequate” is quite incompatible with what AT and the MET Office convey. To claim that both “stances” are compatible is the main misdirection of that ClimateBall ™ episode. The trick here is to focus on “claims” and forget about what they are meant to imply.

    (I too prefer CalvinBall, Rob.)

  78. BBD says:

    It’s called a statistical MODEL.

    Of a fairy dust-powered climate system that defies the law of conservation of energy.

    The pseudosceptics appear blissfully unaware how stupid this latest attempt to confect FUD makes them all look.

  79. Oh, and if anyone wants to research the “yes, but random walk,” this has been debated to death on an old Amazon thread a while back:

    You continue to claim that you are optimizing a model even after Vaughan and I have repeatedly explained that you have no model. A model has parameters with physical meaning. You have one parameter – a seed for the mathematical generation of pseudo-random numbers. You are playing with numbers – no reality involved. You have not performed model identification. There is no model. DO YOU UNDERSTAND?

    http://www.amazon.com/tag/science/forum/ref=cm_cd_pg_pg$1?_encoding=UTF8&cdPage=$1&cdSort=oldest&cdThread=Tx3TXP04WUSD4R1

    The Vaughan in question is Vaughan Pratt.

    ***

    I expect Paul Matthews and Douglas Keenan to read the whole thread before commenting again.

  80. BBD says:

    Thank you, Willard!

  81. Andrew Dodds says:

    In order to disprove physics we had to ignore it.

    Sounds vaguely familiar..

    In any case, it’s another riff on the ‘If we apply [insert statistical technique here] to the [insert climate related dataset here] for no particular reason and with no constraints on parameters, we find that [insert statistical technique here] completely explains [insert climate related dataset here], therefore physics is wrong, QED’ theme.

    What’s that? My coffee is warming itself up due to a purely random stochastic process? Well, whaddyaknow.

  82. jsam says:

    Shorter still, also works for economists. http://xkcd.com/793/

  83. L Hamilton says:

    The need for physical knowledge to inform the statistics is certainly important; Lean & Rind (2008) or Foster & Rahmstorf (2011) deserve mention for statistics that kept the physics well in view, as Keenan’s model does not.

    Regarding the ARIMA(3,1,0) specification — the “1” refers to first differencing, which removes any linear trend. Haven’t we seen that before?

    If one works instead with the undifferenced data, say an ARMAX model with year as predictor and ARIMA(3,0,0) disturbances, then an upward trend — the coefficient on year — is indeed statistically significant. A more physics-informed FR11-type model (also containing time trends, or in an alternative specification CO2 among the predictors) makes better sense still.
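    For what it’s worth, the differencing point can be sketched in a few lines (my own toy example with synthetic data; the slope, noise level and AR coefficient are arbitrary, and plain OLS stands in for a full ARMAX fit):

```python
import random

random.seed(2)

# Synthetic "temperature" series: a small linear trend plus AR(1) noise.
n, slope = 130, 0.01
y, noise = [], 0.0
for t in range(n):
    noise = 0.6 * noise + random.gauss(0.0, 0.1)
    y.append(slope * t + noise)

# First differencing (the "1" in ARIMA(p,1,q)) collapses the trend into a
# constant mean; a *driftless* differenced model then discards it entirely.
diffs = [y[t] - y[t - 1] for t in range(1, n)]
mean_diff = sum(diffs) / len(diffs)
print(round(mean_diff, 3))  # near the true slope of 0.01

# Regressing on time instead (the ARMAX-with-year approach) keeps the
# trend as an explicit coefficient whose significance can be tested.
tbar = (n - 1) / 2
ybar = sum(y) / n
beta = sum((t - tbar) * (y[t] - ybar) for t in range(n)) \
       / sum((t - tbar) ** 2 for t in range(n))
print(round(beta, 3))  # near 0.01
```

    So the choice of specification matters: a driftless differenced model silently assumes away the very trend one is trying to test for, whereas the undifferenced regression exposes it as a coefficient.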

    Yes, the Lean & Rind and Foster & Rahmstorf work does deserve a mention. WHT, in an earlier comment, pointed out that much of the random nature of the surface temperature data can be attributed to things like El Niño events.

  85. Hmmm. Seems I can’t post this:

    > Which statistical model do you think is the best match for temperature reconstructions?

    Thank you for the tu quoque, Philip Ritchens. It means a lot to me. Here’s what the Auditor says on such occasion:

    I think that you can have a perfectly good sell recommendation without necessarily having a buy recommendation in mind[.]

    http://neverendingaudit.tumblr.com/post/34253462351

    So I’d say that it might be time to sell the “yes, but random walk” stock. I’d even go so far as to suggest selling short the “inadequate” equivocation, and put that ClimateBall ™ episode to rest.

    Hope this helps,

    w

    PS: If Douglas would be so kind as to declare once and for all that he released his correspondence without Richard Muller’s consent, that would be nice.

    There are two comments published since I sent that one. Perhaps you ought to start publishing my own contributions at our beloved Bishop’s, AT. If you approve of them, of course.

    I’ll comment on the first response Douglas ever addressed to me later today, basically repeating the fallacy:

    (1) Standard models are inadequate to interpret temperatures.
    (2) Random walks are also inadequate to interpret temperatures.
    (3) Random walks are as good as standard models to interpret temperatures.

    The problem is that random walks are inadequate in the sense of “not even wrong”. If your criteria for adequacy lead you to suggest random walks as meaningful models for temperatures, I suggest that the problem is with your criteria for adequacy.

    More so when these criteria for adequacy rest on statistical pedantry, as Richard Muller suggested.

  86. John Mashey says:

    Not for the first time, I wish again that John Tukey were still around to squash statistical silliness trying to eradicate science.

  87. John Hartz says:

    Meanwhile, back in the real world…

    The Disaster We’ve Wrought on the World’s Oceans May Be Irrevocable by Alex Renton, Newsweek, July 3, 2014

  88. > They are taught game theory whether they realize it or not, and in game theory, deception is as important as the truth.

    I’d rather distinguish game theory from gaming theory. While ClimateBall ™ certainly belongs to gaming theory, that it belongs to game theory is still an open question.

  89. Hmmm. Seems I could post this:

    > While you are here […]

    Nice move, shub. Let me see if I can use that one. Let’s see:

    While you’re here, dear Douglas, have you asked Richard Muller before releasing your correspondence with him?

    Also, if you have any comments about his accusation of “statistical pedantry,” that would be nice.

    ***

    If you’d acknowledge that your [shub’s] endorsement of Very Tall’s master argument may not lead where you thought it would, that would be nice.

    http://www.bishop-hill.net/blog/2014/7/3/where-there-is-harmony-let-us-create-discord.html

    Stranger and stranger.

  90. jsam says:

    Uh oh. They’ve started on the Latin over at BH. Can Lawd Monckton be far behind?

    Kill the thread.

  91. Rob Nicholls says:

    It has been said that you cannot change the laws of physics. This inflexibility seems very much against the spirit of climateball. Perhaps that’s why physical models aren’t allowed? (please correct me if I’m wrong). Sorry if this is off-topic.

  92. Rob,
    Having spent much of the day on Bishop-Hill engaging with those who comment there, it certainly seems that some are trying very hard to change the laws of physics. Radiative physics, what’s that?

  93. > [I]t certainly seems that some are trying very hard to change the laws of physics.

    Not at all, AT. It’s just that faced with the choice of betting on laws of physics or latin expressions like post hoc ergo propter hoc, some may prefer older adages.

    If you want to know what PhysicsBall ™ looks like, Rob:

    http://www.amazon.com/How-Laws-Physics-Nancy-Cartwright/dp/0198247044

  94. Eli Rabett says:

    Hydrinos, it’s all hydrinos.

  95. guthrie says:

    Hmmm, interesting short arguments in the reviews section. I take it Cartwright doesn’t write the book in the appropriate physics mathematical symbols, in which case the title might be correct.

  96. basicstats says:

    When physicists become overconfident! No, Ornstein-Uhlenbeck is not a random walk or Brownian motion (red noise, apparently). OU is Brownian motion with friction. Would dumb old economists know about such ‘deep’ ideas? Well, since OU is the most basic model for dynamic mean-reversion, yes they would. Indeed, a textbook on cointegration referenced by Beenstock et al. has it listed in the index!

    I don’t think adding formal symbols will prevent some philosophers from thinking that laws of physics are mostly idealizations, guthrie.

    Here’s a bio sketch:

    Her research interests include philosophy and history of science (especially physics and economics), causal inference and objectivity and evidence, especially on evidence-based policy. She is currently involved in a number of interrelated research projects at LSE: ‘Evidence for Use’ (funded by the British Academy) and ‘Choices of Evidence: tacit philosophical assumptions in debates on evidence-based practice in children’s welfare services’, with Eleonora Montuschi and Eileen Munro (funded by the AHRC), both at the Centre for the Philosophy of Natural and Social Science, and a project on Modelling Mitigation, at the Grantham Research Institute on Climate Change and the Environment.

    https://www.dur.ac.uk/philosophy/staff/?id=10659

    I would not mess with stuff like history of science with her. Well, I would, but. Grrr.

  98. BBD says:

    ATTP

    Radiative physics, what’s that?

    Well, apparently your “physics” isn’t correct, despite a lifetime of study and professional qualification in same. This must be a blow. You must curse the day you exposed yourself to the intellects vast, cool and unsympathetic on the internet. Perhaps this is why few scientists are willing to venture into the lists with the Black Knights of the True Knowledge.

  99. BBD says:

    Sorry Willard, we crossed

    But you are the very person to ask: how can a law lie? It may be flawed, but as a non-sentient construct, how can it be capable of conscious dishonesty?

    [emoticon redacted]

  100. John Hartz says:

    ATTP: Fodder for a future post:
    Overconfident predictions risk damaging trust in climate science, prominent scientists warn by Roz Pidcock, The Carbon Brief, July 2, 2014

  101. Steve Bloom says:

    This is the crux of the argument:

    Take the city of Lagos, for example. Mora et al. predicted the city would begin experiencing an “unprecedented” climate in 2043, give or take two years either side.

    Using a method that the authors say better captures real-world variability, the Hawkins letter says that point is likely to occur between 2024 and 2052 but that they can’t be more precise than that 28 year window.

    Srsly.

    But as these are GCMs that among other deficiencies lack proper representations of slow feedbacks, the smart money is on all of these scientists attaching too long a time frame to these changes (unless, that is, someone wants to try to argue that these feedbacks are as likely net negative as positive).

  102. Steve Bloom says:

    I am inspired:

    Could one alliteratively lay down a lay about lying laws while lying in a loo?

  103. Look like you looked into the book, Steve. Lead yourself into the intro. She lays all her cards there, and mentions lots of alluring stuff, like the radiometer on p. 5.

    If you like causal laws, you’ll like it.

    Srsly.

  104. Tom Curtis says:

    Steve Bloom, the smart money (IMO) is on errors in both directions, with the net errors averaging out. If they did not, there would be a larger discrepancy between ECS determined from paleo data, and from models.

  105. Steve Bloom says:

    Except, Tom, ECS derived from natural conditions is a poor guide to our future. Even were it not for the unnatural pace of things, the climate territory we’re headed into has no precedent in the record (the mid-Pliocene comes closest, but that was at relative equilibrium as contrasted to our fast transient). But my specific point was that the GCMs don’t incorporate those slow feedbacks, and most of the ones I know about (e.g. permafrost, soil carbon and rainforest dieback) are positive. In fact, the only negative one I can think of off-hand is CO2 fertilization. Any others?

  106. previously Pekka said:


    The problem with the temperature time series is that it’s not long enough for that taking into account all the autocorrelations that have been present over all time scales of the instrumental data.

    Sometimes you have to marvel at the way a “pure” statistician thinks. The purist wants to remove all the autocorrelations in the data because he finds deterministically forced behavior “uninteresting”. I often see the phrase “the data is autocorrelated” used as a black mark, which I find amusing.

    To a pure statistician, red noise is simply a type of autoregressive (AR) model, devoid of meaning. However, to a physicist, a red noise model is an Ornstein-Uhlenbeck process, tracing its lineage back to when Einstein was formulating his models of Brownian motion.

    That is what AT is getting at with his pleas for more physics and less statistical manipulation. There is real physical meaning to the noise that we are seeing, but people like Keenan do not have an aperture wide enough to see this.

    Let’s get back to statistical mechanics and treat the autocorrelations as the interesting results, suffused with meaning. My opinion of course.
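    To make the AR/O-U correspondence concrete: Euler-discretizing the Ornstein-Uhlenbeck SDE dX = −θX dt + σ dW gives exactly an AR(1) “red noise” recurrence with coefficient φ = 1 − θΔt, so the statistician’s model and the physicist’s process are the same object. A minimal stdlib-Python sketch (the parameters θ, σ, Δt are illustrative only, not fitted to any dataset):

```python
import random
import statistics

random.seed(42)

def ou_as_ar1(theta=0.5, sigma=1.0, dt=0.1, n=20000):
    """Euler step of the Ornstein-Uhlenbeck SDE dX = -theta*X dt + sigma dW:
    X[t+1] = (1 - theta*dt) * X[t] + sigma * sqrt(dt) * N(0, 1),
    i.e. an AR(1) 'red noise' process with coefficient phi = 1 - theta*dt."""
    phi = 1.0 - theta * dt
    x, xs = 0.0, []
    for _ in range(n):
        x = phi * x + sigma * random.gauss(0.0, 1.0) * dt ** 0.5
        xs.append(x)
    return phi, xs

phi, xs = ou_as_ar1()
# The restoring force (phi < 1) keeps the process near its mean instead of
# letting it wander off: the variance settles at sigma^2 * dt / (1 - phi^2).
print(phi)                               # the implied AR(1) coefficient
print(abs(statistics.mean(xs[10000:])))  # small: mean-reverting, stays near 0
```

    The point of the sketch is only that the “restoring force” WHT mentions is exactly what distinguishes red noise from an unbounded random walk.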

  107. basicstats,
    This wasn’t intended to be a physicists vs statisticians post. There are clearly many physicists who mess up their statistics, and many statisticians who think that all you need to understand a dataset is some statistical model.

    I suspect WHT understands the O-U process since it was described as a random walk inside a potential. Given that friction provides a restoring force, I suspect that these are just two different ways of saying the same thing. I also think that that example may well be a statistical model that is constrained by a physical process (friction) and hence is not really a purely statistical model, which is what we’ve been criticising here.

  108. verytallguy says:

    Willard,

    Re

    Very Tall’s master argument may not lead where you thought it would

    Indeed.

    I’d just note that whilst you seem to enjoy the game over there, I’ve absolutely no interest in attempting to engage in a conversation where the likes of

    anyone who behaves like Anders isn’t a scientist – he’s just a corrupt activist liar – and a fool.

    is allowed to pass by denizens and owner alike sans comment.

    Actually, I’d suggest that Anders’ engagement with such a forum is counterproductive as it normalises such behaviour.

  109. VTG,
    I missed that one. Well Andrew did at least ask people to be polite. Not sure they really listened 🙂

  110. verytallguy says:

    Anders,

    respect for trying – after seeing that near the top I’ve not read the thread, so I’ve no idea how it went.

    I do feel that the routine abuse heaped on scientists, both in blogs and in the mainstream media, has the effect of shifting the Overton Window, whether deliberately or de facto.

    My personal view is that the only thing worth doing in any forum where this is happening is to point it out and refuse to engage further on those terms. I guess you setting up this blog and “trying to keep the discussion civil” is perhaps a reflection of similar thoughts.

    Others I feel have different approaches – BBD, Sou and Stoat seem to enjoy giving as good as they get, Willard engages but strictly on his own terms etc.

  111. VTG,
    It went (is still going a little) fine in some sense. Didn’t achieve much but it wasn’t all that unpleasant – although my skin is somewhat thicker now than it was in the past.

    I think you and I have similar views. I quite like what BBD, Sou and Stoat do, but it’s not in my character to be quite that blunt, too often at least. Willard has a style that I would like to be able to emulate (in the sense that it has a remarkable consistency), but I don’t think I quite have the skill.

    What I really wish is that more people would simply speak their mind, without necessarily being unpleasant. The Met Office document, for example, is reasonably clear, but sufficiently vague that some are interpreting it as agreeing with Keenan. Ideally they should have made it much clearer that they really don’t.

  112. Going back to the original question of Lord Donoughue and the answer to it, we see that neither the question nor the answer refers to the reason for the warming, but only to the presence of a long-term upward trend in average global temperatures.

    The answer is based on assumed AR1 noise over the period 1880-2011 (and periods starting at later dates).

    Based on advice from Doug Keenan, a further question was put insisting on a comparison with driftless ARIMA(3,1,0), obviously knowing that ARIMA(3,1,0) is flexible enough to be more consistent with the data than the simple linear trend+AR1. The Met Office gave the correct answer that such a comparison is essentially meaningless and that the numbers that it was forced to calculate show nothing of real interest, and taking them seriously is highly misleading.

    It’s, of course, possible to study how much can be concluded from the temperature time series based purely on statistical time series analysis, but the conclusion that not much can be concluded that way means only that this particular approach has low power. Nothing can be concluded about the power of other methods that use additional input. Finding that one method has little power means that the emphasis should be on other, more powerful methods. Those alternative methods also include Bayesian inference built on a set of simple models of the carbon cycle and global energy balance. Unfortunately such methods involve some subjective input, but the results should still be quite conclusive. (I haven’t done the full analysis. Thus this claim is based only on looking at the data.)

  113. Pekka,

    The Met Office gave the correct answer that such a comparison is essentially meaningless and that the numbers that it was forced to calculate show nothing of real interest, and taking them seriously is highly misleading.

    Yup, that was pretty much my conclusion too. I wonder why Doug Keenan and Andrew Montford seem to think otherwise?

  114. > I’ve absolutely no interest in attempting to engage in a conversation where the likes of [insert your favorite insult] is allowed to pass by denizens and owner alike sans comment.

    I disagree about that one, Very Tall: it is such behavior that authorizes me to request Douglas to comment on Richard Muller’s claim that he’s into “statistical pedantry.” Don’t forget that politeness is a way to say more with less. Let’s see if I can illustrate that point.

    When all you hear is the sound of crickets and distant wordless growls, you can suspect climate zombies. The “yes, but random walk” might be the most direct way to justify Allen’s expression “climate zombie.” Do climate zombies follow a random walk?

    What you deplore at our beloved Bishop’s also occurs here, including this very thread. It’s very difficult to eradicate. It may even be impossible, as it would require a non-argumentative tone (think Math Overflow). I thought of offering AT to moderate personal attacks, but it might lead to unintended consequences, besides the fact that AT is too cheap to pay me. A true Scotsman!

    It might be more expedient that we embrace our inner ClimateBall ™ player.

  115. > I wonder why Doug Keenan and Andrew Montford seem to think otherwise?

    I suggested at our beloved Bishop’s that he needs to agree:

    Interestingly, “the issue” is not quite explicit in what says Douglas or our beloved Bishop. Some, but not me, might even say that’s because they need to agree or, as Douglas suggested at AT’s, it’s “a rhetorical technique, to misdirect people from that issue”. Nullius used the same trick with his “neither models are valid” a bit earlier.

    By chance Nullius came here and let the zombie out of the bag, by claiming that the issue is “whether we understand the physics well enough to be able to eliminate the possibility of any that we *don’t* know about”

    Think about this for a second. When you know enough about issue I, you could eliminate the possibility of anything that we don’t know about! Wouldn’t that be amazing to know that much? We’d turn into Gods.

    Purple Gods, with thousands of arms longer than Slender Man’s, and as many invisible hands.

  116. Andrew Dodds says:

    Steve Bloom –

    Potential negative feedbacks:

    Increased desert area increasing albedo (sand being more reflective than grass/trees)

    More atmospheric moisture increasing snow cover.

    Greater precipitation in mountainous areas increasing weathering and hence CO2 drawdown

    Breakdown of Antarctic bottom water formation causes ocean stagnation, surface layers heat up and radiate more.

    Bleached bones of the dead reflect more sunlight than living bodies.

    Evaporation and photodissociation of the oceans leads to the earth drying out and cessation of the water vapour feedback.

    (Hmmm. Possibly getting carried away here..)

  117. verytallguy says:

    Willard,

    yes, but implications.

    Allow me to exemplify. The implication of accepting the ad nauseam (1) demonising (2) of reputable scientists is that you are choosing to endorse chosen techniques of propaganda (3).

    And of course, the implication of playing ClimateBall(TM) is that there is a game to be played, a debate to be had. Maintaining that pretence of debate is the single most important outcome that climate deniers crave.

    As to embracing our inner ClimateBall player, I’m not convinced that means anything more than enjoying a good argument.

    (1), (2), (3) http://en.wikipedia.org/wiki/Propaganda

  118. John Hartz says:

    VTG: I agree with your assessment of the merits of ClimateBall. In many respects, ClimateBall is nothing more than a reality-escape mechanism. Gumming stuff to death gets us nowhere in the real world, where actions, not words, are needed to mitigate, and adapt to, man-made climate change.

  119. > The implication of accepting the ad nauseam (1) demonising (2) of reputable scientists is that you are choosing to endorse chosen techniques of propaganda (3).

    Then propaganda there is in this very thread, and in most where John Hartz comments, if you ask me, besides the cheer leading, which also belongs to ClimateBall ™.

    More generally, any comment that targets another player is a ClimateBall ™ move that should be deleted. This applies to the first sentence of that comment.

  120. any comment that targets another player is a ClimateBall ™ move that should be deleted.

    I quite like the idea of that, but I doubt I have the energy or incentive to actually implement it. Would be good to bear it in mind and – periodically – be reminded of it as a goal.

  121. John Hartz says:

    Question of the Day: Has Climateball evolved into an ideology in its own right?

  122. Steve Bloom says:

    I’d as lief lie low in a lake of lye, Willard.

  123. BBD says:

    Steve, I’m impressed and slightly envious of your alliterative genius, but Willard still wins the Internet (today, at least):

    Wouldn’t that be amazing to know that much? We’d turn into Gods.

    Purple Gods, with thousands of arms longer than Slender Man’s, and as many invisible hands.

  124. Steve Bloom says:

    I am a Lilliputian to his Leviathan, BBD.

  125. BBD says:

    Aren’t we all?

  126. Indeed, that’s certainly how I feel.

  127. BBD says:

    He’ll have to go. Somebody get Rachel.

    Then I can start using emoticons again.

    🙂

  128. I notice that Willard’s new Contrarian Matrix project is being promoted by Michael Tobis. I have to admit that I enjoyed the dialogue that Michael presented.

    As an aside, my PhD supervisor had a poster at an AGU meeting a fair number of years ago, and he presented it in the form of a Socratic dialogue. I don’t think it was appreciated quite as well as he had hoped 🙂

  129. Eli Rabett says:

    Purple Gods, with thousands of arms longer than Slender Man’s, and as many invisible hands.

    Shub, also the eyeballs.

  130. Rachel M says:

    He’ll have to go. Somebody get Rachel.

    Who? Willard? If he keeps playing the ref I might have to ban him.


  131. basicstats says:
    Indeed, a textbook on cointegration referenced by Beenstock et al has it listed in the index!

    Have to remember that economists will borrow any math from physics they can. I recall reading that aether theory is still somewhat popular in economic models despite the fact that it was discarded in physics by the early 1900s. At the time of reading that, I thought it made sense because all that economists can do is come up with a heuristic to explain some behavior, while the world of equations, including those from physics theories, provides an archive of cheap heuristics that they can plunder from.

  132. jyyh says:

    oh, one-trick-ponies. in this country, maths, stats, and some parts of philosophy are counted in as science, but I hear some other countries are more sensible. also, in english, and in most ‘civilized countries’, there’s the misleading term of ‘rationalism’ that already in it’s principles abandons observation as evidence, it’s no wonder people connect ‘rational’ with ‘rationalism’, though they’re pretty much opposite terms wrt to chaotic responses commonly accepted happening in various physical systems. ‘things as they should be’ is not the same as ‘things as they are’, and though science proposes laws ‘these things are in this way as they should be’ it’s not to be confused with the rationalism’s ‘these thing’s are not this way as they should be’. It’s pretty early morning here so my philology here might have some translation issues, so if that doesn’t make sense skip it. Hah.

  133. Pingback: Adventures on the Hill | And Then There's Physics

  134. @BBD: If even I can understand that the claim that C20th warming is a random walk violates conservation of energy,

    Oh, sure, BBD. If God played dice he’d be violating conservation of energy.

    So quantum mechanics is rubbish, eh?

  135. @Rachel: Who? Willard? If he keeps playing the ref I might have to ban him.

    I wouldn’t be here if willard hadn’t tipped me off. I only publish in those journals and comment at those blogs where I can stir up trouble, so for those not looking for trouble banning him could be a Good Thing.

  136. Vaughan,

    So quantum mechanics is rubbish, eh?

    No, but I think our 20th century warming isn’t simply a quantum fluctuation. I think that would require Planck’s constant being much larger than we currently think it is 🙂 .

    Just to avoid confusion. Rachel was joking. Willard is always welcome here and his contributions are appreciated.

  137. jyyh says:

    ending my participation here with the old philosophist joke:”There are two types of rationalists, those who accept sensory evidence.”

  138. I think that would require Planck’s constant being much larger than we currently think it is.

    How does that follow? Planck used his eponymous constant to bridge the awkward gap between Wien’s UV model of insolation and the Rayleigh-Jeans IR model. Had some other bridge been the correct one, insolation within the visible spectrum might have been quite different, with a profound impact on climate. Ostensibly microscopic quantum fluctuations can reveal themselves in macroscopic ways.

    I would be fascinated to see a proof that randomness violated one conservation law without violating all physical laws, including the Pauli Exclusion Principle, the speed of light, etc. Defining “random” to mean “lawless” trivially has that result. Is there a less trivial definition of “random” that violates energy conservation in particular, as opposed to violating all laws? And is “lawless” the definition of “random” used by those claiming temperature is indistinguishable from a random walk?

    The definition of “random” I prefer is based on suitable variants of the Kolmogorov complexity of finite strings, with e.g. “analytic” in place of “computable” in its definition (the latter is an implausibly large class for physical purposes). Such a definition permits an assessment of randomness of a time series (expressed as a bit string so as to take precision of readings into account) in terms of its compressibility. A string is random when it cannot be compressed significantly, and is nonrandom when it can be compressed to a suitable fraction of its length. The better the compression, the better the theory of the string permitting that compression.

    A relativized extension of this allows compression to be reduced to other time series. For example you may have a time series for CO2 that is ostensibly random, i.e. you don’t know how to compress it, but which can be used to compress a time series for temperature based on a theory of how CO2 governs temperature. Or the AMO index instead of CO2. A good relativized compression of this kind can be taken as a justification of such a relationship.
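    The compressibility criterion is cheap to experiment with, using an off-the-shelf compressor as a crude but computable stand-in for Kolmogorov complexity (which is uncomputable in general). A stdlib-Python sketch, with made-up byte strings standing in for real time series:

```python
import os
import zlib

def compressibility(data: bytes) -> float:
    """Compressed length over raw length: well below 1.0 means a short
    'theory' of the string exists; near (or above) 1.0 means none was found."""
    return len(zlib.compress(data, 9)) / len(data)

# A highly structured "time series": a slow linear trend quantized to bytes.
trend = bytes(min(255, i // 100) for i in range(20000))
# An (algorithmically) incompressible one: bytes from the OS entropy source.
noise = os.urandom(20000)

print(compressibility(trend) < 0.05)   # True: the trend compresses very well
print(compressibility(noise) > 0.95)   # True: no compression to speak of
```

    In Vaughan’s terms, the better the compression, the better the implicit theory of the string; the noise string admits no theory at all by this measure.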

  139. Vaughan,
    I was joking. I was simply suggesting that the excess energy in our climate since the late 1800s cannot simply be a quantum fluctuation 🙂

    I’m not quite following the rest of your comment. It’s certainly possible that one could describe various time series as being random in some sense, but that still wouldn’t change the radiative influence of CO2. The issue that I have with what Doug Keenan is suggesting is that he seems to be arguing that the temperature time series is not inconsistent with a random walk. That may be true, but it doesn’t then mean that the processes that caused this change in temperature are natural and not anthropogenic.

  140. A pure random walk is a Martingale process (i.e. gambler’s ruin) and it will eventually wander off to infinity. I think that would violate conservation of energy.

    Modified random walks such as Ornstein-Uhlenbeck are bounded and revert to the mean.
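    The difference shows up immediately in simulation: the pure walk’s RMS displacement grows like √n without bound, while even mild mean reversion caps it at a fixed stationary value. A stdlib-Python sketch (unit-variance shocks; the step counts and φ are illustrative only):

```python
import random

random.seed(0)

def rms_after(n_steps, phi, n_paths=2000):
    """RMS displacement after n_steps of X[t+1] = phi*X[t] + N(0, 1),
    averaged over n_paths. phi = 1.0 is the pure random walk;
    phi < 1 gives a mean-reverting (Ornstein-Uhlenbeck-like) walk."""
    total = 0.0
    for _ in range(n_paths):
        x = 0.0
        for _ in range(n_steps):
            x = phi * x + random.gauss(0.0, 1.0)
        total += x * x
    return (total / n_paths) ** 0.5

# Pure random walk: E[X_n^2] = n, so RMS grows like sqrt(n) -- unbounded.
print(rms_after(100, 1.0), rms_after(400, 1.0))   # roughly 10 and 20
# Mean-reverting walk: RMS saturates near 1/sqrt(1 - phi^2),
# no matter how long it runs.
print(rms_after(100, 0.9), rms_after(400, 0.9))   # both roughly 2.3
```

    It is the second kind, not the first, that a conserved-energy system can resemble over long times.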

  141. I am quoting this great tweet of yours on Climate Etc to upset the “denizens”

    It's strange that any discussion with a contrarian about climate science invariably ends up being about economics. Why is that? — There's Physics (@theresphysics) July 4, 2014

  142. That may be true but doesn’t then mean that the processes that caused this change in temperature are natural and not anthropogenic.

    Quite right, but presumably DK’s point is merely the weaker one that the natural-vs-anthropogenic question can’t be reliably decided on the basis of data that is so noisy as to be indistinguishable from a random walk. I would agree that anything stronger would be unreasonable.

    A pure random walk is a Martingale process (i.e. gambler’s ruin) and it will eventually wander off to infinity. I think that would violate conservation of energy.

    What got my attention here was the specificity of the violation. Why energy? Is there any physical law not violated by a process that wanders off to infinity?

    Also, since the expected distance from the origin grows only as the square root of the wandering time, the process is not going to be terribly far from the origin for any plausible choice of the end of time.

    Mathematicians have a bad habit of extrapolating known physics far into the unknown, e.g. quantum computing.

  143. Vaughan,

    DK’s point is merely the weaker one that the natural-vs-anthropogenic question can’t be reliably decided on the basis of data that is so noisy as to be indistinguishable from a random walk. I would agree that anything stronger would be unreasonable.

    It’s my understanding that his argument is stronger than that. There are a number of variants, but it seems to go something like this.

    • We can’t determine if the warming is significant.

    By significant, he appears to mean significantly different from a natural random walk. Well, that seems like a rather odd definition of significant since it seems clear that the warming since the late 1800s is statistically significant (you probably don’t even need to apply a full statistical model to be convinced of this).

    • He then seems to argue that the only way to determine significance is using a statistical model.

    Well, if you have a well-defined null and simply want to determine if the data is consistent – or not – with this null, then this may be true. However, if you want to determine if it is statistically consistent with a natural (rather than anthropogenic) process, then you need some kind of physical model. Essentially, you need to do some kind of attribution study.

    So, essentially, DK seems to be completely ignoring any physics and then seems to use statistical arguments to suggest that we can’t tell if the warming is natural or not. Given that without a physical model you don’t actually know what the influence of natural effects would be, this argument seems trivially wrong.

    Some of the posts here may provide some context.
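    On the first bullet: even under a simple trend-plus-AR(1) null, a 1880–2011-like warming signal is overwhelmingly significant. A stdlib-Python sketch on synthetic data (the 0.8 °C trend, φ = 0.5 and 0.1 °C innovations are made-up but climate-ish numbers), using the standard effective-sample-size correction for AR(1) errors:

```python
import math
import random

random.seed(1)

def trend_t_stat(y, phi):
    """OLS slope t-statistic for y against time, with the standard error
    inflated by sqrt((1 + phi) / (1 - phi)) -- the effective-sample-size
    correction for AR(1) residuals."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    slope = sum((i - xbar) * (y[i] - ybar) for i in range(n)) / sxx
    resid = [y[i] - ybar - slope * (i - xbar) for i in range(n)]
    s2 = sum(r * r for r in resid) / (n - 2)
    se = math.sqrt(s2 / sxx) * math.sqrt((1 + phi) / (1 - phi))
    return slope / se

# Synthetic "anomalies", 132 years (1880-2011): 0.8 C of linear warming
# plus AR(1) noise with phi = 0.5 and innovation s.d. 0.1 C.
phi, n_years = 0.5, 132
x, noise = 0.0, []
for _ in range(n_years):
    x = phi * x + random.gauss(0.0, 0.1)
    noise.append(x)
temps = [0.8 * i / (n_years - 1) + noise[i] for i in range(n_years)]

print(trend_t_stat(temps, phi))   # on the order of 10: unmistakable trend
```

    Which is the point: against that null, significance is not in doubt; the only way to make it doubtful is to swap in an integrated null like driftless ARIMA(3,1,0), which the physics rules out.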

  144. I’ve noted Griffiths p115:

    “… It is often said that the uncertainty principle means that energy is not strictly conserved in quantum mechanics – that you’re allowed to “borrow” energy ΔE, as long as you “pay it back” in a time Δt ~ ℏ/(2ΔE); the greater the violation, the briefer the period over which it can occur. There are many legitimate readings of the energy-time uncertainty principle, but this is not one of them. Nowhere does quantum mechanics license violation of energy conservation, and certainly no such authorization entered into the derivation of Equation 3.151. But the uncertainty principle is extraordinarily robust: It can be misused without leading to seriously incorrect results, and as a consequence physicists are in the habit of applying it rather carelessly.”

  145. Concerning QM I’m somewhat skeptical of statements of the type

    There are many legitimate readings of … , but this is not one of them.

    It’s quite possible that one legitimate reading is unknown to the person who makes that statement. Physicists agree widely on concrete results obtained from QM, but not necessarily on statements of that nature, and for reasons that do not make one interpretation strictly inferior to another.

    One reason for that is in the language. The natural languages have not been developed based on full understanding of QM. Therefore the relationship of QM with the language used to describe it is not well defined, and what’s obvious about this relationship for one physicist is wrong for another. Philosophers of science have tried to improve on that but with limited success.

    There’s much less disagreement on the formulas than on how their meaning can be explained using words.

  146. > Is there any physical law not violated by a process that wanders off to infinity?

    Audits violate laws of conversation and nothing else.

  147. > DK’s point is merely the weaker one that the natural-vs-anthropogenic question can’t be reliably decided on the basis of data that is so noisy as to be indistinguishable from a random walk. I would agree that anything stronger would be unreasonable.

    Well, just this morning Nullius switched back the pea from the “pure statistics” to the “random physics” thimble:

    As I explained at length earlier in this thread, the ARIMA “random walk” model is no less physically valid or meaningful than a linear trend (or linear trend plus AR(1)). There’s even a simple physical interpretation of it, although its relationship to reality is necessarily approximate/oversimplified, even if true.

    The output being ARIMA(3,1,0) says that the net heat flow in or out of the system in a given year follows a 2nd order differential equation subjected to random shocks (due to cloudiness, for example), the output of which is integrated with any damping/nonlinearity too small to be resolved on the timescale of the data we have. All of physics is about differential equations. That’s a large part of why ARIMA models are used to model stuff.

    http://www.bishop-hill.net/blog/2014/7/3/where-there-is-harmony-let-us-create-discord.html?currentPage=5#comments

    I’m not sure Douglas J. Keenan endorsed Nullius’ explanation, but he sure moves from the pea under the “accountable politics” thimble when he says:

    That scientists can go around doing that with impunity is a huge problem: there needs to be accountability.

    How Douglas can go from an argument that ends with “no models are justified the way I like” to “scientists need to be accountable” is left as an exercise to the readers.

  148. Willard,
    I was in the process of commenting there again, but after re-reading Doug’s “is 6 bigger than 5” analogy, I really can’t see the point. It really seems like he’s arguing that one should not ask the question “is 6 bigger than 5”, because 7 is also bigger than 5.

  149. John Hartz says:

    Willard: Are you deliberately attempting to transfer the discussion going on over at BH’s onto this comment thread? If so, why?

  150. Thank you for your concerns about my motivations, John.

  151. Talking of random walks, wandering off into QM smacks of counterproductive mission creep, so I must apologize in advance for continuing it by extending Pekka’s point about interpretation of language as follows.

    Interpretation in QM is deeper than merely physicists inadvertently talking past each other due to inconsistent usage. The question is not whether Heisenberg’s matrix mechanics or Schroedinger’s wave mechanics is the correct understanding—in December 1926 Schroedinger gave a formal equivalence (understood today as a duality) showing they were quantitatively identical. The question is how does this joint Heisenberg-Schroedinger picture relate to our intuitions about reality. The several interpretations of QM put this question under the spotlight by explicitly addressing and fleshing it out.

    A second deeper matter of interpretation is raised by Heisenberg uncertainty itself. This is sometimes cast as an inability to know the precise values of two conjugate variables simultaneously, as though they had precise values which nature has coyly hidden from mere mortals however clever their experiment. The atheistic nature-denying view denies that it is merely a knowledge question and takes the stronger position that no pair of conjugate variables can have a pair of values to better than a certain joint uncertainty. Time and energy, angular momentum about two orthogonal axes, etc. are not even precisely defined when considered jointly!

    While one might infer that conservation of energy must therefore itself be as undefined as that which is required to be conserved, it should be borne in mind that conservation is not defined at one instant but over time. Conservation of energy over two nanoseconds is definable to twice the precision of its definability over one nanosecond.

    In this sense Heisenberg uncertainty poses no theoretical, let alone practical, obstacle to conservation of energy. Perhaps some future experiment will reveal a failure of conservation of energy, but it won’t be Heisenberg uncertainty’s fault.

  152. John H.,
    I doubt much will move over here. Also, Willard did ask if I minded and I don’t.

  153. Vaughan,

    so I must apologize in advance for continuing it by extending Pekka’s point about interpretation of language as follows.

    No real need to apologise. Interesting wanderings are more than welcome.

  154. Vaughan (with implied permission of ATTP),

    QM states are not given by values in some coordinate space, they are states in a very different space (vectors in a Hilbert space). The Schrödinger representation and Heisenberg representation are just two different ways of describing these states. Dualism is not in the difference of these representations as both imply the full dualism, which is an issue in interpreting QM using concepts of classical physics and language restricted to classical physics.

  155. John Hartz says:

    Willard: You’re welcome.

  156. ATTP, you left out the “presumably” in front of my “DK’s point is merely the weaker one”. I put it there because anything stronger is demonstrably illogical. As a strengthening of my “anything stronger would be unreasonable”, based on what you attribute to Keenan I see now that I should not have given him the benefit of the doubt as to his ability to reason logically.

    This question of statistics vs. physics in climate has come up in similar forms in other contexts.

    1. Chomsky’s famous critique of behaviorist psychology, of which Skinner was a prominent advocate. Roughly speaking, whereas behaviorism studies input-output relations as observed experimentally, rationalism takes the mechanism responsible for that relation into account. Whereas statistics plays an essential role in quantifying input-output relations in language behavior, it need play no role in the sort of rationalist account of language preferred by Chomsky.

    2. The Minimum Message Length approach to quantifying the quality of a theory. This can be understood as a quantitative elaboration of Occam’s Razor: shorter theories are better. A nice account by Chris Wallace of this development can be seen here, including the roles of David Boulton, Ray Solomonoff, Andrey Kolmogorov, Greg Chaitin, and Jorma Rissanen (but not William of Occam) in its development. Its relevance here is that it is a completely model-free approach to evaluating theories.

    Being a precisely defined approach makes it easier to raise a precise objection to MML. Given two observationally equivalent black boxes, meaning that they enjoy the same input-output relation as measured by all statistical tests, there is no reason to assume their inner mechanisms are of the same complexity: one may be simple and the other complicated such as a Rube Goldberg contraption with a simple externally observable behavior.

    If you are handed the box with the complicated mechanism and asked to explain it, the MML approach will get it wrong by preferring the simple mechanism. This will happen even if you have some independent non-MML-type reason for preferring the complicated mechanism.

    The MML justification of a theory of anything should in general be the justification of last resort. If you reasonably expect a certain behavior, the basis for your expectation should take priority over an explanation of the behavior whose only merit is its succinctness.

    Occam’s Razor as a justification for a theory has no scientific basis, any more than does MML.

    In the case of climate, if laboratory experiments show that CO2 absorbs IR, and analysis of the lab results indicate that they can be extrapolated to a planetary scale, then the correlation of temperature and CO2 becomes more than a mere statistical phenomenon. It is the expected behavior based on both theoretical and empirical understandings of the relevant physics.

    When the AMO, the influence of solar cycles, and the ocean-caused delay in climate response predicted by Hansen et al in 1986 are all taken into account, with parameters estimated as usual by fitting, and with CO2 based on ice cores before 1958 and the Keeling curve after, multidecadal global land-sea temperature since 1850 beautifully tracks 3 times the log base 2 of CO2, see e.g. my recent AGU talk on this. Furthermore there is no sign of a recent pause in this tracking, which completely disappears after factoring out the AMO and the solar influence.

    When experiment confirms a prediction sufficiently precisely, statistics becomes sterile. If careful measurement shows a baseball to closely follow a parabolic trajectory, little is added by quantifying the goodness of fit in terms of its standard deviation: one can easily see the quality of the fit by eye. The same goes for an excellent fit of observed to expected temperature, and that’s what we have today with HadCRUT4 vs. CO2.
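    For what it’s worth, the headline relation is trivial to encode. This is only the stated rule of thumb (3 °C of warming per CO2 doubling against a pre-industrial baseline I’m taking to be 280 ppm); the AMO, solar and ocean-delay terms of the actual fit are omitted:

```python
import math

def expected_warming(co2_ppm, co2_base=280.0, sensitivity=3.0):
    """Warming = sensitivity (deg C per CO2 doubling) * log2(CO2 ratio).
    co2_base = 280 ppm (pre-industrial) is an assumption for illustration."""
    return sensitivity * math.log2(co2_ppm / co2_base)

print(expected_warming(280.0))              # 0.0 at the baseline
print(expected_warming(560.0))              # 3.0 after one doubling
print(round(expected_warming(400.0), 2))    # ~1.54 at 400 ppm
```

    The logarithm is why the fit is against log CO2 rather than CO2 itself: each doubling, not each ppm, contributes the same forcing.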

  157. Vaughan,
    Apologies, I wasn’t meaning to misquote you or imply anything by that.

    Occam’s Razor as a justification for a theory has no scientific basis, any more than does MML.

    Yes, I agree. I’d always rather interpreted Occam’s Razor more as a guide for model development (don’t make it more complicated than you need to) rather than as a simple mechanism for discriminating between different models (unless, of course, you already know that one has unnecessary complications).

    Thanks for the link to the talk. I’ll have a look at that.

  158. @Pekka: Dualism is not in the difference of [the Schrodinger and Heisenberg] representations

    True that. It’s in the difference between a ket as a QM state and a bra as an operator (functional) thereon.

    Whether you consider bras to come from the same vector space as kets is determined by whether you’re a physicist or a mathematician. The physicist would ask, how could they not be the same given that there’s only one separable infinite-dimensional Hilbert space? The mathematician would rebut this with “They’re only the same up to an unphysical isomorphism”, namely complex conjugation, which is unphysical because it reverses time.

  159. KR says:

    Richard Tol – Beenstock’s papers use the wrong test for the data (the ADF, explicitly, only produces useful results on randomness plus a potential linear trend; they applied it to periods with non-linear forcings obtaining erroneous results – see here and here), they ignore volcanic forcings, and their analysis concludes among other things that warming is _negatively_ correlated with methane (” temperature varies … inversely with the change in rfCH4.”), a completely, utterly unphysical result.

    The climate is not a random walk, unbounded wandering from previous states, Richard. It is a physical system constrained by the conservation of energy, which makes it trend-stationary. Running economic analysis while ignoring physics goes beyond foolish.

    It’s difficult to see how that work could be more wrong. Beenstock et al.’s work shouldn’t be ignored due to ideology, but rather Beenstock et al. should be ignored because it is blithering nonsense.

    The fact that you recommend such work does not speak well to your understanding of science. It fails basic sanity tests, much as do your missing 300 consensus rejection abstracts. No further reason for ignoring the work is required.
    #FreeTheTol300

  160. Arthur Smith says:

    While the discussion may have left this topic, I felt the need to correct a comment by Vaughan Pratt here – ” the expected distance from the origin is only logarithmic in the wandering time, the process is not going to be terribly far from the origin for any plausible choice of the end of time.”

    As far as I was aware, the expected distance from the origin for a random walk increases with the square root of time, not logarithmically. Not as bad as linear, but far from the logarithmic situation over long periods of time. The case against a regular random walk description of Earth’s temperature or any other reasonable bounded physical parameter is pretty strong.

  161. Arthur,
    Yes, as I understand it, the rms distance goes as the square root of the time.

    The case against a regular random walk description of Earth’s temperature or any other reasonable bounded physical parameter is pretty strong.

    Indeed, very strong.
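
    For anyone who wants to check the square-root scaling numerically, here is a quick sketch (the step counts and sample size are arbitrary choices of mine):

```python
import math
import random

def rms_distance(steps, walks=5000, seed=42):
    """RMS distance from the origin after `steps` unit steps of a 1-D random walk."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walks):
        pos = sum(rng.choice((-1, 1)) for _ in range(steps))
        total += pos * pos
    return math.sqrt(total / walks)

# Quadrupling the wandering time should roughly double the RMS distance
# (square-root scaling), rather than adding a constant (log scaling).
r100, r400 = rms_distance(100), rms_distance(400)
print(round(r400 / r100, 2))  # close to 2
```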

  162. > Beenstock et al.’s work shouldn’t be ignored due to ideology, but rather Beenstock et al. should be ignored because it is blithering nonsense.

    I disagree. It makes for a good exercise. It’s like the “do you think a blanket warms?” kind of stuff.

  163. Willard,
    In the sense that all published work plays some kind of role (if only to illustrate what you shouldn’t do or why something is wrong), then sure.

  164. the expected distance from the origin for a random walk increases with the square root of time, not logarithmically

    Thanks, Arthur. log(0) = minus infinity failed to trigger the warning bell it should have, likewise for log(1) = 0.

  165. John Hartz says:

    Judging by the following article, the proper use of statistics appears to be a hot topic within the scientific community.

    Major Scientific Journal Joins Push to Screen Statistics in Papers It Publishes by Richard Van Noorden and Nature magazine, republished by Scientific American, July 6, 2014

  166. John H.,
    Yes, it is a big thing and many physicists, for example, do get their statistics wrong. There are, however, some who are excellent statisticians, especially those who work with big data sets. Of course, I’m sure there is much that physicists could learn from expert statisticians (and already do), but there do seem to be some statisticians who try to criticise a field without understanding the underlying details. In a sense it works both ways, but my biased guess is that there are many more physicists who are skilled statisticians than there are statisticians who are skilled physicists.

    In many fields of science, research can be done in a way that takes into account the requirements of well-understood standard methods of statistical analysis. In those cases it’s important that the appropriate practices are applied throughout the research, from the beginning to the end. Often in physics – and in almost all of climate science – that’s not possible; rather, the data has unavoidable deficiencies as input to the standard statistical methods. In those cases both the physicists (or climate scientists) and professional statisticians may understand only part of the essential issues, but an extended collaboration between them might result in a better outcome.

    The physicists might have ideas that lead to new non-standard methods, while the statisticians might be able to help in avoiding the pitfalls so typical of a scientist trying to invent ever more powerful methods for a difficult statistical analysis.

  168. John Mashey says:

    1) Good statisticians who get involved enough in an application area are invaluable.

    2) Then there are folks like Wegman&Said or McShane and Wyner.
    See discussion p.67- in Strange Scholarship. See especially (on the right side of the page) the quotes from (statistician) Jim Berger.
    As I noted there:
    “The last comments are akin to the issues of generalists-vs-specialists mentioned in W.5.2. Statisticians must learn enough science to be useful, and scientists need to know when to ask for statistical help, if they can get some, which may not always be possible.”

    W.5.2 (p.144) notes:
    “Generalists may know some widely used mathematical and computing techniques without knowing the literature and terminology of a specific field that uses such techniques. Occasionally, they may jump into an application field, but study the literature insufficiently to be able to produce credible, impactful results. They may reinvent techniques already widely used there, whereas specialists may tend to the inverse, reinventing mathematical techniques well-known in other fields.”

    3) Bell Labs was lucky to be able to have a whole lab full of mathematicians and statisticians, like John Tukey, Joe Kruskal, John Chambers (S…R), etc. Other people knew it might be a good idea to consult them on occasion.

  169. Beenstock et al. should be ignored because it is blithering nonsense.

    Check with Willard but I’m not sure that’s a winning Climateball move. 😉

    If Beenstock’s main objection is that AGW is founded on poor statistics, then would it not be better to point out that this is a straw man? AGW is not founded on statistics at all, it was predicted in the 19th and early 20th centuries based on physics and paleoclimate, many decades before human CO2 had reached measurable levels, when no statistical evidence was available.

    Over the last half century CO2 rose from 320 ppmv to 400 ppmv, a level not seen for millions of years. As shown here temperature (green plot) over the same period trended up (blue trend line) 0.755 °C (0.151°C/decade, click on Raw Data and see the line 7 from the bottom). The temperature plot for the preceding century shows relatively aimless wandering, and without the CO2 rise would likely have continued to decline for some decades after 1960 before rising again. No statistics is needed to understand this picture.
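
    As a quick arithmetic check of the two trend figures quoted above (taking the half century as exactly five decades):

```python
total_rise_c = 0.755   # trend over the half century, from the comment
decades = 5.0
rate = total_rise_c / decades
print(round(rate, 3))  # 0.151 C/decade, matching the quoted figure
```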

    If there were an alternative explanation for this sudden sharp rise in temperature accompanying the sharp rise in CO2 (red plot in the above), it might be reasonable to suggest that the correlation with CO2 is a mere coincidence. However the earlier prediction, in combination with the absence of an alternative explanation, makes it very unlikely that this is simply a coincidence.

    One also does not need statistics to see clearly from the graph that temperature has been rising throughout the last half century, and that the so-called “pause” is nothing but an artifact of looking too closely at just a small part of the temperature record.

  170. David Young says:

    It is often the case in my experience that statistician involvement when experiments are designed is helpful and in some fields such as medicine almost a requirement these days. That’s a good thing I think. When one opposes statistics and “physics” one sets up a false dichotomy and encourages bad science and bad statistics. Statisticians are very helpful in combating confirmation bias and positive results bias to which scientists are not immune. Denial of these biases is seeming less and less credible to me in general given the strong recognition of them in medicine and its many sub fields. It seems to me that climate science has an unfortunate history in this regard that tends to cloud the discussion.

    So Pekka makes the best point of this entire thread and I wholeheartedly agree with that point, even though Pekka as always says it more politely and nonconfrontationally.

  171. What’s more important than a statistician is a statistical mechanic — the grease monkey of physics.

  172. Someone (Disraeli? Mark Twain?) once said, “There are lies, damned lies, and statistics.”

    The percentage of the public capable of appreciating this catchy bon mot dwarfs that capable of understanding even the most elementary principles of statistics.

    If the scientific community wants to decide whether faster than light (FTL) communication has been achieved, or whether the Higgs boson has been sighted, statistics is supposedly an effective tool for passing such judgments.

    And apparently also an unreliable one.

    The faster-than-light neutrinos of the OPERA experiment were judged reliable at 6 sigma. That is, the probability of so large a deviation arising by chance was about one in half a billion. In other words, essentially impossible.

    Nonetheless the impossible happened. Against (almost) all odds an elementary mistake had been made, and the OPERA observation is no longer accepted.
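
    Converting sigmas to tail probabilities is a one-liner with the complementary error function; I’m using the two-sided convention here, which is what reproduces the “one in half a billion” figure (one-sided would give roughly one in a billion):

```python
import math

def two_sided_p(sigma):
    """Probability that a Gaussian variable deviates from its mean
    by at least `sigma` standard deviations in either direction."""
    return math.erfc(sigma / math.sqrt(2))

p6 = two_sided_p(6)
print(f"6 sigma: about 1 in {1 / p6:,.0f}")  # roughly one in half a billion
```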

    Back in early July of 2012, laboratories claiming to have observed the Higgs particle, a zero-spin positive-parity boson, were reporting a confidence level of 4.9 sigma but only for a Higgs-like particle. By the end of July this had increased to 5.9 sigma at ATLAS and 5 at CMS. By December both ATLAS and CMS were prepared to claim 7 sigma confidence.

    These numbers defy intuition. Intuitively no one should expect to see a violation of a 6 sigma confident proposition, yet it happened. And intuitively no one should expect an increase from 5.9 sigma to 7 sigma to make the slightest practical difference to confidence, yet to judge from the laboratory reports it did for the Higgs-like particle.

    And what of the Higgs boson itself? Stephen Hawking allowed that he’d lost his bet on its existence, but at that time how certain was anyone of this Higgs-like particle’s spin and parity? Or even today?

    With examples like these, how is the public supposed to make any more sense of statistics than of quantum mechanics? There are serious violations of intuition here.

    The appropriately named Marilyn vos Savant turned out to have a flair for probability if nothing else (she believes bicycles remain upright by the gyroscopic action of their spinning wheels). A phalanx of mathematics Ph.D.s unsuccessfully attacked her correct reasoning about the Monty Hall problem, making the point that the ability to get your mathematics thesis past your reading committee says nothing about your competence in matters statistical.
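
    Her conclusion, that switching doors wins two times in three, is easily confirmed by simulation (a sketch of mine; the trial count and seed are arbitrary):

```python
import random

def switch_win_rate(trials=100_000, seed=1):
    """Fraction of Monty Hall games won by the always-switch strategy.

    Since the host always opens a goat door, switching wins exactly
    when the initial pick was wrong, which happens 2/3 of the time.
    """
    rng = random.Random(seed)
    wins = sum(rng.randrange(3) != rng.randrange(3)  # car door vs. first pick
               for _ in range(trials))
    return wins / trials

print(round(switch_win_rate(), 3))  # close to 2/3
```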

    Perhaps a statistical case can be made for the need for statistics. But who among those who instinctively distrust statistics for all of the above reasons including Disraeli/Twain’s is going to accept such a case?

    When a case can be made that some scientific fact is obvious without statistics, far better to present that fact as being obvious than to allow oneself to be dragged into statistical doubt by the doubt-mongers.

    Admittedly this can lead to inconsistency: two mutually inconsistent facts might both be demonstrably “obvious.” Arguably therefore this could be grounds for retreating to the shelter of statistics.

    The problem there is that people don’t understand statistics.

    Far better is to find some nonstatistical reconciliation of the apparent inconsistency. Only when this proves impossible should one resort to statistics, and even then only for the benefit of those equipped to handle statistics at the requisite depth. For the public at large such inconsistencies will forever remain intrinsic paradoxes, as conceptually incomprehensible as those of quantum mechanics.

  173. David,
    The problem of confirmatory bias is real, but so is the possibility of exaggerating the problem, and of exaggerating the influence of these uncertainties on rational decision making.

    Vaughan made the point others, including myself, have also made in this thread. It also seems to be the point Anders tried to make in his original post, although he could have made that clearer. The point is that the warming from additional GHGs is a strongly supported result of physical understanding, and was predicted before being observed. When that’s the case, the right question is not whether the effect is real, but how strong it is and what its further properties are. Those questions must be studied, and have been studied, using different statistical methods than the question: “Does the time series contain something exceptional over the latest decades?”

  174. Vaughan,

    Multi-sigma deviations have, indeed, been misused as a measure of certainty. People have been saying that elementary particle physicists insist on a statistical significance described by five sigma or so, but they surely would not do that:
    – if they were sure that the errors were solely statistical, and
    – if they were sure that the distribution is Gaussian.

    Insisting on some high number of sigmas makes some sense, when the main additional uncertainty comes from the shape of the PDF. It’s, however, an irrelevant measure, when the additional uncertainty is due to systematic errors as the size of possible systematic errors has no obvious relationship to factors that determine statistical significance of the results.

    I have my personal recollections of several errors in applying statistics. One case is the split A2 meson discussed as one example here. A good friend of mine spent a year or so studying theoretical ideas that might lead to a resonance of such properties, before it was realized that the whole split was an artifact of erroneous processing and statistical analysis of the empirical data.

    Another case was a paper I reviewed for some journal (perhaps Nuclear Physics B, perhaps some less known journal like Acta Physica Polonica). The claimed observation had a fundamental significance as high as that of the faster-than-light neutrinos. The paper claimed a significance of five sigma or so. I got curious and started digging. The statistical test was on the run-length distribution of events divided into two classes. First I realized that they had a wrong formula for the standard deviation and thought that was the explanation, but calculating the correct standard deviation went in the wrong direction: the effect was now at eight sigma. It took some more time to find an error in the estimate itself. That was clearly in the right direction, but I couldn’t calculate the size of the correction. I haven’t heard about the observation since, so I assume that I found the right explanation.

    One of the main sources of confirmation bias is that people make an effort in search of errors when they dislike the result, but put much less scrutiny into verifying the analysis when they like the result. I spent several days reviewing one article, because the claim was dramatic and explicit, but highly counter-intuitive. The error was difficult enough to spot that it could have got through in an analysis confirming results more in line with expectations.

  175. > The problem there is that people don’t understand statistics.

    Including statisticians, if we believe Deborah Mayo. People don’t get logic either, at least if we go as far as to contemplate the modus tollens:

    A well-known phenomenon in the empirical study of human reasoning is the so-called Modus Ponens-Modus Tollens asymmetry. In reasoning experiments, participants almost invariably ‘do well’ with MP (or at least something that looks like MP – see below), but the rate for MT success drops considerably (from almost 100% for MP to around 70% for MT – Schroyens and Schaeken 2003). As a result, any theory purporting to describe human reasoning accurately must account for this asymmetry. Now, given that for classical logic (and other non-classical systems) MP and MT are equally valid, plain vanilla classical logic fails rather miserably in this respect.

    http://m-phi.blogspot.ca/2012/12/the-modus-ponens-modus-tollens.html
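
    To underline the “equally valid” part: a brute-force truth-table check shows that neither rule has a counterexample (a toy sketch, not a model of human reasoning):

```python
from itertools import product

def implies(p, q):
    # Material implication: P -> Q is false only when P is true and Q is false.
    return (not p) or q

# Modus ponens: from P -> Q and P, infer Q.
mp_valid = all(q for p, q in product([False, True], repeat=2)
               if implies(p, q) and p)
# Modus tollens: from P -> Q and not Q, infer not P.
mt_valid = all(not p for p, q in product([False, True], repeat=2)
               if implies(p, q) and not q)

print(mp_valid, mt_valid)  # True True
```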

    Context relevance is very important for our inference engine, and both logic and statistics go against it.

    ***

    Once upon a time, Eli ambulated with Socrates and talked about stats:

    http://rabett.blogspot.com/2013/02/on-priors-bayesians-and-frequentists.html

    I’m simply plugging it in because there’s a link to a very impressive presentation by Michael Jordan whom I think Vaughan knows. There’s also a machine learning perspective that explains statistical perspectives that could be intuitive to a population or an audience that calibrates instruments more often than they pick balls from an urn.

  176. @PP: One of the main sources of confirmation bias is that people make an effort in search of errors when they dislike the result, but put much less scrutiny into verifying the analysis when they like the result.

    So true. This might explain why the proofs of Kempe (1879) and Tait (1880) of the Four Colour proposition were accepted (to wide acclaim) for over a decade—by then it was the expected outcome. Had they proved the opposite (constructively or otherwise) people might have taken a harder look right away. (Martin Gardner gave a 200-vertex counterexample in 1975, barely a year before a computer finally gave a correct proof of the theorem. It had to be pointed out to those who took Gardner’s word for it that this was the April issue.)

    Confirmation bias furnishes “settled science” with inertia: new theories can have a devil of a time pushing their predecessor off its pedestal. Yet at the same time confirmation bias fuels crackpot theories, support for which is readily embraced by their crackpot supporters.

    A further complication is that the line between a crackpot theory and a genuine advance can be hard to decide: a few of the former eventually cross it, and thanks to an over-enthusiastic media there is also flow in the other direction, at least in the public’s mind, witness cold fusion, the FTL neutrino, etc.

    Max Planck famously offered the most pessimistic rendering of the inertia of settled science: a new theory triumphs not by convincing its opponents but because the opponents eventually die, and a new generation grows up taking it as self-evident. (I based my slightly loose translation here on a saying of my own: “The self-evident is merely a hypothesis that is so convenient, and that has been assumed for so long, that we can no longer imagine it false.” My translation is however nowhere near as loose as the more pithy and memorable “Science advances one funeral at a time.”)

    Pigheadedness itself has become a popular topic of scientific investigation, witness
    the Yale Cultural Cognition Project led by Dan Kahan, the work of
    the Melbourne Business School’s Cordelia Fine, etc.

    Personally I’m not willing to wait for my opponents to die, especially given how many of them nowadays are younger than me. I greatly prefer rhetoric founded on sound science, in turn founded on sound logic. And preferably transparent logic, a criterion I don’t consider statistics meets.

    Charles Sanders Peirce consciously made the decisive move from physics to logic, turning down the possibility of a regular physics position at Harvard in favor of a temporary lectureship in logic at the brand new Johns Hopkins University. Others since him such as John Reynolds, John Baez, and Prakash Panangaden have followed a similar career trajectory from physics to logic in the service of computer science.

    It seems to me that our understanding of climate could likewise benefit from more attention to its logical underpinnings. And not just Boolean logic, which as willard points out immediately above puts just as much faith in Modus Tollens as Modus Ponens, as well as the even more problematic Peirce’s Law, [(P → Q) → P] → P, which to my knowledge has never been appealed to in a natural science article.

    Personally I prefer weaker logics such as action logic, to which action algebra is as Boolean algebra is to Boolean logic. Action logic draws intuitively obvious distinctions such as between “open-door ∧ walk-through-door” vs. “walk-through-door ∧ open-door”. Even more complicated examples still agree with intuition: for example “bet-on-Seabiscuit → (Seabiscuit-wins → get-paid)” and “(bet-on-Seabiscuit ∧ Seabiscuit-wins) → get-paid” are treated as equivalent, while both are differentiated from “Seabiscuit-wins → (bet-on-Seabiscuit → get-paid)” and “(Seabiscuit-wins ∧ bet-on-Seabiscuit) → get-paid”. And unlike Boolean logic, action logic even caters for indefinite repetition, treating “P ∧ (P → P)* ⊢ P” and “(P → P)* ⊢ (P → P)” as equivalent formulations of the induction rule: if P is true initially and then a P-preserving action is repeated indefinitely, one may infer that P will still be true afterwards. Action logic is a conservative extension of the equational theory REG of regular expressions, yet unlike REG it is axiomatizable by finitely many equations. Furthermore star is (formally) reflexive transitive closure in every model of its theory, unlike REG, some of whose models interpret star nonstandardly.
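
    The induction rule has a familiar programming reading as a loop invariant: a property true initially and preserved by each step is still true at the end. Here is a toy illustration (an analogy only, not the action-algebra semantics):

```python
def repeat_preserving(state, step, times, invariant):
    """Apply `step` repeatedly, checking the invariant P at every stage:
    P holds initially, each application preserves P (P -> P), and so
    P still holds after the starred repetition."""
    assert invariant(state)           # P holds initially
    for _ in range(times):
        state = step(state)
        assert invariant(state)       # this step preserved P
    return state

# Example: "value is even" is preserved by adding 2.
final = repeat_preserving(0, lambda n: n + 2, times=10,
                          invariant=lambda n: n % 2 == 0)
print(final)  # 20
```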

  177. John Hartz says:

    Would the following statement be a legitimate “take-away” from this thread:

    Time is not a driver of climate change. Time does, however, provide the benchmarks for the things that do drive climate change.

  178. OPatrick says:

    I feel like a teenager sniggering at the back of the class saying this, but it’s hard to ignore the modus tol-lens and its translation as “the way that denies by denying”. Might they perhaps be related?

  179. BBD says:

    Vaughan Pratt

    It seems to me that our understanding of climate could likewise benefit from more attention to its logical underpinnings.

    What, you mean things like GHGs cause hyperthermals like the PETM, ETM-2, MECO etc?

    That sort of cause and effect stuff?

    Or is climate science missing something here?

  180. Pingback: We need a better class of climate “skeptic”! | And Then There's Physics

  181. Vaughan,

    Check out this argument, which I believe is quite efficient against proponents of inaction logic:

    So let me get Douglas’ argument. Unless one can prove that it’s impossible to find a counterexample of a better fit to the data, any choice of model is unjustified. Is that what Douglas and Nullius are arguing here?

    A “yes” would suffice. A “no” would not. A “no” would need to be padded with “here’s my or Douglas’ argument”, followed by an argument, not textbook platitudes.

    I’m not sure this can be translated into an action algebra, for you can still hear the concert of crickets at Bishop’s. I’m not sure I could repeat indefinitely my request that Douglas tells us if Richard Muller’s email has been released without permission.

    This ClimateBall ™ move has been inspired by John Oliver:

    http://planet3.org/2013/04/24/facing-ridiculous-claims/

  182. Pingback: Another academic shows they are clueless on climate. | ScottishSceptic

  183. David Young says:

    Yes Pekka, you are right about this. My point is to raise another related issue: the involvement of statisticians from the beginning is a good thing, and it is already a requirement for credibility in medicine, though not yet in other fields. This is the best way to address the issue of the proper use of statistics. It also avoids the not very relevant appeals to “physics” and the not very meaningful appeals to “statistical significance”. There are many physical laws, such as radiative physics, which are very certain and well known, but complex systems stubbornly resist the proper application of the “physics” to their complexities. For example vorticity is exactly conserved by the Navier-Stokes equations. That tells us really virtually nothing about how aircraft wakes will behave. It may tell us that our numerical methods are wrong, but not much about the actual complex truth.

  184. You’re more than very right about this, Pekka.
    But you’re so non-confrontational when you say it.
    You’re also very polite.

    But Navier-Stokes.

  185. dhogaza says:

    “For example vorticity is exactly conserved by the Navier-Stokes equations. That tells us really virtually nothing about how aircraft wakes will behave. ”

    Ahh, but physics makes us confident that the vortices are caused by the airplane passing through the air, not a random walk of a bunch of air molecules.

  186. Steve Bloom says:

    “Unless one can prove that it’s impossible to find a counterexample of a better fit to the data, any choice of model is unjustified.”

    That’s brilliant. I realize now that they have invented an entirely new kind of statistics, which I hereby dub infrequentism.

  187. David Young says:

    The point about vorticity is that indeed the vortices are caused by the airplane and the wind, and the ambient turbulence, etc., etc. But this tells us nothing about a safe following distance, and that’s what is really important. And the simple “physics” tells us virtually nothing about that. It’s a complex question, and statistics plays a very important role in answering it. Complex systems are complex, and simple laws of physics usually tell us very little of importance. One would never think about addressing this issue without professional statisticians involved from the very beginning. Really good aerodynamicists know that. Those that believe in simple “physics” are best kept a safe distance away.

  188. dhogaza says:

    Tell me, then: faced with this lack of knowledge as to the extent of vortices, and the difficulty of computing a safe following distance because quote-unquote physics is of little help, do aircraft controllers …

    1) follow the precautionary principle and keep trailing airplanes far behind the lead aircraft for safety’s sake

    -or-

    2) invoke the uncertainty monster and suggest the trailing airplane crawl right up the lead aircraft’s tailpipe?

    “One would never think about addressing this issue without professional statisticians involved from the very beginning. Really good aerodynamicists know that. Those that believe in simple “physics” are best kept a safe distance away.”

    I suppose I should be really scared of the fact that numerical models play such a prevalent role in the design of modern aircraft, eh?

  189. “That tells us really virtually nothing about how aircraft wakes will behave.”

    When all you have is a hammer ….

    I wonder why DY doesn’t volunteer to help us out in predicting El Ninos — check out http://AzimuthProject.org or http://johncarlosbaez.wordpress.com

  190. David Young says:

    dhogaza, I hope you are just being sarcastic and so far as I can see you didn’t really respond to the point, which I think still stands. Of course the precautionary principle is in play here. Models play a role in aircraft design, but in general a much more limited role than is portrayed in the popular media or in the minds of the public which generally gives far more credence to the modeling aspect than is really justified or than the regulatory agencies give them. I have given you some references on a previous thread if you are interested in a science discussion. “Physics” by itself especially simple and uncontroversial physical principles can tell us very little of real interest about complex systems generally.

  191. DY,
    But the same applies in climate science. Our estimates of the climate’s sensitivity to increased atmospheric CO2 are not based only on climate models. There are many ways to estimate how much we will likely warm. Climate models allow us to probe what might happen in more detail. Of course, they are not perfect. Of course they’re all “wrong”. Of course, they’ll get better with time. Of course, there are numerical issues that they may not yet have solved. Arguing, however, that they have problems and that therefore we should do nothing (as you seem to be implying) just seems rather nonsensical.

  192. David Young says:

    I don’t think I ever said we should do nothing. You and many others here seem to make that assumption, perhaps based on key words that you dislike. My assumption is that we need to know what the sensitivity is and that to do that we need better science, more attention to areas of uncertainty, and better use of statistics, which I would argue is a peculiar weakness of a lot of the literature of climate science. I am unsure of the value of GCMs. My suspicion is that they are not of much value outside a very narrow range around our current climate, and even there are inaccurate. The ice ages seem to me to bear a superficial resemblance to fluid flows near stall: there are multiple steady-state solutions, and tipping points are common. So, it is wrong a priori to rule out big changes, even though even the ice ages seem to produce slow changes on human time scales.

    As Pekka has pointed out, the issue of what to do is a very complex question that goes very deep in the nature of the human condition and is not going to be settled simply based on science. One thing is certain and that is that adaptation is valuable, regardless of the outcome. And sensible policy about low lying areas and building is called for. It is dangerous to build in low lying areas, both because of sea level rise and because of storms. Even this simple step is however not being taken in any sensible way. In the US, the government continues to subsidize building in these dangerous areas. My precautionary principle is that disastrous scenarios have usually proven to be wrong and a more likely outcome is gradual change. Governments should plan for these changes because they are inevitable and will have significant consequences.

  193. DY,

    I don’t think I ever said we should do nothing.

    Well, that’s certainly how it seems. Well, maybe not “nothing”, but “not very much”.

  194. verytallguy says:

    DY,

    My precautionary principle is that disastrous scenarios have usually proven to be wrong

    David, be serious. This is not a precautionary principle, it’s wishful thinking. A purely Panglossian approach and utterly unscientific.

    My assumption is that we need to know what the sensitivity is.

    We do know what it is. It’s between 1.5 and 4.5 °C, per IPCC AR5:

    Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C

    The number would be roughly the same without the existence of the GCMs of which you are sceptical.

  195. BBD says:

    David Young

    There are many physical laws, such as radiative physics, which are very certain and well known, but complex systems stubbornly resist the proper application of the “physics” to their complexities.

    #YesButPaleoclimate

    For the nth time.

    GHGs produce hyperthermals.

    For the nth time.

    You are spouting rubbish and ignoring this corrective.

    For the nth time.

  196. ” better use of statistics, which I would argue is a peculiar weakness of a lot of the literature of climate science. ”
    The recent Ed Hawkins et al comment (see his Climate Lab Book blog, also at Doug McNeall’s blog) pointed out a good example of this. A paper by Mora et al made a very basic statistical error (essentially, muddling the spread of values with the error in the estimate of the mean). Yet this elementary error escaped the notice of the paper’s many authors and its reviewers, and it was published in Nature.
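    The distinction at issue can be sketched in a few lines (a toy simulation of my own, not the Mora et al calculation; all numbers are invented for illustration): the standard deviation describes the spread of individual values, while the standard error of the mean – the uncertainty in the estimated mean – shrinks with sample size.

```python
import math
import random

random.seed(42)

# Toy dataset: 400 simulated values with true mean 2.0 and true spread 0.5
n = 400
values = [random.gauss(2.0, 0.5) for _ in range(n)]

mean = sum(values) / n
# Standard deviation: the spread of the individual values
sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
# Standard error of the mean: uncertainty in the *estimate of the mean*,
# smaller than the spread by a factor of sqrt(n)
sem = sd / math.sqrt(n)

print(f"mean = {mean:.3f}, spread (SD) = {sd:.3f}, error of mean (SEM) = {sem:.3f}")
```

    Muddling the SD with the SEM here would misstate the uncertainty of the mean by a factor of √400 = 20 in one direction or the other, which is the kind of elementary mix-up being described.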

  197. Paul,
    Strange how you would take an error in a single paper and draw broader conclusions. Sorry, did I say “Strange how you would”? I meant, “Not surprised that you would”.

  198. I was not drawing broader conclusions, was I? I was presenting an example.

    One of the recommendations of a report a few years back was that climate scientists should work more with statisticians. Did you know that?

  199. BBD says:

    I was not drawing broader conclusions, was I? I was presenting an example.

    Disingenuous? Moi?

  200. BBD says:

    One of the recommendations of a report a few years back was that climate scientists should work more with statisticians. Did you know that?

    Of course you aren’t implying that there’s a fundamental and widespread problem with climate science arising from a fundamental and widespread misuse of statistics. Of course not.

    Jeez.

    Except this is simply insinuation. Go and find a fundamental problem with climate science instead of all this contentless whispering and hissing from the sidelines. We are all sick of it. So go on. Come back with a substantive, paradigm-shifting paper under your belt or be prepared to be mocked until you drop.

  201. It’s not an insinuation. Seems like poor BBD is also in need of a holiday.

    Here are some quotes from the Oxburgh report, probably from their statistician Prof David Hand.

    “Although inappropriate statistical tools with the potential for producing misleading results have been used by some other groups, presumably by accident rather than design, in the CRU papers that we examined we did not come across any inappropriate usage although the methods they used may not have been the best for the purpose”

    “It is regrettable that so few professional statisticians have been involved in this work because it is fundamentally statistical.”

    “We cannot help remarking that it is very surprising that research in an area that depends so heavily on statistical methods has not been carried out in close collaboration with professional statisticians”

  202. Paul,

    I was not drawing broader conclusions, was I? I was presenting an example.

    Ahh, I see. So, the last sentence wasn’t intended as implying something with regard to reviewing or the journal Nature?

    One of the recommendations of a report a few years back was that climate scientists should work more with statisticians. Did you know that?

    I believe I had heard that before. Not a fundamentally bad idea, but ideally it should involve statisticians who have some understanding of basic physics, or else we could end up with climate scientists being forced to say things like “yes, but random walk”.

  203. Paul,
    To be fair, you actually appear to be trying to make some constructive comments. If so, I apologise if my initial response to you was somewhat snarky.

    One thing I’ll add about the whole “professional statisticians” claim is that – sometimes – people who write such things fail to realise that many scientists are pretty good statisticians already. I think we do have to be careful of assuming that climate scientists can’t do statistics and that they need the help of statisticians. Collaborating across disciplines is a good thing. However, assuming that one discipline knows nothing of a particular technique and that another will be its saviour is – in my view – a dangerous attitude.

  204. BBD says:

    Paul

    Amazingly, you have ignored what I wrote. Please demonstrate the fundamental problem with climate science arising from a misunderstanding or misuse of stats.

    Can’t do it? Then your “argument” is dust.

    I don’t believe that errors in statistical analysis that could have been corrected by competent statisticians have much affected important results of climate science.

    Such errors of statistical analysis are ubiquitous in almost all fields of science. In some fields the errors may have had serious repercussions, but the difficult issues of climate science are somewhat different. There is climate-relevant data that goes through a full statistical analysis, but in most cases the results are not sensitive to weaknesses in the statistical part of the analysis. The main uncertainties are elsewhere, professional statisticians cannot help much in solving them. (The multi-proxy analyses may be an exception, but I do not think that their results are essential for any major conclusions; their role has been greatly exaggerated in the public discussion.)

  206. The main uncertainties are elsewhere, professional statisticians cannot help much in solving them.

    Yes, that’s probably a good point. There’s not much use in fine-tuning some statistical method if the uncertainties are dominated by other factors.

  207. Marco says:

    This reminds me of the McShane & Wyner paper, two statisticians, and Eduardo Zorita’s comment on that paper:
    http://klimazwiebel.blogspot.dk/2010/08/mcshane-and-wyner-on-climate.html

    I especially like the ending: “In summary, admittedly climate scientist have produced in the past bad papers for not consulting professional statisticians. The McShane and Wyner paper is an example of the reverse situation.”

    There also is an interesting comment from Hans von Storch in that thread:
    “In the series of “International Meeting on Statistical Climatology” we have over the years tried with limited success to bring the communities together; bringing “real” statisticians into the process did not often result in real successes (even though there were a number of successful imports), mainly because most found it difficult to understand the specifics of climate science (such as inability to do experiments; the ubiquituous dependence across time and space).”

    Of course, there also were the comments on the McShane & Wyner paper, with Tingley showing that the supposed oh-so-good Lasso method…actually wasn’t all that good. Amazing, no, how two professional statisticians made a mess of the statistics?

  208. dhogaza says:

    “There’s not much use in fine-tuning some statistical method if the uncertainties are dominated by other factors.”

    Uncertainties regarding cloud feedbacks, for instance.

  209. dhogaza says:

    DY:

    “My precautionary principle is that disastrous scenarios have usually proven to be wrong”

    That’s what the engineers building the Lockheed L-188 said until the wings started falling off.

    Or those who built the de Havilland Comet, until they experienced hull failures.

    Fortunately, the problems with the latter were solved by statisticians, with material scientists exploring the physics of metal fatigue having very little, if anything, to offer.

  210. dhogaza,
    All sorts of uncertainties. I suspect the uncertainty in aerosol forcing is the biggest we have at the moment.

    From what I read, they solved the de Havilland Comet issue by simulating repeated cabin pressurization cycles until cracks appeared. Is that statistics?

  211. Don’t get me wrong, I am all for statistics where it is needed, done by professionals, for professionals.

    Explanations for the public by professional science communicators of the elementary principles of global warming cannot benefit from bringing state of the art statistical methods to bear on them. Those methods are not needed there, the principles can be explained without them, and the intended audience is ill-equipped to benefit from them.

    It is hard to imagine modern physics or electrical engineering without complex numbers. Yet Martin Gardner’s wide-ranging mathematical articles never once depended on an understanding of them. (I know that because he told me so when I suggested to him that his articles could benefit from them where appropriate.)

    Which is harder to understand, complex numbers or statistical methods?

  212. @ATTP: There’s not much use in fine-tuning some statistical method if the uncertainties are dominated by other factors.

    In support of the point that statistics can benefit the science if not the public, quantifying those uncertainties could well benefit from suitable methods including statistical.

    As a case in point, prior to 1970 the Atlantic Multidecadal Oscillation or AMO (which had not even been recognized until 1994) was demonstrably the largest single contributor to global land-sea temperature variation since 1850. When modeled as a 60-year sinusoid peaking in 1880, 1940, and 2000, its upward swing from 1970 to 2000 is added to, and therefore masks the magnitude of, the anthropogenic rise or AGW.
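    The masking effect described above can be illustrated with a toy decomposition (entirely synthetic numbers of my own choosing, not the actual HadCRUT4 analysis): superpose a 60-year sinusoid peaking in 1880, 1940, and 2000 on a linear “AGW” ramp, then fit a straight line to 1970–2000 and compare the apparent trend with the underlying one.

```python
import math

def synthetic_temp(year):
    """Toy model: linear AGW ramp plus a 60-year AMO-like sinusoid.

    Amplitudes are illustrative only. The cosine peaks at 1880, 1940,
    and 2000, as in the comment above, with a trough at 1970.
    """
    agw = 0.01 * (year - 1970)                                  # 1 °C/century ramp
    amo = 0.1 * math.cos(2 * math.pi * (year - 2000) / 60.0)    # 60-year cycle
    return agw + amo

def ols_slope(years, temps):
    """Ordinary least-squares slope of temps against years."""
    n = len(years)
    ybar = sum(years) / n
    tbar = sum(temps) / n
    num = sum((y - ybar) * (t - tbar) for y, t in zip(years, temps))
    den = sum((y - ybar) ** 2 for y in years)
    return num / den

years = list(range(1970, 2001))
temps = [synthetic_temp(y) for y in years]
apparent = ols_slope(years, temps)

# The AMO upswing from its 1970 trough to its 2000 peak inflates the
# apparent trend well above the underlying 0.01 °C/yr.
print(f"underlying trend: 0.0100 °C/yr, apparent 1970-2000 trend: {apparent:.4f} °C/yr")
```

    In this sketch the fitted 1970–2000 slope comes out nearly double the underlying ramp, which is the sense in which an AMO upswing can mask the true magnitude of the anthropogenic rise.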

    Teasing apart the AMO and AGW therefore calls for some delicacy. While the fair-mindedness of Solomon might split that baby equally, confirmation bias favors a different split. Bob Tisdale has written many comments, posts, and a book favoring 100% AMO (which he attributes to a ratcheting up of temperature by repeated El Ninos) and therefore 0% AGW. Naive arguments for the influence of CO2 such as the plot I pointed to here yesterday could easily be construed by advocates for concern as favoring the opposite extreme.

    I view this question of allocation between AMO and AGW as one of the two hardest problems in the empirical determination of equilibrium climate sensitivity based on variation of CO2 within recent centuries, the other being the delaying effect of oceanic mixing pointed out in 1986 by Hansen et al. (Aerosols and clouds are also hard but I’m not convinced their respective uncertainties make as big a difference to the difficulty of estimating sensitivity as the AMO contribution and ocean delay; moreover the cloud contribution can be counted among the feedbacks.)

    Both problems can be seen in my AGU slides. Slide 29 juxtaposes half the AMO index with the residual of HadCRUT4 after detrending by estimated CO2-forced warming. Whereas the index itself rises appreciably higher in 2000 than 1940, the residual does the opposite.

    Is the AMO index a fair indication of what it would have been with no change in CO2, or has AGW exaggerated its rise in 2000?

    While I had hoped to explore this in my talk by looking at lower values than 1.93 °C/doubling for observed climate sensitivity in order to evaluate the resulting picture for each such value, I only had 15 minutes for the whole talk, and test runs showed that this and other things I’d hoped to point out just wouldn’t fit.

    A point I would have made is that lowering the 1.93 number has the side effect of moving the peak of the residual (blue curve) to later than 2000, due to the residual inheriting some of the upward slope of LOW (the low frequency component of HadCRUT4, identified in the talk as essentially AMO + AGW). This conflicts with the location of the AMO’s peak, which appears to be at 2000.

    Where statistics can help here is in judging the significance of such considerations. For example what is the true AMO, defined as the AMO index after correcting for anthropogenic influences appropriately determined? Is there any statistical justification for an ongoing 60-year period? And so on.

    (Slides 29-30 address the other problem, how much delay does ocean mixing cause, where I propose a specific relation s(d) between delay d and sensitivity s. The two problems are interconnected because this relation depends on how the AMO-vs-AGW split is resolved.)

  213. In answer to the question, what would a logic of climate look like, one might start from pre-logic, where the concerns long before Aristotle and Euclid were how to avoid fallacies and how to account for paradoxes both real and imagined. I would therefore kick off the subject with a study of climate fallacies and paradoxes.

    The next stage would be to develop a suitable logic of climate that is sensitive to both the fallacies and the paradoxes. The first steps would be to identify the concepts, the terms for them, the propositions expressible with those terms, their interpretations, and the relationships between those propositions that the interpretations entail, i.e. their logic.

    One could then divide the subject into two parts, one suited to human reasoning, the other to computer reasoning. The former would encourage informality in the service of understandability of reasoning about climate, the latter would encourage precision in order to permit verification of computer programs for checking formal reasoning about climate.

    That’s a very rough outline that obviously could use a lot more fleshing out.

  214. BBD says:

    Thanks Vaughan. I hadn’t really grasped what you were driving at with your earlier comment.

  215. John Hartz says:

    I believe that the following announcement by the WMO has a direct bearing on this discussion because it demonstrates once again how complex the Earth’s climate system is and how rapidly it is changing.

    Scientists urge more frequent updates of 30-year climate baselines to keep pace with rapid climate change, WMO Press Release No. 997, July 9, 2014

    PS – I cannot help but wonder how many statisticians are employed by the WMO and how many of them contributed to the findings contained in the statement.

  216. John Hartz says:

    Also, if I recall correctly, the BEST analysis was conducted by a world-famous statistician – although I doubt that he took into account the complexities of the vortexes created by airplanes in flight.

  217. dhogaza says:

    John Hartz:

    The berkeley earth’s team does include a stats prof, Charlotte Wickham, but I don’t know if she was involved in the original effort. You may be thinking of Richard Rohde, a physics PhD rather than an academic statistician but one whose “expertise includes the analysis of large data sets, with estimates of statistical and systematic effects.”

  218. John Hartz says:

    dhogaza:

    Thanks for the clarification. I was indeed thinking of Robert (not Richard) Rohde. Here’s what Berkeley Earth posts about him:

    Robert obtained his Ph.D. in experimental/theoretical physics in January 2010. His expertise includes the analysis of large data sets, with estimates of statistical and systematic effects. Robert is the co-author (with Richard Muller) of a series of papers on the analysis of biodiversity in the fossil record. His Ph.D. thesis was on The Development and Use of the Berkeley Fluorescence Spectrometer to Characterize Microbial Content and Detect Volcanic Ash in Glacial Ice. Robert is the author and creator of “GeoWhen”, now used as the main reference link for the International Union of Geological Sciences. Robert is also the founder of Global Warming Art.

  219. John Hartz says:

    The following statement resonates very well with my thoughts on what we are up against on the climate change front.

    “Conducting irreversible experiments with the only planet we have is irresponsible. It would only be rational to refuse to do anything to mitigate the risks if we were certain the science of man-made climate change is bogus. Since it rests on well-established science, it would be ludicrous to claim any such certainty.”

    Climate sceptics are losing their grip, op-ed by Martin Wolf, Financial Times, July 8, 2014

  220. > I was not drawing broader conclusions, was I? I was presenting an example.

    Well, PaulM, your “example” was about “a peculiar weakness of a lot of the literature of climate science.” Substantiating both “a lot” and a “peculiar weakness” might have been nice.

    Before delving into fallacies and paradoxes, I think an inaction logic should take into account moves such as PaulM’s. Something like this:

    – From a general claim about climate science, reply “and Mike”.
    – When questioned about the relevance of that move, reply “It’s just an example”.

  221. Tom Curtis says:

    Vaughan Pratt:

    “As a case in point, prior to 1970 the Atlantic Multidecadal Oscillation or AMO (which had not even been recognized until 1994) was demonstrably the largest single contributor to global land-sea temperature variation since 1850.”

    On the contrary. The AMO, modeled as a persistent 60-year cycle in North Atlantic temperatures, cannot be demonstrated to exist even now, with the period and magnitude of temperature fluctuations in indices of North Atlantic temperature varying from century to century. There is even some evidence that the period and intensity of the AMO are dependent on external forcing, and it might be best thought of as an amplification of (particularly regional) forcing due to a heightened temperature sensitivity in the North Atlantic. It may also be an internal variation in temperature that responds to temperature (analogous to a forced pendulum), or a chaotic fluctuation in temperature in the North Atlantic that is merely oddly coincident with changes in external forcing over the twentieth century.

  222. Pingback: Play the Ball | And Then There's Physics

  223. jsam says:

    Discuss “Statistical analysis rules out natural-warming hypothesis with more than 99 percent certainty”

    http://phys.org/news/2014-04-statistical-analysis-natural-warming-hypothesis-percent.html

  224. (Sorry, I only just now returned to this thread and so missed the following.)

    @Tom Curtis: The AMO, modeled as a persistent 60 year cycle in North Atlantic temperatures cannot be demonstrated to exist even now

    Indeed. I was making no claim of persistence, but was considering only the period since 1850. I should, however, have said largest natural swing, since the variance of the AGW signal dwarfs all natural swings combined.

    There is even some evidence that the period and intensity of the AMO is dependent on external forcing.

    Last year I ran across strong evidence that the AMO originates below the surface, and presented it as Part 1 (slides 3-11) of my AGU Fall Meeting 2013 talk GC53C-06. The observation is that during the first two upwards trends of the (weighted) sum LAND + SEA, the difference LAND − SEA trended down: sharply for 1860-1880, not so sharply for 1910-1940. For 1970-2000 both the sum and the difference trended up.

    I claim that this shows that during the first two rises in global surface temperature the direction of heat flow at the sea surface was from sea to air. Hence that portion of the AMO cannot be attributed to radiative forcing, whose heating effect on the ocean would have to flow the other way, from air to sea.
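    The sign logic of that diagnostic can be sketched as follows (a purely synthetic illustration of the argument, not the actual LAND and SEA series or their weightings): when the sea leads the warming, the sum LAND + SEA rises while the difference LAND − SEA falls; when radiative forcing from above leads, land warms faster than sea and both rise together.

```python
def trend_sign(series):
    """Crude trend diagnostic: sign of (last value - first value)."""
    return 1 if series[-1] > series[0] else -1

def diagnose(land, sea):
    """Classify a warming episode from the sum and difference trends.

    If LAND+SEA rises while LAND-SEA falls, the sea is leading the land,
    consistent with heat flowing from sea to air rather than with
    radiative forcing from above.
    """
    total = [l + s for l, s in zip(land, sea)]
    diff = [l - s for l, s in zip(land, sea)]
    if trend_sign(total) > 0 and trend_sign(diff) < 0:
        return "sea-to-air (AMO-like)"
    if trend_sign(total) > 0 and trend_sign(diff) > 0:
        return "consistent with radiative forcing"
    return "no warming"

# Synthetic episode 1: sea leads, land lags (the claimed 1860-1880 and
# 1910-1940 pattern); values are invented anomalies in °C.
sea1 = [0.00, 0.10, 0.20, 0.30]
land1 = [0.00, 0.05, 0.10, 0.15]

# Synthetic episode 2: land leads (the 1970-2000 pattern).
sea2 = [0.00, 0.05, 0.10, 0.15]
land2 = [0.00, 0.10, 0.20, 0.30]

print(diagnose(land1, sea1))  # sea-to-air (AMO-like)
print(diagnose(land2, sea2))  # consistent with radiative forcing
```

    The point of the sketch is only the sign test: the two episodes have identical LAND + SEA trends, and it is the trend of the difference that separates them.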

  225. Pingback: Matt Ridley, you seem a little too certain! | …and Then There's Physics

  226. Pingback: Really, Benny Peiser, really? | …and Then There's Physics

  227. Pingback: Personal attacks on Met Office scientists | …and Then There's Physics

  228. Pingback: Some advice for the Global Warming Policy Foundation | …and Then There's Physics
