I’m at a bit of a loose end today. I should really be out of the office, as there are important things happening, but I have no control over them and so have come in anyway. I have, however, looked through my notes for tomorrow’s lecture more times than I need to, and have proofread, edited, and changed my recent paper so many times that it no longer says the same thing it did when I completed the first draft (it’s my first proper single-author paper and so I’m going through the “what if the referee says I’m talking complete bollocks” phase). I do need to do a bit of marking, so will get around to that at some stage.
My attempt to withdraw somewhat from the online climate debate also mostly failed. It was partly because I’m really not good at sticking to my resolutions, and partly because it turned out that there are things that can happen that make the online climate debate – rather than being the most stressful thing in one’s life – a fairly nice distraction.
So, I have found it somewhat interesting that Steven McIntyre has taken an interest in things I’ve said about his paper. To be fair to Steven, my understanding of his paper has changed a great deal since I made most of the comments he’s highlighted. I still think his paper is being misinterpreted by some (which was one of the points I was suggesting in one of the comments he’s highlighted) but I don’t really care. Our current understanding of our millennial temperature history is not based on papers published about a decade ago (or longer), but on papers published much more recently (which, broadly speaking, show the same kind of temperature history as illustrated in Mann, Bradley & Hughes, 1998). I also find it somewhat ironic that Steven is accusing others of playing ClimateBall ™ while misinterpreting my comment, still going on about a paper published 16 years ago, and suggesting (in his post) both that Michael Mann’s work was fraudulent and that Wahl & Ammann committed plagiarism (although I couldn’t work out how he concluded that).
Sometimes people suggest that I should go and comment on posts like those written by Steve, but you just need to see JeanS and Stephen Mosher’s responses to Nick Stokes, to see why I don’t. Don’t get me wrong, there’s nothing fundamentally wrong with being a jerk. Many people are and I can be one myself at times (this might be one of them), but I certainly don’t need to interact with them. If I had Nick Stokes’s patience, maybe I could; but I don’t. I could choose to respond in kind but I have no great interest in doing that. I might if I thought it could actually achieve something, but it almost certainly wouldn’t. I also have no real problem with such people commenting here, but then they’d need to put up with whatever moderation decisions happen to be made.
Anyway, this is just a post to kill some time and to make some observations. FWIW, I went to an interesting talk recently about Dansgaard-Oeschger events, that I may write about in due course. They’re interesting as they appear to be internally forced, and so there is much they can tell us about internal variability. I had also booked tickets to another climate science related talk, but that was cancelled because the speaker had to go to the UN meeting last week. So, that’s all from me for the moment. I will probably have more to say in due course 🙂
In fact Steve McIntyre’s most recent post has got even more interesting as he is accusing Nick Stokes of sliming him (ignoring that the title of the Nick Stokes post that he doesn’t like was mimicking his own post title). The interesting thing is Nick Stokes’s comment which highlights, in my opinion, the key point. All of this “mining for hockey sticks”, “finding hockey sticks in random data”, refers only to PC1 (see Figure 2 of McIntyre & McKitrick 2005 for confirmation). It does not apply to full reconstructions. Even Steven McIntyre’s own work confirms that a full reconstruction returns a hockey-stick-like profile irrespective of whether short centering is used or not (there may be interesting questions about the details, but that doesn’t change the overall picture).
This is essentially what I was getting at in my comment from ages ago when I suggested that people misinterpret his 2005 paper. It refers primarily to PC1 not to full reconstructions. Since it is the reconstruction that is really of interest, this is not necessarily all that relevant (other than highlighting that you need to understand the implications of your choice of centering).
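The centering point lends itself to a toy numerical illustration. The following sketch is my own construction, not MM05’s actual benchmark: I use plain AR(1) noise with ρ = 0.9 rather than the trend-persistent noise model they fitted to the proxy network, and an MM05-style “hockey stick index” (how far the calibration-period mean of PC1 sits from its long-term mean, in standard deviations). The point it demonstrates is the one above: short centering systematically inflates the hockey-stick index of PC1 even when the input is pure noise.

```python
import numpy as np

rng = np.random.default_rng(42)
n_series, n_years, n_cal = 50, 581, 79   # proxies; years 1400-1980; calibration 1902-1980

def pc1(X, short):
    # centre each series on the calibration period only (short) or on
    # the full record (standard), then take the leading right singular
    # vector as the PC1 time pattern (its sign is arbitrary)
    mean = (X[:, -n_cal:].mean(axis=1, keepdims=True) if short
            else X.mean(axis=1, keepdims=True))
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return vt[0]

def hsi(p):
    # hockey stick index: offset of the calibration-period mean from the
    # long-term mean, in standard deviations (absolute value)
    return abs(p[-n_cal:].mean() - p.mean()) / p.std()

scores = {True: [], False: []}
for _ in range(30):
    # AR(1) "red noise" proxies containing no climate signal at all
    eps = rng.standard_normal((n_series, n_years))
    X = np.zeros_like(eps)
    for t in range(1, n_years):
        X[:, t] = 0.9 * X[:, t - 1] + eps[:, t]
    for short in (True, False):
        scores[short].append(hsi(pc1(X, short)))

print("mean |HSI|, short centred:   ", round(float(np.mean(scores[True])), 2))
print("mean |HSI|, standard centred:", round(float(np.mean(scores[False])), 2))
```

Note what this does and does not show: it reproduces the “hockey sticks in PC1 from noise” effect, but says nothing by itself about a full reconstruction, which is the distinction being drawn above.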
The 2005 EE paper makes perfectly clear that the reconstruction has, in essence, two basic steps. First there is the reduction of multiple supposed proxies to a small number by the use of what turned out, in the tree ring case, to be short-centred “principal components”. This procedure ensured that the so-called PC1 had a hockey stick shape which arose mainly from the bristlecone pines. The second step is the regression step, where the PC1 is used as a RHS variable “explaining” temperature. This step effectively weights the RHS variables by their correlation with temperature in the instrumental period, meaning in practice that the reconstruction is dominated by the bristlecone pines and Gaspe. Of course, if you just include the bristlecone pines and Gaspe in the regression without doing any PCA you get essentially the same results. But the point of the PCA was supposed to be that the PC1 exhibited the dominant pattern in the tree rings. It turns out that the Mannian PCA does not do that. It exhibits the behaviour of the bristlecone pines and Gaspe, which in a proper PCA dominates the fourth PC, explaining a small proportion of the total variance. So MBH 1998 and 1999 only work if these peculiar trees (selected by the original collectors of the data because of their supposed sensitivity to CO2, and uncorrelated with local temperature) are good proxies for Northern Hemisphere temperature. What the short-centred analysis does is ensure that the hockey stick in the bristlecones makes it to the regression step, and give its position there rhetorical force as a dominant tree ring pattern. Conscientious readers of M&M 2005 EE have known this since 2005 and it’s time you caught up.
I notice my “jerk” comment has travelled across to climateaudit and possibly had the response that might be expected. I’ll add something that may be conciliatory but may make things worse. There are certain people who are such jerks (and you can substitute numerous other terms if you wish – possibly the word that all news readers feared when Jeremy Hunt became minister for Culture) that I have no interest in even mentioning them. They’re beyond any hope as far as I’m concerned. There is another more interesting group who appear quite knowledgeable, might be interesting to talk with, but seem to often resort to behaving like jerks for reasons unknown. I sometimes think that pointing this out might help people to recognise that maybe they could have more interesting exchanges with others if they toned down their rhetoric somewhat (I say this with full acknowledgement of irony). It doesn’t always (rarely?) work – although it has a habit of working more in real life than on blogs, I think.
Well, being a jerk seems to be the only constant seen coming from McIntyre and his minions, so no surprise here.
I’m not sure I’m the one who should be catching up. The reason I said I don’t really care is that if I want to understand our millennial temperature history I’ll go and read the more recent literature. Since you appear to be the one who has caught up, maybe you’d be good enough to tell us what more recent reconstructions tell us about our millennial temperature history. Maybe you could also explain the error in this statement
And again, because there seems to be some resistance to acknowledging the current state of knowledge regarding millennial climate change:
PAGES 2K Consortium (2013) vs MBH99
Yes, I was considering including a link to something like that, but was confident one of my commenters would do it for me if I didn’t do it myself 🙂
I think there is an argument to be made for continuing the ClimateBall(TM) games with the Auditor and his crew and any others that want to play with the hockey shtick.
Those opposing the implications of AGW identified the crucial significance of the MBH98 hockey stick. That it communicated the basic message of AGW more powerfully and simply than all the words and scientific citations in the IPCC reports, especially to those without an insight into the science.
It is an iconic summary of AGW accessible to all.
Which is why it has been the subject of attack for 15 years.
But as analysis of public perception has shown there is a blow-back effect. If you spend a lot of your time trying to refute the main point made by the other ‘side’ the unintended consequence is that you give much more prominence to the very concept you are trying to reject.
Worse for the ‘side’ trying to depose the hockey stick, the objections they make are arcane and far beyond the comprehension of almost 100% of the audience. You need far more sophisticated statistics than PC1 to detect how many people understand the esoteric maths involved.
True, you can get a long way by just appending the word ‘fraudulent’ to ‘hockey stick’ and hope the association sticks.
But it is a weak position, easily refuted, and it STILL strengthens the public perception that there IS a real hockey stick in climate science, a change in the climate unprecedented in the last thousand years at least.
Whatever doubts the deniers may try to engender about the recent climate records, the very act of attacking them, especially the first iconic exposition of the exceptionality of AGW, only promotes and enhances in the public sphere the idea of a hockey stick.
An unmistakable divergence from the relative stability of the climate during the emergence of human agriculture, civilisation and the industrial revolution.
Indeed, that’s probably true. The other thing that I thought of was that given our instrumental temperature record, any reconstruction that does not have a “blade” is clearly wrong. The more interesting question regards the variability in the period prior to the mid-1800s. Although there clearly is variability prior to the mid-1800s, the reconstructions pretty much all suggest that we’re now warmer than we’ve been for at least 1000 years and that the rate we’ve warmed is faster than at any time in the preceding 1000 years. Any lengthy discussion about PCs and other details might be interesting, but it doesn’t change that basic picture.
I thought I might also comment on the issue related to the subroutine hosking.sim which, I think, is used to generate the persistent red noise. There are two issues. One is that the input appears to be the original MBH98 data. The other is that there is no easy explanation for what hosking.sim actually does. When Nick Stokes mentioned this, one of the responses was this comment, which is essentially saying “here’s the source, work it out”. Well, it’s not normally that easy and I certainly can’t tell from the source code what it will do given an input of MBH98 data (i.e., will it really return persistent red noise with no trend?). I guess if I had the time and the inclination I could, but I don’t.
So, no one has even bothered to try to explain what it does, and everyone is arguing that the source code is available, therefore it can be worked out by those who aren’t sure. I certainly find that poor (if someone is unsure about something I’ve done, I try to explain it; I don’t simply send them lines of source code) but it’s also rather ironic given what the very same people suggest Mann and others should be willing to do.
JeanS judiciously makes sure you may have a point, AT:
So JeanS is just a jerk because he’s giving back. But to whom, and how does it relate to you? As is often the case with auditors, their ClimateBall ™ moves make them lose focus on their targets.
AT refers to the Auditor, and mentions JeanS’ treatment of Nick. JeanS responds that he’s just “giving back” to Nick. Does that mean that the Auditor was simply “giving back” to you?
Considering the Auditor’s recent gamesmanship proved yet again that he was the fiercest player in the history of ClimateBall ™, JeanS’ appeal to retaliation may be cautioning your remark.
Since JeanS cautions giving back, he can’t deplore being called a jerk. This may explain why JeanS appeals to pride. The number of comments by JeanS elsewhere than at the Auditor’s may not speak very well for his own courage, if we accept this as a valid measure. Considering the number of comments by the Auditor at Nick’s, it may even be suboptimal.
In any case, Nick Stokes reminded us of what JeanS just said:
That was after Nick kept reminding that JeanS was sliming Nick over something that could have been clarified had JeanS not failed to fish out a comment that was left in moderation for more than twelve hours.
And, in the great spirit of ClimateBall ™, JeanS fails to realize that he has toward Nick a reaction that looks quite similar to yours toward the fiercest player in the history of ClimateBall ™, and perhaps ClimateBall ™ in general.
ClimateBall ™ – Courage and Valor
Here’s the paper that describes the method used by hosking.sim. Judging from the abstract I’d say that it produces a series with properties much like the input series, but then I’m in no way a statistician.
I love it when they break out the “coward”… even better than Mosher’s “liar”. Perhaps they’re concerned that someone might think that they are only concerned with the science?
Rodents ate your link, RN.
I’ve actually been trying to find the paper myself, but even the journal seems to have a dead link.
Indeed and when someone shows a positive correlation between bravery and scientific ability, I might start to get concerned. Until such time, cowardice is fine with me 🙂
PDF of Hosking 1984, which describes the method used in hosking.sim is downloadable here.
Details of the R package here.
I don’t know if that helps at all, as it’s all geek to me.
And for good measure, a partial explanation of hosking.sim by Eugene Wahl is here. Only partial as he was seeking help understanding it entirely, so anybody confused by what it does is in good company.
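For what it’s worth, the algorithm behind hosking.sim (per the Hosking 1984 paper linked above) is easier to state than the source is to read: simulate a Gaussian series whose autocovariances match a chosen long-memory model, drawing each new value conditional on all previous ones via the Durbin–Levinson recursion. Here’s a rough Python sketch, my own reading of the published method rather than a translation of the R/Fortran code, assuming an ARFIMA(0, d, 0) target (0 < d < 0.5 gives persistent “red” noise with no deterministic trend):

```python
import numpy as np
from math import gamma as gamma_fn

def arfima_acvf(d, n):
    """Autocovariances of ARFIMA(0, d, 0) (unit innovation variance)
    at lags 0..n-1, from the standard closed form."""
    acvf = np.empty(n)
    acvf[0] = gamma_fn(1 - 2 * d) / gamma_fn(1 - d) ** 2
    for k in range(1, n):
        acvf[k] = acvf[k - 1] * (k - 1 + d) / (k - d)
    return acvf

def hosking_sim(n, d, rng):
    """Draw a length-n sample by Hosking's (1984) recursive method:
    each new value is Gaussian, conditioned on all previous values."""
    g = arfima_acvf(d, n)
    x = np.empty(n)
    phi = np.zeros(n)      # current partial regression coefficients
    v = g[0]               # current innovation (prediction) variance
    x[0] = rng.normal(0.0, np.sqrt(v))
    for t in range(1, n):
        # new partial correlation at lag t (Durbin-Levinson update)
        k = (g[t] - phi[:t - 1] @ g[t - 1:0:-1]) / v
        phi[:t - 1] = phi[:t - 1] - k * phi[:t - 1][::-1]
        phi[t - 1] = k
        v *= 1.0 - k * k
        # conditional mean given the past, plus fresh Gaussian noise
        x[t] = phi[:t] @ x[t - 1::-1] + rng.normal(0.0, np.sqrt(v))
    return x

series = hosking_sim(1024, 0.4, np.random.default_rng(0))
```

This says nothing about what the routine returns when fed the MBH98 data as input, which is the separate question raised above; it only sketches the simulation machinery itself.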
I will add that I am very impressed with Jean S’s non-cowardice. Sitting at your keyboard and calling people names is such a brave thing to do. Such bravery is deserving of a Medal of Honor.
Another buddy of Stevie Mac’s – Ross McKitrick – once called a journal editor (that he’s presumably never met) a “groveling, terrified coward.” I guess that hanging out with Steve just inspires people to attain such heights of bravery?
And finally, here is the second paper (Percival, 1992) referred to in the description of the R-package.
I haven’t read the other comments here, but I thought I’d leave a message that isn’t necessarily part of this conversation: ATTP, climate science debates are going to continue to frustrate you. If you can’t tear yourself from them, then at least try to make headway with the community of people who agree with you. Everyone wants to see their ideas appreciated. I think we should set a date for another climate march, or something else remotely useful, instead of trying to convince people who will never be convinced. It does things to our brain, if you know what I mean.
Link has been thrown up.
As usual, I’m not competent to comment on the math, but the distinction between PC1 and PC4 and 5 has been the subject of quite a number of comments at climateaudit in the last couple of weeks. I gather it was something they also discussed long ago. The upshot of their comments seems to be that Mann along with everyone else originally started with PC1 and 2. Then when that led to trouble with M&M he tacked on up to PC5. That normally wouldn’t give much of a contribution, but then you follow procedures afterwards that select for the “blade”, and PC5 becomes the main signal. Then Mann claimed that there’s some rule for these procedures that says to select the first five PCs, a rule which they claim over there could not actually have been used by Mann and wasn’t. In short, an ex post facto methodology designed to protect the hockey stick.
If I understood it right.
Anyhow, your points have been addressed there, several times, but you need to look around in the comments and stuff.
I can see why they are somewhat frustrated (the recent treatment of Nick Stokes was just absurd, though.) They did all this a decade ago, and near as I can tell everyone else except Mann eventually agreed that Mann’s methodology was statistically unsound, and everyone stopped using it, as they should’ve. And here is everyone reposting the same issues, and “wondering why Steve McIntyre won’t admit his egregious mistakes, see (link that proves it beyond a doubt that I personally didn’t work out the math in detail but it sounds good and I trust them), what a hypocrite he is!” [If, by the way, Nick Stokes was there a decade ago and was answered and is now leading a bunch of newbies down a garden path, I withdraw my earlier comment – he would deserve what he gets and they are right to be sick of him.]
Try this. It claims to illustrate that for short centering you keep 2 or maybe 3 PCs. For standard centering you keep 5 or maybe 6 PCs.
As far as the blade itself is concerned, the instrumental temperature record tells us that if a reconstruction doesn’t have a blade (i.e., a rise from the mid-1800s to the late 20th century) then it’s wrong.
I’m less frustrated than I may seem 🙂
> If, by the way, Nick Stokes was there a decade ago […]
Research and report, miker613:
Please leave counterfactuals to metaphysicians.
Thank you Rattus N and Tom Curtis.
Way beyond my comprehension too, but that has nothing to do with the public amenity of providing links 😉
The contrarian war on MBH98/99 is a dead issue except to contrarians.
The rest of the world is now on PAGES 2k (2013).
End of story.
Principal Component Analysis is a data reduction technique wherein you try to boil a great deal of data down to the essentials. Without, mind you, _discarding_ those essentials.
This requires the application of significance testing for your PCs, to determine whether they are meaningful and whether they should be kept. Short-centering for that particular dataset results in 2 or 3 PCs; it has the benefit that one of the major sources of variation (recent rises in temps) is clearly extracted and isolated, rather than spread through multiple PCs.
MM05 used a different centering methodology, which when PCs were extracted left mostly yearly variations in PC1. Standard significance testing (which _was_ applied in MBH, you are completely wrong there) for that centering shows 5 or 6 significant PCs. But McIntyre and McKitrick did not apply standard significance tests, kept only two PCs, and threw the hockey-stick in the significant PC4 away. This is a basic error in PCA on their part. They then spent a great deal of time comparing their PC1 with MBH’s, wholly inappropriate since when comparing different basis functions for the same space you need to look at comparable dimensions – they should have compared their PC4 with MBH’s PC1.
If you _actually perform_ the required test, and include significant PC’s, MM05’s analysis shows a hockey-stick. It’s in the data.
I recommend again looking at Wahl and Ammann 2007 comparing MBH and MM05. That examines short-centered versus full centered (no difference in conclusions), presence or absence of various proxies (no significant difference – where there are large differences in MM05 they are also ranges that fail validation – validation tests MM05 _did not perform_), weighted or unweighted proxies (no difference in conclusions), etc.
The long and short of it is that McIntyre’s objections come from basic errors in PCA, have been addressed, and have been shown to be unfounded. If you correct MM05’s errors, you see a hockey-stick. Time to move on.
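The significance-testing step described above can be sketched numerically. One common selection rule is Preisendorfer’s Rule N (I’m not claiming it is exactly the rule MBH used, only illustrating the idea): a PC is retained if its explained-variance fraction exceeds what random data of the same shape would produce at the same rank.

```python
import numpy as np

def rule_n_significant(X, n_sims=200, alpha=0.05):
    """Flag PCs whose explained-variance fraction beats the (1 - alpha)
    quantile of the same fraction computed from white-noise data of the
    same shape (Preisendorfer's Rule N)."""
    def ev_fracs(M):
        M = M - M.mean(axis=1, keepdims=True)   # standard (full) centering
        s = np.linalg.svd(M, compute_uv=False) ** 2
        return s / s.sum()
    rng = np.random.default_rng(0)
    obs = ev_fracs(X)
    null = np.array([ev_fracs(rng.standard_normal(X.shape))
                     for _ in range(n_sims)])
    return obs > np.quantile(null, 1 - alpha, axis=0)

# 20 noisy series sharing one strong common signal: Rule N should keep
# PC1 and reject the trailing pure-noise PCs
rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 6 * np.pi, 100))
X = 3 * signal + rng.standard_normal((20, 100))
keep = rule_n_significant(X)
print("significant PCs:", np.flatnonzero(keep))
```

The point being made in the comment above is that the number of PCs passing such a test depends on the centering convention, so the retention rule has to be applied afresh for each convention rather than carried over.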
> I’ve also just noticed that [the Auditor]’s suggestion that something I’d said had raised an issue that might be relevant to Steyn’s allegations of fraud, has made it into Steyn’s most recent post. Now that is ClimateBall ™ excellence.
Indeed it is, AT, but you have to admit that how miker613’s baiting you away from the topic of your post is quite beautiful too.
Here’s how JeanS could have reacted regarding miker613’s peddling:
Here’s how to resolve something the non-ClimateBall ™ way:
After a bit of clarification:
After a bit more clarification:
What fun JeanS would have had with that anonymous commenter!
That the fiercest player in ClimateBall ™ history still tries to stay away from editorial adjectives may explain his hostile takeover on the concept of ClimateBall ™.
“The contrarian war on MBH98/99 is a dead issue except to contrarians.” :O
Apparently you’ve missed the recent new war on contrarians who disapprove of MBH98/99. Though I’m not sure how, given that you are posting a comment at one of the battlegrounds.
Steve Milesworthy seems to entertain a different theory than JeanS’ courageous one:
Lots of theories.
MikeN – Your self-identification as a contrarian is now noted. 🙂
ATTP, I don’t think we need the realclimate link; McIntyre’s most recent post discusses this in detail. And as he points out and gives links, he (and Mann and Wahl and Ammann) knew this, discussed it, published it back then. Everyone knew it; it was on the conclusions drawn from it that they differed. IMHO, Stokes is the one being silly here to claim that McIntyre was reluctant to discuss it. And if Stokes was there back when it happened, my adjective “silly” is not strong enough.
> Stokes is the one being silly here to claim that McIntyre was reluctant to discuss it.
What “it”, miker613?
You’re peddling again, and now about something that should be directed at Nick himself.
Why don’t you go tell to Nick that he’s being silly about “it”?
Just glanced at the latest CA post – once again McIntyre somehow fails to say _anything_ about testing his PCs for significance. Let alone validating the statistics of his reconstructions W/O various proxies. And accompanied by the usual insinuations of fraud on Mann’s part.
Not surprising – if he actually were to test the significance of his principal components, and include 5 as having passed the same thresholds as MBH’s 2 PCs (from a different basis extraction), he would again see a hockey-stick.
Nothing new, nothing meaningful.
But if that is true, then there’s nothing to discuss. Figure 2 in McIntyre & McKitrick (2005) shows the distribution of HSI for short centered and standard centered and shows that short centered produces non-zero HSI when standard centered does not. People interpret this as suggesting that short centered produces hockey sticks from random noise. However, what KR, and the RealClimate post (and Nick Stokes) are pointing out is that standard centered moves the signal to a different PC and Figure 2 in McIntyre & McKitrick (2005) only considers PC1. If the RealClimate post (and KR) are right, then all that McIntyre & McKitrick (2005) is illustrating is that the hockey stick does not appear in PC1 when standard centering is used. However, given that standard centering appears to require keeping 5 or 6 PCs, this doesn’t mean that short centered is wrong. It’s just different.
The basic picture I’m getting is that there is nothing wrong with what MBH98 did. They simply used a different centering resulting in a different number of significant PCs, and if they’d used standard centering they would have had to keep more PCs. So, what’s the big deal? Steve McIntyre could stop this instantly if he acknowledged the above (at least I think he would, as this is what I take Nick Stokes to be saying).
I’ve also just noticed that Steve McIntyre’s suggestion that something I’d said had raised an issue that might be relevant to Steyn’s allegations of fraud, has made it into Steyn’s most recent post. Now that is ClimateBall ™ excellence.
“The long and short of it is that McIntyre’s objections come from basic errors in PCA, have been addressed, and have been shown to be unfounded. If you correct MM05’s errors, you see a hockey-stick. Time to move on.” KR, as McIntyre’s links show clearly, he discussed all this in detail back then, getting exactly the same results as Wahl and Ammann. He explained then and explains now why (in his opinion) they’re wrong anyhow. In short, what you’re doing is just reading one side of the issue, not knowing or caring that there were responses to those points, and thinking that you know all about it. “Time to move on.”
This is the bit I’m not seeing. For example, he refers in a recent post to Figure 2 of M&M05. But, this only shows a comparison of PC1. If standard centered moves the signal to a different PC, how does Figure 2 show that anything is wrong?
Also, if almost all reconstructions today produce results similar to MBH98, it’s a remarkable error. Not only did it give broadly the same result as newer reconstructions using more and different proxies and different methods, but there are plausible arguments that their method was not actually wrong. Sometimes coincidences aren’t really coincidences.
“Steve McIntyre could stop this instantly if he acknowledged the above.” As his post and links show pretty conclusively, he acknowledges all of it – and always did. I don’t think that it’s his fault that you and Stokes and all are misunderstanding the point of disagreement (and please, I’m not the one to try to address it. See the discussions there, and/or read the two commission reports that both rejected Mann’s method, or elsewhere. But please stop discussing Nick Stokes’s “discovery” as a gotcha when McIntyre published the same thing a decade ago.)
As I said, I can understand his folks finding all this really annoying.
BBD: And jokes about Al Gore. They really need to talk about Al Gore.
I don’t know why. I mean he’s only my hero that I idolize, and I will follow to the ends of the earth! 🙂 [For you deniers with no ability to understand humor, this is a joke with heavy sarcasm.]
Anders, is there a point to explaining math to someone who doesn’t understand math?
Recent work substantially validates MBH98/99. This isn’t difficult to understand and yet on you go as if nothing had been said. It’s as if you didn’t understand the words.
Let’s try again:
Recent work substantially validated MBH98/99. There’s nothing left to say.
I don’t really get this. I don’t actually care. As far as I’m concerned MM05 and MBH98 are largely irrelevant. I was actually trying to have a bit of a break when Steve dragged some comment I’d made almost a year ago into one of his posts.
TBH, if what Steve thinks agrees with what I’ve said in my comments above I’m more than happy to move on. In fact, I’m happy to move on anyway as this only really has academic interest to me. I’ve learned some things about the details that I didn’t understand before. That’s quite a good thing. Other than that, my view of our understanding of our millennial temperature history is unchanged (which is what is really important).
MikeN – If you fail to perform significance testing in PCA, you have made a basic error with the technique. If you perform those tests, and reconstruct failing to include components of significance, you have made a _really_ bad mistake.
MM05 made just that mistake, and McIntyre’s post hoc excuses have never, as far as I can see, ’fessed up to making that undergraduate-level error. Sad, really.
It’s not “just reading one side of the issue”, it’s knowing what PCA involves, and how to do it correctly. Dropping significant PCs is an error on the order of extracting a linear regression from data (OLS), not realizing that regression has a +/- uncertainty, and then making claims contradicted by that uncertainty. Something that happens again and again regarding the temperature record, unfortunately.
If you leave out components that describe significant portions of the variance, you _cannot_ reconstruct the original data with any fidelity – that’s just what MM05 did, and what McIntyre continues to (incorrectly) claim he was somehow justified in doing.
“Recent work substantially validated MBH98/99.” Uh, no. You are changing the subject; we are not discussing the results of MBH98, but the method. The method remains incorrect, and recent results (assuming you mean PAGES2K and the like) don’t address it.
“if what Steve thinks agrees with what I’ve said in my comments above I’m more than happy to move on. In fact, I’m happy to move on anyway as this only really has academic interest to me.” That makes sense. But I think it might have more than academic interest to Steyn; perhaps the recent resurgence in interest in a dead issue is an attempt to pre-emptively defend Mann against claims of incompetent data analysis.
It might also have more than academic interest to McIntyre, as people have been making accusations about his honesty in hiding this devastating refutation of his work – a “refutation” that he published himself and discussed ten years earlier. It causes me to lose some respect for Nick Stokes (who apparently knew that) and all those who have trailed in his wake. The second group was I guess misled. But they don’t like McIntyre at all, so the most they say now is, Well, who cares about this anyhow; how boring.
I kinda get whiplash between the sets of comments on how MBH was absolutely fine and M&M was refuted, and the ones that say it’s all ancient history and it’s so petty to bring it up again.
I beat you to it, note the timestamps
Also I meant to say thank you for the Rohling link on paleo sensitivity.
Willard, I will get back to you, very busy, please bear with me.
And once again for the contrarians (I will keep posting this one until the mods get sick of seeing it)
The discussion here is quite bizarre. It is quite clear that the Mann regression step for the 1400 step depends almost completely on the bristlecone pines and Gaspe for its hockey stick result. I hope we can all agree on that. So the question is whether the Mannian PC1 is an appropriate proxy. The short-centred so-called PCA done by Mann, which I am not aware has been used by anybody else and is not PCA as conventionally understood, claims that the bristlecones are a good proxy because they represent the dominant pattern in the North American tree rings. This is clearly false. Proper PCA puts the bristlecone patterns in PC4, a relatively insignificant pattern among all the tree rings. The PC4 is basically just the bristlecones. So including the PC4 gives you the MBH result, as MM stated quite clearly in EE2005, but that does not mean it’s sensible. It depends on the bristlecones/PC4 being a good proxy, and there is substantial evidence that it’s not. Looking at verification statistics does not make them a good proxy either. If the bristlecones are reflecting CO2, as the Idsos believed, or the after-effects of mechanical damage, as McIntyre thinks more likely, they will be poor proxies even if they correlate with temperature in that part of the instrumental period used for training. The correlation could easily be spurious. None of this is difficult and I find it hard to believe that you are still flogging this dead horse.
As for the other reconstructions, many use the Mannian PC1, and many use correlation screening, which will artificially select, from a set of red-noise series which have no average trend (and have been constructed to be trendless), those series which just happen to be going up at the end, thus producing hockey sticks from a data set whose simple average is flat. Some reconstructions even use proxies like the Tiljander lake sediments, whose modern behaviour is known to be the direct result of human activities and not temperature at all, but still use that modern behaviour to calibrate the proxy. So I’m waiting for some hard evidence before I make my mind up about how modern temperatures compare with supposed global medieval warm periods or little ice ages.
If you think this is bizarre, you clearly don’t read blog comments very often.
mikep – “So I’m waiting for some hard evidence before I make my mind up about how modern temperatures compare with supposed global medieval warm periods or little ice ages.”
See here, Air Temperature, Global and Hemispheric. The data is in.
Again, Wahl and Ammann 2007 examined this – removing the proxies in question gives different 15th-century results, but those fail to pass validation, and hence are not meaningful. It reduces the length of time over which you can draw conclusions. However, further work has confirmed the results (IPCC 2007 Fig. 6.10), independently indicating that those proxies _are_ indeed reasonably accurate indicators of paleotemperature.
if it’s all so obvious, then you’ll obviously be able to point to the published reconstruction which shows a true reflection of the obvious uncertainties as you perceive them.
Until a contrarian does that, I’ll take pages2k as the current state of the art in this, thank you very much. Which agrees with Mann.
If you think that mikep’s comment is innocent, AT, you don’t read the auditors very often.
mikep just tried to inject Tiljander in the discussion.
You are being hacked.
I hope miker613 understands the hacking metaphor. He says he’s a programmer, after all.
Perhaps I should speak of social engineering instead of hacking.
He might be a programmer, but that’s not what I’d call him.
Carrick posted an interesting graph of an ensemble of all proxies over there:
I think it’s an exaggeration to say that modern reconstructions have confirmed MBH results. The handle is very different from the one MBH claimed, much bendier. _As predicted_ by M&M, MBH kept no signal except for the “blade” – and the blade is only there because they selected for it.
Unless you’re a politician and only care about “Highest temperature in the last thousand years!”
MikeN – Funny, but “Highest temperature in the last thousand years” is exactly what the ClimateBallers are complaining about. Are you saying that the contrarians are politicians? ‘Cause some of them are actually lobbyists, which is rather related…
miker613: Without even questioning your graph…
You’ve just posted a graph which shows that Global Warming is causing skyrocketing temperatures. A process which used to occur over centuries (according to your graph) is now a freakish vertical ascent.
Just so you know… Dirac functions cause many linear systems to explode.
You are changing the subject again. We are talking about MBH98.
I claimed its result was wrong. You claim that it was confirmed by recent reconstructions. Well, yes, if the only result you care about is a single yes-or-no question, and MBH shows a flat handle because its method is mostly guaranteed to produce one – yeah, you have a 50% chance of getting the right answer. Doesn’t do much to prove that MBH was presciently right about anything, though.
But as I mentioned above, there are those who care about more than that one question. In fact, the entire recent discussion has been about a different set of issues.
I think you’re changing the topic a little. The issue isn’t about whether or not MBH98 is still exactly right today (as far as I can tell the broad result from MBH98 is about the same as the result today – as people have illustrated by showing the comparison between MBH98 and Pages2K). The issue, if there really is one, is whether or not MM05 actually shows a problem with the MBH98 method. As far as I can tell it doesn’t, given that the main comparison is with PC1 only and others have shown that short centering and standard centering require retaining a different number of PCs.
[Mod: This comment has been removed by the moderator]
“You’ve just posted a graph which shows that Global Warming is causing skyrocketing temperatures. A process which used to occur over centuries (according to your graph) is now a freakish vertical ascent.” Jeepers. Can we stick to the subject?
I’ve seen hockey stick graphs before. Pretty much everyone agrees that global temperatures are rising from CO2. Pretty much everyone agrees that the proxy reconstructions have very little ability to catch fast changes. Pretty much everyone agrees that earth has had even bigger temperature shifts before (probably not as fast.) What they disagree totally on are, how much will temperatures rise, how negative will the impacts be, what will be the cost of trying to mitigate globally as opposed to adapting locally,…
If that ensemble of Carrick is correct (and I have no reason to think that it isn’t), it does indicate that MBH98 had very little skill in telling about temperatures based on the proxies. It seems to confirm the basic claim of M&M that the method removed all variability that there may have been, and that was really their main message.
Everything else in their analysis was technical justification for this basic claim.
I’m pretty sure Aristotle 350BCE “History Of Animals” has some errors in it.
Therefore I logically conclude there are no animals.
Where does this kind of ‘logic’ end?
“I think you’re changing the topic a little.” No, KR changed the topic. He claimed that the results match. I brought a graph that showed they don’t. On the original issue, I think we’re pretty much done: you-all have arrived back where they were ten years ago, agreeing now on what everyone agreed on then (dropping, I hope, Nick Stokes’ claim that McIntyre was somehow “hiding” that). If you want to discuss whether the method was right, you would now need to try and understand what they said (back then) was wrong with picking 4 or 5 PCs ex post facto and basing the main result on them. I think it’s fine to get bored with the subject, but I continue to claim that you haven’t caught up yet.
Pekka, Miker613: Read the paper again;
Notice the freaking error bars. Look at those nice error bars… Looks like they knew there was error, that it was large (2 sigma there guys), and that ain’t bad for a first cut.
And Miker613: Carrick’s graph is within the error margins they outlined 16 years ago.
So…. who cares?
“I’m pretty sure Aristotle 350BCE “History Of Animals” has some errors in it.
Therefore I logically conclude there are no animals.”
Do you actually think that someone here is reasoning like that? You’re not listening.
I need to go out, but I have nothing to do with whatever Nick Stokes has claimed. That is Nick Stokes’s to own, not mine.
I’m not sure why you think they picked 4 or 5 PCs ex post facto. The MBH98 analysis only required 2 or 3. The point is that the MM05 method requires 5 or 6, and hence only comparing PC1 isn’t doing a full comparison of the two methods.
Carrick’s graph confuses me a little as it seems different to the latest IPCC figure and seems different to Pages2K. Having said that, I don’t have any particular interest in specifically defending MBH98. All in all, this whole topic seems so 20th century.
OK… since you avoid this… here’s Figure 5B from your hero McIntyre;
1/ The method has not been shown to be ‘incorrect’.
2/ Recent results support (1)
You have nothing and when it is pointed out to you, you resort to nonsense like that quoted above.
“Recent results support (1)” No clue what you’re talking about, unless you are describing some recent statistical results that support MBH’s methods. Far as I know, everyone has dropped them, including both review commissions and Mann and everyone else.
If you just mean PAGE2K or the like, you’re not making sense.
Libertarian physics excludes hockey sticks because…bacon.
Until some ClimateBollocks player does better than PAGES 2K they’re waving their arms to distract from their losing position.
miker613: To reiterate, you/Carrick have confirmed MBH98 is correct.
WebHubTelescope would say, “Own Goal”. That is because you just scored on your own team.
Pretty much everyone agrees that the proxy reconstructions have very little ability to catch fast changes.
That’s why we have thermometers, instruments and satellites. I’m just not seeing a 100 ppm+ spike in carbon dioxide in the last several thousand years; maybe the proxies missed that?
Pretty much everyone agrees that earth has had even bigger temperature shifts before (probably not as fast.)
As in ten times slower, and as in millions or even tens of millions of years ago. Again, those tree ring proxies must have missed that. Bad Mann!
What they disagree totally on are, how much will temperatures rise, how negative will the impacts be, what will be the cost of trying to mitigate globally as opposed to adapting locally,…
They being cranks making fools of themselves in the face of critical scientific analysis on blogs.
Given the obvious hockey sticks in atmospheric carbon dioxide, recent temperatures and recent human population growth, and the obvious recent decimation of forests, wildlife etc., I ask you, why do you insist on demonstrating to us over and over again the veracity of Dunning-Kruger?
So.. side by side;
(remember Carrick’s graph goes back further in time…)
I wrote that I had no reason to doubt Carrick’s composition, but I must say that I was surprised by it. Therefore my whole comment is conditional, except that I do believe that the basic claim of M&M was really only that the method had very little skill, because there was too much noise to allow the signal to be extracted effectively.
“That is because you just scored on your own team.” Some of us don’t have a team.
I don’t think badly of you or your comments Pekka.
I don’t think anyone would dispute that there were a lot of issues with MBH98. 2-sigma error bars speak volumes to me. It was also one of the first papers on this subject, so that tends to make it suspect.
But there have been many many more studies with many more data sets, and different methodologies. We now have way more data, tighter tolerances, and a better understanding.
I believe that is the gist of what Anders has been claiming in this article.
I guess I should add that I don’t know what you want from “Figure 5B”, or what you think it’s saying, or anything. Maybe you want to explain.
anoilman: I think that is from MBH98.
While the original Mann method did tend to suppress past variation, that was a weakness which Mann recognized and began working on shortly after the publishing of the first two papers. For example he was working on methods to benchmark reconstructions techniques by 2002, long before MM showed up on the scene:
Click to access Pseudoproxy02.pdf
Just because your first cut isn’t the best approach doesn’t make it fraudulent or even wrong. Anyone who has written large computer programs would realize this from their own suboptimal initial takes on system design.
I want to know if Miker can tell me why this issue is so important to him and the others when, as has been pointed out to you repeatedly, the whole matter is largely irrelevant to the current science. Are you trying to prove Mann made mistakes, or was trying to commit fraud, or what? What is the purpose of rehashing the issue over and over again?
miker613: Sort of confused here… If you did technical stuff in university, the first thing they taught you was error margins and how to interpret them. You start by knowing that there are no binary decisions, and that measurements really are statistical things within margins of error. About 3/4 of any lab writeup in Physics was error measurements, calculations and discussions about accuracy.
MBH98 Figure 5B shows the error bars. That means any measurement within 2 sigma is likely a correct result. 2 sigma was chosen (I’m sure) because there was too much variance in the data at hand at that time.
BEST figure 1 shows the measurements with thermometers (1 and 2 sigma variances);
Click to access 2327-4581-1-101.pdf
To make it clear, I don’t accuse the authors of MBH98 of any wrongdoing.
I think that it was reasonable to do the study. It’s also true that their paper does admit that the analysis rests on the validity of several uncertain assumptions. With present knowledge we may probably conclude that their analysis really had very little power. At the time they didn’t have enough data to make the analysis and test whether it had real power.
I also understand M&M (I don’t comment on their motives). They emphasized the problems of the method, taking into account the sparsity of the data. They ran tests and drew their conclusions. Their tests were also dependent on uncertain assumptions, and they probably made some mistakes in that, but their hunch was probably correct – the MBH98 analysis did not have much power.
The technical part discussed above is commonplace. Similar issues come up in many (or all) fields. What is less commonplace is the war that this led to.
miker613, you’ve said that “Carrick posted an interesting graph of an ensemble of all proxies” and kept pasting the image, but the graph is only an ensemble of 3 studies together with one of the options from Mann08, and almost certainly is based on fewer proxies than PAGES2K.
If Carrick’s graph is correct, it shows that the mean of these specific studies shows a cooler LIA than MBH99, but since different proxies are expected to show differing variation depending on aspects such as season, this isn’t unexpected.
One of the three may be Ljungqvist 2010, which specifically discusses the differences and suggests that it is due to more proxies being available. It doesn’t suggest that methodology caused the difference.
The whole MM fuss has been about the maximum MWP, but Carrick’s graph supports the MBH assessment of the MWP and only differs in the LIA. So where’s the beef?
Joseph – On a meta level, hammering again and again on a 16yr paper, demonizing one (not all) of the authors, ignoring or just running away from later confirming work, and shotgunning invalid and simply wrong numeric techniques in attempts to discredit it – IMO this all comes down to one thing.
If the Medieval or Roman warm period might have been as warm as present (all the data says “no”), then maybe, just maybe, it could be argued that current warming might be due to natural causes, might not be our fault. We might not have to own up to it, and in particular we might not have to act.
So we see repeated attempts calling upon the indefensible (not testing for component significance), the absurd (comparing MBH PC1 to a MM05 basis component that doesn’t have that signal, ignoring the one that does), and the simply inadequate (failing to run validation tests on the reconstructions). Ad nauseam…
Chewbacca-ishness aside, you are in a real pickle. All millennial reconstructions look broadly like MBH98/99 and nobody cares about the confected contrarian fussing.
So when sane, objective observers gather to contemplate millennial climate variability, they look at PAGES 2k and then at you lot, and they laugh out loud.
Anyway, carry on obfuscating. The rest of the world will carry on happily without you.
And miker, we need to be consistent:
Directly contradicts your earlier self-identification:
Folks, I would not trust Carrick’s graph as far as I could throw it (with a 1 ton weight attached).
Look once again at the full ensemble of reconstructions from AR4. I’m pretty familiar with these, and I think Carrick is heavily weighting this toward the Ljungqvist and Loehle graphs and titling it an ensemble.
Loehle and Ljungqvist: http://tamino.files.wordpress.com/2010/09/compare.jpg
Pekka… Before you “see no reason to question” you should maybe take a few moments to be genuinely skeptical.
MikeN – “…try and understand what they said (back then) was wrong with picking 4 or 5 PCs ex post facto…”
Misunderstanding of the year there, MikeN. MBH evaluated the significance of their PCs and EOFs (spatial basis functions), retaining only those that passed the test – that is a core part of the methodology discussed in the paper. MM05 changed the data centering, giving a different basis set (including the PC4 that they noted includes a ‘hockey-stick’ signal), yet MM05 failed to evaluate component significance. They then only retained 2, apparently because that’s how many MBH’s basis set included, revealing a serious misunderstanding of PCA – that you keep what’s significant.
If you use the very same test as MBH on the MM05 PCs, 5 are retained. Because they are numerically significant descriptors of the variability in the dataset, not because you like or dislike the results. Not an ex post facto issue at all, but a basic part of the methodology that MM05 neglected. Which was a huge error, one that invalidates their conclusions, and one which (16 years later) McIntyre is still trying to justify. Ex post facto.
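The retention rule being described is, by most accounts, a Preisendorfer-style “Rule N” significance test: keep a PC only if its eigenvalue exceeds what red noise alone would produce. A minimal sketch of the idea (not the MBH implementation; panel sizes, AR coefficient, and the 95% cutoff are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n_series, n_years = 30, 400  # invented panel size

def red_noise_panel(phi=0.7):
    x = np.zeros((n_years, n_series))
    for t in range(1, n_years):
        x[t] = phi * x[t - 1] + rng.standard_normal(n_series)
    return x

def eig_fractions(panel):
    # Fraction of total variance explained by each PC.
    panel = panel - panel.mean(0)
    lam = np.linalg.svd(panel, compute_uv=False) ** 2
    return lam / lam.sum()

# A panel with one genuine shared signal buried in red noise.
signal = np.sin(np.linspace(0, 6 * np.pi, n_years))
data = red_noise_panel() + np.outer(signal, rng.standard_normal(n_series))

obs = eig_fractions(data)

# Rule-N-style null: eigenvalue fractions from pure red-noise panels.
null = np.array([eig_fractions(red_noise_panel()) for _ in range(200)])
threshold = np.percentile(null, 95, axis=0)

n_keep = int(np.sum(obs > threshold))  # PCs beating the red-noise benchmark
print(n_keep)
```

The design point: the number of PCs retained comes out of the data-versus-null comparison, not out of a preference for any particular result, which is the substance of the retention argument above.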
All of your complaining doesn’t change the basic _error_ which invalidates MM05.
Ah jeez! Go look at Moberg05 as well. Carrick is clearly cherry picking reconstructions with the highest amplitude. You’d have to go back and read each of the papers to determine why each reconstruction is showing this.
I call a big fat BS on this one. Talk about pulling a fast one!
“I want to know if Miker can tell me why this issue is so important to him…” No problem. Here’s the answer:
You started it. This began as an attack on McIntyre by Nick Stokes, Kevin O’Neill, and a number of others (more or less including ATTP). They called him a fraud. McIntyre responded, point by point. As he posted each point, Stokes, O’Neill and others quickly posted, “You didn’t deal with this _other_ point!” By now he’s dealt with almost all of them, and there was absolutely no substance to any of them. Everything they said was wrong.
At this point, with not much left to attack, everyone is saying, who cares about an old reconstruction anyhow?
Okay? This is an issue I’ve followed from the beginning to the end (in its current iteration in the last couple of weeks), and I can testify that as far as I can see it was a completely specious attack on McIntyre by some people who don’t like him and don’t care too much about accuracy. Next time I see someone link to some post about something awful about him, I’ll keep that in mind. No doubt this “fraud” issue will be linked there as well, to convince people who hate him that they are right to do so and don’t need to read his rebuttals.
So Eli (along with MT, James and the Weasel) is a long-time player of ClimateBall, and remembers when Steve Mc used Nigel Persaud as a sock puppet to advance his cause, which leads the Bunny to wonder exactly who the owner-operator of Jean S is. Conveniently, Jean is acting as the fangs for Steve the cat.
“Directly contradicts your earlier self-identification:” I complain about your reading comprehension. Nothing I said identifies me in any way as being on a team.
It has occurred to Eli that both MBH98/99 and PAGES 2k show a lot less variation than other reconstructions, but they are also more global and involve more proxies. It is reasonable to assume that a reconstruction which averages together many proxies from many places will show less variation over a short period.
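Eli’s point is just averaging statistics: independent noise across proxies cancels, so a composite of many proxies is smoother than one built from a few. A toy check (the signal shape, noise level, and proxy counts are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n_years = 1000
climate = np.sin(np.linspace(0, 4 * np.pi, n_years))  # a shared "climate" signal

def proxy():
    # Each proxy sees the shared climate plus its own independent noise.
    return climate + 2.0 * rng.standard_normal(n_years)

few = np.mean([proxy() for _ in range(5)], axis=0)
many = np.mean([proxy() for _ in range(200)], axis=0)

# Residual noise around the shared signal shrinks roughly as 1/sqrt(N).
print((few - climate).std(), (many - climate).std())
```

Of course, real proxy noise is neither independent nor identically distributed, so this only illustrates the direction of the effect, not its size in any actual reconstruction.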
Joseph: “What is the purpose of rehashing the issue over and over again?”
The exact same purpose as there was in raising it to begin with. Find something about the science that’s sufficiently complex (hockey stick, surface record), then cook up an endless series of arguments while ignoring the rest of the science. The underlying motivation is left as an exercise for the reader.
I take Eli’s point about JeanS (maybe someone with access to the relevant software could run some text analysis?), but whether he’s real or not I find it fascinating that the small flock of purported academic stats experts that McI has attracted didn’t find any of the major problems that subsequently came to light. Readers might be left with no choice but to get exercised when considering that.
miker613… “I think it’s an exaggeration to say that modern reconstructions have confirmed MBH results. The handle is very different from MBH’s claimed, much bendier. ”
There is nothing claimed in MBH98/99 about the exact shape of the hockey stick!
Have none of you even read the conclusions of the paper?
“Not an ex post facto issue at all, but a basic part of the methodology that MM05 neglected.” As I’ve said before, I can’t judge the actual math issues. It seems clear from their recent postings that they disagree with your assessment, so I’ll leave it to you and them to work it out. Perhaps someone can find a place where they discuss it explicitly. I imagine this came up ten years ago.
MikeN – It did come up ten years ago, see here and (more thoroughly and quite explicitly) Wahl and Ammann 2007. McIntyre and McKitrick have simply spent the intervening time denying that it was a mistake.
They are simply wrong.
miker613 writes,”You started it. This began as an attack on McIntyre by Nick Stokes, Kevin O’Neill, and a number of others (more or less including ATTP). They called him a fraud. ”
[Mod: Sentence removed, disrespectful] My participation began with Judith Curry’s Fraudulent(?) hockey stick post. I pointed out that when real fraud has taken place Curry has defended the perpetrator (Edward Wegman). McIntyre was dragged in tangentially because Wegman based much of the Wegman Report on M&M’s papers.
I don’t believe I’ve ever called McIntyre a fraud. I seriously doubt that Nick or Anders have either. So, I think you’re living in a different reality.
[Mod: Sentence removed, disrespectful]
Oh dear, miker. Oh dear.
I noticed somewhere that Jean S is from Finland. I have absolutely no further idea of his background, but a search proved that he has posted also in Finnish on Finnish climate sites.
“With present knowledge we may probably conclude that their analysis really had very little power. At the time they didn’t have enough data to make the analysis and test whether it had real power.”
And yet, somehow, they got what were subsequently shown to be correct results, within limits. Just a good guess, you think? (And we’re measuring “real power” how exactly?)
And hmm, Bohr’s early model of the atom was quite wrong. Will you be starting a movement of physicists to dig him up and burn him or at the least strip him of his Nobel, and to conduct learned conferences about how all subsequent related work is necessarily tainted? I look forward to it.
Eli will no doubt recall that MBH98/99 are NH while PAGES 2k is global, and Mann has acknowledged (apparently since 2001, before MM started the fuss) that regression methods understated amplitude. Even though Carrick evidently cherry picked reconstructions with greater amplitude, it may have been a lucky coincidence that the MBH studies of the NH match the most recent global estimate so well.
The point remains that McI’s “auditing” largely rests on his misusing or misunderstanding principal component analysis (PCA) methodology to remove PC4 from his “centered” method, then misleading readers with his false result.
As by 2002 Mann had moved on and replaced the relevant PCA step with RegEM methodology, from the outset McI was arguing against a superseded method and shouldn’t be given any credit for the shift away from that particular principal component analysis approach.
> No, KR changed the topic. He claimed that the results match. I brought a graph that showed they don’t.
On 2014-10-01, at 14:25, miker613:
KR’s response is at 15:03. This response was in answer to MikeN. Speaking of whom, here’s how he pulled the Tiljander trick at BartV’s a while ago:
Incidentally, the graph goes back to times which are irrelevant to Nick’s point, which should be discussed at Nick’s anyway.
You really should go tell Nick about Carrick’s graph, miker613.
> If the Medieval or Roman warm period might have been as warm as present (all the data says “no”), then maybe, just maybe, it could be argued that current warming might be due to natural causes, might not be our fault.
It could also be argued that a greater MWP increases sensitivity.
But then sensitivity is another game altogether, and “we” are talking about MBH, right?
Okay, I withdraw the claim that you called McIntyre a fraud. It changes little of my point. He is still responding to your accusations, one at a time.
KR – it’s nice that you have a link. But again, did you read the rebuttal? If not, maybe your link is the one that’s wrong? Loads of people seem to think that reading the sites that they like, and accepting whatever those sites say about the other side’s opinions, is sufficient.
What indicates very little power to me is that the best estimate is virtually flat. The variability found by later studies is not very large either, but still much larger. Getting a virtually flat result, when the reality is not quite that flat, is a strong indication of lacking power.
miker, MM’s upward-facing hockey stick selection, which is at the heart of their work and attack on MBH, is something requiring no math at all to note. I would even go so far as to say that such an error could not have been unintentional. No dodging allowed on this one.
“miker, MM’s upward-facing hockey stick selection, which is at the heart of their work and attack on MBH, is something requiring no math at all to note. I would even go so far as to say that such an error could not have been unintentional. No dodging allowed on this one.” Explain? No clue.
Sowing the seeds of FUD: Fear, Uncertainty and Doubt.
Anyways, here’s the Pages 2k paper with supplementary data;
Figure 4 certainly shows it all: error bars (way, way, way smaller), forcings, you name it. It also compares Mann, Ljungqvist, Moberg, and Hegerl.
I’d like to take the time to thank Carrick and miker613 for backing Michael Mann and the rest of the scientific community for their efforts. I specifically want to thank them for pointing out that the existing science really is very very accurate, and that Mann was spot on with his original work.
Willard said… “It could also be argued that a greater MWP increases sensitivity.”
“By now he’s dealt with almost all of them, and there was absolutely no substance to any of them. Everything they said was wrong.”
” I can testify that as far as I can see it was a competely specious attack on McIntyre by some people who don’t like him and don’t care too much about accuracy.”
“As I’ve said before, I can’t judge the actual math issues. ”
So the criticisms of McIntyre were specious and without substance and to prove that you take the word of the man himself. Why is his word more meaningful than the multiple people criticizing it?
My understanding of this dust-up (corrections welcome):
MBH98 was pretty rough around the edges, with some wide margins for error, and based on some wild assumptions, facts that were clear from the beginning. Despite its flaws it motivated further research using different methods and proxies which, in large part, showed very similar results. Years later McIntyre published a paper that claimed to show that the methods used in MBH98 were horribly flawed, so the results could not be trusted. Many people critiqued McIntyre’s paper as being flawed for misusing the very method he accused MBH of misusing. Most researchers agreed that MBH98 was not particularly accurate, but since the intervening 7 years of research had shown the wild assumptions and conclusions to be (more or less) accurate, it really was of little consequence. Now here we are 16 years later and people are still arguing that MBH98 used flawed methods and should not be trusted, as if that would somehow overturn the multiple studies that followed and bring the whole AGW theory crashing down.
Pekka, I think you’re proposing an arbitrary standard. The correct one is to compare the MBH results not only with what was found subsequently but with the prior understanding (this e.g.). Anything else is an exercise in Bohr-burning.
MikeN – I have read the rebuttal (link here, which you failed to provide). And in that rebuttal there is _nothing_ about evaluating PC significance, just a short note that PC4 accounted for only 8% of the variance, as opposed to the 38% for MBH PC1.
Entirely unsurprising, as MBH’s centering method resulted in only two PCs, whereas MM05 spread that same variance over five – and definitely there are portions of the HS response spread to other PCs. PC basis functions are orthogonal descriptions of the data; if MMPC4 were fully aligned with that particular variation it would have been much larger, meaning that the variation is (partially) carried in other PCs.
MM05 and their response did not evaluate the significance of their PCs, did not threshold retention for significance, or they would have kept five. And they did not evaluate the statistical validity (again, a large part of the MBH methodology) of the MM reconstructions, which failed validation where they disagreed. Errors, unforced, wholly unaddressed.
You’ve repeatedly attempted to disavow your comments with “I don’t know the math, but…”. The math is clear, MM05 made a mistake. The history is clear, McIntyre continues to claim it wasn’t an error. And he’s wrong, no matter how many times you repeat nonsense.
Check what I wrote here
“Why is his word more meaningful than the multiple people criticizing it?” Oh, no – I wouldn’t do that. He posted links. For instance, in the post we’re discussing, Nick Stokes claimed that McIntyre was hiding a problem with his method; McIntyre posted links to climateaudit from ten years ago, when he posted on it and discussed it extensively. Nick Stokes responded in the comments that when McIntyre testified before Congress and was asked about something slightly related, he didn’t mention this issue – that is what Stokes had meant by “something Steve McIntyre doesn’t want you to see”.
Okay? That’s the kind of thing I mean. Total disproof of nonsense by direct evidence.
I saw that, Pekka. The mistaken nature of your approach to this is quite clear.
No significant change to the PAGES 2k global reconstruction. And careful what you wish for. People keep trying to remind hot-MWP contrarians that they are arguing for higher climate sensitivity to radiative perturbation.
You really should think about this more often, and more carefully.
Pekka: “Getting a virtually flat result, when the reality is not quite that flat is a strong indication of lacking power.”
Is it? What non-arbitrary standard is there for that assessment?
“I have read the rebuttal (link here, which you failed to provide)” Of course I didn’t provide it. I didn’t even know about it. I was indeed suggesting that you go look for it. Presumably, since “in that rebuttal there is _nothing_ about evaluating PC significance”, you haven’t found the right rebuttal yet. Search climateaudit posts. Or maybe no one thinks your complaint has any validity. Or maybe you’re right. _I_ don’t know, and there isn’t any way for me to know, no matter how many times you put your points in bold. McIntyre and Jean S claim that the five PCs were chosen ex post facto to protect the hockey sticks in a way that a statistician cannot do; you seem to disagree. Go work it out with them; I don’t know what you want from me.
“I don’t know what you want from me.” A coherent, non-arbitrary POV?
“People keep trying to remind hot-MWP contrarians that they are arguing for higher climate sensitivity to radiative perturbation.”
Yes, their persistent failure to grapple with this point is proof of an essentially anti-scientific stance.
miker613 fails basic logic. MBH non-centered PCA and using all proxies with no PCA give essentially the same results; MM centered PCA with only 2 PCs gives a wildly different result. So “McIntyre and Jean S claim that the five PCs were chosen ex post facto to protect the hockey sticks”… failing to accept that the hockey sticks are in the data, and that the MM method was simply hiding them. Which is what all subsequent investigations find.
If a method gives a result that’s a fraction of the real signal then it’s not an arbitrary statement to notice that the method lacks power.
We cannot know whether the method would have shown anything from much stronger temperature variability, because a major contributor, and possibly the main contributor, to the lack of power was probably in the calibration phase.
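One standard way a calibration phase loses amplitude is regression attenuation: regressing temperature on a noisy proxy shrinks the fitted slope toward zero, so the reconstruction comes out flatter than the truth outside the calibration window. A hypothetical sketch (the sinusoidal “temperature” and the noise level are invented, and this is ordinary least squares, not any specific published calibration):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
temp = np.sin(np.linspace(0, 8 * np.pi, n))   # invented "true" temperature
proxy = temp + 1.5 * rng.standard_normal(n)   # proxy = temperature + heavy noise

# Calibrate on the most recent 200 "years" only, as a reconstruction would.
cal = slice(-200, None)
slope, intercept = np.polyfit(proxy[cal], temp[cal], 1)
recon = slope * proxy + intercept

# The regression slope is attenuated by the proxy noise, so the
# reconstruction's amplitude is smaller than the truth.
print(temp.std(), recon.std())
```

This is one mechanism by which a method can return a flatter-than-real history; whether it dominates in any given reconstruction depends on the proxy noise level and the calibration design.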
MikeN – What do I want from you? Not a lot, quite frankly, but it would be nice if you would cease to repeat nonsense like your “ex post facto” claim – if you run the _very same significance test_ as used in MBH on the MM principal components, five are significant. The test from 1998!
Note that nowhere in MM05 is there a discussion of the retention criteria. None. They weren’t rejecting the MBH retention criteria, they simply didn’t use them, and hence _did not_ replicate MBH using centered data, as they claimed. Criticizing a methodology while failing to apply the significance tests from a paper published seven years earlier is absolutely not “after the fact”. It is, rather, a mistake by MM05. A mistake which McIntyre continues to double down on, almost a decade later.
“…there isn’t any way for me to know” Actually, there is. You could read the papers, as have I and any number of others. If you don’t know, if you can’t be bothered to look, there is _zero_ reason for you to repeat unsupportable claims without evidence. Other than trolling, which appears to be your MO.
Indeed. McIntyre and McKitrick should have done what Michael Mann did and used an objective rule to calculate how many PCs to keep. Then they would have had to post:
Where they’d be criticized for estimating red noise parameters from tree ring networks without detrending first.
“And careful what you wish for.” Don’t know about you; I wish for people to stop trying to win, and stick to science. I think it’s important to know facts about global warming so we can make good decisions. People who have a “team” threaten me.
“No significant change to the PAGES 2k global reconstruction.” Sure looked significant to me.
Anyhow, all those who hate McIntyre should enjoy the spectacle of PAGES2K making several changes from his corrections. I think that’s a good thing; I like science.
People who hate him should also consider honestly: do they have anything like his remarkable encyclopedic knowledge of proxies? If they don’t, maybe they should let the big boys (him, the people who make PAGES2K) work these things out and they should get the heck out of the way. If they like science.
I think “hate” is a very strong word and I’m not sure anyone has really expressed an opinion that could be interpreted as hatred. It seems a little unfortunate that you’ve chosen to potentially interpret things in that way.
Anyway, given a desire for an easy life, I shall probably turn on moderation for all and have an early night.
Arctic isn’t global.
‘ “People keep trying to remind hot-MWP contrarians that they are arguing for higher climate sensitivity to radiative perturbation.”
Yes, their persistent failure to grapple with this point is proof of an essentially anti-scientific stance.’
Don’t know about hot-MWP contrarians, but to me the anti-scientific ones are the ones who have a preferred result, whether it’s hot-MWP, no-MWP, or lower climate sensitivity.
Okay, I withdraw the word “hate”. I do definitely get an impression of strong dislike. anoilman has mostly stuck to obscure graphs today, but a couple of days ago we were getting lots of the “mcintyre is awful” links.
Of course, miker. So we must consider the implications for sensitivity to CO2 if millennial climate variability is rather greater than previously supposed. Contrarians never do this. So contrarians do not appear to be “sticking to science”.
The scientific ones heed the science.
Hate? No. Mock? Oh yes.
PAGES 2K has made corrections based on lots of people’s input. I presume McIntyre now accepts the hockey stick then. Or would he be a traitor to his team if he did?
That would be the contrarians: hot MWP; low climate sensitivity. A basic contradiction that betrays an unscientific predisposition to preferred (and mutually incompatible) results.
‘ “…there isn’t any way for me to know” Actually, there is. You could read the papers, as have I and any number of others. If you don’t know, if you can’t be bothered to look…’ Some people have enough math to read these papers and understand them, others don’t. I tend to confine myself to points that I can verify personally. It’s not enough to read a paper, or a blog post, and say, “I like that!” It takes weeks of work for statisticians to work these things out, and the rest of us are going to have to take their word for it. I don’t know if you’re a statistician. If you are, and if you put the work in to do the math yourself, maybe you have a right to claim that you know. If you aren’t or if you didn’t – maybe you read a post or two or skimmed a paper but didn’t work out the math – then you actually have no real clue any more than I do. You just picked a side you like and are quoting their talking points. Don’t blame me for refusing to do the same.
daves, PAGES still appears to be heavily weighted with NH proxies — probably inevitably given where land is found on the planet 🙂 (see their figure 2, lower panel)
Miker, who provoked this post? I will ask you again why you all are rehashing all of this. What are you trying to prove and why?
Will Power – Excellent reference, I remembered seeing it before but couldn’t find it.
Link here: RealClimate – PCA Details – demonstrates that, using an objective test as applied by MBH, 2 (or 3) eigenvalues (hence PCs) should be retained as being above noise with the MBH centering, and 5 (or 6) with the MM05 centering.
Flat-out standard, objective rules. Which MBH used. Which MM05 _failed_ to apply. As Wahl and Ammann 2007 note:
“neither the time period used to “center” the data before PC calculation nor the way the PC calculations are performed significantly affects the results, as long as the full extent of the climate information actually in the proxy data is represented by the PC time series.” (emphasis added)
Which MM05 failed to do, demonstrating their lack of understanding of PCA technique at the time. McIntyre’s doubling down again and again on this point is just appalling to me.
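For anyone wanting to see what such an objective retention rule looks like in practice, here is a minimal sketch (in Python, with made-up data — not the MBH or MM networks) of a Preisendorfer-style “Rule N” selection: keep only the PCs whose explained-variance fraction beats what pure noise of the same shape would produce.

```python
import numpy as np

rng = np.random.default_rng(0)

def eig_fractions(X):
    """Fraction of total variance explained by each PC (centered PCA)."""
    ev = np.linalg.svd(X - X.mean(axis=0), compute_uv=False) ** 2
    return ev / ev.sum()

def rule_n(X, n_trials=200, q=95):
    """Preisendorfer-style Rule N: retain PCs whose explained-variance
    fraction exceeds the q-th percentile of the same statistic computed
    from white-noise surrogates of identical shape."""
    obs = eig_fractions(X)
    null = np.array([eig_fractions(rng.standard_normal(X.shape))
                     for _ in range(n_trials)])
    return int(np.sum(obs > np.percentile(null, q, axis=0)))

# Toy "proxy network": two strong common signals buried in unit noise
t = np.linspace(0, 1, 200)
signals = 3 * (np.outer(np.sin(2 * np.pi * t), rng.normal(size=20))
               + np.outer(t, rng.normal(size=20)))
X = signals + rng.standard_normal((200, 20))
n_sig = rule_n(X)
print(n_sig)   # retains the handful of PCs that stand above the noise
```

A red-noise (AR(1)) null, as discussed in the RealClimate post, would replace the white-noise surrogates; the retention count then depends on the centering convention, which is exactly the point at issue.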
But, miker, you are doing the same. Don’t you recognise that? Wondrous. Do continue.
“You just picked a side you like and are quoting their talking points. Don’t blame me for refusing to do the same.”
Since, by your own admission, you cannot do the math, you have done exactly that.
> You just picked a side you like and are quoting their talking points. Don’t blame me for refusing to do the same.
Your first sentence in that thread, miker613:
Now, please take a break.
What fraction, Pekka, in accordance with what standard? You’re still sounding arbitrary.
Also, perhaps you should re-read MBH98 to see what was and wasn’t being claimed. Maybe start with the abstract:
Notice that the only specific claim about recent variability is regarding three of the eight years prior to 1998. Oddly you don’t seem to want to place any emphasis on that. How has it worked out in light of subsequent work? Did it have power as you define that term?
Also, note in particular that the graph they show, while it has a flattish (but see below) central tendency, comes with large error bars that (IIRC) make LIA-related claims hollow. You can take the selection of 2-sigma as a statement of relative lack of power if you like, but in that case you really need to not make that claim in a manner that implies the paper itself didn’t recognize the issue.
This is how MBH describe the flat handle:
Not so flat as all that, it seems. Note in particular the use of “pronounced.”
But if I were a clever fellow like McI with a libertarian agenda to push, I’d have seen gold in that 2-sigma.
There’s one technical issue related to the number of PCs to retain.
If the time series are short-centred rather than centred on the full period, the calculated total variance (squared deviations from the new center) increases. Another consequence is that the first PC is forced to deviate from the direction that would leave as little variance as possible for the other PCs to explain. That effect carries over to further PCs, leading to the outcome that each of them explains more variance than in the standard approach.
Taking that into account, how is it possible that short-centred PCA allows for keeping fewer PCs than standard PCA (2 for short-centred, when 5 are required for standard PCA)?
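Pekka’s first claim here is just the usual sum-of-squares decomposition, and is easy to check numerically. A minimal Python sketch with a synthetic series:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(500))    # a red-noise-ish toy series

full_mean = x.mean()
short_mean = x[-80:].mean()                # "calibration period" mean

ss_full = np.sum((x - full_mean) ** 2)
ss_short = np.sum((x - short_mean) ** 2)

# Squared deviations about any center other than the full mean can only
# be larger: ss_short = ss_full + N * (short_mean - full_mean)^2
assert ss_short >= ss_full
assert np.isclose(ss_short - ss_full,
                  len(x) * (short_mean - full_mean) ** 2)
```

The identity holds exactly for any series, so the “extra” variance from short-centering is entirely determined by how far the calibration-period mean sits from the full-period mean.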
Pekka: “the lacking power was probably in the calibration phase”
And I’d say that was reflected in the selection of 2-sigma.
“PAGES 2K has made corrections based on lots of people’s input.” According to the paper itself, they made five corrections to the Arctic data. Maybe a few more scattered. A hefty proportion seem to be McIntyre’s. Which as I said is a good thing.
“I presume McIntyre now accepts the hockey stick then.” Total non-sequitur. See his post for other issues he still has with the reconstruction.
“Or would he be a traitor to his team if he did?” I don’t like teams. Are you on one? Or can you value science whoever provides it? PAGES2K is much more valuable now! Because of McIntyre! Hurray for the helpful scientist!
Sigh. I’ve done nothing at this blog but defend people against attacks. Then the attackers claim that the other side is the one with a team.
‘ “You just picked a side you like and are quoting their talking points. Don’t blame me for refusing to do the same.”
Since, by your own admission, you can not do the math you have done exactly that.’
@Willard as well.
No. I am not doing that. I am pointing out that this side says A, that side says B. I am not competent to decide between them. Most of you are apparently not either. Some of you perhaps are.
That’s as far as I can go. That should be as far as most of you should go as well.
Zoiks! Holy trollathon batman! Hats off to you miker613! Very successful way to fritter away an afternoon for you!
I’m glad to hear that you and McIntyre, and of course Carrick now agree that the hockey stick is well within specs, and very accurate.
Thank you for supporting and proving it to everyone!
Pekka… what Steve just said. 2 sigma is not only a big deal, but if you look 2 sigma is very very wide.
Pekka – How is it possible? Because the short-centered approach results in PC basis functions that are better aligned with some of the primary variabilities in the dataset (ie, hockey-stick), as shown by the larger explained variance each contains relative to MM05. The short-centered approach is simply more succinct than full centered. However, if you apply the _same_ significance criteria, retaining 2 and 5 PCs respectively, they account for 47.95% (MBH 2) and 48.46% (MM 5) of the variability (numbers here).
And in that they are pretty much equivalent – as long as you retain the _appropriate_ number of PCs for that centering method. Retaining PCs that contain the same cumulative variance gives you the same results, regardless of how many PCs a particular methodology requires for that.
Analogy (pretty direct, actually): If you have a variance that describes an arc in 2D, you can express that variance with an appropriately chosen set of polar coordinates (2D orthogonal basis set) using only the dimension (component) theta. If you were to use Cartesian coordinates (2D orthogonal basis set) you would be forced to use combinations of both dimensions (components) X and Y, less succinct, to equally express the same variance.
Addendum – The short-centered MBH principal components are more succinct than full-centered ones for this data; take a different dataset, with less of a strong common component over the centering period, and the relative number of significant PCs will change up/down for both methods.
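KR’s point — that different centerings become equivalent once each retains enough cumulative variance — can be illustrated on synthetic data. This is a Python sketch only; the low-rank “proxy network” and thresholds are invented, not the actual MBH/MM data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, rank = 300, 15, 3
# Low-rank toy "proxy network" plus a little noise
X = rng.standard_normal((n, rank)) @ rng.standard_normal((rank, p)) \
  + 0.02 * rng.standard_normal((n, p))

def truncated_recon(X, mean_rows, var_target=0.999):
    """PCA about the mean of `mean_rows`; keep just enough PCs to
    reach `var_target` of the calculated variance; reconstruct."""
    mu = X[mean_rows].mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(frac, var_target)) + 1
    return (U[:, :r] * s[:r]) @ Vt[:r] + mu, r

full, r_full = truncated_recon(X, slice(None))        # full centering
short, r_short = truncated_recon(X, slice(-50, None))  # "short" centering

# The centerings may retain different numbers of PCs, but once enough
# cumulative variance is kept, the reconstructions agree closely.
err = np.max(np.abs(full - short))
print(r_full, r_short, err)
```

The number of retained PCs can differ between the two conventions, yet the truncated reconstructions are near-identical — which is the apples-to-apples comparison that matters.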
I agree that the confidence interval is given as wide, but that does not change the fact that the best estimate is essentially flat. If the method had power it would show more variability in the best estimate, even if it could not show that the variability is statistically significant.
But the standard PCA chooses a PC1 that’s best aligned with the variability.
Presently I do think that the short-centred PCA calculates the percentages from inflated total variability, and that cannot be right as far as I understand.
And so Miker’s journey ends precisely where it started.
Pekka, “essentially flat” doesn’t fit with the description of the plot given by the authors of the paper.
“If the method would have power it would show more variability for the best estimate, even if it does not tell that the variability shown is statistically significant.”
Interesting claim, that. But it still seems completely arbitrary.
Pekka – Centering has an effect too. In this case short-centering on the calibration period packs a strong common component into one location, accounting for a great deal of the variability with that one PC; that wouldn’t necessarily be the case in other datasets, in fact for most it wouldn’t. In effect it slightly changes the dimensional basis of the proxies, and that alignment of the strong change really reduces the dimensionality of the variability, and hence the number of dimensions required to express it.
I absolutely _disagree_ with the idea that short-centering inflates total variability. This is disproven by the near-identical reconstructions using MBH 2 and MM 5 PCs with approximately the same cumulative variability. If you were correct, identical summed variability covering the variability above noise _could not_ produce near-identical results. And it does.
Note that the MM reconstructions that differ, using only 2 PCs, account for only about 28% of the data variability. Those PCs are simply less aligned with the variability in the sample due to how they were generated.
I consider the expressive power of the MBH PCs a serendipitous outcome due to the centering period and strong hockey-stick uptick present in almost all proxies in that period. But it’s as valid as full-centering (even if less standard in practice), in that near-identical reconstructions can be obtained using all above-noise PCs.
Pekka, here is the comparison of reconstructions from Mann 2008:
It is not entirely clear what Carrick has done, but he has certainly cherry picked the three reconstructions that show the greatest centennial scale variance (Moberg, Loehle, and Ljungqvist). He also appears to use Mann 08 EIV land only, but certainly the EIV reconstruction rather than the CPS reconstruction (which is statistically superior, if I recall correctly). Further, he appears to have used a different baseline for the MBH reconstruction (as can be seen by the differences in 20th century reconstruction) to exaggerate the differences.
However, the MBH98 method does tend to suppress low frequency variation, a fact pointed out by (among others) Michael Mann prior to 2005. That is why Mann had abandoned the MBH method by 2005, and was using other methods shown to be superior in recovering “temperatures” from pseudo-proxies. That is not at issue. What is at issue is, are the particular criticisms of MBH98 by M&M05 valid (by and large no), and was the MBH98 method a good first attempt at reconstructing temperatures with uncertainties (yes – no more recent reconstruction leaves the range of the MBH98 and 99 uncertainties). For Climate Ballers™ (on both sides of the divide) a more germane question is why M&M spent so much effort criticizing an already obsolete method in 2005, and why McIntyre still spends so much time criticizing it in 2014.
As an aside, I will note Mike613’s claim that Carrick graphs “an ensemble of all proxies”. Perhaps an ensemble of all skeptic™-approved reconstructions – but certainly not all (or even a quarter).
Anyhoo… the Pages2k paper originally published the supplementary data explaining error margins;
Supplementary Data sets:
Figure S2 | Proxy temperature reconstructions for the seven regions of the PAGES 2k Network. Quite clearly shows where there is a lot of error, and obviously room for improvement. Updates to this are not only expected, but will continue to be expected. I look forward to seeing more, in fact. (I don’t need McIntyre to predict this… I can think.)
I want to continue the hand of friendship for our pals over at ClimateAudit for pointing out that MBH98 was spot on the money, and correct. I think it’s great that amateurs of that caliber can produce useful evidence that the hockey stick is real, accurate, and a serious concern.
Rattus, thanks for the clarification that PAGES still appears to be heavily weighted with NH proxies. Clearly the various recent reconstructions diverge in detail, particularly regarding amplitude of the little ice age, I’d expect that reasons have been examined. As others have noted, the divergence remains well within the uncertainties highlighted by MBH99.
Pekka, saying that the MBH best estimate is essentially flat seems questionable, as the discussions on the paper highlighted the downwards slope from the MWP to the 19th century. The contrarians claim that it’s flat, but misrepresent the paper. As above, Mann was concerned that all the regression methods including MBH tended to reduce amplitude of variations, hence the wide uncertainties.
It’s surely possible that the datasets differ essentially, but it must be assured that explained variances are taken relative to variance calculated from the full period mean.
Pekka… “If the method would have power it would show more variability for the best estimate…”
The reconstructions that show more variability are the ones that (to my knowledge) use fewer proxies, or favor higher latitudes. The more global and the greater the coverage, the flatter the hockey stick handle gets, such as with the PAGES2k reconstruction.
No! If you include all PCs from short-centered PCA you will get the same result as you get from including all PCs from standard PCA. Therefore the percentages of total variability are not inflated. What does happen is that PCs that show the dominant signal during the short-centered period are more heavily weighted. Not all PCs from any form of PCA of tree rings (or other proxy data) will represent variation due to temperature, but temperature shows a very distinctive signal during the last century. PCs with a dominant signal in that period are therefore likely (but not guaranteed) to show a temperature signal.
It is unclear that short centered PCA is the best way to pick out that temperature signal. It is very clear, however, that it is an acceptable thing to do. If it were not, choosing proxies due to their likely response to temperature would also be invalid in temperature reconstructions (which is absurd). That does not mean it is not without statistical pitfalls, but what method in statistics is not?
It is also a feature of the MBH method, so it is not a flaw in M&M’s critique of that method (though that certainly has flaws enough).
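That first sentence is exact, not approximate: with all PCs retained, PCA is just a change of basis, so any choice of center reconstructs the data perfectly. A quick Python check on toy data:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 8))   # arbitrary toy data matrix

def recon_all_pcs(X, mu):
    """Project onto ALL principal components about center mu, then
    invert: this must return the data exactly, whatever mu is."""
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return (U * s) @ Vt + mu

full = recon_all_pcs(X, X.mean(axis=0))         # standard centering
short = recon_all_pcs(X, X[-20:].mean(axis=0))  # short centering

assert np.allclose(full, X) and np.allclose(short, X)
```

Both reconstructions recover X to machine precision, so any disagreement between methods can only come from which PCs are *discarded*, never from the centering per se.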
Willard – _That_ was funny. And all too sadly familiar…
KR, plausibly, short centering results in the explained variance being the variance over the short centering period rather than over the full length of the proxy. If that is the case a simple comparison of the variance explained is an apples and oranges comparison. That does not change the fact that whether you retain approx 85% of variance using short centered PCA, or about the same variance using standard PCA, you get essentially the same result in the reconstruction:
“my first proper single author paper”
there goes Wotts’ claim to be a senior academic
I don’t believe I ever used the word “senior”. Strawman much?
Mirror, mirror on the wall… who is the most senior climate economist of all?
VeryTallGuy… Only one man in this conversation invents papers that he can’t seem to find….
Richard either doesn’t understand the nature of science and scientific publishing in other fields, or he is just trolling. I’m not putting my money on the first opinion.
An obvious danger of being single author is that you take data from others and botch the plusses and minuses (gremlins), or that you don’t do a sanity check on your results because the result is convenient, and thus conclude the supposed existence of 300 papers no one, including yourself, can find (#FreetheTol300).
And Richard appears to have little understanding of the typical publications of a physicist, or he does and is just trolling.
I try now to explain better the two points I was discussing yesterday (Finnish time). I start with the issue of centering and variance.
First of all, what I discuss is calculated variance, which is not the same as variability in the decentered (or short-centered) case.
We start with a set of time series and scale them based on some rule. In my consideration this scaling is done using rules applicable to the standard PCA, no separate scaling is allowed for decentered PCA.
In standard PCA, PC1 is determined by the requirement that it explains as much of the variance as possible, PC2 explains as much as is possible after PC1 is subtracted, etc. It’s also known that N first PCs explain as much variance as it is possible to explain by N PCs.
In decentered PCA the total variance calculated from the “new center” is the sum of the original variance and a term proportional to the squared separation of the two centers. When PCA is done in this case the PC1 moves closer to the direction defined by the decentering. If the separation applied in the decentering is so large that its contribution is large relative to the variation explained by the original PC1, then the new PC1 aligns closely with the decentering. The variance explained by the new PC1 contains contributions both from the real variability and from the extra variance created by decentring.
In the special case where decentring aligns perfectly with the original PC1, nothing else changes except that the new PC1 explains much more variance; all other PCs remain unchanged and explain as much variance as before. In all other cases the whole set of PCs is modified, and there’s always more variance left unexplained after N steps than there is in the standard PCA.
When all PCs are included, the result is the same, but cutting at any N means that standard PCA explains more of the real variability than decentered PCA. In the decentered PCA there’s more real variability left unexplained and there’s also some distortion from the decentering left in the residual variance.
This is in more precise language the idea that I wanted to present.
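The core of Pekka’s decomposition — decentred total variance equals the true variance plus N·(offset)², with the new PC1 pulled toward the offset direction — can be checked directly. A Python sketch on synthetic data (the offset size is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 10
A = rng.standard_normal((n, p))
A -= A.mean(axis=0)                  # exactly centered data

delta = np.zeros(p)
delta[0] = 5.0                       # a large "decentering" offset
B = A + delta                        # decentered data

# Total sum-of-squares decomposition: ||B||^2 = ||A||^2 + n*||delta||^2
assert np.isclose(np.sum(B ** 2), np.sum(A ** 2) + n * delta @ delta)

# PC1 of the decentered data (no re-centering, as in short-centered
# PCA) aligns with the decentering direction and "explains" far more
# variance than the true leading PC of the centered data.
sA = np.linalg.svd(A, compute_uv=False)
U, sB, VtB = np.linalg.svd(B, full_matrices=False)
alignment = abs(VtB[0] @ (delta / np.linalg.norm(delta)))
print(alignment, sB[0] / sA[0])
```

With a large enough offset the new PC1 is essentially the offset direction, and its eigenvalue mixes real variability with the variance manufactured by the decentering — exactly the inflation Pekka describes.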
Picking one point that has been repeated in comments many times.
It is said that if centred PCA is used, the correct number of PCs to use is 4 or 5, and therefore Mann’s moving this to PC1 does not matter.
One point of M&M was to show which PCs were dominant, and they found that, not surprisingly, it was Mann’s PC1. They then looked at the content of Mann’s PC1, and found that it consisted of Bristlecones that are not correlated to local temperatures, and Gaspé, which Mann had artificially extended to fit one of his regression steps despite the series at that time consisting of only one tree. The NAS panel later said that strip-bark bristlecones should not be used in temperature reconstructions – though climate scientists subsequently ignored that advice and carried on using them.
Whether it is correct to use Mann’s PC1 / M&M’s PC4 is therefore irrelevant, when the data within it is either not related to temperature or fails basic quality checks.
re: Moberg, Loehle, Ljungqvist … Carrick’s choices
As pointed out in Strange Scholarship in the Wegman Report (2010), p.142:
“Different reconstructions cover different geographies, and in particular, those focused on (land-dominated) NH extratropics are expected to vary more than the entire NH, which in turn varies more than global.”
One must take great care in spaghetti graphs to know what area each reconstruction really covers: Relative NH areas are:
0.13 60°N (Alaska, N. Canada, Scandinavia, Polar Urals, etc)
Likely to show the sharpest swings, ice-albedo feedback, etc.
0.50 30°N (to pole, sometimes also called extra-tropic) ~Ljungqvist
0.60 23.5°N (Tropic of Cancer to pole, ~Moberg(2005))
1.00 0°N, NH (~MBH98, MBH99, others) smoother curves expected
2.0 = NH+SH, where the smoothest curves would be expected.
So, one has to read the papers to see (and the fractions are rough estimates)
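The area fractions above follow from the standard spherical-zone formula: the fraction of a hemisphere’s surface poleward of latitude φ is 1 − sin φ. A quick Python check reproduces the rough estimates in the list:

```python
import numpy as np

def nh_fraction_north_of(lat_deg):
    """Fraction of Northern Hemisphere surface area poleward of a
    given latitude, from the spherical zone formula: f = 1 - sin(lat)."""
    return 1.0 - np.sin(np.radians(lat_deg))

for lat in (60, 30, 23.5, 0):
    print(lat, round(nh_fraction_north_of(lat), 2))
# 60 -> 0.13, 30 -> 0.5, 23.5 -> 0.6, 0 -> 1.0
```

So a 60°N-and-poleward reconstruction samples only about 13% of the NH (6–7% of the globe), which is why its swings should not be compared directly with full-NH or global curves.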
~0.60 Moberg Figure 1: 2 proxies are slightly South of Tropic of Cancer, see Table 1.
Latitudes degN are: 81, 73, 46, 38, 33, 18, 68, 66, 40, 80, 18
~0.50 Ljungqvist (2010) (I assume that’s the one)
“The new reconstruction presented in this paper consists of 30 temperature sensitive proxy records from the extra-tropical Northern Hemisphere (90–30°N), all of which reach back to at least AD 1000 and 16 all the way back to AD 1”
Loehle (2007) – I assume that’s the one – published in Energy and Environment.
Gavin Schmidt dissected it in this post, concluding:
“What does this imply for Loehle’s reconstruction? Unfortunately, the number of unsuitable series, errors in dating and transcription, combined with a mis-interpretation of what was being averaged, and a lack of validation, do not leave very much to discuss. Of the 18 original records, only 5 are potentially useful for comparing late 20th Century temperatures to medieval times, and they don’t have enough coverage to say anything significant about global trends. It’s not clear to me what impact fixing the various problems would be or what that would imply for the error bars, but as it stands, this reconstruction unfortunately does not add anything to the discussion.”
DeSmogBlog profile notes Loehle’s close relationship with the Heartland Institute, not a credibility plus.
So: we have 2 reconstructions that may be OK, but that do not really claim to cover the NH, rather 50–60% of it, and one reconstruction that is quite flawed.
People who unwittingly compare graphical apples and oranges go bananas.
Then to the other issue: how power is visible in the results of the analysis.
In the following I’ll discuss how information from the calibration period influences the resulting temperature estimates for the target period.
Let’s consider first the case where calibration of the time series is perfect. That would be the case if there were an external method that could tell accurately how sensitive the time series are to temperature. In this case all the variability in the estimates of target-period temperatures is due to noise in the proxies at that particular target time. Such variability would have as its average the true target-period temperature, and would vary up and down around that temperature by an amount determined by the strength of the noise. The variability in the results would be the sum of the variability of the real temperature over the target period and the noise. The observed variability thus effectively sets an upper limit for the variability in real temperatures, if the calibration is known to be perfect.
In the above I assume that all noise is uncorrelated with the temperature. If some factor leads to correlated variability, the whole method fails at the level of the influence of this correlated factor. This is true for any time series analysis.
Now to the other source of uncertainty, which comes from the calibration. The calibration is problematic because the overlap of the proxies and the instrumental temperature measurements is short relative to the temporal resolution of the proxies, when all autocorrelations present in the processes that influence the proxies are taken into account. There are often also delays, and adjustable delays add to the uncertainty of the calibration.
Errors in the calibration due to noise (and more systematic errors, where they apply) lead to PCs that are different from those of the ideal case discussed above. Now the coefficients of each time series that contributes to the estimates for the target period are not correct. Some of the best proxies may get a small coefficient; it has even happened that the sign was wrong. One of the most common problems is that time series with very little temperature dependence at all get a significantly non-zero coefficient. In most cases the overall effect is to reduce the temperature signal of the target period. This fact has been recognized by all users of the approach as far as I know, most definitely by Mann in his later papers.
Validation of the proxies used helps by reducing the number of proxies that have no signal, and thus the contribution they make to washing out the signal, but validation has its own problems, similar to the data-selection problems in all statistical analysis. (How can it be done without introducing bias of a new kind?)
In the above I have explained why the first type of error does not reduce the variability of temperatures over the target period, while the second type does. When we see in the results significantly less variability than is presently considered correct, we can, with hindsight, conclude that the method had little power for the second reason, and thus in a way that could make even a much stronger temperature signal of the past appear much weaker than it really was.
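The attenuation Pekka describes is essentially regression dilution: noise in the proxy over a short calibration window shrinks the fitted coefficient, and hence the amplitude of the whole reconstruction. A minimal Python sketch with toy numbers (not any actual proxy or calibration):

```python
import numpy as np

rng = np.random.default_rng(5)
n_total, n_cal = 1000, 100
T = 0.05 * np.cumsum(rng.standard_normal(n_total))   # "true" temperature
P = 1.0 * T + 0.8 * rng.standard_normal(n_total)     # noisy proxy

# Calibrate by regressing T on P over the short calibration window only
Tc, Pc = T[-n_cal:], P[-n_cal:]
b = np.cov(Tc, Pc)[0, 1] / np.var(Pc, ddof=1)
T_hat = b * (P - Pc.mean()) + Tc.mean()

# Proxy noise dilutes b well below the true sensitivity of 1.0,
# damping the amplitude of the reconstructed history.
print(b, np.std(T), np.std(T_hat))
```

The reconstruction tracks the shape of the true series but with a systematically reduced amplitude — a flat-looking handle, even when the underlying variability is not flat.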
I disagree. There are two separate issues here. How does the centering influence the analysis? Everyone appears to broadly agree that it changes the number of PCs that should be retained. Therefore simply comparing PC1 (short-centered) with PC1 (standard-centered) is not really sufficient to show that the MBH98 analysis was wrong, since it isn’t a full construction.
A completely separate issue is whether or not the proxies themselves were suitable. It’s well beyond my knowledge to know the answer to that issue. However, there is some basic physics/science one can consider. The instrumental temperature record tells us that we warmed since about 1880. Therefore any reasonable reconstruction should have a blade of sorts, starting in the mid 1800s. Maybe MBH98 achieved this purely by chance. However, they also show (in Figure 4) a comparison between regional temperature anomalies from the instrumental temperature record and the reconstructions which appears remarkably good. I realise that this isn’t a watertight argument, but how did they get a reasonable comparison with the temperature record over the instrumental period and a reasonable comparison with regional variations, if the proxies were unsuitable?
ATTP, your response to Andy L’s argument is consistent with the assessment Mann makes in his book on the controversy: MM05 misused centred principal components analysis to omit PC4 and hence leave out the temperature related pattern, instead showing only noise.
This achieved by statistical means what MM had tried explicitly two years earlier – in MM03 they simply removed from the network two thirds of the proxy data used for the 15th–16th century. If a proxy showed the “hockey stick” pattern, MM03 argued there was some problem with the proxy and threw it out, despite their having no expertise in climate proxies.
I noticed this in your post:
That would be very welcome, if time permits. Imagine: a discussion about something other than contrarian nonsense and hopefully with only minimal contrarian input… 😉
Pekka – “The calibration is problematic, because the overlap of the proxies and the instrumental temperature measurements is short relative to the temporal resolution of the proxies when all autocorrelations present in the processes that influence the proxies are taken into account.”
What numbers are you using here to make this assertion?
That’s rather an explanation for what’s observed in this case; nothing in my argument builds on that, as the observational basis is the temperature reconstructions themselves.
I haven’t looked at that detail recently, but I have done so earlier. The statement is not very different from what can be found in many presentations of the methods. In a recent discussion, possibly in this thread, possibly elsewhere, a similar statement presented by Mann was quoted. Density and width of annual growth of trees has a significant annual component of correlation with temperature that’s not affected by these problems, but even that varies from case to case, as annual correlations are often much weaker than decadal correlations (in some cases they are close, in others not). When decadal correlations dominate, the shortness of the period is a serious issue. All that applies to local temperatures; when temperatures are not local, the short term correlations get weaker and the issue more severe.
Some other proxies cannot be used at annual level at all.
Since some work I did came up here, I thought a few comments were in order regarding a graph of mine that is being discussed here. What I am providing is informational. I am not interested in food fights, but if there are genuine issues that people want to raise I will respond to those.
First, the curve being discussed is very similar to that found by Zeke:
who includes a comparison with Loehle.
SKS has a similar figure:
but does not include a comparison with Loehle.
There is nothing remarkable about the temperature series plotted in my figure. For the ensemble, I resampled all of the temperature series to a 10-year period (cubic-spline) and computed their arithmetic mean and standard deviation.
I did one thing that will be somewhat controversial to people who aren’t versed in climate proxy lore, which is that I rescaled the temperature proxies to match each other (hence “pseudo-temperature”), and adjusted them to a common baseline (this should be non-controversial, though food throwers can likely think of specious arguments to fling). I used Loehle as the reference because, due to its simple arithmetic average over precalibrated proxies, it is less likely to suffer from scaling bias effects. However, any of the reconstructions would work as well.
I will provide some justification for this below, but briefly, if what you want to really know is global mean temperature, there are multiple reasons to not expect the temperature scales to be the same between reconstructions (different spatial sampling and scaling bias associated with the algorithms used).
If you don’t rescale, there are two things that happen. One is that reconstructions with larger absolute scales (relative to the “true” global mean temperature scale) will be weighted more heavily than reconstructions with smaller absolute scales. The second is that the standard deviation of the ensemble will be larger. If you don’t shift to a common baseline, the main effect is an increase in the standard deviation of the ensemble.
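The recipe described above (cubic-spline resample to a common decadal grid, rescale and re-baseline against a reference, then take the ensemble mean and standard deviation) could be sketched roughly as follows. The data are synthetic stand-ins, and the least-squares scale-and-offset fit is my assumption about how the matching would be done, not necessarily Carrick’s actual procedure:

```python
import numpy as np
from scipy.interpolate import CubicSpline

grid = np.arange(0, 2001, 10)            # common 10-year grid, AD 0-2000
rng = np.random.default_rng(0)

# Synthetic stand-ins for two reconstructions with different native
# resolutions, temperature scales, and baselines (illustrative only).
years_a = np.arange(0, 2001, 10)         # reference series, already decadal
years_b = np.arange(0, 2001)             # second series, annual resolution
signal = lambda t: 0.3 * np.sin(t / 300.0)
recon_a = signal(years_a) + 0.03 * rng.standard_normal(years_a.size)
recon_b = 1.8 * signal(years_b) + 0.4 + 0.03 * rng.standard_normal(years_b.size)

# Step 1: cubic-spline resample onto the common decadal grid.
recon_b_decadal = CubicSpline(years_b, recon_b)(grid)

# Step 2: least-squares scale + offset against the reference
# ("pseudo-temperature"); the offset also sets a common baseline.
A = np.vstack([recon_b_decadal, np.ones(grid.size)]).T
scale, offset = np.linalg.lstsq(A, recon_a, rcond=None)[0]
recon_b_matched = scale * recon_b_decadal + offset

# Step 3: ensemble mean and standard deviation across reconstructions.
ensemble = np.vstack([recon_a, recon_b_matched])
ens_mean, ens_sd = ensemble.mean(axis=0), ensemble.std(axis=0)
```

Skipping step 2 leaves the larger-amplitude series dominating the mean and inflates the ensemble standard deviation, which is exactly the effect described above.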
In terms of “cherry picking”, my selection criterion was for 2000-year global or north hemispheric reconstructions. I believe that I’ve included all peer-reviewed reconstructions that met this criterion.
Mann 2008 stops at 200 AD and was not included in earlier versions of the graph for that reason. I got interested in how it did, and was pleased to see that it tracked well with the other long-duration reconstructions.
The only other reconstructions of similar length are Christiansen and Ljungqvist (2012) and Hegerl et al. (2007). C&L seemed “too close” to Ljungqvist 2010, so I stuck with the latter reconstruction, and Hegerl starts at 558 AD (a bit too late for what I was looking at).
The more recent PAGES 2k isn’t global or north hemispheric, so not usable for a meta-analysis.
MBH 98 was added in response to comments from Boris on Lucia’s blog. I was making what should be an obvious point to any critical-thinking person, and is known anyway from the literature, which is that MBH 98 suffers from a complete loss of low-frequency information. (This makes the use of MBH98 to make comparisons about the relative warmth of the modern era to the MWP totally useless.)
Regarding Mann 2008, the EIV is Mann’s preferred reconstruction. From Mann’s paper:
I did look at Mann’s CPS as well, and as Mann observes, for the early period this does not agree well with his EIV method. As I see it, there is very little value, in a meta-analysis of this sort, in including non-preferred reconstructions.
Moberg is an obvious inclusion. It even gets the RC stamp of approval:
Ljungqvist, by any standard I’ve seen, is one of the more carefully done reconstructions, so its inclusion is a no-brainer. In the metrics I looked at, it has met or exceeded all of the other reconstructions. I’ll show one result below.
Regarding Loehle: since there is some confusion about this (partly due to Craig Loehle’s own words and his badly flawed first cut at this paper), note that because it cuts off in 1935, Loehle and McCulloch does not demonstrate that temperatures in the MWP were warmer than current temperatures.
If you compare Loehle & McCulloch to Moberg, analytically there is virtually no difference between what Loehle did and what Moberg did for the low-frequency portion of Moberg’s reconstruction, other than a difference in the weighting of proxies.
Secondly, the argument over coverage for low-frequency applies equally to Moberg. Moberg has 11 low-frequency proxies (9 of these are used by Loehle). Loehle uses a further 9, all of which are considered to be temperature proxies and have calibration values published by their authors. While I agree that we can and should quibble over which proxies should be used (but I think Gavin is not the right person to be relying on for proxy selection, nor am I), fundamentally there is nothing wrong with the approach used in this paper. In fact, its relative simplicity and good agreement with more complex algorithms, rather than contradicting other work, act as a form of verification that the more complex algorithms are not losing low-frequency information.
Loehle is limited to a low-frequency reconstruction (the rolloff is around a 50-year period) because of proxy selection, for which we expect and observe (with a few notable exceptions) a high degree of correlation between measurements at different locations on the Earth for long-duration measurements. Since I was only looking at the low-frequency portion of the reconstructions, issues raised about the spatial sampling of Loehle (which are similar in any case to the issues that exist for the other reconstructions) are largely irrelevant.
When considering the effect of spatial sampling on the low-frequency portion of the reconstruction, we need to be aware of the effects of polar and land amplification on the estimated global (or hemispheric) reconstruction. As most of you know, land warms (and cools) more rapidly than ocean due to its lower thermal mass. For reasons I can partly explicate, polar (land) regions are more sensitive to changes in forcing than more tropical ones.
The effect that sparse sampling has, for any reconstruction’s low-frequency portion of the signal, is that there will be a scaling bias for the “temperature scale” of a given proxy reconstruction compared to the temperature scale associated with global temperature.
In addition, methods like Composite-Plus-Scale are prone to an overall scaling bias and offset bias for the reconstruction period. (Offset bias occurs when the temperature scale during the reconstruction period is offset relative to the temperature scale of the calibration period.)
There are issues for high-frequency reconstructions because different areas start responding out of phase with respect to each other. However, this is a fairly high-frequency phenomenon, and most reconstructions get around this issue by low-pass filtering, retaining only periods longer than 10 years.
Nonetheless, we can get some idea by looking at spectra of the various reconstructions (I used a 200-year window with Welch tapering for the reconstructions), which I am showing below:
While the reconstructions have been scaled to match the low-frequency portion of Loehle & McCulloch, the two temperature series are shown with no scale adjustment. Given uncertainties in the relationship of the pseudo-temperature scale to the global temperature scale, the level of agreement was a bit surprising to me.
Also — note that Moberg does not agree well with the other reconstructions at high frequencies. Loehle predictably rolls off steeply at periods below 50 years.
Ljungqvist and Mann 2008 EIV appear to agree well with each other (given uncertainty) and with the global temperature series, both in slope and in magnitude.
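A sketch of the kind of spectral estimate described above (200-year windows with Welch, i.e. parabolic, tapering applied to decadally sampled series). SciPy has no built-in Welch window, so it is constructed by hand here, and the input series is a synthetic stand-in for a reconstruction:

```python
import numpy as np
from scipy import signal

fs = 0.1  # decadal sampling: 1 sample per 10 years, in cycles/year

def welch_window(n):
    # Parabolic (Welch) taper: w[k] = 1 - ((k - (n-1)/2) / ((n-1)/2))**2
    k = np.arange(n)
    half = (n - 1) / 2.0
    return 1.0 - ((k - half) / half) ** 2

def recon_spectrum(series, window_years=200):
    """Welch-averaged power spectrum using a window_years-long Welch taper."""
    nperseg = int(window_years * fs)          # 200 yr -> 20 decadal samples
    return signal.welch(series, fs=fs, window=welch_window(nperseg))

# Illustrative red-noise series standing in for a 2000-year reconstruction.
rng = np.random.default_rng(1)
x = 0.01 * np.cumsum(rng.standard_normal(200))
freqs, psd = recon_spectrum(x)
# Convert frequencies (cycles/year) to periods (years) for plotting.
periods = np.divide(1.0, freqs, out=np.full_like(freqs, np.inf), where=freqs > 0)
```

With decadal sampling the Nyquist limit is a 20-year period, which is why comparisons like this are restricted to the low-frequency portion of the spectrum.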
I had two simple questions about your figure. In Figure 5.7 of AR5 WGI, the reconstructions are relative to the 1881-1980 mean. MBH98 is, I think, relative to 1902-1980. Are you sure you’ve scaled them all correctly (i.e., is MBH98 a fraction higher than it would be if you rescaled it relative to 1881-1980, or have you already done that)? Also, what about the 2σ for MBH98? Figure 5b of MBH98 seems to show quite a wide 2σ confidence interval.
A modest proposal:
> certainly the Kyoto arguments were primarily based on this new chart
I thought Carrick was a high-school kid, but since he is not interested in food fights, I guess he can’t be, eh (?)
So take a look at the proxy reconstruction of something that matters, perhaps consider records which match that of ENSO variability?
Seriously, these are highly calibrated, as they can be routinely matched for each of the peaks and valleys of modern instrumental records. Start looking at something a little more advanced and maybe we can start caring.
 McGregor, S., A. Timmermann, and O. Timm. “A Unified Proxy for ENSO and PDO Variability since 1650.” Clim. Past 6, no. 1 (January 5, 2010): 1–17. doi:10.5194/cp-6-1-2010.
Willard (10/2/14 at 12:27pm) —
Thanks for the Tom Tomorrow link. Takes me back to a (seemingly) simpler time. Along the same lines, some recent musings by Scott Alexander (http://slatestarcodex.com/2014/09/30/i-can-tolerate-anything-except-the-outgroup/). Brevity is not his strong point; tl;dr excerpt:
Cuts both ways, of course.
Another post in today’s feed that spoke to me about the dynamics under discussion here without touching upon Climate (much less Paleoclimate) is by Razib Khan. If It Doesn’t Make Sense in Light of All the Facts, It Doesn’t Make Sense.
Possibly useful perspectives, for some.
I think it might be time for this again (mods – feel free to remove for repetition, but it seems to remain relevant):
KO asks Pekka:
Pekka, like Carrick, has not been paying attention to the proxy records that can be calibrated against the erratic time-series of modern-era ENSO measurements. The alignment is amazing if one cares enough to look at it.
 McGregor, S., A. Timmermann, and O. Timm. “A Unified Proxy for ENSO and PDO Variability since 1650.” Clim. Past 6, no. 1 (January 5, 2010): 1–17. doi:10.5194/cp-6-1-2010.
Judith Curry’s more diligent colleague at Georgia Tech, Kim Cobb @coralsncaves, pioneered much of this analysis a few years ago:
 Cobb, Kim M, Christopher D Charles, Hai Cheng, and R Lawrence Edwards. “El Nino/Southern Oscillation and Tropical Pacific Climate during the Last Millennium.” Nature 424, no. 6946 (2003): 271–76.
The two additional 2000-year reconstructions shown are Ljungqvist 2010, and Christiansen and Ljungqvist 2012.
I note that characterizing any disagreement with you as food flinging sight unseen means you have already started the food fight.
To be fair to Carrick, we’ve had a couple of food fights in the past, so I took his comment to imply that he was trying to not start (or be involved in) another one. I would be pleased if that did turn out to be the case.
Re Carrick’s composite:
Carrick had some strange idea that his graph would make it plain to see that MBH 98 is wrong and MM’s methods will make a better reconstruction. If anything, the graph shows the opposite. MM’s sweetie has a warm LIA. Warmer than MBH. Much warmer than Carrick’s composite. And Carrick’s composite has a much larger/more dramatic increase in temperature at the end. Or a bigger HSI according to MM’s formula.
ATTP: Responding to your 8:04 post
You have reached the heart of the matter with MBH98 but then stopped.
Everyone seems to accept that non-centred (i.e. non-standard) PCA moves what would be PC4 to PC1. This makes it the dominant driver of the hockey stick.
This would not matter if PC4/PC1 were a good temperature proxy. However, it is not, hence the NAS recommended not using bristlecones. Saying that use of PC4/PC1 is justified because M&M should have used the “correct” number of PCs is not relevant.
You ask why MBH corresponds to the temperature record. This is simply due to spurious correlation. The proxies were selected and weighted based on whether they matched current temperatures or not.
As to whether MBH is validated by later studies, IMO this has to be demonstrated not asserted. Someone needs to plot it against current studies and show whether there is agreement or not, and whether the error bars overlap or not. Carrick has made a start. Perhaps a defender of MBH could come up with an alternative version.
I think you’re (intentionally?) missing the point that I’m making. There are two separate issues: the number of PCs to retain, which depends on the centering used, and whether or not the proxies are appropriate.
This appears to be rather an assertion in its own right. I have no great knowledge of proxies but referring to a NAS report is not particularly convincing.
Again, I think you’re putting words in people’s mouths. I’m certainly not claiming that later studies “validate” MBH98. My view is that later studies supersede MBH98. If I want to understand our millennial temperature history, I’ll look at recent work, not a paper published 16 years ago. The only point I’ve been making is that – broadly speaking – what this more recent work indicates with respect to our temperature history is about what we concluded based on MBH98. That doesn’t mean that MBH98 is validated, simply that our broad conclusions haven’t changed.
I have no interest in validating, or not, MBH98. That just seems so 20th century 🙂
Tom Curtis – Agreed, both methods are equivalent _if_ you use all significant components, as I’ve stated repeatedly.
Digging into the various reconstructions, in particular the Wahl and Ammann 2007 work, I realize that I’ve overstated something – “mea culpa”. I apologize for the length of the following.
MBH and MM05 differ not only in centering convention, but also in MM using covariance-matrix (‘unstandardized’ is the usual term) PCA, while MBH used correlation-matrix (‘standardized’) PCA. I had overlooked that in the general discussion of components, despite its mention in discussions (facepalm); I realized it when looking at the W&A code that implemented all of these.
Standardized PCA will in general have fewer significant PCs, with more variance in the leading PCs, than unstandardized, regardless of centering; hence fewer PCs are needed to reproduce the data set, i.e. to converge. This is the major reason for 2 significant components in MBH and 5 significant components in MM.
W&A 2007 tested both of these approaches, plus centered standardized PCA. Full-centered unstandardized (MM) gives five significant PCs, with the HS signal in PC4. Short-centered standardized (MBH) gives two PCs, HS in PC1. And full-centered standardized (W&A) gives two PCs, with the HS signal appearing in PC2.
The centering used is still relevant to where the HS signal appears – short centering moves it (PCA methodology unchanged) from PC2 to PC1. Again, this holds for this dataset and centering – if the short centering were on a different period than the recent fast changes, I don’t believe the HS signal would migrate PC2->PC1. If MM considered the _centering_ the primary issue, they should have compared the hockey-stick index (HSI) sort of their PC1 against a full-centered standardized PC2 – apples to apples. They did not do so.
However, attempting to compare principal components extracted from the covariance matrix with those from the correlation matrix is absurd – the methodology difference is guaranteed to give you a different dimensional breakdown of variability. This is what MM05 did, and it’s wrong. And comparing MBH PC1 to their MM PC1, which doesn’t encompass the HS signal, is absurd – yet they did so in MM05 Fig. 2, and McIntyre repeated this error last week in his CA post on t-statistics.
Again: All of these PCA methods are valid ways of distilling the major variations in complex datasets _if_ you use all the significant components. Which MM05 did not do.
* Short-centering emphasizes the recent calibration period and common strong signal, moving (using the same PCA methodology) the HS signal from PC2 to PC1, with two PCs significant in both cases. (I may have to run short-centered unstandardized PCA and see where the strongest HS signal goes)
* Standardized and unstandardized PCA components will not be directly comparable – unstandardized splits variability into more PCs, covering a different orthogonal basis set, the dimensions of which are _not_ those found by standardized PCA.
* MM05 compared apples with oranges (standardized vs. unstandardized PCA), invalidly tried to compare PCs from different basis sets, and in comparing their PC1 with MBH PC1 for the HS signal were looking in the wrong box, looking where the signal is not.
* MM05 never ran significance/selection on their PCs, and retained only two of the five significant PCs they found – the two they retained only account for ~28% of the data variability, whereas all five (and MBH’s two) account for ~48%. This is a huge and invalidating error in applying PCA.
* MM05 also never ran validation tests on their reconstructions – where they differ due to trimming proxies, the MM05 reconstructions fail validation; they are not significant.
And yet McIntyre continues to insist on his interpretation, despite basic conceptual errors in applying PCA, despite comparing the wrong components for HSI, and despite never validating the reconstructions he insists are more correct. Sigh.
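KR’s central point – that covariance-matrix (‘unstandardized’) and correlation-matrix (‘standardized’) PCA split the same data into different components, so the same signal can land in a different-numbered PC under each convention – can be illustrated with a toy example. The data below are entirely synthetic (not the real proxy network), and which PC the signal lands in is a property of this particular construction, not a claim about MBH:

```python
import numpy as np

rng = np.random.default_rng(2)
n_t, n_p = 500, 20                      # 500 'years', 20 'proxies'
X = rng.standard_normal((n_t, n_p))

hs = np.zeros(n_t)
hs[-100:] = np.linspace(0.0, 3.0, 100)  # hockey-stick shaped common signal
X[:, :5] += hs[:, None]                 # five proxies carry the signal
X[:, 5] *= 10.0                         # one high-variance proxy, no signal

def pcs(data, standardize):
    """PC time series from centered (optionally standardized) PCA via SVD."""
    Z = data - data.mean(axis=0)
    if standardize:                     # correlation-matrix PCA
        Z = Z / Z.std(axis=0)
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    return U * s                        # columns ordered by explained variance

def hs_pc(data, standardize):
    """1-based index of the PC most correlated with the hockey-stick shape."""
    P = pcs(data, standardize)
    r = [abs(np.corrcoef(P[:, i], hs)[0, 1]) for i in range(5)]
    return 1 + int(np.argmax(r))

cov_pc = hs_pc(X, standardize=False)    # covariance-matrix ('unstandardized')
cor_pc = hs_pc(X, standardize=True)     # correlation-matrix ('standardized')
```

With these synthetic data the high-variance proxy grabs PC1 under the covariance convention and the shared signal lands in PC2, while after standardizing the shared signal leads in PC1. So comparing “PC1 vs PC1” across conventions compares different things; which PC carries the signal in the real proxy network is a separate empirical question.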
Anders, I had no problem with Carrick saying he didn’t want a food fight. But then he wrote:
In doing so he decided to defend the rather odd idea that calibration of reconstructions to modern temperatures should be achieved by calibrating with other reconstructions over the period of the reconstruction rather than by calibrating against the instrumental record over the period of overlap by a pre-emptive ad hominem. If he doesn’t want food fights, perhaps he should not fling the first plateful.
And, while I am about it, we may want to have a closer look at this claim:
The idea that “issues raised about the spatial sampling” are largely irrelevant is not borne out when we look at the difference in 20th century temperatures between the three instrumental records shown on the IPCC graph, nor by the difference between the SH and NH temperature records over the twentieth century.
I take your point about using more modern studies rather than validating MBH98 – though others here seem to be keen to show that more modern studies somehow prove MBH98 was correct.
I also understand your point that correct use of PCA is a different argument to whether the proxies are valid or not. However if the effect of PCA (used correctly or not) is to give high weighting to proxies that are 1) not correlated to local temperature, 2) claimed by the original author to be proxies for something other than temperature (Graybill said they were CO2 proxies), 3) influenced by physical damage, and 4) considered so poor that the NAS panel explicitly recommended not using them, then arguing about the ‘correct’ number of PCs is merely dancing around a pinhead. You cannot rescue this merely by saying that Preisendorfer Rule N says it is OK.
Nice to hear from you, AMac.
Our current episode of ClimateBall ™, featuring the fiercest player in ClimateBall ™ history, resurrected both you and Ron Broberg (see Nick’s). Either it’s important, or mentioning Tiljander awoke you.
Freud may have been right on narcissism about small differences:
Thank you for the link.
Following through my reading of the hearings, I note the first comment by MR STUPAK:
Yes, but Gaspé starts in 1404, I presume.
A point I would like to raise here – If a paper uses what are at that time correct methodology, the best data then available, and results in conclusions that are consistent within the stated error bars with later work, then that paper is not wrong.
That holds even if the paper is superseded by improved methodology (RegEM and others), by better data (more and better proxies!), etc – in which case the paper was just limited by what was available. And even if the conclusions are superseded with better data and methods (which isn’t the case for MBH) and hence the conclusions are incorrect, the paper when written wasn’t in error.
And demonizing the authors of such papers is just wrong.
KR: ” If a paper uses what are at that time correct methodology… then that paper is not wrong. ”
But if that paper uses a unique variation of a method that has not been used before or since, does not disclose use of that variation, where the variation is criticised by one of the foremost experts in the field (albeit years later) and gives high weighting to proxies which the original researchers say were not related to temperature, then the paper is wrong – even when written.
The details of the proxies are beyond my knowledge (at this stage at least); however, here’s a paper arguing that they are suitable proxies for temperature. To be clear, I’m not showing this to prove you wrong, simply illustrating that one can find viable arguments as to why they might be suitable. You may also argue that one of the authors is conflicted, but I think that is beside the point.
I’ll go back to my earlier point: MBH98 produced a reconstruction with a 20th century temperature that was a good match to the instrumental temperature record, and showed a figure (Figure 4) that illustrates how it appears to be a good match to regional variations. Their method (short centering) appears to be suitable as long as they retain the right number of PCs, and there is evidence that their proxies were suitable. And then I’ll also stress that what KR says is also valid (why are we still talking about this, and why do so many people demonize the authors?), and if I really want to understand our millennial temperature history I’ll look at more recent reconstructions.
Apologies if you experience this as trolling. I’m just trying to put a picture together about who you are.
BTW, I do know about the various publication patterns of physicists, but then again I do not believe you are one.
And I indeed cannot recall you claiming to be a senior academic, although you frequently pontificate as if you are.
Unless you have some rather odd definition of physicist, your belief is wrong. That doesn’t mean you can’t believe it though.
Possibly, although I’ve never claimed to be anything but an anonymous/pseudonymous blogger who happens to be a physicist and who happens to work at a UK university. You, on the other hand, behave nothing like a senior academic (although, to be fair, I’m not sure you’ve ever claimed to be one either).
AndyL – Short-centering was unusual, but is in fact not wrong – while it redistributes the PCs somewhat, including _all_ PCs that are above noise level results in reconstructions that are all but identical to full-centering.
“Weighting” is ambiguous in this discussion – MBH weighted the contribution of various proxies based on confidence levels for their proxy validation, and those weights could be disagreed with. Not because they were bad methodology, but simply because others might have different opinions as to their confidence levels. But as Wahl and Ammann 2007 demonstrated, removing that weighting makes a negligible difference in the reconstruction.
Neither short-centering nor individual proxy weights, regardless of being non-standard, affect the reconstruction or the conclusions.
And whether or not there was some dissent about the validity of certain individual proxies, they had not been rejected at that time – and subsequent work with additional and improved proxies indicates that those proxies in question were, in fact, not a problem.
Your objections do not hold.
That comment should be framed, AT, like the picture Richard tries to make of you.
Such ClimateBall ™ move is a thing of beauty:
I will disbelieve who you say you are
because you can’t prove me wrong
unless you give me what I want.
I will put words in your mouth
because it puts a picture together
about who you are.
I’m sorry to Tol, but then I will.
Richard – Yes, that is in fact trolling. And in essence a combination of an ad hominem attack on ATTPs blogging by an Appeal to Authority fallacy – attempting to denigrate what’s said here by questioning his credentials rather than addressing the discussion.
I wouldn’t worry about it. I’d be much more concerned if Richard didn’t behave in this way. That would be incredibly suspicious 🙂
AndyL: You cut KR’s quote short, and particularly missed the most important part.
“results in conclusions that are consistent within the stated error bars”
This is Physics 101. If there were concerns around accuracy, a good paper will talk a lot about it and certainly mathematically show it. Mann shows 2 sigma. Not only that, but 2 sigma is really wide.
2 sigma can be very tight you know. Mann is showing a wide error margin, which means there is a lot of noise in his data sets.
Finding a different data set that fits the error margins is not an accomplishment. It’s expected.
ATTP (October 2, 2014 at 2:59 pm) —
You link to Salzer et al’s 2009 PNAS, on bristlecone pines. Back in 2010, I wrote a (journal-club-like) comment on that paper; it might be of interest to you.
ATTP: If you stop short of thinking through the implications of using non-centred PCA on the actual proxies in this case then you are missing the whole point of the criticism of MBH. BTW, McIntyre and a colleague resampled the exact same trees on a field trip. He has repeatedly requested that the proxies be “brought up to date”.
KR: “And whether or not there was some dissent about the validity of certain individual proxies, they had not been rejected at that time” Clearly not, because no-one had previously considered using them as temperature proxies.
Mann has shown at least twice that he does not care what the original researchers say about the proxies, so long as they correlate somehow, an approach which will guarantee spurious correlation. Firstly he used bristlecones which the original author considered CO2 proxies, and secondly he used Tiljander despite the original paper stating the proxies were contaminated in Mann’s correlation period.
If the gremlins had not already trashed it, I’d suggest someone should email Richard and warn him that an imposter here is attempting to besmirch his reputation.
Given the title of this blog involves physics, thoughtful people might consider the lesson of a delightful article in Physics Today in July, “The search for Newton’s constant” by Clive Speake and Terry Quinn, which starts:
“Three decades of careful experimentation have painted a surprisingly hazy picture of the constant governing the most familiar force on Earth.”
Figure 1, “Measurements of Newton’s gravitational constant G”, is especially relevant to the discussion here of reconstructions, although that is a much “simpler” case than comparing sequences of temperature reconstructions over different geographies.
Anyone who doesn’t see the relevance and has strong opinions about invalidity of MBH98/99 … might study reasoning about measurements and error bars .. and even better, understand why IPCC AR4 WG I Fig 6.10 (c) is a really useful visualization.
I’ve got to turn off, but I’m sure ATTP can explain the Physics Today relevance.
AndyL: “BTW McIntyre and a colleague resampled the exact same trees on a field trip. He has repeatedly requested that the proxies be “brought up to date”.”
It won’t cause MBH98 to be rewritten. It won’t stop Steve McIntyre from complaining bitterly to the end of the earth about MBH98. Actually, I think he’ll complain bitterly no matter what. He complained for like 6 months that he couldn’t order people around in a peer review. (This put him in the category of ‘crank’.)
Contrary to popular belief (by you and yours), scientists don’t sit around waiting for another data point to add to the graph. They move on and actually study the material and try different methods and otherwise try to move the science forward.
What real scientists don’t do is sit around and dwell on old news. They refine and improve. Mann has done this. McIntyre has not.
Interesting, but that appears to be about the divergence problem which is somewhat different to whether or not Bristlecone Pines were a suitable proxy for MBH98 to use. Having said that, I’m very keen not to be dragged into a discussion about a topic that I know little about. I’m even not all that happy about having been dragged into a discussion about MBH98, as interesting as it has turned out to be.
“AndyL: “BTW McIntyre and a colleague resampled the exact same trees on a field trip. He has repeatedly requested that the proxies be “brought up to date”.”
I kinda care. I’ve long wondered if they got the proper permits to core these old bristlecones …
AndyL – Remove the bristlecone pines, and the reconstruction is a hockey stick. Remove the Gaspé proxy, and the reconstruction is a hockey stick. Remove the Tiljander series, and the reconstruction is a hockey stick with a greatly reduced MWP. Those don’t change the conclusions.
While there was indeed some dissent on the quality of various proxies, _they had not been invalidated_. In fact, if you check current references such as Salzer et al 2009, proxies such as the bristlecones have indeed been demonstrated as valid, with earlier questions disproven.
Again, your objections are well-debunked nonsense.
dhogaza: Pfft! What are those?
Dobbs: “If you’re the police, then where are your badges?”
Gold Hat: “Badges? We ain’t got no badges. We don’t need no badges. I don’t have to show you any stinkin’ badges!”
Why? Curiosity. I’m a social scientist. I want to know how people behave and why.
AMac – The ‘Divergence Problem’ for the last 50 years simply doesn’t invalidate the proxy for earlier periods – as the proxy tracks temperatures up to that point. And that earlier proxy data has been since well confirmed by completely independent proxies with yearly resolution such as speleothems.
More complaints about issues that don’t affect the MBH paper or its conclusions…
Down the rabbit hole AndyL tries to lead us, ignoring all the key strikes against the so-called auditor, picking at nits and trying to obfuscate. This seems like a rather lame effort to defend McIntyre’s wrongdoings, which has been the point here and at Nick Stokes’ blog. So it would seem that AndyL is perfectly OK with McIntyre’s BS. But perhaps he is reasonable and will concede that point, you know, the actual issue here.
The unfortunate reality for folks like the auditor is that there are literally dozens of hockey sticks from multiple groups. This is something that McIntyre and his followers continue to deny or refuse to acknowledge, instead continuing to obsessively pine away about a paper written 16 years ago. Now that is some serious denial, immaturity and OCD.
“BTW McIntyre and a collegue resampled the exact same trees on a field trip.”
AndyL almost makes it sound like McIntyre is interested in the truth! 😉 Surely he is smart enough to know what the auditor’s real agenda is.
BTW, if I recall correctly, that was the field trip that was essentially illegal, was it not? That is, he did not follow the correct procedure and obtain the required permits etc….you know the thing he routinely gets his knickers in a twist about.
When the original authors of an empirical study do not base their objections on specific knowledge about the data quality, their views on the implications of the data are only the views of certain scientists; others are free to disagree.
No field of study will develop without someone taking the first step. That was the role of MBH98 for much of its content. That the first attempts are not perfect and that some weaknesses are later found in them does not change that.
Really? I thought you were an econometrician. Of course, you’ve previously claimed to be a climate scientist, and now a social scientist. If I was interested in understanding how people behave and why, I might find that interesting. Given that I’m not really, I don’t.
> I’m very keen not to be dragged into a discussion about a topic that I know little about.
Understandable. My own view is that “post-hoc analysis” remains the biggest challenge for paleoclimate reconstructions. These are tempting methods to use, but other fields have learned to avoid them (or account for them, e.g. with the Bonferroni correction). This will be a hard problem to tackle, and wider confidence intervals are an inevitable and unwanted outcome. Jim Bouldin is a dendrochronologist who has been writing about these issues.
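Since the Bonferroni correction was mentioned, here is a minimal sketch of the post-hoc screening problem it addresses: screening many pure-noise “proxies” against a target at p < 0.05 produces spurious passes, while testing each at alpha/m suppresses them (all data synthetic):

```python
import numpy as np
from scipy.stats import pearsonr

def bonferroni_keep(p_values, alpha=0.05):
    """Bonferroni: test each of m hypotheses at alpha/m instead of alpha."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# Screen 100 pure-noise 'proxies' against a target series: with no real
# relationship anywhere, naive p < 0.05 screening still yields false hits.
rng = np.random.default_rng(3)
target = rng.standard_normal(50)
p_vals = [pearsonr(rng.standard_normal(50), target)[1] for _ in range(100)]

naive_hits = sum(p < 0.05 for p in p_vals)       # expect roughly 5 spurious passes
corrected_hits = sum(bonferroni_keep(p_vals))    # corrected threshold: 0.05/100
```

The cost of the correction is reduced power, which is one way of seeing why accounting for post-hoc selection tends to widen confidence intervals, as noted above.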
Thanks for clicking through and reading my precis of Salzer et al. 😉
“I want to know how people behave and why.”
Efficiently, of course, with perfect foreknowledge of market conditions. Doesn’t everyone know that?
Steve, I want to know if this is possible, but I don’t know who to ask.
“and secondly he used Tiljander despite the original paper stating the proxies were contaminated in Mann’s correlation period.”
Good grief, one almost starts to wonder whether you even read the paper. After all, it says
“Potential data quality problems. In addition to checking whether or
not potential problems specific to tree-ring data have any
significant impact on our reconstructions in earlier centuries (see
Fig. S7), we also examined whether or not potential problems
noted for several records (see Dataset S1 for details) might
compromise the reconstructions. These records include the four
Tijander et al. (12) series used (see Fig. S9) for which the original
authors note that human effects over the past few centuries
unrelated to climate might impact records (the original paper
states ‘‘Natural variability in the sediment record was disrupted
by increased human impact in the catchment area at A.D. 1720.’’
and later, ‘‘In the case of Lake Korttajarvi it is a demanding task
to calibrate the physical varve data we have collected against
meteorological data, because human impacts have distorted the
natural signal to varying extents’’). These issues are particularly
significant because there are few proxy records, particularly in
the temperature-screened dataset (see Fig. S9), available back
through the 9th century. The Tijander et al. series constitute 4
of the 15 available Northern Hemisphere records before that
It is frikkin’ acknowledged. And not only that, the analysis is repeated with the problematic proxies removed.
Okay! I did a little bit of research, posting a question at climateaudit, then searching there for “preisendorfer”. A few conclusions:
1) KR said that “in that rebuttal there is _nothing_ about evaluating PC significance”; I suggested looking for more rebuttals. I think I can presume that he doesn’t know about them, and believes that McIntyre has failed to address this. However, there are _something like two dozen rebuttals_ on climateaudit on this very subject, going back to 2004. According to those posts:
2) Mann and co later claimed to use Preisendorfer’s Rule N to evaluate PC significance. However, they did not mention doing so in MBH1998 wrt retaining tree ring networks, referring instead to spatial extent and size of networks.
3) This was no accident, since analysis shows that the principal components of the tree networks were actually not chosen that way. This analysis was only possible after Corrigendum SI of July 2004, where the pattern of retained PCs was shown for the first time.
4) No one knows how they were chosen; to this day, there is no information available on how the PCs were actually retained. The only thing sure is that it wasn’t using Rule N.
5) Thus, M&M didn’t use Rule N either, there was no reason to at the time. They just followed MBH’s procedure of using the top two PCs.
6) Preisendorfer’s Rule N is not “the standard way” in statistics to evaluate PC significance. There are more than half a dozen such rules; it is one of them.
7) But even Preisendorfer himself emphasizes that the Rule is not an automatic recipe for inclusion; you still have to have scientific reason to think that the principal component is supplying useful information. In this case, there is good reason to think it is not:
a) PC4 basically just selects for the bristlecone network, in the southwest US. It doesn’t seem likely to be a good proxy for temperature of North America.
b) Indeed, if you take out the bristlecones, the hockey stick goes away.
c) Experts in the field, both before MBH98 and after, have advised that the bristlecones in particular have serious problems that affect their ability to act as temperature proxies. The NAS panel suggested that strip-bark samples not be used.
d) The hockey stick signal moves from PC1 down to PC4, and from about 40% to 8% of the explained variance – as everyone agreed all along. But then the next procedure is to favor PCs that match modern temperatures. That leads to the bristlecones being the PC that yields the hockey stick – even though the bristlecones don’t actually match their own local temperatures, just the overall North American temperature. They certainly don’t represent any of the other data.
The upshot of all this:
1) To my layman’s eyes, M&M have responded very effectively to this claim.
2) People here have posted a dozen times or more about M&M’s “mistake”. They have claimed that McIntyre stubbornly “never admitted his mistake”.
3) This all gives me a definite impression that they are completely unaware of any of these responses. They post a realclimate link, and that’s all they know about; they think that’s all there is.
4) However, the realclimate link is entirely misleading. It tries to give a false picture by claiming a procedure was used which actually could not have been used. It thereby avoids the impression that the procedure was something proposed after the fact to make sure the bristlecones don’t get dropped. Rather, it tries to give the impression that this is the “standard procedure”.
5) It also does not give any information about the extensive rebuttals by the other side. Unlike climateaudit, which exhaustively links all parts of both sides of the argument, realclimate just presented their picture of both sides, where of course their side wins.
6) My tentative conclusion is that the posters here are the victims of a misinformation campaign by realclimate.
Note that in some of these points, the math needs to be checked; I can only report what they say. Others are questions of fact (“this is what Preisendorfer says in his book”, “MBH reported a different set of criteria in MBH98”), and are verified by links available in the posts. Still others are to me simple logic: e.g., you can’t make a valid temperature reconstruction just based on one set of trees.
I’ve avoided cluttering this with links, but search “Preisendorfer” to get a whole list of links for all this. Finding what I wanted took me less than an hour total.
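For readers unfamiliar with what Rule N actually does, here is a minimal Python sketch of a Preisendorfer-style selection rule on synthetic data (not the MBH proxy network): the eigenvalues of the data’s correlation matrix are compared, rank by rank, against eigenvalues from Monte Carlo simulations of uncorrelated noise of the same dimensions, and only leading PCs that beat the noise are retained.

```python
import numpy as np

def rule_n_retained(data, n_sims=500, quantile=0.95, seed=0):
    """Preisendorfer-style Rule N sketch: retain the leading PCs whose
    eigenvalues exceed the `quantile` of same-rank eigenvalues from
    random noise of the same shape (n samples x p series)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the correlation matrix of the real data, descending.
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    # Same-rank eigenvalue distribution under the uncorrelated-noise null.
    noise = np.empty((n_sims, p))
    for i in range(n_sims):
        sim = rng.standard_normal((n, p))
        noise[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    cutoff = np.quantile(noise, quantile, axis=0)
    # Count how many leading eigenvalues exceed their rank's noise cutoff.
    return int(np.argmin(obs > cutoff))

# Hypothetical example: 100 "years" of 10 series sharing one common signal.
rng = np.random.default_rng(1)
signal = rng.standard_normal((100, 1))
data = signal @ np.ones((1, 10)) + rng.standard_normal((100, 10))
print(rule_n_retained(data))  # only the one common signal beats the noise
```

This also illustrates point 7) above: the rule is a screen against noise, not a guarantee that a retained PC carries a climatically meaningful signal.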
I have to disagree with you. I for one find it very interesting why anyone would be apparently unable to pursue criticisms of each other’s work through normal channels of academic debate.
Richard, can you apply your huge expertise to help on this?
Side note on the MBH/MM topic: the significance evaluation used (Preisendorfer’s Rule N) is but one possible rule, and there are others in the literature. There can therefore be discussion as to whether that was the most appropriate rule to use. I’ve seen considerable argument over that at CA.
Use of different rules is a basis for disagreement. Failure to use any rules whatsoever (as in MM05) is simply an invalidating error.
“Remove the bristlecone pines, and the reconstruction is a hockey stick. Remove the Gaspe proxy, and the reconstruction is a hockey stick. Remove the Tiljander , and the reconstruction is a hockey stick with a greatly reduced MWP. Those don’t change the conclusions”.
Remove bristlecones AND Tiljander from Mann09 as Gavin Schmidt did and the result is not a hockey stick.
Having wide 2-sigma error bars does not prove something is high quality. I would really like to see someone plot MBH09, Mann09 and Pages2K, and other combinations with all error bars visible.
Richard Tol, I am a natural scientist, and thus interested to learn in what black hole I can find the 300 papers you hypothesized rejected the notion that more than 50% of the observed warming is NOT caused by greenhouse gas emissions.
I don’t expect an answer, since we have asked you so many times already, so why do you expect ATTP to answer *your* (and, I should note, scientifically irrelevant) questions?
“behave nothing like a senior academic”
To be entirely fair, people like Happer, Lindzen, Singer and the founding GMI trio come to mind. With a little imaginative cherry-picking, one might even be able to deem them a representative sample.
Maybe it’s because I’m an engineer and not a physicist, but to me you can’t analyse numbers without being aware of what those numbers represent. That said, I agree that more recent analysis is more useful and relevant.
You seem mystified “why are we still talking about this and why do so many people demonize the authors”. Well we are talking now mainly because some people started accusing McIntyre over his M&M05 paper, but also because MBH is the subject of a current court case instigated by Mann.
As to why Mann is demonised, this is due to his behaviour, not his science. See Judith Curry’s latest blog post for an introduction. http://judithcurry.com/2014/10/01/steyn-versus-mann-norms-of-behavior/
AndyL: “Having wide 2-sigma error bars does not prove something is high quality.”
No one claimed this, so man (not Mann) up and get over it.
I want to thank you, Carrick, Miker613, and of course our esteemed colleague, Steve McIntyre, for your continued support of MBH98. I’m glad to see amateurs of your caliber have been able to find no discernible issues with his work. It’s good to know the work really is solid.
> I’m a social scientist.
I thought you were an econometrician, Richard:
It seems though that how you or your Gremlins behaved made Andrew Gelman believe you were not one.
AndyL: We know what the proxies represent.
I’m an engineer. I design sensors and communication systems. All I do is look at calibration (statistical comparison and correction), measurement, and error analysis. I have a patent for my work in developing an accurate sensor.
Looking over error, understanding error, measuring error is all I do. 2 sigma.. wide bars… lots of error. It’s a fact. Time to move on and get over it. Time to find something else to complain about.
How about the start of Season 2: Agents of Shield? Man I thought that was cool!
> This seems like a rather lame effort to defend McIntyre’s wrong doings, which has been the point here and at Nick Stoke’s blog.
I have not seen such defense at Nick’s. AndyL simply quoted the fiercest player in ClimateBall ™ history over there. Way more courageous to fight hockey sticks and stones over here.
And now miker613 did some research after playing dumb in response to Nick:
Interestingly, the Auditor’s voice of God appears, with a reference to Ralph Cicerone, followed by a smiley.
Still no appearance from miker613 at Nick’s, in contrast to what his first question here seemed to presume.
AndyL – The Tiljander sediments were discussed as a potential issue in MBH, the bristlecones were an acceptable (although discussed) proxy at the time and have been further validated since then. Again, you are shot-gunning nonsense.
As to overlapped reconstructions with error bars, IPCC AR4 is here, Fig. 6.10c, and for AR5 here, figure 5.8a including PAGES2K. Note that the values in the AR4 graph are anomalies from the 1961 to 1990 mean, while in AR5 they appear to be anomalies from the 1500-1850 mean, with current temperatures much higher – an offset of perhaps 0.5C between the two and post-2000 not included. Both are shown as probability distribution functions including all error bars.
Warning – they are both graphs of hockey sticks, showing a MCA almost certainly cooler than today.
well you said “Mann is showing a wide error margin” as if that were a good thing, but maybe you merely meant there is a lot of noise
Anyway, having lots of studies that go up at the right hand end is hardly evidence of anything, as no-one doubts that temperatures went up in the 20th Century. It is agreement prior to the instrument record that is the challenge, and MBH merely seem to have averaged that out to a flat line (the handle). As I suggested, and as Carrick started, lets see all these studies with their error bars overlaid. Demonstrate agreement, don’t just assert it.
Oops, double negative. And thus a correction (without invoking gremlins):
“Richard Tol, I am a natural scientist, and thus interested to learn in what black hole I can find the 300 papers you hypothesized rejected the notion that more than 50% of the observed warming is caused by greenhouse gas emissions.”
AndyL: I’m not doing squat. I’m not the one standing here with airy fairy arm waving.
One would think that if you guys were claiming something valid, you would have freaking looked at the error bars. That’s what an engineer would do. (Did.. done…)
But you guys didn’t… After 16 years you didn’t do that?
I’m laughing at you guys.
“BTW, if I recall correctly, that was the field trip that was essentially illegal, was it not? That is, he did not follow the correct procedure and obtain the required permits etc”
Yep. They’re libertarian, after all … and Very Important Libertarians.
KR: It would be more useful to show error bars separately for each study (not merely combined), but we have what we have.
Visually, in the LIA in AR5 the reconstructions are about 0.5 to 0.7 degrees apart, but the Mann 09 error bars in that period are about +/- 0.2 degrees. This does not demonstrate agreement between Mann09 and other studies.
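A standard way to frame the kind of agreement being argued about here is a two-sigma consistency test: two estimates x1 ± s1 and x2 ± s2 (1-sigma errors) are consistent if |x1 − x2| ≤ 2·sqrt(s1² + s2²). A minimal Python sketch, using rough figures in the spirit of the comment above (and assuming the quoted ±0.2 °C bars are 2-sigma, i.e. s ≈ 0.1 °C; these are illustrative numbers, not values from the actual reconstructions):

```python
import math

def consistent_2sigma(x1, s1, x2, s2):
    """Two estimates with 1-sigma errors s1, s2 are consistent at the
    2-sigma level if their difference is within 2*sqrt(s1^2 + s2^2)."""
    return abs(x1 - x2) <= 2.0 * math.sqrt(s1**2 + s2**2)

# Illustrative LIA anomalies: two reconstructions 0.6 C apart, each with
# ~0.1 C 1-sigma error (i.e. +/-0.2 C at 2 sigma).
print(consistent_2sigma(0.0, 0.1, 0.6, 0.1))   # False: they disagree
print(consistent_2sigma(0.0, 0.1, 0.25, 0.1))  # True: within 2 sigma
```

Whether the real reconstructions pass or fail such a test in the pre-instrumental period depends, of course, on the actual anomalies and error bars read off the published figures, not on these placeholder numbers.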
“Still no appearance from miker613 at Nick’s, in contrast to what seemed to presume his first question here.” Yeah – Nick posts at climateaudit, so I can talk to him there. He did such an awful job on the current issue that I don’t see the point any more.
“playing dumb” – no, I guess I am dumb. I absolutely think that Nick’s whole point is wiped out, and his response convinces me that he was just pretending to have a point in the first place.
anoilman: “I’m not the one standing here with airy fairy arm waving. ”
and I’m not the one claiming agreement between the modern reconstructions. All I’m asking is that whoever makes that claim demonstrate it.
“As to why Mann is demonised, this is due to his behaviour, not his science. See Judith Curry’s latest blog post for an introduction. http://judithcurry.com/2014/10/01/steyn-versus-mann-norms-of-behavior/”
That piece is so filled with misrepresentations and lies that she should be ashamed, and you even more for linking to it.
Take her first claim that Mann thwarted M&M’s attempts to reproduce MBH98/99: this is false. The data used were mentioned and freely available. “Ah”, you say, “but the code!”. Well, whether that is a part of communalism can be discussed. Some would say “no”, others “yes”. I’m in the first camp.
Her second claim is a plain lie AND a misrepresentation of communalism. Mann told Jones he’d forward an e-mail. That isn’t even close to helping Jones to figure out(!) how to evade FOIA laws. And in what universe are e-mails a part of communalism? If Curry believes they are, I would expect her to publish all of her e-mails. She doesn’t. Hence, she herself does not follow what she herself considers “communalism”.
Her third claim is another misrepresentation of Merton’s norms. Universalism doesn’t say you cannot dismiss someone’s criticism because of perceived ideological bias – which could be considered a lack of disinterestedness.
Her fourth claim is similar to all those people falsely claiming an ad hominem, ignoring that the ‘names’ used are a *conclusion* based on someone’s actions/words/behavior. But even if he were just calling her names, it *still* would not be a violation of universalism!
Her fifth claim is the final misrepresentation of Merton’s norms. There’s nothing in Merton’s descriptions that claims scientists violate disinterestedness if they propose policy actions based on scientific research. The arrow points in the other direction: your work being driven by your desired policy actions. In other words, the way Mann perceived M&M’s actions.
AndyL: I’m still laughing at you.
You realize that the other data sets have error bars as well, and therefore overlap a healthy 50%? You got that fact right?
To really knock MBH98 out of the park, you need something way outside, with much tighter tolerances. You’d also need to demonstrate that this was possible THEN, and not just now.
Anyways, good luck, and try hard!
Willard: “I have not seen such defense at Nick’s. AndyL simply quoted the fiercest player in ClimateBall ™ history over there. Way more courageous to fight hockey sticks and stones over here.”
I find it interesting that “Tiljander” was pulled into this discussion. These proxies were not in MBH 98/99, so what the heck does it have to do with this discussion?
As Rattus Norvegicus writes Tiljander data is more recent. Result were published in 2002-3, the main paper in 2003 (and the thesis of Tiljander in 2005, but it was based on the earlier papers with little additional data).
miker613 lacks a BS detector. Nor do I believe he’s actually ever read MBH98.
“We isolate the dominant patterns of the instrumental surface temperature data through principal component analysis (footnote #25).” Page 781, footnote is to Preisendorfer, see below.
“In a given calibration exercise, we retain a specified subset of the annually averaged eigenvectors … In practice, only a small subset N(eofs) of the highest-rank eigenvectors turn out to be useful in these exercises from the standpoint of verifiable reconstructive skill. An objective criterion was used to determine the particular set of eigenvectors which should be used in the calibration as follows. Preisendorfer’s (footnote #25) selection rule ‘rule N’ was applied to the multiproxy network to determine the approximate number N(eofs) of significant independent climate patterns that are resolved by the network, taking into account the spatial correlation within the multiproxy data set.” Page 786, footnote is again to Preisendorfer, see below
“25. Preisendorfer, R. W. Principal Component Analysis in Meteorology and Oceanography (Elsevier, Amsterdam, 1988).” Page 787
Now, what was miker saying about Mann never mentioning Preisendorfer and Rule N until 2004?
anoilman: “I’m still laughing at you.. You realize that the other data sets have error bars as well, and therefore overlap a healthy 50%? You got that fact right?”
Of course I get that you [self snip]. It’s also blindingly clear that not all the reconstructions can agree with each other, if they have similar error bars.
It’s up to the people who claim that modern reconstructions are in agreement, or that MBH has been vindicated by modern reconstructions, to demonstrate that their claim is correct.
“Now, what was miker saying about Mann never mentioning Preisendorfer and Rule N until 2004?” See the posts I mentioned at climateaudit. They discuss when he claimed to use Rule N (the case you mentioned), and when he said he was doing something else. For what I was talking about, it was something else.
One place I saw: http://climateaudit.org/2008/03/14/mbh-pc-retention-rules/#comment-140365
miker613 what was it you were saying yesterday about quoting talking points just picking a side you agreed with?
Since *all* the millennial reconstructions show modern warming to be exceptional I am at a loss as to what your point might possibly be.
If you wish to claim – against *all the evidence* – that there was a global and synchronous “MWP” as warm or warmer than the present then you are arguing for a high climate sensitivity to radiative perturbation. This has serious implications for the future response to modern CO2 forcing.
However, my guess is, that like every single other contrarian I have ever conversed with, you are a firm believer in low climate sensitivity.
This endless fussing over MBH strikes me as profoundly intellectually confused.
miker, Kevin pointed to a footnote in the original paper which references Prisendorfer, give it up man.
Or we can just move on and consider the modern reconstructions only.
Alternatively we could consider what we concluded based on MBH98 and what we conclude today. Won’t be exactly the same, but I suspect most would agree that our understanding today has improved and is more detailed, but not inconsistent with our understanding based on MBH98. The whole point of these reconstructions is to build understanding of our past climate. They have no real relevance otherwise.
Same goes for you, miker. I noticed that you simply dodged this point yesterday. A tell for intellectual dishonesty if there ever was.
I’ve just checked the paper and it does indeed mention and cite Preisendorfer.
There are countless examples of Mann being provocative, obstructive and less than scrupulously honest. You may think this is acceptable behaviour for a scientist, but it is why people demonise him.
If we all were to concede that MBH98/99 is full of holes and grossly flawed and that MM05 made some very valid points about the (even then) superseded paper, where would that leave us? Would it prove that Mann was/is a hack? That AGW is a hoax? That AGW is much ado about nothing? Or would it prove only that MBH98 was a very early paper in a very new field that made mistakes but still managed to kick the ball in the right direction?
I’m not sure that there is any real acceptable behaviour for a scientist/academic. IMO, how someone behaves has no real bearing on their scientific ability. It would be very nice if all scientists were always pleasant and polite, but they’re not and it doesn’t appear to really matter. Some of the people that I suspect you may respect are amongst some of the rudest I’ve ever encountered (one of whom has been commenting on this post).
Here’s an alternative picture. A PhD student publishes a paper that generates incredible impact but is also attacked heavily, as is he. He survives. I would not have. That he is a little blunter and ruder than I might choose to be is not at all surprising, in my opinion. In some sense it’s surprising that he isn’t even more extreme.
Given the multiple convenient falsehoods, graph-mining and reliance on absurd sources in this 2005 post, why would anyone with the slightest sense ascribe credibility to McIntyre?
“There are countless examples of Mann being provocative, obstructive and less than scrupulously honest. You may think this is acceptable behaviour for a scientist, but it is why people demonise him.”
So what you say is that it is OK to be provocative and less than scrupulously honest in demonizing Mann, because he is provocative, obstructive and less than scrupulously honest himself?
AndyL, please look up the Tu quoque fallacy. Which you have just committed.
“miker613 what was it you were saying yesterday about quoting talking points just picking a side you agreed with?” I addressed that. Some things I can check myself, others _as I said_ I am just reporting what they said.
Would you prefer to continue only hearing realclimate’s side of things? Sounds like it. But given that, even with the things I could verify myself, I already see that this isn’t “a mistake that McIntyre tried to hide and never addressed” – which is what several of you kept saying – but something that he dealt with numerous times and at length, I think that anyone who wants to actually understand the issue had better go beyond realclimate’s presentation. Or just keep going there for your tidy links.
[Mod : How did you get a comment through? Oh, and that is very definitely rhetorical.]
Marco, any reasonable auditor could hardly be expected to know the difference between 1995 and 1990, and surely it’s unfair to suggest that use of references backed by a dog astrology journal would indicate that the auditor was being provocative, obstructive and less than scrupulously honest? We feel sure that the auditor’s alternative universe provides a full explanation.
So now AndyL joins Miker in blanking this:
Since *all* the millennial reconstructions show modern warming to be exceptional I am at a loss as to what your point might possibly be.
If you wish to claim – against *all the evidence* – that there was a global and synchronous “MWP” as warm or warmer than the present then you are arguing for a high climate sensitivity to radiative perturbation. This has serious implications for the future response to modern CO2 forcing.
However, my guess is, that like every single other contrarian I have ever conversed with, you are a firm believer in low climate sensitivity.
This endless fussing over MBH strikes me as profoundly intellectually confused.
“If you wish to claim – against *all the evidence* – that there was a global and synchronous “MWP” as warm or warmer than the present then you are arguing for a high climate sensitivity to radiative perturbation.” This is like the fifth time this profoundly confused point has been posted on this thread. I answered it several times already, which you call “blanking” it. Once more:
We are discussing an issue. The issue is not climate sensitivity. Only Climateballers, partisan team players, would suggest that “Because I want climate sensitivity to be low, therefore I will twist and pervert the facts on a totally different issue which may bear on climate sensitivity.”
Those of us who actually care about truth will strive to understand this issue correctly, and let the results fall where they may.
[Mod : Refers to a deleted comment.]
[Mod : Refers to a deleted comment.]
AndyL:”It’s up to the people who claim that modern reconstructions are in agreement, or that MBH has been vindicated by modern reconstructions, to demonstrate that their claim is correct.”
They did. Case closed.
I’m really grateful for your continued support of MBH98 being accurate.
I have at this point lost all track of what the original disagreement was over and what AndyL, miker613, etc are trying to show us.
If we pretend for a moment that I concede every single point, accept all arguments and all references what do we learn? What are we to take away from this discussion?
I don’t believe you.
You dodged the point. Again.
pbjamm: Everyone here is in violent agreement that Mann’s work has withstood the test of time despite the efforts of a few misguided hacks? 🙂
The usual claim – once all the obfuscatory and incorrect bullshit is removed – is that MBH suppressed the “MWP” thus making modern warming appear more extreme in a millennial context.
pbjamm, I think Robert Way addressed this (responding to someone who said that M&M only touched on minor points). “I don’t think these are minor points. I think they get major points correct. MBH98 was not an example of someone using a technique with flaws and then as he learned better techniques he moved on… He fought like a dog to discredit and argue with those on the other side that his method was not flawed. And in the end he never admitted that the entire method was a mistake. Saying “I was wrong but when done right it gives close to the same answer” is no excuse. He never even said that but I’m just making a point.”
It indeed proves that Mann was a hack, which after all may be relevant to his court case(s).
He also gives climate science a bad name. Personally, I would think that if you care about saving the planet from global warming, you should have thrown him under the bus long ago. The entire discussion here IMHO gives climate science a bad name. I come from an undergraduate physics background, but I don’t remember that we had disinformation websites, and people who only read them and triumphantly post links from them to defeat their foes, and all this kind of nonsense. Politicians do that, not scientists. If you care about the planet, don’t make people think that climate scientists are politicians. That’ll be the last time anyone listens to them.
BBD: “The usual claim – once all the obfuscatory and incorrect bullshit is removed – is that MBH suppressed the “MWP” thus making modern warming appear more extreme in a millennial context.”
And if we really care to use such old science, these guys are also arguing for increased sensitivity from IPCC estimates, so we are in serious jeopardy.
pbjamm: One key takeaway is that Miker613 doesn’t know math and isn’t qualified to talk about it. AndyL the supposed engineer doesn’t know what error margins are or how to interpret them. Neither had read MBH98, yet both talked animatedly about it.
“Having been investigated by almost one dozen bodies due to accusations of fraud, and none of those investigations having found Plaintiff’s [Mann’s] work to be fraudulent, it must be concluded that the accusations are provably false. Reference to Plaintiff, as a fraud is a misstatement of fact.”
— DC Superior Court ruling in Mann’s defamation suit against National Review and CEI, July 2013
Miker’s disinformation has run its course.
“You dodged the point. Again.” I don’t understand why you think your attempt to distract me with a “Squirrel!!” is the point. Off topic.
You are desperately trying to avoid facing the problem, Miker. It’s funny to watch you wriggle.
Are you saying that you *don’t* believe that MBH suppressed the “MWP”?
Don’t do that again, Miker.
“Neither had read MBH98”. You’re a little behind in this discussion. That point was made and refuted eleven minutes later. In other words, not only was I familiar with the quote, but with the mistake O’Neill made by quoting it. I think it would be better to avoid returning to points that have been settled.
pbjamm: Libertarians don’t need no stinkin badges?
Please answer the question:
Are you saying that you *don’t* believe that MBH suppressed the “MWP”?
miker613: if you had read MBH98… then you’d know that McIntyre’s corrections are within the error margins of MBH98.
To recap, McIntyre is endorsing MBH98.
He’s proved that with all his efforts, MBH98 is right.
I get a feeling that this discussion is getting to the “not particularly constructive phase”. Maybe you could clarify your position with respect to Preisendorfer’s Rule N. You seemed to suggest that Mann didn’t mention it when you said
And yet, MBH98 both mentions it and cites the original paper. I’m finding it hard to take what you say seriously if you’re still suggesting otherwise.
dave s (re McIntyre’s 2005 blog post): yes, and actually, when McIntyre quoted Deming’s essay on Crichton, he was quoting it from Fred Singer’s website, where it was published 3 months before JSE, where Deming’s essay shared an issue with crop circle debunking, Myanmar reincarnation, UFO discussion, and parapsychology.
But, who could doubt Deming’s 2005 claim about what happened in 1995, for which zero evidence was ever offered?
One might also check David Deming in Wikipedia or at DeSMogBlog.
One could note that he’s long been affiliated with NCPA, was chosen by Senator Inhofe to testify on climate … but an even quicker look is from his book Black & White: Politically Incorrect Essays on Politics, Culture, Science, Religion, Energy, and Environment. One can obtain a copy for ~$10, but a quick perusal of the Table of Contents there offers some insight, and IMHO the titles of the essays reasonably reflect their contents. Here are the first 21 of 51
‘1 The Petroleum Age is Just Beginning
2 The Oil Price Bubble
3 Oil Fuels Human Progress
4 Natural Gas: Fuel of the Future
5 Lots of Oil left
6 Fossil Fuels Benefit Humanity
7 The Pipeline Controversy is Manufactured
8 Cutting the Knot of Global Warming
9 Inhofe Correct on Global Warming
10 US Senate Testimony on Global Warming
11 Inconvenient Truths
12 Year of Global Cooling
13 The Coming Ice Age
14 Global Warming is Over
15 Science is Never Settled
16 Global Warming is a Fraud
17 Death of a Civilization
18 Global Warming and the Age of the Earth: a Lesson on the Nature of Scientific Knowledge
19 The Problem with Al Gore
20 Global Warming Hoax Collapses
21 Why I Deny Global Warming’
I still dont understand what you are trying to prove miker613.
Mann is a jerk and a hack. Fine. Why does it matter when we now have 16 more years of studies done by people who are not Mike Mann to show that his horrible paper was essentially accurate?
pbjamm: Understanding is not required in the denial circles.
I don’t know in this case, but many people have effectively sworn eternal allegiance to Lamb (1965), of which a variant was shown by McIntyre in that 2005 post. As others have noted, were that sketch of Central England meaningful as a global reconstruction, CO2 sensitivity would have to be a lot higher.
It’s gone quiet.
Perhaps Miker is struggling to work out how he can admit the self-evident (yes, he thinks MBH suppressed the “MWP” and Lamb is God) without agreeing that this is an argument for high sensitivity.
ATTP, I really am mostly in the position of a messenger here. I provided a link to climateaudit, and a way to get lots more links on the same topic. He really addresses this over there: what Mann claimed, when he claimed it, what he actually didn’t do – in a whole lot of detail, with calculations, charts, and R code for replication. I really think that anyone who wants to hear what McIntyre says should read what he says, instead of a non-expert trying to guesstimate his way through the discussion.
Roughly speaking, though, I understand McIntyre to be saying (a) Preisendorfer was mentioned in Mann on a different issue. (b) On the issue we’re discussing Mann implied that he used a different method. (c) Later (2004) he claimed to have used Preisendorfer for this issue as well. (d) But (says McIntyre) he actually didn’t and here are the calculations to prove it.
pbjamm can’t seem to follow a discussion because I respond to points as they are brought up. That’s his or her loss. anoilman has simply resorted to untrue ad homs.
ATTP: Answering your two points before I go:
Mann is “a little blunter and ruder than I might choose to be” You asked why Mann is demonised. Many people think his behaviour is significantly worse than being “blunt”. You don’t have to go far to find examples. His behaviour may be justified in the minds of some, but for other people it is that behaviour which causes him to be demonised. Robert Way, as quoted above by miker613, gets to some of it.
“Or we can just move on and consider the modern reconstructions only”
Indeed. That was one of the options I asked for. A demonstration (not an assertion) that the modern reconstructions are within each other’s error bars *in the reconstruction period*. Merely overlaying all the reconstructions and showing that the lines all lie within the composite shaded error bars, as done in AR4 and AR5, is hardly such a demonstration.
Two more blanks.
Clearly we have got down to the core issue.
miker613 you imply that the people here have not read McIntyre in his own words. I do not think that is true for many. Some read it and dismissed it nearly 10 years ago. Reading the same claims again will change nothing. Repeating something again louder does not make it more true.
AndyL : I was trying to big-picture this discussion. It has gone far afield and taken many twists. My question was even if all your/McIntyre’s points were accepted as correct what are you trying to prove? You might have gotten that from my previous comment had you bothered to read it.
Firstly, I didn’t realise that I/we were talking with McIntyre’s messenger. Secondly, what makes you think I’m interested in what McIntyre says? If I want to know something about MBH98, I’ll go and read the paper. If I want to know about MM05, I’ll read the paper. I might ask people who know more than me some questions. I may read things elsewhere to try and clarify things, but I’m not particularly interested in McIntyre’s views on the subject. I may even read some of what McIntyre writes, but he’s primarily a blog writer and I certainly don’t rely on blogs to get my information about climate science. I say that with complete acknowledgement of irony.
We can each, I guess, have our own views as to whether or not someone’s behaviour is appropriate. As far as I can tell, the entire discourse is pretty shocking and I don’t really think Mann stands out in any particular way. He also has the advantage of largely knowing what he’s talking about. Being obnoxious and wrong is a good deal more irritating than being obnoxious and right.
Forgetting all the fighting, the material available on the MBH98 paper from all directions can certainly be used to deepen understanding of PCA and statistical methods more generally. Spending the effort to figure out where each contributor is right and where wrong may well be more interesting and educating than simply reading textbooks of statistical analysis. Many people have made valid points, but most if not all have also made errors in their arguments. Thus this is not an exercise where any reliable source can present complete and full answers; the student must form his or her own judgment based on the evidence. That can be really educating.
“I still dont understand what you are trying to prove miker613.” Nothing, really. Some people claimed that M&M was wrong. I had a very different impression from following climateaudit for a while, where it’s taken for granted that MBH was disproved. I’m just trying to follow.
As Steve McIntyre says frequently (I saw one a couple of days ago), none of this shows whether or not there was a MWP. That is a science question. This is a history and statistics question.
Maybe some have, but I’m not sure that many have claimed it is “wrong”. What many are suggesting is that how it is interpreted by others is wrong, but I can’t really face going through the whole process again.
‘Are you saying that you *don’t* believe that MBH suppressed the “MWP”?’
Me personally? My impression from following this discussion, which entailed my learning a lot about past climateaudit posts and some realclimate posts, and from before, is that MBH had a definite goal of “getting rid of the MWP”. They published a paper with a lot of flaws because there would have been no hockey stick if they had used standard techniques. Mann refused to admit its flaws later, and kept fighting to change the story in such a way that would keep his hockey stick and no MWP. That’s the Climateball history.
But in the scientific reality, _did_ MBH suppress the MWP? Of course not. Whether or not there was an MWP is a question of science, not of Climateball history, and is better addressed by seeing whether PAGES2K and the like are sufficient.
MikeN – Well, I hope that you have noticed that MM05 is wrong, failing in basic PCA analysis (no selection for significance), producing invalid reconstructions (which they never tested), and comparing mis-matched components (comparing MBH PC1 to MM PC1 for their hockey-stick index is like looking at a driveway, claiming that grass does not exist, while ignoring the adjacent lawn).
[ Side note, just for my amusement: in MM05 Fig. 1 there is a comparison between one of MM’s red-noise/MBH PC runs with something of a hockey-stick shape, and the MBH reconstruction. What _isn’t discussed_, but is obvious from the axis scales, is that the red noise run has a range almost an order of magnitude smaller than the MBH reconstruction (0.08C vs. 0.6C), meaning that even if MM05 were correct about the MBH procedure generating HS’s from red noise (which they aren’t) that would explain less than 0.1C of the reconstruction trend – still leaving the present the warmest in the last 1000 years. And that wouldn’t by any means invalidate the MBH conclusions. ]
Except the instrumental temperature record tells us that a reconstruction without a blade is wrong.
MikeN – Another howler in your last post, “..there would have been no hockey stick if they had used standard techniques…” is demonstrably wrong.
Wahl and Ammann 2007, using MBH, MM, and combinations of standard techniques, showed that when the analysis is done properly (i.e., including the variance above the noise level as per any PC exclusion rule), you see a hockey stick in the proxy data. Always. And even if you don’t use PCA at all, well, hockey stick.
The only way _not_ to see a hockey-stick in that data is to process it incorrectly, dropping significant information, as MM did. In other words, you have to do it wrong…
Wow, I hadn’t noticed the scale on Figure 1 was completely different in the top panel compared to the bottom. That seems to be true of the 12 figures in the Wegman report too.
Then what is McI arguing if *not* that MBH suppressed the “MWP”?
And Miker, I asked *you* a question. Please answer:
Are you saying that *you* don’t believe that MBH suppressed the “MWP”?
This is you upthread:
And this is you now, desperately trying to evade the question:
Such intellectual dishonesty cannot pass unremarked.
Is this your comment, AndyL?
> Only Climateballers, partisan team players, would suggest that “Because I want climate sensitivity to be low, therefore I will twist and pervert the facts on a totally different issue which may bear on climate sensitivity.”
Good ClimateBall ™ players like miker613 would rather construct a straw man like this one, and when confronted with the observation that a higher MWP would have consequences regarding sensitivity, would cry “squirrel!” instead of acknowledging that fact, not unlike the divide and conquer algorithm. It would still be nice if miker613 acknowledges it for future reference.
Let your tenderness guide you, miker613.
ATTP – Indeed, the largest hockey-sticks that the MM procedure produced from ‘red noise’ appear to be less than significant, i.e. values in the noise level.
ATTP, I was certainly not appointed! 🙂
“I may read things elsewhere to try and clarify things, but I’m not particularly interested in McIntyre’s views on the subject. I may even read some of what McIntyre writes, but he’s primarily a blog writer and I certainly don’t rely on blogs to get my information about climate science.” I think in this case that that may be a mistake. Though McIntyre has published peer-reviewed work, his main impact on climate science has always been through his blog. He publishes real calculations there, real R code, and documents very carefully. As noted yesterday, PAGES2K made three corrections based on his blog, and it isn’t the first time by any means. Reading MM05 will get you a certain distance, but if you’re interested in whether Mann et al actually used Rule N in his paper (perhaps most of us aren’t, but apparently some people keep insisting on it) the only source is the R code on McIntyre’s blog and Tamino’s.
The truth is that if you had waited for M&M05, you would have been at least a year late; the issue had been fought on the blogs long before. You would also have missed the extended discussion at climateaudit where Robert Way defended his paper C&W against all comers. And so on. The world moves faster these days.
We crossed, but it makes no difference. Your tortuous and evasive waffle still doesn’t address the problem. You admit that you believe that MBH tried to suppress the “MWP” but do not admit that a significant global and synchronous warming event to rival modern warming would be evidence that sensitivity is fairly high.
Do you think that the climate system is fairly sensitive to radiative perturbation, Miker?
ATTP, this seems to have got upside down: “Some people claimed that M&M was wrong” can more clearly be stated as: M&M have misused PC1 to argue that MBH is wrong about the hockey stick, when in practice any proper selection of PCs of the relevant proxy data will still show a hockey stick. People have also stated that M&M claimed that hockey sticks were created from trendless red noise, when the “red noise” they produced actually had a hockey stick trend, which was exaggerated by their undisclosed 1/100 selection of the most “hockey-stick”-like graphs.
When forced to concede that M&M essentially discarded the PC with data showing the trend rather than noise, the trend is blamed on the bristlecone pines and Gaspe with the assumption that these are poor proxies selected by that naughty Mann… but they’re forgetting the H in MBH, Malcolm K. Hughes who by 1998 was already a published expert in dendrochronology.
Of course our contrarian friends think such scientific expertise is no match for a blogger with a background in mining exploration!!
I’m pretty sure that you are wrong. It was not necessary to tweak the methods to get rid of the variability in the temperatures of past centuries. The difficulty was in getting enough skill for the analysis to present any conclusions. Methods without any skill were expected to produce a hockey stick with a straight shaft. Only when the method has enough skill can it lead to anything else.
It is a perfectly reasonable idea to analyze the set of all proxy records that might tell about past temperatures. At the time of MBH98 there started to be so much data that such an attempt was possible, and MBH98 did it. They found the zero result of a straight shaft with some noise. Then it was time to decide how to estimate the upper limit for the variability that might still exist even when nothing was observed. It’s only at this stage that real questions arise on their results.
If they had seen a clear signal there would have been another question: is the signal of the correct size, as the method has a tendency of reducing the signal? In this case, with virtually no signal, the question was in the error ranges. They gave rather wide error ranges, but even that may have been more stringent than the data and method really could support.
The use of short-centered PCA added to the difficulty of determining the uncertainty ranges, as that was a non-standard method that had not been analyzed previously by statisticians, and as that approach really introduced some additional problems.
Indeed, that is my view too. Don’t get me wrong, I didn’t say I wouldn’t read his blog. My point was that actual understanding comes from the scientific literature, not from blogs. Maybe there is some polymath out there who is publishing excellent work on their blog, rather than in the scientific literature. From what I’ve seen, though, I’ll take my chances.
Hmmm, so that seems to suggest that even though Figure 2 suggests that short centering brings the hockey stick shape to PC1, these are not particularly significant hockey sticks compared to those that you get from a full reconstruction using real proxies. Amazing.
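Mechanically, “short centering” is easy to sketch. Here’s a minimal toy version on synthetic AR(1) red noise only (no real proxy data; the record and window lengths are illustrative assumptions, not the exact MBH configuration). Each column is centred on its calibration-window mean rather than its full-record mean before the SVD, so series that happen to sit high or low during the calibration window retain an offset that PC1 then picks up.

```python
import numpy as np

# Sketch of "short centering" vs. conventional centering, on synthetic
# AR(1) red noise only -- no real proxy data; the 581-year record and
# 79-year calibration window are illustrative assumptions.
rng = np.random.default_rng(42)

n_years, n_proxies, n_calib = 581, 50, 79

# AR(1) "red noise" pseudo-proxies, one per column.
noise = rng.normal(size=(n_years, n_proxies))
proxies = np.zeros_like(noise)
for t in range(1, n_years):
    proxies[t] = 0.3 * proxies[t - 1] + noise[t]

full_centered = proxies - proxies.mean(axis=0)              # standard centering
short_centered = proxies - proxies[-n_calib:].mean(axis=0)  # calibration-window mean only

def pc1(x):
    """First principal component score series via SVD of a centered matrix."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return u[:, 0] * s[0]

pc1_full, pc1_short = pc1(full_centered), pc1(short_centered)

# Short centering zeroes each column's *calibration-window* mean rather
# than its overall mean, which is the whole difference between the two.
print(np.abs(short_centered[-n_calib:].mean(axis=0)).max())
```

The point of the sketch is only the centering step itself; whether the resulting PC1 looks hockey-stick-like for a given noise model is exactly what the Figure 2 argument is about.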
KR, I answered you already, I refer you there. You claim things to be mistakes, MM claim that you’re making the mistake, and their claims are much more convincing. Until you deal with what they actually said, I don’t see the point.
ATTP, I answered you already. Selecting for hockey sticks just because of the instrumental record is data-snooping. You can’t do the validation and get the results in the same step, or you are pretty much guaranteed to get a hockey stick whatever you start with. According to McIntyre, you get a 1 SD hockey stick 99% of the time. In MBH’s case, it leads to basing an entire NH record on the Southwestern US, even though the trees there did not match their own local temperatures.
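The screening effect being described can be illustrated with toy data, under loudly illustrative assumptions (1000 white-noise pseudo-proxies, keep the best-correlated 5%; none of these numbers come from MBH or MM). Screen trendless noise for calibration-window correlation with a rising “instrumental” target, composite the survivors, and the composite trends up in the calibration window by construction:

```python
import numpy as np

# Sketch of the data-snooping point: screen pure-noise "proxies" for
# correlation with a rising "instrumental" target over the calibration
# window, then composite the survivors. All numbers are illustrative.
rng = np.random.default_rng(1)

n_years, n_calib, n_proxies = 581, 79, 1000
target = np.linspace(0.0, 1.0, n_calib)          # warming over calibration window

proxies = rng.normal(size=(n_proxies, n_years))  # white noise, no climate signal
corr = np.array([np.corrcoef(p[-n_calib:], target)[0, 1] for p in proxies])

keep = corr >= np.quantile(corr, 0.95)           # keep the best-correlated 5%
composite = proxies[keep].mean(axis=0)

# The screened composite trends up through the calibration window even
# though every input series is trendless white noise.
print(round(np.polyfit(np.arange(n_calib), composite[-n_calib:], 1)[0], 4))
```

This shows only the generic selection effect, not the specific “1 SD, 99% of the time” figure attributed to McIntyre, which would depend on his noise model and selection rule.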
Are you still being McIntyre’s messenger?
Fun aside from miker, “MBH had a definite goal of ‘getting rid of the MWP'” …. miker doesn’t seem to realise that MBH98 never went as far back as the MWP, and MBH99 was announced with the statement that it “supports earlier theories that temperatures in medieval times were relatively warm”. However, Hughes rightly added that “even the warmer intervals in the reconstruction pale in comparison with mid-to-late 20th-century temperatures”.
That’s the bit the contrarians don’t like, they don’t just want the MWP to exist, contrarians want it to be warmer than the present. As MBH99 and subsequent reconstructions have confirmed, the MWP just wasn’t that warm on a hemispheric or global scale.
You admit that you believe that MBH tried to suppress the “MWP” but do not admit that a significant global and synchronous warming event to rival modern warming would be evidence that sensitivity is fairly high.
Do you think that the climate system is fairly sensitive to radiative perturbation, Miker?
But with climate sensitivity down so low that there’s no need for a policy response to CO2 emissions.
It’s entertaining watching Miker implode into a black hole of intellectual dishonesty over this.
Miker613 has joined the HWQDAJ club: the He Who Quotes Dog Astrology Journal club, along with McIntyre, McKitrick, Montford, Lindzen, etc., etc. (“get rid of MWP”).
But sadly, “Miker613” is just an Internet handle, unlike the others, so he doesn’t get onto the good list.
Actually, I was thinking about the MM05 figures again and am not convinced the scaling matters. I’m surprised that they didn’t rescale Figure 1 to have the same kind of range, but presumably what matters is the size of the blade relative to the noise in the shaft, which looks similar in both the top and bottom panels.
Somewhat paradoxically, the first time I remember reading a claim close to “They don’t just want the MWP to exist” came through John Mashey’s comment in this thread. His purpose was clearly to show how stupid Steve McI is, but what he actually did was to advertise a claimed leak presented to prove that people worked with such a goal in mind.
I’m sure that claim is familiar to regular followers of ClimateAudit, but I go there only to read threads with much technical content, and almost always related to statistics in some way.
The tendency to quote favourably those they disagree with on (nearly) all other issues when that quotation happens to share (or can be portrayed as sharing) agreement is one of my reasons for particular contempt for climate auditors and acolytes. If you think Robert Way’s opinion is worthy of respect, pay attention to it even when he disagrees with you. Indeed, pay particular attention to it when he disagrees with you. If you do not respect his opinion, don’t quote him when he agrees with you. Your argument, when you do so, is entirely rhetorical and specious.
In this case, Robert Way is wrong about the best remembered part of M&M05, and may well be wrong about the rest as well.
About the best remembered part (red noise purportedly producing Hockey Sticks), the HSI used by McIntyre and McKitrick was never validated. They merely proposed an index that they claimed picked out hockey stick shapes without ever demonstrating that it tended to do so, or that it exclusively does so. As it happens, it neither always picks out hockey stick shapes, nor does it exclusively do so. Indeed, the reconstructions for MBH 98 & 99, and the PC1 used for reconstructing 1400-1450 AD in MBH 98 are either not hockey sticks according to the index, or are border line hockey sticks at best. Meanwhile, the M&M HSI will pick out a straight line with a positive slope as a hockey stick. Application of more intuitive hockey stick indexes to the 100 highest HSI pseudo-proxies generated for M&M05 results in nearly all of them being rejected, while the MBH reconstructions are accepted. Therefore there are variant HSIs that reliably pick out MBH reconstructions as hockey sticks, and reliably reject pseudo proxies generated from red noise using short centered PCA. The failure to validate the HSI as a statistical test means that the first half of M&M05 relies in its argument entirely on a visual comparison with a few cherry picked pseudo-proxies. There are some problems with MBH 98, and reasonable disagreement about how deep they lie, but had the first part of M&M05 been properly understood, it would have earned them an ignoble award.
The second part may have more merit. It redoes the Monte Carlo simulations to check the RE statistic using different characteristics of red noise. The characteristics used are derived from the proxies used, and therefore will include any signal persistence generated by any temperature signal. That seems to me an error. Specifically, it seems like a means of artificially inflating uncertainty. I am far from certain, however, and would be interested in the opinion of an expert in statistics without an obvious axe to grind on the issue. If M&M05 are correct on this issue, however, it merely means that the MBH98 reconstruction is not skilful as far back in time as reported by MBH.
This claim is a staple at JC’s:
So when I read you as saying the first time you have encountered it is on this thread I have misunderstood you, right?
Pekka, if you came across the claim from John Mashey, you would also be aware of research showing the claimed “quote” is almost certainly spurious. Specifically, the claim was made by a known skeptic who has been unable to produce the purported email containing the “quote”, nor name the author of the “quote”. Further, “skeptic” attempts to identify the author of the “quote” have been shown to be inconsistent. The best that can be claimed for the quote is that it is a hostile paraphrase of a (probably) innocuous comment. Possibly, however, it was manufactured from the whole cloth to suit a particular “skeptics” rhetorical agenda.
“My point was that actual understanding comes from the scientific literature, not from blogs.” Could be. But what about his latest post about PAGES2K? He notes there several major concerns that he still has with various proxies. Do others know about those? Are they correct? Are you going to wait till he publishes a journal article on it?
Others here have tended to point to PAGES2K as the definitive word; is that true, or are there enough serious problems that we can’t be sure of the results?
And McIntyre has done this for each of the proxy studies; I don’t know if people who think that “everything was confirmed dozens of times” even know about the issues.
Or, maybe he’s wrong. But where to find out but at his blog?
PB, I am beginning to think this is now mostly an attempt to make Mann look bad by any means necessary. Especially among those who don’t understand the science. I often hear why don’t realists throw Mann under the bus because he is a rotten apple.
“Worse for the ‘side’ trying to depose the hockey stick, the objections they make are arcane and far beyond the comprehension of almost 100% of the audience.” – it being a well-known fact that the correctness of a scientific theorem is directly linked to the ability of people to understand it. Hence the curious phenomenon that Darwin’s theory of evolution is actually not true within the borders of Texas, but does apply everywhere else on the planet.
miker613: I don’t believe a single thing McIntyre says. I think he’s a blowhard.
I’m a big picture kind of guy. I need to see that the overall data is incorrect, and that needs to be demonstrated within the error margins present. If the best you have demonstrates that your ‘new’ data set is within Mann’s error margins, then I can’t be bothered to care. It’s not worth getting upset about, let alone worrying, or even blogging.
Here’s Mann’s original work;
Here’s Carrick summing up McIntyre’s efforts;
In any case PAGES2K has many updates and, given the truck-sized error margins in some of those proxy sets, I’m telling you that more updates are coming. Remember, you heard it from me first, not McIntyre. It will not be McIntyre’s work that causes more updates to come out. It looks like they’ve got a lot more work to do.
If you really want to nitpick, I think you’d have a field day with John Mashey’s spelling and grammar. You could correct that forever. 🙂 (However, that won’t make the content any different.)
John Mashey: Here’s Barry Bickmore, he discusses a lot of stuff, but cuts over to a John Christy paper wherein Christy removed the error bars to back his claims. This is a surprisingly simple tactic when you think about it.
All, Willard, thanks for reminding me of this comment, which gathered key elements of chronology.
What’s McI’s preoccupation with 1998? I remember the year vaguely. Bill Clinton was having trouble with Lewinsky etc. And of course there was Windows 98, since replaced by Windows NT, 2000, XP, Vista, 7, 8 and soon Windows 10. Do you think McI and his followers still run Windows 98 and complain bitterly about its faults, and accuse Microsoft of fraudulent software practices? Hey, that was a long time ago…
KR ATTP, the relatively small child’s hockey stick was noted and discussed very early on. Since then, well, since then:)
Even Chris Monckton knew about the size issue and yes, another bunny (and he was not the first) spots the pony in 2006
On the Deming affair my point was only to question the net effect of reminding people of such cases. Some people might perhaps change their views on McI; some others find a new argument for their beliefs of conspiratorial behavior of climate scientists. Neither influence is likely to be significant, but the net direction is unknown.
Pekka, on the Deming issue, it was miker613 who first raised it on this thread, not John Mashey. Your “question” is irrelevant in that context, unless your contention is that any doubt as to the effect of it being raised means we should allow the “skeptics” to control the narrative on that point.
That comment of miker613’s was 2.5 hours later than the one of John Mashey’s that I had in mind. It was so much later that it might well have been triggered by John’s link.
It’s exactly this combination of these two comments that led to my reaction.
Why are we still talking about this? It already seemed to be ancient history when I first came across it in 2010.
Indeed, that’s why I find much of what goes on in the online climate debate remarkably childish. Rehashing old things over and over again. To be clear, I’m not suggesting I’m not guilty of it myself. I get the impression that some don’t realise that science does not progress through a series of audits.
Because contrarians would rather discuss the tiny squirrel for ever instead of the sequoia it is sitting beneath.
Do you really think there’s no limit to miker’s credulity, Pekka? Notice, as John says, the “multiple convenient falsehoods, graph-mining and reliance on absurd sources” miker would have had to read through to get to the Deming material. McI’s entire case in that post is based on finding the Lamb cartoon in the SAR, where it doesn’t exist. Instead, we have the far flatter Bradley and Jones (1993) together with text stating that there’s no clear evidence for an MWP and weak evidence for an LIA, in which context MBH98 wasn’t much of a leap. IIRC the early IPCC reports weren’t available online at the time, which perhaps explains how McI thought he could get away with it.
Of course you’re guilty, Anders, this post being Exhibit A. At least you have the excuse of having not been through all of this before. But never again, yes?
Just to add what is probably an obvious point: What McI does isn’t auditing. Use of the term is a bit of salesmanship intended to give his activities a patina of credibility they don’t deserve.
I’m presenting personal views on what various ways of commenting on the net lead to. I have in mind only people who believe in science. How the others comment enters only as my guess of their reaction, as that reaction will also affect the outcome.
Inducing the kind of interchange where being backed by truth is of little value is counter-productive in my view, and I tend to see many comments in that light. That applies, in particular, to most speculation of motives and many other “revelations” about the connections of various people.
Thanks ATTP and BBD. I like what you said.
Can’t contrarians fixate on more recent tiny squirrels? (Yes, I know they can and they often do, but it’s strange to keep dragging this one up again). The list of papers with temperature reconstructions published since 2005 which essentially support the conclusions of MBH 98 seemed quite long the last time I looked. Anyway, my comment above was redundant as Raff put it so much better a few comments before.
So, to summarise.
[Playing the ref. -W]
Pekka, granted that McIntyre uses the Deming quote in the page that John Mashey linked to, but he used a lot more specious material as well, not to mention the coward’s insult relating to mining promoters. Just a link does not cut it at introducing the topic to this thread. If it did, your rule of thumb would prevent us from linking to any page in which “skeptics” say something absurd, either in the OP or comments, lest other “skeptics” use that as an excuse to introduce those absurdities into the current discussion.
It is very simple: the person who actually introduces the topic into discussion is responsible for doing so. Nobody else. A pragmatics that does not insist on this rule merely shackles the defenders of science while allowing the pseudoscientists to frame the narrative.
Steve Bloom: “What McI does isn’t auditing. Use of the term is a bit of salesmanship intended to give his activities a patina of credibility they don’t deserve.” Actually I hadn’t noticed this for a long while. Through watching far too much of the ‘debate’ about AGW, the word ‘auditing’ has lost all credibility for me. (That is an achievement, I suppose). I forget that newcomers will probably think that ‘auditing’ actually means something positive.
I suppose the morphing of meanings of words might be quite normal but I’ve just not noticed it much before.
> Why are we still talking about this?
The audit never ends.
Speaking of which, notice the names appearing in that 2010 thread at Jeff’s:
This post links to another thread, where we find this explanation of some kind of publication bias in climate science:
Recalling the Deming Affair only shows how the Auditor’s fierceness may be unprecedented in Climateball ™ history.
I cannot tell a lie, it was me that introduced the MWP to this thread on October 1, 2014 at 6:29 pm, commenting on Carrick’s graph which showed newer graphs diverging downwards from MBH in the Little Ice Age. As John has noted, such divergence is not an issue as “NH extratropics are expected to vary more than the entire NH, which in turn varies more than global.”
My comment was that “The whole MM fuss has been about the maximum MWP”. The linked Steve McIntyre blog shows him in 2005 arguing that the various multiproxy reconstructions around 1998 were merely a race to be “victor in getting rid of the MWP” in order to produce a compelling advertising image to promote the Kyoto Protocol. Which Steve justifies by his experience of mining promoters, and the elusive Deming quote.
McIntyre’s own narrative is that he got interested in the MBH graphs when the IPCC 2001 figure was used by the Canadian government to promote Kyoto. Hence MM03, which claimed to have corrected data flaws in MBH98, and whose “major finding is that the values in the early 15th century exceed any values in the 20th century.” As a press release for MM03 put it, “more evidence that the 20th century wasn’t the warmest on record”. Both MM05 papers extended the argument to disputing the MBH principal components methodology, though Mann by then had moved on to using RegEM.
The quibbles about MBH98/99 continue for political reasons, and have been revived in recent weeks as part of McIntyre’s support for Steyn.
Willard: “The audit never ends.” You keep reminding us, but I keep forgetting. My mind seems unable to grasp the weirdness of it all.
“If your handle is miker and you want proper answers to questions which have been bothering you, you will have to wade through reams of insults, belittlement, and bare assertions in order to find the handful of posts which actually try to address your concerns.”
Miker’s stated in the beginning that he is incapable of understanding the answers to his concerns, because he doesn’t have skill at math or statistics.
And there’s no evidence that “questions bother him”. Instead, he parrots McI and friends, uncritically accepting everything they say as being the truth, even when confronted with evidence that McI is, shall we say, minimally concerned with the truth. When confronted with evidence, he, like a real parrot, repeats the same-old, same-old over and over.
Perhaps you think such behavior deserves respect. Some others might differ …
Please note how the exchange between Tom Curtis and the Auditor in the Daly episode ends:
This was in answer to:
The word “uncritical” rings a bell. I think I’ve heard the accusation recently. But where?
Except for the “uncritical acceptance”, the fiercest player in Climateball ™ does not dispute Tom’s judgment.
Also, notice the time between the post and the exchange.
I have been commenting a bit on the deficiencies of the M&M05 Hockey Stick Index. I had the good(?) fortune today to discover that my spreadsheet analysing it was on my current hard drive, not the defunct hard drive I believed it to be on. (That appears to indicate that entropy is catching up with my brain faster than with my computer.) However, that allows me to present to you a straight hockey stick:
M&M indicate that anything with an HSI greater than 1 is a “hockey stick”, by which definition this straight line plus white noise is a hockey stick. Indeed, it is even more of a hockey stick than MBH98 (HSI = 1.129). Just for the fun of it, I generated 132 such “hockey sticks”. They had a mean HSI of 1.177 and a standard deviation of 0.05, indicating that MBH98 is nearly one standard deviation below the hockey stickness of a straight line, at least as measured by the M&M05 HSI.
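Tom’s straight-line point is easy to check in a few lines. This is a sketch, not Tom’s spreadsheet: I’m assuming the MM05 HSI definition as described in this thread (calibration-period mean minus overall mean, divided by the overall standard deviation), a 581-year series (1400–1980), and a 79-year calibration period (1902–1980).

```python
import numpy as np

def hsi(series, calib_len=79):
    """M&M05 Hockey Stick Index (as I understand it): how far the
    calibration-period mean sits from the overall mean, in units of
    the overall standard deviation."""
    return (series[-calib_len:].mean() - series.mean()) / series.std()

rng = np.random.default_rng(0)
n = 581                                        # annual values, 1400-1980
line = np.linspace(0.0, 1.0, n)                # a plain straight line
noise = rng.normal(0.0, line.std() / 1.25, n)  # signal-to-noise ratio 1.25

print(hsi(line))          # ~1.497, the no-noise ceiling for a straight line
print(hsi(line + noise))  # typically well above 1: a straight line scores as a "hockey stick"
```

The noiseless line already scores near the ~1.5 ceiling Tom mentions below, and adding white noise at S/N 1.25 still leaves it scoring as a hockey stick by the HSI > 1 rule.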
Not being content with that, I developed a range of variant Hockey Stick Indexes, and tested the statistics of M&M05’s cherry picked top 100 HSI pseudo-proxies:
The variants shown are: the ratio of the standard deviations during the calibration period and the rest of the duration of the proxy; the angle formed by the trend over the calibration period relative to the trend over the rest of the duration; the angle formed at the most recent major inflection point in the century prior to the start of the calibration period; and the angle at the inflection point weighted by the closeness of the inflection point to the start of the calibration period.
As can be seen, on all these measures the cherry picked top 100 HSI pseudo-proxies from M&M05 perform poorly, and in general MBH98 and 99 perform well. I assume, but do not know, that the cherry picked top 100 would perform better than the average of the full set of pseudo-proxies generated. The (12 point mean) is the mean of the twelve top-100 pseudo-proxies used by McIntyre in various illustrations of the HSI.
I should note, with regard to my preceding post, that ideally I should perform this analysis with PC1 of the NOAMER tree rings, which will certainly not perform as well as the full reconstructions with regard to the timing of the inflection point, but may do so with regard to the criteria above. Unfortunately I cannot find a copy of the data to perform that analysis.
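For the curious, the first of those variants (the standard-deviation ratio) is simple enough to sketch. The series lengths, calibration window, and noise level below are my own assumptions, not Tom’s actual settings:

```python
import numpy as np

def std_ratio(series, calib_len=79):
    """Variant index: standard deviation during the calibration period
    divided by the standard deviation over the rest of the series.
    A genuine blade inflates the numerator; a uniform trend does not."""
    return series[-calib_len:].std() / series[:-calib_len].std()

rng = np.random.default_rng(1)
n = 581
noise = rng.normal(0.0, 0.05, n)

line = np.linspace(0.0, 1.0, n) + noise               # steady trend throughout
hockey = np.concatenate([np.zeros(n - 79),            # flat shaft,
                         np.linspace(0.0, 1.0, 79)]) + noise  # then a sharp blade

print(std_ratio(line), std_ratio(hockey))  # hockey stick's ratio is several times larger
```

Which is the point: an index sensitive to the blade can tell a hockey stick from a straight line, while the plain HSI (as shown above) cannot.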
Willard: Is this your comment, AndyL?
Thank you, AndyL. You claimed credit for some of the anonymous comments at Nick’s. There’s a range of comments on that thread, e.g.:
Would you be so kind to go at Nick’s and identify the comments you authored?
Thank you for your concerns.
Eli Rabett – Yes, size matters 🙂
I won’t claim any originality on that point – I recall reading about that particular point a number of years ago (on a thread involving deepclimate and others), but it struck me anew re-reading the MM05 papers. So. Many. Errors.
Zombie squirrels. Dawn Of The Auditors. Etc.
“Shoot it in the head, man!”
> So. Many. Errors.
That a paper contains N errors does not mean it’s worthless. That a paper contains N+1 error does not make it N+1 worse.
For what it’s worth, here’s what JEG said about MM05b:
I’ll let you find who’s that JEG, KR.
A variant of this ClimateBall ™ move (“So. Many. Errors”) is the Auditor’s “spitball”:
Notice the scope of the [apology].
Have you read Appendix A of the Wegman Report by any chance, KR?
> I have now, and I’m wondering what textbook that section was appropriated from. As to their comments […]
That’s what I call a Yes, But, KR. In fact, you padded your “yes” with two “but”.
The first “but” introduces what I call a squirrel. Who authored [Appendix A] doesn’t matter in evaluating its content. Unless its content is wrong, in which case the question “who authored it?” is there to find an explanation why the content is wrong. So first we must establish if there’s anything wrong in that Appendix A.
The second “but” switches to what you hammered more than ten times in this thread alone.
Do you find anything to correct in Appendix A?
Interesting link. I noticed JEG’s final remark:
I’m sure you’ve read it, but here’s JEG’s review of Loehle (2007), just for the thread.
ATTP – WRT your note about the red noise scaling, that’s an excellent point. Personally, I would think that judging replication on synthetic data would require using _all_ the steps of the method in question, including scaling, but perhaps MM failed to apply that step. So perhaps the scaling isn’t the issue.
Tom Curtis – excellent analysis there. Which again points out that even in a strange synthetic data set (autocorrelation length of ~19 years for trees, as opposed to the 1.35 years or so that should be used), the hockey-stick signal that might appear in some small fraction of the Monte Carlo analysis has significant differences from the real data.
willard – “…read Appendix A of the Wegman Report…” I have now, and I’m wondering what textbook that section was appropriated from. As to their comments re: MBH, the PCA step used by MBH _correctly_ extracts the dominant variance into appropriate PCs for the data presented to PCA. Short-centered data is slightly different from full-centered data, as they have had different preprocessing. However, in both cases using all of the significant PCs will give you the same results. As W&A 2007 stated and demonstrated:
“When proxy PCs are employed, neither the time period used to “center” the data before PC calculation nor the way the PC calculations are performed significantly affects the results, as long as the full extent of the climate information actually in the proxy data is represented by the PC time series.”
MM05 didn’t use the full extent of the information, dropping PCs, in large part because they didn’t apply a retention criterion. Oops.
Very good, BBD!
Searching for Bouldin’s take on Loehle 2007, I stumbled upon Jeff Id’s reply:
So MikeN, who peddled in Tiljander and brought up AMac at Bart’s, now brings the news to Id.
A mercurial appearance on the ClimateBall ™ fields by MikeN. Interestingly, Jim did not appear over there.
Have you told Jim at the time, MikeN?
Let us note MikeN’s first comment in that thread:
Dichotomizing technical and political issues is an open problem.
And JEG in 2008 telling pseudoskeptics it’s time to move on.
Tom, I’m not sure what the point of your trend + straight line with white noise exercise is. The M&M simulations were done with trendless noise. They demonstrate that short centering creates a (spurious) PC1 by selecting those (trendless) simulated series which maximize the difference in means between the true mean (zero) and the mean of the calibration period. Unless I’m missing something, since none of the simulated series have true trends (nor should they) your exercise is meaningless.
Eli Rabett: That was hilarious! Do you think Napoleon would disapprove of us getting upset over McIntyre’s little size problem?
“Even Chris Monckton knew about the size issue and yes, another bunny (and he was not the first) spots the pony in 2006.”
FYI… I frequently deal with heavy noise-to-signal ratios that are significantly more challenging to understand than McIntyre’s little efforts.
By the way… there has been follow on work about Dunning-Kruger. Those efforts didn’t just stop 15 years ago.
I wonder at what point “Wishful Seeing” kicks in.
It seems pretty obvious that in a lot of these discussions, we look at error bars, and they seem somehow blind to them.
“Error bars? What are those? What am I looking at?”
“Those are the lines your new plot is inside.”
Eli points out that the pseudo noise is 1/10 the signal. They can’t see that scale issue.
At what point do their efforts become credible?
willard – I’ll note that the Wegman report is something of a red herring here, a political mashup of copied MM procedures produced on a political (not scientific) basis.
I would consider the line “In this case the right singular vectors, V, are no longer the eigenvectors” both a misstatement and an improper implication of basic math incompetence by MBH. PCA properly extracts eigenvectors of the data presented, and that data is slightly different due to the short- or full-centered pre-processing step. For MBH this was useful, since that dimensional basis more clearly segmented long-term trend behavior. But since the methods are equivalent for temperature reconstruction if you include all of the significant PCs, it’s really a moot issue. Both methods usefully reduce the data to the general trends shared by the proxies, allowing reconstructions.
But that’s as far as I’m willing to discuss the Wegman report – I would rather examine the original complaints.
It’s simultaneous with the onset of denial.
“Sometimes people suggest that I should go and comment on posts like those written by Steve”
In the free marketplace of free blogs no need exists to “balance” any one blog with opposing viewpoints. Readers are free to amplify their own personal monocultures by clinging to a single viewpoint or broaden their views (paying the price of less depth as a consequence) by visiting opposing viewpoints and perhaps challenging all of them.
Since I learn of blogs by reading comments on other blogs I suspect it is useful to occasionally post something on other blogs just so readers know you exist.
At the risk of inhibiting an interesting discussion on Appendix A:
1) Most of that section was (virtually certainly) written by David W. Scott, whose statistical skills are well-respected by good statisticians I know.
I have *never* found any evidence of plagiarism (I looked) and would discourage any claim of such, unless someone can make a strong case for it.
a) Has a different writing style than the rest of the WR
b) Is something well within Scott’s competence (look at CV), and is standard math.
c) Feels like a theoretical discussion of centered versus decentered PCA, the sort of thing you’d get from a good statistician who didn’t look at the specific data.
d) Of the 3 pages, the only material that directly mentions MBH is:
“In the notation of Mann et al. (1998), C” is PC” and this corresponds to the first reconstruction….
In the temperature reconstruction model, the PC1 (first principal component) is being used to reconstruct a time series that is capturing the most variability in the data.’
“In Mann et al. (1998), the study period is partitioned into a reconstruction period 1400-
1995 and a training period 1902-1980 in which all the proxy variables are available. The
data matrix is centered using the training data rather than the overall means. Because the
training period has higher temperatures, this biases the overall data lower for the period
1400-1995, thus inflating the variance. In this case the right singular vectors, Z, are no
longer the eigenvectors.”
(Of course, this erroneously fails to address the 2 PCs vs 5 PCs issue.)
Item d) is pretty inappropriate for a report to Congress. How many of them could read this?
Of course, Wegman, Said and Scott aren’t talking, so I cannot know, but the following conjecture fits all public evidence:
1) Section 2.2 is more appropriate for a general audience, but DC showed that it was plagiarized, with errors introduced as well. Its style is the one found pervasively in the WR, and it was quite likely assembled by Said, perhaps with cursory review by Wegman.
2) Other than Appendix A, there is no evidence of any serious involvement in the WR by Scott. Wegman & Said were involved throughout, went to meetings, handled correspondence.
But, Section 2.2 was weak, and perhaps worse:
Wegman GMU, distinguished statistician
Yasmin Said – postdoc at GMU, after academic year at Johns Hopkins U, after PhD with Wegman
+ Ack’d help from:
John T. Rigsby, part-time PhD student at GMU, Naval Surface Weapons Center
Denise M. Reeves, part-time PhD student at GMU, MITRE
(she’s the one Wegman later tried to throw under the bus; she wouldn’t go)
That’s pretty weak, although somewhat hidden by giving JHU, NSWC and MITRE affiliations.
That is so far from an NRC panel as to be absurd.
3) So, it is very likely that Wegman asked his long-time associate Scott to write Appendix A, and they added his name as 2nd author (although it is very likely that Said did most of the work.)
Again, no one is talking, but I think it likely that Wegman & Said added the MBH-specifics above.
But, all this allowed Scott’s name as an author, lending much more weight …
> I’ll note that the Wegman report is something of a red herring here
What do you mean, here?
Your “slightly different” is duly acknowledged, KR. I’ll take this concedo as a “I have no correction to issue for now regarding the Appendix A to Wegman Report”, unless you object or have anything else to add.
But proper PCA, I know, I know.
Why is “but proper PCA” not a red herring here, again? I read back AT’s post, and I’m afraid I could not find anything for you to hook what you still repeat for the nth time.
Oh, and you forgot to own your “So. Many. Errors” earlier.
Thanks for playing.
By the way, those blessed by possession of or access to Essex & McKitrick (2002), Taken By Storm – the troubled science, policy and politics of global warming,
might want to review Chapter 5 – Trex Plays Hockey
Note that McIntyre was *not* involved at that point. … but one can easily find precursors of the statistical arguments used in MM papers in 2003 and 2005.
Unlike Essex, McIntyre had full time available to work on this.
McKitrick went further back with CEI, but certainly, Fred Singer gave an invited talk at Essex’s school in spring 2001, and McKitrick was sponsored by CEI to talk in Washington later that year.
willard – Not sure where you’re going with your comments. I cannot tell if you have issues with how PCA was handled in MM, in MBH, or discussed, or what your point is. Which may well be a lack of understanding on my part.
I certainly stated “so. many. errors.” above, I didn’t think I needed to repeat it – I have documented numerous (and invalidating) errors in the MM criticism of MBH on this thread, errors that I’m certainly not the first to note. As to that Appendix, I think you have mischaracterized my comment here. Most of that Appendix is bog standard math, with the exception of (incorrectly) claiming MBH didn’t extract eigenvectors of variation.
In a certainly vain attempt to move the conversation forwards rather than back…
On adaption or mitigation
Willard, interested in your take…
> Not sure where you’re going with your comments.
Climateball ™, KR. I’m showing how you play Climateball ™.
The “So. Many. Errors.” sounds quite suboptimal. In history of science in general, and in the hockey sticks hurly burlies in particular.
Just imagine if someone said “MBH98. So. Many. Errors.”
Wait. That rings a bell. What was your answer against that move, again?
In Climateball ™, everything you do can be played against you.
I’ll return later to your “but proper PCA”.
Hope this helps,
Willard, your simultaneous playing and refereeing is boring. One might think that the underlying reason for your Climateball shtick is to provide cover for you to do that.
Imagine that Jean S comes here and says:
Do you really think that your Ennui will help KR, or that KR’s “So. Many. Errors.” blunder will be patched by his “but proper PCA”? I think neither you nor KR has thought one New York minute about AT’s point in the editorial above.
KR really should substantiate his impression that auditors “got most of their statistical ‘expertise’ from reading the R language help files”. If this is a genuine impression, I can assure you that he’s in for a tough ride: auditors can smell posturing. He should at the very least “read the blog” before saying so, or else show more kung fu than throwing “slightly different” around and repeating the very same line over and over again.
One does not simply win a formal argument with an adverb, followed by an adjective. This applies to MM05b. This applies to KR too.
I could not care less for jerkitude, including yours:
I care for ClimateBall ™ moves so suboptimal as to be self-defeating.
Thank you for your concerns.
Steve Bloom: I think everyone has something to contribute. I learned a lot about the rings and hoops I kept getting run through by deniers, by understanding ClimateBall.
I did learn much jerkitude from them and Willard has highlighted that to me.
Willard has many shticks, and yeah it can be annoying.
I do notice that Willard is hauling what these folks say elsewhere into the light of day here. It really limits their ability to talk out of both sides of their head. This tends to calm things down a bit.
FYI: Telling me someone said the exact opposite was one of the first ClimateBall techniques employed against me. It’s really low-brow, but it crops up from time to time. I absolutely love to do it when they score an ‘own goal’. It’s the one time they never attack the evidence, ’cause it’s theirs. Usually they respond with avoidance.
Carrick’s latest and greatest graph is in the error margins of MBH98. I find this hilarious. The sum total of denier efforts for 16 years is that he was right.
It’s good to hear an affirming click after one has pushed a button.
aom, I agree that Willard’s keeping track of various contradictory statements is useful, but beyond that we’ll have to part ways.
Willard, I would ignore jean s, consistent with the good reasons why I stopped spending time at CA quite some time before you made your first appearance. But if you’re in the market for revealing Climateball moves, this jean s comment slightly later in the linked thread is better:
To this day, deniers continue to ask for the “Climategate” scientists to be thrown under the bus, and that if that would only happen then all would be sweetness and light thereafter. Yeah right.
Re statistical proofs generally, AFAICT while stats seems very useful as rules of thumb, there are far too many choices of varying tests and standards of significance for the word “proof” to be used in a mathematical sense. So I take such fulminations by the likes of jean s with a rather large grain of salt. Being basically right, as MBH were, is its own reward.
1) Even allowing that the red noise generated by hosking.sim (and hence used by M&M) is trendless, it is most certainly not true that the principal components generated from it are trendless. In particular, PC1 using short centered PCA (and hence some lower-order PC using standard PCA) typically shows a trend. Certainly the 100 cherry picked samples all show a trend. It follows that the average of the other PCs generated from the trendless red noise also shows an opposite trend, although of what magnitude for any particular PC is unknown.
2) M&M’s HSI is not applied to the red noise, but to the first PC of the red noise – all of which have a trend with short centered PCA, and most of which have a trend with standard PCA.
3) Applying the HSI to straight lines plus white noise with a trend equal to the mean of cherry picked 100 and a signal to noise ratio of 1.25 produces a mean HSI of 1.16, approximately equal to that of the MBH98 reconstruction. Applying it to slopes 10 billion times smaller but with the same signal to noise ratio produces the same mean HSI.
4) In fact, the size of the slope does not matter, provided that it is not flat or vertical. What matters is the signal to noise ratio. Any signal to noise ratio equal to 1 or more will produce a mean HSI equal to 1 or more, up to a maximum of 1.498 (no noise).
5) From this it follows that the HSI developed by M&M cannot consistently distinguish between a straight line and a hockey stick shape. I suspect there are other shapes that it cannot distinguish either, but for now we need only consider the straight line. That means that, from the M&M05 HSI, we are unable to determine whether or not half of the 10,000 pseudo proxies are distinguishable from a straight line. Nor, using that index, are we able to distinguish MBH98 from a straight line. That means that as a statistical test of the tendency of short centered PCA to generate shapes similar to that of MBH98, the test is totally without power. It tells you absolutely nothing.
6) Perhaps more importantly, when you devise variant Hockey Stick Indexes that are better able to determine a hockey stick shape, MBH98 and 99 stand out as easily statistically distinguishable from PC1s generated from red noise using short centered PCA. So not only did M&M05 use a test with no statistical power, without validating the test; but alternative tests exist which would have refuted their thesis.
In terms of science, that means the entire first half of M&M05 is simply trash. Pure and simple.
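For anyone wanting to poke at points (1) and (2) themselves, here’s a minimal sketch. It uses plain AR(1) red noise rather than MM05’s hosking.sim ARFIMA noise, and assumed dimensions (581 years, 70 proxies, 79-year calibration), so the numbers are illustrative only:

```python
import numpy as np

def pc1(X, calib_len=79, short=False):
    """First principal-component time series of a (time x proxy) matrix,
    centered on the calibration period ('short') or on the full record."""
    mu = X[-calib_len:].mean(axis=0) if short else X.mean(axis=0)
    u, s, vt = np.linalg.svd(X - mu, full_matrices=False)
    return u[:, 0]

def hsi(series, calib_len=79):
    return (series[-calib_len:].mean() - series.mean()) / series.std()

rng = np.random.default_rng(2)
n_t, n_p, phi = 581, 70, 0.9      # years, proxies, AR(1) persistence (assumed)

short_scores, full_scores = [], []
for _ in range(30):               # a small Monte Carlo over trendless red noise
    X = np.zeros((n_t, n_p))
    e = rng.normal(size=(n_t, n_p))
    for t in range(1, n_t):
        X[t] = phi * X[t - 1] + e[t]
    short_scores.append(abs(hsi(pc1(X, short=True))))
    full_scores.append(abs(hsi(pc1(X))))

print(np.mean(short_scores), np.mean(full_scores))
```

Trendless noise goes in, but the short-centered PC1s come out with a pronounced calibration-period offset, scoring much higher |HSI| on average than the full-centered PC1s. That is points (1) and (2) in miniature: the HSI is applied to PC1s, and those are anything but trendless.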
Steve, I try to measure people by what use they have to offer, and not the failings they show. The reason is that everyone has failings.
I also think I know why you rub each other the wrong way. You take an intellectual high road, and ClimateBall simply isn’t about that. Willard constantly emphasizes that fact to smarter folks like us. (Personally I focused on the political drivers long before the technical, since I feel that the technical shortcomings in denial are so egregiously obvious.)
Willard is much smarter than his shtick. You might learn to appreciate that if you emailed people more. 🙂
I have noticed I used a defective noise model for the straight hockey stick graph. Here is a corrected graph with S/N ratio of 1.25 and a correct normally distributed white noise model:
willard – I claim no expertise (nor interest) in ClimateBall as I see it, and often find myself eating soup with a fork when dealing with fallacies, illogic, rhetorical tricks, and the downright wrong.
If Jean S were to come here and say what’s in your last post, I would simply suggest s/he peruse Wahl and Ammann 2007, where both the MBH and MM methods (when MM is done correctly), as well as full-centered standard (covariance) PCA, bring out the overall hockey stick that is actually in the data – in reconstructions that are all but indistinguishable.
‘Cause if MBH did something as drastically wrong as claimed by MM, I cannot see _how_ they would come up with the same result as the MM techniques (when actually including the significant PCs).
> I would ignore jean s.
Of course you would. That would be prudent. Prudence is important. But then you don’t play in the same kind of games.
Respect is also important. Lack of respect loses games. Lack of respect loses winning games.
ClimateBall ™ players need to manage their commitments properly:
Look how Nick proceeds. One point at a time. No unjustified provocation. Resistance to verbal abuse, manipulation, or any other kind of power play. Focused on his points. Patient. Realistic regarding his communication objectives. Very disciplined regarding what he’s willing to commit. Nick never uses “So. Many. Errors.” Against the fiercest player in ClimateBall ™ history, this is a losing move.
Even if you survive, you get to defend all your commitments at the same time. Against someone whom you just dismissed. Can you imagine the time this takes? If you can’t imagine the time it takes, consider the time it took Mike.
Now, read back JEG’s comment regarding MM05b. See the difference?
“To this day, deniers continue to ask for the ‘Climategate’ scientists to be thrown under the bus, and that if that would only happen then all would be sweetness and light thereafter. Yeah right.”
I share your skepticism but probably for a different reason. Global angst was never scientific in the first place, it is political, and that angst would just find some other champion, probably returning to Peak Oil since that’s demonstrably finite.
I will admit that deflating the pride of the proud would make a day brighter but doesn’t by itself address or redress anything.
Some bunnies want to be Cristiano Ronaldo; Willard wants to be Howard Webb.
> I would simply suggest s/he perusing Wahl and Ammann 2007
That’s already done:
There’s a whole encyclopedia on WA07.
Auditors work hard. They deserve the respect they don’t often give their opponents. If the fiercest player in ClimateBall ™ history follows through on his disrespect, he should lose, for all that matters in ClimateBall ™ is sportsmanship. After all, ClimateBall ™ is a spectator sport.
Nobody needs to lose for science to win.
Think Scotty, Eli:
You can play Gordie if you still wish.
For what it is worth, I have posted a more detailed discussion of technical details of the points discussed above both at Skeptical Science and at my blog.
“The sum total of denier efforts for 16 years is that he was right.”
I hear that sort of thing frequently at futbol games when observers, who had nothing to do with the victory, nevertheless feel a sense of glory when their team gets a thing “right”. It is especially the case that fans will cheer an excellent play by a losing team since they don’t have much else to cheer.
But I’m not down on the field in the cold and the rain kicking the ball and being kicked by opponents. It isn’t my glory when my team wins. It is theirs.
“Look how Nick proceeds. One point at a time. No unjustified provocation. Resistance to verbal abuses, manipulation, or any other kind of power play.”
Yes, I think Nick’s online behavior is exemplary. While I can appreciate that not everyone can exhibit such patience in the face of relentless barrages of abuse, it’s still something one ought to strive to emulate. When I see Stephen Mosher being badly abused at WUWT (with Watts’s frequent encouragement), or Jim D on Climate Etc., or Nick on Climate Audit, I am tempted to point to RealClimate, SkepticalScience or ATTP as instances of fora where protracted civil discussion about debated topics is possible, so it pains me when it degenerates into abuse, as it too often does on SkS when some AGW skeptic all too seldom shows up. I disagree that Miker613 “deserved” the treatment he has been subjected to. It may not be this blog’s main vocation, as it is for RC and SkS, to educate the public. But that is still no reason not to be patient with the occasional dissenting voice, however misguided it might be.
As a lurker, I very much appreciated contributions from Tom Curtis, KR and Pekka, especially, in framing up the issue about reconstructions and the significance of PCs. Since I already appreciate the validity of the broad AGW framework, and agree with ATTP regarding the weak relevance of the now superseded MBH1998 to Holocene temperature history, I am not tempted to accord too much weight to all the nits and insinuations from McIntyre. But “irrelevant” topics can still be interesting and instructive. Also, even a broken clock can indicate the correct time twice a day, and I would have liked to hear the response to McIntyre’s claim, linked here by Miker613, regarding Mann’s use of Preisendorfer’s Rule N: “This comment describes how they determined the number of temperature PCs to reconstruct. This step can be seen in the source code. This is a different step than the determination of retained PCs in tree ring networks.” Is this claim valid or just obfuscatory?
Brits play field hockey
Here’s a model that could reconcile my results with those of SteveB:
This is a bit outdated, as every team plays the trap. It’s tried and true. Somehow related is the left-wing lock:
This one has been developed by Bowman.
Were it not for the name “hockey stick”, I would not be here.
Eli, I think you will find you have that reversed. The Americans and Canadians play ice hockey, a derivative of the original game, hockey, which is played by the British (and Australians) on grass.
I don’t dislike physical play per se. But this is usually something you do when you don’t have the ball. If we assume that the ball is science (Michael Tobis would rather say it’s journalists), then it makes sense to move the ball forward when discussing scientific points.
Not every player is on the ice to score goals. Here’s a player that Jacques Lemaire coached:
ClimateBall ™ allows physical contact. Tom’s work on hockey sticks may have this property.
Here’s a thing I missed along the way: “[footnote 2] MBH98 refers to the index resulting from their calculation as a “reconstruction.” This is a misnomer since it is a novel index, rather than the recomputation of something previously observed. Therefore it will be referred to herein as “construction.” CORRECTIONS TO THE MANN et. al. (1998) PROXY DATA BASE AND NORTHERN HEMISPHERIC AVERAGE TEMPERATURE SERIES, M&M, 2003 E&E.
I really can’t hide a smile reading that. It’s classic XKCD. So, why does this need a whole journal anyway? Didn’t anyone (reviewers, referees, editors) bother to clue them in that temperature *reconstructions* had existed for decades in the literature before MBH? OK, rhetorical question – it was published in E&E.
Aussies also play cricket, where you learn about sledging.
Really? Trendless input can create an output with a trend? If it is all the same I would like to see how you have derived this – your mathematical proof. And the proof is more than a simple OLS run on the series. Once you allow for the type of noise in the series, I think you will find that any OLS trends are not statistically significant. This is why short centering’s selection of differing means to create a PC1 is spurious. It is not a true signal. It is an artefact of a flawed method. One would need to select 100% of offsetting spurious PCs from short centering to force a reconstruction back to demonstrate a (proper) zero signal. Properly centered PCA is unbiased, and therefore much less likely to show a spurious HS signal in PC1 from trendless noise where no such signal exists. By contrast, properly centered PCA is likely to find a linear trend plus white noise because it contains a true trend signal.
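On the narrow OLS point, both things can be true at once: individual red-noise series have no true trend, yet naive OLS (which assumes white residuals) flags “significant” trends in them far more often than the nominal 5%. A quick sketch, with the AR(1) persistence and series length assumed rather than taken from MM05:

```python
import numpy as np

rng = np.random.default_rng(3)
n, phi, trials = 581, 0.9, 200     # series length and AR(1) persistence (assumed)
t_grid = np.arange(n, dtype=float)
sxx = ((t_grid - t_grid.mean()) ** 2).sum()

naive_hits = 0
for _ in range(trials):
    x = np.zeros(n)
    e = rng.normal(size=n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]          # trendless AR(1) red noise
    slope, intercept = np.polyfit(t_grid, x, 1)
    resid = x - (slope * t_grid + intercept)
    se = np.sqrt(resid.var(ddof=2) / sxx)     # white-noise standard error
    naive_hits += abs(slope / se) > 1.96      # naive 5% significance test

frac = naive_hits / trials
print(frac)   # far above the nominal 0.05
```

So yes, an OLS trend in a single red-noise series is spurious once autocorrelation is accounted for; Tom’s point, though, concerns what short centering then does with those wandering series collectively.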
Here’s a point about M&M that was raised several years ago by William Connolley.
“Maybe this is a good place to ask some skeptics: As I understand it, M&M claim that (a) the MBH method mines for hockey sticks and (b) you won’t get a HS without the bristlecones (or whatever). These appear to be incompatible claims, to me.”
M&M have, in effect, claimed that Mann’s procedure can create hockey-sticks from random noise, but it somehow can’t do the same with most tree-ring data.
Download the zip-file from https://drive.google.com/open?id=0B0pXYsr8qYS6dHB2dV96OHpGU0U&authuser=0
It contains Mann’s NOAMER tree-ring code and the Wahl/Ammann R scripts that show how short-centering and full-centering produce virtually identical results (i.e. a hockey-stick), when Mann’s PC-selection procedure is implemented properly for both the short-centered and full-centered runs.
The code runs basically “out of the box” on Linux or Mac systems with the complete R software suite installed. Will probably do so on Windows machines as well (but I haven’t verified that).
What you will find is that the PC1 generated via short-centering can be reproduced very closely with a linear combination of PC’s generated via full-centering. The hockey-stick is *not* an artifact of the procedure. It is in the data.
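The same qualitative demonstration works on purely synthetic data. Below is a toy analogue (Python rather than the Wahl/Ammann R scripts): proxies sharing a common hockey-stick signal plus white noise, with the short-centered PC1 regressed on the leading full-centered PCs. All dimensions, noise levels, and the choice of five retained PCs are my assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n_t, n_p = 581, 50
signal = np.concatenate([np.zeros(n_t - 79), np.linspace(0.0, 1.0, 79)])
X = signal[:, None] + rng.normal(0.0, 1.0, (n_t, n_p))   # common signal + noise

def pcs(X, calib_len=79, short=False):
    """Principal-component time series under either centering convention."""
    mu = X[-calib_len:].mean(axis=0) if short else X.mean(axis=0)
    u, s, vt = np.linalg.svd(X - mu, full_matrices=False)
    return u

pc1_short = pcs(X, short=True)[:, 0]
full5 = pcs(X)[:, :5]

# regress the short-centered PC1 on an intercept plus five full-centered PCs
design = np.column_stack([np.ones(n_t), full5])
coef, *_ = np.linalg.lstsq(design, pc1_short, rcond=None)
resid = pc1_short - design @ coef
r2 = 1.0 - (resid ** 2).sum() / ((pc1_short - pc1_short.mean()) ** 2).sum()
print(r2)   # high: the hockey stick is in the data, not in the centering choice
```

When a real common signal is present, the short-centered PC1 is reproduced almost entirely by a linear combination of the leading full-centered PCs, which is the gist of caerbannog’s point above.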
It’s a little unfortunate when someone gets more of a hard time here than maybe they deserve. FWIW, miker remained pleasant and polite, which I value, but I can see why some got frustrated as it wasn’t clear if he was expressing his own views or simply parroting what he’d read elsewhere.
I don’t know the actual answer to this. The claim that Mann didn’t mention Preisendorfer until 2004 is clearly wrong, since they definitely discuss it in their 1998 paper. Whether they applied it correctly to the tree ring network or not, I don’t know, but this link gives a pretty convincing argument as to why short centering requires keeping 2–3 PCs and standard centering requires keeping 5–6.
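Since Rule N keeps coming up, here’s roughly what it does, as I understand Preisendorfer’s selection rule: compare the observed (normalized) eigenvalue spectrum against eigenvalue spectra from Monte Carlo runs on pure noise of the same shape, and keep the leading PCs that beat, say, the 95th-percentile noise eigenvalue. A sketch with made-up dimensions and two planted modes:

```python
import numpy as np

def rule_n(X, n_sim=200, q=95, rng=None):
    """Sketch of Preisendorfer's Rule N: retain the leading PCs whose
    normalized eigenvalues exceed the q-th percentile of eigenvalues
    from white-noise simulations of the same shape."""
    if rng is None:
        rng = np.random.default_rng()
    n_t, n_p = X.shape
    ev = np.linalg.svd(X - X.mean(axis=0), compute_uv=False) ** 2
    ev /= ev.sum()                             # normalized eigenvalue spectrum
    null = np.empty((n_sim, n_p))
    for i in range(n_sim):
        e = np.linalg.svd(rng.normal(size=(n_t, n_p)), compute_uv=False) ** 2
        null[i] = e / e.sum()                  # svd returns these sorted already
    thresh = np.percentile(null, q, axis=0)
    keep = 0
    while keep < n_p and ev[keep] > thresh[keep]:
        keep += 1
    return keep

rng = np.random.default_rng(5)
n_t, n_p = 200, 20
t = np.linspace(0.0, 1.0, n_t)
modes = np.column_stack([np.sin(2 * np.pi * t), np.sin(4 * np.pi * t)])
X = modes @ (3.0 * rng.normal(size=(2, n_p))) + rng.normal(size=(n_t, n_p))

k = rule_n(X, rng=rng)
print(k)   # the planted modes are retained; pure-noise PCs are not
```

The point relevant to the thread: how many PCs such a rule retains depends on the centering convention used, which is exactly why applying MBH’s rule to short-centered data and then keeping fewer PCs than the rule indicates changes the answer.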
Interesting comment from William. Does seem rather inconsistent. Actually, William’s comment reminded me of something that I had thought of mentioning. Something we try very hard to teach our students is to ask whether or not what they’re doing makes sense and if their results are sensible. This is something that those who seem to make a big deal about the intricacies of statistical methods and statistical details seem unwilling/unable to actually consider.
Laymanlurker, for your benefit, here are some statistics:
_______________ | Line + noise | x5989   | Mean   | MBH98
Trend increase  | 2.331E-12    | 0.05148 | 0.0343 | 0.1083
Incr (st devs)  | 2.717        | 2.705   | 1.817  | 0.829
So, given the difference in slope between x5989 (one of the pseudo proxies displayed by McIntyre in one of his graphs), the mean of the cherry-picked top 1% of pseudo proxies, and the line plus noise, the argument that the problem is that the pseudo proxies don’t really have a trend seems on its face to be ridiculous. (As an aside, the line + noise has the same settings as for the graph I posted at 11:21 pm, although obviously a different set of random numbers.)
However, you want to bring in statistical significance. If so, however, clearly at least some of the cherry-picked pseudo proxies have a statistically significant trend, and the trend increase of the mean of the pseudo proxies is statistically significant at the 90% confidence level. Further, expressed in terms of standard deviations, the MBH98 “hockey stick” is the flattest of them all. Ergo, if McIntyre wants to escape the implications of the straight-line hockey stick using this dodge, he needs at the same time to acknowledge that the MBH98 hockey stick was easily statistically distinguished from his own by an even simpler test that he did not apply (the magnitude of the standard deviations).
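For anyone wanting to reproduce the two statistics tabulated above, this is how I would compute them on a made-up series; the slope, noise level, and length here are hypothetical, not the actual settings used for the table.

```python
# Illustrative only: "trend increase" (total rise of the OLS fit) and the same
# rise expressed in units of the series' standard deviation.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(600)
y = 0.0005 * t + 0.1 * rng.standard_normal(600)   # hypothetical line + noise

slope, intercept = np.polyfit(t, y, 1)            # OLS linear fit
trend_increase = slope * (t[-1] - t[0])           # total rise over the record
incr_in_sds = trend_increase / y.std(ddof=1)      # same rise, in standard deviations
print(f"trend increase: {trend_increase:.4f}, in st devs: {incr_in_sds:.3f}")
```

The second statistic is the one that flattens MBH98 relative to the pseudo proxies: a series with large annual variability can have a sizeable trend increase and still rise by few standard deviations.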
However, the point you raise is irrelevant. The question is not whether there are statistical tests that will distinguish between straight-line hockey sticks and M&M05’s pseudo proxies. I in fact devised 4 variant HSIs that would easily do so. The question is whether the statistical test M&M05 actually employed would do so, and it does not.
If McIntyre wants to produce his own variant HSI, e.g. the standard HSI weighted by the inverse of the trend increase in standard deviations, then he needs to show first that it would distinguish between his pseudo proxies and a straight line plus noise; and second that, using that vHSI, the MBH98 reconstruction is not statistically distinguishable from the pseudo proxies he generated from red noise. Given that for that particular vHSI MBH98 is at the 88th percentile of the cherry-picked top 1% of pseudo proxies, I don’t like his chances of it being below the 95th percentile for all of them. Further, having developed such a vHSI, he also needs to show it is superior to other candidates that clearly distinguish between MBH98 and the pseudo proxies.
The real questions are quantitative; qualitative arguments can tell only that some effects are present at some level and that some other effects are totally absent. Applying decentered PCA of the MBH type will create some hockeystickiness out of trendless noise. With strong decentering all of that goes into PC1; the sum of all other PCs cancels it, but some of it is left at any cutoff in the number of PCs. This is a valid qualitative statement, but it does not tell whether the effect is strong enough to be even noticeable without careful analysis. Nor does it tell how rapidly adding PCs cancels out the spurious effect seen in PC1.
Quantitative results depend on the nature of the time series. Real signals present in the time series naturally affect the outcome, and so does the nature of the noise. Red noise with suitable autocorrelations leads to a strong effect, white noise with no autocorrelations to a lesser effect. In the case of no real signal it’s meaningless to discuss the strength of the noise, because the final results are presented scaled in a way that makes all strengths the same in the end. With a signal, the signal-to-noise ratio is essential.
Both the strength (relative to noise) of the hockeystick produced from trendless noise and its detailed shape also depend on the nature of the noise. In some cases the transition from the shaft to the blade is rather sharp, in other cases spread over a longer period.
Attempts to prove or disprove the points of M&M without a quantitative analysis are bound to fail, because their concerns are right qualitatively, but not as obviously quantitatively. In quantitative argumentation it’s more important to look at the real time series than at any particular simulation of M&M, because their noise model is just one of many that might be used.
With the support of present knowledge, a look at the final results of the MBH analyses seems to tell that the analyses did not have the required skill to determine how much variability there was in the temperatures of the last 1000 years (here I include both MBH98 and MBH99). I cannot tell exactly what all the reasons for that are, but it may well be that M&M have presented correct explanations.
Nothing in that changes the fact that MBH pioneered an approach that has since been improved by both them and others. The papers thus have a significant role in science, as science is about developing new methods and gradually improving understanding using those and further improved methods.
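Pekka’s red-versus-white point is easy to check numerically. The following sketch is my own construction, not M&M’s actual code: the network size, calibration window, AR(1) parameter, and the particular hockey-stick-index definition are all assumptions.

```python
# Short-centered PCA applied to pure noise: compare the "hockey-stickiness"
# of PC1 for white noise versus strongly autocorrelated AR(1) red noise.
import numpy as np

rng = np.random.default_rng(2)

def ar1_network(n_years, n_series, phi):
    """An (n_years, n_series) network of AR(1) noise; phi=0 gives white noise."""
    x = np.zeros((n_years, n_series))
    for i in range(1, n_years):
        x[i] = phi * x[i - 1] + rng.standard_normal(n_series)
    return x

def hsi_pc1(X, calib=80):
    """|HSI| of PC1 after short-centering on the last `calib` years."""
    A = X - X[-calib:].mean(axis=0)            # "short" centering
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    pc1 = U[:, 0]
    shaft, blade = pc1[:-calib], pc1[-calib:]
    return abs(blade.mean() - shaft.mean()) / shaft.std(ddof=1)

white = np.mean([hsi_pc1(ar1_network(581, 50, 0.0)) for _ in range(20)])
red = np.mean([hsi_pc1(ar1_network(581, 50, 0.9)) for _ in range(20)])
print(f"mean |HSI| of PC1: white noise {white:.2f}, AR(1) phi=0.9 {red:.2f}")
```

The red-noise PC1s should come out markedly more hockey-stick-shaped than the white-noise ones, which is the quantitative content of the qualitative point: the size of the effect depends on the autocorrelation structure assumed for the noise.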
A quick followup to the discussion about the number of PC’s to retain. If you retain too few, you risk losing the signal. If you retain a few too many, there will be very little impact on your results.
I certainly see no reason to want to prove or disprove M&M. As you point out, it does illustrate that short centering will tend to create a form of hockey stick (although this may not be particularly significant, and it really just shows that short centering moves the HS into PC1). However, as far as I can tell, it does not show that the hockey stick in MBH98 is an artifact, which is what’s actually important. And, I’ll stress again that given that this is really about our millennial temperature history, more recent work is much, much more relevant.
Pierre, the comment in response to Hu McCulloch is accurate as far as it goes. However, given that Preisendorfer’s rule N was mentioned in connection with selecting PCs in the paper, it is natural to at least try it where the selection criterion is not mentioned. This is particularly the case as MBH98 indicated that, “Certain densely sampled regional dendroclimatic data sets have been represented in the network by a smaller number of leading principal components (typically 3–11 depending on the spatial extent and size of the data set).”
If they are being represented by a smaller set, the smaller set should be the one that maximizes the representation. Further, there is a direct indication that the number of principal components used was typically more than 2. Therefore any attempted replication of MBH98 from the same data should have used some well-recognized selection rule, and the natural first one to try would be Preisendorfer’s rule N, given that it is known the authors used it, and indeed used it in the same paper.
The real question about that analysis is not whether the appearance of a hockeystick is an artifact of the method; the real question is whether the method can tell much about the temperatures of centuries that precede the instrumental period, because that and only that was the purpose of the study. The issues are related, but they are not the same.
If it does not have the skill in spite of the clear hockeystick, and it seems now that it does not, we may ponder why that’s the case. Perhaps it is in the bristlecones, as the present argument of Steve McI is in my understanding. By that I mean that the bristlecones certainly help in creating the hockeystick (I don’t think there’s much disagreement on this), but do not necessarily have comparable skill in describing NH temperatures of the past. Whether their role is as large as McI and others claim, I really don’t know, as I have not looked at that.
Anders, it is only true to say “…short centered will tend to create a form of hockey stick…” if you have a very loose definition of a hockey stick shape. To illustrate the point, over the calibration period 32.3% of the cherry picked top 1% of pseudo proxies have a zero or negative slope. The maximum slope from the pseudo proxies over that period is less than half the slope of MBH98, and the mean is less than a tenth of it. (This is after the MBH reconstructions have been normalized to have the same slope over the non-calibration period, which slightly increases the slope in the calibration period and also the HSI.)
Officially, hockey sticks are supposed to have a blade width no greater than 5% of the length of the handle. In MBH98, that works out as thirty years. Therefore for the hockey stick analogy to be apt, the maximum extent of zero or near-zero trend after the rise of the blade should be about 30 years. M&M’s cherry-picked top 1% typically have four to five times that extent. That is sufficiently large that the analogy does not apply. And that is among the 1% of “hockey sticks” McIntyre thought suitable for the viewing public, not the 99% he keeps carefully hidden away. The best that can be said for M&M05 is that they showed that while MBH98 produces a hockey stick, the algorithm applied to red noise produces long-shafted cranks.
Quibbling about analogies is normally pointless. In this case, however, the fact is that the MBH98 reconstruction has a number of interesting statistical properties, and M&M05 only showed that the algorithm applied to the right sort of red noise produces one of them. They showed nothing about the other properties, and variant Hockey Stick Indexes show that they do not produce them. M&M cover all that up by talking about hockey sticks. If they actually analyzed the various statistical properties of the MBH98 “hockey stick”, and compared it to the equivalent statistical properties of their pseudo proxies, their game would be up. So they leave all the hard work to a persuasive analogy and some cherry picked visual examples and hope nobody notices the complete lack of substance in their actual analysis.
On the use of Preisendorfer’s rule N, McI and others claim that MBH98 used it only in the final step, where they combine proxies from different locations with the help of the EOFs. Use in that step is mentioned explicitly in the paper, but nothing is said about use in earlier steps, and the whole controversy is about an earlier step, where the NOAMER proxies were combined. They claim to have evidence that effectively proves that rule N was not used for NOAMER. Furthermore, it’s not at all obvious that the rule could be used properly together with decentering.
With decentering it’s essential that the cutoff in N is done in a way that guarantees that no systematic residual caused by decentering is left in the PCs that are left out. Determining such an N may be impossible, because decentering distorts the residual in a way that may spread the artificial compensating “signal” systematically into very many PCs.
Pekka, the bristle cone pines (and Gaspé cedars) make virtually no difference to the reconstruction except in the period 1400-1600. That does increase the HSI, but primarily through the great increase in the annual variability of the reconstruction. It has no effect on the shape of the blade of the “hockey stick”, so I think most non-deniers would dispute that it helped in creating the “hockey stick”. If you want agreement, you need to get rid of the vague language of “hockey sticks” that M&M use to conceal their poor analysis and to sell their message from that analysis.
In any event, the primary effect of removing bristle cone pines is to add noise. Further, the resulting reconstruction is in even less agreement with modern reconstructions than the original MBH98. Consequently if we are to assume reasonable validity of modern reconstructions, the case is stronger for retaining rather than removing the bristle cone pines.
What is more, that really is an irrelevant issue in 2014. Even if the bristle cone pines should not have been used, MBH were not in a position to know that in 1998. It is a footnote for historians – not a relevant issue for the science.
The amount of noise is a very essential factor in deciding whether there’s a chance that the method has skill. I wrote above that the issue is quantitative; the amount of noise relative to the calibrating signal is one of the most important quantitative indicators.
Pekka, McIntyre has an extensive post comparing MBH’s retained principal components relative to his calculation of which should be retained based on a mechanistic application of Preisendorfer’s rule N. His comparison results in this table:
You will notice the numbers do not quite match, but are typically very close. I have added in the difference in variance explained for the NOAMER network (as that is what McIntyre concentrates on), and as you can see it is small.
Further, I am not convinced McIntyre has in fact emulated MBH98 on this point. In particular, MBH typically determine the principal components for 1980-1750, then separately for 1980-1700, and so on, using only those proxies that extend over the full period in each case. Thus proxies that do not extend past 1750 are not used in determining the principal components for periods earlier than 1750, etc. This is an important part of the MBH98 technique that McIntyre regularly neglects.
In this case, the retained proxies reported as having two values had different numbers of principal components retained in different half-century periods. That strongly suggests MBH used the stepwise analysis for thinning dense networks as well as in the temperature reconstructions. McIntyre, however, clearly did not. That being the case, his emulation is only an emulation for the period to AD 1750, and in consequence he may well have incorrectly determined the number of principal components to retain for earlier periods.
Finally, Preisendorfer himself recommends against a pedestrian application of rule N, recommending that the final selection depend in part on empirical considerations. (Sorry, I lost the link so I can’t provide an exact quote, which probably matters.) Therefore slight differences in the numbers of principal components retained, which have a small effect on total variance explained (all that has been demonstrated by McIntyre, and certainly all that has been demonstrated for NOAMER), do not violate Preisendorfer’s rule N as Preisendorfer himself suggested it be applied. Consequently McIntyre has not shown that Preisendorfer’s rule N was not applied by MBH98, and has certainly not justified simply retaining 2 principal components for his standard analysis when 5 would have been required to retain the same amount of variance explained.
My conclusion is that the whole “question” about whether Mann used the rule he said he did is simply intended to distract from the fact that McIntyre did not use any valid rule, nor attempt to maintain an equivalence in variance retained. McIntyre made a major error of analysis, in other words, and he is hoping to distract us by pointing out (putative) specks in Mann’s eye.
Pekka, yes the amount of noise is important in determining if the method has skill. And so?
That is not a reason by itself for dropping the bristle cone pines, and no adequate reason has been given for doing so. Further, if you do so the resulting reconstruction is:
1) Still within uncertainty of the original reconstruction;
2) Agrees with later reconstructions worse;
3) Suggests a drop in temperature from the MWP much later than (historical and archeological) anecdotal evidence suggests; and
4) Merely results in a hockey stick of shorter length.
Where is the issue if the first attempt to make a reconstruction with uncertainty estimates overestimated its skill in its earliest period? Especially given that it was supplanted a year later by a new reconstruction with more data (which, I note, McIntyre carefully avoids analyzing).
The increase in noise proves that bristle cone pines helped in creating the blade. That was the connection to earlier discussion. I have not said that they should have been dropped. I speculated only that they might be a partial explanation for what we can observe now from comparison of MBH results with later analysis of NH temperatures.
Pekka, the increase in noise is entirely restricted to the pre-1600 AD period. It has no relevance to the “blade”, only to the length of the “shaft”. The reason is the stepwise procedure in MBH98, and hence the very much larger number of proxies available in later periods. The network back to 1760 has 93 proxies (or PCs standing for dense proxy sets), back to 1700 it has 74, back to 1600 it has 57, back to 1450 it has 24, and back to 1400 it has 22, of which just two are from NOAMER. The idea that reducing 22 proxies to 20 will have the same impact as reducing 93 to 91 is rather silly. But if it is to eliminate the blade, that is what it must do.
==> “…but I can see why some got frustrated as it wasn’t clear if he was expressing his own views or simply parroting what he’d read elsewhere,”
I’m going to weigh in also, in support of Miker613. I would much prefer reading his comments to those calling him “stupid.” Even when they have cartoons!
Now, I can’t understand the technical arguments, so maybe there is an aspect of bad faith in his comments I can’t fully appreciate, but from what I can tell I do think that he was trying to engage in good-faith discussion. “Parroting” what you read elsewhere can be a legitimate way to explore an issue, by asking for responses to opposing arguments. Maybe Miker613 uses some Climateball moves, but sometimes that happens because someone is “motivated” to persist in arguments so as to maintain their current perspective, and not because they are deliberately engaging in bad-faith arguments or trying to argue illegitimately.
I don’t think that the poor treatment Miker613 received here (not that his treatment was uniformly poor) was in fact “deserved,” and I don’t think that poor treatment of someone engaging in good faith should be excused as being understandably the result of “frustration.” Adding “something extra” to arguments is never necessary, or even, I’d say, “understandable,” if you are focused on productive engagement (although it might be understandable in the sense that we “understand” that in response to frustration people behave in childish or non-productive ways). If you think that someone is presenting arguments in a way that isn’t consistent with productive engagement, tell them why and allow them to adjust. If they don’t adjust, move on.
I broadly agree. I’d rather people didn’t resort to treating someone poorly, especially if they’re behaving well. In defense of my comment, I actually said “I can see why”, which wasn’t really intended as excusing the behaviour, simply as illustrating that I could see the issue.
I think, though, there is a difference between someone who says “I read this somewhere else. What do you think?” and someone who makes comments that appear to be arguments in their own right, where you only discover later that they were simply things that had been read elsewhere.
I have read only a small fraction of what McI has written, and an equally small fraction of how others have argued against him. So far I haven’t seen any clear discussion of what worries me most in comparisons like the one presented in the notes of McI that you linked to, or in the use of decentered PCA in any analysis. I have mentioned that point already in two comments.
The point is that decentering adds to the total calculated variance. It may add a really significant amount to the total calculated variance if the distance between real center and new center is large relative to variance. If a normalization is done after decentering, the normalized variance is not changed, but that does not remove the problem as normalization means that real variability is reduced by the factor of normalization. At that point a fraction of the variability is artificial from decentering and the rest appears smaller than it really is.
When PC1 is calculated in the decentered case, it takes most of the artificial variance and some of the real variance. Its share of the total variance is determined by the extent of decentering. This share seems to be more than 30% of all variance in the case of NOAMER1400. Thus all values of remaining variance should be multiplied by about 1.5 when they are compared to a similar calculation without decentering. After removing PC1, more than 90% of the original real variance is left (probably around 93-94%, as PC4 of the centered method explains 6.6% while PC1 of the decentered analysis explains 38% of the inflated variance, and as PC4 of the standard PCA is supposed to be closest to the decentered PC1 in direction). Taking PC1 and PC2, more than 70% of the variability is probably left, etc.
What’s even worse is that extracting PC1 of the decentered case leaves a residual that has a systematic bias that compensates for the distortion caused by decentering. That systematic bias is likely to be spread across many of the remaining PCs. How much of it remains at each step is likely to be proportional to the remaining residual variance. In the case of NOAMER1400, most of it will likely be left in PCs that no one would consider including in the set of PCs to retain. Thus it is more or less guaranteed to distort exactly the distribution that’s important for the further analysis.
As I explained above, that effect may (or is even likely to) be rather large in the case of NOAMER1400. It’s smaller in other cases as the share of PC1 of the decentered analysis is not as much larger than that of the centered analysis (NOAMER1450 and AUSTRAL1750 look pretty bad as well).
As I wrote, I haven’t seen this discussed in the same spirit elsewhere, but it’s quite possible that it has been discussed. It’s also possible that I have missed some point that makes the issue less severe, but right now the logic seems strong to me.
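The variance-accounting step of Pekka’s argument, at least, is a simple algebraic identity that can be confirmed directly; the random-walk “proxies” below are my own stand-in data, not any real network.

```python
# The total sum of squares about a "short" center equals the fully-centered
# sum of squares plus a term from the squared offset between the two means,
# so decentering inflates the apparent total variance that the PCs share out.
import numpy as np

rng = np.random.default_rng(3)
X = np.cumsum(rng.standard_normal((581, 50)), axis=0)   # stand-in "proxy" walks

full_mean = X.mean(axis=0)
short_mean = X[-80:].mean(axis=0)                       # calibration-period mean

ss_centered = ((X - full_mean) ** 2).sum()
ss_decentered = ((X - short_mean) ** 2).sum()
extra = (X.shape[0] * (full_mean - short_mean) ** 2).sum()

# Identity: sum of squares about any other center equals the centered sum of
# squares plus n times the squared offset between the two centers.
assert np.isclose(ss_decentered, ss_centered + extra)
print(f"decentered / centered total sum of squares: {ss_decentered / ss_centered:.2f}")
```

For strongly autocorrelated series the calibration-period mean can sit far from the full-record mean, so the inflation factor can be well above 1, which is the mechanism behind the “about 150%” figure discussed above.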
But, surely, in some sense we’re talking about a paper published 16 years ago. The method has changed. The proxies have changed. It’s all part of the scientific process. The changes are probably a combination of the original method having problems and newer methods being developed that would have been preferred even if there weren’t problems with the original method. I guess there’s no real problem with discussing the details of the original method, but it doesn’t invalidate the contribution that the original paper made.
==> “I think, though, there is a difference between someone who says “I read this somewhere else. What do you think?” and someone who makes comments that appear to be arguments in their own right and you only discovering later that they were simply things that had been read elsewhere.”
Sure, it’s different, but sometimes it might not be a meaningful difference. Let’s say that Miker did that in this case – how was that important?
Well, I think there’s a difference between having a discussion with someone who appears to be making an argument of their own and someone who is simply repeating what they’ve read elsewhere and doesn’t actually understand the argument being made. I should stress, though, that my issue is more with not realising that the latter was happening than with it happening (i.e., if it had been clear, I wouldn’t really have a problem).
To be fair, though, I didn’t really have any big issues with miker’s contributions here. I found them pleasant and I did think he was engaging in good faith (I’m also not all that comfortable discussing someone in their absence). I’m simply suggesting that it would be good if people made it clear whether they were presenting their own argument or presenting arguments they’d read elsewhere.
–> “I’m simply suggesting that it would be good if people made it clear if they were presenting their own argument or presenting arguments they’d read elsewhere,”
I discuss this now for two reasons.
The first is that Tom brought it up again presenting points that I don’t fully agree with.
The other is that I cannot help trying to understand the issues well myself. It’s a mathematical problem that I find intellectually interesting, and being retired I spend time on issues that interest me, even if they have little direct relevance.
Okay, I’ve had enough apologism for Miker. That’s two too many.
Miker exhibited complete intellectual dishonesty (aka bad faith) in my interaction with him. What is more, my interaction with him was simple enough for anyone to follow.
Pekka @12:30 pm, I follow your reasoning, but am not mathematician enough to say whether or not your reasoning is valid. I suspect, however, that it is invalid. The reason is the data provided by Michael Mann at Real Climate (and more or less confirmed by McIntyre at Climate Audit). Specifically, here are the variances explained by various principal components of the NOAMER tree ring series using standard (red crosses), and short centered (blue circles) PCA:
There is much more variance explained in the first two principal components using the short-centered analysis than in the first two using the standard analysis. Indeed, as it happens, the variance explained by those first two principal components (47.95%) is just 0.51% less than that explained by the first five principal components of the standard PCA (48.46%).
Now, according to your argument, the first two principal components from the short-centered analysis are inflated, with “more than 70% of the variance left”, whereas that is not true for the standard PCA. Ergo, on your assumption, the first two principal components of the short-centered analysis explain only 60% of what is explained by the first five principal components of the standard analysis. That being the case, they should produce markedly different reconstructions, which is just not the case:
Granted, that is just a visual comparison, which is a poor method in statistics, but Mann, Wahl and Ammann, and McIntyre all seem to agree that using the five standard-method principal components produces almost exactly the same reconstruction as using just the two Mannian principal components.
If I were to hazard a guess as to what is happening, I would say the first principal component is inflated as you suggest, but that the remaining principal components are also inflated in a way that additively cancels the inflated significance of the first principal component. Consequently, because all are inflated, the variance explained by all principal components remains constant. Further, so long as sufficient principal components are used to explain a sizable portion of that variance, the inflated variance of the first principal component is cancelled out. (All this assumes your initial intuition is correct, which may not be the case.)
Regardless of the ins and outs of this argument, however, the fact remains that no matter which method you use, if you use principal components based on rule N, the method makes virtually no difference to the final reconstruction.
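That closing claim, that with enough PCs retained the centering convention makes virtually no difference, can be illustrated on a toy network (entirely my own construction, not the MBH data): rebuild the data from the five leading PCs under each convention and compare the resulting mean series.

```python
# Rank-5 reconstructions of the same synthetic network under full versus
# short centering; the two resulting mean series should agree closely.
import numpy as np

rng = np.random.default_rng(6)
n_years, n_proxies, calib = 400, 40, 80
signal = np.concatenate([np.zeros(n_years - calib), np.linspace(0, 1, calib)])
X = signal[:, None] * rng.uniform(0.3, 1.0, n_proxies) \
    + 0.4 * rng.standard_normal((n_years, n_proxies))

def rank_k_fit(data, center, k):
    """Low-rank approximation of the data from the k leading PCs about `center`."""
    A = data - center
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k] + center

full_fit = rank_k_fit(X, X.mean(axis=0), 5)            # conventional centering
short_fit = rank_k_fit(X, X[-calib:].mean(axis=0), 5)  # "short" centering

# "Reconstruction": the mean across proxies of each low-rank fit.
r = np.corrcoef(full_fit.mean(axis=1), short_fit.mean(axis=1))[0, 1]
print(f"correlation between the two 5-PC mean series: {r:.3f}")
```

The correlation should be very close to 1: once enough components are kept, both centerings are describing the same data, which is the substance of the “makes virtually no difference” observation.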
For what it’s worth, I was the target of much of miker613’s cross checks:
On 2014-09-29, at 13:30:
Now, think about this: miker613 comes into a comment thread about my post, rejects everything I said in that post and excludes me over and over again from his exchanges, while pushing a peanut that has nothing to do with what I was saying. This is not my idea of tenderness:
I don’t much mind cross-checks. They usually backfire: everything you say and do can be said or done against you. But a ClimateBall ™ game without physical contact is no fun. So there’s a trade-off.
Please beware the rhetorical effect of piling on.
It takes at least two players to make a ClimateBall ™ game.
Every ClimateBall ™ player should own the moves they play.
Yes, willard –
I did look through Miker613’s comments in some more detail and saw that he is more guilty of slashing than I had gathered from a less concentrated reading of his comments.
Tom Curtis, Nick Stokes, Beast Of Caerbannog. I like pictures, they speak volumes.
I’d like to see a graph of the signal side by side with the pseudo noise, at the same scale. Then add the noise to the signal and see what it looks like. Under the hood, this is what’s being done. Such a picture would be worth thousands of words. (I’d bet millions have been spilt over this.)
Tom it may be fun to measure signal hockey stickiness, noise hockey stickiness, and combined hockey stickiness.
That’s exactly the plot I use to prove or at least justify my point.
The variances are on different scales. The blue circles should be moved up by a factor of about 1.5 to make them comparable. What’s not directly visible in the plot is the amount of residual variance after each step, i.e. what’s left when the values are subtracted from the total variance, which is 100% for the standard PCA but approximately 150% for the decentered case, which includes the real variability plus about 50% extra variance from decentering.
The seriousness of the issue is indicated by the size of the residual variance and by the fact that the long tail contains a systematic contribution created by the decentering process. That each of the remaining PCs is small and that each of them is orthogonal to all other PCs of the decomposition does not remove the problem, because they are not orthogonal to the decentering term, which is only roughly parallel to the PC1.
Believe me, this is a real issue of mathematics. Dismissing it must be justified by a separate analysis that proves that the quantitative effect is small enough in a specific application. Because I have not done the full analysis, I cannot tell how large it finally is in this case, but the numbers that I have presented should be enough to show that the issue must be taken seriously.
Joshua/Willard: miker613 was trolling pure and simple. He used some seriously great moves that I haven’t seen since I was 13. These were real pratt moves too.
Cover his ears and say “Nyah nyah, I can’t hear you.” (“I don’t get any of this math, but it seems really important.”) Answer: math… Response: “I don’t get math, but you are wrong.”
Then there’s “talk to the hand”. When he claims to be just a messenger.
But folks, this is really about spreading FUD (Fear, Uncertainty, and Doubt). Showing up the way they did creates the impression that there is a controversy. Arguments make it look like that.
Tom, Pekka – MBH used correlation/standardized PCA, MM used covariance/unstandardized PCA, which accounts for the different number of PCs.
Full centered covariance PCA (MM): 5 significant PCs, largest HS signal in PC4.
Short centered correlation PCA (MBH): 2 significant PCs, largest HS signal in PC1.
Full centered correlation PCA (W&A 2007): 2 significant PCs, largest HS signal in PC2.
Deepclimate has a good 2010 discussion on the biasing effect of short-centering here.
“…the biasing effect of “short-centered” PCA is much less evident when applied to AR1(.2), even when viewing the simulated PC1s in isolation. To show the extreme effect claimed by McIntyre, one must use an unrealistically high AR1 parameter. This is yet one more reason that the NRC’s ultimate finding on the matter, namely that “short-centered” PCA did not “unduly influence” the resulting Mann et al reconstruction, is entirely unsurprising.”
This has all been hashed out in detail over the years – short-centering bias, while present, is quantitatively _not_ the cause of MBH’s results.
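The covariance-versus-correlation distinction KR draws is easy to see on a toy example (the proxy scales below are made up): standardizing the columns changes how much of the variance PC1 claims, which is why the PC counts from the two formulations are not directly comparable.

```python
# Covariance PCA vs correlation PCA on the same data: with very unequal proxy
# variances, covariance PCA lets the big-variance proxies dominate PC1, while
# correlation PCA weights every proxy equally.
import numpy as np

rng = np.random.default_rng(5)
n = 300
shared = rng.standard_normal(n)                       # one common signal
scales = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # very unequal noise levels
X = shared[:, None] + scales * rng.standard_normal((n, 5))

def pc1_variance_share(data, standardize):
    """Variance fraction of PC1, with or without column standardization."""
    A = data - data.mean(axis=0)
    if standardize:
        A = A / A.std(axis=0, ddof=1)                 # correlation-matrix PCA
    s = np.linalg.svd(A, compute_uv=False)
    return s[0] ** 2 / (s ** 2).sum()

cov_share = pc1_variance_share(X, standardize=False)  # covariance PCA
cor_share = pc1_variance_share(X, standardize=True)   # correlation PCA
print(f"PC1 share: covariance {cov_share:.2f}, correlation {cor_share:.2f}")
```

Here the covariance formulation concentrates variance into PC1 (the noisiest proxy dominates), while the correlation formulation spreads it out, so a rule like Preisendorfer’s will generally retain different numbers of PCs under the two conventions.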
Do you have anything to comment on the points I have been discussing?
Pekka – The variance shift question is an interesting one, but as one might say, “the proof is in the pudding”, in the effects actually seen. And if you reconstruct with short or full centering, with correlation or covariance matrix PCA, and retain _all significant components_ as per Preisendorfer or other rules, you get essentially identical reconstructions. Hence that method of centering doesn’t distort the data compared to full-centering. Or the reconstructions would differ.
You seem to be arguing that it’s plausible for significant variance in the data to be pushed off into excluded components, the long tail – but please keep in mind that past a certain point the components are expressing variations between individual proxies, not the general trends of the proxies as a whole (the entire point of data reduction with PCA). After a certain point (say, when eigenvalues drop below those of uncorrelated white noise, or when a component contributes less to the reconstruction than any individual proxy) the components are effectively noise – and the tail of the PC spectrum contains a great many of these. And if significant general behaviors were excluded, those reconstructions would differ accordingly.
As I noted above and in a previous comment on covariance/correlation PCA, the 5 PCs in MM and 2 in MBH come from different PCA formulations, and the magnitudes are _not_ directly comparable in value: the same ~48% of variance is spread between 5 PCs on one hand and 2 PCs on the other. You would have to compare PC1 and PC2 from MBH with PC1 and PC2 (both significant) from full-centered correlation PCA (as in W&A) to get numbers that were meaningful.
In terms of certainty in the reconstructions, the validation step for MBH is a significant part of the paper. W&A show that if you use the 5 significant PCs from MM, the reconstruction passes that validation, but if you then drop disputed proxies, the MM reconstruction fails validation (insufficient data) in the 15th century, where it begins to disagree. I don’t see any evidence of loss of reconstructive skill under that validation – I suspect you would have to develop (and verify the correctness of) a different validation method than MBH used, keeping in mind that successive reconstructions using different statistical techniques and entirely different proxies are in overall agreement with MBH, which is itself quite a validation of the reconstructive skill. As you noted above, quantitation is where we can determine whether the MBH methodology is skillful, if foolish math errors have been avoided. There is considerable evidence indicating that it was indeed a skillful reconstruction for the techniques and available proxies of the time.
In short – do reconstructions using more standard centering and a generally accepted retention rule agree with those from MBH? Yes. Hence the short-centering did not distort the data.
Did not *meaningfully* distort the data.
I have finally found data for the NOAMER PC1 that I have been able to download. The data also included PC1 for the tree rings from the Sierra Madre occidental, and southern great plains (Stahle). I think it is instructive to view both together:
Bear in mind that both PC1s were produced by the method that “mines hockey sticks” according to McIntyre. Evidently, the Stahle database had no hockey stick to mine. Its Hockey Stick Index is a measly 0.31. In contrast, the NOAMER PC1 has an HSI of 3.2. That is 50% larger than the highest HSI from McIntyre’s cherry-picked top 100 pseudo-proxies, or 27.5 standard deviations above the mean of that cherry-picked top 100. I am sure the margin is much lower (closer to six standard deviations) relative to the full 10,000 pseudo-proxies from M&M05, but I do not have the statistics for both sets.
In any event, statistically, neither of these PCs owes its shape to any tendency to “hockey stick mining”. The one is too low in its HSI, and the other too high (something M&M should have known, and should have reported in M&M05). It is very clear from this that PC1 of the NOAMER series is a super “hockey stick” for the very simple reason that the information in that PC was in the original data (a point made clear by McIntyre’s additional, and inconsistent, line of argument that the NOAMER PC1 represents the data from the bristlecone pines).
First a correction, and then some additional data:
I have just checked the calibration period trend for NOAMER PC1 and it is high relative to the top 100 HSIs, but not sufficiently high to be statistically significant. I have yet to apply any variant Hockey Stick Indexes to it.
The additional information is that I have now calculated a better estimate of the deviation from the mean in terms of HSI by Stahle PC1, which is about eleven standard deviations. That is a ballpark figure only. If MBH used their standard procedure for the Stahle database, that represents a problem for McIntyre’s thesis. It is possible, however, that they merely used principal components reported by Stahle et al (which would tend to confirm the M&M05 thesis).
I see three levels in these questions:
1) The level of correctness of the methods. On this level decentered PCA introduces a bias, whose size is difficult to estimate, and that can in some cases be large, while it is probably small in most small samples (and all these samples are small in that sense). This level is the one I have discussed in my latest comments.
2) The effect of the choice of the method in this particular analysis. More extensive comparisons have evidently shown that the effect was not large, but if such comparisons are needed to assure that the method does not cause an error in this particular case, then it’s better to use methods that are better in control to start with.
3) What can we say about the skill of the original analyses in retrospect? “Analyses” refers here to the combination of methods and data. On that I see that the comparisons show that the skill was poor in a way that’s not fully reflected in the wide error ranges, because it’s highly unlikely that a skillful method would have produced so little low-frequency variability in the best estimates in comparison to what later analyses show. Skillfulness requires success first in forming the PC1 so that its expectation value at each past time matches the actual temperature, and then in having little enough uncertainty around that estimate to give useful results. Low-frequency results are less dependent on the latter. Therefore the poor success there indicates that the first step failed, whatever the reason.
Combining the success in finding the blade with the failure in skill leads in retrospect to the conclusion that the combination of data and methods was misleading. If the problem is finally not in the method, then the data has probably been non-representative: some of it had the blade, but not the corresponding past variability. This kind of problem is very common when statistical analyses are done with data that’s on the verge of being sufficient.
AnOilMan says “But folks this is really about spreading FUD (Fear, Uncertainty, and Doubt).”
Bingo, hole-in-one, scored a perfect hit. Now all you need to do is identify the pointy end of that spear.
I see this thread is still active. However it is perpetuating several misconceptions. Let me try and be brief and list some of them.
1. M&M say that MBH98 methods produce hockeysticks from nothing.
This is incorrect. They argue that Mann’s flawed “PCA” picks out whatever hockeystick-shaped series are in the relevant set and promotes them to the “PC1” – the supposed dominant pattern in the data – even if PCA properly done shows the hockeystick pattern explaining a much lower proportion of the variance.
2. It doesn’t matter anyway because if you do correct PCA you can get a hockeystick in PC4 and you can get Mannian results.
This is half true. The catch is that the PC4 is not the dominant pattern in the North American tree rings but the pattern produced by the bristlecone pines and Gaspe. So you are effectively arguing that this subset of trees are very special indicators of hemispheric temperatures whereas the rest of the network is not. This seems unlikely given that they are of an unusual stripbark form and were collected on the assumption that they were sensitive to CO2 fertilisation. Moreover later work suggests that the calibration period growth burst may well have been due to mechanical deformation.
3 Preisendorfer’s rule N says that the PC4 should be included.
The rule says that there is a pattern in the data. But no simple rule says that that pattern is temperature, see above. Moreover there is no evidence that Mann used rule N for anything other than the conventional PCA of instrumental temperatures. The claim that it was used for the tree rings does not appear until the 2004 Nature correspondence, and there is good reason to suppose it is false, namely that in the stepwise regressions different numbers of PCs for exactly the same networks were used for different dates.
4 M&M do not reproduce the stepwise regressions.
This is true for the M&M 2003 paper, which is not surprising because Mann had not revealed that this is what he had done in 2003. However both the 2005 M&M papers do reproduce the stepwise procedure, which seems to have been another innovation in methods by Mann.
5 The Mannian PC1 shows too little variation to produce the Mann hockeystick
This is incorrect. The MBH98 procedure is a two-step process. First the supposed temperature proxies are derived, then temperature is (inversely) regressed on the RHS proxies. It’s the regression step that scales the proxies (PCs have no natural units and are not in degrees). In the 1400 step only the bristlecones and Gaspe have a hockeystick shape. The remaining 20-odd proxies are little different from white noise. So the bristlecones and Gaspe dominate the shape of the reconstruction.
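The point that PCs have no natural units and that the regression step supplies the scale can be illustrated with a toy calculation (this is not the MBH98 code; all names are illustrative): rescaling a proxy by any constant leaves the fitted reconstruction unchanged, because the regression coefficient absorbs the factor.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 'temperature' and one 'proxy' over a calibration period
temp = rng.standard_normal(80)
proxy = 0.7 * temp + 0.3 * rng.standard_normal(80)

def fit_and_reconstruct(proxy_cal, temp_cal, proxy_past):
    """Least-squares fit of temperature on the proxy, then prediction
    of past temperature from past proxy values."""
    A = np.column_stack([proxy_cal, np.ones_like(proxy_cal)])
    coef, *_ = np.linalg.lstsq(A, temp_cal, rcond=None)
    return coef[0] * proxy_past + coef[1]

past = rng.standard_normal(500)                        # pre-calibration proxy values
r1 = fit_and_reconstruct(proxy, temp, past)
r2 = fit_and_reconstruct(10 * proxy, temp, 10 * past)  # same proxy, rescaled
print(np.allclose(r1, r2))  # True: the regression absorbs the scale
```

This is why arguments about the amplitude of a PC, taken in isolation, say little; what matters is the shape the regression step is given to work with.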
6 If the bristlecones/Gaspe are left out the reconstruction does not change
This is disingenuous. If only the bristlecone pines are left out, Gaspe dominates the reconstruction and produces a hockeystick. Similarly if Gaspe is left out the bristlecones dominate and you get a hockeystick. But if both are left out there is no hockeystick.
All these results have been demonstrated with code etc, either in the M&M articles or in technical blogposts at Climate Audit and elsewhere. If people want to disagree then it behoves them to show where these calculations are wrong (if they are). Otherwise it’s just handwaving. The bottom line is, as has been pointed out long ago, whether the Graybill bristlecones are a good Northern hemisphere temperature proxy.
I’ve been away for a couple of days, and don’t really have time to keep doing this anyhow. But I did not think that I was badly treated, certainly not by most of the people here. I thank those who spoke up on my behalf. IMHO, calmer discussions are more likely to lead to enlightenment.
I was happy to see that Tom Curtis, in addition to posting here and elsewhere, has posted at climateaudit (where they are disputing his points vigorously. I expect him to get his head handed to him, but YMMV.) That’s good.
I am also interested in Pekka’s contributions here, as he is a whole lot more capable of carrying on this conversation than I am.
As for the shriller voices in the debate, they are pretty ubiquitous on all sides on the internet. No point in fussing about it, or in pointing to them as proof that the other side is awful.
Mann’s results keep being replicated.
Why can’t those playing the man and a 16-year-old paper cease? Is it just poor loser syndrome?
I disagree. No one here has said that. What has been suggested is that some interpret MM05 in this way, but – as you say – they are wrong (i.e., it is regularly claimed on “skeptic” blogs that MM05 shows that the MBH98 method produces hockeysticks from random noise).
I don’t particularly disagree with the rest of what you’ve said, although I do think you’ve somewhat misinterpreted some of what has been said here (but I don’t particularly care). Just so it’s clear: I’m certainly not that bothered about MM05 and have no great interest in arguing about it or about MBH98 (other than that I’ve learned some things and found it interesting). At the end of the day, if we want to understand temperatures over the last millennium, we can simply go and look at papers published in the last few years. We don’t need to discuss (or argue about) papers published 10 or so years ago.
“The problem, says Mr. McIntyre, is that Dr. Mann’s mathematical technique in drawing the graph is prone to generating hockey-stick shapes even when applied to random data. Therefore, he argues, it proves nothing.”
Antonio Regalado, The Wall Street Journal
Feb. 14, 2005
Anders: “‘M&M say that MBH98 methods produce hockeysticks from nothing.’ I disagree. No one here has said that.”
Anders is right.
Furthermore nothing of significance has been presented. This fact is being studiously ignored.
The results have been replicated by other papers and studies, with and without Mann’s techniques. So who cares about MBH98?
There appears to be a scale issue, wherein the simulated noise is 1/10 the signal. So… we are arguing over 1/10 of any measurement. So who cares?
Newer curves are still well within the original MBH98 2-sigma error curves. This was an acknowledgement, even then, that there were issues with the data, and yet we still haven’t found enough difference in the signal to show the original paper wrong. So who cares about MBH98?
Carrick’s graph shows that it is within the original error margins. (I am not questioning what he’s done, although many would.) But if it’s within the error allowed in the original MBH98, who cares?
Techniques used in MBH98 have been abandoned for better ones by Mann himself. So who cares about MBH98?
This is squabbling over the insignificant mouse nuts of an old paper. It’s a big fat waste of time.
The first rule of MBH98 criticism is don’t admit there actually is a hockey stick.
anoilman: but as long as there are endless arguments about MBH98 and whether or not the 1400AD-ish results are perfect … the blade of the hockey stick remains in serious doubt and therefore can be ignored until all is known. (After all, future temperatures depend on current state, emissions, Milankovitch cycles, etc, not on what the state was pre-1400, so the only thing that ever mattered was doubt about the upward blade.)
As an analogy: see earlier comment about measurements. Do you realize that physicists do not agree on the exact value of G?
Given that uncertainty, people should feel free to throw kids off bridges until physicists agree exactly on the strength of gravity.
Sadly, as per your earlier comments, a large number of people have absolutely no clue about real-world measurements, error bars and signal-vs-noise.
As I have written so much here, and as Miker mentioned me again I want to stress a few points:
1) MBH98 and MBH99 extended earlier proxy analyses significantly. Before that it was not possible to conclude whether such analysis is possible at all. The papers also discuss potential caveats; to give an example, in MBH98 they write: Implicit in our approach are at least three fundamental assumptions ...
2) The methods were in part new. The short-centered PCA was not used in the main step, but in a preliminary step that had the goal of extracting and presenting relevant information from the North American network in a small number of input time-series for the main step. The method is certainly questionable, but it probably did its task well enough.
3) In retrospect, with the benefit of knowing the later work, it seems likely that they were somewhat over-optimistic on the skill of the analyses, but that kind of over-optimism is really common in early work that applies new approaches in situations where it’s not possible both to extract much information and to test fully the statistical skill of the methods. It’s typical, and probably beneficial for the progress of science, that the power to extract information gets more emphasis at that stage.
4) Scientific papers are written for other scientists, who are expected to understand the potential pitfalls. The paper should mention them (as these papers largely do), but it’s not wrong to be rather brief on them.
5) Since the late 1990s the practice of making all data and code available has become much more common for the benefit of science, but that’s a rather recent development still underway in many fields. (U.S. culture has long been ahead of the culture of most European countries in openness of information, where information created using public funds is concerned.)
6) I do think that many people still have major misconceptions about the technical contributions of Steve McIntyre and his collaborators. They have done a lot of work and are quite competent. I do think that their early criticism was significantly misplaced, but as the mainstream scientists have learned since the early parts of this controversy, so have they.
I don’t like the personality wars that have developed around these issues, but I don’t want to say anything more about that.
The MM contribution has political significance, regardless of how trivial it is in science.
Antonio Regalado’s WSJ February 2005 article sparked a Congressional dispute with Joe Barton and Ed Whitfield grilling the MBH scientists in what Sherwood Boehlert, chairman of the House Science Committee, called a “misguided and illegitimate investigation” which led directly to the NAS’s National Research Council report chaired by Gerry North, and the competing Wegman Report.
Since July 2003 (three months before MM first published on the subject) Senator James M. Inhofe had cited criticism of MBH as indicating “that manmade global warming is the greatest hoax ever perpetrated on the American people”. The claims for MM’s “hockey stick” critique continue to support that denial of science, hence the reluctance to move on.
Just a quick comment. Tom Curtis and maybe some others argued that it is inconsistent for there to be a low HSI for Stahle when M&M argue that Mannian PCA mines for hockeysticks. This argument only makes sense if you think that Mannian methods always produce hockeysticks, otherwise there is no inconsistency. I am happy we can all agree on the effect of Mannian methods, even if KR thinks it’s a valuable feature rather than a bug.
I am also glad you agree with my other points, so reject the silly idea put about by KR and Eli that the Mann PC1 cannot be the sole reason for the hockey stick in MBH98 because it does not vary enough.
For those who want to carry on discussing the current state of temperature reconstructions and leave this badly flawed 1998 analysis behind I suggest turning to http://climateaudit.org/2014/10/01/revisions-to-pages2k-arctic/
and subsequent posts.
Not quite sure you’ve completely absorbed all the comments that have been made. On the other hand, if you want to continue believing that it is badly flawed, carry on I guess.
I think the flaws have been demonstrated beyond reasonable doubt, it’s not a question of belief. The continued misunderstandings in the comments on this site long after the errors have been pointed out is depressing.
Jsam writes “Why can’t those playing the man and a 16 year old paper cease?”
I cannot speak for others of course, but I suspect that to the extent government policies stem from that 16 year old paper, continued discussion is likely.
John Mashey, thanks for that. I did have a good read through it on the first time around. I worry that kind of knowledge will be very troubling to denier understanding of GRACE.
I think the reason I brought up Aristotle’s History of Animals (ca. 350 BCE) was that he made so many errors in much of that original work. Yet standing around complaining about it now serves neither to enlighten us nor to improve our understanding.
Denier understanding? GRACE has to be a hoax and blog auditors know much more than physicists about the exact value of G. There is no drought in California.
The hockey stick as a physical reality is beyond reasonable doubt.
1) M&M do not say that explicitly, but they appear indifferent when “skeptics” misinterpret them as saying that, and they describe the process in tendentious language. The MBH98 method does not “mine for hockey sticks” – it gives greater weight to proxy data that more closely agrees with the temperature data over the calibration period. I suspect that had M&M not promoted their study by the use of deliberately evocative but ambiguous language, there would have been far less confusion on both sides about the significance of their findings.
2) Here are the means of the proxies and/or PCs taken to represent dense proxy networks for the 1820–1980 period, both with and without the ITRDB North American network:
You can see that without the two databases containing the (according to McIntyre) suspect data, the “blade” of the “hockey stick” is even more accentuated, but that the difference is small. MBH98 used nine principal components to represent the North American ITRDB group, suggesting that all the variability in that group was well represented.
The idea that the existence of the blade on the hockey stick (and hence a hockey stick shape) depends on just two groups of trees is without substance.
3) As already shown above, there is a good match between the number of PCs selected as per Preisendorfer’s rule N and those used by Mann. Further, as McIntyre himself notes:
If not sufficient, then PCs that meet the rule N cutoff can be rejected without violating the rule as it was intended to be applied.
More importantly, this is a distraction from Mann’s point at Real Climate. The point is not that there is a particular rule that must be followed, but that if you include approximately the same variance using standard PCA as Mann did using his variant of PCA, the outcome is effectively the same.
Looked at another way, M&M’s argument was (and is) not only that MBH should have used standard PCA, but also that they should have restricted the PCs used to explaining just 28.5% of the variance in the NOAMER tree ring series.
6) It is your argument that is disingenuous. Specifically, the blade of the hockey stick is produced by the regression of the data to 1820. Removing Gaspe and Bristlecone pines does not eliminate the blade from the hockey stick.
It is long past time that McIntyre and McKitrick stopped using vague terms like “hockey stick” so as to promote over-interpretation of their results. If you have a result, state it without figurative language, e.g., “If you remove data from two groups of trees from the reconstruction, the uncertainty in the period 1400-1600 increases to the point where the reconstruction is not useful over that period.”
If you say instead “removing Gaspe and bristlecone pines destroys the hockey stick” you will be interpreted as saying it will eliminate the blade. McIntyre knows this. So does McKitrick. That is why they introduced the ambiguous, figurative language.
Thanks to Tom Curtis, Pekka and others for pursuing the issue about PC selection and Preisendorfer’s Rule N. I’m glad miker613 didn’t feel mistreated overall. I just read an interesting exchange that took place today between AK and Jim D in the Steyn versus Mann: norms of behavior thread at Curry’s blog. Jim D sums it up quite nicely in my view, with his usual concise and focused style.
2) Actually short centered PCA was described in MBH98 as part of the main step of the reconstruction, but also used in the preliminary step without it being mentioned that that was done.
6) McIntyre (not McKitrick) has done a lot of work with a high level of competence but a persistent bias. Indeed, several persistent biases. One of his biases is that he insists on using only methods suitable for conditions where there is no persistent difference in signal-to-noise ratio between different proxies. The other bias is against any paper showing high modern temperatures relative to the MWP or to the Holocene as a whole, regardless of its technical merits. (You need only consider his response to Marcott et al. to see that bias in action.) Further, he deprives nearly all of his work of any scientific merit by searching for flaws, but not (typically) considering the consequences of correcting those flaws.
Other than that, I largely agree.
Well, I don’t think it has, and your comment is quite a typical example of the kind of exchange that I find depressing: “You’re all wrong, I’m obviously right, and isn’t that depressing”. However, I’m going to stick with what I’ve said already. I don’t really care all that much. Whether or not, today, we regard MBH98 as having been right, or wrong (neither term being particularly well-defined), doesn’t really matter given that we have much newer work that we can consider if we wish to understand our millennial temperature history.
Mann’s results keep being replicated. That’s why they affect policy.
Actually short-centred PCA was not described at all in MBH1998, nor was stepwise regression. These procedures only emerged post 2003.
You mean that this statement in MBH98
does not essentially describe short-centering? And that this statement
does not describe how they applied the selection rule?
To be honest, though, I have no great interest in restarting this whole discussion. My general view is that any discussion about MBH98 and MM05 is unlikely to be constructive, and I have seen nothing that convinces me that that view is without merit.
A technical comment. You write:
Writing that makes me wonder whether you have understood that MBH98 explains much less than 28.5% of the variance in the NOAMER tree ring series with the two PCs. They get larger numbers because short-centering adds to the variance, and most of what they get is from that added artificial variance. The variance of those proxies that show the strongest hockeystick gets multiplied by roughly 3 (the highest multiplier is 3.7 for ca534; 16 proxies have a multiplier higher than 2).
Short-centering may pick out the temperature signal roughly correctly, but it loses almost all other signal that might be relevant for the full PCA, as that remaining variance may correlate with other time series.
MBH 1998 has been demonstrated to use a biased method to pick out a pattern of tree ring growth almost entirely based upon the Graybill bristlecone pines (Gaspe matters too) for the period before roughly 1600. Other methods can also be used to pick out the bristlecone pines, including using the PC4 of a correctly performed PCA. The issue is, and always has been, whether these are sensible proxies.

If people spent more time on updating proxies and improving measurement, and less on devising idiosyncratic methods which weight some proxies relative to others in ways that are far from transparent, we might make some progress. Paying attention to what can be reasonably expected to be proxies for temperature would also be welcome. For tree rings we not only have to find trees whose growth is constrained mainly by temperature, but also try to work out whether the conditions, if any, which produce sensitivity to temperature have changed over the period for which the proxy is being used. Assembling pre-existing proxies in ensembles and sorting them using fancy algorithms of unproven efficacy looks like low-return work.

M&M showed that the supposed hemisphere-wide Mann reconstruction was in fact just a bristlecone/Gaspe reconstruction. None of the comments on this thread have persuaded me that there are any serious defects in the M&M analysis. Many of the comments are just repeating old misconceptions. Let’s leave MBH98 behind.
You quote “the proxy series and PCs were formed into anomalies relative to the same 1902–80 reference period mean”. This says absolutely nothing about how the PCs were calculated, which is where the short-centering comes in. PCs have no natural units so, once calculated, they can be set up however you like.
Similarly you quote “An objective criterion was used to determine the particular set of eigenvectors which should be used in the calibration as follows. Preisendorfer’s25 selection rule ‘rule N’ was applied to the multiproxy network to determine the approximate number Neofs of significant independent climate patterns that are resolved by the network, taking into account the spatial correlation within the multiproxy data set.” The calibration does not seem to be where the number of tree-ring PCs to retain was chosen.
Mikep. You lost. You’re fighting yesterday’s battles. Mann’s results keep being replicated. That’s why they affect policy.
Excellent idea. Let’s instead look at the state-of-the-art reconstructions for the last two millennia.
Here you go
We could also look at the carefully constructed and reviewed “sceptic” alternative:
jsam wrote “Mikep. You lost.”
Yeah, that’s what my team always tells the other team. I suppose it would help if you defined “lost”. There’s losing the math argument (that’s easy, no shame in it); the science argument (a bit embarrassing) and finally the policy argument where this “loss” isn’t evident.
M2. You’ve lost the argument about whether or not a hockey stick exists. It does.
Now talk about how to deal with it and stop nitpicking the work from getting on for two decades ago.
1) The variance left out that I quoted was from Michael Mann’s calculation of explained variance for standard PCA. Ergo, any issue relating to short centered PCA does not apply.
2) However, 2 PCs from short centered PCA purportedly explaining the same amount of variance as five PCs from standard PCA produce visually identical graphs when recombined. It follows that they explain (approx) the same amount of the original variance as the 5 PCs from the standard PCA. The only thing that differs is how that variance is distributed among the PCs. I think this is a key point. If the two PCs from short centered produce the same curve as the five from standard PCA, they explain the same variance. If not, “explain the variance” ceases to have any meaning.
verytallguy: Wait wait wait… Haven’t I seen that shape before…. What’s it called again? Hockey Stick?
What would the church lady say…
> I suspect that had M&M not promoted their study by the use of deliberately evocative but ambiguous language, there would have been far less confusion on both sides about the significance of their findings.
A meeting of the minds may be possible, after all:
I may have misunderstood what you meant by this
I understood that to imply that MBH98 would explain significantly more than 28.5% by 2 PCs. I expected it to explain significantly less, but it seems that my expectation was a little wrong, as the actual value seems to be close to that, not significantly less.
I got curious enough to make my own calculations on what the PCs of the short-centered analysis explain. The results are as follows (I give the share of variance explained by the first N PCs for N = 1–10):
Explaining how I did the calculation is too lengthy for this comment. I say only that it involved forming an orthonormal basis for the space spanned by the 10 first PCs of the MBH98 analysis. That was necessary because centering affects the orthonormality.
I expected somewhat smaller numbers, as these are only a little smaller than the corresponding shares of the M&M analysis, but perhaps normalization raises these values enough to get so close to the shares of the M&M version of the calculation.
I cannot guarantee that my calculation is fully correct, but I made during the calculation some checks that raised my trust in it.
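For what it’s worth, the kind of calculation Pekka describes can be sketched as follows (my reconstruction of the general approach on toy data, not his actual code): orthonormalize the span of the retained PCs, project the centered data onto it, and take the ratio of captured to total variance.

```python
import numpy as np

rng = np.random.default_rng(3)

def explained_share(data, basis_vectors):
    """Share of total variance of `data` captured by the subspace
    spanned by the columns of `basis_vectors`, after orthonormalizing."""
    centered = data - data.mean(axis=0)
    q, _ = np.linalg.qr(basis_vectors)   # orthonormal basis for the span
    projected = q @ (q.T @ centered)     # projection onto that subspace
    return (projected**2).sum() / (centered**2).sum()

# Toy data and its first two (full-centered) PCs
data = rng.standard_normal((200, 20))
centered = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
pcs = u[:, :2] * s[:2]                   # time series of PC1 and PC2

share = explained_share(data, pcs)
print(share)  # equals the share of the two largest eigenvalues
```

The orthonormalization step matters for short-centered PCs precisely because, as Pekka notes, decentering breaks their orthogonality, so their individual variance shares cannot simply be summed.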
Pekka – RealClimate has computed the explained variance for differing numbers of PCs in both MBH and MM here. MBH reaches ~48% of explained variance with 2 PCs (using the _correlation matrix_ to calculate PCA), while MM requires 5 PCs (using the _covariance matrix_ to calculate PCA) to reach the same ~48% explained variance. The difference in calculation matrix is the primary explanation for the differing significant PC counts, not the centering, as I discussed here.
At 2-3 PCs for correlation matrix PCA, or 5-6 PCs for covariance matrix PCA, you have accounted for ~48% of explained variance, which is just about all of the general data patterns above noise level.
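The correlation-versus-covariance distinction amounts to whether the columns are standardized before the decomposition. A small illustration (toy data, not the NOAMER network) of how the two conventions spread explained variance differently when columns have very different scales:

```python
import numpy as np

rng = np.random.default_rng(4)

def variance_shares(data, use_correlation):
    """Cumulative explained-variance shares from PCA via the covariance
    matrix (centered data) or the correlation matrix (standardized data)."""
    x = data - data.mean(axis=0)
    if use_correlation:
        x = x / x.std(axis=0)            # standardizing gives correlation PCA
    s = np.linalg.svd(x, compute_uv=False)
    return np.cumsum(s**2) / (s**2).sum()

# Columns on very different scales, as with unstandardized proxy data
data = rng.standard_normal((300, 10)) * np.array([50, 20] + [1] * 8)

cov_shares = variance_shares(data, use_correlation=False)
cor_shares = variance_shares(data, use_correlation=True)
print(cov_shares[0], cor_shares[0])
```

With covariance PCA the numerically largest-scaled columns dominate the leading PCs, so more PCs are needed to reach a given explained-variance share of the common patterns; this is the same effect W&A describe for the `princomp` default.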
They get an inflated 48% of the inflated variance. All the extra variance comes from decentering, which adds very much variance to time series that contain strong “hockeystickiness”. Most of that goes to PC1, which therefore shows a meaningless share. With more decentering that share can be brought arbitrarily close to 100%.
My calculation checked how much of the real variance of the proxy time series was explained by the space spanned by the PCs of the MBH98 paper. The rest was left entirely out of those PCs. In addition, the variability that was included was also distorted, but probably not very much.
The standard PCA with same normalization gives an upper limit for the variance that each number of PCs can explain. All decompositions that differ from that explain less.
I also calculated the centered PCs using the MBH98 normalization. The results did not differ much from the M&M results. The first eigenvalue was smaller (17.57%) than that of M&M, the two first combined explained slightly more (28.81%), etc. So this is not the main source of the difference. The difference from the decentered case is really small, with the exception of the two first PCs taken separately. At every step the centered PCs explain slightly more, as they must, but the difference is smaller than I expected.
The conclusion remains that the two first PCs of MBH98 explain only 28.1% of the total variance of the normalized proxy time series.
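The "upper limit" claim above is the standard optimality property of PCA: for any fixed number of components k, the leading centered PCs capture at least as much of the sum of squares as any other orthonormal k-dimensional basis. A small numpy sketch of that property, on synthetic data:

```python
import numpy as np

# Correlated synthetic columns (not the proxy data).
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8)) @ rng.normal(size=(8, 8))
Xc = X - X.mean(axis=0)
total_ss = (Xc ** 2).sum()

def captured(B):
    """Fraction of the total (centered) sum of squares captured by
    projecting onto the orthonormal columns of B."""
    return ((Xc @ B) ** 2).sum() / total_ss

# The k leading right singular vectors of the centered data are the k
# leading PCs; no other orthonormal k-basis can capture more variance.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
pca_share = captured(Vt[:k].T)

Q, _ = np.linalg.qr(rng.normal(size=(8, k)))   # some other orthonormal basis
other_share = captured(Q)
```

Here `other_share` can never exceed `pca_share`, which is exactly why any decomposition that differs from the standard centered PCA explains less at every truncation level.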
Yes, let’s leave MBH98 behind. But please don’t replace it with Marcott, which gets a hockey stick from a group of proxies none of which individually shows a hockey-stick shape, by selectively dropping low proxies at the end point and thus raising the endpoint average. Pages 2k is more interesting, though there are still issues. However, the recent revision seems to give a picture with an MWP very similar to today. See discussion at
Pekka – I’m quite puzzled by your results. Did you perform PCA on the variance-covariance matrix (default with princomp/R, which is what MM used) or PCA with the correlation matrix (as per svd/R and MBH)? Because according to W&A 2007 those two techniques produce 5 and 2 significant PCs respectively, with the centering only changing the order of the first two correlation matrix PCs. And with standardized (svd) PCA producing two significant PC’s with _both_ centering methods.
W&A further state that:
“The effect of using “princomp” without specifying that the calculation be performed on the correlation matrix (an alternate argument of “princomp”) forces the routine to extract eigenvectors and PCs on the variance-covariance matrix of the unstandardized proxy data, which by its nature will capture information in the first one or two eigenvectors/PCs that is primarily related to the absolute magnitude of the numerically largest-scaled variables in the data matrix. We have demonstrated that this method of PC extraction has the effect of shifting the actual temporal information common to the North American ITRDB proxy data into higher-order PCs, especially the fourth PC”
Your results would make sense with covariance PCA and short-centered data, but (IMO) not with correlation PCA as per MBH.
MikeP – Marcott et al stated in their paper that the last century or so of their reconstruction has lower uncertainty for that very reason. But what you are actually responding to comes from the alignment with the latest CRU-EIV composite temperature reconstruction over 510–1450 years before present, which is in turn aligned with the instrumental record, the three records being joined end-to-end over their common periods. The blade at the end of the graph you are responding to is _not_ from their proxies, and it is an error to claim that it’s affected by their proxy-number drop-off.
The CRU-EIV reconstruction is from Mann et al 2008, which I expect you will object to due to the author, but it is in general agreement with all the other reconstructions over the last 16 years or so.
I didn’t use R; I did the calculations in Matlab. I also have R, but I have used it very little. Matlab is more convenient for ad hoc calculations that I don’t need to repeat. Thus I have not written code, but proceeded step by step using the interactive capabilities of Matlab.
For the centered calculation I normalized the time series similarly to MBH98, based on the detrended variance of the period 1902–80. That normalization might be essential, as a few (at least one) of the time series have a very small variance and thus get multiplied by a big factor, but those time series probably have little weight even after normalization. After that normalization I don’t really know of any further arbitrariness, as long as the problem is solved in one step by forming the covariance matrix and inverting it. In a stepwise approach it’s possible to renormalize the residuals after the determination of each PC. That might lead to a different orthogonal base, but such a method would be a strange choice (IMO).
I didn’t solve the short-centered case but took the solution (the first 15 PCs) from Mann’s material. Then I orthogonalized the eigenvectors with respect to the measure of the fully centered time series, keeping the first eigenvector unchanged. The second was then the orthogonal combination of the first two PCs of MBH98, the third similarly orthogonal to the first two, etc. Using the normalized orthogonal base I calculated the contributions to the variance explained by the space spanned by the first N PCs.
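The orthogonalization step just described is essentially Gram–Schmidt with the first vector held fixed. A minimal numpy sketch (using the plain Euclidean inner product for simplicity; the calculation described above used a measure based on the fully centered series instead):

```python
import numpy as np

def gram_schmidt_keep_first(V):
    """Orthonormalize the columns of V in order, leaving the direction of
    the first column unchanged. A simplified sketch of the step described
    above, under the ordinary Euclidean inner product."""
    V = np.asarray(V, dtype=float)
    Q = np.zeros_like(V)
    for j in range(V.shape[1]):
        v = V[:, j].copy()
        for i in range(j):                       # remove components along
            v -= (Q[:, i] @ V[:, j]) * Q[:, i]   # the earlier basis vectors
        Q[:, j] = v / np.linalg.norm(v)
    return Q

rng = np.random.default_rng(2)
V = rng.normal(size=(10, 4))
Q = gram_schmidt_keep_first(V)
# Column 2 of Q is a combination of columns 1-2 of V, column 3 of
# columns 1-3, etc., mirroring the construction described above.
```

The resulting basis spans the same space as the original vectors at every truncation level, which is what allows the explained-variance contributions of the spanned subspaces to be computed.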
All three sets of consecutive shares of explained variance have thus been obtained independently and by different methods. All are used to calculate how much of the real variance of the time series is covered. As the results tell, the explanatory power differs mainly for the first few PCs (what they contain probably differs more than the amount of variance they explain). After a few steps the residuals are probably quite similar in content as well, but it would require further work to be sure of that.
It seems that the explained shares differ substantially only when extra “variance” is added through decentering. I put quotes around “variance” because it is no longer the real variance of the time series but something else, used in the further calculations of MBH98 as if it were variance.
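A toy numpy illustration of that point (entirely synthetic series, not the NOAMER data): centering on only a late window adds a large common offset to any series with a late-period "blade", and that offset mass lands mostly on PC1, inflating its apparent share of the sum of squares:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 580, 20
t = np.arange(n)
# Red-noise stand-ins for proxies; give a few a late-period "blade".
X = np.cumsum(rng.normal(size=(n, p)), axis=0) * 0.05
blade = np.where(t >= 500, (t - 500) * 0.3, 0.0)
X[:, :3] += blade[:, None]

def pc1_apparent_share(X, center_slice):
    """Share of the (de)centered sum of squares landing on PC1 when the
    mean is taken over `center_slice` only."""
    Xd = X - X[center_slice].mean(axis=0)
    s = np.linalg.svd(Xd, compute_uv=False)
    return s[0] ** 2 / (s ** 2).sum()

full = pc1_apparent_share(X, slice(None))        # conventional full centering
short = pc1_apparent_share(X, slice(500, None))  # "short" centering, late window
```

With short centering the blade series acquire a large common offset over the whole pre-window period, so `short` exceeds `full`: the extra PC1 share reflects the centering trick, not additional real variance in the series.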
Pekka – Ah, that’s the problem: “…by forming the covariance matrix and inverting it…”. The default in MatLab is to perform PCA using the covariance matrix. You have replicated MM, but not MBH, which used correlations. From the MatLab help pages:
“princomp centers X by subtracting off column means, but does not rescale the columns of X. To perform principal components analysis with standardized variables, that is, based on correlations, use princomp(zscore(X)).” (Emphasis added)
Apples and oranges: covariance PCA requires more PCs than correlation PCA for this dataset (5 vs. 2), and you won’t get the MBH significance levels unless you use the same technique – which means correlation matrix PCA.
Both are quite acceptable ways of performing PCA, as long as you retain all significant PCs. And the choice of covariance or correlation is _separate_ from full or short centering.
MatLab help link here.
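The quoted help-page remark can be verified directly: z-scoring the columns first makes the covariance matrix of the standardized data identical to the correlation matrix of the raw data, so "covariance PCA on zscore(X)" is correlation PCA. A quick numpy check (synthetic data):

```python
import numpy as np

rng = np.random.default_rng(4)
# Columns on deliberately different scales.
X = rng.normal(size=(200, 6)) * np.array([1, 2, 5, 10, 0.1, 3])

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # the zscore step from the help page
corr = np.corrcoef(X, rowvar=False)        # correlation matrix of raw data
cov_of_z = np.cov(Z, rowvar=False, ddof=0) # covariance matrix of z-scores
# The two matrices coincide, so their eigenvectors/PCs coincide too.
```

This is also why the choice of covariance versus correlation is separate from the choice of centering: standardization fixes the scales, centering fixes the baseline.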
Mikep: Again… you are pretending there is some sort of concern about the Pages 2k Paper without any basis for such thinking.
The fact that there is an Arctic update well within the predicted error margins is not newsworthy, let alone blog-worthy. It is not controversial. If you look at the original paper and supplementary material, it is clear and obvious what the concerns are, and that there are more updates coming. I do not need to see a blog post from McIntyre to know and predict this.
Just so you know, there will be more updates coming, as the actual professionals in this field improve their knowledge.
Here’s the original paper;
The Update: http://www.nature.com/articles/sdata201426
And so you can see… not much different;
It is curious that McIntyre chose to show you the Kaufman 2009 comparison to the update, rather than the actual data graph of Pages2k 2013 and the Arctic update 2014, as I have. (Kaufman’s graph is unrelated to the Pages2k graphs. They discuss the differences in the Pages2k paper and the Pages2k update.) If you compare the new data to the original in the supplementary materials (Figure S2 | Proxy temperature reconstructions), it is well within the error margins, so no fuss.
I look forward to finding out what minutiae you expect us to pore over next.
Thank you so much for your concerns over Pages2k and Marcott, showing so well your intent to move forward. It is much appreciated.
From the realclimate article we can read about the MBH98 analysis:
That’s what I followed. Adding a correlation matrix to that would make that statement moot.
Using the correlation matrix and using the covariance matrix are the same thing after first rescaling every time series to the same variance. This specific scaling deviates modestly from a correlation-matrix-based analysis, which is exactly the same as the covariance-based one when the scaling is based on the full period.
Pekka – I would beg to differ, as would Wahl and Ammann 2007 and Michael Mann at RealClimate who lists the explained variances of the MM and MBH PCs. The line on normalization you quoted is for centering, not the matrix operation.
The covariance and correlation matrices do not have the same eigenvectors, nor the same significance values. It’s my understanding that the differences in PCs are due to the fact that the relationship between the raw and standardized matrices isn’t an orthogonal transform (an n-D rotation), hence the maximum-variance vectors are _not_ rotated versions of one another. [Correlation PCA is preferred if the variables have different scales, as variables on larger scales can otherwise contribute disproportionately to the variance and throw off the dimensional analysis. Both methods provide orthogonal basis functions for the variability of their respective matrices, and both are valid approaches to data reduction.]
Did you rescale the covariance matrix and test? Have you been able to replicate the MBH significance numbers with short-centered correlation matrix PCA? That would be a very useful check. As it is, your statements on explained variance are entirely contradictory to W&A 2007, who were able to replicate both the MM and MBH results. And they found two significant PCs with full centering and correlation PCA, not five.
To complete the study I calculated the standardized case. Now the first values are
1 PC: 17.0%
2 PCs: 27.1%
3 PCs: 34.4%
4 PCs: 38.9%
5 PCs: 43.1%
Do you still maintain your doubts?
I have been looking at one issue: how much of the variability, measured by variance, each method explains. All explain about as much with the first two PCs. The one number obtained directly from short-centered PCA is totally non-comparable and actually meaningless, as it depends very strongly on an arbitrary trick. It tells more about the strength of the trick than about anything else.
Yes, since fully centered and standardized PCA with 2 PCs matched the MBH and MM 5-PC reconstructions. If those two PCs contained only half the variation (as did MM using only 2 PCs), they would not match. See W&A 2007, all figures, reconstructions labeled “WA”.
All the other calculations have been straightforward as soon as I got the data into Matlab, except the estimation of the variance explained by the short-centered analysis. Those straightforward calculations show directly that all three normalizations give very similar results for the variance explained by the first few PCs (and by any larger equal number as well):
– no normalization (M&M according to RealClimate, I haven’t checked)
– full standard normalization (my latest numbers)
– normalization of MBH98 based on the 1902–80 detrended variance.
Furthermore my analysis tells me that the short-centered PCA also explains about as much, but based on other sources the 28% it explains with two PCs is not the same 28% the centered method explains. (I haven’t yet looked at those plots, so this is not my own observation.)
I’m also absolutely certain that the calculated explained apparent variance of the short-centered PCA is totally incomparable with the other numbers. It’s worse than comparing apples to oranges; perhaps closer to comparing basketballs to oranges.
Pekka, I work from the principle that:
1) If two lines are isomorphic, their variance is identical if calculated from the same mean.
I say from the same mean because if the lines are Principal Components, the variance calculated in the standard manner will be calculated from the mean of the original set of data from which they are decomposed. That does allow the possibility that two Principal Components, otherwise isomorphic, have different variance because they are offset in the y-axis relative to the mean of the original series, but that case must be sufficiently unusual that in the case of two isomorphic or near isomorphic lines, it needs to be proved to be the case rather than assumed to be the case.
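The distinction drawn here is easy to verify numerically: two series differing only by a constant offset have identical variance about their own means, but different "variance" when measured about a fixed external mean, the difference being the squared offset. A small numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=100)
y = x + 2.0        # same shape as x ("isomorphic"), just offset in y

# Variance about each series' own mean is identical under a shift.
assert np.isclose(x.var(), y.var())

def var_about(series, mu):
    """Mean squared deviation about a fixed external mean mu."""
    return ((series - mu) ** 2).mean()

# About a fixed external mean (e.g. the mean of the original data set)
# the two differ, since var_about(s, mu) = s.var() + (s.mean() - mu)**2.
```

So two near-isomorphic PCs can only have different explained variance if they are offset relative to the original-data mean, which is the case the principle above asks to be proved rather than assumed.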
So, consider the MBH98 NOAMER PC1 (red) vs the M&M03 PC4 (blue):
Clearly they are not isomorphic in that if they were the graph would display just one colour. However, visually they are very close to being isomorphic. Ergo applying principle (1) above, they must have very close to the same variance explained. However, if I understand you correctly, you calculate the MBH98 NOAMER PC1 to have a variance explained of 16% relative to the standard PCA mean, rather than the 6.6% as calculated for the M&M03 PC4. That near 10% difference suggests strongly that your method is in error.
Further, consider the reconstruction using the first 2 PC from MBH98 NOAMER, and the first 5 PCs from M&M03:
Again near isomorphic. Granted, this is a reconstruction using all 20 available proxies plus the Stahle PC1 in addition to the NOAMER PCs, and not a simple recomposition. The result, however, is consistent with the first two PCs from the MBH98 method containing nearly the same total variance as the first five PCs from the M&M03 method. (That would require PC2 from the MBH98 method to be equivalent to a recomposition of PCs 1–3 & 5 of the M&M method.)
I can now say something about the PCs produced by the two centered analyses I have done. They do not push the hockey-stick-like behaviour as far as PC4; it comes mainly into PC2. In the case of the standard normalization PC1 also gets a fair share of it, but with the normalization of MBH98 almost the whole visible signal goes to PC2. Thus it seems that the totally un-normalized calculation of M&M is the only one that pushes it to PC4. A totally unnormalized PCA can produce very strange results when the time series are not on a comparable scale for other reasons.
In the case of a multi-proxy analysis there’s no obvious reason to expect that the scales are appropriate to start with, unless it’s known that the original authors (or earlier processing of the time series) followed rules that assure that. Therefore some normalization is almost certainly required; what it is in detail is not that important. Thus the two cases I calculated differ a little, while M&M differs a lot in the content of the first few PCs (but not in their contribution to the variance).
To avoid leaving a wrong impression of what I have learned from my calculations concerning the NORAM1400 network, I summarize the results briefly. One essential point is that all of this concerns NORAM1400 only; similar issues are much smaller for the periods that start after 1450.
I have now fully reproduced the calculations of MBH98 on that network, as well as the calculation of M&M05. In addition I have calculated PCs in two further ways: with standard centering and standard standardization of the time series, and with standard centering and the MBH98 standardization of the time series scales.
What’s new in my calculation, to the extent that I haven’t seen it elsewhere, is the determination of the shares of real variance explained by the MBH98 analysis, i.e. the observation that the share of the first two PCs is 28% rather than the 48% claimed, highly misleadingly, elsewhere.
The only alternative that gives clearly different final results is M&M05, which gives significantly less weight to all the time series that have a clear hockey-stick shape than to several other proxies, because the time series just happen to be scaled that way in their original form. That choice pushes the “hockeystick signal” down to PC4. All the other alternatives have that signal in the first two PCs.
All the alternatives explain about 28% of the total variance of the proxy time series with the first two PCs. That’s a rather low value, but it contains most of the hockey stick in every case other than M&M05. Such a low value means that two PCs cannot describe well everything the time series might tell, which could have influenced the final results significantly.
Comparing with the paper of Wahl and Ammann (2007), I find that everything appears consistent. My analysis reveals something that’s not discussed in their paper, and their paper contains a great deal that I haven’t reproduced. More specifically, their calculations show that the choices and methodological weaknesses of the MBH98 analysis didn’t distort the final results. The proxy data they used leads to similar results under a wide range of alternative choices. Thus their paper shows that what could have taken place didn’t, in this case.
Some of the W&A comparisons drop the two-step nature of the analysis entirely and again get very similar results. This is the most direct proof of how small the influence of the methodological choices on the final results is. (In retrospect that’s perhaps the simplest approach MBH98 could have chosen, using small weights for the time series included in tight networks like NORAM to cancel the error from weighting them by their large number of original time series.)
jsam writes “M2. You’ve lost the argument about whether or not a hockey stick exists. It does.”
I have not argued whether hockey sticks exist.
It would be more correct to assert that I have not easily followed this argument in the first place.
Instead of arguing for or against it, I have spent several days of intense study on the topic of principal components analysis so that I can understand the language.
Having completed two passes over half a dozen tutorials, I have also reviewed the various arguments being made. I am not ready to make an argument; however, that’s not to say I lack a tentative faith-based opinion. I have studied PCA enough to recognize that it *could* produce hockey-stick-shaped graphs, particularly if, as some here assert, the input isn’t exactly random but *chosen* to appeal to the PCA method’s search for greatest variance and correlation.
I am grateful to those here who have argued in detail their point of view providing examples, data and methods; thus stimulating me to finally delve into it for myself.
“So MBH1998 and 1999 only work if these peculiar trees – selected by the original collectors of the data because of their supposed sensitivity to CO2 and uncorrelated with local temperature -are good proxies for Northern Hemisphere global temperature.”
Well, a certain professor who’s held in high esteem in some quarters – [“I just finished listening to Murry Salby’s podcast on Climate Change and Carbon. Wow. […] If Salby’s analysis holds up, this could revolutionize AGW science.” Judith Curry] – has shown (by assertion?) that CO2 follows temperature and must therefore be an excellent proxy for temperature. Trees that increase growth in response to CO2 (that’s supposed to be a benefit of more CO2, right?) must therefore be a proxy for temperature as well.
Which leads to the straightest shaft hockey stick temperature curve of all –
[Edited the graph -w]
Salby? You should be more sceptical.
I had the impression that Brian’s comment was ironic, but maybe I’m wrong – I do find it hard to tell these days.
ATTP – you could well be correct. In which case I owe Brian an apology. My phasor may be stuck in “mock mode, stun”.
Do the people who post here, there, and everywhere that “there have been a dozen replications of MBH, and _every single one_ shows a hockey stick” – have those people even heard of Hanhijarvi-13? I certainly hadn’t.
“Their reconstruction is based on a very large subset of the PAGES2K data (27 of 59 series) and uses no other data. But unlike PAGES2K, its medieval reconstruction has higher values than its modern reconstruction – a finding that has received negligible coverage. Because its methodology matches the PAGES2K methodology, the difference necessarily arises from Kaufman proxies that are not in the H13 network, including the contaminated Igaliku series, the upside-down Hvitarvatn series, Briffa’s (old) Yamal superstick.”
Same source, slightly different topic:
“I’ve now written a number of posts on PAGES2K (see tag)… [lengthy paragraph listing half a dozen issues] All of these criticisms were adopted in the PAGES-2014 non-corrigendum… In my initial response to the non-corrigendum corrigendum, I observed that there were other shoes that might still drop [several more issues that haven’t been dealt with].”
Note his charts on how major the changes to PAGES2K were – it moved about half the distance between the original and H13.
As they say, Read the Whole Thing.
Did people who quote PAGES2K as the last word in the hockey stick (about _fifty_ times in these comments) know about these issues? How could they, if they don’t read climateaudit? Who is pro-science, the ones who endlessly quote a link ’cause they like it or the ones chewing through the data and the real issues? The ones who post ad-hominems against McIntyre or say that his blog doesn’t exist because it’s not a journal, or the ones who saw that he made numerous correct points and _published_ them, making corrections to their paper [of course, without acknowledging him]?
So miker613 is peddling again, this time with the “pro-science” ringtone.
An interesting quote:
> I present this example, in part, as one further rebuttal of Nick Stokes’ fabricated claim that I have supposedly been reluctant to show the effect of criticisms on proxy reconstructions or that such graphics are in any way “inconvenient”.
Here’s what Nick said:
In Nick’s quote, “this” refers to the PC1 argument, not “criticisms” in general. The effect of the PC1 argument is still unclear with this [latest post].
I’m sure Nick wouldn’t disagree that the fiercest player in ClimateBall ™ history can use his old graphs when comes the time to slime researchers.
what a bizarre comment, and the CA post is equally bizarre.
The study (though not referenced at CA) appears to be this one
“Comparisons of the PaiCo reconstruction to recent reconstructions covering larger areas indicate greater climatic variability in the Arctic Atlantic than for the Arctic as a whole.”
The author is also an author on Pages2K
It’s discussed in the Realclimate Pages2K writeup:
“One of the new procedures used to reconstruct temperature is an approach developed by Sami Hanhijärvi (U. Helsinki), which was also recently applied to the North Atlantic region… … Hanhijärvi applied this procedure to the proxy data from each of the continental-scale regions and found that reconstructions using different approaches are similar and generally support the primary conclusions of the study.”
So this study seems to look at a subset of the northern hemisphere, and yes, when smaller regions are considered, there is more variation. Which is what everyone would expect.
What’s your point?
You seem a little overwrought. AFAICT, that ClimateAudit post is just about the Arctic; it’s not global.
Ad homs? Blog doesn’t exist? No acknowledgement? What?
The study is also discussed on CO2 Science, my goto resource for “pro-science” stuff:
Ah, the important question indeed…
==> “Who is pro-science, the ones who endlessly quote a link ’cause they like it or the ones chewing through the data and the real issues?
You ask who is pro-science? I would ask, “Who is anti-logic?”
Mike – if you object to the label of “anti-science” being applied to Stevie-Mac, why do you suggest it should be applied to other people who are working scientists, or who spend significant %’s of their time studying scientific evidence?
Is it logical to argue that because you don’t think that Stevie-Mac is “anti-science,” and that applying that label to him is fallacious, therefore someone else must be “anti-science”?
“Mike – if you object to the label of “anti-science” being applied to Stevie-Mac, why do you suggest it should be applied to other people who are working scientists, or who spend significant %’s of their time studying scientific evidence?” Did you read what I wrote carefully? It included the PAGES2K authors.
I have no problem with scientists being called scientists. I have a big problem with McIntyre not being, given that he is obviously a major unacknowledged contributor to paleoclimatology.
On consideration, I think that Joshua is right; this was unclear:
“The ones who post ad-hominems against McIntyre or say that his blog doesn’t exist because it’s not a journal, or the ones who saw that he made numerous correct points and _published_ them, making corrections to their paper [of course, without acknowledging him]?”
What I was trying to say was that the first group (before the comma) are doing anti-science things IMHO, whereas the group after the comma (the PAGES2K authors) are doing a pro-science thing by dealing with truth where they find it.
As to who, if anyone, falls into the first group, I’ll leave that up to people’s consciences.
You seem a little overwrought. AFAICT, that ClimateAudit post is just about the Arctic; it’s not global.
The ones who post ad-hominems against McIntyre or say that his blog doesn’t exist because it’s not a journal, or the ones who saw that he made numerous correct points and _published_ them, making corrections to their paper [of course, without acknowledging him]?
Ad homs? Blog doesn’t exist? No acknowledgement? What?”
And the Pages2k report shows a rather large error margin in the arctic. That is how I conclude there will be more updates coming. There is no need for McIntyre’s blog to even exist. Just read the articles from the journals.
But I already have it on good authority that Miker613 can’t do that. He says he can’t understand math.
Something to consider is that if Steve McIntyre only publishes what he does on his blog, then he can really have no expectation of others knowing – or acknowledging – what he’s doing. You might be surprised as to how few people who work in climate science realise that these topics are being discussed on blogs.
“So this study seems to look at a subset of the northern hemisphere, and yes, when smaller regions are considered, there is more variation. Which is what everyone would expect.” As I said, Read the Whole Thing. McIntyre discusses what’s included and what’s not; I expect he disagrees with the characterization. But I’ll post your question there.
I’d be surprised if Steve McIntyre disagreed with the suggestion that we’d expect more variability if we consider local regions, rather than the whole globe. I don’t think it’s all that controversial a point.
Miker613: HAHAHAAHAAA!!!!! What a joke!
“I have no problem with scientists being called scientists. I have a big problem with McIntyre not being, given that he is obviously a major unacknowledged contributor to paleoclimatology.”
What exactly precisely has McIntyre contributed? Sweet F**k All near as I can tell. He’s got a couple of papers bitching about Mann. Backed up by years of whining about it online. (He can certainly earn a deleterious title for that I think.)
Meanwhile in the real world, there are literally hundreds and hundreds of papers being pored over by hundreds and hundreds of experts. Then there’s the work that goes into understanding each regional component of the paleo data, for which there are hundreds of papers.
To say McIntyre is a major contributor isn’t just a stretch, it’s preposterous.
*I* pointed out to you when you first went “huh” that the Arctic is not the whole planet. You blanked me then and you are still apparently misrepresenting exactly this point. Please stop it.
In the end, IMO, people on both sides play a game of personality politics. It’s information no matter who does it. It’s information when Stevie-Mac does it, it’s information when his detractors do it.
When I read through your comments in direct succession, it struck me that you’re being something of an “advocate” here – an advocate on behalf of Stevie-Mac and climateaudit.
On one level, on a level of science, there’s nothing wrong, IMO, with advocating for a careful process of argument and rebuttal.
But my sense is that your advocacy stretches beyond that (at least at times), which in the end only detracts from good faith discussion. That is the context in which I view the “pro-science” angle of your comment. Asking who is “pro-science” is just the flip side of accusing people of being “anti-science.” I think that an accusation of “anti-science” is weak rhetoric of a partisan – no matter who does it.
There is some variability in the ratio of science to personality politics in the discussions that take place at this blog.
I suppose that some folks walk in innocently and get attacked with an anti-science/personality-politics club, but I think that what most often happens is that someone wades in carrying the anti-science personality-politics club, gets whacked back with one, and then plays a victim card. I think that the victimization is usually, essentially, self-inflicted.
There are folks here who will engage with you in discussion of the science. I think that to the extent that is your interest and focus, you can construct your comments so as to avoid the personality politics game.
“Something to consider is that if Steve McIntyre only publishes what he does on his blog, then he can really have no expectation of others knowing – or acknowledging – what he’s doing.” Sounds possible. But Kaufman seems to have read it, and that’s good. Seems like a lot of people do read it.
I’m not even sure that publishing in a journal makes sense here; it would just slow things down, and a list of mistakes in someone else’s paper is not interesting enough to publish. I like this way better.
You mentioned that this post is on the Arctic reconstruction. But PAGES2K is really a combination of eight or so different papers, enough so that one of the reviewers thought that they need to be peer-reviewed separately. Is the claim that because Steve McIntyre didn’t post (as many times) on the other seven, that they must be just fine? I’m not sure that I would feel comfortable drawing that conclusion.
I’m not quite sure who you’re aiming that at. My point was simply that it would seem silly to expect people to acknowledge in peer-reviewed work what others might have done on a blog. Nothing wrong with it, but how would one know if they’d developed the idea themselves or noticed the blog posts. There are many people working in this field, many of whom are at least as capable as Steve McIntyre are working through and recognising these various issues.
As for the second paragraph of your comment, I don’t know where that came from.
Joshua, I agree with you generally. But this is a pretty civil blog, and there is a lot of contempt/mockery shown here for the more skeptical types. Some of which plenty of them deserve, of course, but I don’t see that discrimination – the scorn goes even on the ones who are obviously (to me) trying to do science. And as I said, this place is relatively civil.
If I could just ask, isn’t the very name of this blog “And Then There’s Physics”, a somewhat scornful, somewhat subtle, attack on anti-science skeptics? It sounded that way to me, but maybe I’m projecting.
Understand that the skeptical blogs do the same in reverse. That doesn’t cheer me up.
So NOW you realise that you have confused McI’s stuff about the Arctic with the *global* PAGES 2k reconstruction. At last. And your response to finally recognising your own howler is to claim that all of PAGES 2K is unreliable. Words fail me.
miker613: And Climate Audit isn’t a total mockery in your books? It’s presumptuous at best.
Yes, unapologetically so. It is indeed meant to mock anti-science skeptics. It is, however, not intended to mock genuine skeptics.
‘Miker613: HAHAHAAHAAA!!!!! What a joke!
“I have no problem with scientists being called scientists. I have a big problem with McIntyre not being, given that he is obviously a major unacknowledged contributor to paleoclimatology.”
What exactly precisely has McIntyre contributed? ‘
I guess you’ve decided to self-identify as one of those I was talking about. What has he contributed? As near as I can tell, he just contributed half-a-dozen major corrections to PAGES2K, the sum total of which cut their hockey stick in half. (And suggested some more, which might make it go away entirely.) Did you RTWT, or any of it? Or you just can’t believe that they really listened to him, because you know better?
Has it never crossed your mind, Miker, that this is classic McI technique? Nitpick trivia and then insinuate systemic and fundamental error where none exists?
Have you learned nothing at all from this endless thread?
Can I ask that we tone down the McIntyre bashing somewhat. As I understand it, he did contribute to some of these corrections and we should be willing to at least acknowledge if so.
That is bollocks Miker. And you’ve just been told that it is bollocks. Stop it Miker. Now please.
For clarity Miker, I repeat: the Arctic is not the globe. As I have had to point out several times now. You are using “PAGES 2K” as if it were the global reconstruction and that is misleading to the very edge of dishonesty.
Miker613: How can you possibly conclude that he’s contributed anything? You said that you personally can’t understand the math.
By the way, this is the exact definition of the Dunning–Kruger effect: unskilled and unaware.
Click to access kruger_dunning.pdf
=> “If I could just ask, isn’t the very name of this blog “And Then There’s Physics”, a somewhat scornful, somewhat subtle, attack on anti-science skeptics? It sounded that way to me, but maybe I’m projecting.”
Interesting question…and all that much more interesting because of Anders’ response to the question.
It hadn’t occurred to me that it was an attack… a bit of a blinder effect on my part, I guess
==> “But this is a pretty civil blog, and there is a lot of contempt/mockery shown here for the more skeptical types. Some of them deserve it, of course, but I don’t see that discrimination – the scorn falls even on the ones who are obviously (to me) trying to do science. And as I said, this place is relatively civil.”
I think discriminating between those categories is inherently tough. As one example, from where I sit, McIntyre does not make it easy, at all, to discriminate his interest in the science from his interest in the personality politics. I don’t think the distinction in his focus is obvious in the least. Maybe I’m off in my assessment of that ratio as it applies to his input, but I am relatively certain that if he were really interested only in promoting discussion of the science, he would conduct his contributions differently.
So anyway, I can benefit from someone coming in here and presenting “skeptical” arguments. To the extent that can be done cleanly, I will benefit that much more. I’m fairly good (although certainly not perfect) at being able to identify unprovoked attacks, and to draw conclusions accordingly (that there is information conveyed in someone attacking w/o provocation).
“I’d be surprised if Steve McIntyre disagreed with the suggestion that we’d expect more variability if we consider local regions, rather than the whole globe.” We aren’t discussing the whole globe here. Well, I posted the question. His answer: “Steve: the issue in this post is not paico versus other methods, but the difference between the PAGES2K Arctic paico and the Hanhijarvi Arctic paico (from North Atlantic), using the North Atlantic subset of PAGES2K. The differences are too big and require explanation. Thus far, we’ve some explanation: contaminated and upside down data in the non-H13 PAGES2K data. To the extent that this shows that “contaminated and upside down data” “support the primary conclusions of this study”, I guess that this is probably a true statement, though not one that is very compelling.”
I’ll tell you what really made me nervous: Figure 2 in climateaudit (comparing to PAGES S3A). To me what it really shows is a divergence problem similar to Briffa’s famous one: if you don’t include the questionable proxies that Kaufman included and H13 left out, you just don’t match modern temperatures. And many of the extra ones Kaufman included are known to be problems.
Seems like the conclusion should be, _not_ that H13 has rediscovered the MWP, but rather that paleo temperature proxies don’t work well. If they can’t reproduce modern temperatures, they can’t be trusted for the past.
“Miker613: How can you possibly conclude that the he’s contributed anything? You said that you personally can’t understand the math.” True, it’s not based on the math. It’s based on his posts, linked in his post today, which came out shortly after PAGES2K 2013 came out. And now half a dozen corrections in PAGES based on those criticisms. I haven’t checked any of this myself (though I did see his original posts); if McIntyre is making it up, it should be easy enough to catch him. For now that’s my provisional opinion, and I guess ATTP thinks that’s reasonable as well.
McIntyre and, of course, an echo, miker, had a sudden urge to show Mann’s 2008 reconstruction was on the right track. With other proxies. North Atlantic.
“McIntyre does not make it easy, at all, to discriminate his interest in the science from his interest in the personality politics.” I agree with that; he clearly doesn’t have a lot of respect for his opponents.
On the other hand, perhaps they earned that – that’s how they continue to treat him as well. My clear impression from posting here is that many commenters here have no idea that he was right a good fraction of the time – and that the reason they have no idea is that the other side never admits it. We have some great quotes from Robert Way to that effect – but the only reason we have them is that a private forum at SkS was hacked; even he never said such things in public and still won’t.
I find it humorous that Miker claims he is a skeptical type but is unable to understand the math or the physics. It saves me a lot of time because I can safely ignore his points of argument, indeed, after reading just a few of his comments, I can safely ignore his skepticism and comments altogether. If Miker finds that rude, or uncivil, that’s his problem, not mine. YMMV. DFTT.
miker613: It’s hard to speak to you since you say you don’t understand what you are told.
MBH98 had huge error margins. Finding something different within them isn’t exactly progress; it’s expected. Pretending McIntyre is totally right… his work still produces a graph inside the error margins. This is not significant. Summing up all the years of work with Carrick’s graph… it’s still in the error margins. That’s hardly ground-breaking news.
Error margins are how we statistically (math you state you don’t get) prove the significance of something. MBH98 stated you should find results inside those lines 95% of the time. Carrick graciously did this.
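The point being made here can be illustrated with a toy numerical check. All numbers below are made up for illustration – they are not taken from MBH98, PAGES2K, or Carrick’s graph – but they show what “inside the 95% error margins” means in practice: an alternative reconstruction only constitutes a meaningful challenge if it falls outside the stated uncertainty band.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a reconstruction with a 95% error band,
# and an alternative reconstruction to compare against it.
years = np.arange(1000, 1981)
recon = 0.0005 * (years - 1000) - 0.4           # toy reconstruction mean (deg C)
half_width = 0.25                                # toy 95% half-width (deg C)
alt = recon + rng.normal(0.0, 0.1, years.size)   # toy alternative series

# An alternative result inside the band is consistent with the
# original; only excursions outside it would be significant.
inside = np.abs(alt - recon) <= half_width
frac_inside = inside.mean()
print(f"{frac_inside:.0%} of years fall inside the 95% band")
```

With these made-up numbers the alternative series sits inside the band almost everywhere, which is exactly the situation the comment describes: a different-looking curve that is nonetheless statistically indistinguishable from the original.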
As for your supposition that McIntyre is right all the time. I don’t care. No one does.
On behalf of Mr Mashey and for our miker61.3.
> The author is also an author on Pages2K
Interestingly, Korhola is one of the authors too. And Kaufman gets to be slimed. Wonder why?
In other news, a presentation on mitigation:
So still arguing for a hot MWP and high sensitivity, Miker? Is there a coherent physical model in there somewhere or is it all just obfuscatory noise?
Willard: You’re going off topic. We are required to look uncritically at the minutia presented.
> Environmental organizations are generally considered experts in preventing climate change although many of their solutions have proved downright destructive, write Atte Korhola and Eija-Riitta Korhola.
Honest brokers may agree.
Let’s just revisit what PAGES 2K (2013) actually says, because it is highly germane to the obfuscation currently being perpetrated by McI. Bold added for emphasis:
Note the key words: no global and synchronous multi-decadal intervals.
It is not sufficient to identify a regional warm period in – say – the Arctic and then insinuate that P2K is broken. For that, it is necessary to identify a global and synchronous multi-decadal event. Global and synchronous. Read the words.
McI has not done that nor has he come anywhere remotely close to doing so. In fact as far as I can tell he has not even attempted to do so. Do you understand what is being said here, Miker?
In an effort to move things beyond MBH98/99 and McIntyre discussions, I have recently encountered a series of three (~1 hour) video lectures from Bala Rajaratnam on paleoclimate reconstructions (first video here: http://www.youtube.com/watch?v=xzc8tqnwjcs&list=UUq4ZArLhYqop3H-vmiItcIQ&index=9). These take one from the PCR techniques of the Hockey stick to BHM techniques to the latest Graph-EM approach. An interesting watch, along with all the other videos, for the math-inclined.
Thanks for the second link – I rarely look at video clips longer than 10 minutes, so the text article was informative. Background is always useful.
“Has it never crossed your mind, Miker, that this is classic McI technique? Nitpick trivia and then insinuate systemic and fundamental error where none exists?”
Has it never crossed your mind that this is classic realclimate/SkS/etc. technique? To call something trivia that actually changes the result in a major way, and that makes the result (for the Arctic, at the very least) actually look wrong? And then blame it on McIntyre?
Eija-Riitta retweeted this on 2014-10-04:
By some serendipity, Benny comes in the comment thread:
> And then blame it on McIntyre?
What’s that “it”, again?
Instead of defending McI’s indefensible tactics, you need to consider what was written about what PAGES 2K actually said and how this bears on McI’s insinuations and complete failure to make a point. Global and synchronous, Miker. Global and synchronous.
Read the words.
Hmmm, as far as I can tell, according to Roger Pielke Jr nothing will ever work because it’s too difficult, too costly, or too inconvenient. Seems rather amazing we’ve got to where we are today.
No, because it isn’t.
I don’t really want this whole discussion to escalate (I can’t see a pleasant or constructive outcome) but consider this sentence,
What does “wrong” mean in this context? It’s perfectly normal for results to change as analysis methods improve or new data is collected. That doesn’t make earlier results wrong. This would be especially true if the new results don’t materially change what you would conclude from the analysis. Of course, there will be occasions when the result changes so much that one might reasonably conclude that the earlier result was actually wrong. Again, though, this doesn’t really mean anything other than that things can change.
“McI has not done that nor has he come anywhere remotely close to doing so. In fact as far as I can tell he has not even attempted to do so. Do you understand what is being said here, Miker?”
Yes, I understand. Do you understand what we’re talking about? We’re talking about PAGES2K Arctic, and whether it’s right or not.
Could McIntyre point out problems in other regions of the PAGES2K reconstructions? Dunno. Maybe, probably. He wasn’t talking about it, and has made none of the claims you seem to be reacting to.
On the other hand: Do I have the right to wonder if a group that published an Arctic reconstruction which was 50% wrong (from their corrections) and maybe all wrong (if the rest of the corrections McIntyre suggests have a similar impact) – might have done slipshod work elsewhere? I do. Sorry, but the impression I’ve gotten from this particular issue _so far_ is that they really should consult with McIntyre before they publish, to avoid getting things upside-down. I’m going to have trouble listening when people point to this study as the gold standard.
‘What does “wrong” mean in this context.’ Reasonable question. 1) Wrong means to use techniques like upside-down proxies, which have been rejected for a long time already. McIntyre didn’t invent the idea that these were upside-down, he just pointed out that the specialists who made these cores etc. said it goes this way, and it was used in the study that way.
2) Wrong means to turn one proxy back upside-up, when there’s another almost identical one from right there which remains upside-down.
3) Wrong is to get a certain result, claim that H13 got “very similar” results, and actually they are entirely different.
Remember, we are only speaking about the Arctic, so please don’t someone jump in and insist again that the _global_ PAGES result remains correct. Truth is you probably have no idea if it does or not; you’re just hoping.
You can, of course, hold whatever view you like but I suspect that there are many bright people working in the PAGES2K consortium who will be doing all this checking anyway. With all due respect to Steve McIntyre, the idea that an international consortium really should consult with someone who published a few papers about 10 years ago and is known mainly for writing a blog, is a little absurd. That’s not to say that McIntyre can’t make a positive contribution or that what he’s noticed isn’t valid, simply that it’s really not how these things can, or should, work.
Basically all you’re saying is that you don’t understand this field, but you’ve chosen to follow whatever McI says, regardless of what every expert in the field says.
That’s not a problem, but it is dull
You’re missing my point. My point wasn’t that they haven’t made mistakes. My point was whether or not one would conclude that the new result is so different that the earlier result is completely wrong.
Anders… “the idea that an international consortium really should consult with someone who published a few papers about 10 years ago and is known mainly for writing a blog”
Could you imagine if oil refineries were designed and built that way? After it blows up.. “I dunno, the guy seemed smart, too bad none of the experts got along with him.”
McI has the gift of self-promotion. Some fall for it. The McIntyre Factor has increased in value over the years. http://scienceblogs.com/deltoid/2007/09/20/the-mcintyre-factor/
And my point, Miker, is that those who want to make the case that there’s nothing special about modern warming need evidence for a global and synchronous “MWP”. Pointing to the Arctic doesn’t provide that evidence. I’ve stated this so clearly and so often now that I’m beginning to wonder if you are failing to understand it on purpose.
And of course, if there was a global and synchronous “MWP” as warm as or warmer than the present – thus rendering it less exceptional – then that would indicate a relatively sensitive climate system. One which would react to all radiative perturbations, including a significant increase in CO2 forcing.
Somehow, you never quite get face-on to that point either although it is fundamental to your implied but carefully unstated argument that yes, all these millennial reconstructions “get rid” of the “MWP”. And that, of course, is a conspiracy theory, which is where we get right down to the core of what you are peddling here.
NOT to find a hockeystick in NOAMER is quite a feat. By just computing a simple mean of the NOAMER-values we get this:
Mean of scaled scores
And why not use scaled scores in the PCA? Because the first two PCs had hockeystick features. The first is much like the mean of all series.
Another cherrypick from MM of course.
I notice that anyone with an interest in and knowledge of climate change during the past two millennia is welcome to join the community-wide effort to update and publish the next generation database of temperature-sensitive proxy climate records of the last two millennia.
Jsam: Where have I heard something about that before….
Other than that, he’s a great military man. 🙂
So miker613 plays copycat again:
So let’s continue to pay due diligence to the claim that Hanhijärvi, Tingley & Korhola 2013 (HTK13) “has undeservedly received almost no publicity”:
A citation here:
Another over there:
Wait. Does it mean HTK13 is being cited by Pages 2K?
The concept of receiving “almost no publicity” is an interesting one.
On a meta level again:
This ‘auditing’ of climate papers business is a subtractive approach, not an additive one. The ‘audits’ nit-pick various papers, pointing out both real and imagined issues, few if any of which make any significant difference to the conclusions or to overall science. And most of which would be addressed in the normal course of things, by later papers with improved data and techniques. But there’s not one positive contribution, not one new result or expansion of our knowledge, no new insights into the working of the climate.
In short, nothing added, only (and only sometimes) subtracted. I’ve always thought that energy could be applied in more productive avenues.
stewart, in #52:
An interesting response in #55.
I would take what Korhola has to say seriously – except that he seems to be a scientist who actively engages in political discussions about policy.
As such, his views should be rejected out of hand.* No scientists who mix activism with their science should be trusted. They have no expertise** in policy development and implementation, and thus should just stay out of it.
* Of course, if I agreed with his views on the politics, then his views should be taken seriously
** Of course, if I agreed with him, his lack of specific expertise would not be relevant. You see, it is only when I disagree with experts, that referring to their expertise comprises a fallacious appeal to authority.
Well, auditing. So what McI does isn’t auditing. One could call it a political attack by way of rote bookkeeping, I suppose. miker can have fun with the linked material.
I like the second half of that comment willard.
My how times change. (Sorry for going OT)
Oh, look, some real Canadian auditing by actual auditors.
Steve: Don’t get me depressed. The clucks in charge also claim taxes on energy don’t work. (The raving total success of the fuel emissions tax in BC seems to be excluded from such Orwellian doublespeak. In fact it contradicts all that is known about tax impacts on consumption.)
Typically auditors have this thing called a mandate, which defines what they are to do. Work performed without a clear definition becomes endless.
Willard’s site is the Never Ending Audit. I suppose that is an assumption that his work will never be done. Considering that ClimateBall has no rules, auditing must be performed to the same exacting standards. 🙂
Anders, is this heading longest thread territory?
This stood out to me:
1. Adaptation – a wonderful idea! But who’s going to pay the $150+ billion the IPCC estimates adaptation will cost per year?
Does this guy somehow think that adaptation is optional? This is, I assume, the ultimate aim of the Economist world view: if adapting to global warming would be too expensive, then global warming won’t happen! Economics trumps mere Physics!
I’m going to apply this to my daily life. Physics says my car needs diesel to run. BUT economics says that I can’t afford to fill it up. Therefore my car will run without fuel! I could get to like this.
mitigation is not optional either – fossil fuels are finite.
more fundamentally sustainability is not optional. All we can influence is what the end result looks like and how we get there.
Benny may not deny there are already costs, Andrew, comparable costs to the figure he figures, but how could we know if they’re costs until we see which hands invisibly pay for them?
Benny may only want names.
I don’t quite understand your point
One of the beautiful aspects of the arguments of SWIRLCAREs (someone who is relatively less concerned about recent emissions) is the illogic of arguing in favor of adaption to the exclusion of mitigation, without addressing the accompanying economic questions.
They argue that we need to adapt (not mitigate), yet many of them are in ideological opposition to any realistic and functional mechanism for funding and implementing adaptation. What becomes clear is that adaptation is a prop, an empty rhetoric to use in opposing mitigation.
Those who can afford to adapt as it becomes necessary will do so if they need to in order to survive. Those who can’t afford it won’t adapt and won’t survive.
Andrew Dodds, here’s a Top Tip to help you with your fuel bills:
It was in reaction to the question, which implies that if people refuse to pay then we won’t ‘do’ adaptation; ‘Global warming would be expensive so we’ll ban it/it won’t happen’. It gives me the mental image of, for instance, the residents of Miami stoically wading around whilst refusing to acknowledge the water – ‘If we wear waders that means the Warmists have won!’.
Or people in California expecting water to come out of their taps long after the area has become a desert. Or states legislating sea level rise out of existence.
Ok, he is possibly arguing that we shouldn’t pay for – as an example – flood defences in a collective, rational and organised manner before disaster hits, but merely react in an ad hoc manner when the emergency happens. Not much better, frankly.
Being from Finland and having met both Korholas a few times I can make some remarks. Eija-Riitta and Atte Korhola were married, but divorced a couple of years ago. Eija-Riitta Korhola was a Euro Parliamentarian from the conservative party until the latest election. Atte Korhola was more active in policy discussion during their marriage than he has been since. My impression is that he has recently been spending a lot of effort in work related to PAGES2k; having Hanhijärvi in his research group may have contributed to that.
Hanhijärvi comes from Aalto University (my home university as well). His doctoral dissertation was in Computer and Information Science and not at all related to climate science, but it is certainly relevant for the methodology of paleoclimatology. Its title is Multiple Hypothesis Testing in Data Mining. Here are a few sentences from its abstract:
One approach discussed in his thesis is biclustering, which has at least a superficial similarity with pairwise comparison, but the Hanhijärvi et al paper does not refer to his earlier work.
The latest update of PAGES2k Arctic database includes this acknowledgement:
Thus they have certainly had their say in the PAGES Arctic 2k use of proxies.
Andrew Dodds says “Or people in California expecting water to come out of their taps long after the area has become a desert.”
I suggest a small correction: California has been a desert for a very long time. The All-American canal (for instance) was constructed in recognition of that fact. Los Angeles gets (some of) its water from 160 miles away at what used to be Owens Lake. Prior to canals being built, the entire southern half of the San Joaquin valley was pretty much arid. Technically it still is.
You are correct in conclusion however that many people DO expect miracles from the kitchen faucet: “If you build a faucet, it will come!”
I’m wondering what miracle you have in mind; what human endeavor will turn California into a garden spot?
Joshua says “Those who can afford to adapt as it becomes necessary will do so… Those who can’t afford it won’t… survive.”
Stated very simply. Thank you. With 7 or 8 billion people on earth it isn’t even a Utopian’s wet dream to adapt everyone.
Andrew Dodds wrote “Physics says my car needs diesel to run. BUT economics says that I can’t afford to fill it up. Therefore my car will run without fuel! I could get to like this.”
Yes, it will run for as long as you are able to coast downhill.
AnOilMan wrote “The raving total success of the fuel emissions tax in BC … contradicts all that is known about tax impacts on consumption.”
Hopefully my edit doesn’t change the meaning. I suggest that “all that is known” be replaced with “all that I know” since otherwise you are claiming to know everything that is known so that you know about this contradiction.
Tax impacts on consumption depend upon the price elasticity of demand, how quickly people can adapt, and what choices exist. British Columbia is a cold wet place that consequently demands rather a large energy subsidy. That suggests inelastic demand and a willingness to pay the tax, especially if other taxes are by law required to be reduced:
“Meanwhile, it’s reduced corporate and personal income taxes. Economic theory suggests this swap of taxes will not hurt the economy and may even help”
So what you’ve got is a transition period where some taxes go up, some go down, and globally power stations are becoming more efficient.
“BC’s fuel consumption is also down. Over the past six years, the per-person consumption of fuels has dropped by 16% (although declines levelled off after the last tax increase in 2012).”
The implication is that BC has “hit the wall” of demand inflexibility.
“But the BC experiment makes that line harder to sustain. “There’s very little evidence—zero evidence—that carbon taxing is related to jobs” says Brandon Schaufele at the University of Western Ontario.
More university idiots. Revenue neutral *shifts* jobs; let us see BC raise ALL taxes then return and report.
” BC cement makers say they’ve lost a third of their market share to US and Asian imports. Farmers facing competition from non-carbon-taxed jurisdictions have wrestled back rebates from the government.”
Shifting jobs. Taxing one thing but not another shifts jobs. That is “adaptation” to government meddling; never mind waiting 100 years to adapt to climate change.
Michael 2 –
I don’t think you get the car analogy. The point is that some people seem to think that if fixing or adapting to global warming is declared too expensive, then global warming won’t happen. Madness, but still a logical inference from their statements.
As far as sustainability and adaptability go.. I see no physics-based reason why we can’t have a world of 10 billion people all living at first world or better standards with a much lower overall environmental impact. None at all. Plenty of political reasons, some engineering difficulties, and an economics profession that seems to regard being wrong as a badge of honor, but not real reasons.
Michael 2 — I don’t want to talk at you; you are not worth my time. Although, IMO, that is the first post you’ve produced that appears intelligent and well thought out.
After BC’s tax was implemented BC’s economy and employment outpaced Canada.
Farmers are already using subsidized fuel. So I guess it’s game over for them. (Costs of production are skyrocketing… so they are done for.) I get the limit of the tax. I also get the risks of shifting jobs. There isn’t much to be risked in this case since the tax isn’t very steep. The only jobs that can be shifted would need to be fuel-intense manufactured products. (I’m not sure how that affects cement. Don’t care.) To go deeper they’d need to equalize carbon taxes externally. That’s not happening with this federal government.
Come to think of it… farmers are really screwed. They’ve been using third world labor all this time and the Temporary Foreign Worker program just got slammed because of a few greedy businesses firing Canadians in order to hire foreigners. Cost of producing oil is going up… (BC’s fuel is imported from the US, and the Canadian dollar is going down. Yup, screwed.)
Then again… lower Canadian dollar means labor went down way more than the fuel costs.
Michael 2, you say ” British Columbia is a cold wet place”. I’ll grant you it can be wet, but where the vast majority of the population lives (Vancouver and Victoria) it doesn’t get that cold. In fact, the climate in that area is often called mediterranean.
BBP: Dude… I grew up in Victoria. I’ve been to the Mediterranean. And Vancouver/Victoria are not the Mediterranean. Willard is right, the Okanagan is the mild area, best known for growing fruit, and of course its famed peach riots. 🙂 In fact you can recognize TV shows shot in Vancouver by the grey light of the sky.
V&V tend to be a mild humid cold (rarely very negative, but that wind cuts right through you). I think it’s more like the UK. The rest of the province to the north is freaking cold; at least that’s what my friends in Prince George say.
But this is all changing because of Global Warming. Pine Beetles are no longer dying off in winter because the winters aren’t as cold as they used to be; in fact the beetles are expected to hit the Northwest Territories by 2020. The Pine Beetles are already closing mill towns, and costing jobs in BC. This causes the people affected to be financially liquidated, and they move on, becoming Global Warming refugees.
67% loss in Timber Industry alone. There goes 100,000 jobs!
Sea Level Rise will also impact them heavily.
Anoilman, I used to live in Victoria too, and I said it was ‘often called mediterranean’, which is true (even if the accuracy is debatable, which is why I said ‘called’). I could give you links, but it’s not really the point. The point is that the majority of BC’s population live in what is the mildest climate in Canada, much milder than a quick glimpse at a map would suggest, so Michael2 saying it was cold was misleading.
I currently live in the Dallas area (much to my regret) and the winters may average warmer (I haven’t checked, but I would need to, which tells you something), but the summers are much hotter so total energy for indoor ‘climate control’ is actually higher.
BBP: I only think you need to look at the stuff before the smile. 🙂 The rest was just me talking.
I’ve lived in California and South Africa. SA had excellent passive cooling for homes. (Vents in the floor and ceiling all over the place; you’d open them in summer.) The house I lived in in California wasn’t efficient at all. I think active cooling like we do now is a mistake.
If you ever look at older buildings they had windows floor and ceiling to open. Now… we have one window half way down the wall.
‘‘often called mediterranean’, which is true (even if the accuracy is debatable, which is why I said ‘called’)’
Google: Canadian Riviera
And that area is pretty nice, and indeed has a relatively mild climate, but that’s good marketing, too. As far as I can tell, the main plus in AGW for B.C. is improved ability of grape-growing in the Okanagan, which is also relatively mild. Otherwise, not a lot of plusses.
BBP says “Michael2 saying it was cold was misleading.”
“Wet cold” is considerably more penetrating than “dry cold” because of the latent heat of water vapor. A jacket that will keep you warm at -10 C in the dry mountain west will fail to keep you warm at +8 C if the humidity is high, which at Vancouver (or Victoria) will usually be the case.
Anyway, many homes in that area, including Seattle and much of the Pacific Northwest, are heated by electricity provided by hydropower. Consequently the “incidence” of a carbon tax falls on a minority of the population, allowing a democratic society to “stick it to the users of carbon”. The elasticity of demand is greater because many or most do not need petroleum for heating.
Thus, their experiment is not exportable to the rest of Canada or most of the United States.
Demographics favor the experiment. Basically you can ignore everyone outside of the Vancouver metropolitan area. The Seattle metropolitan region is similar in its impact on elections; everyone east of Ellensburg might as well not bother voting.
“In October 2013, British Columbia had an estimated population of 4,606,371 (about 2.5 million of whom were in Greater Vancouver)”
When I was considerably younger I spent a few summers around Campbell River on Vancouver Island. It’s *cold*. Actually going for a swim is not for the faint-hearted. Needless to say it is a great place to escape the heat further south, and I am delighted for them that they wish to keep their city clean and bright.
In other news, a house I visited in Santa Rosa, California, had very effective passive cooling. Its walls were hollow and the outside of the inner wall “seeped” a bit of water. Vents at the base of the outside wall admitted air, which rose between the walls as the roof heated; this flow of air across the wet inner wall cooled it considerably. As with OilMan’s experience, it was an older house, probably Depression-era construction.
AnOilMan says: “67% loss in Timber Industry alone. There goes 100,000 jobs!”
Very bad indeed especially as trees are probably the number two renewable energy source for the Pacific Northwest (not much sunshine; too many mountains to be building windmills).