John McLean, PhD?

A recent PhD thesis from James Cook University has been receiving a reasonable amount of attention on sites that either dispute anthropogenic global warming (AGW), or its significance (I won’t link to them, but you can probably find them if you want). The PhD is by someone called John McLean. His PhD supervisor was initially Bob Carter, who died a couple of years ago (Stoat’s post about his death caused a bit of a furore). After Bob Carter’s death, he was then supervised by Peter Ridd, who was fired earlier this year by James Cook University.

If you really want to read the thesis, you can download it here. It has two main parts, one of which considers problems with the HadCRUT4 data, and the other considers alternative causes for the warming, and argues that bleaching on the Great Barrier Reef has happened before. It's all very amateurishly written; it's more like blog science than something that could be submitted for a PhD.

The discussion of problems with the HadCRUT4 data is very odd. It’s well known that HadCRUT4 suffers from coverage bias. However, there are a number of other global temperature datasets that account for this issue and produce results that are broadly consistent with the HadCRUT4 data (HadCRUT4’s coverage bias actually leads to it showing slightly less warming than those datasets that do account for this). It’s possible that there are problems with some of the actual data, but there is lots of data, so a problem with a small fraction of this data is almost certainly of negligible significance. It seems highly unlikely that those who work with these datasets haven’t checked to see how the results might be impacted by potential data issues. You can also sample subsets of the full dataset. Doing so produces results that are consistent with the full dataset.
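The subsampling point can be illustrated with a toy calculation (entirely made-up numbers, just to show the statistics; this is not the actual HadCRUT4 processing):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 1000 "stations" all see a common 0.8 C warming signal over
# 50 years, plus independent station-level noise.
n_stations, n_years = 1000, 50
signal = np.linspace(0.0, 0.8, n_years)
stations = signal + rng.normal(0.0, 0.5, (n_stations, n_years))

# Global mean from all stations vs. from a random 10% subset.
full_mean = stations.mean(axis=0)
subset_mean = stations[rng.choice(n_stations, size=100, replace=False)].mean(axis=0)

# The two reconstructions agree to roughly a tenth of a degree, which is
# why errors confined to a small fraction of stations cannot move the
# global mean by much.
print(np.abs(full_mean - subset_mean).max())
```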

The next part of the thesis suggests that the observed warming is a consequence of a combination of ENSO events and changes in cloud cover. This appears to be based on a paper he published in 2009 and one he published in 2014 (I had a link to this, but it was giving warnings when some tried to access it, so I’ve removed it). Turns out that I’ve already discussed the latter. Essentially, it ignores that clouds are a feedback not a forcing, so it’s just nonsense. I don’t think anyone ever published a response, but it seems to have been mostly ignored.

There was, however, a response published to the first paper which says

The suggestion in their conclusions that ENSO may be a major contributor to recent trends in global temperature is not supported by their analysis or any physical theory presented in their paper, especially as the analysis method itself eliminates the influence of trends on the purported correlations.

Essentially, the analysis removed the trend, so couldn’t say anything about what might be causing the long-term warming. As far as I can see, the thesis makes no mention of this response.

I’ve probably already wasted enough of my time discussing this thesis, so will stop here. I don’t know how theses are examined at James Cook university, but it would be quite interesting to know who the examiners were. I’ll finish by posting a Media Matters video that discusses McLean’s work.

This entry was posted in Climate change, ClimateBall, Global warming, Pseudoscience, Satire, The scientific method. Bookmark the permalink.

112 Responses to John McLean, PhD?

  1. Nick Stokes says:

    I’ve been quoting a disclaimer that he makes, with commendable honesty, on p 4:

    “This thesis makes little attempt to quantify the uncertainties exposed by this investigation, save for some brief mention of the impact certain issues might have on error margins, because numerous issues are discussed, and it would be an enormous task to quantify the uncertainties associated with the many instances of each. It has been left to others to quantify the impact of incomplete data, inconsistencies, questionable assumptions, very likely data errors and questionable adjustments of the recorded data.”

    How anyone could pass a thesis which says
    “It has been left to others to quantify the impact”
    is beyond me. What is a PhD for?

    An oddity – if you click on the link to his 2014 paper that he features in the second part, from the pdf (the html seems OK), it comes up with a warning on my setup – do you really want to go there? Odd for an academic publisher. The paper is actually published in a dodgy journal.

  2. Nick,
    So, his thesis is basically just asking questions.

    I’ll remove the link to the PDF. I can’t find another source.

  3. James Cook University may want to have a second look at its procedures for awarding PhDs.

  4. I am amused to see that this description of Dr McLean says, "He maintains that theories and opinions about scientific matters should take second place to what observations and data show."

  5. This observation leads to another observation:

    ” the observed warming is a consequence of a combination of ENSO events and changes in cloud cover. This appears to be based on a paper he published in 2009 and one he published in 2014 (…). Turns out that I’ve already discussed the latter. Essentially, it ignores that clouds are a feedback not a forcing, so it’s just nonsense.”

    There is also the argument that ENSO behavior is a consequence of changes in wind strength. But since wind is a feedback and not a forcing, that argument should be nonsense as well. Or if not nonsense, at least reconsidered, as wind may also be a common-mode byproduct.

    Consider this scenario dealing with forcing. Will the raft move forward?

  6. Keith McClary says:

    He has written a report that "is based on a thesis for my PhD":

    The biggest news about climate change (not from the IPCC)


    and is selling it for $8.00 :

    Home

  7. Hank Roberts says:

    Hmmmmm.

    From the Guardian article:

    “… A JCU profile page, which is no longer live, said Ridd “raises almost all of his research funds from the profits of consultancy work which is usually associated with monitoring of marine dredging operation”.

    That consultancy was carried out through the Marine Geophysics Laboratory, which Ridd led, and which has carried out work on several coal terminal expansion projects.

    Ridd’s case against JCU is adjourned until 9 June….”

    https://www.theguardian.com/environment/2018/may/21/university-fires-controversial-marine-scientist-for-alleged-conduct-breaches

  8. “Monitoring of marine dredging operation”

    Does anyone know, is that marine dredging for ports for coal exports?

  9. Hank says:

    > coal?

    See Guardian quote above.

  10. Nobody clicks on links on the internet. Their main function is to emphasise parts of the text.

    Thanks.

    A JCU profile page, which is no longer live, said Ridd “raises almost all of his research funds from the profits of consultancy work which is usually associated with monitoring of marine dredging operation”.

    That consultancy was carried out through the Marine Geophysics Laboratory, which Ridd led, and which has carried out work on several coal terminal expansion projects.

  11. Steven Mosher says:

    I pointed out one of his errors on Judith's.
    She invited him to do a post.

  12. Everett F Sargent says:

    Hey, now that Dr. John McLean, PhD, has identified all those errors in the HadCRUT4 dataset, the good "so called" doctor should write some code to correct for those errors. He should call it … wait for it … homogenization. 😉

    Oh wait, another denier already did that, something called the BEST ever. :/

  13. Steven,
    Do you have a link? I’ve had a quick look, but I can’t find a guest post on Climate Etc. by John McLean (did he turn down the offer?)

  14. Steven Mosher says:

    Offer was made. Not sure he accepted.

  15. Steven Mosher says:

    Some of the stations he identified were never used because they lack 20 years of data in the 1951 to 1980 period

  16. Steven Mosher says:

    Some of the stations he identified were never used because they lack 20 years of data in the 1951 to 1980 period

    Guys with legit PhD should be pissed.

  17. Some of the stations he identified were never used because they lack 20 years of data in the 1951 to 1980 period

    Thanks, I wondered if there would be something like that. Do you know if any of the other errors he highlighted were already known about?

  18. Dave_Geologist says:

    Hmmm. PhD from “College of Physics, James Cook University”.

    AFAICS from the University website, there is no College of Physics. Physics is part of the College of Science and Engineering. Although the thesis is listed on the University website.

    If you can’t even get the title page right….

  19. Dave_Geologist says:

    I’m not too pissed because it’s not my University. Those in the field know which are outstanding, which are OK and which are crap (in some cases you have to drill down to department level though). Also, because I stayed in the field, my reputation is based on my publications, conference appearances, editorial work etc. Not my decades-old PhD.

    If I was a JCU Physics PhD I’d be really, really, really pissed. Especially if I was a recent graduate, or had moved into an area like engineering where my peers would be less plugged-in to academia.

  20. Dave,
    Indeed, I'd be pretty ticked off if I'd just spent 3-4 years working on a detailed research project, submitted it for a PhD, been examined, passed and then discovered that something as poor as John McLean's had also got through. It's one reason why it would be interesting to know who the examiners were. Were they people who should have been able to identify the flaws, were they credible researchers from an area where you wouldn't necessarily expect them to notice the errors (although the style of the thesis should have been a give-away), or were they carefully selected so as to get this thesis through?

  21. Dave_Geologist says:

    I don't really agree with Stoat's line about science advancing one funeral at a time. In my experience, there are always a few cranks who take their outdated and superseded views to their graves. But the rest of the field had moved on years or decades earlier. The cranks keep their outdated views in the public eye, but they're ignored by their peers. Did Fred Hoyle ever change his views? Did anyone teach them or publish on them in the last few decades of his life, other than as a failed counterpoint to the Big Bang? Didn't they even have a bit of fun at his expense, letting the theory be known by what he had coined as a pejorative term?

    I came in at the tail end of the plate tectonics revolution, which is often presented to support that paradigm, but if it applied at all that was only in the USA. Outside the USA, it was always seen as an interesting hypothesis which didn’t have quite enough consilience and lacked a physical mechanism. Not as something heretical that could lose you your job. So once the discoveries of the 1960s were absorbed, it very quickly went mainstream. By the time I was an undergrad it was in the lectures, in textbooks, and there was even a glossy popular-science book published by the Geological Museum (London) to accompany its Story of the Earth exhibition, which had a section on continental drift (IIRC with a model of a subduction zone).

    I was astonished to later learn from Americans that (a) they were ten years behind the rest of the world and (b) that it had become a touchstone issue where "until a few years ago you couldn't get tenure unless you rejected continental drift; now you can't get it unless you accept continental drift".

    For years I avoided Oreskes' book on the subject because I expected it would be all revolution-and-Kuhnian-paradigm-shift, and I'd be screaming at the page "but it wasn't like that in Europe; or in the international literature". Turns out she did her PhD in the UK, was aware of American exceptionalism, and it's actually a balanced book. Although for an in-depth read, you're better off with the book she edited, containing essays by the main players. They, however, make fewer concessions to the general reader.

  22. Dave_Geologist says:

    I suspect the latter ATTP, but maybe I just have a suspicious mind. I am a bit pissed, but more at the University. At the risk of going all Two-Cultures on you, there are a lot of schools and subjects out there that have devalued the standing of a PhD already. Although I'd like to think that science departments have higher standards. Maybe my views are coloured by having been an industry scientist, but surely in academia, everyone has a PhD so it isn't seen as something special?

  23. Okay, I’ve just done a little bit of extra work (would be interesting to get Steven Mosher or Victor’s take on this). On page 112 and 114 of the thesis, there is a list of stations that are outliers in terms of mean temperature, or standard deviation. However, if I download the CRUTEM4 data (the land data used in HadCRUT4 – which you can download here) then either those stations aren’t included, or they start after the period when there were supposedly data problems, or the months when there are problems appear to be flagged.

    I gather that Nick Stokes has pointed out that McLean's analysis is based on the raw data, not on the data that has been checked and cleaned before being used in the analysis. This would indeed seem to be the case (it would be good if someone more knowledgeable than me would comment).
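For what it's worth, here is a hypothetical sketch of what "flagged" means in practice. CRU station files use -99.0 as the missing-month sentinel (visible in the file excerpts quoted elsewhere in this thread); the helper below is my own illustration, not CRU code:

```python
# Screen a CRUTEM-style year of monthly means for the -99.0 missing sentinel.
MISSING = -99.0

def usable_months(row):
    """Return (month, value) pairs for months not flagged as missing."""
    return [(m, v) for m, v in enumerate(row, start=1) if v != MISSING]

# One year of monthly means, with the last five months missing.
row_1978 = [24.5, 24.9, 24.7, 81.5, 24.5, 83.4, 83.4,
            -99.0, -99.0, -99.0, -99.0, -99.0]
print(len(usable_months(row_1978)))   # 7 usable months, Jan-Jul
```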

  24. Dave,

    Although I’d like to think that science departments have higher standards. Maybe my views are coloured by having been an industry scientist, but surely in academia, everyone has a PhD so it isn’t seen as something special?

    In a sense, maybe, but there is still a recognition that we have to maintain some kind of standard. We can get penalised if too many of our PhD students don't complete. However, this doesn't mean that we would find some way to get someone through a PhD if they haven't done enough work that would qualify as suitable (original work that makes a significant contribution to the field). What we do do, though, is make sure students are suitably supervised, have a suitable project, and are given suitable support when there are problems.

  25. dikranmarsupial says:

    I’d suggest a PhD is effectively an apprenticeship to be a researcher, so a pass should be an indication they are ready to start work as a journeyman (e.g. a research assistant or junior faculty member). That includes the skills necessary to competently perform an experimental (or theoretical) investigation of some research question (whether in academia or industry).

  26. Dikran,
    I think that’s about right. The goal is for someone to develop the skills that allow them to undertake independent research. These skills are very useful, irrespective of what career they then follow.

  27. Dave_Geologist says:

    I’d suggest a PhD is effectively an apprenticeship to be a researcher

    Coincidentally, I’ve just been making that exact point elsewhere, that learning how to do research is far more important than the content of your PhD. It’s why some industries recruit lots of PhDs: they don’t have to serve their apprenticeships on the company dime.

    I wasn’t belittling PhDs ATTP, they should indeed be rigorous and well supervised. And not handed out cheaply. A few years ago, as an external examiner for a taught MSc course, I had to support the staff against someone from the Dean’s office saying that they weren’t handing out enough Distinctions (the equivalent of an undergraduate First), and it made the course less attractive to students. I pointed out that we only target certain universities and courses in our recruitment, and if word got around that one has cheapened its degrees, it’d fall off our list. That would be far more damaging, once word got around prospective students. Part of the sales pitch for vocational MScs is how many got jobs in the industry, how many in supermajors (the blue-chip equivalent), how many in other O&G companies, and how many in service companies.

    Oil and Gas generally recruit vocational MScs, with PhDs only in specialist areas (which is how I got in, but I've dinked between specialist and generalist since). From time to time I encountered a situation where you had to apply PhD-research-type methods, and then you could see the difference between people with MScs vs. PhDs. Without wishing to appear snobbish, the MScs tended to think they knew how, because they generally have to complete a 10 or 12 week "independent" project. But they're spoon-fed and hand-held compared to a PhD. They never get anything that's undoable or may turn into a dead end, and it's short and focused so they don't have to do the integration required to pull together and write up several years' work. And they do some presentations to people they know, but never have to tackle the lion's den of a major conference.

  28. Steven Mosher says:

    “Okay, I’ve just done a little bit of extra work (would be interesting to get Steven Mosher or Victor’s take on this). On page 112 and 114 of the thesis, there is a list of stations that are outliers in terms of mean temperature, or standard deviation. However, if I download the CRUTEM4 data (the land data used in HadCRUT4 – which you can download here) then either those stations aren’t included, or they start after the period when there were supposedly data problems, or the months when there are problems appear to be flagged.”

    yes

    1.5 degrees

    took me 30 seconds

  29. Steven,
    Thanks. I’ve also looked through some of the other outliers. Some do seem to be in the dataset used, but some do indeed seem to have been flagged.

  30. I’ve also just gone through some of the stations in Berkeley Earth and in many cases, those that McLean’s thesis highlights as outliers do indeed fail quality control. How did this thesis get through?

  31. Steven Mosher says:

    "I've also just gone through some of the stations in Berkeley Earth and in many cases, those that McLean's thesis highlights as outliers do indeed fail quality control. How did this thesis get through?"
    I will have to check. A long time ago I did a similar check of CRUTEM; it's on my blog from years ago.

  32. Steven Mosher says:

    found it.
    there is a step before gridding where 5sigma
    values are tossed.

    5 Sigma in CRU
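To illustrate the kind of screen Steven is describing (a sketch of the idea only, with assumed numbers; not CRU's actual implementation): if sigma is taken from a reference period of good data, a Fahrenheit-sized error at a tropical station fails a 5-sigma test easily.

```python
import numpy as np

rng = np.random.default_rng(1)

# Thirty "ordinary" Aprils for a tropical station define the climatology.
baseline = rng.normal(24.5, 0.6, 30)
mu, sigma = baseline.mean(), baseline.std()

# Middle value is a likely Fahrenheit-for-Celsius error.
values = np.array([24.3, 81.5, 24.7])
keep = np.abs(values - mu) <= 5.0 * sigma
print(keep)   # [ True False  True]
```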

  33. dpy6629 says:

    It is of course true that PhD quality has been going down over the long term due to the needs of Universities to recruit and retain graduate students to serve as cheap labor on research grants and for teaching, and to pay tuition. In fact the University establishment has been growing rapidly with strong government support in the form of guaranteed student loans in the US where student loan debt is now over $1 trillion. The soft money race and the “entrepreneurial” model for research has led in my opinion to poor quality and a stagnation of fundamental progress in many fields including CFD for example.

    It's really hard these days in CFD to find PhDs who have done fundamental work. The vast majority can get by with just running codes written by others and doing some questionable designs, or even worse contribute to the vast oversupply of 'software systems', often built up from old components but made easier and easier to run so that questionable results can be generated even by those totally unaware of the uncertainties. There are lots of bright graduate students but it seems to me that they are not being well served by the Universities.

  34. It's really hard these days in CFD to find PhDs who have done fundamental work.

    You’re probably just not looking hard enough.

  35. JCH says:

    Starts with denier Carter, and finishes with denier Ridd, and DY uses it to do what?

  36. dpy6629 says:

    ATTP, We look very hard and indeed find a few. What I perhaps should have said is that even at top rated Universities the percentages who have done fundamental work is declining substantially. And the vast majority of the “research” is application of software or software oriented, both generation of software and running the software.

    We just had an experience with a recent PhD from one of the top rated institutions who was some kind of coordinator for an open software system. His resume was 14 pages long and contained the most blatant exaggerations and superlatives. He had a whole string of publications that were of the “we improved the software system and ran this case” variety. In some cases the results obtained were of low quality. But that was rather effectively hidden in many cases by failure to compare to more credible results. Powerful executives are often impressed by this nonsense and try to get these guys in the door.

  37. angech says:

    Steven Mosher says (October 21, 2018): "found it. there is a step before gridding where 5sigma values are tossed"

    "Looking through the data you will find that in the US you have feb anomalies beyond the 5 sig mark with some regularity. And if you check google, of course it was a bitter winter. Just an example below. Much more digging is required here and other places where the method of tossing out 5 sigma events appears to cause differences (in apparently both directions)."

    Why would one throw out proven [it was a bitterly cold winter] data, Steven?

  38. Steven Mosher says:

    Getting a bit closer ATTP

    There are three datasets to keep clear

    1. The source: https://www.metoffice.gov.uk/hadobs/crutem4/data/station_files/CRUTEM.4.6.0.0.station_files.zip

    2. The stations As Used: https://crudata.uea.ac.uk/cru/data/temperature/crutem4/crutem4_asof020611_stns_used.zip

    3. The Gridded data, which combines data within a grid cell
    https://crudata.uea.ac.uk/cru/data/temperature/CRUTEM.4.6.0.0.anomalies.nc

    The first has about 10K stations
    The second about 5K

    Things to note

    A) They have not made their data available in a format that is easy to audit (easily
    readable column data). The met files look like this:
    Number= 080110
    Name= ASTURIAS/AVILES
    Country= SPAIN
    Lat= 43.6
    Long= 6.0
    Height= 127
    Start year= 1968
    End year= 2018
    First Good year= 1968
    Source ID= 73
    Source file= Jones
    Jones data to= 2014
    Normals source= Data
    Normals source start year= 1968
    Normals source end year= 1990

    This should be in a separate metadata file, keyed to obs by station ID.

    B) Note: the stations AS USED file is not up to date; it appears to be from 2011.

    C) Apto Otu is in the first (with normals, for the period 1961-1990).
    It does have values of 80+C for 3 months, against a normal of the mid 20s
    1978 24.5 24.9 24.7 81.5 24.5 83.4 83.4 -99.0 -99.0 -99.0 -99.0 -99.0

    D) The Gridded values for this location, has an anomaly of ~C for these odd months. Surrounding
    Grids are anomaly between 0 and 1

    E). The cell center is -72.5 7.5 so one would need to check all the stations in
    -67.5,-77.5, 2.5,12.5 Area
    1978 24.5 24.9 24.7 81.5 24.5 83.4 83.4 -99.0 -99.0 -99.0 -99.0 -99.0

    1978 26.9 -99.0 -99.0 -99.0 -99.0 -99.0 28.2 28.4 27.6 27.8 27.9 26.9
    1978 26.8 27.5 27.5 28.0 28.2 28.3 28.2 28.4 28.0 27.7 27.8 26.8
    1978 27.2 27.8 26.9 26.1 27.6 27.5 27.6 27.7 27.9 27.4 26.5 25.4
    1978 21.9 22.9 21.6 21.2 21.7 21.8 22.1 22.7 21.6 20.8 21.1 21.1
    1978 29.2 31.0 30.6 27.9 27.5 26.2 26.6 25.7 27.0 27.6 28.3 28.6
    1978 12.5 13.6 13.6 13.8 13.9 13.3 12.7 12.7 13.1 13.2 13.2 13.2 (elevation 2500m)
    1978 27.4 29.6 28.2 25.6 25.2 24.2 24.5 24.1 25.2 25.7 26.1 26.2
    1978 23.8 24.7 24.0 22.7 22.9 23.0 23.5 24.1 23.8 23.6 23.5 23.5
    1978 24.1 25.4 24.6 23.6 23.4 23.5 23.8 24.1 24.0 24.0 23.6 23.8
    1978 28.5 29.3 28.2 26.5 27.1 27.2 28.6 28.6 29.0 27.8 27.2 26.9

    Thats all the stations in that grid cell.

    The 80C temps would have about a 60C anomaly from the baseline (like we show in Berkeley).
    The final grid shows an anomaly of ~9C. Since there are 11 stations, that's a total of
    ~100C in anomaly. If 60 of this is one station, the other 10 would total 40, or about 4C
    per station. Need to check that, but the only way I can see them averaging to a ~9C anomaly is if
    the 83C temps have actually made it through the 5 sigma test.

    Looks like I may be wrong about this one station in John’s work.

    What CRU/MET could do.

    1. Adopt a friendlier style for auditing. Basically this is just file format. I know in the past some
    folks have adopted styles that deliberately make it tough to ingest data. They can never make
    it impossible, and it's silly to make it harder.
    2. Put all the datafiles in one place.. source files, “as used stations” “after qc” passes.
    3. Post the code for audit. The first time they posted code, one guy found some bugs; same with
    GISS and Clear Climate Code. NONE of these bugs changed the answer materially, but
    the quality of the code does improve.

  39. Steven Mosher says:

    Looking further.
    The issue comes down to how outliers are defined and removed.
    CRU seem to take a subset of the data and calculate a standard deviation.
    For Apto
    Standard deviations= 0.6 0.6 0.5 11.9 0.5 11.8 12.0 0.6 0.5 0.6 0.6 0.7
    The 11.9, 11.8 and 12 are the result of those 80C numbers
    5x of this allows the 80C in.

    Most QC will include a check for max too high, and then a check whether Fahrenheit
    has been switched for C.

    Not sure if CRU does these checks
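The two failure modes above can be checked directly with the numbers already quoted in this thread (assumed/rounded values for the Apto Otu case): a sigma computed from data that already contains the errors is so inflated that a 5-sigma gate passes them, while a simple Fahrenheit-read-as-Celsius check would have caught them.

```python
normal_april = 24.7   # approximate monthly normal for the station
sigma_april = 11.9    # quoted std dev, inflated by the 80+C values
bad_value = 81.5

# (1) The inflated sigma lets the bad value through the 5-sigma gate.
z = abs(bad_value - normal_april) / sigma_april
print(round(z, 1))    # 4.8 -- inside the gate

# (2) If 81.5 was really degrees F, it converts to a plausible C value.
as_celsius = (bad_value - 32) * 5 / 9
print(round(as_celsius, 1))   # 27.5 -- right in line with the normals
```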

  40. Hyperactive Hydrologist says:

    Surely if the PhD is any good he will publish parts of the thesis as papers? I don’t know what the average is but my wife published four papers, with the possibility of a fifth, based on her thesis.

  41. angech says:

    As said by others he does appear to be that rare breed of skeptic climate scientist.
    I do not think his work disproves AGW, just clarifies that more work needs to be done. With or without the outliers there has been some warming. Is he the guy that picked up the NH/SH grid problem 2 years ago?

  42. Steven,
    Thanks. So, that station may actually be included. It is a pity that the stations used data file is a bit old.

    angech,
    It’s clear that these issues (if they are indeed issues that have not been noticed) make virtually no difference to the results.

  43. dikranmarsupial says:

    Does anyone know whether McLean contacted anyone at CRU to discuss the errors he thought he had found?

  44. dikranmarsupial says:

    SM “1. Adopt a friendlier style for Auditing. Basically this is just file format. I know in the past some folks have adopted styles that deliberately make it tough to ingest data.. They can never make it impossible, and its silly to make it harder”

    Hanlon’s razor* suggests it is more likely that they started work on this many years ago and are using a legacy file format that suited their needs then, and they don’t have a pressing need to change it for their own work. Implying they are using this format for malign reasons is not a good way to get it updated. Science is not particularly well funded, so if you want work done, somebody has to pay the costs of updating the software and researchers that are used to using it the way it works now.

    * or at least more politely worded versions – thirty year old programming styles can easily look a bit “stupid” to modern eyes, but that doesn’t mean they are.

  45. paulski0 says:

    Steven Mosher,

    Yeah, I found the same thing about Apto Otu a couple of weeks ago. The current CRUTEM data for the relevant grid cell shows a large divergence from Berkeley in the offending months, indicating that the error in the raw data was not found by the CRUTEM team and is present in the final dataset.

    In the spirit of pointing out minor errors, I will say it took me much longer to find that out than it should have done because McLean misspelled the station name in his thesis.

  46. Dave_Geologist says:

    Steven

    What Auditors could do.

    1. Adopt a friendlier Auditing style. Basically this is just nit-picking. They can never make 10k readings have zero nits. Is it any wonder scientists and lay observers think Auditors are not acting in good faith? There seem to be plenty of Planks in McLean’s eye, yet you focus on a (possible) Mote in CRUs. If it was so obvious, did BEST pick it up?

    2. Don't make unsupported and probably unfounded accusations of bad faith. While there is abundant evidence of Auditors engaging in bad faith, CRU was multiply investigated and multiply cleared of the false charges against them, and yet the zombie claims continue to be repeated. Compare to M&M's one-in-a-hundred cherry-pick from their Monte Carlo output. Their use of a noise model which massively overstates the autocorrelation length in the observational data, which was the only thing that allowed them to generate a hockey stick from noise. As demonstrated when it disappeared once the correct autocorrelation length was used. OK, maybe M&M made a series of dumb mistakes, but weren't they supposed to be the math whizzes handing the dumb geographers their butts?

    "I know in the past some folks have adopted styles that deliberately make it tough to ingest data. They can never make it impossible, and it's silly to make it harder." Without Evidence (TM), as opposed to paranoid conspiracy theorising, is it any wonder that lay observers think Auditors are not acting in good faith?

    Absent Evidence (TM), methinks you owe CRU an apology.

    3. “Post the code for audit. The first time they posted code 1 guy found some bugs, same with
    GISS and clear climate code. NONE of these bugs change the answer materially, but
    the quality of the code does improve.”

    Oh, they have posted their code after all then. In a form you could understand well enough to modify. And it makes zero difference to the result. Another thing Auditors could do: when you get a null result, say so up front. Or if it’s trivial, just pass on the advice. Replace “CRU made a mistake, I fixed it, and BTW it makes no difference to the conclusions” with “CRU result X is robust, I did find a minor bug and fixed it for them”. You know perfectly well that 99% of the deniosphere won’t read past “CRU made a mistake”, and will add “it was detected by leading BEST scientist Steven Mosher”

  47. Marco says:

    I wonder whether people are perhaps reading too much into Mosher’s comment?

    In the meantime, we’re still waiting for the GWPF review of the temperature data…

  48. dpy said:

    “The vast majority can get by with just running codes written by others “

    No one employed in software engineering talks like that. If you were going in for a job interview and said that you have experience "running codes", they'd be looking for the next candidate.

  49. Dave said:

    “Their use of a noise model which massively overstates the autocorrelation length in the observational data, which was the only thing that allowed them to generate a hockey stick from noise. “

    This is the way I understand it.

    Daily temperature cycles are auto-correlated. Seasonal temperature changes are auto-correlated. ENSO is an auto-correlated signal. The AGW signal is auto-correlated by definition. Precessional and other long-period orbital cycles leading to ice age cycles are auto-correlated. These have auto-correlation lengths that vary from order of days to 1000’s of years. The real issue is how well we can understand each of the auto-correlated behaviors that causes variability in temperature. A red noise model with a specific mean auto-correlation length inserted in this mix is only a place-holder for some other possible but currently unknown source of auto-correlated variability.

    Shorter: white noise is the only non-auto-correlated behavior.

    The way that “accounting for auto-correlation” is applied in statistical terms is to guess at what a possible red-noise component might be. They might be seriously misapplying this technique unless they understand all the potential sources of variability other than red noise that can be accounted for.
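    The distinction drawn above can be made concrete with a minimal sketch. An AR(1) process is the standard toy model of red noise; the correlation length `tau` used here is an arbitrary choice for illustration, not a value taken from any climate dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(n, tau):
    """AR(1) 'red noise' whose autocorrelation decays as exp(-lag/tau),
    i.e. a correlation length of tau time steps."""
    phi = np.exp(-1.0 / tau)
    eps = rng.standard_normal(n)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + eps[i]
    return x

def lag1_corr(x):
    """Sample autocorrelation at lag 1."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

red = ar1(100_000, tau=20)            # correlation length ~20 steps
white = rng.standard_normal(100_000)  # no memory at all

# Red noise is strongly correlated step-to-step; white noise is not.
print(round(lag1_corr(red), 2), round(lag1_corr(white), 2))
```

    The lag-1 correlation of the red series sits near exp(-1/tau), while the white series sits near zero, which is the sense in which white noise is the only non-auto-correlated behavior.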

  50. dikranmarsupial says:

    Marco, perhaps ;o)

    Having said which, auditing this sort of thing is a useful activity, although independent reproduction with similar aims, but different methods is better (if not BEST).

  51. dikranmarsupial says:

    [Edited your link. Do not forget “http”. -W]

  52. Eli Rabett says:

    Perhaps some Aussie here could help Eli. The entire STEM set-up at James Cook looks like a comprehensive college in the US, not a PhD-granting university. One science and engineering group (not a department, the stink of consultants is thick on that one), a physical sciences group (not a department, again consultants), four physicists (they got rid of Ridd), no geoscience group. What are they doing granting doctorates? Where does the support come from? This looks a lot like Penn State Altoona rather than Penn State University Park.

  53. Dave_Geologist says:

    Paul, the correlation length can be derived from the observations. Obviously for a particular noise model. A wrong noise model may introduce problems, but at least the model should be consistent with the observed data. Mann’s model was consistent with the observations. M&M’s model was arguably a slightly superior choice. Although IIRC the conclusion of subsequent experts was that while Mann’s wasn’t the optimum choice, it wasn’t wrong in the sense of invalidating his results. And some argued that the two models were equally valid. However, even if M&M did choose a technically superior model, they used synthetic data with a substantially longer correlation length than the observed data exhibited. So while it may have been a superior model, it was rubbish-in, rubbish-out. When M&M’s model was re-run with synthetic data better matched to the observations, the hockey stick disappeared. With the real-data correlation length, on the order of a couple of decades IIRC, you can only get a short hockey-stick blade before it veers off randomly in another direction. With M&M’s correlation length, approaching a century IIRC, you can get a satisfactorily long blade. Which is fine in an imaginary world unconstrained by real data. Trouble is, we have real data to test that imaginary world against, and it doesn’t match.
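    The general point here — that noise with a longer correlation length much more readily produces a sustained spurious “blade” — can be sketched with synthetic data. This is not a reproduction of either Mann’s or M&M’s actual procedure (no PCA, no proxies); the `tau` values and window length are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1(n, tau):
    """AR(1) red noise with correlation length tau (time steps)."""
    phi = np.exp(-1.0 / tau)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.standard_normal()
    return x

def trend_spread(tau, n=100, trials=200):
    """Std dev of OLS trends fitted to pure noise: a measure of how
    readily the noise model produces a sustained run ('blade') of
    length n by chance alone."""
    t = np.arange(n)
    slopes = [np.polyfit(t, ar1(n, tau), 1)[0] for _ in range(trials)]
    return float(np.std(slopes))

short_tau = trend_spread(tau=20)    # ~two-decade persistence
long_tau = trend_spread(tau=100)    # ~century persistence

# Longer persistence -> much larger spurious trends from pure noise.
print(short_tau < long_tau)
```

    With decade-scale persistence the fitted trends stay small; with century-scale persistence the same fitting procedure routinely produces long, sustained excursions — which is fine until you check whether the observed data actually has that much persistence.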

    Dave, there are well-known auto-correlation lengths of 50 to 100 years in the data that have not been completely explained yet, but are usually associated with the AMO and PDO (and possibly related to LOD variations at the same scale). So it’s not a matter of over-estimating the auto-correlation length, but exaggerating the signal strength impact it may have in the data.

    I think the auditor’s problem is that they picked a random walk model that is unbounded, and not a model associated with Ornstein-Uhlenbeck dynamics, which places an energetic bound on the random walk excursions. A naive statistician such as McIntyre could have easily picked a random walk model with a feasible correlation-length but with the wrong bounds.

    Yet we also know that there is an excursion that is quite large and operates at very long correlation lengths, and that has to do with ice-age cycles.

    My point is that there is still a rationale for understanding the long correlation times associated with natural climate cycles.

  55. The Very Reverend Jebediah Hypotenuse says:

    Steven Mosher sez:

    I pointed out one of his errors on Judiths.
    she invited him to do a post

    One cannot help but wonder –
    If Mosher had pointed out two of McLean’s errors, would Curry have invited him to do two posts?

    Marco sez:

    In the meantime, we’re still waiting for the GWPF review of the temperature data…

    And that paradigm-shattering paper by Anthony Watts.

    Note – The SI unit of measure for quantifying the impact of McLean’s thesis, the GWPF review, and the Watts paper, is, of course, the Monckton.

  56. Steven Mosher says:

    McLean’s eye, yet you focus on a (possible) mote in CRU’s. If it was so obvious, did BEST pick it up?

    I checked one of his claims.
    I was wrong in my initial reaction.
    Looking deeper, I see how the 80 C slipped through.
    Yes, we caught it at Berkeley; we apply a bunch of QC,
    as well as checking whether somebody used Fahrenheit instead of Celsius.

    Does it matter? No. Back in 2010, when I thought I had found McLean-type errors (5-sigma events), I quickly found out that QC, while nice, didn’t make any warming disappear. It has warts.

    The other problem is there may be legit 5-sigma events that you should keep.

  57. Steven Mosher says:

    “Why would one throw out proven [It was a bitterly cold winter] data, Steven?”

    They used a rule. Calculate a std dev using data from 1941 to 90. Then look at all the data and toss 5 sigma events.

    That will toss some real events and keep some impossible ones..

    My preferred method and what we do at Berkeley is compute before and after QC.

    QC rulz are never perfect… or we would call it perfection control.

    I think it’s also worth bearing in mind that you often want some kind of well-defined rule; you don’t want to rely too much on judgement, because that can introduce a bias. The problem with a rule, though, is that it can rarely be perfect. If you make the requirement too strict, you can eliminate good data. If you make it too relaxed, you can include too much bad data. Typically, you try to set it so that it catches enough of the bad data without eliminating too much of the good data. Of course, this requires some kind of judgement (although you can test to see how changing the rule impacts the results), but if you make clear what your rule is, then others can at least check to see how it impacts the analysis.
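    A toy version of the kind of rule under discussion — a standard deviation from a base period, then a k-sigma cut — shows the strict-vs-relaxed trade-off directly. All the numbers here (the anomalies, the implanted error magnitudes, the indices) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical annual anomalies: mostly good data plus a few gross
# errors (e.g. transcription slips of tens of degrees).
years = np.arange(1900, 2019)
data = rng.normal(0.0, 1.0, size=years.size)
bad_idx = [5, 40, 110]
data[bad_idx] += 30.0                 # implanted gross errors

# The rule as described: a std dev from a base period, then toss
# anything beyond k sigma.
base = data[(years >= 1941) & (years <= 1990)]
sigma = base.std()

def flagged(k):
    return set(np.flatnonzero(np.abs(data - data.mean()) > k * sigma))

# A 5-sigma cut catches the gross errors; a 1-sigma cut would also
# throw away lots of perfectly good data.
print(len(flagged(5)), len(flagged(1)))
```

    Making the rule explicit, as here, is what lets others re-run it with a different `k` and see how the threshold choice affects the analysis.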

  59. Everett F Sargent says:

    Given the discussion of QC here … I find it rather remarkable that McLean only found 70 “so-called” errors.

    If the GMST anomaly time series were really of little to no value, one would expect a failure rate of ~50% (coin flip).

    In other words, the number 70 itself has absolutely no context whatsoever.

  60. angech says:

    Steven Mosher says: “Why would one throw out proven [It was a bitterly cold winter] data, Steven?” “They used a rule. Calculate a std dev using data from 1941 to 90. Then look at all the data and toss 5 sigma events.”
    What a relief, that scary Antarctic sea ice loss of 6-8 sigma 2 years ago never really happened. Wipes brow. [only joking].
    No I appreciate your past comments on removing the really odd results. I was just musing that we only think we know the SD of natural variation and we may not have it quite right for the unique ice/water/atmospheric water mix we have. Perhaps we need to reassess extreme weather events outside that date range.

  61. jdmcl says:

    I haven’t seen so many ignorant comments in a long time.
    Some people seem to think they can dictate what universities should accept as PhD topics.
    Some people seem to think that checking on the data on which billions of dollars are spent every year is a waste of time.
    Others are ignorant about universities changing the names of their departments.
    Others seem to think that papers should be judged only by what the person has written in the past or who they associate with (regardless of how incomplete reports of those things are). Others seem quick to accuse but very slow to apologize when they are proved wrong.
    Others try to interpret “70 problems” as 70 errors when in fact most of those 70 issues are mere summaries and the individual errors probably run into the tens of thousands.
    Lift your game people and this could be a decent blog.

  62. jdmcl,

    I haven’t seen so many ignorant comments in a long time.

    You clearly haven’t read many comments on those climate blogs that dispute AGW.

    Some people seem to think that checking on the data on which billions of dollars are spent every year is a waste of time.

    Nope, but it is important to understand the significance of possible errors/issues.

    Others are ignorant about universities changing the names of their departments.

    Quite possibly, but not sure how people are meant to know.

    Others seem to think that papers should be judged only by what the person has written in the past or who they associate with (regardless of how incomplete reports of those things are).

    I think it is worth being aware of what people have done in the past.

    Others try to interpret “70 problems” as 70 errors when in fact most of those 70 issues are mere summaries and the individual errors probably run into the tens of thousands.

    Probably isn’t a very strong claim. Also, there are millions of data points used to produce these global temperature datasets. Even if there are tens of thousands that are potentially in error, it’s still only a fraction of a percent. I also looked at Berkeley and many of those that were highlighted as having issues failed Berkeley Earth’s QC.

    Lift your game people and this could be a decent blog.

    Thanks, I appreciate the feedback.

  63. Joshua says:

    John –

    Any guest posts in the works?

    Here’s something maybe jdmcl can clarify. If the data points listed in Tables 5.3 and 5.4 are the best examples of problem data, why is it that, when I go to the station data (bottom of this page), the third example in Table 5.3 (BARQUISIMETO) appears to be a data point that is indeed flagged in the station-data file used?

  65. Dave_Geologist says:

    So it’s not a matter of over-estimating the auto-correlation length, but exaggerating the signal strength impact it may have in the data.

    Well yes Paul, in the same way that a Fourier transform of real data isn’t just a spike but has non-zero values over a wide range. I could rephrase it as: M&M’s synthetic data had all its power in a place where the real data had almost none of its power. The consequences would be the same.

    Dave, a red-noise power spectrum with no random-walk bound goes as ~1/f^2, while an Ornstein-Uhlenbeck bounded spectrum goes as ~1/(f^2 + a^2), where the factor a cuts off the power at low frequencies. The issue is that an unbounded random walk has huge power at low frequencies, i.e. long time periods. Like I said, this is probably naivety on the part of amateur auditors such as McIntyre who do not understand real physical processes.
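    The difference between the two noise models above can be seen directly in how their variances evolve: an unbounded random walk keeps wandering, while the restoring term in an Ornstein-Uhlenbeck process bounds the excursions. A rough sketch (the damping parameter `a` and series length are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def random_walk(n):
    """Unbounded random walk: power spectrum ~ 1/f^2."""
    return np.cumsum(rng.standard_normal(n))

def ou(n, a=0.05):
    """Discrete Ornstein-Uhlenbeck process: the restoring term -a*x
    bounds the excursions, giving a spectrum ~ 1/(f^2 + a^2) that
    cuts off the power at low frequencies."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = (1.0 - a) * x[i - 1] + rng.standard_normal()
    return x

# Variance of a random walk grows without bound (~t); the OU variance
# saturates at roughly 1/(2a - a^2).
n, trials = 5000, 200
walk_end = np.array([random_walk(n)[-1] for _ in range(trials)])
ou_end = np.array([ou(n)[-1] for _ in range(trials)])

print(walk_end.var(), ou_end.var())
```

    After 5000 steps the random-walk ensemble variance is orders of magnitude larger than the bounded OU variance, which is exactly the excess low-frequency power being described.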

  67. Willard says:

    > You clearly haven’t read many comments on those climate blogs that dispute AGW.

    Presumably not (H/T ScottK at Judy’s):

    MEDIA STATEMENT

    For Immediate release

    COOL YEAR PREDICTED

    It is likely that 2011 will be the coolest year since 1956, or even earlier, says the lead author of a peer-reviewed paper published in 2009:

    Our ENSO – temperature paper of 2009 and the aftermath by John McLean

    The paper, by John McLean, Professor Chris de Freitas and Professor Bob Carter, showed that the Southern Oscillation Index (SOI), a measure of El Nino-Southern Oscillation (ENSO) conditions, is a very good indicator of average global atmospheric temperatures approximately seven months ahead, except when volcanic eruptions cause short-term cooling.

    The lead author, McLean, points to a fall in temperatures that began in October last year, seven months after the abrupt shift to La Nina conditions, and according to last month’s data is still continuing.

    “The delayed response is important for two reasons.” McLean says, “Firstly the high annual average temperature in 2010 was due to the El Nino that ended around.

    […]

    “The historical data also casts serious doubt on the hypothesis that carbon dioxide causes dangerous global warming,” says McLean. “Since 1958 there’s been a 30% increase in atmospheric carbon dioxide and if this had a major influence on temperature we’d expect to see clear evidence of the temperature continually rising above what the SOI suggests it should be, but this is not happening”.

    http://climaterealists.com/index.php?id=7349

    Just Asking Questions hasn’t been harmed in the making of this press release.

  68. Willard says:

    Interesting timing. Newsie at Tony’s:

    Just ahead of a new report from the IPCC, dubbed SR#15 about to be released today, we have this bombshell- a detailed audit shows the surface temperature data is unfit for purpose. The first ever audit of the world’s most important temperature data set (HadCRUT4) has found it to be so riddled with errors and “freakishly improbable data” that it is effectively useless.

    Mosh Made Tony Do It.

    ***

    Interesting update:

    I don’t insult and I don’t accuse without investigation. And if I don’t know I try to ask.

    Awaiting AT’s audit in 3, 2, 1.

  69. The Very Reverend Jebediah Hypotenuse says:


    Lift your game people and this could be a decent blog.

    How touching.

    Dr McLean, the first two citations in your doctoral thesis in Physics are to economy-related articles by Joanne Nova, a “blog-researcher” who appears to have no advanced academic background in any science or non-science discipline, a person whose crank-index on economic theory is surpassed only by her crank-index on the physics of climate science.

    Apparently, you could not be bothered to go to freely available primary sources to support the main claims in your introductory paragraph – a paragraph that you seem to suggest provides a justification for your entire thesis-audit.

    In addition, it’s interesting to note that your first-stated concerns regarding the integrity of the temperature record have more to do with monetary expenditures and policy implications than, say, physics.

    And then there’s this Very Serious Concern:

    I was aghast to find that nothing was done to remove absurd values… the whole approach to the dataset’s creation is careless and amateur, about the standard of a first-year university student.

    – John McLean, quoted in:
    http://joannenova.com.au/2018/10/first-audit-of-global-temperature-data-finds-freezing-tropical-islands-boiling-towns-boats-on-land/

    Tell you what:
    Lift your game, Dr McLean, and you could be a decent scientist.

  70. Willard says:

    It might be a proper time to recall this ClimateBall double bind:

    [Vlad] I was aghast to find that nothing was done to remove absurd values…

    [Estr] These are errors in the raw data files as supplied by the sources named. The MO publishes these unaltered, as they should.

    [Estragon goes on with his life and corrects the data for analysis.]

    [Vlad] They’re tampering with the data!

    How can contrarians lose with such double bind? They can’t:

    Can Contrarians Lose?

  71. verytallguy says:

    [vlad] The science of Global Warming is highly debatable – John McLean’s devastating PhD thesis embarrasses scientists by showing how shoddy their work is!

    [estr] [sighs] The thesis is nonsense. Most “errors” never made it through quality checking and those that did are not material to the conclusions.

    [vlad] See! The debate continues! How can we ruin our economy when scientists can’t even agree amongst themselves!

  72. Everett F Sargent says:

    jdmcl,

    Finding 70 “so-called” problems only gets you almost halfway to a really decent PhD IMHO. :/

    Most people would then do the necessary QC on the 70 by writing their own source code and comparing the two data sets (new code including the 70 “so-called” QC issues, and the original final HadCRUT4 anomaly time series) to quantify their differences.

    I’m quite sure CRU will do that part (the hard part mind you, actual QC quantification) as I’m also quite sure that there will be no significant differences between the two final anomaly time series (either in the grid or globally or anywhere in between).

    Part II of your thesis (1st paper, the other two “so called” to be published) was published in a pay-to-play predatory journal. :/
    http://www.scirp.org/journal/Pay.aspx?JournalID=209

    That makes one E&E, two pay-to-play and one real (with a very nice Comment to that paper from real climate scientists pointing out your own rather poor low frequency filtering (which removes the trend, D’oh!)).

    [Mod: Just going to remove the last part of this comment. Didn’t seem necessary.]

  73. John Hartz says:

    How do we know that “John McLean” is a real person?

  74. Everett F Sargent says:

    I’m pinching two quotes (h/t NS and VTG) …

    “This thesis makes little attempt to quantify the uncertainties exposed by this investigation, save for some brief mention of the impact certain issues might have on error margins, because numerous issues are discussed, and it would be an enormous task to quantify the uncertainties associated with the many instances of each. It has been left to others to quantify the impact of incomplete data, inconsistencies, questionable assumptions, very likely data errors and questionable adjustments of the recorded data.”
    (p. 4 of thesis)

    “I was aghast to find that nothing was done to remove absurd values… the whole approach to the dataset’s creation is careless and amateur, about the standard of a first-year university student.”
    – John McLean
    (origin of this specific quote is unknown AFAIK, but taken as is from JN)

    If a “first-year university student” could more or less do HadCRUT4, but for a PhD student “it would be an enormous task”, then did John McLean receive a JCHS degree?

  75. Everett F Sargent says:

    h/t to The Rev not VTG, sorry about that one.

    John McLean, PhD?

  76. Ken Fabian says:

    I’m not sure why Australia has produced so many pretend climate scientists. Carter, Plimer, McLean, Doug Cotton. Dai Davies is a new name to me, just encountered – one more who disputes the basic science. Needless to say, they appear much beloved by that other notable Australian (former Australian, now US citizen) Rupert Murdoch – or at least by his paid opinionators.

  77. Everett F Sargent says:

    John McLean (circa 12 Oct 2007)

    Click to access McLean_ipcc_evidence.pdf

    “None of the three agencies has allowed an independent audit of its data and methods so we cannot be confident that these issues are properly addressed. The only published papers that discuss the accuracy of the methods for calculating the average temperatures are those written by the respective organisations themselves, a situation that would be deplored in most other fields.”

    John McLean (circa 23 Nov 2010)
    https://jennifermarohasy.com/2010/11/average-temp-anomalies-showing-only-warming-john-mclean/

    “Out of curiosity I created a graph of annual average temperature anomalies based on HadCRUT3 temperature data but omitting 1943-1971. I don’t for a moment believe that the HadCRUT3 data is accurate and reliable, however, I found the graph interesting.”

    John McLean (circa 28 Aug 2010)

    Click to access mclean_we_have_been_conned.pdf

    “The IPCC uses temperature data supplied by the UK’s Hadley Centre for Climate Prediction and the Climatic Research Unit at the University of East Anglia. The CRUTEM3 dataset is used for temperatures over land, HadSST2 for sea temperatures and HadCRUT3 for combined land and sea (i.e. global) temperatures.

    In work for a forthcoming document I have analysed the HadCRUT3 temperature dataset and identified numerous problems with it. Most, if not all, of these problems will be present in the data used by the IPCC, or at least assumptions will have been made in order to try to minimise or remove them.”

    :/

  78. Everett F Sargent says:

    “circa 28 Aug 2010”
    should be
    “circa 18 Aug 2010”

  79. I was hoping John might come back and answer some of our queries. Maybe we can give him an opportunity to do so?

  80. paulski0 says:

    Ken Fabian,

    I’m not sure why Australia has produced so many pretend climate scientists.

    Supply and demand. Australia is easily the world’s largest coal exporter.

  81. Ken said:

    “I’m not sure why Australia has produced so many pretend climate scientists. Carter, Plimer, McLean, Doug Cotton. Dai Davies is a new name to me, just encountered – one more who disputes the basic science”

    That’s the tip of the iceberg of Aussie pseudos. There are many more than this number that have infiltrated various blogs over the years, all with uniquely bizarre traits. Don’t forget that “angech” here is one of them. On Judith Curry’s blog alone, there are a couple of dozen prolific Aussie commenters.

  82. Willard says:

    FWIW, the Murdoch empire started in Australia:

    Media magnate Rupert Murdoch got his start in a chain of Australian newspapers. But few consumers realize today how expansive his empire has become.

    With 84-year-old Murdoch handing over control of daily operations at 21st Century Fox to his son James it’s worth taking a look at what, exactly, fell under Murdoch’s control this year. Below are holdings in which Murdoch’s two big companies, 21st Century Fox and News Corp, hold significant stakes, according to SEC annual report filings for 2014.

    The land down under is coincidentally the biggest coal exporter in the world.

    In any event, ClimateBall contrarians are first and foremost an anglosphere problem:

    Not only is the United States clearly the worst in its climate denial, but Great Britain and Australia are second and third worst, respectively. Canada, meanwhile, is the seventh worst.

    What do these four nations have in common? They all speak the language of Shakespeare.

    https://www.motherjones.com/environment/2014/07/climate-denial-us-uk-australia-canada-english/

    I have no idea why Muricans or Brits would wonder about Aussies, so I’d tread lightly in the following comments, please.

  83. Will the pseudo-science arguments become more and more blatantly wrong now that the Trump administration has taken the lead in asserting obviously false statements as true? They no longer need to rationalize based on anything more than a sliver of truth, so building arguments as per Ned Nikolov may be the new norm. A rational discussion is no longer possible because their intended audience is not us.

  84. Joshua says:

    Paul –

    … now that the Trump administration has taken the lead in asserting obviously false statements as true?

    That’s only one component of the larger strategy.

    I think the larger strategy is best summarized by Trump:

    “There’s no proof of anything,” the president told reporters Tuesday in the Oval Office. “There’s no proof of anything. But there could very well be.”

    Or we could go with Pence:

    ‘It’s inconceivable that there are not people of Middle Eastern descent in a crowd of more than 7,000 people advancing toward our border,’ Pence said.

  85. BBD says:

    Australia? Amateurs. Where is the Australian Monckton? Tell me that.

  86. The Very Reverend Jebediah Hypotenuse says:


    I have no idea why Muricans or Brits would wonder about Aussies

    On nonce, it doth improveth thy spirit to wend down and danceth with ye poor wretches in steerage.


    Where is the Australian Monckton?

    Obama is Kenyan, ergo Monckton is Australian. Next.

  87. Willard says:

    > A rational discussion is no longer possible because their intended audience is not us.

    The audience never was the orthodoxy, but the ClimateBall contrarians in the trenches. They need stuff to lob while mimicking responsiveness. Take JohnM himself:

    Lordy, lordy, lordy! You mean that researchers can’t test hypotheses by making predictions and seeing if they come true or not?

    And please tell us all how the many predictions made using climate models have turned out.

    The answer to the first rhetorical question is that testing predictions isn’t as clear-cut as a naive falsificationist might presume (hint: it’s an inferential matter), and the answer to his second one is that he begs something that is at best misleading (hint: see first hint), e.g.:

    Hansen in that 1988 congressional testimony nailed it, adds Texas A&M scientist Andrew Dessler. “You could have reached an alternative conclusion” based on the science at that time, he says, pointing to the 1990 IPCC conclusion that the observed warming at that point was consistent with global warming evidence, but also with natural variability.

    “He was kind of out on a limb on one end of how you could read the data,” Dessler continued. “But it turned out he was right.”

    That’s a view shared in the video also by climate scientists Eric Rignot of the NASA Jet Propulsion Laboratory and Zeke Hausfather of the University of California Berkeley. Lawrence Livermore National Laboratory’s Ben Santer says the mounting evidence since Hansen’s 1988 testimony clearly shows that natural variability alone “can’t come anywhere close” to causing the actual human-caused warming Hansen had projected.

    https://www.yaleclimateconnections.org/2018/06/judgment-on-hansens-88-climate-testimony-he-was-right/

    ***

    All in all, the Contrarian Matrix is about building a network of distrust. Networks of distrust may be a Good Thing. The one built by ClimateBall contrarians lacks constructiveness. It suffers from being a degenerate network.

    That’ll be the topic of my next post.

  88. John Hartz says:

    Paul: If you have not done so already, I recommend that you read David Roberts’ excellent article…

    Why conservatives keep gaslighting the nation about climate change by David Roberts, Energy & Environment, Vox, Oct 23, 2018

  89. BBD said:
    “Australia? Amateurs. Where is the Australian Monckton? Tell me that.”
    You are right. Plimer is a poor man’s Monckton.

  90. Ken Fabian says:

    We (Australia) do have James Cook University I suppose – providing credentials to another generation of climate science deniers it appears. And yes we have coal. Lots and lots and lots of coal – and nowadays even the coal too deep or crappy to dig up can be exploited as Coal Seam Gas. Enough to give the world dangerous climate change all by ourselves – and enough short sighted self interested miners and supporting members of parliament to actually do it. Plus they have shown they have the agility to blame poor science communication and greenies when we get it.

    Rupert Murdoch still owns the largest part of Australia’s print media and lots of radio and television, and has extraordinary influence over the make-up and the policies of Australian governments – most recently having played a part in deposing an Australian Prime Minister, whose meaningless gestures about climate action and emissions to appease the political middle ground were apparently too much to bear. I understand UK governments have also risen and fallen with his involvement and his direction of editorial positions. Australia’s next biggest media owner btw is heavily invested in coal mining.

    Mr Murdoch does seem much taken by US notions of freedom of the press, i.e. that media owners have the freedom and right to use their power and influence to promote partisan political positions and have the right and freedom to use disinformation for that purpose. Disinformation looks to me like the principal mode for obstructing climate responsibility and accountability.

  91. angech says:

    Ken Fabian says:
    “We (Australia) have coal. Lots and lots and lots of coal – and nowadays even the coal too deep or crappy to dig up can be exploited as Coal Seam Gas. Enough to give the world dangerous climate change all by ourselves –”
    Ken people buy it overseas for heating, transport, agriculture, lighting and food production, clothing and roads.
    We do not do those things, they do. You go ahead and stop them, tell them how bad they are trying to improve their life styles. Well?

  92. Steven Mosher says:

    Giss have a nice page

    https://data.giss.nasa.gov/gistemp/updates_v3/

    Finding nits is a thankless, boring task. It’s nice that GISS acks people.

  93. Steven Mosher says:

    “Others try to interpret “70 problems” as 70 errors when in fact most of those 70 issues are mere summaries and the individual errors probably run into the tens of thousands.”

    Over the course of the HAD gridded land series there are millions of records. I would expect
    well over 200K will be possible outliers. Even with the best QC I expect thousands that will be questionable QC decisions.

    The first order of business is estimating the potential impact of QC. That’s been done.
    You get the “same” answer with and without QC. You get the same answer if you throw in random errors larger than the errors described in his work.

    You get the same answer if you correct the spelling mistakes he found. However, you typically don’t want to correct station spellings, as the misspelling can be an important clue to a document’s lineage. You might in the end do “adjusted” metadata where you correct things like station name, station location, station elevation, etc., but these corrections are better if they are upstreamed.
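    The claim that sparse gross errors barely move the result can be checked on synthetic data. This sketch uses invented numbers throughout (trend, noise level, error rate, error magnitude) purely to illustrate why a fraction-of-a-percent error rate has negligible leverage on a fitted trend:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical record: a warming trend plus weather noise, one value
# per month for 150 years. All numbers are invented for illustration.
n = 150 * 12
t = np.arange(n) / 12.0                       # time in years
clean = 0.01 * t + rng.normal(0.0, 0.5, n)    # 1 C/century + noise

# Corrupt a fraction-of-a-percent of the records with gross errors
# far larger than typical data problems (e.g. unit mix-ups).
corrupt = clean.copy()
bad = rng.choice(n, size=n // 500, replace=False)
corrupt[bad] += rng.choice([-30.0, 30.0], size=bad.size)

def trend(y):
    """OLS trend in degrees C per year."""
    return np.polyfit(t, y, 1)[0]

# The handful of gross errors barely moves the fitted trend.
print(trend(clean), trend(corrupt))
```

    Even with implanted 30-degree errors, the two fitted trends agree closely, because a few points out of thousands carry very little weight in the fit.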

  94. Steven Mosher says:

    Here are all the QC codes for GHCN, as an example of some of the crap you see in the raw data that skeptics seem to love:

    E = Identify different stations that have the “same” 12
    monthly values for a given year (e.g. when all 12 months
    are duplicated and there are at least 3 or more non-
    missing data months, and furthermore values are considered
    the “same” when the absolute difference between the two values
    is less than or equal to 0.015 deg C)

    D = monthly value is part of an annual series of values that
    are exactly the same (e.g. duplicated) within another
    calendar year in the station’s record.

    R = Flag values greater than or less than known world
    TAVG extremes.

    K = Identifies and flags runs of the same value (non-missing)
    in five or more consecutive months

    W = monthly value is duplicated from the previous month,
    based upon regional and spatial criteria (Note: test is only
    applied from the year 2000 to the present, and in general
    only for near real time produced data sources).

    I = checks for internal consistency between TMAX and TMIN.
    Flag is set when TMIN > TMAX for a given month.

    L = monthly value is isolated in time within the station
    record and flagged, when:

    1) a non-missing value has at least 18 missing values
    before AND after in time., or

    2) a non-missing value belongs to a “cluster” of 2
    adjacent (in time) non-missing values, and the cluster of
    values has at least 18 missing values before AND after the
    cluster, or

    3) a non-missing value belongs to a “cluster” of 3
    adjacent (in time) non-missing values, and the cluster of
    values has at least 18 missing values before AND after the
    cluster.

    O = monthly value that is >= 5 bi-weight standard deviations
    from the bi-weight mean. Bi-weight statistics are
    calculated from a series of all non-missing values in
    the station’s record for that particular month.

    S = Flags value when the station z-score satisfies any of the
    following algorithm conditions.

    Definitions:

    neighbor = any station within 500 km of target station.
    zscore = (value - bi-weight mean) / bi-weight standard deviation
    S(Z) = station’s zscore
    N(Z) = the set of the “5” closest non-missing neighbor zscores.
    (Note: this set may contain less than 5 neighbors,
    but must have at least one neighbor zscore for
    algorithm execution)

    Algorithm:

    S(Z) >= 4.0 and < 5.0 and “all” N(Z) < 1.9
    S(Z) >= 3.0 and < 4.0 and “all” N(Z) < 1.8
    S(Z) >= 2.75 and < 3.0 and “all” N(Z) < 1.7
    S(Z) >= 2.5 and < 2.75 and “all” N(Z) < 1.6
    S(Z) <= -4.0 and > -5.0 and “all” N(Z) > -1.9
    S(Z) <= -3.0 and > -4.0 and “all” N(Z) > -1.8
    S(Z) <= -2.75 and > -3.0 and “all” N(Z) > -1.7
    S(Z) <= -2.5 and > -2.75 and “all” N(Z) > -1.6

    T = Identifies and flags when the temperature z-score compared
    to the inverse distance weighted z-score of all neighbors
    within 500 km (at least 2 or more neighbors are required)
    is greater than or equal to 3.0.

    M = Manually flagged as erroneous.

    Quality Controlled Adjusted (QCF) QC Flags:

    A = alternative method of adjustment used.

    M = values with a non-blank quality control flag in the “qcu”
    dataset are set to missing in the adjusted dataset and given
    an “M” quality control flag.

    X = pairwise algorithm removed the value because of too many
    inhomogeneities”

    1. You can be pretty confident this does not catch all bad values
    2. You can be confident it removes some good values.
    3. Applying QC does not change the physics of CO2 or erase the fact that it is warming.
    4. The important estimates of trend and warming since the start of the record are not materially
    changed by the application of QC, or by the shortcomings of QC.
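    The spatial-consistency (“S”) check quoted above is easy to make concrete. Here is a minimal, hypothetical Python sketch of those threshold bands — the function name and inputs are mine, not anything from NCEI’s actual code — just to show the logic:

    ```python
    # Hypothetical sketch of the GHCN-M "S" (spatial consistency) flag,
    # following the threshold bands quoted above.  A station's monthly
    # z-score is flagged when it is anomalous while ALL of its (up to 5)
    # nearest neighbours are near normal.

    def s_flag(station_z, neighbor_z):
        """Return True if the station value should get an "S" flag.

        station_z  -- S(Z), the station's z-score for the month
        neighbor_z -- N(Z), z-scores of the closest non-missing neighbours
                      (the check only runs if there is at least one)
        """
        if not neighbor_z:
            return False  # no neighbour z-scores: algorithm not executed
        bands = [
            # (low, high, threshold): flag if low <= S(Z) < high and
            # all N(Z) < threshold (warm side); mirrored on the cold side.
            (4.0, 5.0, 1.9),
            (3.0, 4.0, 1.8),
            (2.75, 3.0, 1.7),
            (2.5, 2.75, 1.6),
        ]
        for lo, hi, thr in bands:
            if lo <= station_z < hi and all(z < thr for z in neighbor_z):
                return True
            if -hi < station_z <= -lo and all(z > -thr for z in neighbor_z):
                return True
        return False

    # A station 3.2 sd warm while every neighbour is near normal gets
    # flagged; if a neighbour is also warm, the value is left alone.
    print(s_flag(3.2, [0.1, -0.4, 0.7]))  # True
    print(s_flag(3.2, [2.5, -0.4, 0.7]))  # False
    ```

    The point of the neighbour condition is visible here: a lone extreme is suspect, but the same extreme shared with nearby stations looks like real weather and is kept.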

  95. angech says:

    ” the raw data that skeptics seem to love” Nothing wrong with raw data, is there? Surely.
    The only reason I can see for slamming raw data is that there must be something in the raw data that the slammers do not like.
    Any guesses?

  96. angech,
    The point is that the raw data may not – as it stands – be suitable to test the hypothesis being tested. That’s why you need to do checks and corrections.

  97. izen says:

    @-angech
    “Any guesses?”

    The raw data (especially oceans) shows MORE warming than the QC data.

  98. dikranmarsupial says:

    “The only reason I can see for slamming raw data is that there must be something in the raw data that the slammers do not like.”

    If you are going to troll, at least drop the emotive hyperbole and make it a little less blatant. Here is a more reasonable version:

    “The only reason I can see for performing quality control and homogenisation on the raw data is that there must be something in the raw data that would cause analysis of the raw data to be misleading.”

    no kidding…

  99. JCH says:

    The only reason I can see for slamming raw data is that there must be something in the raw data that the slammers do not like.

    In essence, he has just defined his fatal flaw. Trapped in his paper bag forever.

  100. angech says:

    Feeling a bit emotional today, sharemarkets and family. Will take the advice on board again.
    “In essence, he has just defined his fatal flaw. Trapped in his paper bag forever.”
    One of the better paper bags to be trapped in despite comments to contrary.
    Temperature observations are the most likely indicator of our respective positions and continue improving. My views will only ever be substantiated by prolonged pauses or future falls.

  101. JCH says:

    The oceans are preparing:

    To demolish your ridiculous expectations:

    Right through to the end of the month, and beyond:

  102. citizenschallenge says:

    Dave_Geologist says: October 21, 2018 …
    “Plate Tectonics: An Insider’s History Of The Modern Theory Of The Earth” by Oreskes
    Dave your posts are always interesting, often refreshing. As a USA child wow’ed by the plate tectonics ‘revolution’ – I had no idea. Thanks for the tip, will definitely read the book.

  103. izen says:

    @-angech
    “My views will only ever be substantiated by prolonged pauses or future falls.”

    The test of objectivity is not that you rely on evidence to substantiate your views, but that you respond to evidence that refutes them.

    My views will be subject to revision by prolonged pauses or future falls.

    Will your views be up for consideration if the warming trend continues, how many more years will it take ?

  104. Joshua says:


    “My views will only ever be substantiated by prolonged pauses or future falls.”

    An interesting concept, that existing views can only ever be substantiated by future evidence.

  105. dikranmarsupial says:

    How very Popperian! ;o)

  106. Pingback: Why do we do research? | …and Then There's Physics

  107. Ken Fabian says:

    Angech – “Ken people buy it overseas for heating, transport, agriculture, lighting and food production, clothing and roads.
    We do not do those things, they do.”

    Oh yes, Australia’s coal dealers and spruikers do like the drug dealer’s defence. Yet it is a long established legal principle that the supplier does bear a burden of responsibility for the harms arising from use. Especially when done knowingly. And under common law the customers’ enjoyment of benefits does not let the supplier off the hook.

    And then there is that ‘market distortion’; the enduring amnesty fossil fuels enjoy on their externalised costs, i.e. the attractiveness of Australian coal, and the extent of its use, is influenced by institutionalised and legislatively supported dodging of responsibility for the harms.

  108. Dave_Geologist says:

    Thanks citizenschallenge. “Plate Tectonics: An Insider’s History…” is the compilation by insiders at the time (one of whom, Dan McKenzie, instigated another paradigm shift in the numerical understanding of extensional sedimentary basins – although as with plate tectonics, the observational groundwork had been done by others). Of course they’re written with 20/20 hindsight. The other book “The Rejection of Continental Drift: Theory and Method in American Earth Science” is her solo STS effort. You can accept or reject her explanation for the American exceptionalism, but either way the contrast is well presented.

  109. Re: “However, there are a number of other global temperature datasets that account for this issue and produce results that are broadly consistent with the HadCRUT4 data (HadCRUT4’s coverage bias actually leads to it showing slightly less warming than those datasets that do account for this)”

    You’re right to point out that other near-surface temperature analyses help largely validate HadCRUT4’s overall trend (while showing that HadCRUT4 under-estimates recent warming). So that does argue against McLean’s point. I think some other points will help provide further context.

    First, proxy records also confirm the overall warming pattern. For example, take the following paper:
    “Global warming in an independent record of the past 130 years”

    Table 1 of the paper shows a warming rate of 0.043 +/- 0.011 K/decade for 1880 – 1995. That’s consistent with HadCRUT4’s trend of 0.052 K/decade:
    https://www.esrl.noaa.gov/psd/cgi-bin/data/testdap/timeseries.proc.pl?dataset1=HadCRUT4&dataset2=none&var=2m+Air+Temperature&level=1000mb&pgT1Sel=10&pgtTitle1=&pgtPath1=&var2=2m+Air+Temperature&level2=1000mb&pgT2Sel=10&pgtTitle2=&pgtPath2=&fyear=1880&fyear2=1995&season=1&fmonth=0&fmonth2=11&type=1&climo1yr1=1981&climo1yr2=2010&climo2yr1=1981&climo2yr2=2010&xlat1=-90&xlat2=90&xlon1=0&xlon2=360&maskx=0&zlat1=-90&zlat2=90&zlon1=0&zlon2=360&maskx2=0&map=on&yaxis=0&bar=0&smooth=0&runmean=1&yrange1=0&yrange2=0&y2range1=0&y2range2=0&xrange1=0&xrange2=0&markers=0&legend=0&ywave1=&ywave2=&cwavelow=&cwavehigh=&cwaveint=&coi=0&Submit=Create+Plot

    Of course, a more formal assessment would need to compare these two analyses with matching spatial coverage. But this is at least prima facie support for HadCRUT4’s overall warming trend up to 1995. Some other papers on proxy-based trends are also helpful on this point, as are papers that don’t use the instrumental temperature record. See, for instance:

    “Independent confirmation of global land warming without the use of station temperatures”
    “Global and hemispheric temperature reconstruction from glacier length fluctuations”
    “Early onset of industrial-era warming across the oceans and continents”
    “A global multiproxy database for temperature reconstructions of the Common Era”
    “Reconstructing paleoclimate fields using online data assimilation with a linear inverse model”
    “The last millennium climate reanalysis project: Framework and first results”
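    For what it’s worth, trend figures like the 0.043 vs 0.052 K/decade comparison above are just least-squares slopes through annual anomalies. A minimal sketch — the anomaly series below is synthetic, purely for illustration, not real HadCRUT4 or proxy data:

    ```python
    # Sketch of a decadal-trend calculation: ordinary least-squares slope
    # through annual temperature anomalies, reported in K/decade.
    import numpy as np

    years = np.arange(1880, 1996)
    rng = np.random.default_rng(0)
    # Synthetic series: 0.005 K/yr underlying trend plus 0.1 K noise.
    anomalies = 0.005 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

    slope_per_year = np.polyfit(years, anomalies, 1)[0]
    print(f"trend: {10 * slope_per_year:.3f} K/decade")  # roughly 0.05 here
    ```

    As the comment notes, a formal comparison between two such trends would also need matching spatial coverage and an uncertainty estimate on each slope, not just the point values.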

    Second, this seems to be the same McLean who made an incredibly poor temperature prediction that SkepticalScience covered before:


    https://www.skepticalscience.com/comparing-global-temperature-predictions.html

    I’ve always had some doubts about whether someone could make a temperature prediction that ridiculous. But based on the sort of work McLean is now producing, I’m more confident that he could predict something that ridiculous.

  110. Pingback: 2018: A year in review | …and Then There's Physics
