Watt about John Cook?

John Cook, one of the authors at Skeptical Science, has developed a survey to measure the consensus in climate research. Anthony Watts over at Watts Up With That (WUWT) has already decided that it is a fraudulent survey designed to be biased from the start.

One of the reasons is apparently that someone called Brandon, who writes for The Blackboard, tried to be generous. He asked John Cook to explain the method behind the survey and got the following response:

I use an SQL query to randomly select 10 abstracts. I restricted the search to only papers that have received a “self-rating” from the author of the paper (a survey we ran in 2012) and also, to make the survey a little easier to stomach for the participant, I restricted the search to abstracts under 1000 characters. Some of the abstracts are mind-bogglingly long (which seems to defeat the purpose of having a short summary abstract, but I digress). So the SQL query used was this:
SELECT * FROM papers WHERE Self_Rating > 0 AND Abstract != '' AND LENGTH(Abstract) < 1000 ORDER BY RAND() LIMIT 10

Brandon’s interpretation is that there are about 12,000 papers, from which John Cook selects those that have been self-rated by their author and have an abstract of fewer than 1000 characters. From these papers he selects 10 whose abstracts the participant is asked to assess. Brandon then concludes that the sample is therefore much smaller than John Cook suggests and hence that John Cook is lying (a little ironic, given that his judgement of whether or not John Cook is lying is based on information provided by John Cook himself).

Anyway, I interpreted John Cook’s response differently – although I could be wrong, and this could be cleared up very easily, so maybe someone who knows better could clarify. If you go to Web of Knowledge and search for papers published between 1991 and 2011 on the topics of “global warming” or “climate change”, you find about 73,000 papers. I assumed that the 12,000 John Cook is talking about are those with abstracts shorter than 1000 characters that have also been self-rated by their authors. This is a pretty large sample, so the size seems fine. The only problem would be if there were some reason why papers with short abstracts that have been self-rated are not a representative sample. I can’t see one, but maybe there is.

Alternatively, if John Cook does mean that he is selecting from the 12,000 – and therefore that the actual sample (those with abstracts of fewer than 1000 characters that have also been self-rated) is smaller than 12,000 – then it would be good to know how big the actual sample is. It may still be a perfectly fine sample of papers. I’ve done the survey and I recommend that others do it too, and that they do it honestly. I must admit that I didn’t quite understand what I was doing at first, so I would recommend making sure you understand what the survey is asking you to do and reading the paper abstracts carefully.
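This could presumably be settled with a single count against the same table. As a sketch – assuming the papers table and columns from John Cook’s query above (MySQL, since the original uses RAND()) – something like:

SELECT COUNT(*) AS pool_size
FROM papers
WHERE Self_Rating > 0          -- self-rated by an author
  AND Abstract != ''           -- abstract is present
  AND LENGTH(Abstract) < 1000; -- abstract under 1000 characters

If that count came back in the thousands, the sample would presumably still be fine; if it were only a few hundred, the concern above would have more force.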


10 Responses to Watt about John Cook?

  1. Rachel says:

    I must admit that my interpretation was the same as Brandon’s – that his sample was taken from the 12,000 – but I can see that this might not be correct. Either way, it doesn’t mean very much without knowing the sample size. It could be that his query returned 10,000 papers or only 10. I happen to know it is at least 20, because I took the survey and so did my husband (Ben), and we were given completely different abstracts.
    Strangely, it gave me an average rating for my analysis and then the average rating the authors gave their own papers. But it didn’t do this for Ben.

  2. Yes, I’m starting to feel the same as you – that Brandon’s interpretation might be correct (although his accusation of lying is somewhat extreme). It would be useful to know how many papers satisfy the conditions of his query. If it were a couple of hundred, I might be concerned that the sample was too small. If it is still many thousands, that would probably be fine – unless one can show that the query conditions somehow skew the sample.

  3. MikeH says:

    I am not sure I am following your logic. The sample size of interest is the number of respondents, not the number of papers.

  4. Okay, yes, the number of respondents is important, but so is the number of papers and whether or not they are a representative sample of all possible papers from the chosen period. Ideally we would want (as far as I can tell) a large sample of papers and a large number of people assessing those papers. I was simply commenting on whether or not Brandon’s accusation that John Cook was lying had any merit.

  5. In fact, I think it is really the number of papers that is important. The study is trying to determine whether or not there is “consensus” amongst papers published on “climate change” or “global warming”. Hence we want a sample of papers that is a reasonable representation of all such papers published in the relevant period. We’re not testing the respondents; we’re using them to determine the consensus amongst the papers. Of course, the study would need a large number of respondents so as to get through all the papers and (presumably) to assess papers more than once, so as to check that there is agreement amongst the respondents.

  6. Marco says:

    I don’t think it matters for the study at hand whether the sample is representative of all possible papers (although information about that will likely be in the ERL paper that will be published soon – see https://skepticalscience.com/Be-part-of-landmark-citizen-science-paper-on-consensus.html – and note that the search term was not “climate change” but “global climate change”, which will reduce the number of papers).

    From my reading of the survey, its aim is to determine how a large group of readers understands an abstract of a scientific paper, versus the expert-based opinion of, amongst others, the authors themselves. The reason to have a large number of papers is to prevent too much overlap between respondents – some of the papers will have been discussed on websites, which may skew how people understand them. Also, people talk to each other and may discuss a particular paper and how it should be read, giving a “consensus” opinion on the same abstract rather than two individual responses. With one thousand papers, duplication is already limited, even amongst people who are in contact. You really do not need 12,000 papers to make this anything more than a very, very minor potential problem, so it’s the proverbial fornication with Formicidae (as we say where I come from) – complaining about something, anything, to discredit whatever the outcome will be.
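    To put a rough number on it (my own back-of-envelope, assuming each respondent is shown 10 abstracts drawn uniformly at random from a pool of N papers): the expected number of abstracts two respondents share is 10 × (10/N) – about 0.1 of an abstract for N = 1,000, and under 0.01 for N = 12,000.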

    So far, it looks as though the outcome will actually show that ‘random’ people see less “consensus on AGW” in the abstracts than the authors themselves do (based on the comments I’ve read, where people invariably arrive at a lower endorsement than the author’s).

  7. Okay, thanks. Maybe I’ve misunderstood the goal of the survey – although, having read it again, it isn’t that clear whether the goal is to determine how a random sample of people assess these papers, or to use the people to assess the papers. Maybe it is a combination of the two. If, however, it is indeed to check how a large sample of people assess these papers, compared to how they’re assessed by the authors, then the important issue is the number of respondents rather than the number of papers (although you would still need a decent sample of papers). This would seem to make Brandon’s accusation even more extreme, though.

  8. Marco says:

    Well, there’s no point in ranking these papers using the public as a goal in itself, since that ranking has already been done! The aim must therefore be a comparison. To me that would make most sense, and it is an interesting aspect with implications much wider than climate science alone: the way “normal” people with an interest in the field assess the science versus the way the actual authors and other experts in the field assess it.

  9. Yes, that makes sense. I think I had assumed that this was a bit like the Galaxy Zoo project, in which the public are used to help with a study (citizen science). If the papers have already been ranked by experts, then that shouldn’t really be necessary. Although, in climate science there do seem to be a lot of people who appear to trust the views of the general public more than they trust the views of the experts 🙂

  10. This follow-up post on WUWT seems to be really missing the point. It does seem – based on the clarifying comments above – that the goal of the survey is to compare how the general public would assess these papers with how they were assessed by their authors. Hence, once you complete the survey you’re given a score and told how this compares to the score given by the authors. WUWT seem to see this as further evidence of a bias in the survey, when in fact it might be exactly the point of the survey. The author of the WUWT post goes on to say

    Maybe the full reduction is to papers that not only have short abstracts but were also self-rated by authors.

    Well, yes! That’s precisely what John Cook said in his email response to the questions posed by Brandon Shollenberger, which was included in the post by Brandon that is linked from an earlier WUWT post.
