Watt about four key charts?

I don’t look at Anthony Watts’s blog, Watts Up With That (WUWT), very often, but I glanced at it today and noticed a guest post called four key charts for a climate change skeptic. Anthony Watts’s preamble says

Skeptics often get asked to show why they think climate change isn’t a crisis, and why we should not be alarmed about it. These four graphs from Michael David White are handy to use for such a purpose.

However, the rather amazing thing about the post (okay, maybe not that amazing) is that each one of the four key charts is deceptive and misleading.

The first chart purports to show 10000 years of climate change. Not only is it from a single site (an ice core from central Greenland – GISP2), but it is also presented as if it extends all the way up to today. However, the GISP2 ice core data are given in years before present (BP), the final data point is for 95 years before present, and before present in ice core data is relative to 1950, not to now. This dataset therefore ends in 1855, so not only is it for a single site (which would be expected to show more variability than the globe as a whole), it doesn’t even show any of our recent warming. If you consider what has happened in central Greenland since the mid-1800s, there appears to have been substantial warming. This Skeptical Science post explains all of this in quite some detail.
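For anyone who wants to check this themselves, here’s a minimal sketch (my own illustration, not anything from the WUWT post) of the BP-to-calendar-year conversion, assuming you’ve downloaded the Alley (2000) GISP2 temperature reconstruction as a simple two-column text file (the filename and column layout are assumptions):

```python
# A minimal sketch: convert GISP2 ice-core ages, reported as years
# "before present" (BP, where "present" means 1950 by convention), into
# calendar years to see when the record actually ends. Assumes a
# two-column text file: age in thousands of years BP, temperature in C
# (file name and layout are assumptions).

import numpy as np

BP_REFERENCE_YEAR = 1950  # "before present" is defined relative to 1950

age_kyr_bp, temp_c = np.loadtxt("gisp2_temperature.txt", unpack=True)

calendar_year = BP_REFERENCE_YEAR - age_kyr_bp * 1000
print(f"Most recent data point: {calendar_year.max():.0f}")
# With a youngest age of roughly 0.095 kyr BP (95 years before 1950),
# this prints 1855, i.e. the record stops before any modern warming.
```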

The second chart is intended to illustrate that climate models have failed, by comparing model projections with observations. However, the chart indicates that we’ve only observed about 0.2°C of warming since 1980, and this is simply not true. Both surface and satellite datasets suggest that we’ve probably had at least 0.6°C of warming since 1980, which actually compares quite well with models. If anyone wants a fair comparison between models and observations, there are Realclimate posts for the satellite and surface datasets. The comparison is clearly far better than the chart presented in the WUWT post.
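If you want to check the rough magnitude yourself, here’s one simple, illustrative way to estimate the warming since 1980 from any of the published annual anomaly series (the filename and column names below are assumptions, and a linear trend is just one reasonable choice):

```python
# A rough, illustrative check of "how much warming since 1980"
# (not the method used in the post). Assumes a CSV of annual global-mean
# anomalies with columns "year" and "anomaly"; any of the major datasets
# publish something equivalent.

import numpy as np
import pandas as pd

df = pd.read_csv("annual_anomalies.csv")          # columns: year, anomaly
recent = df[(df["year"] >= 1980) & (df["year"] <= 2016)]

# Ordinary least-squares trend over 1980-2016
slope, intercept = np.polyfit(recent["year"], recent["anomaly"], deg=1)
warming = slope * (recent["year"].max() - recent["year"].min())

print(f"Trend: {slope * 10:.3f} C per decade")
print(f"Implied warming 1980-2016: {warming:.2f} C")
# For the major surface datasets this comes out far closer to the
# ~0.6 C quoted above than to the ~0.2 C implied by the chart.
```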

The third chart is rather bizarre as it simply illustrates what happens if you change the y-axis scale. You can make the scale large so as to make the warming appear small, even though nothing has actually changed. The post actually says

To make your point or hide the truth you may change the representation of the data. Both of these charts show the same numbers.

It almost seems as if the author is suggesting playing with the y-axis scale so as to change the appearance of the warming. This seems blatantly dishonest, but maybe one should at least applaud the author for their openness.
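In case it isn’t obvious just how easy this trick is, here’s a tiny illustration with made-up anomaly values; the two panels plot exactly the same numbers:

```python
# The same numbers, two different impressions: a small matplotlib sketch
# using made-up data, purely to illustrate the y-axis trick.

import matplotlib.pyplot as plt
import numpy as np

years = np.arange(1880, 2017)
anomaly = 0.008 * (years - 1880) + 0.1 * np.random.randn(years.size)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharex=True)

ax1.plot(years, anomaly)
ax1.set_ylim(-0.5, 1.5)   # axis matched to the data: the warming is obvious
ax1.set_title("Sensible y-axis")

ax2.plot(years, anomaly)
ax2.set_ylim(-10, 110)    # absurdly wide axis: the same warming looks flat
ax2.set_title("Same data, stretched y-axis")

for ax in (ax1, ax2):
    ax.set_xlabel("Year")
    ax.set_ylabel("Temperature anomaly (C)")

plt.tight_layout()
plt.show()
```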

The final chart is intended to illustrate the banality of climate change by comparing the last 140 years with the last 10000 years. The implication is that the climate has always changed and that the magnitude of the change we’ve recently experienced is nothing unusual. The comparison, however, is between global temperatures and – again – data from a single ice core in central Greenland. We expect much more variability locally than globally, so the comparison is clearly not fair. Furthermore, both datasets are plotted on the same graph, which makes the modern warming appear very slow in comparison to past changes.

Therefore, these clearly are not key charts for anyone who wants to be genuinely skeptical. If one is genuinely skeptical, then one would like to see charts that don’t deceive and misrepresent. It’s one reason why the term skeptical is clearly not appropriate for anyone who promotes such charts. One could put inverted commas around skeptical, as that would indicate a form of pseudo-skepticism, but even that seems generous. Other labels are, however, often criticised for being a form of unpleasant name-calling. However, if some are going to knowingly promote misleading charts, maybe they can’t really expect any better. It’s not as if doing so would somehow hamper genuine dialogue, since genuine dialogue with those who promote such charts is almost certainly impossible.


A new baseline?

Ed Hawkins and colleagues have a new paper called Estimating changes in global temperature since pre-industrial times, which Ed also discusses in this post. The basic suggestion seems to be that we should probably be defining pre-industrial as the period 1720-1800, rather than as the period 1850-1900, which is what it is often assumed to be. As I understand it, this is largely because our emissions probably started in the mid-1700s, rather than the mid-1800s, and, consequently, there is some warming that is missed if you define the baseline as 1850-1900, rather than as 1720-1800.
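Operationally, changing the baseline is just a matter of which reference period you subtract. A minimal sketch of my own (and, of course, the instrumental record doesn’t actually extend back to 1720, which is partly why the paper has to estimate the offset indirectly):

```python
# What "changing the baseline" means in practice: anomalies are offsets
# from the mean of some reference period, so re-baselining is just a
# subtraction. Assumes `anoms` is a pandas Series of annual global-mean
# anomalies indexed by year (illustrative only).

import pandas as pd

def rebaseline(anoms: pd.Series, start: int, end: int) -> pd.Series:
    """Return anomalies expressed relative to the mean over [start, end]."""
    reference = anoms.loc[start:end].mean()
    return anoms - reference

# e.g. express anomalies relative to an 1850-1900 baseline (for any
# reference period the series actually covers):
# anoms_vs_1850_1900 = rebaseline(anoms, 1850, 1900)
```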

This has, however, caused some confusion, because some are suggesting that this new baseline means that we’re closer to the 2°C limit than we realised, while others claim that this is not the case because the limit was defined with respect to a known baseline, and so doesn’t change. However, it almost appears as though no one is entirely clear about the baseline with respect to which the 2°C limit was defined.

Sometimes, however, a temperature target is also associated with a carbon budget, which is the total amount of carbon we can emit while still giving us a certain chance of remaining below that temperature. As far as I can tell, this is determined relative to when we started emitting, so if the temperature target is relative to a different period, then this would seem to suggest that the temperature target and the carbon budget are not entirely consistent. However, given that the difference is of order 0.1°C and we’re unlikely to set a target more finely than 0.5°C, it’s not clear that we would necessarily change anything. Would we really change the target to 2.1°C if we think that 2°C from 1850-1900 is the “right” target and that there was 0.1°C of warming between when we started emitting and this baseline period? I can’t see why.

Something to bear in mind is how much of the carbon budget we have left. As this Carbon Brief post shows, we have about 25 years left at current emissions if we want a 50% chance of staying below 2°C. This seems a rather tough task, and a large number of people appear to think it extremely unlikely that we will avoid 2°C of warming. If so, why are people arguing about what the correct baseline should be when we are unlikely to meet the target whatever baseline we use? It seems to me that what we should be doing now does not depend on whether we use 1850-1900 or 1720-1800.
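The “years left” arithmetic is simply the remaining budget divided by the current emission rate. With illustrative numbers (roughly consistent with the figure quoted above, but not taken from the Carbon Brief post):

```python
# Years left = remaining budget / annual emissions. The numbers below are
# assumptions for illustration: a remaining budget of ~1000 GtCO2 for a
# ~50% chance of staying below 2 C, and current emissions of ~40 GtCO2/yr.

remaining_budget_gtco2 = 1000.0          # assumed remaining budget
current_emissions_gtco2_per_year = 40.0  # assumed current annual emissions

years_left = remaining_budget_gtco2 / current_emissions_gtco2_per_year
print(f"Years of current emissions before the budget is exhausted: {years_left:.0f}")
# About 25 years, which is why arguing over a ~0.1 C baseline shift
# changes little about what we would need to do now.
```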

This isn’t to suggest that the paper isn’t interesting, or that being clearer about the baseline wouldn’t be a good idea. I’m just not clear as to what difference it really makes. If we’re already at a stage where we’re going to miss a target however we define the baseline, then, apart from some extra clarity, I really can’t see what overall difference this makes. I could, of course, be missing something, so feel free to point it out in the comments if I am.


Guest post: Do ‘propagation of error calculations’ invalidate climate model projections?

This is sort of a guest post by Patrick Brown. Patrick contacted me to ask if I’d be willing to highlight a video that he made to discuss a suggestion, by someone called Pat Frank, that ‘propagation of error calculations’ invalidate climate model projections. I first noticed this when Pat Frank had a guest post on Watts Up With That (WUWT) called Are climate modelers scientists? (the irony of this title may become apparent). He also presented a poster at the 2013 AGU meeting, gave a talk at the Doctors for Disaster Preparedness meeting, and has a video that is linked to in Patrick Brown’s introduction below; his ideas were then discussed in a recent magazine article titled A fatal flaw in climate models. Just for background, what he is suggesting is that there is a large cloud forcing error that, when propagated through the calculation, produces such a large uncertainty that climate model projections are completely useless. I won’t say any more, as Patrick’s video (below) explains it all. It’s maybe a bit long, but it covers quite a lot of material, explains things very nicely, and I found it a very worthwhile watch. The sketch below gives a very simplified flavour of the kind of calculation at issue; Patrick Brown’s post then follows.
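This is my own simplified illustration, not Pat Frank’s actual method or numbers; the per-step uncertainty is purely illustrative, and Patrick’s video explains why this treatment of a fixed base-state error is problematic:

```python
# A deliberately simplified sketch of the kind of calculation at issue:
# if a fixed model error is treated as an independent random error that
# accumulates every simulated year, root-sum-square propagation makes the
# claimed uncertainty grow like sqrt(N), swamping the projected warming.

import numpy as np

sigma_per_step = 1.0   # illustrative per-year temperature uncertainty (C)
n_years = 100

propagated = sigma_per_step * np.sqrt(n_years)
print(f"Claimed uncertainty after {n_years} years: +/- {propagated:.1f} C")
# +/- 10 C after a century, vastly larger than the projected warming
# itself, which is essentially the basis of the claim being examined.
```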

Do ‘propagation of error calculations’ invalidate climate model projections?

As a climate scientist I am often asked to comment on videos and writings that challenge mainstream views of climate science. Recently, I was asked for my thoughts on some claims made by Patrick Frank regarding ‘propagation of error’ calculations and climate models. I took a look at Dr. Frank’s claims and considered his arguments with an open mind. As I reviewed Dr. Frank’s analysis, however, I began to feel that there were some serious problems with his methodology that end up totally undermining its usefulness. I outline the issues that I have with Dr. Frank’s analysis in the video below.

Links: The same video on Patrick Brown’s blog.


Clutching at straws GWPF style

Since I have a few minutes spare, I thought I would highlight another laugh-out-loud post from the Global Warming Policy Forum (GWPF). It’s about Arctic sea ice growing back to 2006 levels. Wow, amazing, what a turnaround after spending a reasonable fraction of the past year at record lows.

Credit: National Snow and Ice Data Center

I thought I would go and look at the data. The figure on the right shows the sea ice extent for 2012 (dashed line), 2016 (red), 2017 (light blue) and 2006 (purple), together with the 1981-2010 average and its ±2 standard deviation range. It does indeed show that the 2017 sea ice extent is the same as in 2006 on 23 January, and only on 23 January. In other words, the GWPF claim is based on the sea ice extent on a single day in 2017 being the same as it was on the same day in 2006. Do they not understand the concept of variability? You can treat that as rhetorical.
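If anyone wants to see for themselves how cherry-picked a single-day comparison is, here’s a rough sketch, assuming you’ve downloaded the NSIDC daily Northern Hemisphere extent data and tidied it into a CSV with numeric Year, Month, Day and Extent columns (a simplification of whatever format you actually download):

```python
# A quick, illustrative comparison: one day versus a monthly mean.
# Assumes a tidy CSV of NSIDC daily NH sea ice extent with numeric
# Year, Month, Day, Extent columns (column names are assumptions).

import pandas as pd

df = pd.read_csv("nsidc_daily_extent.csv", skipinitialspace=True)
df.columns = [c.strip() for c in df.columns]

def extent_on(year, month, day):
    row = df[(df["Year"] == year) & (df["Month"] == month) & (df["Day"] == day)]
    return float(row["Extent"].iloc[0])

# The GWPF comparison rests on one day...
print("23 Jan 2017:", extent_on(2017, 1, 23), "million km^2")
print("23 Jan 2006:", extent_on(2006, 1, 23), "million km^2")

# ...whereas comparing, say, full-January means for the two years
# gives a rather different impression.
for yr in (2006, 2017):
    jan_mean = df[(df["Year"] == yr) & (df["Month"] == 1)]["Extent"].mean()
    print(f"January {yr} mean extent: {jan_mean:.2f} million km^2")
```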

I do find it hard to believe that those involved do not get just how ridiculous such a claim actually is, so I have to assume that this is partly to deceive those who might not, and partly just clickbait. I fully expect to see claims elsewhere that Arctic sea ice has grown back to 2006 levels, with links back to this GWPF post. As I’ve said before, sometimes all you can do is laugh.


The warmest year…again

I haven’t, yet, written anything about 2016 becoming the warmest year on record. That’s partly because it has appeared virtually certain for a few months now that it would be, and partly because it’s been extensively covered elsewhere. There’s Realclimate, Sou, Stoat, Tamino and Carbon Brief, to name but a few. Essentially, 2016 is a record in all of the major surface temperature datasets: NASA, NOAA, Berkeley, and HadCRUT. This is also the third year in a row in which global surface temperatures have broken the record, something that has not happened before since records began.

By adjusting for ENSO events, it can be shown that 2016 would still be a record in the NASA and Berkeley datasets, but not in the HadCRUT and NOAA datasets. This is, however, mainly because the latter two datasets don’t cover the Arctic as well as the former two, and the Arctic has been particularly warm. There seems to be a bit of a fuss about the role played by the recent ENSO events, but I think that rather misses the point; the last three years have each been records. Removing the effect of ENSO (and, in some cases, solar and volcanic forcing) doesn’t change this; it simply illustrates the likely underlying anthropogenic trend. As this Realclimate post illustrates, that we’re continuing to break records is itself indicative of an underlying trend; if the climate were stationary, we’d expect the number of records to decrease with time, not increase.
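The point about records in a stationary climate is easy to check: if annual values were independent and identically distributed, the chance that year n sets a new record is just 1/n, so records should become rarer with time. A quick Monte Carlo illustration (my own, not taken from the Realclimate post):

```python
# In a stationary climate, modelled here as i.i.d. annual values, the
# probability that year n sets a new record is 1/n, so records should
# get rarer with time. A quick Monte Carlo check of that expectation.

import numpy as np

rng = np.random.default_rng(0)
n_years, n_trials = 137, 20_000        # ~length of the instrumental record

series = rng.standard_normal((n_trials, n_years))
running_max = np.maximum.accumulate(series, axis=1)
is_record = series >= running_max       # True where a new record is set

record_prob = is_record.mean(axis=0)
print("P(record) in year 10:", record_prob[9])    # ~0.1, i.e. 1/10
print("P(record) in year 100:", record_prob[99])  # ~0.01, i.e. 1/100
# The chance of three records in a row late in a stationary series is
# tiny, which is why the recent run of record years points to a trend.
```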

It seems to me that there is also a chance that we could well be heading for a period of accelerated warming. I might regret suggesting this, but Gavin Schmidt seems to suggest the same in this article. We appear to have had a period of slower-than-expected warming, and we can’t simply build up an ever-increasing planetary energy imbalance; surface temperatures will eventually have to increase to close the energy gap. I guess it’s possible that the recent warm years might have closed the energy gap enough that we won’t see much additional acceleration. On the other hand, there are indications that the pattern of sea surface warming, in particular differential warming across the Pacific, has led to more negative cloud feedbacks and potentially contributed to the slower surface warming. Maybe this can continue, but I don’t know how long one can sustain differential warming across a major ocean basin. It will be interesting to see what the ocean heat content does in the coming years.

Anyway, that’s about all I was going to say. I suspect the next few years are going to be interesting, for many different reasons. It will be intriguing (although probably also rather frustrating) to see all the various ways in which these recent records will be dismissed, and what will be promoted when 2017 fails to be another record, which seems quite likely.


Eddington and the first test of General Relativity

Thanks to Steven Mosher on Twitter, I came across an article that discusses Arthur Eddington’s attempt to test Einstein’s Theory of General Relativity. The basic story is that Newton’s Theory of Universal Gravitation assumed that gravity was a force acting between two masses, and that this force acted instantaneously. In 1915, however, Albert Einstein proposed that rather than gravity being a force that acts instantaneously across distance, what actually happens is that masses curve spacetime, and this then influences the behaviour of all other masses in the universe; gravity is then a manifestation of this curvature of spacetime.

Credit: ESA/Hubble & NASA

One consequence of General Relativity is that if light passes close to a massive body, it will be deflected. The massive body essentially acts like a lens and, hence, this is often referred to as gravitational lensing, an example of which is shown in the image on the right. The bright orange object in the centre of the image is a massive elliptical galaxy. The blue horseshoe is an image of a much more distant galaxy that lies almost directly behind the elliptical galaxy and whose light has been deflected to produce a horseshoe-like image.

When Einstein first proposed his theory of General Relativity it was not easy to test, as the large telescopes we have today didn’t exist at that time. One way to do so, however, was to observe stars close to the limb of the Sun. The light from these stars would pass very close to the Sun and be deflected. Doing this required making observations during a solar eclipse and then making comparison observations at a different time, to see if the light from these stars was indeed deflected when it passed close to the limb of the Sun. To be clear, if you treat light as a particle, Newtonian gravity would also predict a deflection, but it is smaller than that predicted by General Relativity.
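For reference, the deflection General Relativity predicts for light grazing a mass M at impact parameter b is 4GM/(c²b), while the Newtonian particle treatment gives exactly half that. A quick back-of-the-envelope calculation for light grazing the Sun:

```python
# The numbers the 1919 expeditions were chasing: the GR deflection for
# light grazing a mass M at impact parameter b is 4GM/(c^2 b); treating
# light as a Newtonian particle gives exactly half that.

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m (impact parameter for grazing light)
C = 2.998e8          # speed of light, m/s
RAD_TO_ARCSEC = 206265.0

deflection_gr = 4 * G * M_SUN / (C**2 * R_SUN) * RAD_TO_ARCSEC
deflection_newton = deflection_gr / 2

print(f"General Relativity: {deflection_gr:.2f} arcsec")      # ~1.75 arcsec
print(f"Newtonian particle: {deflection_newton:.2f} arcsec")  # ~0.87 arcsec
```

The difference the expeditions had to resolve photographically was therefore less than an arcsecond, which is part of why the analysis was so delicate.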

In 1919, two groups went to try and test General Relativity during a solar eclipse. One, including Arthur Eddington, went to Principe, off the coast of Africa, and another went to Sobral, in Brazil. The story of these expeditions, and the results of their observations, is what the article I mentioned earlier is about. What makes it interesting is the claim that Arthur Eddington was a supporter of General Relativity and essentially massaged the analysis so as to produce a result that was consistent with it. The suggestion is that if he had been honest he would have presented an inconclusive result, rather than one that was seen as confirming Einstein’s theory.

I had heard something like this before, so seeing this made me think it might be an interesting thing to discuss. How do you judge someone in such a circumstance? It was a long time ago, and they’ve been proven correct, but they potentially fiddled their results so as to appear to have confirmed what is now clearly one of the greatest scientific breakthroughs of the 20th century. They were just lucky that their intuition turned out to be correct. However, before doing so I thought I would look into this a bit more, and came across a paper called Not Only Because of Theory: Dyson, Eddington and the Competing Myths of the 1919 Eclipse Expedition. It argues that

a close examination of the views of the expedition’s organizers, and of their data analysis, suggests that they had good grounds for acting as they did, and that the key people involved, in particular the astronomer Frank Watson Dyson, were not biased in favor of Einstein.

Dyson, the Astronomer Royal at the time, was the principal organiser and director of the two expeditions.

The claims against Eddington include that the results from his observations taken in Africa – which were consistent with General Relativity – were biased, and that he unjustifiably argued against some of the results obtained from the observations in Brazil, which were more consistent with the Newtonian prediction than with that of General Relativity. It turns out, however, that in 1979 (60 years after the original expeditions) some of the observations were reanalysed, the results of which were published in this paper. All of the reanalysed observations produced results consistent with General Relativity, even those that had originally been more consistent with the Newtonian prediction, and produced an average that was within one standard deviation of the General Relativity prediction.

So, it seems that maybe the analyses that produced results consistent with General Relativity were not necessarily biased, and that there was some justification for discounting some of those that were not consistent with it. Of course, I don’t know whether or not there are valid criticisms of the 1919 analysis, but I think the big-picture issue here is more subtle than simply completely right versus flawed and wrong. I think it’s worth bearing in mind that any form of cutting-edge research is difficult. Researchers may be using methods that are new and not fully tested. They may have to make decisions about the analysis that require some amount of subjective judgement. It’s particularly difficult when dealing with a primarily observational area, like astronomy, where there may be factors beyond their control, so even very careful planning doesn’t guarantee that they won’t later encounter unexpected problems.

It’s possible that it may later become clear that some of the judgements were poor, or that the analysis method wasn’t optimal. However, it’s far easier to recognise this in retrospect than in advance. Science is a process in which we learn both from our mistakes and from our successes, and in which we develop our understanding over time; we don’t regard something as confirmed after a single study, even if it is by some of the leading researchers of the time. Of course there are some things, like outright fraud and plagiarism, that are clear indicators of scientific misconduct; making a judgement that others might later disagree with doesn’t, however, typically qualify.

So, it seems to me that the main message in this story is how messy science can be. It can involve making risky measurements or observations to test new hypotheses. It can involve somewhat subjective judgements, made by people who cannot be completely free from bias. It can involve developing new methods, or modifying old ones, in order to carry out observations, or to analyse what is observed. It’s not perfect, and probably can’t be. However, over time, we can develop an ever-increasing confidence in our understanding of something, even if not all steps in the process are perfect. We don’t trust Einstein’s Theory of General Relativity because of what Eddington did in 1919; we trust it because it has continued to pass tests, the most recent of which was the first detection of gravitational waves. That doesn’t mean, however, that the work done in 1919 didn’t make a significant contribution to the overall process.

Update: I originally credited the highlighting of the article to Willard, but it was actually Steven Mosher. Now corrected.
