Judith Curry confuses laypeople about climate models

Judith Curry has written a report for the Global Warming Policy Foundation called Climate Models for the Layman. As you can imagine, the key conclusion is that climate models are not fit for the purpose of justifying political policies to fundamentally alter world social, economic and energy systems. I thought I would comment on the key points.

  • GCMs have not been subject to the rigorous verification and validation that is
    the norm for engineering and regulatory science.

Well, yes, this is probably true. However, it’s primarily because we only have one planet and haven’t yet invented a time machine. We can’t run additional planetary-scale experiments and we can’t go back in time to collect more data from the past.

  • There are valid concerns about a fundamental lack of predictability in the complex
    nonlinear climate system.

This appears to relate to the fact that the system is non-linear and, hence, chaotic. Well, that it is chaotic does not mean that it can vary wildly; it’s still largely constrained by energy balance. It will tend towards a state in which the energy coming in matches the energy going out. This is set by the amount of energy from the Sun, the amount reflected, and the composition of the atmosphere. It doesn’t have to exactly match this state, but given the heat capacity of the various parts of the system, it is largely constrained to remain fairly close to it. Also, for the kind of changes we might expect in the coming decades, the response is expected to be roughly linear. This doesn’t mean that something unexpected can’t happen, simply that it is unlikely. And the possibility that some non-linearity might trigger an unexpected, and substantial, change doesn’t somehow reduce the risks.
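To illustrate the energy-balance constraint, here’s a minimal zero-dimensional sketch (all parameter values are illustrative choices of mine, not anything from the report or from a real GCM). However you perturb the temperature, it relaxes back towards the state where the energy coming in matches the energy going out:

```python
# Minimal zero-dimensional energy balance sketch (illustrative values):
# C dT/dt = absorbed solar - emitted thermal radiation.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
S0 = 1361.0       # solar constant (W m^-2)
ALBEDO = 0.3      # planetary albedo (fraction of sunlight reflected)
EPS = 0.61        # effective emissivity (crude stand-in for the greenhouse effect)
C = 4.2e8         # effective heat capacity (J m^-2 K^-1), roughly 100 m of ocean

def step(T, dt):
    """Advance temperature by one Euler step of length dt (seconds)."""
    absorbed = S0 * (1.0 - ALBEDO) / 4.0
    emitted = EPS * SIGMA * T**4
    return T + dt * (absorbed - emitted) / C

T = 250.0                   # start well away from equilibrium (K)
dt = 86400.0                # one-day timestep
for _ in range(100 * 365):  # integrate for ~100 years
    T = step(T, dt)

print(f"Temperature after ~100 years: {T:.1f} K")  # ~288 K with these values
```

Start it at 250 K or at 320 K; it ends up in essentially the same place, because the energy balance pulls it back.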

  • There are numerous arguments supporting the conclusion that climate models
    are not fit for the purpose of identifying with high confidence the proportion
    of the 20th century warming that was human-caused as opposed to natural.

This seems like a strawman argument. There isn’t really a claim that climate models can identify with high confidence the proportion of the 20th century warming that was human-caused as opposed to natural. However, they can be used to estimate attribution, and the conclusion is that it is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together (e.g., here). Additionally, the best estimate of the human-induced contribution to warming is similar to the observed warming over this period (e.g., here). One reason for this is that it is very difficult to construct a physically plausible, and consistent, scenario under which more than 50% of the warming is not anthropogenic.

  • There is growing evidence that climate models predict too much warming from
    increased atmospheric carbon dioxide.

This is mainly based on results from energy balance models. I think these are very interesting calculations, but they don’t rule out – with high confidence – equilibrium climate sensitivity values above 3K, and there are reasons to be somewhat cautious about these energy balance results. There are also indications that we can reconcile these estimates with estimates from climate models.
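For context, the core energy-budget calculation is very simple; all of the difficulty (and the reason for caution) lies in the inputs and their uncertainties. Here’s a sketch with illustrative numbers, roughly of the magnitude used in published energy-budget studies:

```python
# Sketch of an energy-budget (energy balance) estimate of equilibrium
# climate sensitivity. All input values are illustrative, not taken
# from any specific study.
F_2X = 3.7       # radiative forcing from doubling CO2 (W m^-2)
DELTA_T = 0.9    # observed warming between base and final periods (K)
DELTA_F = 2.3    # change in total radiative forcing (W m^-2)
DELTA_Q = 0.7    # change in system heat uptake (W m^-2)

ecs = F_2X * DELTA_T / (DELTA_F - DELTA_Q)
print(f"Energy-budget ECS estimate: {ecs:.2f} K")  # ~2.1 K with these inputs
```

Small changes in the forcing or heat-uptake estimates move the answer around a lot, which is one reason these results don’t rule out higher ECS values with high confidence.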

  • The climate model simulation results for the 21st century reported by the Intergovernmental Panel on Climate Change (IPCC) do not include key elements of climate variability, and hence are not useful as projections for how the 21st century climate will actually evolve.

This seems to be complaining that these models can’t predict things like volcanic activity and solar variability. Well, unless we somehow significantly reduce our emissions, the volcanic forcing will probably be small compared to anthropogenic forcings. Also, even if we went into another Grand Solar Minimum, the reduction in solar forcing will probably only compensate for increasing anthropogenic forcings for a decade or so, and this change will not persist. Again, unless we reduce our emissions, these factors will almost certainly be small compared to anthropogenic influences, so this doesn’t seem like a particularly significant issue.

The real problem with this report is not that it’s fundamentally flawed; it’s that it’s simplistic, misrepresents what most scientists who work with these models actually think, and ignores caveats about alternative analyses while amplifying possible problems with climate models. Climate models are not perfect; they can’t model all aspects of the system at all scales, and clearly such a non-linear system could respond to perturbations in unexpected ways. However, this doesn’t mean that they don’t provide relevant information. They’re scientific tools that are mainly used to try and understand how the system will evolve. No one claims that reality will definitely lie within the range presented by the model results; it’s simply regarded as unlikely that it will fall outside that range. No one claims that the models couldn’t be improved; it’s just difficult to do so with current resources, both the people needed to develop and update the codes and the required computing power. They’re also not the only source of information, so no one is suggesting that they should dominate our decision making.

Something to consider is what our understanding would be if we did not have these climate models. Broadly, our understanding would be largely unchanged. We’d be aware that the world would warm as atmospheric CO2 increased, and we’d still have estimates for climate sensitivity that would not be very different to what we have now. We’d be aware that sea levels would rise, and we’d be able to make reasonable estimates for how much. We’d be aware that the hydrological cycle would intensify, and would be able to make estimates for changes in precipitation. It would, probably, mainly be some of the details that would be less clear. If anything, without climate models the argument for mitigation (reducing emissions) would probably be stronger because we’d be somewhat less sure of the consequences of increasing our emissions.

I think it would actually be very good if laypeople had a better understanding of climate models; their strengths, their weaknesses, and the role they play in policy-making. This report, however, does little to help public understanding; well, unless the goal is to confuse public understanding of climate models so as to undermine our ability to make informed decisions. If this is the goal, this report might be quite effective.


Not even giving physicists a bad name!

credit : xkcd

When I’m trying to have a bit of a dig at physicists (of which I’m one) who think they somehow know better than climate scientists, I’ll post the cartoon on the right. When I came across this interview with William Happer, who may become Trump’s next science advisor and who is also covered in this Guardian article, I immediately thought of it. However, I don’t think it’s quite right. The cartoon is – I think – meant to illustrate that physicists can sometimes be rather arrogant. They can think that physics is very difficult, that everything else is quite simple by comparison, and that they could step into other fields and easily solve what’s been puzzling others for ages. Happer doesn’t come across as a physicist who is just a bit arrogant; he comes across as someone who has completely forgotten how to do science altogether. A great deal of what he says is simply untrue, and demonstrably so.

For example, he says

In 1988, you could look at the predictions of warming that we would have today and we’re way below anything [NASA scientist Jim] Hansen predicted at that time.

You can look at Hansen’s 1988 paper. The prediction was that we’d warm by something between 0.4 and 1°C between the late 1980s and now. We’ve warmed by about 0.5°C. You can even plot the temperature datasets over Hansen’s predictions (H/T Nick Stokes) and it’s clear that we’re not way below anything predicted. This Hargreaves & Annan (2014) paper actually says his forecast showed significant skill. Furthermore, he considered a number of different possible future emission pathways; we’ve followed – as far as I’m aware – one closer to the middle of what he considered, so it’s not surprising that his high emission scenario forecasts more warming than has been observed. Also, his model has an ECS that is towards the high end of the range that is considered likely. Overall, his forecast is quite remarkable.

Happer continues with,

the equilibrium sensitivity, is probably around 1 degree centigrade, it’s not 3 1/2 or whatever the agreed-on number was. It may even be less. And the Earth has done the experiment with more CO2 many, many times in the past. In fact, most of the time it’s been much more CO2 than now. You know, the geological record’s completely clear on that.

Well, this is utter nonsense. The geological record is consistent with an ECS of around 3°C and is largely inconsistent with an ECS below 1°C. You can find the various climate sensitivity estimates here. We’ve already warmed by about 1°C, are only about 60% of the way towards doubling atmospheric CO2 (in terms of the change in forcing) and are still not in equilibrium. It’s utterly ridiculous to suggest that the ECS might be below 1°C. How anyone can suggest this is bizarre, let alone someone who is meant to be a highly regarded physicist.
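A back-of-the-envelope check, using the round numbers in the paragraph above, shows why:

```python
# Back-of-envelope check of the "ECS below 1 K" claim, using the round
# numbers from the text above (illustrative, not a formal calculation).
observed_warming = 1.0       # K of warming so far
fraction_of_doubling = 0.6   # fraction of a CO2 doubling's forcing realised

# Even if we were already in equilibrium, the implied sensitivity would be:
ecs_if_equilibrated = observed_warming / fraction_of_doubling
print(f"Implied ECS if already in equilibrium: {ecs_if_equilibrated:.1f} K")  # ~1.7 K

# Since the system is still out of equilibrium (the oceans are still
# taking up heat), more warming is in the pipeline, so the actual ECS
# must be larger still - comfortably above 1 K.
```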

Possibly the most bizarre thing he says (which is quite something, given all the other things he’s said) is:

I see the CO2 as good, you know. Let me be clear. I don’t think it’s a problem at all, I think it’s a good thing. It’s just incredible when people keep talking about carbon pollution when you and I are sitting here breathing out, you know, 40,000 parts per million of CO2 with every exhalation.

What’s exhaling got to do with it? The reason CO2 is accumulating in the atmosphere is that we’re digging up carbon that has been sequestered for a very long time and burning it in a very short time, releasing CO2 into the atmosphere. Us exhaling CO2 would be entirely carbon neutral if we weren’t digging up and burning fossil fuels. Also, how can he know that it is good? This is almost entirely about risk. The more fossil fuels we burn, and the faster we do so, the more we will change the climate and the faster it will change. Can we adapt to these changes, both in terms of the magnitude and the speed? The answer may not be definitive, but there is pretty convincing evidence to suggest that continuing to emit increasing amounts of CO2 into the atmosphere may produce changes that will be very difficult to deal with. This doesn’t definitively mean that we shouldn’t do so, but suggesting that it is not going to be a problem at all is nonsensical. There are scenarios under which parts of the planet essentially become uninhabitable.

What Happer says is so beyond anything reasonable, that I don’t think it’s fair to regard him as giving physicists a bad name. Even if physicists can sometimes be a bit arrogant, I don’t think they’re so arrogant as to say things without bothering to check that what they’re saying is actually true. It’s unbelievable that he’s being seriously considered as a science advisor. Oh, hold on, it’s for the Trump administration; I take that back, he’ll probably fit in perfectly.

Update:

Skeptical Science has a nice post that discusses Hansen’s 1988 predictions. He also made predictions in 1981, which are also pretty spot on.

Nick Stokes has a more recent post comparing Hansen’s prediction with observations and also has a post that discusses his scenarios.


Guest post: On Baselines and Buoys

One of the key criticisms of Karl et al. (2015) is that it used a dataset that adjusted buoy data up to ship data – the suggestion being that, in doing so, they produced more apparent warming than if the ships were adjusted down to the buoys. In a guest post below, Zeke Hausfather shows how it makes no difference if you adjust the buoys up to the ships, or the ships down to the buoys.


Much of the confusion when comparing the different versions of NOAA’s ocean temperature dataset comes down to how the transition from ships to buoys in the dataset is handled. The root of the problem is that buoys and ships measure temperatures a bit differently. Ships take their temperature measurements in engine room intake valves, where water is pulled through the hull to cool the engine, while buoys take their temperature measurements from instruments sitting directly in the water. Unsurprisingly, ship engine rooms are warm; water measured in ship engine rooms tends to be around 0.1 degrees C warmer than water measured directly in the ocean. The figure below shows an illustrative example of what measurements from ships and buoys might look like over time:

zeke_post_1

Buoys only started being deployed in the early-to-mid 1990s. Back then about 95 percent of our ocean measurements came from ships. Today buoys are widespread and provide over 85 percent of our total ocean measurements, so it’s useful to be able to combine ships and buoys together into a single record. One option would be to ignore the temperature difference between ships and buoys and simply average them together into a single record. This is what the old NOAA dataset (version 3) effectively did, and we can see the (illustrative) results in the figure below:

zeke_post_2

Now, this approach of simply averaging together ships and buoys is problematic. Because there is an offset between the two, the resulting combined record shows much less warming than either the ships or the buoys would on their own. Recognizing that this introduced a bias into their results, NOAA updated their record in version 4 to adjust buoys up to the ship record, resulting in a combined record much more similar to a buoy-only or ship-only record:

zeke_post_3

Here we see that the combined record is nearly identical to both records, as the offset between ships and buoys has been removed. However, this new approach came under some criticism from folks who considered the buoy data more accurate than the ship data. Why, they asked, would NOAA adjust high quality buoys up to match lower-quality ship data, rather than the other way around? While climate scientists pointed out that this didn’t really matter, that you would end up with the same results if you adjusted buoys up to ships or ships down to buoys, critics persisted in making a big deal out of this. As a response, NOAA changed to adjusting ships down to match buoys in the upcoming version 5 of their dataset. When you adjust ships down to buoys in our illustrative example, you end up with something that looks like this:

zeke_post_4

The lines are identical, except that the y-axis is 0.1 C lower when ships are adjusted down to buoys. Because climate scientists work with temperature anomalies (e.g. change relative to some baseline period like 1961-1990), this has no effect on the resulting data. Indeed, the trend in the data (e.g. the amount of warming the world has experienced) is unchanged.
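If you’d rather see this numerically than graphically, here’s a minimal sketch (with made-up numbers) showing that adjusting buoys up to ships, or ships down to buoys, yields exactly the same anomalies:

```python
# Minimal numeric version of the figures above (made-up numbers): the
# anomalies are identical whether buoys are adjusted up or ships down.
import numpy as np

years = np.arange(2000, 2017)
true_sst = 0.02 * (years - 2000)   # illustrative 0.02 C/yr warming
ships = true_sst + 0.1             # ships read ~0.1 C too warm
buoys = true_sst.copy()            # buoys measure the water directly

# v4-style approach: adjust buoys up to ships. v5-style: ships down to buoys.
record_up = (ships + (buoys + 0.1)) / 2.0
record_down = ((ships - 0.1) + buoys) / 2.0

# Anomalies relative to a common baseline (here the 2000-2010 mean):
anom_up = record_up - record_up[:11].mean()
anom_down = record_down - record_down[:11].mean()

print(np.allclose(anom_up, anom_down))  # True: identical anomalies
print(f"Offset in raw records: {(record_up - record_down).mean():.2f} C")  # 0.10 C
```

The raw records differ by a constant 0.1 C, but once each is expressed relative to its own baseline that constant disappears, along with any effect on the trend.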

What the folks at the Global Warming Policy Forum have been trying to do is to compare “Up to Ships” and “Down to Buoy” records without accounting for the fact that they are on separate baselines (e.g. they are not both showing anomalies with respect to a common climate period). The graph they show, using our illustrative example, looks something like this:

zeke_post_5

However, when we put both on the same climatological baseline, we see there is in fact no difference between the two lines:

zeke_post_6

Similarly, here is what the actual graph comparing ERSSTv4 (which adjusts buoys up to ships) and an early draft version of ERSSTv5 (which adjusts ships down to buoys) looks like. When we put them on the same baseline, however, we see that the new version 5 is nearly identical to the old version 4:

zeke_post_7

Here the old NOAA record is shown in blue, while the new NOAA record is shown in red. It’s clear that the difference between the two is quite small, and in no way changes our understanding of recent warming.

As Peter Thorne, one of the authors of the upcoming version 5 of NOAA’s ocean dataset told Carbon Brief:

 “It’s worth noting that the ERSSTv4 and ERSSTv5 series are virtually indistinguishable in recent years and that the comparison does not include the data from 2016. The recent changes that were made for ERSSTv4 are largely untouched in the new version in terms of global average temperature anomalies. Therefore, as currently submitted, ERSSTv5 would not change the bottom-line findings of Karl et al (2015)… The change in long-term global average time series in the proposed new version is barely perceptible when the series are lined up together with the same baseline period, and much smaller than the uncertainties we already know about in the existing dataset.”

He continues:

 If ever there was a storm in a teacup, this was it. There is no major revision proposed here and anyone who tells you otherwise fundamentally misunderstands the submitted paper draft (which at this juncture should be the sole provenance of the editor and reviewers per the journal’s policy).

We should let peer review complete its course. Then, and only then, we can discuss this new analysis in more depth.

In the Daily Mail last week David Rose quoted John Bates as saying that “They had good data from buoys. And they threw it out and ‘corrected’ it by using the bad data from ships.” This statement is patently false. Not only did NOAA not “throw out” any buoy data, they actually gave buoys about 7 times more weight than less reliable ship data in their new record. As we discussed in our recent Science Advances paper, relying on the higher quality buoy data removed some bias in recent years due to the changing composition of the global shipping fleet.

At the end of the day, what matters is not whether ships were adjusted down to buoys or buoys up to ships; what matters is that the offset between ships and buoys was effectively removed. This is now done by all groups producing sea surface temperature records, including NOAA, the U.K.’s Hadley Centre, and the Japan Meteorological Agency.

 Author: Zeke Hausfather is a climate/energy scientist who works with Berkeley Earth and is currently finishing a PhD at the University of California, Berkeley.


David Rose: From the bizarre to the ridiculous

After last week’s article, in which he produced an extremely misleading figure, David Rose has doubled down with a new article asking how can we trust global warming scientists if they keep twisting the truth? He claims that

A landmark scientific paper –the one that caused a sensation by claiming there has been NO slowdown in global warming since 2000 – was critically flawed. And thanks to the bravery of a whistleblower, we now know that for a fact.

This is despite the very same whistleblower now saying

The issue here is not an issue of tampering with data, but rather really of timing of a release of a paper that had not properly disclosed everything it was. …..

…Bates said the NOAA study relied on land data that were “experimental.” Typically, NOAA officials can publish research that relies partially on experimental data, as long as the data are properly identified

So there’s no tampering with data; at best, it’s simply that the paper did not disclose that the land data were experimental. This post by Peter Thorne might also suggest that even this may not be strictly true; all the datasets had been presented in publications that had already appeared. I’m not even quite sure what is meant by “experimental”; it’s a research paper, so what other sort of data should they have used? In this article Bates is further quoted as saying there was “no data tampering, no data changing, nothing malicious.”

In his article, David Rose then goes on to make the following claim

It turns out that when NOAA compiled what is known as the ‘version 4’ dataset, it took reliable readings from buoys but then ‘adjusted’ them upwards – using readings from seawater intakes on ships that act as weather stations.

They did this even though readings from the ships have long been known to be too hot.

No one, to be clear, has ‘tampered’ with the figures. But according to Bates, the way those figures were chosen exaggerated global warming.

Well, this would suggest either that David Rose still does not understand anomalies, or that he simply cannot be trusted. The fundamental point is that it has become clear that there is a difference between the readings from ships and the readings from buoys. This discrepancy needs to be reconciled, but it doesn’t matter whether you adjust the ships to the buoys, or the buoys to the ships; ultimately anomalies will be computed. The data that is used will be relative to a baseline, so it doesn’t matter if you move one up, or the other down. Let me stress: it makes no difference whether you adjust the buoys to the ships, or the ships to the buoys; the resulting anomalies will be exactly the same! Choosing to adjust the buoys up to the ships does not exaggerate global warming; it produces exactly the same result as adjusting the ships down to the buoys.

In fact, as Phil points out in this comment, this was recognised in a paper published in 2008:

Because ships tend to be biased warm relative to buoys and because of the increase in the number of buoys and the decrease in the number of ships, the merged in situ data without bias adjustment can have a cool bias relative to data with no ship–buoy bias. As buoys become more important to the in situ record, that bias can increase. Since the 1980s the SST in most areas has been warming. The increasing negative bias due to the increase in buoys tends to reduce this recent warming. This change in observations makes the in situ temperatures up to about 0.1°C cooler than they would be without bias. At present, methods for removing the ship–buoy bias are being developed and tested.

The requirement to make an adjustment because of a ship-buoy bias has, therefore, been known for almost 10 years. My understanding is that Karl et al. didn’t actually make this adjustment themselves; they simply included this new dataset in their analysis to compute global surface temperatures.

At the end of the day, not only does it appear that the “whistleblower” is walking back his claims, and is now suggesting that the problem was simply procedural, rather than a problem with the actual results in the paper, but the results in Karl et al. have already been confirmed in Hausfather et al. (2017). Furthermore, David Rose continues to make claims in his new article that are highly misleading, while accusing global warming scientists of twisting the truth. It seems to me that there are some who think that it’s worth reaching out to David Rose because he’s a decent chap. Well, given that he continues to publish misleading articles, this would seem to suggest that reaching out is unlikely to achieve much – at least in the sense of getting David Rose to not publish nonsense. As they say, a leopard can’t change its spots.


Messing about with model-obs comparisons

I was just messing about with some model-observation comparisons and thought I would post some of the results. I don’t claim that I’ve done these properly, so use with care. I will, however, explain what I did, so that it should be clear (and so that people can highlight any errors in what I’ve done). I went to KNMI Climate Explorer and downloaded the monthly tas data (near surface air temperature) for the CMIP5 RCP4.5 runs (selecting one member per model, which produces 42 model outputs). Once I selected this, I produced two different outputs; one baselined to 1951-1980 (to compare with GISTemp) and one baselined to 1961-1990 and masked to 70°S-80°N (to compare with HadCRUT4).

I then went to the Met Office and downloaded the monthly HadCRUT4 data. I then went to GISTemp and downloaded the monthly mean global surface temperatures (which requires selecting this at the bottom of the page and then saving the output as plain text). I then simply plotted the surface temperature data over the model output and also plotted a multi-model mean. The resulting figures are below.

A few additional comments. I don’t know if I’ve done this correctly (these kinds of comparisons are invariably a bit more complicated than they may at first seem) but I have tried to compare like with like. Although I have tried to take coverage bias into account when comparing the models with HadCRUT4, I haven’t used blended model output – I’m only using the near surface temperature from the models, while the temperature datasets are a combination of near surface temperatures and sea surface temperatures. I also haven’t tried to produce any kind of uncertainty interval for the models; I’ve simply plotted the monthly model outputs for all 42 models. Therefore, as I said above, if you do use these, use them with care.
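If anyone wants to reproduce something like this, here’s a rough sketch of the final plotting step. The file names and column layouts are my assumptions – adapt them to whatever the KNMI Climate Explorer, Met Office, and GISTemp downloads actually give you:

```python
# Rough sketch of the model-obs comparison plot. File names and column
# layouts are assumptions; adjust to match the actual downloaded data.
import matplotlib.pyplot as plt
import pandas as pd

# Assumed layout: a datetime index plus one column per CMIP5 model run.
models = pd.read_csv("cmip5_rcp45_tas_monthly.csv", index_col=0, parse_dates=True)
# Assumed layout: a datetime index plus a single temperature column.
obs = pd.read_csv("hadcrut4_monthly.csv", index_col=0, parse_dates=True).squeeze()

# Put everything on the same 1961-1990 baseline that HadCRUT4 uses.
base = slice("1961-01-01", "1990-12-31")
models_anom = models - models.loc[base].mean()
obs_anom = obs - obs.loc[base].mean()

plt.plot(models_anom.index, models_anom.values, color="0.8", lw=0.5)  # 42 runs
plt.plot(models_anom.mean(axis=1), color="k", label="Multi-model mean")
plt.plot(obs_anom, color="r", label="HadCRUT4")
plt.ylabel("Temperature anomaly (K, relative to 1961-1990)")
plt.legend()
plt.show()
```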

cmip5_hadcrut

cmip5_gistemp


Doing science

This whole furore about Karl et al. has got me thinking more about how we actually conduct science/research. There is, I think, a perception that doing science/research involves following a fairly well defined set of procedures about which there should be little disagreement. The reality is much more complex, with quite large differences between disciplines and sometimes even within disciplines. It often seems that much of the criticism of climate science comes from those who have some kind of relevant experience, and who then think that everything should happen as they think it should, without considering that what works in their area may not work in another.

For example, I was at a meeting a few weeks ago at which one of the speakers pointed out that cosmology/astronomy was one of the few research areas that is primarily observational; climate science being one of the others. You can’t really do experiments. There is no control (we’re studying a single universe, or a single planet). We can’t go back and redo observations if they aren’t as good as we would like. Observations are often beset by problems that were not anticipated and that you can do little about. Understanding and interpreting observations requires models of various levels of complexity. All these factors mean that the details of how research is undertaken in these areas might be quite different to how it would take place in another. This doesn’t mean that the underlying scientific philosophy is different, just that the details of how it is undertaken might differ from what would happen in other areas.

In some cases, it is possible to develop a well-defined observational and analysis strategy, but in many cases it is not. Either you’re trying to use some data to do something that was not anticipated when the data was collected, or something unanticipated happens when the data is being collected that then requires some kind of correction. You might argue that in such circumstances there should be a process that is checked and authorised, but who should do this? Also, scientists ultimately want to do research, publish their findings, and let it be scrutinised by the scientific community. Following a well-defined procedure to the letter doesn’t somehow validate research results, and not doing so doesn’t somehow invalidate them. Our understanding increases when more and more studies (ideally by different groups of researchers) return results that are largely consistent; it isn’t really based on a sense that the research obeyed some well-defined procedure.

Something else to bear in mind is that research is carried out by humans, not by robots. Not only are they typically trying to solve problems that are perceived as being of interest, they would also like others to be interested in what they have done. They try to write their papers in a way that highlights what might be of interest. There’s nothing wrong with this at all; we’re not funding research so that people can do things that are boring, and there’s no point in doing something interesting if people don’t notice.

However, there are certainly cases where researchers are regarded as having hyped their work too much (and some where they may not have hyped it enough). There are – in my view – even valid criticisms of the manner in which Karl et al. framed their results. However, precisely defining the correct framing is probably not possible, and that some might object does not necessarily mean that it was wrong. I’m, of course, not suggesting that nothing that is done deserves criticism, or that there aren’t cases where criticism is obviously deserved. However, there are many cases where it’s not clear, and where the critic may simply not have sufficient understanding to make the claims that they’re making.

At the end of the day, research is never easy and rarely works as expected; if it did, the answer would probably be obvious. It can, of course, be perfectly reasonable to criticise how research is done, and how it’s presented. However, this would ideally be in the interests of improving our overall understanding, not undermining it.


Expose: David Rose does not understand baselines

Never failing to disappoint, David Rose is back with a new expose on how world leaders were duped into investing billions over manipulated global warming data. It refers to a paper published by Karl et al. in 2015 in which they suggested that there were [p]ossible artifacts of data biases in the recent global surface warming hiatus. David Rose has, however, found a whistleblower who has come forward to highlight how the data in this paper was manipulated. This whistleblower is a recently retired NOAA employee who apparently has an impeccable reputation. Of course, David Rose appears not to have consulted any others who might also have impeccable reputations. Also, to suggest that this one paper was the primary influence on recent decisions about global warming is utterly bizarre, especially as this paper didn’t really change our basic understanding at all.

Credit : Zeke Hausfather

I probably don’t have to say very much (I may, as usual, fail) since Zeke Hausfather has already written a Carbon Brief post and Victor Venema has also already covered it. The basic suggestion in David Rose’s article is that the authors of the Karl et al. paper didn’t follow the correct procedure for verifying the data that was used and didn’t archive it properly. He also claims that the lead author insisted on choices that maximised warming and minimised documentation. Well, as far as I can tell, all of the necessary data is here. Also, the David Rose article appears to largely ignore the recent Hausfather et al. paper which indicates that the corrections made in Karl et al. (2015) are consistent with buoys and satellites (illustrated in the figure on the right).

David Rose’s article also includes the figure below, which purports to show that the NOAA data was adjusted to show higher temperatures. Well, this is immediately odd in that the issue is really the trend (i.e., how fast it is changing), not the actual temperature values. Also, the difference is almost entirely because NOAA presents their temperature anomalies relative to a 1901-2000 baseline, while HadCRUT4 presents theirs relative to a 1961-1990 baseline. If you shift them to have the same baseline, the discrepancy goes away. The immediate conclusion one might draw is that the figure below is intentionally misleading, but I wouldn’t rule out the possibility that David Rose simply does not understand the concept of a temperature anomaly, despite having written many articles about them.

Credit : David Rose – Mail on Sunday

There’s probably not much more to say. The Karl et al. corrections appear to have been confirmed by (or, are consistent with) Hausfather et al. (2017), and the temperature anomaly figure in David Rose’s article is highly misleading. Yes, he’s found someone to complain about how the authors of Karl et al. conducted themselves, but a scientist going to the media to do so is also a highly unusual way to conduct oneself. At the end of the day, we’d really like to better understand how global temperatures are changing and – as it stands – it appears that Karl et al. made a positive contribution to our understanding. That’s ultimately the goal of research.
