Advocacy and scientific credibility

To the surprise of few, I suspect, it appears that scientists can advocate without damaging their, or the scientific community’s, credibility. It’s reported in this paper, [d]oes Engagement in Advocacy Hurt the Credibility of Scientists? and is discussed in this article.

The bottom line appears to be that there are forms of advocacy that do not negatively impact credibility, but that advocating for something specific may do so:

Our results suggest that scientists who wish to engage in certain forms of advocacy may be able to do so without directly harming their credibility, or the credibility of the scientific community. … Therefore, at a minimum, it is a mistake to assume that all normative statements made by scientists are detrimental to their credibility.

That said, negative effects may occur, depending on the specific policy endorsed.

I think this is somewhat similar to what I’ve always thought: how a scientist’s advocacy is received depends on whether it is something strongly supported by the scientific evidence, or something that is clearly strongly influenced by their own views/opinions. Pointing out that addressing climate change will require reducing emissions might be a form of advocacy, but it is strongly supported by the evidence and isn’t very specific (it doesn’t say how to do so, and doesn’t even rule out continuing to use fossil fuels). Advocating for something very specific, however, could influence a scientist’s credibility.

Maybe the most insightful comment was from Simon Donner (H/T Doug McNeall), quoted in this article:

“public audiences are arguably more comfortable with advocacy by scientists than scientists are with advocacy by scientists,”

Yup, certainly my impression, although I would add that another group who are uncomfortable are those who don’t like the implications of what the evidence suggests.

Anyway, I think this all seems reasonably obvious to me (okay, that doesn’t mean that it’s right); I think most people would expect scientists/researchers to speak out if their research indicates that there are risks associated with various activities. The researchers just have to be a little careful about how they do so – it’s better to present information that is strongly supported by the evidence, rather than advocating for specifics that might depend on personal opinions more than on the actual evidence. Having said that, I don’t think scientists/researchers should avoid the latter; they should simply be very clear that they are expressing a personal opinion, rather than a view that is strongly supported by the scientific evidence.

Other posts:

Gavin Schmidt on Advocacy.

Science and Silence.

Science and Policy.


Catastrophe, hoax, or just “Lukewarm”?

I thought I would post this video of a talk given by Tim Palmer. The basic message is that climate change is not a hoax, but neither is it certain to be catastrophic, nor certain to be benign (“Lukewarm”). What climate science can (and does) provide is a quantifiable risk of various outcomes, from relatively benign to catastrophic. In my view, a key point is that this depends both on how sensitive our climate is to external perturbations, and on our future emission pathways.

What I found particularly interesting in the video (which is where it’s set to start) is a demonstration that, even though the system is chaotic, we can still estimate the response to an external forcing. If a system is chaotic, then it is very sensitive to initial conditions. This makes it difficult to precisely determine a future state. If, however, we know the range of possible states, then we can also determine how some external influence might bias the system so that it preferentially ends up in certain states, rather than others. Therefore, the fact that our climate is chaotic does not mean that we can’t estimate how it will respond to external perturbations (in this case, typically referred to as external forcings).
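To illustrate the idea, here is a minimal sketch in Python (not the specific demonstration in the talk; the forcing value and integration settings are arbitrary choices of mine). It adds a constant “forcing” term to the Lorenz ’63 equations: individual trajectories remain unpredictable, but the statistics of which regime the system occupies shift in a systematic way.

```python
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0, f=0.0):
    # One forward-Euler step of the Lorenz '63 system, with an optional
    # constant "forcing" f added to the x and y equations.
    x, y, z = state
    dx = sigma * (y - x) + f
    dy = x * (rho - z) - y + f
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

def fraction_in_positive_lobe(f, n_steps=2_000_000, dt=0.001, burn_in=100_000):
    # Each individual trajectory is sensitive to initial conditions, but the
    # fraction of time spent in the x > 0 "regime" is a robust statistic.
    state = (1.0, 1.0, 1.0)
    count = 0
    for i in range(n_steps):
        state = lorenz_step(state, dt, f=f)
        if i >= burn_in and state[0] > 0:
            count += 1
    return count / (n_steps - burn_in)

print("unforced:", fraction_in_positive_lobe(0.0))   # close to 0.5
print("forced:  ", fraction_in_positive_lobe(2.5))   # occupancy biased towards one regime
```

The point is simply that you can’t say where the trajectory will be at some future time, but you can say something about how the forcing changes the probability of ending up in one regime rather than the other.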

At around 30 minutes there is also a discussion of energy balance models and why we shouldn’t necessarily trust their lower climate sensitivity estimates. The basic argument is that we haven’t yet doubled atmospheric CO2 and so these estimates will likely miss some of the non-linearity in the response. I would argue that this is on top of them not even ruling out – with high confidence – higher climate sensitivity values. Anyway, I’ll stop there. The video is below.


Oh no, not again

Somehow a paper arguing that the increase in atmospheric CO2 is mostly natural has managed to pass peer-review. Gavin Schmidt’s already covered it in a Realclimate post. Gavin Cawley’s paper is, in a sense, a pre-emptive response to this new paper. I’ll make a few comments similar to what Gavin has already said in the Realclimate post and then make a somewhat broader point.

The summary of the paper says

  • The average residence time of CO2 in the atmosphere is found to be 4 years.

The confusion here is between residence time and adjustment timescale. Given the various fluxes of CO2, an individual CO2 molecule will only stay in the atmosphere for a few years before being taken up by one of the natural sinks. However, this doesn’t mean that an enhancement of atmospheric CO2 will decay in only a few years, because there is both a flux of CO2 out of the atmosphere and into the atmosphere – the molecule leaving the atmosphere is replaced. The residence time might only be a few years, but the adjustment timescale is ~100 years, or longer. I discuss this in more detail in these two posts.
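A toy model makes the distinction concrete. The sketch below (the reservoir sizes and exchange flux are round, illustrative numbers of roughly the right order, not a real carbon-cycle model) has an atmosphere rapidly exchanging carbon with a surface reservoir. The residence time of an individual molecule is only a few years, but an added pulse does not disappear on that timescale; it just gets redistributed, and a substantial atmospheric enhancement remains until much slower processes remove it.

```python
# Illustrative two-reservoir toy model (not a real carbon-cycle model):
# an atmosphere of ~600 GtC exchanging ~150 GtC/yr each way with a
# surface reservoir of ~1000 GtC.
A0, B0 = 600.0, 1000.0                 # initial reservoir sizes (GtC)
F_gross = 150.0                        # gross exchange flux each way (GtC/yr)
kA, kB = F_gross / A0, F_gross / B0    # first-order exchange coefficients (1/yr)

print("residence time of a CO2 molecule:", A0 / F_gross, "years")   # 4 years

# Add a 100 GtC pulse to the atmosphere and integrate for 200 years.
A, B = A0 + 100.0, B0
dt = 0.1
for _ in range(int(200 / dt)):
    flux_out = kA * A                  # atmosphere -> surface reservoir
    flux_in = kB * B                   # surface reservoir -> atmosphere
    A += (flux_in - flux_out) * dt
    B += (flux_out - flux_in) * dt

# The pulse has been shuffled between the reservoirs on the fast exchange
# timescale, but the atmospheric enhancement has not gone away; in this
# closed toy system it never does. Removing it needs the slow sinks.
print("atmospheric enhancement after 200 yr:", round(A - A0, 1), "GtC")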

The paper then says two related things:

  • The anthropogenic fraction of CO2 in the atmosphere is only 4.3%.
  • Human emissions only contribute 15% to the CO2 increase over the Industrial Era.

The key point is that atmospheric CO2 has increased by about 40% since pre-industrial times and it is all anthropogenic. The short residence time of an atmospheric CO2 molecule, however, means that not all of the enhancement will be made up of molecules that had an anthropogenic origin. This, however, does not mean that the enhancement is somehow not anthropogenic; without our emissions there would be no enhancement in the first place.

I thought, however, that I would comment on something that appears to be often misunderstood. The paper says it is estimated that the removal of the additional emissions from the atmosphere will take a few hundred thousand years and implies that this is wrong (through determining a very short residence time). I discuss some of this in these two posts, but I’ll elaborate a bit more here.

There are quite a large number of timescales associated with drawing down atmospheric CO2, but – in a simple sense – when we emit CO2 into the atmosphere, it mixes between the various reservoirs (atmosphere, ocean, biosphere) until – on a timescale of centuries – it reaches a new equilibrium, which is then drawn down over a timescale of thousands of years via weathering (ultimately taking more than 100 thousand years to fully recover). Some seem to think that it should settle back to the initial concentration, but it can’t because we’ve essentially added new CO2 to the system. Eli has a nice animation in this post.

That it will take more than one hundred thousand years for atmospheric CO2 to return to pre-industrial values is partly based on past changes (such as the Paleocene-Eocene Thermal Maximum – PETM) and partly on the carbonate chemistry of seawater. If you work through the carbonate chemistry calculation, you can show that there is something that is now called the Revelle factor (which I discuss here). This is the ratio of the fractional change in atmospheric CO2 to the fractional change in dissolved inorganic carbon in the ocean, and it is about 10.

This tells us that if we add CO2 to the system, once it’s distributed through the ocean/atmosphere system, the fractional change in atmospheric CO2 will be 10 times greater than the fractional change in dissolved inorganic carbon in the oceans. I discuss some of this in this post. Also, if you consider the amount of carbon in the ocean and atmosphere, you can show that between 15% and 30% of our emissions will remain in the atmosphere once ocean invasion is complete. At this stage, atmospheric CO2 is further drawn down through weathering, which is very slow and, hence, it will take more than 100 thousand years to ultimately return to pre-industrial levels.
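As a rough check of that 15–30% figure, here is a back-of-the-envelope sketch (the reservoir sizes are approximate round numbers, and the Revelle factor is treated as a constant 10, which is itself a simplification):

```python
# Minimal back-of-envelope estimate (illustrative numbers, not a carbon-cycle
# model): how much of an emitted pulse stays in the atmosphere once it has
# equilibrated with the ocean, given the Revelle factor.
C_atm = 600.0     # pre-industrial atmospheric carbon (GtC, approx.)
DIC   = 38000.0   # ocean dissolved inorganic carbon (GtC, approx.)
R     = 10.0      # Revelle factor: (dC_atm/C_atm) / (dDIC/DIC)

# At equilibrium: dC_atm / C_atm = R * dDIC / DIC, with dC_atm + dDIC = E.
# Solving for the airborne fraction f = dC_atm / E gives:
f = 1.0 / (1.0 + DIC / (R * C_atm))
print(f"fraction remaining airborne after ocean invasion: {f:.0%}")   # ~14%
```

Since the Revelle factor itself increases as more CO2 is added, larger cumulative emissions leave a larger fraction behind, which is why estimates typically span roughly 15% to 30% rather than a single number.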

The point I’m getting at is that the long timescale over which atmospheric CO2 will slowly return to pre-industrial levels is a consequence of the carbonate chemistry of seawater and weathering; you can’t assess this by simply considering the short-timescale fluxes into, and out of, the various reservoirs. So, not only does this new paper confuse residence time and adjustment timescale (amongst various other confusions), it also infers things about the long timescale over which atmospheric CO2 will recover using an analysis that is completely inappropriate. If you want to read a paper that does this analysis properly, you should read The atmospheric lifetime of fossil fuel carbon dioxide, by Archer et al. (2009).

Of course, some might argue that this post wasn’t really necessary, as any paper suggesting that the rise in atmospheric CO2 is not anthropogenic is obviously nonsense, but sometimes it’s worth delving into this in more detail, although maybe this is more for my own benefit than for the benefit of others. It is my blog, though 🙂


Intellectual monocultures

I came across an article that I’ve been thinking about for a few days. I thought I would simply post some thoughts. They may not be well-formed, and my views could certainly change. I should say that I got it from a tweet by Tom Levenson, who posted a bit of a Tweet storm about it. He also had a Tweet storm about Andy Revkin’s interview with William Happer, which is also worth reading.

Anyway, I’m already off-track. The article that I’ve been pondering is called the threat from within. It’s about a speech by John Etchemendy, former Provost of Stanford, in which he discusses threats to universities. The bit that struck me was the following:

But I’m actually more worried about the threat from within. Over the years, I have watched a growing intolerance at universities in this country – not intolerance along racial or ethnic or gender lines – there, we have made laudable progress. Rather, a kind of intellectual intolerance, a political one-sidedness, that is the antithesis of what universities should stand for. It manifests itself in many ways: in the intellectual monocultures that have taken over certain disciplines; in the demands to disinvite speakers and outlaw groups whose views we find offensive; in constant calls for the university itself to take political stands. We decry certain news outlets as echo chambers, while we fail to notice the echo chamber we’ve built around ourselves.

There are some things I agree with. I think universities are places where our views should be challenged. They should also be places where we encourage people to have interests beyond their own narrow domains. We should want to think about the world around us and be exposed to a wide range of different views. However, I think the above is far too simplistic a view and conflates many different, and largely unrelated, issues.

A key aspect of a university is, obviously, research. A goal of research is to understand the system being studied, ideally in a way that minimises the impact of biases, or personal opinions/views. Typically research involves collecting information about the system being studied, analysing that information, developing models of the system, and rejecting those that don’t fit the information collected. In the physical sciences, there is often an expectation that we can use this information to constrain our understanding and, in many cases, can constrain our understanding quite tightly. In other words, there is an expectation that we might eventually develop an understanding about which there is overwhelming agreement. This is not a bad thing and, in some sense, is the goal.

Maybe in other areas, this is not necessarily the case. There may well be systems for which it is not possible to develop a well-defined understanding about which everyone would agree. However, it still seems that the understanding of such systems should be constrained by the information available. If there isn’t a single well-defined understanding, does that mean that the information simply can’t constrain our understanding, or does it mean that the information actually indicates that there are indeed multiple valid understandings? Something that has always bothered me about heterogeneous disciplines is that it’s not clear whether this is because those involved are being strongly influenced by their ideologies, or because the data is actually consistent with these various interpretations.

Of course there will always be people who challenge our current understanding. This is a good thing. However, a well-developed understanding can often be built up over a long period of time, and can involve an enormous amount of information. Challenging such an understanding is therefore very likely to be difficult and, in many cases, is more likely to be wrong than right. Therefore, even though we should accept that some will challenge consensus views, there’s no real reason to embrace it, or give it any special place. Those challenging accepted views need to do the work of convincing others; it’s meant to be, and should be, difficult. If it were easy, it would probably indicate that our original understanding was not very robust.

Those are my thoughts for the moment. As I said at the beginning, this is mainly something I’ve just been pondering. I do think that universities should be places where our views can be challenged, and so preventing people from speaking is something that should typically be avoided (with some exceptions). However, the criticism of intellectual monocultures within some disciplines, in my view, ignores that the goal of research is to develop, and constrain, our understanding. A high level of agreement more likely indicates that a consistent picture has developed, rather than indicating some kind of fundamental problem with that discipline. Academics love arguing with each other, so even if there is strong agreement about the basics, they’ll almost certainly still be fighting about the details.


Judith Curry confuses laypeople about climate models

Judith Curry has written a report for the Global Warming Policy Foundation called Climate Models for the layman. As you can imagine, the key conclusion is that climate models are not fit for the purpose of justifying political policies to fundamentally alter world social, economic and energy systems. I thought I would comment on the key points.

  • GCMs have not been subject to the rigorous verification and validation that is
    the norm for engineering and regulatory science.

Well, yes, this is probably true. However, it’s primarily because we only have one planet and haven’t yet invented a time machine. We can’t run additional planetary-scale experiments and we can’t go back in time to collect more data from the past.

  • There are valid concerns about a fundamental lack of predictability in the complex
    nonlinear climate system.

This appears to relate to the fact that the system is non-linear and, hence, chaotic. Well, the fact that it is chaotic does not mean that it can vary wildly; it’s still largely constrained by energy balance. It will tend towards a state in which the energy coming in matches the energy going out. This is set by the amount of energy from the Sun, the amount reflected, and the composition of the atmosphere. It doesn’t have to exactly match this state, but given the heat capacity of the various parts of the system, it is largely constrained to remain fairly close to this state. Also, for the kind of changes we might expect in the coming decades, the response is expected to be roughly linear. This doesn’t mean that something unexpected can’t happen, simply that it is unlikely. Also, that some non-linearity might trigger some kind of unexpected, and substantial, change doesn’t somehow reduce the risks.
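A zero-dimensional energy-balance sketch illustrates the constraint (the effective emissivity and heat capacity below are illustrative values I’ve chosen so that the toy model settles near present-day temperatures; they’re not tuned output from any climate model):

```python
# Minimal zero-dimensional energy-balance sketch: wherever the system starts,
# it relaxes towards the state where absorbed solar radiation matches
# outgoing radiation, set by the solar constant, albedo and emissivity.
S      = 1361.0        # solar constant (W m^-2)
albedo = 0.3           # planetary albedo
sigma  = 5.67e-8       # Stefan-Boltzmann constant (W m^-2 K^-4)
eps    = 0.61          # effective emissivity (crudely mimics the greenhouse effect)
C      = 1.0e8         # heat capacity per unit area (J m^-2 K^-1, ~ ocean mixed layer)

T = 255.0              # start well away from balance (K)
dt = 86400.0           # one-day time step
for _ in range(365 * 200):                 # integrate for 200 years
    absorbed = S * (1 - albedo) / 4
    emitted  = eps * sigma * T**4
    T += (absorbed - emitted) * dt / C

print(f"equilibrium temperature: {T:.1f} K")   # ~288 K with these parameters
```

Chaos lives in how the energy is shuffled around within the system; the overall energy budget still pins the global state close to this balance.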

  • There are numerous arguments supporting the conclusion that climate models
    are not fit for the purpose of identifying with high confidence the proportion
    of the 20th century warming that was human-caused as opposed to natural.

This seems like a strawman argument. There isn’t really a claim that climate models can identify with high confidence the proportion of the 20th century warming that was human-caused as opposed to natural. However, they can be used to estimate attribution, and the conclusion is that it is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together (e.g., here). Additionally, the best estimate of the human-induced contribution to warming is similar to the observed warming over this period (e.g., here). One reason for this is that it is very difficult to construct a physically plausible, and consistent, scenario under which more than 50% of the warming is not anthropogenic.

  • There is growing evidence that climate models predict too much warming from
    increased atmospheric carbon dioxide.

This is mainly based on results from energy balance models. I think these are very interesting calculations, but they don’t rule out – with high confidence – equilibrium climate sensitivity values above 3K, and there are reasons to be somewhat cautious about these energy balance results. There are also indications that we can reconcile these estimates with estimates from climate models.
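For what it’s worth, the energy-balance calculation itself is simple. The sketch below uses illustrative numbers of the sort that appear in published estimates (e.g. Otto et al. 2013), not the values from any specific paper, and shows both a central estimate and how sensitive the result is to the uncertain inputs:

```python
# Energy-balance estimate of equilibrium climate sensitivity (ECS):
# ECS ~ F_2x * dT / (dF - dN), with illustrative input values.
F_2x = 3.7    # forcing from doubling CO2 (W m^-2)
dT   = 0.9    # observed warming over the historical period (K)
dF   = 2.3    # change in radiative forcing (W m^-2)
dN   = 0.6    # change in planetary heat uptake (W m^-2)

ECS = F_2x * dT / (dF - dN)
print(f"energy-balance ECS estimate: {ECS:.1f} K")          # ~2 K

# The answer is sensitive to the uncertain inputs: modestly different forcing
# and heat-uptake values push the estimate towards, or above, 3 K, which is
# one reason these calculations don't rule out higher sensitivities.
ECS_alt = F_2x * dT / (2.0 - 0.8)
print(f"with slightly different inputs:  {ECS_alt:.1f} K")   # ~2.8 K
```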

  • The climate model simulation results for the 21st century reported by the Intergovernmental Panel on Climate Change (IPCC) do not include key elements of climate variability, and hence are not useful as projections for how the 21st century climate will actually evolve.

This seems to be complaining that these models can’t predict things like volcanic activity and solar variability. Well, unless we somehow significantly reduce our emissions, the volcanic forcing will probably be small compared to anthropogenic forcings. Also, even if we went into another Grand Solar Minimum, the reduction in solar forcing will probably only compensate for increasing anthropogenic forcings for a decade or so, and this change will not persist. Again, unless we reduce our emissions, these factors will almost certainly be small compared to anthropogenic influences, so this doesn’t seem like a particularly significant issue.
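A quick order-of-magnitude comparison makes the point (the numbers below are rough, hedged values broadly in line with published estimates such as Feulner & Rahmstorf 2010, not precise figures):

```python
# Why a Grand Solar Minimum would only offset anthropogenic forcing
# for roughly a decade under continued emissions (illustrative values).
solar_minimum_forcing = -0.3    # W m^-2, a fairly generous estimate of the reduction
anthro_forcing_growth = 0.03    # W m^-2 per year under continued high emissions

years_to_offset = abs(solar_minimum_forcing) / anthro_forcing_growth
print(f"offset lasts roughly {years_to_offset:.0f} years")   # ~10 years
```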

The real problem with this report is not that it’s fundamentally flawed; it’s that it’s simplistic, misrepresents what most scientists who work with these models actually think, and ignores caveats about alternative analyses while amplifying possible problems with climate models. Climate models are not perfect; they can’t model all aspects of the system at all scales, and clearly such a non-linear system could respond to perturbations in unexpected ways. However, this doesn’t mean that they don’t provide relevant information. They’re scientific tools that are mainly used to try and understand how the system will evolve. No one claims that reality will definitely lie within the range presented by the model results; it’s simply regarded as unlikely that it will fall outside that range. No one claims that the models couldn’t be improved; it’s just difficult to do so with current resources – both the people needed to develop/update the codes and the available computing power. They’re also not the only source of information, so no one is suggesting that they should dominate our decision making.

Something to consider is what our understanding would be if we did not have these climate models. Broadly, our understanding would be largely unchanged. We’d be aware that the world would warm as atmospheric CO2 increased, and we’d still have estimates for climate sensitivity that would not be very different to what we have now. We’d be aware that sea levels would rise, and we’d be able to make reasonable estimates for how much. We’d be aware that the hydrological cycle would intensify, and would be able to make estimates for changes in precipitation. It would, probably, mainly be some of the details that would be less clear. If anything, without climate models the argument for mitigation (reducing emissions) would probably be stronger because we’d be somewhat less sure of the consequences of increasing our emissions.

I think it would actually be very good if laypeople had a better understanding of climate models; their strengths, their weaknesses, and the role they play in policy-making. This report, however, does little to help public understanding; well, unless the goal is to confuse public understanding of climate models so as to undermine our ability to make informed decisions. If this is the goal, this report might be quite effective.


Not even giving physicists a bad name!

credit: xkcd

When I’m trying to have a bit of a dig at physicists (of which I’m one) who think they somehow know better than climate scientists, I’ll post the cartoon on the right. When I came across this interview with William Happer, who may become Trump’s next science advisor and who is also covered in this Guardian article, I immediately thought of it. However, I don’t think it’s quite right. The cartoon is – I think – meant to illustrate that physicists can sometimes be rather arrogant. They can think that physics is very difficult, that everything else is quite simple by comparison, and that they could step into other fields and easily solve what’s been puzzling others for ages. Happer doesn’t come across as a physicist who is just a bit arrogant; he comes across as someone who has completely forgotten how to do science altogether. A great deal of what he says is simply untrue, and demonstrably so.

For example, he says

In 1988, you could look at the predictions of warming that we would have today and we’re way below anything [NASA scientist Jim] Hansen predicted at that time.

You can look at Hansen’s 1988 paper. The prediction was that we’d warm by something between 0.4 and 1°C between the late 1980s and now. We’ve warmed by about 0.5°C. You can even plot the temperature datasets over Hansen’s predictions (H/T Nick Stokes) and it’s clear that we’re not way below anything predicted. This Hargreaves & Annan (2014) paper actually says his forecast showed significant skill. Furthermore, he considered a number of different possible future emission pathways; we’ve followed – as far as I’m aware – one closer to the middle of what he considered, so it’s not surprising that his high emission scenario forecasts more warming than has been observed. Also, his model has an ECS that is towards the high end of the range that is considered likely. Overall, his forecast is quite remarkable.

Happer continues with,

the equilibrium sensitivity, is probably around 1 degree centigrade, it’s not 3 1/2 or whatever the agreed-on number was. It may even be less. And the Earth has done the experiment with more CO2 many, many times in the past. In fact, most of the time it’s been much more CO2 than now. You know, the geological record’s completely clear on that.

Well, this is utter nonsense. The geological record is consistent with an ECS of around 3°C and is largely inconsistent with an ECS below 1°C. You can find the various climate sensitivity estimates here. We’ve already warmed by about 1°C, are only about 60% of the way towards doubling atmospheric CO2 (in terms of the change in forcing), and are still not in equilibrium. It’s utterly ridiculous to suggest that the ECS might be below 1°C. It’s bizarre that anyone could suggest this, let alone someone who is meant to be a highly regarded physicist.
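You can check the numbers with the standard simplified CO2 forcing expression (Myhre et al. 1998); the concentrations below are round, illustrative values, and the final scaling is deliberately crude:

```python
import math

F_2x = 3.7                                  # forcing from doubling CO2 (W m^-2)
F_now = 5.35 * math.log(405.0 / 280.0)      # CO2 forcing relative to pre-industrial
print(f"CO2 forcing so far: {F_now:.1f} W m^-2, "
      f"{F_now / F_2x:.0%} of a doubling")
# Roughly half from CO2 alone; closer to 60% once other anthropogenic
# forcings are included.

# A crude scaling (ignoring other forcings and the fact that the system has
# not yet equilibrated): ~1 K of warming at roughly half to 60% of a
# doubling's forcing already points to a sensitivity well above 1 K.
warming_so_far = 1.0
print(f"crude implied sensitivity: ~{warming_so_far / (F_now / F_2x):.1f} K or more")
```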

Possibly the most bizarre thing he says (which is quite something, given all the other things he’s said) is:

I see the CO2 as good, you know. Let me be clear. I don’t think it’s a problem at all, I think it’s a good thing. It’s just incredible when people keep talking about carbon pollution when you and I are sitting here breathing out, you know, 40,000 parts per million of CO2 with every exhalation.

What’s exhaling got to do with it? The reason CO2 is accumulating in the atmosphere is that we’re digging up carbon that has been sequestered for a very long time and burning it in a very short time, releasing CO2 into the atmosphere. Us exhaling CO2 would be entirely carbon neutral if we weren’t digging up and burning fossil fuels. Also, how can he know that it is good? This is almost entirely about risk. The more fossil fuels we burn, and the faster we do so, the more we will change the climate and the faster it will change. Can we adapt to these changes, both in terms of the magnitude and the speed? The answer may not be definitive, but there is pretty convincing evidence to suggest that continuing to emit increasing amounts of CO2 into the atmosphere may produce changes that will be very difficult to deal with. This doesn’t definitively mean that we shouldn’t do so, but suggesting that it is not going to be a problem at all is nonsensical. There are scenarios under which parts of the planet essentially become uninhabitable.

What Happer says is so beyond anything reasonable, that I don’t think it’s fair to regard him as giving physicists a bad name. Even if physicists can sometimes be a bit arrogant, I don’t think they’re so arrogant as to say things without bothering to check that what they’re saying is actually true. It’s unbelievable that he’s being seriously considered as a science advisor. Oh, hold on, it’s for the Trump administration; I take that back, he’ll probably fit in perfectly.

Update:

Skeptical Science has a nice post that discusses Hansen’s 1988 predictions. He also made predictions in 1981 that are also pretty spot on.

Nick Stokes has a more recent post comparing Hansen’s prediction with observations and also has a post that discusses his scenarios.


Guest post: On Baselines and Buoys

One of the key criticisms of Karl et al. (2015) is that it used a dataset that adjusted buoy data up to ship data – the suggestion being that, in doing so, they produced more apparent warming than if the ships were adjusted down to the buoys.  In a guest post below, Zeke Hausfather shows how it makes no difference if you adjust the buoys up to the ships, or the ships down to the buoys.

Guest post: On Baselines and Buoys

Much of the confusion when comparing the different versions of NOAA’s ocean temperature dataset comes down to how the transition from ships to buoys in the dataset is handled. The root of the problem is that buoys and ships measure temperatures a bit differently. Ships take their temperature measurements in engine room intake valves, where water is pulled through the hull to cool the engine, while buoys take their temperature measurements from instruments sitting directly in the water. Unsurprisingly, ship engine rooms are warm; water measured in ship engine rooms tends to be around 0.1 degrees C warmer than water measured directly in the ocean. The figure below shows an illustrative example of what measurements from ships and buoys might look like over time:

[Figure zeke_post_1: illustrative ship and buoy temperature records, with ships reading ~0.1 C warmer]

Buoys only started being deployed in the early-to-mid 1990s. Back then about 95 percent of our ocean measurements came from ships. Today buoys are widespread and provide over 85 percent of our total ocean measurements, so it’s useful to be able to combine ships and buoys together into a single record. One option would be to ignore the temperature difference between ships and buoys and simply average them together into a single record. This is what the old NOAA dataset (version 3) effectively did, and we can see the (illustrative) results in the figure below:

[Figure zeke_post_2: illustrative result of simply averaging ships and buoys together]

Now, this approach of simply averaging together ships and buoys is problematic. Because there is an offset between the two, the resulting combined record shows much less warming than either the ships or the buoys would on their own. Recognizing that this introduced a bias into their results, NOAA updated their record in version 4 to adjust buoys up to the ship record, resulting in a combined record much more similar to a buoy-only or ship-only record:

[Figure zeke_post_3: illustrative result of adjusting buoys up to the ship record]

Here we see that the combined record is nearly identical to both records, as the offset between ships and buoys has been removed. However, this new approach came under some criticism from folks who considered the buoy data more accurate than the ship data. Why, they asked, would NOAA adjust high quality buoys up to match lower-quality ship data, rather than the other way around? While climate scientists pointed out that this didn’t really matter, that you would end up with the same results if you adjusted buoys up to ships or ships down to buoys, critics persisted in making a big deal out of this. As a response, NOAA changed to adjusting ships down to match buoys in the upcoming version 5 of their dataset. When you adjust ships down to buoys in our illustrative example, you end up with something that looks like this:

[Figure zeke_post_4: illustrative result of adjusting ships down to the buoy record]

The lines are identical, except that the y-axis is 0.1 C lower when ships are adjusted down to buoys. Because climate scientists work with temperature anomalies (e.g. change relative to some baseline period like 1961-1990), this has no effect on the resulting data. Indeed, the trend in the data (e.g. the amount of warming the world has experienced) is unchanged.
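A tiny synthetic example (made-up numbers in the spirit of the illustrative figures above, not the actual NOAA procedure) shows why: once the records are expressed as anomalies relative to a common baseline, adjusting buoys up or ships down gives exactly the same answer.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1995, 2017)
true_sst = 0.01 * (years - 1995) + rng.normal(0, 0.05, years.size)  # warming + noise

ships = true_sst + 0.1     # ships read ~0.1 C warm (engine-room intakes)
buoys = true_sst.copy()    # buoys measure the water directly

# Two ways of building a combined record (ships for the first 10 years,
# buoys thereafter), removing the offset in opposite directions:
buoys_up   = np.concatenate([ships[:10], buoys[10:] + 0.1])   # adjust buoys up
ships_down = np.concatenate([ships[:10] - 0.1, buoys[10:]])   # adjust ships down

# Anomalies relative to a common baseline (here the first 10 years):
anom_up   = buoys_up - buoys_up[:10].mean()
anom_down = ships_down - ships_down[:10].mean()

print(np.allclose(anom_up, anom_down))              # True: identical anomalies
print(np.polyfit(years, anom_up, 1)[0],
      np.polyfit(years, anom_down, 1)[0])           # identical warming trends
```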

What the folks at the Global Warming Policy Forum have been trying to do is to compare “Up to Ships” and “Down to Buoy” records without accounting for the fact that they are on separate baselines (e.g. they are not both showing anomalies with respect to a common climate period). The graph they show, using our illustrative example, looks something like this:

[Figure zeke_post_5: the “buoys up to ships” and “ships down to buoys” records plotted on different baselines]

However, when we put both on the same climatological baseline, we see there is in fact no difference between the two lines:

[Figure zeke_post_6: the two illustrative records on a common climatological baseline]

Similarly, here is what the actual graph comparing ERSSTv4 (which adjusts buoys up to ships) and an early draft version of ERSSTv5 (which adjusts ships down to buoys) looks like. When we put them on the same baseline, however, we see that the new version 5 is nearly identical to the old version 4:

[Figure zeke_post_7: ERSSTv4 and draft ERSSTv5 compared on a common baseline]

Here the old NOAA record is shown in blue, while the new NOAA record is shown in red. It’s clear that the difference between the two is quite small, and in no way changes our understanding of recent warming.

As Peter Thorne, one of the authors of the upcoming version 5 of NOAA’s ocean dataset told Carbon Brief:

 “It’s worth noting that the ERSSTv4 and ERSSTv5 series are virtually indistinguishable in recent years and that the comparison does not include the data from 2016. The recent changes that were made for ERSSTv4 are largely untouched in the new version in terms of global average temperature anomalies. Therefore, as currently submitted, ERSSTv5 would not change the bottom-line findings of Karl et al (2015)… The change in long-term global average time series in the proposed new version is barely perceptible when the series are lined up together with the same baseline period, and much smaller than the uncertainties we already know about in the existing dataset.”

He continues:

 If ever there was a storm in a teacup, this was it. There is no major revision proposed here and anyone who tells you otherwise fundamentally misunderstands the submitted paper draft (which at this juncture should be the sole provenance of the editor and reviewers per the journal’s policy).

We should let peer review complete its course. Then, and only then, we can discuss this new analysis in more depth.

In the Daily Mail last week David Rose quoted John Bates as saying that “They had good data from buoys. And they threw it out and “corrected” it by using the bad data from ships.” This statement is patently false. Not only did NOAA not “throw out” any buoy data, they actually gave buoys about 7 times more weight than less reliable ship data in their new record. As we discussed in our recent Science Advances paper, relying on the higher quality buoy data removed some bias in recent years due to the changing composition of the global shipping fleet.

At the end of the day what matters is not that ships were adjusted down to buoys or buoys up to ships, what matters is that the offset between ships and buoys was effectively removed. This is now done by all groups producing sea surface temperature records, including NOAA, the U.K.’s Hadley Centre, and the Japan Meteorological Agency.

 Author: Zeke Hausfather is a climate/energy scientist who works with Berkeley Earth and is currently finishing a PhD at the University of California, Berkeley.
