#astroSH Haslitudes

Once upon a time, there was a valiant Prince who lived in a Castle, far, far away. His penthouse dominated the highest tower, and his facilities covered the newly revised hierarchy of needs:

[Figure: the newly revised hierarchy of needs]

In the comfort of his tower, our prince was therefore free to pursue quests to demonstrate his allegiance to the compact code of his knighthood:

  1. Thou shalt believe all that teh Freedom teaches and thou shalt observe all its directions.
  2. Thou shalt defend teh Freedom.
  3. Thou shalt exploit all weaknesses, and shalt freely constitute thyself the attacker of them.

Freedom has been under severe assault lately, whether it’s in serious business like gaming or trivial pursuits like climate change. Philosophy was the worst. Until now.

And then there was astrophysics.

The Prince monitored nefarious activities in the country of #astroSH. An ominous agent running under the name of Michael Brown was brooding a witch hunt. The barbarism was too inept: witch hunts should be reserved for indecorous women.

But what is #astroSH? According to Brown (pers. comm.):

The starting point is discussion by astronomers, journalists and science organisations of the well-documented harassment cases. Some astronomers have also discussed their harassment where the perpetrator is not identified […] There is legitimate discussion of whether current policy is effectively implemented, how it can be improved and if punishments under current policy are reasonable.

Our Freedom Fighter swiftly called to arms to vanquish the Social Justice Warriors (SJW).

He flew with his virtual blue dragon to #astroSH and threw himself into haslitudes worth fighting for, come what may – better to die for freedom than to endure tyranny. His valor shone so brilliantly that at one point he declared himself the winner.

After more haslitudes, something suddenly changed – our merciless freedom fighter became enchanted by Love and Light. The windmills of his mind stopped overclocking, and he courageously flew away to the castle he never left. He lived happily ever after the next morning, until another quest would break his solitary peace.

The end.

For now.

Brace yourselves, more haslitudes are coming.


Transparency

My previous post on research integrity was motivated by Stephan Lewandowsky and Dorothy Bishop’s article on transparency in science. This appears to have ended up being a rather more controversial topic than I was expecting, so I thought I would add one more post about this. This isn’t really to try and make it less controversial, mind you, it’s just a few thoughts I’ve had since writing the last one. If anything, it’ll probably make it worse ;-)

When I refer to science (or research, in general) I really mean something similar to what Eli was referring to as normal science. I’m thinking of the process by which we gain understanding of some system, be it a physical system – like the universe or our planet’s climate – or something more societal. It can be a rather messy process and, as Michael Tobis points out here, there really shouldn’t be some expectation to have access to all the mistakes, background discussions, and dead ends that took place before doing what was ultimately published. It’s not only that this is not really relevant, but scientists must be free to do stupid things out of the public eye.

Transparency should only really apply to what is actually published. However, here’s where I think there is also a subtlety. A key part of the scientific method is that we only start to trust some scientific result when it’s been tested and checked by others; we don’t simply trust it because it looks reasonable, we can’t find any obvious errors, and because those who did it appear trustworthy. In this context, transparency should be something that aids the scientific method, not – IMO – something that we should see as a way of making results more trustworthy. There’s nothing fundamentally wrong with delving into the details of what others have done, but there’s no real substitute for actually doing something independent to see if the original result stands up to further scrutiny. This involves collecting more data, doing more analyses, running improved and updated models, and so on.

Our overall understanding of a topic is therefore very unlikely to be based on a single study, but on a collection of research that has tended towards a consistent picture. There isn’t even some definitive rule as to when we should regard our understanding as robust, and when not; it’s generally a slow process of acceptance by the community. Transparency is clearly an important part of this whole process, but it’s not some kind of panacea. We should be careful of assuming that we can trust a result simply because the authors have been completely transparent, or dismissing something just because the authors have not released all that others think they should.

I should stress, however, that I’m really talking here about normal science; the process of discovery. If, however, a single piece of research is likely to heavily influence some political – or societal – decision, then the position may be very different. We may then want to really delve into the details of that study to ensure that there are no obvious errors, or reasons why we should do more before making any decision. I’m also not suggesting that normal science shouldn’t be transparent; I’m just suggesting that we need to recognise that the overall scientific method is important and that transparency is simply an important part of the standard scientific process. It shouldn’t be some kind of blunt instrument for bashing some and lauding others.


Research Integrity

Stephan Lewandowsky and Dorothy Bishop (whose blog I used to read quite a lot, but haven’t for a while) have published a comment in Nature about Research Integrity, arguing that we shouldn’t let transparency damage science. It’s a complex issue, but I think they make some interesting points, and a number of the usual suspects kindly turned up in the comments to illustrate some of what they were trying to suggest. The same usual suspects were most put out when some of their comments were later deleted.

The key issue, in my view, is that everything necessary for the results of a study to be evaluated and reproduced should be made available. However, that is not the same as making every single thing associated with a particular study available to anyone who asks; it should simply be possible for someone else to check and reproduce what’s been done before.

I can’t speak for other fields, but in my own field most data is either publicly available, or will soon be publicly available. Most methods and techniques are well understood and there are often resources available so that you don’t necessarily have to write your own analysis codes. Most computational models are also publicly available, or something equivalent is publicly available. So, if someone wants to check a published result they simply have to get their hands dirty and do some work. That doesn’t mean that the authors of the original work shouldn’t answer questions, or clarify things; it simply means that they shouldn’t be expected to hand over everything they’ve done just because someone asks for it. If anything, if someone is incapable of redoing the work themselves, then they probably aren’t in a position to critique it in the first place.

I am, however, certainly not suggesting that researchers shouldn’t hand over more than is necessary. There’s no real reason to not be reasonable and in many cases the requests are, themselves, entirely reasonable. On the other hand, scientific understanding progresses via people actually doing research (whether it’s new, or an attempt to check another result) not people sifting through other people’s work looking for possible errors.

Of course, this is my view based on my own experiences and what is the norm in my own field. It may well be different in other fields and may be different in other circumstances. Maybe when human subjects are involved, or when the results are particularly societally/politically relevant, we should expect more. On the other hand, if research is fundamentally about gaining understanding, maybe we should simply trust the scientific method. We shouldn’t trust scientific results simply because they’re published by people who we trust and regard as being experts in their field. We also shouldn’t distrust scientific results simply because we don’t trust those who did the work, or because we don’t like the result. We start to trust a scientific result when it has been replicated and reproduced sufficiently. That requires doing actual work, not simply checking what others have done so as to try and find mistakes.


Record Warmth

Michael Mann, Stefan Rahmstorf and colleagues have a new paper on the likelihood of the recent warmth. What they’re investigating is the run of warm years we’ve seen recently – 13 of the warmest 15 years have happened since 2000, and 9 of the 10 warmest years have happened since 2000. They want to determine how likely this is from internal variability alone, and how likely it is if they then include anthropogenic and natural forcings.

Essentially, they generate a large number of time series and then test the likelihood of observing these runs of warmest years. For time series that are intended to represent internal variability only (estimated using the residuals after the CMIP5-estimated forced response is subtracted from the observed temperatures) it is 1-in-10000 for the 13 in 15 warmest years, and 1-in-770 for the 9 in 10 warmest years. When anthropogenic and natural forcings are included, it becomes 72% and 83%. They also considered a scenario in which internal variability was assumed to have much more persistence than is considered likely, which then increases the likelihood due to internal variability only, to 1-in-100 and 1-in-80. However, as the paper says

even using a too-conservative null hypothesis of persistent red noise, the recent observed record warmth is still unlikely to have occurred from natural variability alone.

and they conclude that

the recent record temperature years are roughly 600 to 130,000 times more likely to have occurred under conditions of anthropogenic [forcing] than in its absence.

I should probably add that they also considered individual years and found that the likelihood of these warm individual years occurring due to internal variability alone is much smaller than the likelihood of the runs of warmest years. This is because an individual year actually has to cross some warming threshold, rather than simply being part of a run of warmest years in the record.
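To get a feel for the kind of calculation involved, here’s a toy Monte Carlo version of the runs test. This is my own sketch, not the paper’s method or code; it assumes AR(1) noise with arbitrary parameters as a crude stand-in for internal variability, whereas the paper estimates the noise from the residuals after removing the CMIP5 forced response.

```python
import numpy as np

rng = np.random.default_rng(42)

def ar1_series(n, rho=0.5):
    """AR(1) noise as a crude stand-in for internal variability."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    return x

n_years = 135     # roughly 1880-2014
n_trials = 20000  # number of synthetic temperature series
hits = 0
for _ in range(n_trials):
    series = ar1_series(n_years)
    warmest_15 = np.argsort(series)[-15:]  # indices of the 15 warmest years
    # Did at least 13 of the 15 warmest years land in the final 15 years?
    if np.sum(warmest_15 >= n_years - 15) >= 13:
        hits += 1

# With noise alone this is essentially never satisfied; expect ~0 hits
# at this trial count, consistent with a very small probability.
print(f"Likelihood from noise alone: ~{hits / n_trials:.2e}")
```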

As far as I can tell, this all seems pretty obvious. Judith Curry, on the other hand, seems less than impressed, and appears to be suggesting that we should simply assume that we don’t know anything. Nic Lewis, surprise surprise, seems to think that

[i]t is a paper that would be of very little scientific value even if it were 100% correct.

You might imagine that this is because Nic also thinks that it’s a pretty obvious result. You might also be wrong. He then goes on to list various criticisms of the paper, including that the record is too short to determine internal variability, that a detailed attribution study should have been performed, that they should have considered models with lower sensitivity (as if only a few hundred to a few hundred thousand times more likely would change the overall conclusion significantly), and that there are problems with their assumptions about long memory noise.

The latter issue – which I’ll comment on briefly – is essentially whether or not internal variability could drive long-term warming or cooling. The answer to this is almost certainly “no”. You could try reading this Realclimate post. Richard Telford has written about this in the context of Doug Keenan’s claims. I’ve written about it too.

The basic issue is very simple. If you want internal variability to drive, for example, long-term warming, then the energy has to come from somewhere. It could come from the oceans, but you can’t extract energy from the oceans indefinitely and, if the temperature exceeds the equilibrium temperature, it would radiate away quite rapidly (the heat capacity of the land and atmosphere is low relative to the oceans). Alternatively, maybe some internal warming could drive a radiative response that sustains a planetary energy imbalance. The problem here is that the physical processes involved would essentially be the same as those that act as feedbacks to forced warming. So, if you want to argue for high sensitivity to internally-forced warming, you’re essentially arguing for high climate sensitivity overall, and most of our observed warming would be anthropogenic anyway – which is essentially what this paper is illustrating.
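One way to make this slightly more concrete is with a standard linearised energy-balance relation (a minimal sketch in my own notation, not something taken from the posts linked above). If N is the planetary energy imbalance, F the external forcing, λ the net feedback parameter, and dT the surface warming, then

N = F - λ dT.

With no external forcing (F = 0), sustaining a positive imbalance (N > 0) alongside internally-driven warming (dT > 0) requires λ < 0; in other words, net amplifying feedbacks. Since these are the same feedbacks that govern the response to forcings, that is just another way of arguing for a very high climate sensitivity.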

Anyway, it’s been a long day and that’s about all I can think of saying. If anyone has anything to add, feel free to do so through the comments.


Mass Balance

I once again managed to get involved in a discussion on Judith Curry’s blog about the rise in atmospheric CO2. This time was slightly better than it has been in the past, as most seemed to at least agree that the rise was anthropogenic. The dispute seemed to be about whether or not a particular line of evidence was conclusive. Let’s clarify something first, though. There are many lines of evidence indicating that the rise in atmospheric CO2 is anthropogenic; this is not really in dispute.

However, a particularly elegant way to illustrate that the rise is anthropogenic (which Dikran Marsupial used during the discussion on Climate Etc.) is to simply consider mass balance. If dC is the rise in atmospheric CO2, Ea is the anthropogenic emissions, En is the natural emissions, and Un is the natural uptake, then

dC = Ea + En - Un,

which we can rewrite as

dC - Ea = En - Un.

The rise in atmospheric CO2 is smaller than our emissions, so the left-hand side is negative. Therefore the right-hand side is also negative, nature is a net sink, and therefore cannot be the source.
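As a rough numerical illustration (ballpark values of the order reported in recent global carbon budgets, not figures from the Climate Etc. discussion itself):

```python
# Rough, illustrative annual values in GtC per year (order of magnitude only).
dC = 4.5   # observed growth of atmospheric CO2
Ea = 10.0  # anthropogenic emissions (fossil fuels plus land use)

# From the mass balance dC - Ea = En - Un:
net_natural = dC - Ea
print(f"En - Un = {net_natural} GtC/yr")  # negative, so nature is a net sink
```

The right-hand side comes out strongly negative: nature has been absorbing roughly half of what we emit, year after year.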

However, some were arguing that this simple mass balance argument does not preclude the possibility that some component of nature could be a net source. Okay, but we can go a bit deeper and consider the different components of the system. We have the oceans (which both release and absorb CO2), the biosphere (which both releases and absorbs CO2), the lithosphere (which releases CO2 via volcanic activity and absorbs it via the slow carbon cycle), and fossil fuels, which we burn to release CO2 (there is no relevant anthropogenic sink).

Well, the oceans are taking in more CO2 than they release, the biosphere is taking in more CO2 than it releases, and the lithosphere is – we think – roughly in balance, with volcanoes releasing as much CO2 as is being absorbed via the slow carbon cycle. What’s left? Us; the release of CO2 via the burning of fossil fuels. Therefore, the rise in atmospheric CO2 is anthropogenic.

However, I don’t think the above extension is really necessary. If a component of nature could be, or has been, a net source of atmospheric CO2 that would imply a couple of things. There should have been a time when atmospheric CO2 rose faster than our emissions, and – similarly – there should have been a time when atmospheric CO2 would have continued rising were we to stop all our emissions. I don’t think either of these is true. I think atmospheric CO2 has always risen more slowly than our emissions and, if we were to stop emitting, concentrations would drop, not rise. Hence, it seems that the basic mass balance argument is sufficient to show that nature cannot be a source. Even if one component of the natural system is a net source, that would simply imply that other parts are an even bigger sink, so that – overall – nature is a net sink.

There is, however, one possibility. What about, for example, the conditions today being such that nature would be a source were we never to have emitted CO2? Well, we do know that there is a relationship between temperature and atmospheric CO2. Had the temperature risen as it has in the absence of our emissions, we would expect atmospheric CO2 to rise by between 10 and 20 ppm. One might, therefore, argue that a small part of the rise in atmospheric CO2 is natural and due to the rise in temperature. However, this is a bit of a cheat, given that the rise in temperature is mostly a consequence of our emissions anyway. Also, given our emissions, the concurrent rise in temperature really acts to slightly reduce the uptake of anthropogenic CO2; nature is still a net sink.

This, however, does lead to an interesting issue. As we continue to warm, we expect the uptake by the natural sinks to decrease; the ocean uptake being constrained by Henry’s Law, and the biosphere being constrained by nutrient availability. However, we don’t expect either to ever become a net source of atmospheric CO2. It is, however, quite possible that other natural sources may start to operate, such as permafrost release. These would then be a net source of atmospheric CO2. However, they would be feedback responses to the warming that will be largely a consequence of our emissions (assuming we do continue to emit CO2), hence to suggest that this would mean that nature has somehow become a net source would seem rather disingenuous.

So, as far as I can tell, the mass balance argument pretty conclusively shows that nature cannot be a net source and, hence, that the rise is almost certainly anthropogenic. Of course, there are plenty of other lines of evidence, so we certainly don’t need to rely on the mass balance argument alone, but I think the basic mass balance argument is still sufficient to preclude nature being a net source.


Watt about David Whitehouse

I realise Sou has already covered this but, since I haven’t done a Watt about post for some time, I thought I would also comment. The post I’ll be discussing is a guest post on Watts Up With That (WUWT) by David Whitehouse, who is an Academic Advisor to the Global Warming Policy Foundation and who also happens to have a Ph.D. in astrophysics.

Before I discuss his guest post, I’ll point out that he recently suggested that

Remember when some analysts used 1998 as a start point for global temperature trend analysis they were rightly criticised for it. It now seems that some are using a strong El Nino year – 2015 – as the endpoint for their analysis!

while failing to mention his report from 2014, which starts with

What is the reason for the lack of warming observed at the surface of the Earth since about 1997?

Not only is this somewhat ironic, I don’t think that including the most recent data point in your analysis would typically be regarded as a cherry-pick; it’s not as if we can choose to use data from the future instead.

Now back to his WUWT guest post (archived here). In this post, David Whitehouse claims that, with respect to 2015 being the warmest year on record, some scientists deliberately mistook weather for climate. His argument is essentially that

the large increase over 2014 is far too great and swift to be due to a resurgence of forced global warming. It must be due to short-term natural variability, and you don’t have to look far to find it. 2015 was the year of the El Nino which boosted the year’s temperature.

Not only is this silly (does he really think that forced warming somehow stopped, rather than simply being masked by variability?), but it would probably still have been a record year even without the El Nino.

Even though 2015 is only a single year, it is still the warmest in a record going back more than 100 years; it is warmer than previous El Nino years and – what is more – in the past decade we’ve had La Nina years that are warmer than previous El Nino years. There is a clear warming trend, which indicates that we continue to warm, largely because of our emissions of greenhouse gases.

David Whitehouse then goes on to say

The IPCC says that just over half of the warming since the fifties is forced so most of the contribution to 2015’s temperature is natural variability.

Well, this is utter nonsense. What the IPCC actually says is

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.

Surely even David Whitehouse understands that more than half is not the same as just over half, especially as the IPCC goes on to say

The best estimate of the human-induced contribution to warming is similar to the observed warming over this period.

In other words, anthropogenic influences probably contributed to most of the observed warming since 1950, possibly even more than all of it.

Given that David Whitehouse has a PhD in astrophysics, it’s really hard to understand how he can’t get this basic point. It’s almost as if he deliberately mistook just over half for more than half. You might think that someone with his credentials would be willing to correct such a misrepresentation. However, given that he seems to think WUWT is a site worth posting on, I’d be very surprised if he did.


Water vapour and climate

Peter Sinclair has a video interview with Andrew Dessler about water vapour and climate. The video doesn’t really say anything all that surprising. We are now reasonably sure that relative humidity remains roughly constant as we warm. This then allows us to constrain the role of water vapour and indicates that it will have a net positive effect (it will amplify warming). Clouds are still uncertain, mainly because they can both reflect incoming radiation (cooling the climate) and trap outgoing heat (warming the climate). However, our current understanding is that clouds will probably be a net warming influence.
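The roughly-constant relative humidity point has a simple quantitative consequence: the saturation vapour pressure rises by about 6-7% per degree of warming near typical surface temperatures, so the amount of water vapour in the atmosphere rises correspondingly. Here’s a quick sketch using the Magnus approximation (my own illustration, with standard published coefficients; nothing here comes from the video itself):

```python
import math

def e_sat(t_celsius):
    """Saturation vapour pressure (hPa) via the Magnus approximation,
    using the Alduchov & Eskridge coefficients."""
    return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

t = 15.0  # a representative surface temperature in deg C
increase = e_sat(t + 1.0) / e_sat(t) - 1.0
print(f"~{100 * increase:.1f}% more water vapour per K at fixed relative humidity")
```

It’s this per-degree increase in water vapour that underpins the robustly positive water vapour feedback.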

The real reason I wanted to post the video was to make a slightly different point. Given the above, we have a number of lines of evidence indicating that the equilibrium climate sensitivity (ECS) is probably greater than 2°C. This includes paleo estimates and climate models, but also our basic understanding of the physical processes involved.

As many are aware, there are, however, some studies that suggest that ECS is probably less than 2°C. Many of these rely on statistical analyses of observations. There is, of course, nothing wrong with such methods, but simply because someone has correctly applied a statistical technique does not mean the result is correct (or correctly represents reality).

The key point I wanted to highlight is that if some really do think that ECS is probably less than 2°C, then at some point someone is going to have to illustrate what physical processes are involved. At the moment, our understanding is that the various processes involved (clouds, water vapour, lapse rate, …) suggest that the ECS is more likely above 2°C than below. No amount of complicated statistics trumps this kind of understanding.
