I notice The Global Warming Policy Foundation has a new report by Nic Lewis and Marcel Crok called Oversensitive: how the IPCC hid the good news on global warming (you can probably find it if you want to). I was going to write a post about this, but I notice that there’s a guest post by Piers Forster on Ed Hawkins’s blog that has already commented on the GWPF report. I might just add two quick comments. One is that I’m still very surprised that there hasn’t been more mention of how the addition of a small amount of extra data significantly changed the ECS estimate in Lewis (2013). Using data up until 1995, his method gives 2-3.6°C. Using data until 2001, it gives 1.0-2.2°C. I might be a little concerned about a method that seems this sensitive to small changes in the data.
The report also includes the following statement
So, to conclude, we think that of the three main approaches for estimating ECS available today (instrumental observations, palaeoclimate observations, GCM simulations), instrumental estimates – in particular those based on warming over an extended period – are superior by far.
Really? Any mention of recent work suggesting that regional variations can make these energy budget constraints unreliable? Not that I could see.
If you want to read something that may be a somewhat better representation of the evidence, you could try reading the new report from the Royal Society and the US National Academy of Sciences called Climate Change: Evidence and Causes. Stoat already has a post that discusses this, so I won’t say much more about it.
I thought I’d finish this post with a video that I saw yesterday about abrupt climate change. If I quote from the Royal Society and NAS report, it says
Results from the best available climate models do not predict abrupt changes in such systems (often referred to as tipping points) in the near future. However, as warming increases, the possibilities of major abrupt change cannot be ruled out… Such high-risk changes are considered unlikely in this century, but are by definition hard to predict. Scientists are therefore continuing to study the possibility of such tipping points beyond which we risk large and abrupt changes.
So, we do not predict abrupt changes in the near future, but we can’t really rule them out. That sounds reasonable. In the video Richard Alley says
is it possible we’ve over-estimated the dangers? Sure. Is it possibly a little better? Sure. Is it possibly a little worse? Sure. Is it possible that CO2 breaks things that we really care about and things are a lot worse than we expect? Yes it is.
The uncertainties are mainly on the bad side.
And that’s a fundamental issue – in my opinion – with the Lewis and Crok report. They’re trying to argue that things will almost certainly be better than many estimates suggest. Could they be right? I hope so, but I would argue that some fairly basic physics suggests that they won’t be. That aside, how does it help policy makers to essentially suggest that they ignore the possibility that things could be a lot worse than we expect? I don’t think it does. Optimism is a great quality. Blind optimism, on the other hand, can be rather dangerous if you’re ignoring potentially serious risks. There’s much more that probably could be said, but I’ll end there and let those who have more to say, do so through the comments.
> The uncertainties are mainly on the bad side.
I think that must be right. It’s pretty hard to imagine the climate suddenly changing into a significantly “better” state, whatever that might mean. It’s not too hard to imagine it suddenly changing to a worse one, even if it’s hard to know how probable that would be. So – in terms of abrupt change – the costs/benefits are bounded-one-side, and so the expected value is negative.
Well, it’s a plausible-sounding argument.
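To put a toy number on the bounded-one-side argument – the probabilities and payoffs below are entirely made up, purely to illustrate the asymmetry:

```python
# Hypothetical outcomes of abrupt change: the upside is bounded (things can
# only get a little "better"), while the downside has a long tail. Even with
# equal odds of a surprise in either direction, the expected value is negative.
outcomes = {
    +1.0: 0.25,   # things turn out somewhat better (bounded upside)
    0.0: 0.50,    # no abrupt change
    -10.0: 0.25,  # abrupt change to a much worse state (long downside)
}
expected_value = sum(outcome * p for outcome, p in outcomes.items())
print(expected_value)  # -2.25: negative despite symmetric odds of surprise
```

The point is not the particular numbers, just that a one-sided bound on the payoffs makes the expectation negative even when the probabilities themselves are symmetric.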
I agree. I get the sense that some interpret this as meaning “it will definitely be worse than we expect”, which is not the case. It’s simply that it could be worse than we expect and we shouldn’t be ignoring this possibility. As you say, it’s bounded-one-side which always makes me think of a drunkard walking down the pavement (sidewalk, for you Americans who drive on the pavement). He/she could keep bumping into walls, but the real risk is falling into the road.
Using data up until 1995, his method gives 2-3.6°C. Using data until 2001, it gives 1.0-2.2°C. I might be a little concerned about a method that seems this sensitive to small changes in the data.
It sure suggests that at least the error bars are much too small.
Sounds to me like Big Tobacco saying, “Yeah, we know that smoking cigarettes causes cancer, but the cancer grows very slowly so there’s no need to stop smoking immediately.”
Just reading the Forster article, there’s another trait you never, never see in “skeptic” analyses: a critical, detailed assessment of the weaknesses of their own method.
And thanks to JamesG over at Climate Lab Book for making my point for me: “So Foster is criticising his own method here? How very funny!”
Yeah, only if you understand zero about how research/science works.
I’ve asked Nic Lewis a question on the comment thread on Ed’s blog. However, given that Nic’s first comment there includes “your criticisms on this point are baseless” and “So your conclusion is wrong”, I’m not hugely confident of a particularly constructive discussion. Always happy to be proven wrong though.
I noticed NL’s claims and assertions too. I didn’t see any actual evidence backing them up though.
I don’t think that the addition of 6 more years is the only change that contributed to the lower estimate for climate sensitivity in Nic Lewis’ 2013 paper. Dropping the upper-air temperatures from the data set is an important difference between the two series of results and may have a big role in the change, possibly the largest role.
That such a choice has a large effect tells us something about the nature of the analysis. The approach is extremely dependent on the model used. In this case the model is the 2-dimensional model of MIT (2DCM). It may well be that the model is not capable of simultaneously explaining the three diagnostic data sets – surface temperatures, upper-air temperatures and the deep-ocean temperatures – or in particular the first two of these. Forcing it to agree with all of them may push the model into an unphysical region, and lead to seriously erroneous results. This seems to be what Nic Lewis thinks. Therefore he favors an analysis that drops the upper-air temperature data.
My main point is, however, that the method is all too much a black box. Data is fed in and manipulated using an opaque method that depends on a crude model of the atmosphere and on the use of Jeffreys’ prior, which has virtues in many uses but may fail totally in other cases. In this case there isn’t any obvious reason to believe that it’s a good prior. Its use is objective only in the sense that using a black-box methodology always is, when the analysis is done once rather than several times varying some assumptions. It’s objective because the user cannot control the biases – or actually know how large the biases are.
Nic Lewis has looked at the model in some detail, and does discuss some of the issues (Figs. 4 and 5 touch on that). To me he is, however, far too ready to accept the results that suit his purposes well.
I haven’t yet looked at the new paper or discussion of that on the net, but decided to make this comment as I didn’t comment on the previous thread on Lewis’ analysis.
Yes, I’ve only just discovered that. The argument that Lewis makes (I think) is that using it didn’t make much difference when doing the comparison with Forest et al. (2006). It doesn’t really change that their method produces quite a different result, depending on the assumptions.
I agree. Just because you can’t influence the result by making some kind of subjective choice, doesn’t really mean your analysis is robust. I would argue that one could eliminate the lower end of his range just by considering some basic physics.
Yes, that – unfortunately – is my impression too. He’s also a little too eager to argue that his method is better and more robust than any other methods. It’s almost as if he feels that he has to sell his method. I know that science does involve convincing others that what you’ve done is interesting, but there are limits to how far you can push this.
What Forster shows in his blog post is something that I have wanted to see. As the real data has limited coverage, it’s better to calculate from model results with the same coverage. That’s easy to do with models, while extending the data as Cowtan and Way have done is problematic. Actually I’m really surprised that it turns out that any other approaches than the one shown by Forster are used at all, when models are tested or their agreement with observations is studied.
I agree and I made a comment to that effect. I was surprised that it hadn’t been done before as it does seem like something that might be quite obvious.
One thing I can thank Nic Lewis for. Without his paper, I would know very much less about so-called objective Bayesian methods. I had a crude idea of why they are not really objective, as I knew (in agreement with common understanding) that Bayesian methods are always and unavoidably dependent on subjective choices, and that “objective” or “uninformative” is always true only in a very limited sense. But learning how things really work led to reading some interesting papers and satisfying pondering.
During the early part of my research my field of study was known as multi-particle phenomenology of elementary particle physics. In that research we looked at the data, which was complex, sparse and had various gaps. Figuring out how to compare such data with existing models was the main issue we concentrated on. Therefore I cannot easily believe that any comparisons are made without full consideration of these issues.
Actually I’m really surprised that it turns out that any other approaches than the one shown by Forster are used at all, when models are tested or their agreement with observations is studied.
Generally published approaches have tended to use some kind of spatial “fingerprinting” method which I understand should apply the method only to areas with observations. The Otto et al. paper was literally a back-of-envelope calculation.
I don’t quite get what you’re suggesting. My understanding is that in Ed’s post there’s a comparison with Otto et al. (I think) in which the GCM estimates are based only on the regions that have HadCRUT4 coverage. I wasn’t aware that this had been done before. Do you have an example of a paper that has used the “fingerprinting” method? I agree, though, that Otto et al. is just a back-of-envelope calc. I quite like it because it’s easy to do myself, but I wouldn’t see it as superior to other methods.
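For what it’s worth, the back-of-envelope nature of the Otto et al. approach can be sketched in a few lines. The input numbers below are illustrative values of roughly the right magnitude, not the paper’s exact figures:

```python
# Energy-budget estimates in the spirit of Otto et al. (2013).
# All numbers below are illustrative placeholders, not the published inputs.
F_2x = 3.44      # W/m^2, radiative forcing from doubled CO2
dT   = 0.75      # K, observed warming between the reference periods
dF   = 1.95      # W/m^2, change in total radiative forcing
dQ   = 0.65      # W/m^2, system heat uptake (mostly the ocean)

TCR = F_2x * dT / dF          # transient response: heat uptake ignored
ECS = F_2x * dT / (dF - dQ)   # equilibrium response: heat uptake subtracted

print(round(TCR, 2), round(ECS, 2))
```

The appeal is obvious – anyone can redo it with their own preferred forcing and heat-uptake numbers – but that simplicity is also why it inherits all the coverage and forcing uncertainties of its inputs.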
I have certainly read in a few papers that the areas of coverage are made to agree, but from a couple of comments it seems to be the case that the practice is not followed everywhere.
In some discussions on various temperature time series I have been proposing that reliability and minimization of “random” variability should be considered more important than maximal areal coverage. I have done that assuming that this is taken fully into account when the temperature index is used.
If that gets forgotten too often, I’m forced to reconsider my view, but I do still hope that those methods are used that result in largest improvement in knowledge rather than those that are most fool-proof.
The Otto et al. paper literally just uses the zero-dimensional HadCRUT4 time series for global average temperature. I’m just saying that other more detailed approaches tend to use maps of the observational data and look at spatial patterns, which would generally mean they account for observation gaps.
One clear example I can give is Gillett et al. 2012. See Figure 2 where they show model trend maps constrained to HadCRUT3 coverage from 1860.
The issue of choosing the prior is complex, and various arguments lead to quite different results. In most cases an attempt is made to avoid highly informative priors, i.e. priors that largely dictate the outcome, with the data making only a minor adjustment to it.
In practice it’s common to use a flat distribution, i.e. a prior that’s not peaked at all, but has a constant value over a range that may extend to infinity, if it’s known that the upper limit makes little difference. When the variable has the nature of a scale, the flat distribution applies typically to the logarithm of the variable.
Saying that the distribution is flat does not, however, fix the prior, as it’s always possible to change the variable to another by a non-linear transformation. In that case a prior flat in one of the variables is not flat in the other. A typical case is the pair of climate sensitivity S and the feedback parameter f (typically assumed to be less than 1), related to S by
S = 1/(1-f)
f = 1-1/S
A prior flat in S has a divergence in the distribution of f when f -> 1, while a prior that does not diverge when f -> 1 has a 1/S^2-type tail in the prior of S. The latter normally leads to lower estimates for climate sensitivity.
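This transformation behaviour is easy to check numerically. Here’s a quick Monte Carlo sketch, with the range of S chosen purely for illustration, showing that a prior flat in S piles up towards f = 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw S uniformly on [1, 10] -- a prior "flat in S" -- and transform
# to the feedback parameter f = 1 - 1/S (so f lies in [0, 0.9]).
S = rng.uniform(1.0, 10.0, 1_000_000)
f = 1.0 - 1.0 / S

# The implied density of f diverges towards f = 1, because the Jacobian
# dS/df = 1/(1-f)^2 = S^2 blows up there. (Conversely, a prior flat in f
# would give S a 1/S^2 tail.)
hist, edges = np.histogram(f, bins=50, range=(0.0, 0.9), density=True)
print(hist[0], hist[-1])  # density near f = 0 vs density near f = 0.9
```

The density in the last bin comes out far larger than in the first, as the Jacobian argument predicts.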
The case of Lewis is a little more complex. The flatness assumption is in his case made for the prior distribution in the space of the empirically measurable temperatures. That assumption is transformed to a prior in the space of the model parameters (climate sensitivity, ocean diffusivity, and strength of aerosol forcing) in a way determined by the particular model used in the analysis. Lewis’s Fig. 4 shows the outcome. I cannot see any fundamental reason, nor anything even remotely plausible, why this method should lead to a valid outcome. In that sense the method is not theoretically stronger than other ways of fixing the prior; it’s only less predictable.
A completely separate issue is that I like the idea of a prior with a non-diverging f as defined above, and that such priors lead to results quite similar to what Nic Lewis has found. The arguments that I have for my preference are not bullet-proof, because the implied assumption that f < 1 – or that the Earth system remains stable within the range where linearity can be applied – is not really strong.
I feel that all this talk about abrupt climate change misses the more than likely not so abrupt climate change.
We still talk as though any of the existing climate cycles exist and will continue to exist.
Is there an El Nino in the higher energy world? Does anyone know?
Richard Alley is great in that talk and I also think he makes a very good point about the uncertainties falling mostly on the bad side. However, did anyone else notice Dr. Aradhna Tripati smiling when she says (about 6:14) “there are going to be some pretty nasty surprises in place for us in the Centuries ahead”? I found this a bit disconcerting. This is exactly the point that Naomi Oreskes was making when she said that scientists don’t allow themselves to sound alarmed when communicating something alarming.
Okay, thanks, I see what you mean. I’ll have to have a better look at Gillett et al. (2012).
There are two notable features of the Lewis and Crok publication: first, that their methodology yields a figure at the low end of the range.
Second, the claim that their method is better than all others because it more closely constrains the error range.
Almost all of the methods of determining climate sensitivity include the Lewis and Crok value within their respective error ranges, because these are several degrees wide for those methods. It is this wide 95% probability range that Lewis and Crok use to justify their rejection of paleoclimate-derived estimates of ECS.
However, the Lewis and Crok error range is much smaller and excludes around half of the values determined by around half of all the other methods, as can be seen in Fig. 2 in the short-form publication.
It may reveal my scientific naivety in this field, but I’m curious about the outcome of an attempt to use the Lewis and Crok method of deriving ECS from observational data on the pseudo-observation output of a climate model.
Would the observation-derived value match the analytically modelled (emergent) value? How might any discrepancies reveal biases in the modelling or uncertainty in the observational derivation?
izen: “Second, that their method is better than all others because it more closely constrains the error range.”
Does it? I just read Myles Allen saying: “[I]t turns out Lewis and Crok’s [uncertainty] range (not in the GWPF report, but kindly provided by Nic Lewis) is 0.9 to 2.5 degrees Celsius, which is almost identical to the range of the [IPCC’s] models (1.1-2.6 degrees Celsius).”
As far as I’m aware Lewis (2013) does not produce a TCR estimate because it’s a comparison between models and observational data in which the models are characterised by an ECS, an aerosol forcing and a deep ocean diffusivity. I presume that each model should also have a TCR estimate, but I can’t find any mention of that in Lewis (2013). Otto et al. (2013) do produce TCR estimates and these are more similar to the IPCC values (1.0 – 2.0) and would probably rise somewhat if Cowtan & Way (for example) were considered.
Inspired by the very uncivil but funny http://www.theguardian.com/commentisfree/2014/mar/06/bad-words-swearing-responsibly
The method of Lewis 2013 transforms the distributions of the empirical data set into a three-dimensional PDF in the parameter space, using the model as a tool in the transformation. That approach does not directly produce an estimate of anything other than the chosen parameters. To get results for other variables like TCR, a separate study would be needed of how far they are determined by the distribution in the parameter space.
Alternatively, the same analysis could be repeated using TCR from the beginning, if that’s technically possible with the selected model.
The Otto et al analysis could be called a determination of TCR and ECS as defined for the particular temperature index (HadCRUT) rather than the global MST.
Global Warming Policy Foundation has a new report by Nic Lewis and Marcel Crok called Oversensitive :a Crok of [MOD: uncivil redacted]
I like your self-moderation here, VTG. Maybe it’ll take off? Lately I’ve been thinking my moderating has been more of a hindrance than a help so perhaps a self-service style of moderation like this might work? I encourage others to try it 🙂
Along what lines?
That said, self-moderation is the best moderation.
Speaking of moderation – looks like Ed Hawkins has (at least on the thread that was linked) a policy of allowing initial and largely ridiculous broadsides to stand, but deleting the follow-on comments. It’s an interesting approach. Not sure what the thinking is. But the hard-line moderation does keep the discussion focused. It will, however, necessarily limit the discussion. It would be hard to adopt such an approach and also post on less technical and more policy-related topics.
Along what lines?
I’m angering people on both sides. But I agree that self-moderation is the best.
I had a look at the comments on Ed Hawkins’s blog and I don’t see any moderation except for one comment moderated for being political. Have I missed something?
Pingback: IPCC stevig op de vingers getikt door Lewis en Crok - Climategate.nl
I’ll also have a post on this report on Monday. There’s a paper to be published on Sunday that really decimates the entire report. Not that it’s really needed – there’s already plenty of evidence contradicting the report, which is really an exceptionally biased view of the climate sensitivity literature. The excuses for disregarding the paleoclimate-based estimates are particularly lame, and the excuses for disregarding the GCM-based estimates are just plain wrong. There’s so much wrong with the report, my SkS post on the subject is nearly 3000 words long.
As Allen and Hawkins and others have noted, ultimately they’re just arguing that sensitivity is on the low end of the estimated range. Even if they’re right, we’ll still get a dangerous amount of warming under business-as-usual, so it will still require significant policies to mitigate that risk. But in any case, the full body of evidence is firmly against them. They just dismiss or ignore the research that doesn’t support their desired conclusion, and overlook the shortcomings of the research that does.
you need to keep it up. Otherwise it’ll end up like Curry’s. Self moderation is like self awareness. A good thing in principle but rarely achieved in practice.
And if anyone complains about it, just tell them they’re [MOD: uncivil redacted]
So long as they fear you, it doesn’t matter if they are angry 🙂
No, but seriously, what VTG said.
As for moderation at CLB, Ed Hawkins comments right at the end of the thread.
I haven’t seen anything that anyone without a persecution complex should have found objectionable about your moderation.
Ed’s moderation seems to me to be pretty arbitrary.
1) geronimo drops a broadside, and Ed deletes the follow-on comments – and says to let him know if he’s been unfair.
2) I write a comment saying that I don’t think that he’s been unfair, but that if the follow-on comments are deleted then geronimo’s broadside should be deleted as well – and my comment doesn’t get past moderation.
3) Foxgoose posts a complaint about being censored, and it gets through moderation.
The problem with moderating is that it is almost impossible to be perfectly consistent. You’re doing as good a job as anyone could reasonably expect.
Well, I don’t think it’s been a hindrance. It’s been very useful. Also, if anyone is annoyed, it should really be with me, not with you. It’s not easy so some self-moderation would certainly help, as would some recognition that it can be tricky.
Rachel, for what it is worth, I consider you to have been doing an excellent job in moderating. Do not treat complaints as indications that you are doing anything wrong. Mostly, they are only evidence that somebody would have done it different (which is not the same as better).
Rachel is training me not to troll. A really nasty habit that I learned from the so called ‘skeptics’.
Rachel is doing a very good job.
Random question: how do people feel about the fact that (a) Lewis and Crok didn’t submit anything to peer review and (b) it’s still caused this level of detailed discussion? Do we think blog science is doing something good here?
I was kind of wondering the same thing. I’ve seen various people on Twitter claim, “you see, most leading climate skeptics are actually lukewarmers and the Lewis & Crok report illustrates how they’ve been misrepresented”. So, maybe this is true. The problem I still have is what this implies. Are we meant to give “climate skeptics” more credibility because they acknowledge that future warming is likely? Maybe, but it still seems that they’re selectively choosing the low end of the range. That doesn’t seem credible to me. It’s maybe a step in the right direction, but it still seems disingenuous for “climate skeptics” to now be running around complaining about how they’ve been misrepresented in the past when they still seem to be ignoring a large fraction of the available evidence.
It makes me grind my teeth. This is a PR campaign dressed up as a paradigm shift.
I think Lewis does tend to lose some credibility by associating with the politically motivated GWPF. And he does seem to have an overt agenda to discredit the IPCC.
I would also take issue with the title. I don’t think the IPCC “hid” anything. The dispute seems to be over which studies should have been given more weight with regard to climate sensitivity.
I really like this site and the moderation of it seems to be just fine to me.
Thanks for pointing out the latest contribution from the GWPF, it seems fairly typical of that organisation’s output, and once again leaves me asking climate ‘skeptics’: “Is that all you’ve got?”
I’m no expert, but it seems to me that Lewis & Crok go to great lengths to dismiss, rather casually, the large array of evidence from modelling and palaeoclimate studies which doesn’t fit with their belief that climate sensitivity is low. With regard to modelling, I’m waiting for a “skeptic” or anyone else to unveil their credible state of the art climate model that suggests that climate sensitivity is low (unless I’ve missed this happy event?). Surely instant scientific fame awaits.
The inordinate amount of time spent attacking (the approximately 7 years out of date) IPCC AR4 in Lewis & Crok’s article also amuses me. I think it distracts from their lack of an argument against the estimate of the plausible range for sensitivity given in AR5.
Anyway, as Piers Forster (and Dana above) point out, we’ll get what I would call dangerous warming even if we’re very lucky and if Lewis & Crok’s low sensitivity estimate turns out to be correct.
I like Ed Hawkins’s comment: “It is great to see the GWPF accepting that business-as-usual means significant further warming is expected. Now we can move the debate to what to do about it.”
I just realised that perhaps “going to great lengths to dismiss something rather casually” may be an oxymoron. Perhaps I should have said “very unconvincingly in my opinion” instead of “rather casually”.
Nic Lewis has just pointed out that the globally averaged rate of OHC uptake in the 0 – 1800m interval of 0.56 W m-2 reported by Lyman & Johnson (2014) seems inconsistent with their Figure 4 (which shows the OHC against time for the different depths). Does anyone who might read this understand the discrepancy? It seems like a rather silly mistake to have made, if it is a mistake.
Wow, thank you people. Kisses and hugs to all of you. I’ll try not to feel so despondent. 🙂
Cheers for replies. Joseph – Frank O Dwyer took issue with the same thing – as he says, it doesn’t suggest good faith, does it?
And I understand why it makes BBD grind their teeth! We have two possibly completely incompatible things going on (and I’m sure I’ve said this before): the PR exercise, as BBD puts it, whose only goal is to create the appearance of scientific argument where none exists. If FUD is sown in the mind of the public and policymakers, the job’s done. I’m not even sure it’s that consciously thought through, but…
And then there’s assuming good faith and arguing on points of science. Even as I write that I realise this is an old, old argument: where should we be investing our energies / is it just massively wasting them even to respond to something posted at the GWPF?
If this were a blog run by a biologist and someone had posted a report with some half-feasible ideas on a creationist website, how would you react? Where it’s hosted clearly *does* say something about its agenda, but perhaps there’s still an onus to patiently restate where something like that is going wrong.
As regards blog science: I’d like to think, at some point in the near future, there would be such a thing. So far, the interaction of the internet with science as represented in university institutions and peer review has been pretty shocking. Climate science is at the forefront of that interaction and, thus far, we haven’t found anything remotely comparable to peer review able to accumulate repeatable knowledge.
Quite the reverse: the net’s shown itself capable of undermining scientific knowledge in the places it really matters, with policymakers and the public – though to be fair, I’m not really sure how it compares to something comparable, pre-internet, like how the science of smoking and cancer progressed.
Only glanced at this but there might be a typo in the caption: 900 – 1800m written up as “0 – 1800m”.
Possibly, but the text and the figure caption are both the same. It would probably explain it, though, if that was the issue. All the other sources I’ve found seem to suggest that the OHC uptake rate exceeds 0.5 W m-2 for the period 2004 – 2011, so the 0.3 W m-2 suggested by Lewis seems a little low.
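As a rough cross-check on the magnitudes involved, one can convert an OHC trend into a global-mean flux. The trend value below is a hypothetical round number in the ballpark of published 0 – 1800m estimates, not a figure taken from Lyman & Johnson:

```python
# Converting an ocean-heat-content trend (zettajoules per year) to a
# global-mean flux (W/m^2). The flux is conventionally expressed per unit
# of total Earth surface area. The trend value is a hypothetical example.
SECONDS_PER_YEAR = 3.156e7
EARTH_AREA_M2 = 5.10e14

ohc_trend_zj_per_yr = 8.0    # hypothetical OHC trend in ZJ/yr
flux_w_m2 = ohc_trend_zj_per_yr * 1e21 / SECONDS_PER_YEAR / EARTH_AREA_M2
print(round(flux_w_m2, 2))   # roughly 0.5 W/m^2
```

So an uptake of around 8 ZJ/yr corresponds to roughly 0.5 W m-2, which gives a feel for how large a discrepancy the 0.3 vs 0.56 W m-2 disagreement actually is.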
First, what is a “leading” climate “skeptic?” Who gets to determine who is leading and who is following?
Second, there is no consistency in how the term is applied. For example, there are many who identify as “lukewarmers” and who say that there is no fingerprint, at all, of ACO2 in any measurement of the climate. They attack all the various metrics as being invalid. In fact, they argue that there is no meaningful concept of a global mean temperature. Would such a person be a “lukewarmer?” Can someone be a “lukewarmer” and still make arguments that are logically inconsistent with “lukewarmism?”
A problem in the logic of the statement excerpted above is that it is a generalization about “skeptics,” of the sort that is often made by a “skeptic” who then turns right around and says “skeptics” are not monolithic. In fact, they aren’t “monolithic” – and that is true irrespective of the attempt by some “skeptics” to throw other “skeptics” under the bus because they aren’t convenient.
The larger, more basic problem is that people are trying to apply labels like “lukewarmer” or “skeptic” without any clear definition of inclusion or exclusion criteria; in fact, the terms are applied in ways whose definitions contain mutually exclusive arguments. The ambiguity of the terminology is readily exploited, and thus allows a “skeptic” to say that “most ‘skeptics’ are ‘lukewarmers'” without being accountable for the veracity of that statement.
Having looked a little closer at the text, I noticed this:
pp 21 – 22:
Which would seem to confirm the table but I’m none the wiser about the fig 4 issue.
Something I’ve been wondering about, and may not explain all that clearly in a short comment, is how some people (Pielke Jr being one example) complain about how they’ve been characterised by others. In my opinion, in general you don’t get to decide how others characterise you. Unless it’s defamatory, others get to decide your character. If people think you aren’t “mainstream” then that’s their right. You can choose to then act in a way that makes you more mainstream, but you don’t really have the right to insist that others don’t characterise you in that way. Similarly, complaining about how various “climate skeptics” have been characterised seems similarly silly. If you don’t like what people think of you, behave differently. If you believe your views are credible and correct, then stick with it until those who’ve unfairly characterised you are proven wrong and – possibly – acknowledge this.
Yes, I saw that paragraph too and did wonder if it didn’t explain the issue, but I couldn’t quite work it out myself.
Somebody might want to email the corresponding author. Someone with academic credentials 😉
Re “lukewarmer”, no lesser luminary than Mosher himself (who claims to have been involved in the coining of the term) told me that this requires acceptance of the basics (radiative physics; GHGs as efficacious climate forcings etc) but a belief that ECS would be below the ~3C/2xCO2 ‘canonical’ estimate. He is, I believe, a self-identifying lukewarmer. I also strongly suspect his rhetoric is part of a pattern of attempts at legitimising himself in the light of his rather more stridently sceptical past.
Sorry – I should have continued. The definition of lukewarmer *to me* and many others, is a belief that ECS (or perhaps now TCR) is right at or even below the IPCC range. This belief enables a no-worries position on emissions in those espousing it.
So physics deniers cannot be lukewarmers.
I think that the amount of energy expended on labeling, including the drama-queening about whether someone has slandered them by applying to them a label, is instructive of the underlying dynamic. The climate wars are largely about identity struggles. This is why people spend so much time arguing about what label should be applied to whom – particularly given that no one bothers to create clear definitions of terminology. Entire threads at Climate Etc. have been devoted to deciding which derogatory term is appropriate for referring to “realists,” by people who spend a lot of time expressing outrage, outrage I say, about whether someone has called them a “denier.”
I mean really, why does it create such indignation if someone calls you a name and you know it doesn’t apply? I get called all kinds of names in these debates that don’t apply. Why would that bother me?
The scientific arguments should stand on their own merits. But instead, they are often seen as a tool for identifying “us” and “them.”
I found it amusing to watch the struggles about whether Muller is a “skeptic” when the opinions on whether he is or isn’t did a 180 after the BEST study was released. It was another ink blot moment in the climate wars, where people see what they want to see.
Ethan Siegel calls Judith Curry’s behavior, ‘scientific fraud’.
Here’s a blast from the past, The GWPF versus BEST:
Can Science Ever Be Settled?
Judith Curry’s *words* on advocacy:
See her posts:
“Mann on advocacy and responsibility”, “Rethinking climate advocacy”, “(Ir)responsible advocacy by scientists”, etc, etc, etc ad nauseum.
Judith Curry’s *actions* on advocacy however:
to write a foreword to a GWPF document, a lobbying organisation on climate policy (!) (!!)
I believe the correct word is “chutzpah”. Although “sickening hypocrisy” might also apply if I were feeling less charitable.
@anders/BBD: re your comment at CLB (in line with your comment above) regarding the 2004-2011 OHC uptake rate Lyman and Johnson 2014, it should be clear that Table 1 is to be trusted over the blurry Fig.4. I’ve been wondering for a while what went wrong with Fig.4, but thought it would be a good idea to wait for the final version of the paper. Having said that, before I had used this number in an official report or paper, I would have contacted the authors. The fact that Nic Lewis has chosen not to do that and to ignore Table 1 (and the corresponding text in the paper) altogether is more than telling. He also doesn’t seem to care that Fig.4 is essentially inconsistent with most of the other published estimates (let alone that the true value can hardly be extracted from Fig.4). So his assumptions are almost certainly wrong. Combine it with all his other best-case assumption (HadCRUT4 rather than Cowtan & Way etc), it can’t help but think that his degree of wishful thinking has apparently increased over time.
Apart from that, I am disgusted by the tone of the usual suspects in the comment section under the Climate Lab Book posting. DK from start to end … with a few laudable exceptions of course 😉
The final paper of Lyman and Johnson is now available in the Journal of Climate 1 March 2014 pp 1945-1957
Table 1 of the paper gives the 0-1800 m warming over the period 2004-2011 as 0.29 W/m^2, expressed as an average heat flux applied over Earth’s entire surface area.
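For anyone wanting a feel for the magnitude of that number, here’s a back-of-envelope conversion into total energy; the figures below are my own rough constants, not taken from the paper:

```python
# Sketch: convert the Lyman & Johnson (2014) Table 1 uptake rate of
# 0.29 W/m^2 (0-1800 m, 2004-2011, per unit of Earth's entire surface)
# into a total energy gain over the eight-year period.
EARTH_SURFACE_M2 = 5.1e14          # total surface area of Earth, approx.
FLUX_W_M2 = 0.29                   # Table 1 value
SECONDS = 8 * 365.25 * 24 * 3600   # 2004-2011 spans ~8 years

energy_joules = FLUX_W_M2 * EARTH_SURFACE_M2 * SECONDS
print(f"{energy_joules / 1e21:.0f} ZJ")  # about 37 ZJ over the eight years
```

That is the right ballpark for upper-ocean heat content changes quoted elsewhere in the literature, which is a useful sanity check on the units.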
It would seem to be the reasonable thing to do. Proper scientific caution, and all that.
Pekka, we crossed. I manage to do this with most commenters here 😉
Indeed, have just managed to find the published version. That would seem to resolve that then. However, as Karsten suggests, that does seem somewhat lower than other published work would suggest.
There’s no way around the fact that the coverage of well-understood and reliable ocean temperature measurements is lacking. Since 2004 we have better data from ARGO, but the time period is too short for firm conclusions. The data appear to tell that warming is not uniform by depth, and most certainly it’s also far from uniform geographically. I’m afraid that all the uncertainties inherent in reconstructing OHC from the measurements are not taken fully into account in most analyses. Lyman and Johnson tell explicitly about these difficulties.
For the above reason, I don’t fully trust any of the data on OHC, including the 8-year trends determined by Lyman and Johnson. It’s like the “pick-the-name” behavior of the surface temperatures of the last 15 years. The low values should not be used strongly against understanding that tells of stronger warming, but the low values surely cannot add support to those theories. In this case the Lyman and Johnson results tell that we cannot conclude from empirical observations that the heat has gone into the oceans during those 15 years. It may have done, but the empirical observations do not confirm it.
” Can someone be a “lukewarmer” and still make arguments that are logically inconsistent with “lukewarmism?”
Many of those who have recently got the ‘lukewarmer’ religion have certainly done so. For example consider how many that now claim this label are obsessed with the Hockey Stick and the existence of the MWP. According to Richard Alley: “The irony is that a warmer medieval time popularly means less concern about global warming, but to me it indicates higher climate sensitivity motivating greater concern”.
It also seems that many of these ‘leading sceptics’ have been horribly misunderstood all these years, even by many of their own followers. Somehow people have been reading the ‘leading sceptics’’ Trojan-horse attempts to convey the lukewarm message and instead got hold of ideas such as: it is cooling; the warming is natural and caused by UHI; the warming that isn’t happening is caused by the sun and ocean cycles; the temperature records are unreliable and adjusted upwards, yet show a pause; and so on.
Pekka, I’m not sure how this affects the discussion since I have yet to read either paper, but if England et al. are correct the “missing” heat is disappearing into the ocean in specific places. Do the obs used by Lyman and Johnson reflect that, and if not is it reasonable to expect they could do so?
Lukewarmers are easily explained.
1) For unscientific reasons, they really want to deny that AGW is a major problem.
2) Physics denial isn’t respectable, and they want to be respected.
Re: silly arguments over labels
On blogs anyone can argue about anything.
Just as some of us look for good climate scientists to learn about climate science, we look for good social scientists for research about social science questions related to climate issues. For instance, see the Yale/GMU Six Americas studies, starting here and explore the “dismissive” category.
For an analogy, is there any practical difference between people who say:
1) There is no evidence that smoking causes disease, researchers do not know every biochemical mechanism, and my smoking uncle lived to 95. All the claims are just statistical gimmicks by socialist medical cabal who want to enforce their morality on us.
2) yes, smoking might cause problems sometimes, but not so bad, and people can always quit, and under no circumstances ever should cigarette taxes be raised or there be any further restrictions .
Congenitally encumbered by a scientific world view, Richard Alley misses the point, which is that if climate scientists can be shown to have made an error, then all of their claims can be questioned.
Interesting result and I have no doubt that we’ll be hearing more about this 😉
One of the reasons for my doubts about the accuracy of the OHC data is that many things occur in the ocean at specific places. There are rather large temperature gradients, and the local temperatures vary much more than the average ocean temperature.
For the land based surface temperature measurements we have the advantage of fixed measuring points. Therefore we can determine rather reliably and accurately temperature changes at a large number of fixed points. If we look at the temperature field as a function of place and time, we can determine directly the partial derivative with respect to time at all fixed measuring points.
In oceans the measuring points vary all the time. We can measure separate points in the four-dimensional space of place and time. There are strong and varying gradients in this temperature field. We wish to determine the derivative of the average, but some small volumes may affect that strongly, because their temperatures vary so much more. It’s really difficult to reach high accuracy in the reconstruction of the warming rate. The seemingly large number of ARGO floats starts to look far too small, even tiny. Of course the scientists who do the calculations are aware of all this, and of course they give their error estimates with this knowledge. Even so I have my doubts.
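Pekka’s point about sparse sampling can be illustrated with a toy calculation. The numbers here are entirely made up (a global-mean trend a hundred times smaller than the local variability, and a float count roughly the size of the Argo array) and are only meant to show the statistics, not to model the real ocean:

```python
# Toy illustration: estimating a small global-mean warming trend from
# sparse, randomly placed samples of a field whose local variability
# dwarfs the mean change.
import random
random.seed(0)  # make the sketch reproducible

TRUE_TREND = 0.01    # degC/yr change in the true global mean
LOCAL_SIGMA = 1.0    # local anomalies are ~100x larger than the signal
N_FLOATS = 3500      # roughly the size of the Argo array

def sampled_mean(year):
    # each float sees the mean signal plus large local noise
    return sum(TRUE_TREND * year + random.gauss(0, LOCAL_SIGMA)
               for _ in range(N_FLOATS)) / N_FLOATS

# trend estimated from two snapshots a decade apart
est_trend = (sampled_mean(10) - sampled_mean(0)) / 10
print(est_trend)  # should land near 0.01, but the sampling noise is not negligible
```

With these assumed numbers the standard error on the estimated trend is a sizeable fraction of the trend itself, which is the flavour of the problem Pekka describes, even before geographic clustering of the floats is considered.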
In the acknowledgements of Lyman and Johnson (2014) in the published version (http://www.pmel.noaa.gov/people/gjohnson/gcj_4n.pdf):
“Comments of three anonymous reviewers improved the manuscript. Nicholas Lewis pointed out an error in the accepted version as well.”
This suggests to me that Nic Lewis both contacted the authors and confirmed that this was an error in their accepted draft version.
Thanks Pekka for pointing that out. Just downloaded the paper and things have indeed changed radically for 2004-2011. Much to my surprise (for reasons I highlighted above), the tabular values were indeed completely wrong. Well, Nic Lewis may open a bottle of champagne now. If this value turns out to be closer to the truth (compared to previously published estimates), then something pretty weird seems to have happened with the forcing over that period of time. On a final note, one could argue that the pentadal average of the Levitus OHC data between 2004-2011 isn’t much higher, which doesn’t make Lyman and Johnson any less interesting a result.
Yes Troy, you may condemn my unwarranted prejudgement ad libitum now. NL got it all right. I got it all wrong. Happens. I duly apologize. Cheers!
Just another thought which popped into my mind. The change in forcing between 2000-2004 was indeed slightly negative (though with a considerable planetary energy imbalance). Sure enough, OHC uptake between 2000-2003 was massive. The oceans just don’t seem to care much about the forcing 😉 But then, no one would expect this to be the case anyway, I guess …
Steve Bloom – “but if England et al. are correct the :missing” heat is disappearing into the ocean in specific places. Do the obs used by Lyman and Johnson reflect that, and if not is it reasonable to expect they could do so?”
There’s nothing mysterious about where the heat should be going – see Ekman (1905). And indeed heat accumulation is occurring in the deep ocean beneath areas of surface convergence – see the supplementary material in Levitus et al (2012).
Interestingly though, most of the deep ocean warming in the last decade has been occurring in the Southern Hemisphere subtropical gyres.
Karsten – “Sure enough, OHC uptake between 2000-2003 was massive.”
It sure was. Check out Figure 2 from Balmaseda et al (2013). Even when the ARGO data is excluded the rapid ocean heat uptake (OHU) in the early 2000s remains. What really caught my eye was where this rapid OHU was occurring – see Figure 3 in their paper – the rapid warming occurs in the tropical ocean. Somewhat difficult to handwave that away by invoking the uncertainty ewok……
Those links don’t work for me so I thought I’d grab the figures and post them here instead.
Source: Balmaseda et al (2013)
It seems that the accuracy of ocean heat content keeps coming up. The numbers are accurate.
Levitus has been very active in measuring and accurately predicting ocean thermal profiles using sparse data with XBTs. I first learned about his work in Anti-Submarine Warfare in 1991, not climate science.
His methods and data are applied, not theoretical. Extracting accurate data from sparse geospatial sources is the realm of engineering. A submarine isn’t going to surface and run an XBT down to figure out where to hide. It primarily uses the Levitus database to accurately predict where to hide. All of naval warfare is predicated on the accuracy of just this data.
Temperature and salinity profiles are used to predict the speed of sound underwater. Variations will channel sound to particular depths and deflect it from others. You hide your $3 billion submarines where the sound is deflected away.
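For the curious, the kind of calculation AnOilMan describes can be sketched with a standard empirical fit. This uses the nine-term Mackenzie (1981) equation; the coefficients are quoted from memory, so treat this as illustrative rather than authoritative:

```python
def sound_speed(T, S, D):
    """Approximate speed of sound in seawater (m/s), Mackenzie (1981).

    T: temperature in degC, S: salinity in psu, D: depth in metres.
    Valid roughly for 0-30 degC, 30-40 psu, 0-8000 m.
    """
    return (1448.96 + 4.591 * T - 5.304e-2 * T**2 + 2.374e-4 * T**3
            + 1.340 * (S - 35) + 1.630e-2 * D + 1.675e-7 * D**2
            - 1.025e-2 * T * (S - 35) - 7.139e-13 * T * D**3)

# Warm surface water is fast; colder water at depth is slower despite the
# pressure term. The resulting speed minimum forms the sound channel that
# both sonar operators and hiding submarines exploit.
print(sound_speed(20, 35, 0))     # warm surface water, roughly 1520 m/s
print(sound_speed(4, 35, 1000))   # cooler water at depth
```

Dependence on temperature and salinity at every depth is exactly why a database like Levitus’s matters: get the profile wrong and the predicted sound paths are wrong.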
So have L&J, ah… misunderestimated OHU?
Reposted for emphasis
It's so tragic that they've been so misunderstood.
Oh, and Frank – also tragic that they’ve been misinterpreted to believe that climate models have been rigged to produce a particular outcome, or that climate modeling itself is completely useless, or that it is impossible to estimate climate sensitivity and fraudulent to try, instead of understanding what they really have been saying as “lukewarmists”: that modeling is not only possible, but in fact quite reliable, and that it is quite possible to estimate sensitivity within a fairly narrow range (as long as we throw out the high ends of the ranges and focus only on the analyses that produce relatively lower “best estimates”).
This thread, and links contained to other sites, plus recent discussions on this site and others on communicating science to the general public has convinced me that I am on the right track when giving talks to the general public on GW (subject to any mid-course corrections suggested here).
We need to be very explicit to the public about what is ‘certain’ and what is ‘uncertain.’ And I do not think that is always done in a succinct and direct manner. (A manner of speaking which I usually attribute to the ‘Brits,’ as opposed to the ‘cousins in the States.’)
What is ‘certain’: 1) GW is real, 2) it is due to the GHG effect, 3) the enhanced GHG effect is due to excess CO2, 4) the excess CO2 is due to humans, 5) therefore it is going to continue to get warmer, with attendant consequences.
What is ‘uncertain’ (and discussed herein in great, excruciating detail): 1) how far it is going to go, and 2) how fast it is going to get there.
Therein lies the dilemma. How do we determine a policy, and get the public to buy in, when we do not have ‘certainty’ about the road we are about to travel, how long that road is, what speed we are going down that road, and where the end of the road may lie?
BG: The military is all over this stuff and they truly grasp what it’s implications are.
Global Warming means ‘war’ as far as they are concerned. Fighting over resources (like water) within a nation will tear at the very heart of what it is to be a nation. (You can expect nations to break apart.) You can’t ask people to ‘be reasonable’ when their populations are starving. Reason leaves the picture.
Refugees will also be a huge burden. The US already has some in Alaska. A huge portion of Florida will be flooded in the next hundred years. Navies will also be refugees, apparently their bases are built near oceans. Who knew? 🙂
Food will be scarce. Nations which rely on imports will likely be bankrupted by escalating food prices. That’s you Brits in case you’re wondering. Some nations will do well… Canada can expect more bumper crops of wheat and Canola, etc. However this is not the same as growing more food… The soil here won’t support much more than grains.
Carbon does not enhance plant growth, particularly if coupled with increased temperature. There have already been experiments which simply increased CO2 (everything else remained the same) around outdoor plants and there was no improvement.
Water, which is already a serious concern, will become more so with increasing heat and desertification. Groundwater pollution is also hitting dangerous levels, from toxic chemicals to fracking. By the way… I’m reasonably certain that we can’t seal up an oil well. They need maintenance in perpetuity as even the cement plugs break down. (It’s not the frac that leaks, but the well bore going down there.)
Lastly, ocean acidification stands to trash our ocean food supplies. All data point to declining fish/shellfish stocks. To my knowledge the oceans provide the bulk of Earth’s protein. (Hell hath no fury like a few billion starving Chinese.)
Lastly we have no clue what the new climate will be. There is an underlying assumption that the existing natural cycles will still be there. But I seriously doubt that. Perhaps Canada will get constant snow and ice storms? Maybe the ENSO cycles will disappear?
We can scrub CO2 from free air and sequester it. It currently costs $150(?) a ton to do this. A Calgary company developed this technology.
I believe Hansen’s Tax and Dividend idea will work. The policy is working wonders in BC, Canada. No new bureaucracy, essentially zero cost to run.
AnOilMan, I agree with everything you’ve written here, except
Here’s a nice Nature summary of the CO2/Plant Growth discussion.
There may also be the occasional bright spot, like the supposed greening of the Sahara, or (wishful thinking) putative improvements in photosynthetic potential. But your first two paragraphs are what really worry me, partly because our politicians appear to be completely ignoring these risks.
AnOilMan and That’sMrBall (all this from your link MrBall which I think is worth elaborating on here),
On the topic of CO2 and plant growth, C4 plants do not benefit at all from increasing CO2 concentrations and these plants include important crops like maize, sugar cane, sorghum and millet. These plants also cover large areas of the planet like the vast tropical grasslands of Africa and South America.
The plants that do benefit from increasing CO2, C3 plants (wheat, rice, soybeans), also show a decrease in the concentration of important minerals like calcium, magnesium and phosphorus and also lower protein concentrations. So animals eating these crops will need to eat more of them.
The other issue is water use, which does decrease with higher CO2 levels. However, this is not necessarily a good thing, as it will cause more runoff and higher soil moisture levels.
Fully concur. I have a presentation that I give on the ‘science’ and have another 50-slide one on the US Military and climate change.
Interest stems from just retiring after 40 years as DoD contractor, the last 19 years of which running my own small business. AND having a son who did 6 tours in Iraq in our war for oil.
You might enjoy the comments policy in this blog – http://www.ritholtz.com/blog/2014/03/your-broccoli-is-way-too-thirsty/
Please use the comments to demonstrate your own ignorance, unfamiliarity with empirical data and lack of respect for scientific knowledge. Be sure to create straw men and argue against things I have neither said nor implied. If you could repeat previously discredited memes or steer the conversation into irrelevant, off topic discussions, it would be appreciated. Lastly, kindly forgo all civility in your discourse . . . you are, after all, anonymous.
Further to what Rachel said, it’s always worth remembering that CO2 is only a factor in plant growth when other limiting factors are controlled for. So nutrients and water and temperature need to be just right. What works in a hydroponic greenhouse with an elevated CO2 atmosphere doesn’t usually work in the great outdoors.
Anyone wanting to get a grip on the geopolitical implications of C21st warming could to a lot worse than read Gwynne Dyer’s Climate Wars. Dodgy cover, generally excellent text 😉
Six tours. Hell’s teeth. My commiserations to you as a parent.
I did enjoy that! Thank you. This bit was especially good:
I also like his response to complaints of censorship: GYOFB.
The blogger is clearly a retired diplomat.
jsam and Rachel,
That’s brilliant. I particularly like GYOFB, especially as that’s essentially exactly what I did 🙂
What BBD said. That must have been quite something to have had to go through – for you and your son.
It is interesting to note that the Nature paper cited by MrBall2U hardly mentions the effects of increasing temperature on plant growth. Early experiments showed that RUBISCO (the enzyme responsible for CO2 fixation) was quite resistant to higher temperatures, so it was felt that higher temperatures would be beneficial to plant growth.
However, more recent research found that RUBISCO goes through a daily cycle from active form to inactive form. The activation is carried out by the enzyme RUBISCO activase. Unfortunately, RA is heat-sensitive, so higher temperatures will in fact lower carbon fixation and hence plant growth:
Click to access 13430.full.pdf
BBD and ATTP,
Thanks. And for his mother. “Hell hath no fury like a woman” whose son has been sent to Iraq 6 times. And had two grandkids born during two of this deployments……and one by C-Section.
They’ve run experiments on dumping extra CO2 around plants and not changing other variables. As always, the answer is complex, but not good;
Yes, I admit it’s wishful thinking on my part that some small benefit might emerge from that complexity.