Assessing global warming

I haven’t really tackled anything specifically physicsy for a while, so I thought I might comment on a recent post on Judith Curry’s blog that was pointed out to me by Very Tall Guy. It’s a guest post by Roger Pielke Sr called “An alternative metric to assess global warming”.

Roger seems to be suggesting that we should use ocean heat content (OHC) to assess global warming. I certainly agree as – I thought – do many others. If anything, I was under the impression that many were arguing that surface temperatures were a poor metric for assessing global warming. So, overall, the post is quite interesting. Roger does, however, say a few things that confuse me; for example

Stephens et al. (2012) reports a value of the global average radiative imbalance (which Stephens et al. calls the “surface imbalance”) as 0.70 Watts per meter squared, but with the uncertainty of 17 W m-2!

Stephens et al. (2012) do indeed suggest a surface radiative imbalance of 0.6 ± 17 Wm-2, but also show a top-of-atmosphere (TOA) imbalance of 0.6 ± 0.4 Wm-2. I would argue that the latter is what’s actually relevant to the climate system as a whole. I’ve never been quite sure why the uncertainty in the surface imbalance is so large, but I had assumed it had something to do with the large variability in surface warming.

Roger also says,

It needs to be recognized that deep ocean heating is an unappreciated effective negative temperature feedback, at least in terms of how this heat can significantly influence other parts of the climate system on multi-decadal time scales. Nonetheless, we have retained this heating in our analysis.

I’m not quite sure what he means by this. I agree that the deep oceans can probably influence how the energy is distributed through the system and can play a role in variability, but am not sure why that’s a negative feedback.

Anyway, Roger’s basic argument seems to be that the standard equation

ΔQ = ΔF – (1/λ)ΔT
where ΔQ is the system heat uptake rate, ΔF is the change in radiative forcing, ΔT is the change in temperature and λ is essentially the climate sensitivity, is conceptually useful, but can be difficult to implement.

Roger seems to be arguing that we should be more explicit, and should instead use

ΔQ = ΔF + ΔFfeed
where ΔFfeed is the radiative influence of the feedbacks, including that due to the change in temperature.

The post then includes the following two figures
[Figure: radiative forcings (IPCC SPM figure, “spm5”)]
[Figure: feedback estimates from Wielicki et al. (“wielicki”)]
From the top figure one can work out that anthropogenic forcings have increased by ΔF = 1.72 Wm-2 since 1950. From the bottom figure, one can sum the four different feedbacks to get -1.21 Wm-2K-1. Surface temperatures have increased by around 0.6 K since 1950, so ΔFfeed = -1.21 x 0.6 = -0.73 Wm-2.

If you take those numbers and plug them into the second equation above, you get ΔQ = 1.72 – 0.73 = 0.99 Wm-2. So, this is somewhat bigger than the mean value in Stephens et al. (2012), but within the uncertainty range. The uncertainties in some of the quantities above are quite large; for example the change in radiative forcing is actually 1.72 ± 1.1 Wm-2 which, if included, would make the results consistent with the OHC measurements. There’s also been a small reduction in solar forcing since 1950, which isn’t included. So, it all seems about right to me. Quite impressive in some sense.
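That arithmetic can be sketched in a few lines of Python (a sketch using the central estimates only; as noted above, the uncertainties on these quantities are large):

```python
dF = 1.72             # change in anthropogenic forcing since 1950, W m^-2
feedback_sum = -1.21  # sum of the four feedbacks from the lower figure, W m^-2 K^-1
dT = 0.6              # surface warming since 1950, K

dF_feed = feedback_sum * dT  # ~ -0.73 W m^-2
dQ = dF + dF_feed            # ~ 0.99 W m^-2
print(dF_feed, dQ)
```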

One thing I will add, though, is that, as far as I’m aware, the first equation above (which Roger seems to think is not ideal) is essentially the same as the one that Roger recommends. Consider the following:

ΔQ = ΔF + ΔFfeed = ΔF – ΔW ΔT = ΔF – (1/λ)ΔT
where ΔW has units of flux per Kelvin and includes all the feedbacks, ΔWnonTfeed being the flux per Kelvin for all feedbacks bar those due to the change in temperature. Interestingly, using Roger’s own numbers, 1/λ = ΔW = 1.21 Wm-2K-1, which gives λ = 0.82 K/Wm-2. Given that a doubling of CO2 produces a change in forcing of 3.7 Wm-2, this implies an equilibrium climate sensitivity of 0.82 x 3.7 ≈ 3.1 K.
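The implied sensitivity works out as follows (again a sketch with central estimates only):

```python
inv_lam = 1.21  # 1/lambda, from summing the four feedbacks, W m^-2 K^-1
F_2x = 3.7      # forcing from a doubling of CO2, W m^-2

lam = 1.0 / inv_lam  # ~0.82-0.83 K / (W m^-2)
ecs = lam * F_2x     # ~3.1 K per doubling
print(lam, ecs)
```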

So, as far as I can tell, the form that Roger suggests using is essentially consistent with the form that most already use; you can estimate the feedbacks if you know the change in temperature, the system heat uptake rate, and the change in anthropogenic/external forcings. If you want to check GCMs (which I think is part of what Roger is suggesting) you can take their feedbacks and use them to estimate the system heat uptake rate, which can then be compared with measurements. Alternatively, use the system heat uptake rate, the change in anthropogenic forcings and the change in temperature to estimate the feedbacks and then compare that with what GCMs suggest. Both seem essentially the same, to me at least.

However, I do agree with Roger that using OHC and other system heat uptake rates is a better way to assess global warming than using surface temperatures only. I am, however, slightly confused as to why he seems to be suggesting that this is somehow novel, as I suspect many others agree with him too. So, maybe I’m missing some subtlety here. If anyone can see something more to this than I have, feel free to let me know through the comments.

This entry was posted in Climate change, Climate sensitivity, ENSO, Global warming, Judith Curry. Bookmark the permalink.

64 Responses to Assessing global warming

  1. Fred Moolten says:

    Note that if you do the same calculations using the 1980 forcing value of 1.25 Wm-2 rather than the 1950 value of 0.57 Wm-2, the imbalance calculates out to 0.31 Wm-2 rather than 0.99 Wm-2. I suspect that the very large differences reflect inaccuracies in forcing estimates. If you average the two estimates, the imbalance becomes 0.65 Wm-2, similar to what you quote for Stephens (0.6 Wm-2) or for the IPCC (0.71 Wm-2). A small point – in your post, the temperature rise since 1950 (or 1980) should be expressed in terms of K rather than Wm-2.

  2. Fred,
    Interesting, I hadn’t thought of trying that. AFAICT, this is essentially equivalent to energy budget estimates for climate sensitivity which are also very sensitive to variations. Fixed the units, thanks.

  3. And I thought you had finally learned that equations scare people away. 🙂

    Maybe it is because I am more of a ground-based remote sensing guy, but I am surprised at how small the error bars of Graeme Stephens are. For the ground estimate, it makes sense that it is a lot larger. How much sun gets to the surface depends strongly on how thick clouds are, but most satellites basically just see the top. Also aerosols are hard to measure, especially below clouds naturally, but also near clouds. Graeme Stephens probably used his A-train data and thus had some information on the vertical cloud profiles, but also that data is quite limited for thicker clouds.

  4. Sometimes scaring people away is the intention 🙂

    As far as the large uncertainties are concerned, the fact that the surface can cool even when the TOA imbalance is positive suggests that the surface flux can be positive or negative, which in turn suggests the uncertainty (if it is a measure of variability) has to be bigger than the mean TOA flux. 17 Wm-2 is surprisingly large, though. Alternatively, the uncertainty is something else altogether and I don’t really understand it at all 🙂

    I meant to add to the post that I was surprised by the global mean temperature feedback in the lower of the two figures (-4.2 Wm-2K-1). I had always thought that the effective surface emissivity was about 0.6, which would give a temperature feedback of 4εσT³ ≈ 3.3 Wm-2K-1.
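For what it’s worth, the arithmetic behind that estimate (assuming a global mean surface temperature of about 288 K, which isn’t stated above):

```python
sigma = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
eps = 0.6        # assumed effective emissivity
T = 288.0        # assumed global mean surface temperature, K

planck = 4 * eps * sigma * T**3  # ~3.3 W m^-2 K^-1
print(planck)
```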

  6. Fred Moolten says:

    Regarding the temperature feedback, I haven’t looked up the reference, but I wonder if it might not include a lapse rate feedback as well as the Planck response.

  7. Fred,
    That’s possible. I did try looking it up today (it’s actually from AR4) but I didn’t have any success finding the figure and eventually gave up.

  8. Steven Mosher says:

    “However, I do agree with Roger that using OHC and other system heat uptake rates is a better way to assess global warming than using surface temperatures only. I am, however, slightly confused as to why he seems to be suggesting that this is somehow novel, as I suspect many others agree with him too. So, maybe I’m missing some subtlety here. If anyone can see something more to this than I have, feel free to let me know through the comments.”

    Simple:

    http://www.skepticalscience.com/pielke-sks-disagreements-open-questions.html

    As Roger’s title suggests, he has been arguing for some time that OHC should be the alternative
    and sole metric.

    The disagreement has gone on for some time. You must be new to the debate.

    “However, as the graphic illustrates, nearly 7% of the energy goes into the rest of the climate system. Moreover, the degree of accuracy of OHC measurements is still quite uncertain: the ARGO network is relatively new, and doesn’t measure the deep oceans, and short-term noise in the data remains a concern (i.e. see Domingues et al. 2008).

    Furthermore, we humans are surface-dwelling creatures, so impacts to the climate on the surface is highly significant to us. And most fast feedbacks are dominated by surface/atmospheric temperatures; so to rely on OHC as the sole diagnostic would disguise the progression of processes that will affect our experience of climate.

    Ultimately we stand behind our answer that we must consider all lines of evidence, and all aspects of the climate. SkS does not believe there should be a single preferred global warming diagnostic. However, it’s also important to note that a lot of energy has gone into the oceans (more on this below).”

    So the argument , drama, has gone like this.

    Side 1: Look at air temperatures.
    Pielke: OHC is better, OHC should be the sole metric.
    Side 1: well OHC is uncertain, and we live in by the surface. we need to look at all metrics.
    wow, look at air temps, look look look at air temps. look at the air temps..

    years pass

    air temps sputter…

    Side 1: look at OHC, look at OHC.
    Pielke: durrr…

    In short, Pielke has consistently argued that OHC should be the sole diagnostic.
    Side 1 has argued that we need to consider all and they highlight metrics when they look good.

    So, now the climate plays out. If air temps soar.. the PR focus is there. If ice melt increases, the PR shifts. If ice recovers, and temps stall, the PR shifts to OHC.

    it’s more interesting as drama than it is as science.

  9. Steven,

    As Roger’s title suggests, he has been arguing for some time that OHC should be the alternative
    and sole metric.

    Alternative: absolutely. Sole: never sure why we should restrict ourselves in that way.

    You must be new to the debate.

    Yup.

    Side 1: Look at air temperatures.
    Pielke: OHC is better, OHC should be the sole metric.
    Side 1: well OHC is uncertain, and we live in by the surface. we need to look at all metrics.
    wow, look at air temps, look look look at air temps. look at the air temps..

    years pass

    air temps sputter…

    Side 1: look at OHC, look at OHC.
    Pielke: durrr…

    I’m not quite sure how your SkS link somehow proves this narrative.

    it’s more interesting as drama than it is as science.

    Indeed, that seems common. From what I’ve seen, I’m not sure that you can claim to be faultless in that regard yourself. Myself too, to be fair.

  10. Determining separately each component of the energy balance from empirical measurements leads to a large uncertainty in the net energy balance also at TOA. The small uncertainty can be obtained only by using other constraints like changes in OHC. The TOA uncertainty determined without such constraints is smaller than surface uncertainty, but still several W/m^2.

    OHC is a good measure in principle, but lacking good history data and there are some issues of accuracy even now. Thus I don’t think it can take over as the primary indicator of warming in near future. It’s certainly an important measure, but not a replacement for surface temperature.

  11. Pekka,

    Determining separately each component of the energy balance from empirical measurements leads to a large uncertainty in the net energy balance also at TOA. The small uncertainty can be obtained only by using other constraints like changes in OHC.

    Indeed, I should probably have mentioned that.

    OHC is a good measure in principle, but lacking good history data and there are some issues of accuracy even now. Thus I don’t think it can take over as the primary indicator of warming in near future. It’s certainly an important measure, but not a replacement for surface temperature.

    I think the history issue is a factor. However, I’m not sure why it can’t start to take over as the primary indicator in the near future. We now have good coverage at depths down to 2000m (I think), so it would seem sensible that it becomes one of the prime indicators, even if it isn’t the only indicator (I don’t really see why it should ever become the sole indicator, though).

  12. Arthur Smith says:

    If I may hazard a guess, I think the point Pielke was trying to make was that flux changes in reality may not be simply a response to temperature change on the surface (in fact, changes in incoming energy from the sun due to solar cycles etc have nothing to do with the surface temperature), and it is more correct at any given point in time to look at all the flux changes whatever the source. However, in the article he then just finds the flux change by multiplying the feedbacks by temperature change just as you did, so in actuality the treatment there doesn’t take any notice of this difference at all. Odd – but of course one hopes such an obvious inconsistency would be caught if this was an actual peer-reviewed publication… 🙂

  13. Science has never been restricted to a single indicator. The need of one primary indicator is elsewhere. In that use the surface temperature is perhaps the only one that’s understandable enough for most.

  14. Arthur,
    Yes, it certainly seems that his proposed change is essentially exactly the same as what he suggested wasn’t sufficient.

    and it is more correct at any given point in time to look at all the flux changes whatever the source.

    Yes, that would certainly seem to be the fundamental point.

    Pekka,

    In that use the surface temperature is perhaps the only one that’s understandable enough for most.

    Doesn’t that highlight the issue, in some sense? Scientifically we should really do as Arthur pointed out – consider all the fluxes to determine the energy balance/imbalance in the system. However, when communicating about climate change/global warming it’s much simpler to consider something that is understandable to most – surface temperatures, for example (although I think many can understand how different parts of the climate system absorb different amounts of energy).

    As Steven points out, though, much of this is drama rather than science. What one would hope is that active scientists could agree that surface temperatures alone are a poor indicator and that one should consider all the fluxes (for example) but also accept that when communicating to the general public, one might present a slightly simpler picture. One might hope that this is what would be done, but one might also hope in vain.

  15. dana1981 says:

    I was about to reference the ‘debate’ between Pielke Sr. and SkS, but I see Mosher beat me to it. Except I entirely share ATTP’s puzzlement:

    “I’m not quite sure how your SkS link somehow proves this narrative.”

    It doesn’t at all. We fully agreed with Pielke that OHC was a very important metric, and were simply making the point that it shouldn’t *exclusively* be used to measure ‘global warming’, as there’s another ~7% in warming the atmosphere, land, ice, etc. And it’s still true that surface temps are relatively important for surface dwellers, which is why they’re used to define climate sensitivity. Our position on this matter remains consistent – SkS always strives to consider *all* the available data and evidence.

    I’d also argue Pielke was a proponent of using OHC as *the* metric because at the time he first made that argument, measurements didn’t seem to indicate much ocean warming. That was before we had reliable Argo data below 700 meters, and before the data seemed to indicate much warming in the upper 700 meters.

    Now that we have better OHC data, I’ve seen Pielke continue to ignore the warming below 700 meters and claim the upper 700 meters haven’t warmed much, which isn’t true. So I’d suggest Mosher’s narrative about Pielke Sr. isn’t very accurate either, though it is true he’s been arguing that we should focus on OHC for a long time. However, I think Mosher has the ‘side’ that’s practicing ‘PR’ wrong.

  16. Steve Bloom says:

    “That was before we had reliable Argo data below 700 meters”

    That was before we had reliable Argo data *above* 700 meters!

  17. JCH says:

    Bingo. He was salivating at the chance to trash Hansen and GISS Model E, and once at Climate Etc he popped in and encouraged use of the PMEL graph for 0 to 700 meters.

    Also, I hate it when people say deep ocean heating. To skeptics that means abyssal oceans, and then they start with their idiotic “it’s harmless” routine. I think that may be what RP Sr. is getting at by “negative feedback”. By percentage, little heat is making it to the abyssal oceans.

  18. Closey says:

    [Mod: this comment has been removed by the moderator]

  19. I’m an SkS author, and wrote this in 2009:

    “I looked around for Pielke’s work mentioning heat content and found this [rotted link]. Is that a good reference? I agree that internal energy of the atmosphere is a more robust and useful variable than temperature, but I’d go one step further. That is, a much more useful variable would be the internal energy of the atmosphere and ocean combined. That would eliminate the spurious temperature swings associated with ENSO events that seem to mislead many people. This heat transfer between the atmosphere and oceans wouldn’t distort such a metric.”

    My “disagreement” has only gone on for about 5 years, so I must be new to the debate.

  20. Carrick says:

    Dana:

    SkS always strives to consider *all* the available data and evidence.

    Myself too. OHC just isn’t very useful because the temperature changes are small, the volume to measure is much larger, the time scales for full circulation are much greater, and the data coverage is much worse.

    And as Mosher points out, the economic significance of mean surface air temperature is much greater than that of total OHC.

    I’d also argue Pielke was a proponent of using OHC as *the* metric because at the time he first made that argument, measurements didn’t seem to indicate much ocean warming.

    Mind reading would be useful if you had any way to validate it. Since people are usually terrible at it, I’d suggest it’s usually better to just consider the veracity of the arguments people make and not assume intent about why they made them.

  21. anonymon says:

    Pielke was writing about heat content before the drama started:

    Pielke, Roger A., 2003: Heat Storage Within the Earth System. Bull. Amer. Meteor. Soc., 84, 331–335. doi: http://dx.doi.org/10.1175/BAMS-84-3-331

  22. Joshua says:

    steven –

    Here’s another convo I’ve seen.

    “Skeptic”: “We can’t rely on air temperature records because they have been fraudulently adjusted as a part of a hoax. Oh, and also, there is no such thing as global average surface temperatures. The entire concept is nonsense. You can’t average global temps. Oh, and besides, UHI, UHI, UHI (and also RC moderation).”

    “Realist.” “Well, the surface air temps have shown an anomalous rate of increase, when you consider all available evidence. That is certainly important. It suggest that we should consider policies that address mitigating the potential risk of accumulated ACO2.”

    “Skeptic.” “You’re an eco-Nazi cultist, intent on destroying capitalism so you an instil a statist, socialist, fascist system in order to starve children.”

    “Realist” “Are you saying that this is all a conspiracy? The oil companies must be paying you. And you’re a denier.”

    “Septic” “Please ignore everything I said in my first comment about the lack of validity of surface air temps, because if we look at surface air temps, we can see that the hairbrained “alarmism” about the potential of ACO2 accumulation is merely the product of “activist” scientists duping the public with scare stories. Now that there has been a “pause,” I suddenly realized that surface air temps are the be all and end all, the mother of all metrics. And also, I am writing from my fainting couch while clutching my pearls, because you have compared me to a Nazi. I am such a victim!”

    “Realist” “Well, but surface air temperatures are not the whole story.”

    “Skeptic” “You’re an eco-Nazi cultists, intent on destroying capitalism so you an instil a statist, socialist, fascist system in order to starve children.”

    “Realist” “If you continue to write comments like that, I won’t post them on my blog.”

    “Skeptic” “CENSORSHIP!!!1!! LYSENKO !!!!111!!!!! FREEDOM OF SPEECH11!!!!!!!1”

    “As Steven points out, though, much of this is drama rather than science. What one would hope is that active scientists could agree that surface temperatures alone are a poor indicator …”

    If you look at longer time scales, which an active scientist could agree is sensible, the surface temperature is a fine indicator of global warming. 🙂 And the homogeneity of the data is likely better than that of long term oceanic observations. But naturally, one should look at all data and for every question consider their strengths and weaknesses.

    The OHC is good for the climate debate because it has less decadal variability. On the other hand, I can imagine that it is more abstract and distant to many people and that that is also a reason for Pielke to like it.

  24. Fred Moolten says:

    Conflict between the primacy of OHC as opposed to surface temperature changes may appeal to our sense of drama, but the more important reality lies in the complementarity between these metrics for climate change attribution. When external forcing is the dominant mechanism, OHC and surface temperature change commensurately – the atmosphere and surface warm and some of the heat accumulates in the ocean. Conversely, when internal variation dominates due to changes in heat distribution within the ocean, the two metrics move in opposite directions – surface warming (e.g., during El Nino) is accompanied by ocean heat loss, with the reverse true for La Nina. This distinction is crucial in demonstrating that almost all the post-1950 surface warming has been due to forcing (mainly via GHGs). The same principle, applied to the reduced surface warming rate since 1997 accompanied by a persistent (and in some years increasing) rise in OHC, is presumptive evidence for an important role for internal variability in modifying the effects of continued GHG forcing. The data on ocean heat redistribution during this recent era is uncertain enough to preclude a completely confident attribution, but the attribution of post-1950 warming to forcing with little contribution from internal variability is based on sufficient evidence for a rising OHC to justify a very high level of confidence.

  25. Steve Bloom says:

    New indeed, DS. RP Sr. started his blog in mid-2005 and launched into this stuff promptly, although I think the initial emphasis was on metrics that would be better felt by the public than GMST. But that’s a blind alley about which there’s not even much to say, so pretty soon he was heavily flogging OHC (which note is actually worse than GMST in regard to personal perception).

    For anyone with the stomach to wade through it, the Wayback Machine seems to have a complete archive of his original blog. (I didn’t look to see if they have the newer one as well, but probably.)

    But this early 2006 Stoat post gives a sufficient flavor, I think.

    It’s funny how techniques like pointing to where the problem isn’t and making much noise about its absence, or trashing an adequate metric in favor of one that can’t yet exist in a useful form even while using its partial data to argue against the metric that actually exists, circulate within the Pielke clan.

    Carrick: I don’t think that takes mind reading.

  26. Wow, Steve. Thanks for the links, which show that I’m new indeed. I lost my stomach for wading long ago, but my brief flavor sampling reminded me of the recent Cosmos episode about lead. Methods of generating confusion seem to rhyme through history. In fact, I remember an ancient quote about real medical doctors being less able to convince villagers than con men with showman skills. Unfortunately, I can’t figure out who wrote that. Seems relevant today, for some reason.

  27. Steve Bloom says:

    Sadly so, DS. In a similar vein I recall a Beeb show hosted by Jonathan Miller where a witch doctor (this was in Angola IIRC) divined by means of newly-hatched chicks which he would hold by the neck between his toes while force-feeding them a bit of strychnine. Answers were determined by whether the chicks survived. Miller pointed out to some customers that the chicks who died always seemed to receive a much larger dose. They agreed this had happened, but disagreed that it might have affected the outcome. Our society, being above that sort of thing, has stuff like homeopathy and the supplement industry. 😦 The witch doctor put on a nice show, BTW.

  28. Steve Bloom says:

    Oh, you especially will appreciate this: My favorite RPSrism re OHC was when he advised against considering Jason and Argo data together, saying that the latter (at the time with far less complete data relative to Jason) should be considered on its own, although I suppose anyone who accepted that assertion must have wanted to be fleeced.

  29. Eli Rabett says:

    Way back when, Eli pointed out to Sr that the early Argo data was early, that time should be allowed for it to shake out, and that in any case reliable OHC data did not go back very deep or far. As the Rabett wrote in 2007

    ” I would recommend caution with something like this. It is going to take a while for the calibration and other kinks in the ARGO float data to be worked out, and in the meantime there is great potential for egg on the face as was the case with the MSU fiasco’s. ARGO has the huge advantage that it was designed for the type of measurements that are being done. Moreover extrapolation with such a short data set is particularly risky given variability.”

    Sr. would have none of it and was quite enthusiastic about pointing out how the Argo data proved that OHC was decreasing.

    “The response was to attack the surface temperature record, via a 95 pager that apparently has been accepted by J. Geophys. Res. One wearies. ”

    That link to climatescience (Sr’s then blog) has disappeared.

    The Argo data is now more trustworthy, but it still does not go back very far, and the earlier measurements were, shall Eli say, indicative but not very quantitative. That leaves you with point estimates of the various quantities (e.g. single year/month etc). To get anything sensible out of that you have to assume that natural variability is zilch. Judy’s Italian flag lies in tatters if you accept such a method.

    We were talking about consistency.

  30. OPatrick says:

    Stephen Mosher’s dramatism appears to be of the Looney Tunes school.

  31. Rachel M says:

    Joshua,

    I love it! I’ve seen that conversation too, many times.

  32. verytallguy says:

    ATTP, thanks for following up on my suggestion.

    On the Climate etc post:

    What I think Pielke has done is look at the change in OHC from 1950 – 2011 vs forcings and feedbacks.

    Forcings are taken from AR5, feedbacks from Wielicki et al, assuming a delta T of 0.6K from 1950 – 2011

    Taking the central estimates for these, Pielke concludes that the mean radiative imbalance is 0.99W/m2 over this time period and the energy gain of the earth system (OHC+10% for surface/atmosphere) is only 0.43W/m2

    Even if the lower bound of forcings is taken, the energy balance is only just closed, the implication being that the IPCC analysis is incorrect, likely overstating the energy imbalance.

    I’m not sure I see any issue with the general approach, although there are also large uncertainties in feedbacks of course.

    What I was surprised by: how small the error bars are on OHC (0.39W/m2 +/- 0.031W/m2).

    Does this make sense?

  33. verytallguy says:

    ATTP,

    Moving on to your post, I can’t replicate your numbers at the end. (Also I’m not sure how to do greek letters or equations in comments, so here “d” means “delta”)

    You have dQ=dF-(1/lambda)dT

    then you say using Roger’s numbers this gives lambda = 0.82K/Wm-2

    When I put these numbers in I get

    0.43 = 1.72 – (1/lambda)0.6

    Giving lambda = 0.465K/wm-2

    And therefore a climate sensitivity of 1.72 K per doubling.

    Which I think is Roger’s implied but unstated point – if you use measured OHC to estimate feedbacks, you get a low answer for climate sensitivity.

    I’m sure, however, there are both mistakes and misunderstandings in this…
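A quick sketch of that back-calculation in Python (purely illustrative, using the numbers above and F_2x = 3.7 Wm-2 for a doubling of CO2):

```python
# Solve dQ = dF - (1/lambda) dT for lambda, then convert to an ECS.
dQ, dF, dT = 0.43, 1.72, 0.6
F_2x = 3.7  # forcing from a doubling of CO2, W m^-2

inv_lam = (dF - dQ) / dT  # 2.15 W m^-2 K^-1
lam = 1.0 / inv_lam       # ~0.465 K / (W m^-2)
ecs = lam * F_2x          # ~1.72 K per doubling
print(lam, ecs)
```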

  34. VTG,
    You’ve made me realise something. If you go to Levitus, he has a change in OHC from 1955 – 2010 of (24 ± 1.9) × 10²² J (which is why I think his uncertainty is small). This gives an average flux into the ocean of 0.39 Wm-2, but averaged over the entire surface of the planet, it’s 0.27 Wm-2. I think the latter is the more correct value to use for this calculation. You’d think that would strengthen Roger’s argument.

    However, there is an issue with what he’s done. In these energy budget type calculations, ΔT should be the change in temperature over some time interval (say 1955-2010), and ΔF should be the change in radiative forcing over the same time interval. ΔQ, however, shouldn’t be the average system heat uptake rate over the same time interval, it should be the rate today. In other words, we’ve had a certain change in radiative forcing over some time interval, accompanied by a change in temperature which produces a negative feedback. The difference tells us what the radiative imbalance is today, not averaged over the time interval. If you consider Levitus, the change in OHC over the last decade is around 10²³ J, which averages to around 0.6 Wm-2. So, the difference is actually smaller than Roger’s calculation suggests.
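The conversions from OHC change to average flux go roughly like this (the ocean and Earth surface areas are my assumed round numbers, not from the post):

```python
SECONDS_PER_YEAR = 3.156e7
OCEAN_AREA = 3.6e14  # m^2 (assumed)
EARTH_AREA = 5.1e14  # m^2 (assumed)

dOHC = 24e22  # J, Levitus 1955-2010 (55 years)
flux_ocean = dOHC / (55 * SECONDS_PER_YEAR * OCEAN_AREA)  # ~0.38-0.39 W m^-2
flux_earth = dOHC / (55 * SECONDS_PER_YEAR * EARTH_AREA)  # ~0.27 W m^-2

dOHC_decade = 1e23  # J, roughly the change over the last decade
flux_recent = dOHC_decade / (10 * SECONDS_PER_YEAR * EARTH_AREA)  # ~0.6 W m^-2
print(flux_ocean, flux_earth, flux_recent)
```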

    As for your second comment, I was simply using the values that Roger took from the lower figure (ΔW = 1.21 Wm-2K-1). If you invert this you get λ = 0.82 K/Wm-2, and hence can get the ECS. You’re right, though, that you can determine λ from the other quantities and it would give a lower ECS. However, if you use the average system heat uptake rate for the last decade (around 0.65 Wm-2 – Otto et al. for example) you get 2 degrees per doubling.
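The corresponding energy-budget style estimate with the recent-decade uptake rate is (a sketch; the form ECS = F_2x ΔT / (ΔF – ΔQ) is the standard energy-budget rearrangement):

```python
F_2x = 3.7  # W m^-2 per doubling of CO2
dT = 0.6    # K, warming since 1950
dF = 1.72   # W m^-2, forcing change since 1950
dQ = 0.65   # W m^-2, recent-decade heat uptake (Otto et al. style)

ecs = F_2x * dT / (dF - dQ)  # ~2 K per doubling
print(ecs)
```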

  35. VTG,
    Actually, to be fair, I probably shouldn’t have said Roger’s numbers. They’re the numbers from the figure and do give a higher ECS than you would get if you worked the calculation the other way around – as Otto et al. (2013) have essentially done. That gives an ECS of 2 degrees per doubling. However, incorporating Cowtan & Way and Shindell, for example, pushes it back up towards 2.5 or higher.

  36. Rachel M says:

    Δ = Δ
    λ = λ

  37. Rachel M says:

    lol

    You needed to write the & in full like:

    &
    
  38. I tried that, but it doesn’t want to work. I’ve given up 🙂

    I guess so that others know the joke, I was trying to write the alternative html which is

    & #916; = Δ
    & #955; = λ

  39. Rachel M says:

    You forgot the semicolon that’s why.

  40. Rachel M says:

    Go on, don’t give up, be a smart arse.

  41. If I put the semi-colon in, it autocorrects and takes it out. If I don’t, it adds it in and adds an extra amp.

  42. Okay, if I add a space, it works, but it’s no longer funny 🙂

  43. Rachel M says:

    & #8747; = ∫

  44. verytallguy says:

    OK, so I can now do Greek if not integral signs 🙂 I’m not even going to attempt sub/superscripts. Isn’t there an editor somewhere that generates this code automatically?

    On the correct way to do energy budgets.

    I’m not sure why you say ΔT, ΔF should be differences over a time interval, but ΔQ is the number today.

    The point of the calculation is to use OHC (i.e., measured ΔQ) to provide a check on the consistency of ΔF and ΔFfeed?

    Strictly, then, I think you’d need to integrate ΔF+ΔFfeed over time, using best estimates of forcings and real temperatures to generate the ΔFfeed term. It would then be accurate to use the mean ΔQ from the OHC estimates at the start and end of the time period.

    I think what Roger has done is effectively assume ΔF is zero in 1950 – actually a conservative assumption if you assume that forcings change linearly, I think.

    But I may have just tied myself in knots.

  45. verytallguy says:

    Rachel, you officially win Geek of the Week !

  46. Rachel M says:

    OMG, you can put LaTeX directly into the comment box!

    i\hbar\frac{\partial}{\partial t}\left|\Psi(t)\right>=H\left|\Psi(t)\right>

    I have no idea what this means, I just copied and pasted it from here:
    http://en.support.wordpress.com/latex/

  47. Rachel M says:

    Rachel, you officially win Geek of the Week !

    Thank you! A compliment. 🙂

  48. verytallguy says:

    I have no idea what this means

    Hah! Some geek! 😉

  49. VTG,

    I’m not sure why you say ΔT, ΔF should be differences over a time interval, but ΔQ is the number today.

    Consider this. Imagine we consider some time interval (100 years, 50 years, doesn’t matter) and we determine the change in radiative forcing over that time interval, ΔF, and the change in temperature over that time interval, ΔT.

    Consider two extreme scenarios, one in which ΔT is zero, and one in which ΔT is the equilibrium change for that change in radiative forcing. If ΔT=0, then there are no feedbacks (they depend on ΔT) and the radiative imbalance today must equal the change in radiative forcing, ΔF. Therefore, in this scenario, ΔQ = ΔF but is not the same as the average – over the time interval considered – of the rate at which the system has accrued energy. That will be lower.

    In the other scenario, ΔT is the equilibrium temperature change for the change in radiative forcing ΔF. In this case, ΔQ = 0 (i.e., there is no energy imbalance today) but the average of the rate at which the system has accrued energy is not zero.

    Therefore, ΔQ is the rate at which the system is accruing energy today, while ΔF and ΔT are the change in radiative forcing (external) and the change in temperature over the time interval considered. Ideally, it shouldn’t depend on the time interval as both will change together.

    Not sure if that’s clear, but it’s an attempt to explain it 🙂
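
    In code, the two extreme scenarios look like this; a minimal Python sketch, using the λ = 0.82 K/Wm-2 from the figure and an illustrative ΔF of 1 Wm-2:

    ```python
    # Radiative imbalance today: dQ = dF - dT / lam
    lam = 0.82   # K per W m^-2, climate sensitivity parameter from the figure
    dF = 1.0     # W m^-2, illustrative change in radiative forcing

    # Scenario 1: no warming yet, so no feedback; the full forcing remains as an imbalance.
    dT1 = 0.0
    dQ1 = dF - dT1 / lam   # = 1.0 W m^-2

    # Scenario 2: temperature has reached equilibrium (dT = lam * dF); imbalance is zero.
    dT2 = lam * dF
    dQ2 = dF - dT2 / lam   # = 0.0 W m^-2

    print(dQ1, dQ2)        # 1.0 0.0
    ```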

  50. Rachel,
    It’s Schrodinger’s equation which, according to this, can be applied to explain climate science denial 🙂

  51. VTG,
    Actually, maybe a simpler way to look at this is that ΔF is the change in external radiative forcing, and 1/λ ΔT is how much the temperature change has compensated/balanced the change in external forcing. The difference is therefore by how much the system is still out of balance today.

  52. Fred Moolten says:

    VTG and others – If you use the 1950 forcing data, the calculations give an imbalance of 0.99 Wm-2, which may be an overstatement. If the same calculations are used with the 1980 forcing data, the imbalance comes out to be 0.31 Wm-2, which appears to be too low. So one can’t claim the IPCC either overstated or understated the imbalance. The problem lies at least in part with bad forcing data. In particular, the interval between 1955 and 1980 was characterized by a flat temperature and, although the data are sparse, a flat OHC, yet there is a large forcing difference described between the two dates, which doesn’t make sense. The IPCC has less confidence in the 1950 forcing data, but it’s likely both are inaccurate.

    Finally, I don’t think the energy budget equations can be used to calculate Charney-type ECS, although some authors have done that (e.g. Otto et al.). In addition to problems with data inaccuracy, the use of non-equilibrium data to compute ECS involves an assumption that the feedback parameter λ is time invariant. However, there is much evidence that radiative restoring (the loss of energy to space per K warming) declines over time, so that the assumption of constancy leads to an underestimation of ECS. As an example, see Armour et al., J. Climate, July 2013 – Time-Varying Climate Sensitivity.

  53. BBD says:

    Fred Moolten

    The problem lies at least in part with bad forcing data.

    It does seem to boil down to this. Which makes a nice space for the merchants of doubt to pitch their stalls.

    Can we draw a straight line from RPSnr to Nic Lewis? I would argue that we can.

  54. Fred,
    Yes, I agree. The energy budget estimates are interesting and are – in my view – nice sanity checks (would be concerned if they were entirely inconsistent with other estimates). But, as you say, it assumes that various factors are time invariant and that may well not be true.

  55. Steven Mosher says:

    “I’m not quite sure how your SkS link somehow proves this narrative.”

    The point of the post was not to “Prove” the narrative. ( nice skeptical try though )

    The point of the post was to give you some sort of reference to why this is an issue for him and to demonstrate one of the key scenes in the drama.. lots led up to that, lots followed.

    My sense is that you do not want me to fill up your blog with ‘proofs’ ( climategate mails on the topic and such )

    @dana

    Everyone practices PR. dont pretend you dont. My suggestion has always been to practice better PR.

  56. Steven,

    The point of the post was to give you some sort of reference to why this is an issue for him and to demonstrate one of the key scenes in the drama.. lots led up to that, lots followed.

    My sense is that you do not want me to fill up your blog with ‘proofs’ ( climategate mails on the topic and such )

    Indeed, I’m not particularly interested in rehashing the whole climategate saga.

    My personal view is that if an individual (or individuals) feels hard done by or under-appreciated it is unfortunate – if justified – but doesn’t really have any significance with respect to how we should – or should not – assess global warming. I’ve also been in academia long enough to know that there are always people who feel as though their work is not given enough credit. Not all of them are justified in thinking this, although some are.

  57. Rachel M says:

    VTG,

    “Hah! Some geek! “

    Damn! I’ve been rumbled. Ok, as much as I would like to be I’m not really a geek at all. In fact, there’s nothing particularly clever about what I did earlier which was just to copy and paste the relevant code for the symbols you wanted from a chart like this one. Anyone can do it.

    AndThen,

    “It’s Schrodinger’s equation which….”

    Thank you! And thank you for not laughing at me as I did you earlier. 🙂

  58. “Everyone practices PR. dont pretend you dont. My suggestion has always been to practice better PR.”

    Speak for yourself, some of us are interested in the science.

    Now it seems to me that it is unreasonable to expect scientists not to change their views on some topic as the availability and reliability of data changes. As Keynes rightly said “When my information changes, I alter my conclusions. What do you do, sir?”. Wanting to look at all data sources, weighted according to uncertainty is a perfectly consistent approach, and requires that you should pay more attention to OHC as the data becomes available in greater quantity and is better understood. Expecting SkS to keep the same weightings, despite scientific progress, is clearly unreasonable.

  59. Eli Rabett says:

    “Everyone practices PR. dont pretend you dont. My suggestion has always been to practice better PR.”

    Eli knew Rick Smalley and Bob Curl. One did, one didn’t, to drop a couple of names.

  60. > Speak for yourself, some of us are interested in the science.

    First-person reports fare very little in “the science”, except when it comes from Chuck Norris.

    But assuming this is true, dear Dikran, you might be interested in letting John Cook know that the SkS’ post linked by AT got the basic form of the argument wrong. Here is the proper one:

    Gorgias is the author of a lost work: On Nature or the Non-Existent. Rather than being one of his rhetorical works, it presented a theory of being that at the same time refuted and parodied the Eleatic thesis. The original text was lost and today there remain just two paraphrases of it. The first is preserved by the philosopher Sextus Empiricus in Against the Professors and the other by the anonymous author of On Melissus, Xenophanes, and Gorgias. Each work, however, excludes material that is discussed in the other, which suggests that each version may represent intermediary sources (Consigny 4). It is clear, however, that the work developed a skeptical argument, which has been extracted from the sources and translated as below:

    – Nothing exists;
    – Even if something exists, nothing can be known about it; and
    – Even if something can be known about it, knowledge about it can’t be communicated to others.
    – Even if it can be communicated, it cannot be understood.

    The argument has largely been seen as an ironic refutation of Parmenides’ thesis on Being. Gorgias set out to prove that it is as easy to demonstrate that being is one, unchanging and timeless as it is to prove that being has no existence at all. Regardless of how it “has largely been seen” it seems clear that Gorgias was focused instead on the notion that true objectivity is impossible since the human mind can never be separated from its possessor.

    http://en.wikipedia.org/wiki/Gorgias

    This might be less sexy than referring to Schrödinger’s thought experiment leading to a paradox, two elements that are usually missing from the contrarian’s master argument.

  61. AnOilMan says:

    I was very worried when Willard mentioned Chuck Norris (he’s a die hard republican);
    http://dprogram.net/2009/11/12/video-chuck-norris-copenhagen-talks-to-forge-one-world-order/

    Yup, Chuck thinks it’s a conspiracy to control the world. I guess you all are just faking it for me. I bet all those journals are fake too. And to think I wanted to go solar…

  62. > Chuck thinks it’s a conspiracy to control the world.

    I was referring to the Chuck Norris who controls the Internet:

    http://www.chucknorrisfacts.com/

    If a conspiracy controls the world, Chuck Norris controls the conspiracy with his stare. And with a roundhouse kick, he could spin the world the other way around.

  63. Pingback: We’ve all forgotten about the oceans! | And Then There's Physics

  64. Pingback: Back to basics | …and Then There's Physics
