A two-box model

In an attempt to learn something about climate modelling, I decided to write a little two-box climate model. I found quite a lot of useful information on Tamino’s blog, Arthur Smith’s blog, Isaac Held’s blog, Lucia’s blackboard, and Kevin Cowtan’s website, which is where I got some of the data. I recommend having a look at Kevin Cowtan’s website if you want to play around with basic climate models. I should acknowledge that I haven’t really spoken to anyone much about this, so I may have made some kind of silly blunder. This also isn’t really intended to illustrate anything deep or meaningful; it was fun and instructive, so I thought I’d write a blog post. I did it slightly differently to how others have done it, but I think it’s essentially consistent with other methods.

So, the way I implemented this was to start with the RCP11 forcing dataset and then sum all the different forcings to get the net forcing shown in the figure below.
[Figure: net forcing from the summed RCP11 forcing components]
Some of the other methods work directly with this forcing. I decided to try something that was a little more intuitive (to me at least). What really determines the warming trend is the energy imbalance. The energy imbalance can be determined, at any time, if one knows the change in forcing (above), the increase in outgoing flux due to the change in temperature, and the change in forcing due to feedbacks. This is expressed below
$$\Delta F = \Delta F_{\mathrm{forcing}} - \epsilon \sigma \left[ \left( T_{\mathrm{orig}} + T_s \right)^4 - T_{\mathrm{orig}}^4 \right] + F_{\mathrm{feed}},$$
where T_orig = 288 K, T_s is the change in surface temperature, σ is the Stefan–Boltzmann constant, ε = 0.625 is the surface emissivity, and I assume that F_feed = 1.8 T_s W m⁻².

The model has two boxes, one that represents the surface, and the other an ocean layer. The equations that then describe their evolution are
$$C_s \frac{dT_s}{dt} = \alpha_s \Delta F + \beta \left( T_o - T_s \right),$$
$$C_o \frac{dT_o}{dt} = \alpha_o \Delta F - \beta \left( T_o - T_s \right),$$
where C_s and C_o are the heat capacities of the surface (atmosphere) and of the ocean layer. The heat capacity of the atmosphere I determined by assuming a mass of 5 × 10¹⁸ kg and a specific heat capacity of 1000 J kg⁻¹ K⁻¹. The ocean layer is assumed to have a heat capacity 100 times greater than that of the atmosphere. Both are then normalised by dividing by the surface area of the Earth and by the number of seconds in a year. The coefficients in front of the first terms on the right-hand side of each equation are the fractions of the energy imbalance associated with surface warming and with ocean warming; I use α_s = 0.025 and α_o = 0.93. The last term in each equation is simply a coupling between the ocean layer and the atmosphere, for which I use β = 0.1. I also apply an ENSO correction, essentially T_s → T_s + 0.075 × ENSO, where ENSO is a 12-month average of the ENSO index lagged by 6 months (i.e., averaged between July and June for each year).

So, basically, I start in 1880 with the change in forcing known and with both temperatures set to zero. I then step forward in time, at each step determining a new energy imbalance, updating the two temperatures, and then correcting the surface temperature using the ENSO index (a minimal code sketch follows the figure). The result I get is below, where the solid line is from my model and the dashed line is the observed temperature anomaly. The linear regression coefficient is 0.945.
[Figure: modelled (solid) versus observed (dashed) temperature anomaly]
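
Below is a minimal sketch of this stepping scheme in R (not my exact script). The forcing and enso inputs are assumed to be annual series starting in 1880, with the ENSO averaging and lag already applied, and exactly how β is normalised here is an assumption on my part:

    # Two-box model: annual Euler stepping of the equations above.
    two_box <- function(forcing, enso,
                        a_s = 0.025, a_o = 0.93, beta = 0.1,
                        eps = 0.625, T_orig = 288, f_feed = 1.8) {
      sigma <- 5.67e-8                      # Stefan-Boltzmann constant [W m^-2 K^-4]
      secyr <- 3.156e7                      # seconds in a year
      area  <- 5.1e14                       # surface area of the Earth [m^2]
      C_s   <- 5e18 * 1000 / area / secyr   # atmosphere heat capacity, normalised
      C_o   <- 100 * C_s                    # ocean layer: 100 x the atmosphere
      n   <- length(forcing)
      T_s <- numeric(n)
      T_o <- numeric(n)
      for (i in 2:n) {
        # energy imbalance: forcing, minus the extra outgoing flux, plus feedbacks
        dF <- forcing[i] -
          eps * sigma * ((T_orig + T_s[i - 1])^4 - T_orig^4) +
          f_feed * T_s[i - 1]
        T_s[i] <- T_s[i - 1] + (a_s * dF + beta * (T_o[i - 1] - T_s[i - 1])) / C_s
        T_o[i] <- T_o[i - 1] + (a_o * dF - beta * (T_o[i - 1] - T_s[i - 1])) / C_o
      }
      T_s + 0.075 * enso                    # ENSO correction to the surface box
    }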

I can also plot the evolution of the energy imbalance, which I show below. It suggests an energy imbalance today of between 0.5 and 1 W m⁻², quite similar to what is expected based on ocean heat content data. I also determined a TCR and an ECS for my model, which – for the assumptions I used – are 1.92 °C and 2.6 °C respectively.
[Figure: evolution of the energy imbalance]
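
As a quick sanity check on those numbers: linearising the imbalance expression above gives a rough restoring rate and, hence, a back-of-the-envelope ECS (taking $F_{2\times} \approx 3.7$ W m⁻²),

$$\lambda \approx 4 \epsilon \sigma T_{\mathrm{orig}}^3 - 1.8 \approx 3.4 - 1.8 \approx 1.6\ \mathrm{W\,m^{-2}\,K^{-1}}, \qquad \mathrm{ECS} \approx \frac{F_{2\times}}{\lambda} \approx \frac{3.7}{1.6} \approx 2.3\,^{\circ}\mathrm{C},$$

which is at least in the same ballpark as the 2.6 °C that the full (non-linearised) model gives.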
Anyway, I’ve written this post quite quickly, so I hope I’ve explained things clearly. I’ve also tried to explain my assumptions quite thoroughly so as to be corrected by those who know better. I’m also not really implying anything by this post other than it was fun to develop a simple model like this and it seems quite remarkable that making assumptions that seem consistent with our current understanding, can produce results that appear quite consistent with observations. That’s it really.


50 Responses to A two-box model

  1. BBD says:

    I believe GISS ModelE (which cost a bob or two) has an emergent ECS of ~2.7C, so you seem to have done rather well for a first go. I’m impressed. I believe NASA is looking to cut back here and there, so perhaps give the lab a ring?

    I’m sticking this in early before all the serious physicists appear and it all gets even more complicated than it already is.

    😉

  2. I’ve actually downloaded the GISS forcings, but haven’t redone the calculation with them to see how it differs.

    I’m sticking this in early before all the serious physicists appear and it all gets even more complicated than it already is.

    I’m thinking of heading home, closing my laptop, opening a beer, and avoiding all the physicists who’ll appear and say “no, you idiot, your assumptions are all completely wrong and you only got a reasonable result by pure chance”. 🙂

  3. So, if I optimise the model so that the GISS forcings give a good fit to the observations (which doesn’t change very much to be honest) I get essentially the same TCR and ECS values.

  4. I should probably acknowledge that even though the model I’ve developed here is quite simple, it does rely on the forcings already being calculated, which is – I assume – a much more complex calculation and one that I have, obviously, not done myself.

  5. BBD says:

    You just look ’em in the eye and say “all models are wrong”…

    * * *

    I bet Karsten pops up to tell you that the GISS aerosol forcings aren’t exactly the last word 😉

  6. ATTP,
    As the model is so simple and has very limited goals, it cannot really be criticized, only discussed.

    With that in mind, I find one choice that you have made a bit surprising. In your model the forcing directly and strongly affects the ocean layer (α_o = 0.93), while it would seem more natural to have the forcing warm the surface and the surface warm the ocean. That would be something like the simple model of Isaac Held.

  7. Pekka,
    Maybe, although my 0.93 only applies to the energy imbalance at that time, not to the total change in forcing. So, I based this on the idea that more than 90% (93% I used) of the energy imbalance goes into the oceans while only a few percent is associated with warming the surface.

    I did think of writing something similar to what Isaac Held presents there, but I couldn’t work out some of the coefficients and so decided to do it this way. I’m just trying to work out now if what I’ve done is at all consistent with that. Maybe not.

  8. A similar ratio of stored heat can, however, be obtained with a high enough value of β in combination with a ratio of heat capacities similar to what you have.

    Whether the final results would be very similar or not, that I cannot tell without some computation.

  9. Pekka,
    Yes, you’re probably right. I did start writing a model similar to that of Isaac Held’s, so I’ll have a look and see what the differences are.

  10. WebHubTelescope says:

    Those equations are not the correct equations to use for heat continuity. Use the heat equations instead. If you do that, you will find that the surface responds diffusively: a fast transient followed by a fat tail.

    The mistake in doing the first-order damped response is that you cannot show both the rapid response of SST and the slow response of the deeper OHC.

    Read Held carefully and study the older Hansen papers and you can see how this approximation came about. Obviously it is easier to solve a first-order differential equation than the full second-order heat equation.

    To make it even easier, just assume that the fast transient happens with a quick lag and then treat all the forcings as a set, operating on the system or box of interest. The fat tail of OHC then becomes the long-term correction.
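
    As a rough illustration (a sketch of my own, in arbitrary units): for a semi-infinite diffusive ocean the impulse response decays like 1/sqrt(t), so the step response rises quickly at first and then keeps creeping upwards, unlike a single first-order box:

    t <- seq(0.5, 200, by = 0.5)        # years after a unit step in forcing
    impulse <- 1 / sqrt(pi * t)         # diffusive impulse response ~ t^(-1/2)
    step    <- cumsum(impulse) * 0.5    # integrate: step response ~ 2*sqrt(t/pi)
    exp_box <- 1 - exp(-t / 30)         # first-order box with a 30-year time constant
    matplot(t, cbind(step / max(step), exp_box), type = "l", lty = 1:2,
            xlab = "years", ylab = "normalised response")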

    This is the approach I used recently on my blog http://ContextEarth.com

    I agree that it is very easy to get correlation coefficients over 0.99 if the forcing of the ENSO SOI is taken into account.

    I refer to this model as CSALT because it uses CO2, SOI, Aerosols-volcanic, LOD, and TSI as the empirical forcing factors for fitting the temperature rise since 1880.

    You are on the right track overall. Skeptics and deniers do not like it because the correlation is too high, yet it includes factors, such as TSI, that they herald.
    They can’t stomach that CO2 is the principal factor in warming.

    Keep the pressure on. Keep tweeting and giving the deniers physics pain.

  11. I agree that replacing the second box by a diffusive infinitely deep ocean is probably the simplest next step in making the model more realistic.

  12. kdk33 says:

    You don’t provide enough to know for sure, but I suspect that what you’ve done is taken forcings derived from temperature then simply closed the loop. If not, I’ll gladly acknowledge your Nobel prize.

  13. WebHubTelescope says:

    The most challenging part of this fit is getting the hump centered around 1940 correct. The reason you seem to fit it adequately is that your forcing has the same inflection in its profile, accentuated by the severe volcanic eruption of Agung in the early 1960s.

    But what if the forcing hump was the natural variation that seems to coincide with the measurements of LOD, as Dickey of JPL is suggesting? Include that unattributable forcing and the model fit markedly improves. Something is driving the inflection between 1940 and 1970, and it is not Agung alone.

    The WWII calibration and SST bucket issue is another nuisance factor in all this.

    Again, I do appreciate the approach because it is applying simple physics and is orthogonal to the way that the climate scientists work the problem.

  14. Hum, this is almost within reach of my dire maths skills! I would like to know more about where the forcing data comes from, though – if ever I have a model that works so well first time, I find it always pays to be highly suspicious! I wonder if kdk33 might be right that temps are involved in deducing the forcings? I couldn’t find any explanation or links via the page the data comes from.

  15. WHUT,
    Thanks, I’ll have a look at your approach. I agree that the proper heat equations would be better, but I only had a short amount of time and this was nice and easy. I’ll have a good look through your site, though. I also want to try Isaac Held’s approach that Pekka mentions, to see how that compares to what I’ve done here.

    kdk33,
    I’m certainly not arguing for any awards for this. I don’t know how the forcings are determined specifically. Obviously from models, but some are at least based on known/estimated emissions. The forcings I used are Solar, GHGs, Aerosols (ind), Ozone (trop), LandUse, Volcanoes, Aerosols (dir), Ozone (strato), H2O (strato), Black Carbon (snow). Unless I’m mistaken, I don’t think that these are derived from the temperature.

  16. Dan,
    It’s a good question and, certainly, that is a valid issue. As I said above, I don’t think that they are derived from the temperatures, but it’s possible that they could be tuned so as to improve the fit.

  17. I have to pop out and I plan to try and have a relaxing evening 🙂 , but this may be relevant.

  18. WebHubTelescope says:

    Kdk33,
    Something to be cognizant of, for certain. That is why one uses the ENSO SOI as an index, and not the SST readings, to close the loop.

    The SOI is based on atmospheric pressure reading differences and so is bounded and guaranteed to revert to a mean value of zero. This means that it cannot contribute to the long-term trend, and so it becomes very useful as a compensating forcing factor.
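
    A toy illustration with synthetic data makes the point (all the numbers here are made up purely for illustration):

    set.seed(1)
    yrs  <- 1880:2013
    soi  <- as.numeric(scale(arima.sim(list(ar = 0.5), length(yrs))))  # mean-zero stand-in for the SOI
    temp <- 0.008 * (yrs - 1880) - 0.075 * soi + rnorm(length(yrs), sd = 0.05)
    coef(lm(temp ~ I(yrs - 1880) + soi))  # the SOI term picks up the wiggles, not the trend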

  19. “I have to pop out and I plan to try and have a relaxing evening.” No checking comments on your phone!

  20. Carrick says:

    Nice write-up. And this is definitely physics.

    One of the points that Nick Stokes made is that you can write the solution in terms of an OLS fit of the temperature data to a series of exponentially filtered forcing data with different decay constants.

    Here are the relevant lines from his R code, to hopefully make it clear what I mean:

    w1=expsmooth(vv,30.)
    w2=expsmooth(vv,0.09)
    # fit regression
    h<-lm(ss ~ w1 + w2)

    Here

    • “vv” is the sum of forcings
    • w1 and w2 are the slow and fast responses from the two box model
    • “ss” is the annual temperature data

    The code for his exponential smoother is:

    expsmooth <- function(v, d) { # u is the smoothed v with decay exp(-1/d)
      n = 1:length(v)                        # index over the input series
      e = (n - 1) / d
      e = exp(-rev(e))                       # exponential weights, most recent heaviest
      e = e / sum(e)                         # normalise weights to unit sum
      u = convolve(v, e, type = "open")[n]   # causal convolution, trimmed to length(v)
      u
    }
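
    (Since a linear two-box model responds as the sum of two exponential modes, the coefficients fitted to w1 and w2 are the amplitudes of the slow and fast modes, and the two decay constants play the role of the model’s eigen-time-constants.)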

  21. WebHubTelescope says:

    DanO, I don’t use that forcing profile, for the same reasons. You really don’t know the massaging it has been through. The furthest I go is to use the Sato GISS aerosol optical thickness model. I lay that on top of the CO2 log sensitivity and that gives my first-order atmospheric GHG and aerosol forcing.

    Unfortunately this does not give much of a dip between 1940 and 1970, so that has to come from somewhere else.

    Go to http://data.giss.nasa.gov/modelforce to see the other factors that go into the forcing profile.

  22. Carrick says:

    WebHubTelescope:

    The SOI is based on atmospheric pressure reading differences and so is bounded and guaranteed to revert to a mean value of zero. This means that it cannot contribute to the long-term trend, and so it becomes very useful as a compensating forcing factor.

    Well, approximately zero anyway. There are instrumental drift issues, plus the usual effects associated with e.g. isostatic rebound and changing sea levels. Since a one-metre change in elevation corresponds to roughly 12 Pa (0.12 mbar), small but measurable drifts in SOI are expected.

    Still, this is the best explanation I’ve seen for why you use the SOI. Until now, that wasn’t clear, but it seems like a good argument.

    I wasn’t being sarcastic on another thread when I suggested you should try writing this up. If you get close, perhaps you can set up a tip jar for the publication costs.

  23. WebHubTelescope says:

    Carrick has finally seen the light.

    The fact of the matter is that you couldn’t have this discussion at Lucia’s without someone coming down on you with a ton of bricks.

    They would all be saying it is fake physics, and then trying to demonstrate how an ARIMA statistical model could do the same thing.

    Or is that being unfair?

  24. Carrick says:

    WebHubTelescope, I think the opinion on Lucia’s blog is that ARIMA is out too.

    But people are generally going to react negatively to parametrized models, especially people with R&D experience. When I say things that people don’t agree with (typically when I discuss alternative energy), I get pretty skeptical responses too…so don’t take it personally. It’s not you, it’s just natural proclivities.

  25. BBD says:

    WT

    Have you come across Wilcox et al. (2013), The influence of anthropogenic aerosol on multi-decadal variations of historical global climate?

    Discussion by one of the authors here.

    Apologies if this is old hat.

  26. BBD says:

    Sorry, WT > WHT

  27. WebHubTelescope says:

    BBD, thanks.
    Another article, by Yoshiaki Fujii, claims that nuclear testing was the cause of the stagnation during this time interval.

    The Wilcox paper is interesting in that they decompose the profile into intrinsic mode functions (IMFs), something that I hadn’t run into before.

  28. troyca says:

    Anders,

    I agree that these 2-box models can be quite interesting and a good tool. I recommend Geoffroy et al. (2013) for an interesting 2-layer model that is basically a specific form of your 2-box, with α_s = 1.0 and α_o = 0 in your notation. I’ll mention again that the typical form of the net flux in these equations, which you call ΔF, is F − λ·T_anom: at temperatures this high relative to the change you can essentially drop the exponent, and λ (the “radiative restoration strength” or “thermal damping rate”, depending on which paper you read) already includes the feedbacks when fitted, rather than adding a separate F_feed term that suggests the feedbacks are independent of the temperature changes.

    Anyhow, having α_s = 1.0 and α_o = 0 allows for fewer parameters to be fitted, and might be a more natural breakdown into a surface layer (land + SST, whose heat capacity is dominated by the ocean’s mixed layer rather than the lower land/atmospheric heat capacity) and deeper ocean layers into which heat is diffused (this is the breakdown Pekka mentioned).
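
    In equation form, that standard two-layer setup (as in Geoffroy et al., 2013) is

    $$C_s \frac{dT_s}{dt} = F - \lambda T_s - \gamma \left( T_s - T_o \right),$$
    $$C_o \frac{dT_o}{dt} = \gamma \left( T_s - T_o \right),$$

    with all of the forcing applied to the surface layer and the deep ocean warming only through the exchange term.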

    Quick question: I’m assuming the global forcing dataset that you used isn’t the one from M. Meinshausen, S. Smith et al. (2011), available here: http://www.pik-potsdam.de/~mmalte/rcps/? It looks like the graph of your forcing dataset shows a net forcing of ~1.5 W/m^2 in 2000, which is similar to the GISS forcing dataset, but is only about 70% of the net forcing for the IPCC “best estimate” or the value provided in those POTSDAM forcings.

  29. troyca,

    with α_s = 1.0 and α_o = 0.

    Yes, I want to try that. I’ve just been too busy this weekend doing DIY to play around with this at the moment, though.

    I’ll mention again that the typical form of the net flux in these equations, which you call ΔF, is F − λ·T_anom: at temperatures this high relative to the change you can essentially drop the exponent, and λ (the “radiative restoration strength” or “thermal damping rate”, depending on which paper you read) already includes the feedbacks when fitted, rather than adding a separate F_feed term that suggests the feedbacks are independent of the temperature changes.

    Yes, I realise that that is the normal way and I did do a 1-box model using this formalism. I don’t think my feedback term is independent of the temperature change, though, as I set it to be a constant times dT. But, you’re right, the way I’ve done it here isn’t the normal way and I should probably adapt it to be more standard.

    The forcing dataset is similar to the GISS forcings and just happened to be the first one I tried. I could well find a better, more appropriate one. This was really just meant to be illustrative and – as I said in the post – was more for my benefit than anything else. I certainly wouldn’t make any strong claims about a model I wrote in a few hours 🙂

  30. Eli Rabett says:

    Early models of the oceans had pretty much thin oceans (swamp oceans) without heat capacity; the next step was a slab ocean with a heat capacity roughly equivalent to the heat the ocean exchanges between seasons, so what you have done is not such a reach. What is missing in this model is evaporation although that may be hidden in the radiative forcing through absorption by water vapor. Here is a discussion by Broccoli of the various models, from 2003. We have moved on since then, esp. wrt eliminating the dread flux adjustments.

    What is useful here is that it kills off statistical mathterbation (viz. J. Scott Armstrong) about whether the increase in global temperature anomalies is statistically significant, and does severe damage to “the pause”.

  31. Eli,
    Thanks.

    What is missing in this model is evaporation although that may be hidden in the radiative forcing through absorption by water vapor.

    Yes, I’m sure there’s lots missing from this model.

    What is useful here is that it kills off statistical mathterbation

    That was largely what motivated this: attempts by some to use some kind of basic linear fit to all the forcings to show that the best fit was one where the anthropogenic forcings provided only some fraction of the heating, with the rest coming from the AMO, or something like that. Those types of models just seem to have no actual physics and are, essentially, curve-fitting exercises.

  32. troyca,
    To me, it looks like a net forcing of 1.75 W/m2 in 2000. That’s in perfect agreement with AR5 considering that the baseline is 1880 here. It only differs slightly post 2000. So should be alright then, shouldn’t it?

    BBD,
    Yes, the GISS forcings shouldn’t be used, as pointed out several times. Meinshausen (2011) or AR5 are to be used.

    ATTP,
    I totally agree: unless you wanna deliberately fool yourself, stay away from the AMO (and even the PDO, which might equally well be a forced response).

  33. BBD says:

    See? I told you.

    😉

  34. troyca says:

    Karsten,

    To me, it looks like a net forcing of 1.75 W/m2 in 2000. That’s in perfect agreement with AR5 considering that the baseline is 1880 here. It only differs slightly post 2000. So should be alright then, shouldn’t it?

    Sure thing. I asked the question because at first glance it looked like the GISS forcings, and Anders mentioned he got “essentially the same TCR and ECS values” when using the GISS forcings, which I would not expect if the forcing sets were substantially different. I see that we both suggested M. Meinshausen, S. Smith et al. (2011) as a better choice for forcings — if the OP used that set or one similarly consistent with AR5, so much the better!

  35. Clive Best says:

    ATTP,
    A worthy effort – good stuff.
    “I assume that F_feed = 1.8 T_s W m⁻².”
    What you really show is that with RCP11 forcings and an assumed feedback you can reproduce otherwise opaque GCM model results.

  36. Troy,
    thanks for clarifying. Agreed. AR5 should be the obvious choice, unless there’s a credible update sometime in the future.

  37. I was experimenting with a variation of a two-box model a few months ago where the heat transfer was a latent one between ocean and land. This is a pure algebraic problem:
    http://contextearth.com/2014/01/25/what-missing-heat/

    We want to understand the partitioning of the heat, and I worked this out after someone at RC said that the tropical Pacific was a significant driver in transferring latent heat (in the form of moisture) from ocean to land areas.

  38. Clive,

    What you really show is that with RCP11 forcings and an assumed feedback you can reproduce otherwise opaque GCM model results.

    I guess that’s about right.

  39. kdk33 says:

    ATTP,

    I’m curious about a couple of things. Were all of your parameters fixed a priori, or were some fitted? It would be interesting to see a third line on your graph showing the model results minus the ENSO adjustment – that way one can see how much comes from the underlying forcing and how much from the adjustment.

  40. kdk33,
    The α_s and α_o parameters I chose on the basis of the fraction of the energy excess going into the atmosphere/land (about 2.5%) and into the oceans (about 93%). The specific heat capacity for the atmosphere was calculated as I describe, and the ocean heat capacity was assumed to be 100 times greater. The two parameters I fitted were β and the dependence of the feedback forcing on T_s.

    So, some were fitted and others were fixed a priori. As I said in the post, this wasn’t really intended as anything other than a post about a simple model that I developed in about an afternoon.

  41. Eli Rabett says:

    The point about GCMs or ESMs is the patterns across broad global regions. The global maps were what was really new and interesting in the original Hansen, Sato et al. (1988) model.

  42. Eli,
    Indeed.

    Since you’re commenting, I wasn’t quite sure that I understood how you’d perceived me falling into Pielke’s trap when commenting on Stoat’s blog. I may well have, but I didn’t quite get your issue with what I had said.

  43. Eli Rabett says:

    You conflated Pielkeism, the cost of climate change, with the amount (# of events) of disasters. An honest broker reading what you wrote would miss that. To do so lets him play three card Lubos.

    Basically anything along those lines should IEHO have a pointed statement such as “AndThen is not confusing yada yada yada.”

  44. Eli,

    You conflated Pielkeism, the cost of climate change, with the amount (# of events) of disasters. An honest broker reading what you wrote would miss that. To do so lets him play three card Lubos.

    Thanks, I shall have to read my comment again. I certainly didn’t mean to do that as that has been one of my criticisms of Junior. Of course, that doesn’t mean that I didn’t manage to make it sound like that. I agree with you that one should include a suitable caveat.

  45. Pingback: Energy budget estimates explicitly using feedbacks | And Then There's Physics

  46. Pingback: Another Week of Global Warming News, March 23, 2014 [A Few Things Ill Considered] | Gaia Gazette

  47. mrdodge says:

    This is the most enjoyable climate post I have read in quite a while. I am having fun reproducing it; it’s gelling quite nicely with Pierrehumbert’s book – thanks a lot!

  48. mrdodge,
    I’m afraid I’m rather losing the ability to tell if people are being sarcastic or not, but – if not – then that’s good to know. Thanks.

  49. Pingback: We’ve all forgotten about the oceans! | And Then There's Physics

  50. Pingback: I’m confused about Kummer & Dessler | And Then There's Physics
