## Climate model tuning

I previously wrote a post about model tuning, discussing a paper that argued for more transparency in how climate models are tuned. Gavin Schmidt and colleagues have now published a paper discussing the practice and philosophy of climate model tuning across six US modeling centers. The paper is a bit long, but it’s well written and easy to read, so I would encourage you to do so (if interested) and I’ll try not to say too much.

Probably a key point is why you need to tune these models in the first place. They’re certainly based on basic physics, but they’re sufficiently complex that you can’t model everything from anything close to first principles. This means that some processes are parametrised and, in some cases, the parameters are not well constrained. You then need to tune these parameters so that the model matches some pre-defined emergent constraints.
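To make the idea concrete, here’s a deliberately toy sketch (a zero-dimensional energy-balance model with standard textbook values, nothing remotely like a real GCM): a poorly-constrained parameter, the planetary albedo, is tuned until the model matches a pre-defined target, here a zero top-of-atmosphere imbalance.

```python
# Toy sketch: tune one poorly-constrained parameter (albedo) so a 0-D
# energy-balance model hits a pre-defined target (zero TOA imbalance).
SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)

def toa_imbalance(albedo, S0=1361.0, T=288.0, eps=0.61):
    """Net top-of-atmosphere flux (W/m^2): absorbed solar minus emitted IR."""
    absorbed = S0 / 4.0 * (1.0 - albedo)
    emitted = eps * SIGMA * T**4
    return absorbed - emitted

def tune_albedo(target=0.0, lo=0.0, hi=1.0, tol=1e-9):
    """Bisect on albedo until the modelled imbalance matches the target."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if toa_imbalance(mid) > target:   # imbalance falls as albedo rises
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

albedo = tune_albedo()
print(round(albedo, 3))   # lands near 0.30, close to Earth's observed albedo
```

The point of the sketch is only that the tuned value is fixed by the target, not by the physics of the parameter itself, which is exactly why the choice of targets matters.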

A common claim, however, is that the models are then tuned either to match the 20th century warming or to produce specific climate sensitivities. These, however, are not amongst the emergent constraints used for model tuning. As the paper says:

None of the models described here use the temperature trend over the historical period directly as a tuning target, nor are any of the models tuned to set climate sensitivity to some preexisting assumption.

Most of them do, however, tune for a radiative imbalance, either during pre-industrial times (PI) or the present day (PD), or tune for the aerosol forcing or the aerosol indirect effect. A summary of the tuning criteria in the six US models is shown in the table below.

Even though nothing like climate sensitivity is tuned for explicitly, there are some indications that there might be some implicit tuning. For example:

However, analysis of the CMIP3 ensemble (Kiehl, 2007; Knutti, 2008) suggested that there may have been some kind of implicit tuning related to aerosol forcing and climate sensitivity among a subset of models, with models with higher sensitivity having a tendency to have higher (more negative) aerosol forcing

The correlation is, however, rather low and this is less evident for CMIP5.

Having started this, I’ve also just noticed that James has a post in which he suggests that even though groups certainly don’t re-run their models, tuning parameters until they get a good fit to the 20th century, some have made adjustments/updates if they know that the fit is poor.

I guess the basic message is that this is complicated and, although there certainly isn’t any explicit tuning to the 20th century trend or to some specific climate sensitivity, subjective choices and expert judgement can have an impact on these emergent constraints. Having said that, what they explicitly tune to – in many cases, a radiative imbalance – seems quite reasonable to me, since this is a key factor indicating the net amount of energy being accrued by the system.

The paper ends with what seems like quite a sensible suggestion:

we recommend that all future model description papers …. include a list of tuned-for targets and monitored diagnostics and describe clearly [] their use of historical trends and imbalances in the development process.

As I said at the beginning, if you want to know any more, it’s probably best to read the paper (another link below).

Tuning to the global mean temperature record, by Isaac Held.
Practice and philosophy of climate model tuning across six US modeling centers, by James Annan.
Practice and philosophy of climate model tuning across six US modeling centers, by Schmidt et al.


### 62 Responses to Climate model tuning

1. russellseitz says:

“we recommend that all future model description papers …. include a list of tuned-for targets and monitored diagnostics and describe clearly [] their use of historical trends and imbalances in the development process.”

Amen to that. Absent codes and parametrization details, it can be hard to assess claims of “robust results” that turn, upon direct inquiry, into co-authors candidly admitting to transport parametrizations that consist of telling systems programmers where to put what in “sophisticated one dimensional models.”

2. angech says:

I agree with the use of models.
I think that explaining what goes in is very important.
I know that weather models can never get it all right, and I am quite happy with that, as I would expect everyone here agrees.
There is a hubris that comes in when using them politically, scientifically or morally, and that is an issue.
When a need for an outcome arises, or is expected, it can be hard to say it is only a model: a good guess, but not the best guess.
My request would be for more people to understand this.
Hence if we got a model that perfectly reflected the weather changes would that be good?
A bit like a stock picking program showing that it got every prediction right in the past.
Unbelievable.
Hence my plea, which falls on deaf ears.
If the models always go to one side only of a range of predicted outcomes, how can they be trusted??
After all if they always go to one side of a range of actual outcomes?

3. -1=e^iπ says:

I read part of the paper the other day and came up with this back of the envelope calculation:

Let’s say, for the sake of argument, that if no tuning to get high climate sensitivity occurs, then climate sensitivity and aerosol forcing are completely uncorrelated, with both normally distributed with standard deviations sigma1 and sigma2 respectively. For convenience, rescale aerosol forcing so that sigma1 = sigma2 = sigma.

Now let’s say, for the sake of argument, that the climate scientists, being evil and all, decide to increase both climate sensitivity and aerosol forcing in their models by (k*sigma, k*sigma), in order to help ensure the continuation of the Chinese hoax as they sip on unicorn tears.

Then the covariance of climate sensitivity and aerosol forcing is k^2*sigma^2.
The variance of each variable is sigma^2 + k^2*sigma^2.
Thus the correlation coefficient is k^2/(1 + k^2) = 1/(1 + k^-2).

The paper says the correlation coefficient is 0.19.
This gives a k of 0.48.

For climate models, the 95% confidence interval is [2.0,4.5]C.
This corresponds to sigma = 0.6 C.

So k*sigma = 0.3 C.

So climate models are overstating ECS by ~0.3C by tuning.
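For what it’s worth, the arithmetic above is easy to check mechanically. A minimal sketch reproducing the comment’s steps (it only reproduces the calculation; whether the shared-shift model and the normality assumption are reasonable is a separate question, and the quoted interval actually implies sigma ≈ 0.64 rather than 0.6):

```python
import math

# Reproduce the back-of-envelope calculation: a shared shift of k*sigma
# in two otherwise-independent variables induces correlation k^2/(1+k^2).
corr = 0.19                       # CMIP5 correlation quoted in the paper
k = math.sqrt(corr / (1 - corr))  # invert r = k^2 / (1 + k^2)

lo, hi = 2.0, 4.5                 # quoted 95% interval for ECS (deg C)
sigma = (hi - lo) / 2 / 1.96      # implied standard deviation, if normal

bias = k * sigma                  # the claimed tuning-induced shift
print(round(k, 2), round(sigma, 2), round(bias, 2))
```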

4. Steven Mosher says:

1. Results Prior to tuning.
2. Results after tuning.

Ideally at some point folks then start to give up the democracy of models.

5. -1,
I’ve no idea where your numbers are coming from.

6. dikranmarsupial says:

It would be interesting to see how high you could push ECS via model tuning, whilst still matching 20th century observations and keeping parameters within physical bounds (where they exist). However, I think the computational expense involved wouldn’t be justified by the (not particularly scientific) interest. I’ll have to look up the emulators mentioned in the discussion of a recent post to see if that has been estimated that way.

The key challenge for climate skeptics would be to see how they could tune a GCM to give low ECS and still explain previous climate (and keep the parameters within physical bounds where they do exist). I suspect they are not that tunable.

7. dikranmarsupial says:

SM “1. Results Prior to tuning.”

That may not be very informative, as it depends on how the default values for the parameters are decided. Naive choices will make it look as if the models are highly tunable, whereas expert choices will make the effects of tuning look minimal. How do we judge the information content of the default values, if we can’t use the difference between pre- and post-tuning performance without making a circular argument? Not saying this shouldn’t be done, just that it may be potentially misleading. The same thing happens in machine learning; the important thing is to specify how the hyper-parameters are tuned (optimisation algorithm, criterion, perhaps starting point and optimisation parameters for reproducibility) and the final system performance.
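As a concrete (and entirely toy) illustration of what "specify how the hyper-parameters are tuned" might mean, here is a sketch in which the search space, algorithm, criterion, seed and data split are all stated explicitly; the linear toy problem and all values are invented for the example:

```python
import random

# Toy illustration of reporting a tuning protocol in full: the search
# space, optimisation algorithm, criterion, seed, and data split are all
# stated explicitly, so the result is reproducible.
random.seed(0)                                    # fixed starting point
data = [(x / 10.0, 2.0 * x / 10.0 + random.gauss(0, 0.1)) for x in range(50)]
train, valid = data[:40], data[40:]               # fixed train/validation split

def mse(w, pairs):
    """Criterion: mean squared error of the linear predictor w * x."""
    return sum((w * x - y) ** 2 for x, y in pairs) / len(pairs)

grid = [i / 10.0 for i in range(41)]              # search space: w in [0, 4]
best_w = min(grid, key=lambda w: mse(w, train))   # algorithm: exhaustive grid
print(best_w, round(mse(best_w, valid), 4))       # report held-out performance
```

A reader given only the last line’s numbers learns much less than one given the whole protocol, which is the point of the recommendation above.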

8. dikranmarsupial says:

I should point out that unfortunately in machine learning papers, the tuning of hyper-parameters is rarely set out in adequate detail, and often not even mentioned (which would be an indication I probably was not a reviewer ;o).

9. angech says:

dikranmarsupial
“It would be interesting to see how high you could push ECS via model tuning, whilst still matching 20th Century observations,”

Lost, don’t you need low ECS to match current 21C observations? So why does one need high ECS to match the 20th C?

“The key challenge for climate skeptics would be to see how they could tune a GCM to give low ECS and still explain previous climate (and keep the parameters within physical bounds where they do exist).”

Low ECS or not.
Surely the CO2 levels were (more) stable for the last 2000 years of climate.
If this is what you believe.
And I am fairly sure that is your position, then previous climate must be explained by natural variation changes that still escape us.
Or, if you believe that both CO2 and climate did not change for 2000 years then there is not a problem, is there because a low ECS would not matter.
Which line are you arguing?

10. Lost, don’t you need low ECS to match current 21C observations? So why does one need high ECS to match the 20th C?

1. Dikran didn’t say you needed it, simply asked how high it could go while still matching the current observations.

2. Depends on what you mean by “low”, but I think there are climate models with ECS values above 3K that still match the observed global temperature change. In fact, there are – I think – some indications that those models with high ECS values do a better job of matching some of the emergent properties than the models with low ECS values.

11. Hyperactive Hydrologist says:

angech,

I think Dikran was referring to explaining the transition between glacial and interglacial periods with low ECS.

Regarding model tuning, is this just another term for calibration? Also, I presume there is the issue of equifinality (you can get the right result with the wrong parameters); how is this avoided?

12. HH,

Regarding model tuning is this just another term for calibration? Also I presume there is the issue of equifinality, you can get the right result with the wrong parameters, and how is this avoided?

I think what you suggest at the end is an issue. There is more than one parameter that can be tuned, most of which are probably constrained in some way, but may still have a large possible range. So, you probably can’t rule out that you’ve tuned the parameters in such a way that many have very wrong values, but that these errors cancel. As I understand it, one of the goals is to find ways to independently constrain these parameters, so that you can reduce the degrees of freedom and hence reduce the possibility that you’ve just matched some emergent constraint by chance. Easier said than done, I suspect.

13. Hyperactive Hydrologist says:

This is a problem with physically based hydrological models, and I imagine the problem is an order of magnitude more complex with climate models. Therefore it’s probably another good reason for not selecting one model over another based on performance against observations, and instead using them all. This is one of my issues with UKCP09, as it only uses the Hadley GCM which, I have heard from a number of sources, has a dry bias compared with other GCMs.

14. dikranmarsupial says:

angech wrote “Lost, don’t you need low ECS to match current 21C observations”

you have been involved in discussions for long enough here and elsewhere to know by now that 17 years is not enough to draw any conclusions about ECS, for the simple reason that on that sort of timescale internal climate variability is likely to obscure the response to the forcings. Even then, if you look at the observations compared to the model output, you will see that the observations are within expectations of the GCM ensemble, so no, you don’t need low ECS to match 21C observations (also consider for a moment what the “E” in “ECS” actually means).

“Low ECS or not.
Surely the CO2 levels were (more) stable for the last 2000 years of climate.
If this is what you believe.”

That is what the available data shows, which is a sensible basis for believing something:

However of course it would be stupid to think that CO2 is the only forcing that the climate system responds to.

“And I am fairly sure that is your position, then previous climate must be explained by natural variation changes that still escape us.”

Escaping you is not the same as escaping us, especially if you include those that actively research the topic.

“Or, if you believe that both CO2 and climate did not change for 2000 years then there is not a problem, is there because a low ECS would not matter.”

The data show that both CO2 and climate have changed over the last 2000 years, and so have the forcings, so I would hardly make an argument that stupid.

“Which line are you arguing?”

I’m not arguing anything on this thread (other than it would be interesting to investigate the boundaries of the parameter space that would remain plausible given what we know about physics and the observations). I am certainly not arguing either of your pathetic straw men.

I’m sorry, but the number of bullshitters on blogs means that it is difficult to have a reasonable conversation about the science without the irritation of ignoring trolls like angech. I’m seriously wondering if it is worth the bother. That it is a problem even at a relative haven of sanity like ATTPs suggests perhaps not.

15. -1=e^iπ says:

@ ATTP – The Schmidt paper (to get around the paywall): https://www.geosci-model-dev-discuss.net/gmd-2017-30/gmd-2017-30-AR2.pdf

“However, analysis of the CMIP3 ensemble (Kiehl, 2007; Knutti, 2008) suggested that there may have been some kind of implicit tuning related to aerosol forcing and climate sensitivity among a subset of models, with models with higher sensitivity having a tendency to have higher (more negative) aerosol forcing (this situation was less evident in CMIP5 (Forster et al., 2013)). Both of these correlations however seem rather low (CMIP3: 0.24; CMIP5: 0.19) and so do not provide evidence for a general tuning related to forcing and sensitivity.”

0.19 correlation coefficient is the number I use. That and the [2.0,4.5] C ECS climate model confidence interval.

16. -1,
Maybe you could do a little more than simply point at a Masters of Science thesis?

17. -1=e^iπ says:

They start with the IPCC probability distribution and then use Bayesian inference using observations to obtain a 90% CI for aerosol forcing as [0.3,1.0] W/m^2. Well technically they don’t allow for efficiency, so some of that reduction in forcing could involve efficiency, but whatever.

They take a much better approach than the energy balance approach.

18. Steven Mosher says:

“If the models always go to one side only of a range of predicted outcomes, how can they be trusted??
After all if they always go to one side of a range of actual outcomes?”

Pretty simple.

If your financial advisor showed you his model of returns for people who give him
money to manage, and he always delivered 5% less than he promised,
would that be useful?

If he promised you 15% and his track record showed he was always 4-6% high,
what kind of return would you expect?

If your kids were always 10 minutes later than they predicted they would be home,
what would you predict?

When you build a model you do just the kind of experiments marsup mentions.
You twist some knobs hard and see what happens, or you see how hard you can twist
it while you constrain the model in other ways.

But the bottom line: you don’t need models to set policy. We have known that we can’t burn it all for some years.
That’s enough to put you down several policy paths, and don’t make the perfect the enemy of the good.

19. Hyperactive Hydrologist said:

“This is a problem with physically based hydrological models and I imagine the problem is an order of magnitude more complex with climate models. Therefore it’s probably another good reason for not selecting one model over another based on performance against observation and instead use them all. “

I really don’t think that climate models are intrinsically harder than hydrology models such as tidal analysis.

For example, ENSO/El Nino is essentially a hydrology problem defined by solving the equations for the sloshing of a volume of water. The forcing for the sloshing comes from changes in the angular momentum of the earth’s rotation (i.e. the tsunami effect on a subtle level). The cyclic change in angular momentum is well known and measured by detecting the speedup and slowdown in the earth’s Length-of-Day (LOD). The cycle is precisely correlated with the lunar cycles at the daily, fortnightly, and monthly scales, which is quite intuitive since the moon (and the sun) exerts a gravitational pull on the earth, and that’s what causes the cyclic AM changes.

The bottom-line is that the equations are forced by the monthly and fortnightly tidal LOD changes and then the ENSO behavior can be duplicated over the time span that measurements have been made. The diurnal tidal cycles are not applicable because the sloshing inertia is too large, but the longer periods are almost entirely responsible for the ENSO dynamics.

The tuning of the tidal parameters is done in the same way as conventional tidal analysis. Any interval is fitted to the measured ENSO data (such as NINO34) using the LOD and known lunar parameters as a calibration, and any other out-of-band interval can be cross-validated.

That’s how I do climate model tuning for ENSO, and I find no need for any extended set of climate models. Likely the reason that so many climate models exist is because they haven’t arrived at a consensus for the primary mechanism driving ENSO, which is the main contributor to natural temperature variation.
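The generic fit-then-cross-validate scheme described here can be sketched in a few lines. This is a synthetic toy, NOT the commenter’s actual ENSO model: sinusoid amplitudes at two assumed, fixed driver periods are fitted by least squares on a training interval, and skill is then checked on a held-out, out-of-band interval.

```python
import numpy as np

# Generic sketch of fit-then-cross-validate at known periods (synthetic
# data; the periods are merely illustrative lunar-ish values in days).
rng = np.random.default_rng(1)
periods = [27.55, 13.66]
t = np.arange(400.0)
truth = np.sin(2 * np.pi * t / periods[0]) + 0.5 * np.cos(2 * np.pi * t / periods[1])
y = truth + rng.normal(0.0, 0.2, t.size)      # noisy synthetic "observations"

# Design matrix: one sin and one cos column per known period, so the
# unknown amplitudes and phases enter linearly.
X = np.column_stack([f(2 * np.pi * t / p) for p in periods for f in (np.sin, np.cos)])

fit, held = slice(0, 250), slice(250, 400)    # fit in-band, validate out-of-band
coef, *_ = np.linalg.lstsq(X[fit], y[fit], rcond=None)
resid = y[held] - X[held] @ coef
skill = 1.0 - resid.var() / y[held].var()     # out-of-band variance explained
print(round(float(skill), 2))
```

The out-of-band skill is the honest number here: with the periods fixed in advance the fit has very few degrees of freedom, which is exactly the property being claimed for tidal-style calibration.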

20. Hyperactive Hydrologist says:

geo,

ENSO is an interaction between the ocean and atmosphere and therefore cannot be defined by just an ocean model; you also need to model the atmospheric component, which would be much more complex than what you suggest. Also, even in hindcast climate models you would not expect them to simulate the historically observed ENSO pattern. This is an emergent property of a model and is very dependent on the initial conditions.

21. “ENSO is an interaction between the ocean and atmosphere and therefore can not be defined by just an ocean model you also need to model the atmospheric component this would be much more complex than what you suggest. “

What forces the atmosphere then? I didn’t tell you that I use the same modified tidal analysis to demonstrate that the QBO winds are forced by the same external mechanism as ENSO. What this tells us is that at least some of the ocean/atmosphere interaction is misidentified as a coupled model instead of a common-mode mechanism, whereby the same external forcing guides both.

“Also even in the hindcast climate models you would not expect them to simulate the historically observed ENSO pattern. This is an emergent property of a model and is very dependant on the initial conditions.”

Any initial conditions to the ENSO model eventually decay and what is left is the forced response.

This is no different than conventional tidal analysis. When we look at a model for predicting tides based on SLH gauge measurements, nearly 100% of the response is due to lunar+solar gravitational forcing. Only transients due to local weather patterns will cause a disturbance, and those will damp out. Projections can be both hindcast and forecast equally well.

In my opinion, the theory of ocean/atmosphere coupling for ENSO is just a placeholder model that scientists employ until they can come up with the true mechanism. Like spontaneous combustion, ocean/atmosphere coupling lacks a root causative agent.

22. Hyperactive Hydrologist says:

The Pacific is not a closed system and is influenced by other parts of the world, other atmospheric systems and teleconnections.

Also, have you published any papers to back up your claims? If so, can you provide links?

23. I presented the preliminary ENSO model at the AGU last year, and will present a more mature model at this year’s meeting should the abstract I submitted get accepted.

I tried submitting a paper on the model to Physical Review Letters but that got rejected with the explanation that the subject matter was not suitable for the journal. When I pointed out to the PRL editor that Tsonis had a paper published on the teleconnection network model for ENSO in PRL a few years ago, the editor got a bit peeved at me. It must have reminded them that they had published a paper by one of the most credentialed AGW-deniers out there — recall that Tsonis is on the board of the GWPF along with Lindzen 😦

Google scholar these words — predictability ENSO networks
and you will find Tsonis as the most cited.

IMO, Tsonis is responsible more than anyone else in presenting the case that natural variability in climate is chaotic and impossible to predict. That’s his schtick, but it has never been proved.

I am using two lunar tidal parameters, the Anomalistic and Lunar month, along with detailed information culled from the NASA JPL lunar ephemerides to demonstrate an alternate teleconnection for ENSO. Remember that a teleconnection doesn’t have to include just the earth 🙂

24. Hyperactive Hydrologist says:

Nope, I agree. Just as the Pacific Ocean isn’t a closed system neither is the Earth with obviously the sun being the main driver.

I sympathise with you regarding trying to publish, my wife is a researcher and she has had some challenges getting her papers published. However, if you are trying to make the case that the climate is predictable years in advance I expect you will be very much swimming against the current, so to speak.

25. I’ve seen many people dismiss WHUT’s Tidal Model of ENSO out-of-hand, but I’ve not seen anyone actually put forth a problem with the model or data he’s using. I mean, if you’re paying any attention whatsoever a graph like this HAS to catch your eye.

I mean, put this in a GCM and it’s a game-changer.

26. Steven Mosher says:

one.
The reason that stuff wont go in a gcm is because it is curve fitting nonsense on steroids

27. Thanks Kevin,
Yes, it’s one of those cases of pattern matching that’s hard to ignore and hard to shake once you start down the trail it leads you on. The training uses only two lunar tidal parameters, but it’s not the straightforward tidal analysis that you would use on this kind of waveform:

These are diurnal and semidiurnal tidal oscillations at an SLH gauge station in Hawaii. That’s a breeze to solve for because the time series is so clean and the periods are already known.

But what happens when the lunar periods are aliased by a modulation of the seasonal cycle, and worse? This is not so easy, because none of the conventional harmonic or spectral analysis techniques that I am aware of can deal with this situation.
And then you have to deal with a noisy signal on top of that, which is the nuisance factor in being able to quickly zero in on a good fit.

28. SM writes:”The reason that stuff wont go in a GCM is because it is curve fitting nonsense on steroids”

Is it? They’re now putting ocean tides in GCMs; it’s largely the same parameters. The accuracy of surface elevations in forward global barotropic and baroclinic tide models, Arbic et al., 2004, doi:10.1016/j.dsr2.2004.09.014, lists 10 different parameters that need to be set. The anomalistic monthly term Mm and fortnightly term Mf are already in the models that are doing ocean tides.

Your comment is precisely what I meant by dismissing it out-of-hand, but you lack *any* actual evidence to dismiss it. I’ve seen enough ‘it’s all cycles’ pseudoskepticism to be wary of attributing anything climate-related to cycles. My favorite is still Wyatt & Curry’s ‘Stadium Wave’, but you really should look into something before dismissing it out-of-hand.

Now if WHUT was claiming the gravitational pull of Jupiter was causing it – yeah, I’d be skeptical too.

29. angech says:

dikranmarsupial. Sorry.
This is one of your blogs of preference.
I am a latecomer.
Our views disagree.
You have a great scientific background.
I do reference your statements for clarification when I do not understand, so thank you very much for your explanations.
I would much rather you kept commenting here, but I would still like to comment.
I will not comment on any of your comments again or refer to you unless you ask me to.
This might help reduce some of my annoyance factor.
I would like to keep commenting on the models with the others.

30. angech says:

SM writes:”The reason that stuff wont go in a GCM is because it is curve fitting nonsense on steroids”
oneillsinwisconsin “Is it? They’re now putting ocean tides in GCMs, it’s largely the same parameters. ”

Putting ocean tide data in is good, but why is it new?
Most GCMs must have had this in originally.
Other oscillations can be, well, complicated.
You can put a 24-hour day/night oscillation in, rig it with orbital distance, and expect an extremely good match.
ENSO? You cannot predict whether you will get two El Ninos or three in a row, but it does happen, sometimes.
Hence your model will rapidly deviate from what is observed unless you put in parameters which can be adjusted by emergent properties.
Putting unstable future predictions in is not a good idea.
This is where the parameters get put in, to try to cope with the expected divergences.

“You dont need models to set policy”

Depends on what you are saying.
First level: true, you can set any number of policies without models.
Second level: can you use models in setting policy? Yes.
Third level: will the models help set the best policy? We could always do a model to answer this, I guess.
Will it help change what Steven said?
No: “You dont need models to set policy”

31. -1=e^iπ says:

@oneilllsinwisconsin

so you need that much of a training interval + a calibration interval and it only does that well?

I think I could do a better job fitting a bunch of sinusoids to the training interval.

32. -1=e^iπ says:

😦 was hoping someone would counter my back of envelope calculation of 0.3 C bias with a better calculation.

33. Kevin,
I agree that Jupiter is way down the list of forcings.

Robert Grumbine at NOAA was looking at some of the possible forcings of the Chandler wobble on his blog last year.

I haven’t seen any follow-up, as he’s doing this work on his own time apparently.

At least one NASA JPL researcher was looking into lunar effects as of a few years ago, but that work somehow got spun out into independent research at MoonClimate.org

34. -1,
I’m not sure what sort of counter you were expecting. I don’t think there is an easy way to show by how much GCMs are overstating, or understating, something like ECS.

35. izen says:

@- -1
“😦 was hoping someone would counter my back of envelope calculation of 0.3 C bias with a better calculation.”

On the back of a smaller envelope I figure that a 0.3 discrepancy is about the magnitude of an ENSO cycle. So the modelz ensemble may just include an extra El Nino to ‘ratchet up’ the GMT because they do not accurately follow the real world ENSO cycle.

Fortunately the solution is at hand. Modellers can just put in the dates and magnitudes of the next ten ENSO events from geo’s infallible predictive model.
Or is there a (Bayesian?) way of calculating its error range.
(grin)

36. JCH says:

ENSO ? you cannot predict whether you will get two El Nino’s or three in a row but it does happen, sometimes.
Hence your model will rapidly deviate from observed unless you put in parameters which can be adjusted by emergent properties.

Do they do this? I would have thought absolutely not.

37. Steven Mosher says:

“Is it? They’re now putting ocean tides in GCMs, it’s largely the same parameters.”

nothing wrong with tides.
Nothing wrong with anything physical

You’ll note I asked him for the physical parameters and he would not answer the question in a complete fashion, like listing all the parameters.

So you have to go look. Simple question:

List all the tunable parameters in the model and their physical units.

This is not that hard.. do it.

38. Steven Mosher says:

angech
you don’t seem to get how physical models are built,
and what they are used for.

here is a thought.

Go read through some GCM code.

39. angech,
I have no idea what you are talking about, as every one of your sentences is a non-sequitur. The paragraph was invented in the 3rd century BC.

JCH,
The one emergent property that I am aware of is a biennial modulation. In sloshing studies, this comes about from pumping at a fixed cycle, which results in a period doubling. Watch this video and you can see the doubling in action.

The biennial cycle emerges in the GCMs, but this regularity is not observed in the data, as described here:

“North Pacific decadal variability: insights from a biennial ENSO environment”, Climate Dynamics, August 2017, Volume 49, Issue 4, pp 1379–1397

” In this study, we take advantage of a 350-year long simulation of the Goddard Earth Observing System (GEOS-5) AOGCM to examine the characteristics and mechanisms of the PDO. It was found that the ENSO variability in the GEOS-5 has a pronounced biennial nature, a bias that is not uncommon among current climate models.”

The key to understanding why a strict biennial is not observed in the data is that the lunar tidal cycles interact with the biennial modulation to create the more complex ENSO pattern observed. What I am doing is not strictly curve fitting, but solving a differential equation with the biennial factors included and forcing supplied by the lunar terms. The solution to the DiffEq using the lunar forcing corresponding to measured angular momentum variations happens to match the ENSO behavior the best. Very minimal gross tuning involved in this — it’s mostly fine tuning, because the lunar phase has to align precisely over a time span of 100+ years.

40. SM writes: “nothing wrong with tides. | Nothing wrong with anything physical”
After having written : “The reason that stuff wont go in a GCM is because it is curve fitting nonsense on steroids”

Now which is it? Are the tidal forcing parameters (which is what geo is putting into his model) curve fitting nonsense or are they physical? Jesus – attempt to understand what it is you’re talking about.

If you had been paying any attention you’d realize the biggest constraint *today* is resolution and time. It’s not enough to get a couple of annual, monthly, and bi-weekly terms in there. Yeah, they’ll reproduce tides, but what WHUT has shown is that if you include *more* of them (anomalistic month, tropical month, and draconic), then you’ll get the proper harmonics and phase relationships (at least IIRC).

The limitations, computational expense, and difficulty in achieving this is probably best spelled out by reading Concurrent simulation of the eddying general circulation and tides in a global ocean model, Arbic et al., 2010, doi:10.1016/j.ocemod.2010.01.007. A 5-year run generated 68TB of data.

Curve-fitting in a GCM? Looks an awful lot like geoenergymath’s tidal forcing terms.

42. The GCM’s are the things on steroids. This is a typical sloshing model for a volume of liquid in a tank:

f”(t) + w^2 f(t) * (1+ A cos(vt)) = F(t)

This is called a Mathieu equation, and is close to a conventional wave equation, apart from an embedded modulation at radial frequency v. There is also an equation known as the delayed-difference or delayed action oscillator that accomplishes the same thing via a time-delayed feedback term. Solve these non-steroidal equations, with the F(t) term supplying the lunar forcing, and you are off to the races.

$f''(t) + \omega^2 f(t) (1 + A \cos(\nu t)) = F(t)$
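For anyone who wants to experiment, here is a minimal numerical sketch of such a Mathieu-type modulated oscillator. All parameter values are made up for demonstration, and the two-term `forcing` function is a hypothetical stand-in for the lunar terms, not the fitted ENSO values; light damping is added to keep the response bounded:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters only -- not fitted to ENSO
W = 2 * np.pi / 4.25   # natural sloshing frequency (period ~4.25 time units)
A = 0.05               # depth of the biennial Mathieu modulation
V = 2 * np.pi / 2.0    # biennial modulation frequency
GAMMA = 0.05           # light damping, keeps the response bounded

def forcing(t):
    # hypothetical two-term stand-in for the lunar forcing F(t)
    return 0.1 * np.cos(2 * np.pi * t / 2.37) + 0.05 * np.cos(2 * np.pi * t / 3.8)

def rhs(t, y):
    # f''(t) = F(t) - gamma f'(t) - w^2 (1 + A cos(v t)) f(t)
    f, fdot = y
    return [fdot, forcing(t) - GAMMA * fdot - W**2 * (1 + A * np.cos(V * t)) * f]

t = np.linspace(0.0, 150.0, 3000)
sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, rtol=1e-8, atol=1e-10)
# sol.y[0] is the modulated, multi-period response f(t)
```

The delayed-action oscillator mentioned above would replace the `A cos(vt)` modulation term with a time-delayed feedback term, but is otherwise solved the same way.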

43. I am confident that models are predicting the AGW trend correctly. That’s because there’s a real physical mechanism responsible for the forcing, and it can be quantified.

Yet, for behaviors such as ENSO and QBO, there is still no consensus causative mechanism. And the models are still divided between relatively simple equations such as the delayed action oscillator and full-blown GCMs. Unlike AGW, none of the explanations has a causative forcing — wind is often touted, but that as a forcing is a dog chasing its own tail.

Often you see reference to an emergent resonance within the models, which is next to impossible to verify in the results — any model can be tuned to create any resonance one wants. See examples of this in recent QBO papers where the QBO fundamental period is found via a GCM run to be dependent on pressure in a stratospheric layer, IIRC.

Otherwise, the explanations are circular. For example, the ocean-atmosphere interaction: if ENSO is caused by the wind, then what causes QBO? Oh, that’s caused by ENSO as it convects warm air upward. Repeat.

If we do get the forcing for ENSO and QBO in order, then these behaviors will join the ranks of AGW as a climate behavior that we have a good handle on. But right now, it is worrying that the causes of ENSO and QBO are not known, while we claim to understand AGW inside and out.

Just last year, there were several peer-reviewed papers wondering about the observed QBO anomaly that unexpectedly appeared and whether that circulation was in a “death spiral”. But now it’s back to normal, just as predicted by the lunar forcing model.
http://contextearth.com/2017/09/07/the-qbo-anomaly-of-2016-revisited/

And this timely assertion regarding QBO
https://en.wikipedia.org/wiki/Quasi-biennial_oscillation
“In addition, the QBO has been shown to affect hurricane frequency during hurricane seasons in the Atlantic [7]”, a reference to a William Gray paper.

Maybe Judith Curry, as a former student of William Gray, will figure this stuff out?

44. Steven Mosher says:

“Curve-fitting in a GCM? Looks an awful lot like geoenergymath’s tidal forcing terms.”

List the terms.
List the physical units.

It’s not that hard

45. I haven’t been following this discussion (mostly because it seems somewhat never-ending) but I will make one comment. A GCM is three-dimensional. Showing that one can reproduce some ENSO-like oscillation doesn’t necessarily mean that one then knows how to implement something in a GCM that would produce ENSO-like oscillations.

46. “list the terms”

Here are the ENSO model terms for a pair of cross-validated intervals:
http://contextearth.com/2017/08/08/enso-split-training-for-cross-validation/

The model tuning process finds essentially the same terms from 1880 to 1950 as it does from 1950 to 2016, except for the biennial modulation. That’s the only term that is metastable, as all the lunar and solar seasonal terms are otherwise fixed.

The tuning process can find an even more impressive cross-validation if the lunar tide phase angles are kept fixed from one interval to the next. Then the set of forcing amplitudes is found to be nearly identical across the terms, except again for the biennial modulation, which is stronger in the last 70 years than in the first 70 years of the modern ENSO instrumental record.

The biennial modulation changes may be related to climate shifts. It’s certainly related to biennial shifts in fishery populations. Red squares are odd years and blue squares are even years, with a transition around 1950, on a log plot:

Irvine, J. R., et al. “Increasing Dominance of Odd-Year Returning Pink Salmon.” Transactions of the American Fisheries Society 143.4 (2014): 939-956.

More where that came from. It could be that biology is much more sensitive to underlying changes in seasonal patterns than the instruments.

47. “A GCM is three-dimensional. “

Both ENSO and QBO are behaviors that are vastly reduced in dimensionality, forming standing waves strictly along the equator.

ENSO

QBO

The reason that EEs can solve EM standing waves in resonators is that the dimensionality allows the partial DiffEqs of Maxwell’s equations to be separable both in time and in the spatial dimensions. The same applies to these two phenomena, and if GCMs are not taking advantage of such dimensional simplifications, they may be overly complex for the problem.

48. Both ENSO and QBO are behaviors that are vastly reduced in dimensionality, forming standing waves strictly along the equator.

This still doesn’t tell me how you would implement something in a GCM that would then mimic ENSO and QBO behaviour.

49. JCH says:

I don’t quite get this. If the ENSO cycle were successfully included in a GCM, then climate model predictions would look more like observations. How does that change the GCM prediction for 2100? Long term, ENSO still washes out to approximately zero.

50. “This still doesn’t tell me how you would implement something in a GCM that would then mimic ENSO and QBO behaviour.”

Take QBO as an example, because that is the case where researchers try to duplicate the behavior precisely. For ENSO, they just broad-brush approximate the behavior.

This paper claims to mimic the QBO behavior in a GCM:

Geller, M. A., Zhou, T., Shindell, D., Ruedy, R., Aleinov, I., Nazarenko, L., Tausnev, N.L., Kelley, M., Sun, S., Cheng, Y., Field, R.D. and Faluvegi, G. (2016), Modeling the QBO – Improvements Resulting from Higher Model Vertical Resolution. J. Adv. Model. Earth Syst.. doi:10.1002/2016MS000699

“It also shows that the QBO-like oscillation for a gravity wave momentum flux forcing of 2.0 mPa has a period of about 8 years, while a forcing of 2.5 mPa gives a period of about 37 months, and a forcing of 3.0 mPa gives a period of about 25 months, and a forcing of 3.5 mPa gives a period of about 21 months. In fact, we find that the best fit to observed QBO periods is for a gravity wave momentum flux forcing of 2.9 mPa, as will be shown in the next section. “

This is a bizarre emergent property that they claim to have found via model tuning: a global oscillation whose period is set by the magnitude of a pressure forcing. It rarely happens that an oscillation period changes with the magnitude of the forcing, unless it’s a chaotic formulation, and then all bets are off. Observed cycles in nature are usually due to the forcing period or to intrinsic characteristics of the medium. For example, a pushed pendulum’s period is not dependent on the forcing to first order, unless the forcing gets large, and even that is a gradual effect. Here, the magnitude is overriding and appears from nowhere. Where exactly is this pressure change coming from, and why is the period so sensitive to its value?
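The pendulum point can be checked numerically: for a linear damped driven oscillator, the steady-state period locks to the drive period and is insensitive to the drive *amplitude*, which is what makes an amplitude-controlled period unusual outside chaotic regimes. Parameters here are purely illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

W0, GAMMA, W_DRIVE = 1.0, 0.2, 0.6   # natural freq, damping, drive freq (illustrative)

def period_of_response(F0):
    """Steady-state oscillation period of a linear damped driven oscillator."""
    def rhs(t, y):
        x, v = y
        return [v, F0 * np.cos(W_DRIVE * t) - GAMMA * v - W0**2 * x]
    t = np.linspace(0.0, 400.0, 40001)
    sol = solve_ivp(rhs, (0.0, 400.0), [0.0, 0.0], t_eval=t, rtol=1e-9, atol=1e-12)
    x, tt = sol.y[0][20000:], t[20000:]            # keep t > 200: transient has decayed
    s = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]   # upward zero crossings
    return float(np.mean(np.diff(tt[s])))          # mean crossing spacing ~ period

p_small = period_of_response(0.1)
p_large = period_of_response(10.0)
# both lock to the drive period 2*pi/0.6 ~ 10.47, independent of amplitude
```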

This may be a case of trusting the output of a simulation without having an idea of exactly why it is occurring based on a first-order physics argument. Of course the knee-jerk response to such a concern is that it’s an emergent property that can’t be predicted unless the full GCM is executed.

That’s pretty weak tea.

But then again, remember that the entire theory of QBO is based on the early models of the denier Richard Lindzen. I think the guy went down a deep rabbit-hole and never emerged.

More analysis on this QBO example here:
http://contextearth.com/2016/06/19/recent-research-say-qbo-frequency-is-emergent-property/

51. JCH says:

“I don’t quite get this. If the ENSO cycle were successfully included in a GCM, then climate model predictions would look more like observations. How does that change the GCM prediction for 2100? Long term, ENSO still washes out to approximately zero.”

That’s right. ENSO is a standing-wave phenomenon and thus zeroes out in the long run.

“List the terms.
List the physical units.

It’s not that hard”

The terms are even easier to list for a non-GCM QBO model than for a non-GCM ENSO model.
The anomalistic lunar term does not enter in, since the forcing is mainly nodal.
So only combinations of the annual, semi-annual, nodal (draconic) monthly, and nodal fortnightly cycles are amplitude- and phase-tuned during training. Really no different from a conventional ocean tidal analysis, except that we are using the long-period rather than the short-period terms (i.e., diurnal and semidiurnal periods).
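One piece of this that can be written down concretely is the seasonal-aliasing arithmetic this framing relies on: a fast lunar cycle sampled through the annual cycle leaves only its non-integer remainder visible as a slow cycle. A sketch using standard astronomical constants (this illustrates the arithmetic only, not the fitted model terms):

```python
# Seasonal aliasing of the draconic (nodal) month against the tropical year
year = 365.242        # tropical year, days
draconic = 27.212220  # draconic month, days

cycles_per_year = year / draconic   # ~13.42 draconic cycles per year
frac = cycles_per_year % 1.0        # the non-integer remainder survives annual sampling
aliased_period_years = 1.0 / frac   # ~2.37 yr, in the neighborhood of the ~28-month QBO
```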

That will give this kind of prediction based on a very short training interval:

The QBO time series does not look like the normal QBO waveform, because I am plotting the acceleration of the QBO winds, not the velocity of the winds. In physics, the acceleration is the true forced-response term; to get the velocity, you would integrate this and recover the more characteristic square-wave look of QBO.
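A synthetic example makes the integration point concrete: differentiating an idealized square-wave wind gives a spiky alternating acceleration, and integrating that acceleration back recovers the square wave. The waveforms below are made up for illustration, not QBO data:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.linspace(0, 20, 4001)              # time in years
w = 2 * np.pi / 2.37                      # ~28-month period
velocity = np.tanh(5 * np.sin(w * t))     # idealized square-ish wind profile
accel = np.gradient(velocity, t)          # the spiky "forced response" view
# integrating the acceleration recovers the square-wave-looking velocity
recovered = cumulative_trapezoid(accel, t, initial=0.0)
```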

Richard Lindzen should have found this pattern, but evidently didn’t “look harder” (in blog-speak). That’s why new generations of atmospheric scientists ended up following him down his QBO rabbit hole, IMO.

53. Hyperactive Hydrologist says:

Remind me again what has this got to do with ENSO?

54. Hyperactive Hydrologist says:
Remind me again what has this got to do with ENSO?

ENSO and QBO are thought to be connected. There is an active Twitter thread going on right now where forecasters are posting correlation charts: https://twitter.com/DrAHButler/status/906239980347027457

55. My findings are that ENSO follows both the nodal/draconic forcing and the anomalistic forcing plus the nonlinear interactions between the two (similar to ocean tides, which also track the more localized tropical/synodic forcing). However, QBO only tracks the nodal forcing, which is very clear from the model fit.

Perhaps an astrophysicist can explain this better, but the anomalistic tide is a perigee/apogee variation, and so gravity effects due to proximity of objects would be stronger in this case. Since the ocean has a huge mass, one can definitely understand this effect.

The nodal cycle is essentially the tilt of the moon’s orbit with respect to the equator, which also has gravitational variation, but now with respect to latitude. My hypothesis is that perhaps the moon is adjusting the equatorial QBO track to follow a path where the Coriolis force equals zero, a minimum-energy-dissipation state. This follows from the fact that a Coriolis force of zero is required for the QBO to develop without vortices, according to my math model. The only way to adjust this so-called F-plane is by the nodal forcing. The anomalistic forcing therefore has no real effect on QBO because it is purely perpendicular to the latitude (to first order).

That also fits in with the precessional Chandler wobble of the North pole, which is measured to be exactly half the QBO frequency.

But that explains why the QBO and ENSO will occasionally sync: they share one commensurate cycle. It’s driving the scientists mad that they see the two synchronize, but they can’t figure out what is causing it.

56. re: the second paper linked:

“It is the parameterized gravity wave flux that determines the period and amplitude of the QBO”

I can’t disagree that it would determine the amplitude, but the frequency? It’s hard to find examples of frequencies that depend on the strength of the forcing. When you blow into a horn, for example, the frequency doesn’t change depending on how hard you blow; and if it does change, it will jump discretely to higher harmonics.

If on the other hand they are suggesting that the gravity wave flux has a frequency that is providing a sinusoidal forcing, then what causes the gravity wave flux oscillations? That would leave the root cause unresolved.

57. angech says:

JCH says:
“Hence your model will rapidly deviate from observed unless you put in parameters which can be adjusted by emergent properties. Do they do this? I would have thought absolutely not.”
oneillsinwisconsin says:
“Concurrent simulation of the eddying general circulation and tides in a global ocean model”

So it is in there for the doubters and one imagines it does need adjustment. By the way JCH, a Neil Diamond Month for your temperature expectations. Congratulations.

58. Steven Mosher says:

Note that the terms and units are still not listed

59. JCH says:

If they made the adjustment as you are describing, for a period of El Niño dominance, observations would always agree with the model. They obviously did not. In the 21st century, there was a period where La Niña dominated; observations dove below the model track. Nobody adjusted the knobs. I called it “the paws” to make fun of the cultists at CargoCult Etc. who thought it meant there was something seriously wrong with the physics.

60. Mosher – do you read? I posted this an hour after you first posed the question:

And pointed out that the Tidal Model of ENSO also requires not just Mm and Mf (the anomalistic lunar month and its subharmonic), but equivalent terms for the tropical and draconic months.
MONTH TYPE ..... LENGTH IN DAYS
anomalistic .... 27.554549
tropical ....... 27.321582
draconic ....... 27.212220
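As a sanity check on those month lengths, the familiar long-period lunar cycles fall out of the table as beat periods, e.g. the ~8.85-year apsidal precession and the ~18.6-year nodal regression:

```python
# Beat periods between the lunar month types listed above, converted to years
anomalistic = 27.554549  # perigee-to-perigee, days
tropical = 27.321582     # equinox-referenced, days
draconic = 27.212220     # node-to-node, days

def beat_years(p1, p2):
    """Beat period (in years) of two cycles with periods p1, p2 in days."""
    return abs(1.0 / (1.0 / p1 - 1.0 / p2)) / 365.242

apsidal = beat_years(anomalistic, tropical)  # ~8.85 yr: lunar apsidal precession
nodal = beat_years(draconic, tropical)       # ~18.6 yr: lunar nodal regression
```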
