A couple of years ago, I had a guest post about Pat Frank’s suggestion that the propagation of errors invalidates climate model projections. The guest post was mainly highlighting a very nice video that Patrick Brown had produced to explain the problems with Pat Frank’s suggestion. You can watch the video in my post, or on Patrick Brown’s post.
Pat Frank has, after many rejections, managed to get his paper published. If you want to understand the problems with this paper, I suggest you watch Patrick Brown’s video, and read the comments on my post and on Patrick’s post. Nick Stokes also has a new post about this that is also worth reading.
However, I’ll briefly summarise what I think is the key problem with the paper. Pat Frank argues that there is an uncertainty in the cloud forcing that should be propagated through the calculation and which then leads to a very large, and continually growing, uncertainty in future temperature projections. The problem, though, is that this is essentially a base state error, not a response error. This error essentially means that we can’t accurately determine the base state; there is a range of base states that would be consistent with our knowledge of the conditions that lead to this state. However, this range doesn’t grow with time because of these base state errors.
As Gavin Schmidt pointed out when this idea first surfaced in 2008, it’s like assuming that if a clock is off by about a minute today, that tomorrow it will be off by two minutes, and in a year off by 365 minutes. In reality, the errors over a long time are completely unconnected with the offset today.
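To make the clock analogy concrete, here is a minimal sketch (my own illustration, not from either paper) distinguishing a fixed set-time offset from a genuine rate error:

```python
# Toy illustration: a clock that keeps perfect time but was set one
# minute late. The *offset* error is constant; it does not compound
# into 365 minutes over a year.

def clock_offset_minutes(initial_offset_min, days, daily_drift_min=0.0):
    """Offset after `days` days, for a clock with a fixed initial offset
    and an (optional) genuine rate error of `daily_drift_min` per day."""
    return initial_offset_min + daily_drift_min * days

# A base-state error: off by 1 minute today, still off by 1 minute in a year.
print(clock_offset_minutes(1.0, 365))                        # 1.0

# Only a genuine *rate* error accumulates:
print(clock_offset_minutes(1.0, 365, daily_drift_min=1.0))   # 366.0
```

Only the drift term compounds; the initial offset is a base-state error and stays put, which is exactly the distinction between a base-state error and a response error.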
Maybe the most surprising thing about the publication of this paper is that the reviewers (who are named) both seem to be quite reasonable choices. It seems highly unlikely that they missed the obvious issues with this paper. Did it get published despite their criticisms? Did they eventually just give up and decide it wasn’t worth arguing anymore? Or, did someone decide that this was something that should play out in the literature? I think the latter can sometimes be a reasonable outcome, but only if the paper has something that’s actually interesting, even if it is wrong. Pat Frank’s paper really doesn’t qualify; it’s simply wrong, and not even in an interesting way.
Or as Pauli said, “It is not even wrong.” But that may be giving it too much dignity.
You really only need to read the abstract and introduction to dismiss the paper, as he says “The unavoidable conclusion is that an anthropogenic air temperature signal cannot have been, nor presently can be, evidenced in climate observables.”
That’s not a typo, as it’s repeated elsewhere.
The whole thing sounds to me like Dr Frank found a way to break a climate model and used that to claim all models are broken. If you were building a climate model and it output such a result you’d immediately say to yourself, “Hm, I clearly got something wrong.” Then you’d find your error and fix it.
Sounds like the title captures the essentials quite succinctly. If Frank is recycling his ever-widening fan of uncertainty he should be getting Green kudos for making so little go so far.
If the watch was correct 24 hours ago, and is now off by 1 minute – 24 hours later, then isn’t it possible it will be off by an additional 1 minute per day, as time moves forward? So in one year, isn’t it possible it could be off by 365 minutes?
Just a hypo of course – but why assume the watch error will not change, but remain a constant 1 minute over time?
I am glad the paper was published and others will be able to study it and either support or refute it. That is the way science is supposed to work.
I congratulate Dr. Frank for working on getting his paper published for the last six years!
If Dr. Frank’s work turns out to be refuted – well then kudos to the person who writes the paper to refute it. But what if Dr. Frank turns out to be right? That would be interesting, now wouldn’t it.
Just the act of warming (whether natural, human made or a mixture of both) will cause CO2 to be released from the ocean – so warming itself can explain (at least some part) of the increasing CO2 atmospheric concentration. At least I have read some material which suggests that is a possibility.
I look forward to watching this paper and the responses to it play out and see what comes out of it.
You seem to be missing the fact that he claims that a global temperature anomaly is impossible to measure. I just refuted what he claimed if you change “impossible” to “possible” in the last sentence.
“If Dr. Frank’s work turns out to be refuted “
How would you ever know? Anyone who tries to follow the maths can see that it is nuts? The rest, well…
ATTP has noted a major issue. If you want to turn a state error into something that accumulates at a rate, the question is, what rate? PF has a two step process:
1. He insists (elementary error) that if you average something, the units change. If you get the average height of Dutchmen, it is 1.8 m/Dutch. And if you get an average of annually binned data, say temperature, then the units are °C/year.
2. So there the time rate is determined. The rate per year goes into the calculation and determines the outcome. If you had averaged the monthly T data, you’d have the watch ticking much faster, and get a much bigger result.
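A quick numerical sketch of this point (the ±4 W/m² figure is roughly the cloud-forcing value under discussion; the rest is my own illustration): treating a fixed calibration uncertainty as a fresh per-step error summed in quadrature makes the final envelope depend entirely on the binning you happened to choose.

```python
import math

# Same physical uncertainty u, same 100-year period; only the
# (arbitrary) choice of step size changes.
u = 4.0          # W/m^2, roughly the cloud-forcing figure in play
years = 100

annual_steps = years           # one "tick" per year
monthly_steps = years * 12     # same physics, finer binning

# Root-sum-square accumulation, as in the paper's procedure:
env_annual = u * math.sqrt(annual_steps)     # 40.0
env_monthly = u * math.sqrt(monthly_steps)   # ~138.6

print(env_monthly / env_annual)              # ratio = sqrt(12) ≈ 3.46
```

Nothing physical changed between the two cases; the sqrt(12) difference comes purely from the choice of averaging bin, which is the arbitrariness being pointed out.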
Presumably at some point we’ll begin to see some of this so called ‘error’? 🙂
I think I’d be remiss in not noting that up to 2 years ago there was still widespread belief that models were off because satellite data (erroneously) showed lower than expected temperatures. This was playing loudly with global warming deniers who are always hoping for some tangible justification for their beliefs.
@RickA No one needs to refute it. If it’s any good other scientists will pick up the ball and use it. That is how science is done. More than likely capable scientists are laughing.
By the way, if you are interested in understanding how error propagates through time, check out Jerry Mitrovica’s paper on sea level rise. As you know, sea level affects how fast the Earth spins. So if what is happening now for sea level rise was happening 2000 years ago, then astronomers would have recorded eclipses at vastly different times. It’s a good read:
Click to access e1500679.full.pdf
Presumably at some point we’ll begin to see some of this so called ‘error’?
Right. If the error bars were really so large, wouldn’t the GCMs give results that are all over the map?
Nick Stokes has the following image up at his blog:
It seems like models should be giving results in the range of -15C through 15C by 2100 if the error bars are really as large as indicated.
There are common themes circulating in the discussions at the moment, linked in part to this argument.
The recurrent theme is how well we can predict ECS.
The fact that the posited range is still so wide and uncertain must lend some slight credence to the notion that we cannot measure anthropogenic effects as well as we claim.
“As Gavin Schmidt pointed out when this idea first surfaced in 2008, it’s like assuming that if a clock is off by about a minute today, that tomorrow it will be off by two minutes, and in a year off by 365 minutes. In reality, the errors over a long time are completely unconnected with the offset today.”
Again, and this is a long bow, if climate prediction is subject to the forces of chaos, then mathematically, like with fluid theory, unexpected extreme changes can accumulate.
The good thing is that historically it does look as if we operate in very strong, self regulating constraints.
Mathematically though there is no guarantee.
“Pat Frank has, after many rejections, managed to get his paper published. If you want to understand the problems with this paper, I suggest you watch Patrick Brown’s video, and read the comments on my post and on Patrick’s post. Nick Stokes also has a new post about this that is also worth reading.”
Thanks for putting this up for discussion; I look forward to some more comments and explanation of where people know it goes wrong. I also look forward to hearing about any sections of the maths that might be considered right; that the paper was published with these opposing views known suggests the opposing views were not as clear as they should have been.
I will watch the video again.
No, the hypothetical considers a watch that otherwise keeps accurate time. If you set such a watch to have an initial time that is slightly in error, you don’t then propagate that error.
I suspect it is mostly going to be ignored. I don’t think many serious researchers are going to bother formally refuting it.
No, not really. The chaos really refers to the dynamics (the motion of air in the atmosphere and water in the oceans). This can clearly move energy around and can lead to short-term energy imbalances (i.e., warming or cooling). However, the system is quite strongly constrained to remain near equilibrium, which is set mostly by how much energy we’re getting from the Sun, the albedo, and the composition of the atmosphere (greenhouse gases). So, the chaotic nature of the system can lead to variability, but this is almost certainly constrained to be small, especially on multi-decade timescales.
Angech “Again, and this is a long bow, if climate prediction is subject to the forces of chaos, then mathematically, like with fluid theory, unexpected extreme changes can accumulate. ”
No. The flip of a coin is subject to the forces of chaos, but they never (at least in my experience) go shooting off into space, or come down “elbows” instead of “heads” or “tails”. Chaos does not necessarily imply tipping points, just sensitivity to initial conditions.
“However, the system is quite strongly constrained to remain near equilibrium, which is set mostly by how much energy we’re getting from the Sun, the albedo, and the composition of the atmosphere (greenhouse gases). ”
This shows exactly what is wrong with Frank’s argument. The Stefan-Boltzmann law means the planet radiates according to the fourth power of its temperature. That is a *very* strong feedback. The idea that a constant error in cloud feedback can accumulate and indefinitely overcome the SB feedback is so obviously unphysical that I don’t understand how the reviewers could have let that pass. The climate is not going to warm in a century by 20+C due to clouds.
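A back-of-envelope check of that restoring feedback (my own arithmetic, using the standard effective emission temperature of about 255 K):

```python
# Planck/Stefan-Boltzmann restoring feedback: lambda = 4 * sigma * T^3
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_eff = 255.0            # K, Earth's effective emission temperature

lam = 4 * sigma * T_eff**3
print(round(lam, 2))          # 3.76 W/m^2 per K of warming

# So a sustained ~4 W/m^2 imbalance is offset by roughly 1 K of warming
# (before other feedbacks); it cannot drive unbounded, ever-growing error.
print(round(4.0 / lam, 2))    # 1.06 K
```

This is why a fixed cloud-forcing error cannot keep accumulating: any temperature excursion is pulled back by outgoing radiation scaling as T⁴.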
It only matters if the error in cloud feedback is changing with time. Do models predict any change in global cloud cover associated with warming? My understanding is that they do. If so, then this change in cloud forcing (feedback) does have an associated error. This propagates in time, whereas the fixed absolute error clearly doesn’t.
Yes, models do predict that there will be a cloud response to changing temperatures. Indeed, the uncertainty in this should propagate (it’s one of the main reasons for the uncertainty in climate sensitivity). However, what Pat Frank is using is the cloud forcing, not the cloud feedback. An uncertainty in the cloud forcing would change the base state, but would not propagate through the simulation. It would be equivalent to not quite knowing the solar insolation. If there were some uncertainty in the level of solar insolation, that would imply an uncertainty in the base state (or equilibrium state), but it would not be something that we would propagate through the calculation so as to produce an ever-growing uncertainty.
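A toy zero-dimensional energy-balance sketch (my own construction, with illustrative parameter values) shows why a constant forcing uncertainty shifts the equilibrium rather than producing ever-growing spread:

```python
# Integrate C * dT/dt = F - lam * T for a constant forcing offset F.
# lam: restoring feedback (W/m^2/K); C: heat capacity (W yr/m^2/K).
# All values illustrative, not taken from any particular model.

def integrate(F_offset, years=200, lam=3.8, C=8.0, dt=0.1):
    T = 0.0
    for _ in range(int(years / dt)):
        T += dt * (F_offset - lam * T) / C
    return T

# A +/-1 W/m^2 uncertainty in the (constant) cloud forcing:
hi, lo = integrate(+1.0), integrate(-1.0)
spread = hi - lo
print(round(spread, 2))   # saturates near 2/lam ≈ 0.53 K

# Running twice as long changes nothing: the spread is bounded, not growing.
print(round(integrate(+1.0, years=400) - integrate(-1.0, years=400), 2))
```

The uncertain constant forcing produces an uncertain equilibrium (base state) of fixed width; it does not compound step after step the way a random-walk treatment would suggest.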
How does one go about reconciling a 1D Random Walk propagation error with the conservation laws?
How do we know, a priori, that the propagation error proceeds as a (x-axis (or time) symmetric) parabola?
One thing I don’t understand is what the proposed implications are supposed to be? I believe Frank has been careful to suggest that this error propagation doesn’t apply to the real world, so that he isn’t associated with a claim that future warming could be much greater than previously thought. However, given that there is uncertainty in real world longwave cloud forcing (of a similar magnitude to the CMIP5 model spread used by Frank) this error propagation should logically apply to the real world too if it applies to model uncertainty.
If it’s suggesting a physics problem with CMIP5 models the obvious thing to do would be to test whether his hypothesised error propagation actually happens by running the models into the future. Which of course has already been done many times and no such huge errors appear. So is Frank suggesting that something is being “hidden”?
Another thing is that Frank’s calculations are dependent on the spread over the CMIP5 model ensemble. If there were only one GCM with one average LWCRF then there would be zero error according to Frank.
You need to get some more interests. Life is short.
A deceptive person could easily create such an argument. Consider that a pure random walk is a martingale that allows excursions to infinity, so the long-term mean value of any individual path is undefined.
In physics, this is easily accounted for by applying an Ornstein–Uhlenbeck process correction which forces the mean to revert to a fixed value.
Who knows what creative wrongness that Pat Frank is applying
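For what it’s worth, a small simulation (my own sketch, with arbitrary parameters) illustrates the contrast: a pure random walk’s variance grows linearly with the number of steps, while adding Ornstein–Uhlenbeck-style mean reversion bounds it.

```python
import random

random.seed(0)

def final_variance(theta, n_paths=1000, n_steps=400, sigma=1.0):
    """Sample variance of x after n_steps of
    x_{t+1} = x_t - theta * x_t + sigma * noise.
    theta = 0 is a pure random walk; theta > 0 adds mean reversion."""
    finals = []
    for _ in range(n_paths):
        x = 0.0
        for _ in range(n_steps):
            x += -theta * x + sigma * random.gauss(0.0, 1.0)
        finals.append(x)
    mean = sum(finals) / n_paths
    return sum((f - mean) ** 2 for f in finals) / n_paths

var_walk = final_variance(theta=0.0)  # grows ~ n_steps (here ≈ 400)
var_ou = final_variance(theta=0.2)    # bounded, ≈ 1/(1-(1-0.2)^2) ≈ 2.8
print(var_walk, var_ou)
```

Physical systems with a restoring feedback behave like the second case: the mean reversion keeps excursions bounded no matter how long you integrate.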
“However, given that there is uncertainty in real world longwave cloud forcing (of a similar magnitude to the CMIP5 model spread used by Frank) this error propagation should logically apply to the real world too if it applies to model uncertainty.”
I think the key difference is that the Earth is our integrator. One doesn’t assume, say cumulative degree days, then go about integration of a very accurate (say perfect, sigma ~ 0, which is impossible) high frequency (say 10 Hz) digital thermometer calibrated to the highest NIST (or SI) standard. That instrument will have the same biases in the future (bias offset, frequency response and sigma).
The Earth (e. g. humans) is the real time integrator, we only need to come back a decade-century-millennium later and use our same old MIG thermometers.
In the same way, the path taken by AOGCMs/ESMs is very much less important than the final delta T. A final climate sensitivity will emerge which has essentially a zero error bar.
The coin came down “elbows” in No Game No Life!
Clive Best says: “It only matters if the error in cloud feedback is changing with time. Do models predict any change in global cloud cover associated with warming? My understanding is that they do. If so, then this change in cloud forcing (feedback) does have an associated error.”
When I was still doing cloud observations years ago there was a study that showed that the climate models which reproduced the 3D state of the clouds best according to observations were the ones with the highest climate sensitivity. So yes, there could be such a relationship, but that does not mean that study X gives you information on Y. You will have to study Y.
PaulS: “I believe Frank has been careful to suggest that this error propagation doesn’t apply to the real world, so that he isn’t associated with a claim that future warming could be much greater than previously thought.”
Frank does make a claim about reality, even about the now:
“The unavoidable conclusion is that an anthropogenic air temperature signal cannot have been, nor presently can be, evidenced in climate observables.”
Somehow based on a paper on dynamical climate models he is able to make conclusions about observed warming.
I am not expecting anyone to waste their time refuting this. Left unrefuted, it makes for a nice honeypot trap.
Nick Stokes has an interesting update to his blogpost, suggesting there is even a unit error.
Yes, I noticed that. Pretty elementary.
I notice that in Equation 5 of Pat Frank’s paper he simply drops the year-1 in his uncertainty, so he would probably argue that the units are right. Of course, he doesn’t really explain why he can do so.
In fact, it’s even more bizarre. Pat Frank’s fundamental equation is essentially

\Delta T_i = f \times \Delta T_{GHE} \times \frac{F_0 + \sum_i \Delta F_i}{F_0} + T_0,

where f is some coefficient, \Delta T_{GHE} is the warming due to the greenhouse effect, F_0 is the total forcing due to greenhouse gases, \Delta F_i is the incremental change in greenhouse gas forcing at the ith step, and T_0 is either 0 if considering anomalies, or the base temperature if not.
Hence, you evolve the above without considering an explicit timestep. It’s simply assumed to be linear in change in forcing. So, Frank includes a year-1 in his cloud forcing uncertainty, and then simply drops it.
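The emulator described in the comment above, together with the quadrature accumulation, can be sketched as follows (the values f ≈ 0.42, 33 K, and F₀ ≈ 34 W/m² are illustrative, roughly the figures discussed around the paper, not definitive):

```python
import math

def emulator_dT(f, dF_list, F0, T0=0.0):
    """Delta T_i = f * 33 K * (F0 + sum(dF)) / F0 + T0.
    Linear in forcing; no explicit timestep appears anywhere."""
    return f * 33.0 * (F0 + sum(dF_list)) / F0 + T0

print(round(emulator_dT(0.42, [], 34.0), 2))   # base state, ~13.86 K

# Frank then attaches a +/-4 W/m^2 "per year" uncertainty to every step
# and sums in quadrature, so the envelope grows like sqrt(n) regardless
# of the forcing actually applied:
def frank_envelope(n_steps, u=4.0, f=0.42, F0=34.0):
    per_step = f * 33.0 * u / F0        # K per step
    return math.sqrt(n_steps) * per_step

print(round(frank_envelope(100), 1))    # ~16 K after a century
```

Note that the year⁻¹ attached to the uncertainty does nothing in the second function except license one "tick" per step, which is exactly the dropped-unit oddity being discussed.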
That is a weird paper.
Frank doesn’t know units. And he uses this odd +/- in front of (6), coming from \sigma^2 being the argument under the \sqrt, probably thinking in intervals.
He argued, too, that the units originate from calculating “statistical averages” instead of “measurement averages”. Very weird.
Indeed, it is very weird. I think Nick Stokes has pointed out a number of these oddities in his post.
In a WUWT comment, Pat Frank claimed that
So, I ran a poll on Twitter which posed:
I’m a scientist, I’m aware of Pat Frank’s analysis, and I
a. understand and accept it.
b. think it’s nonsense.
The results are now in.
(I used scientist, instead of physical scientist, to be inclusive).
Ah, but ATTP, he could argue that you did not specify it to be *physical* scientists – and of course the No True Scotsman Fallacy will apply when some physical scientist comes along and says it is indeed nonsense. I guess Peter Thorne is thereby automatically ruled out to be a physical scientist (he’s apparently rejected the paper when submitted elsewhere, perhaps even multiple times, considering his Twitter comment). And you are ruled out, too, of course.
Yes, I am indeed probably ruled out. Pat has already mentioned that Nick Stokes is no scientist and that he suspects I’m not either. It’s quite convenient when you can redefine people so as to delegitimise their critiques.
I actually downloaded the file with all the various journal submissions, comments and responses. The paper has been rejected by 13 different journals. It’s also remarkable how many people in the climate science community are cowardly idiots who don’t understand science. I’d also forgotten that James Annan was also one of those who rejected the paper.
“It’s also remarkable how many people in the climate science community are cowardly idiots who don’t understand science”
Another interesting “cowardly idiot” on the list is Ronan Connolly, highly praised by Lord Monckton. I think he was included as a friendly referee. But he wanted changes, so he copped it too.
I missed that. I’ve just found Frank’s response. Quite funny, really.
I don’t think it is always wise to target the messenger, but in this case I will make an exception. Pat Frank has all kinds of dubious associations, including the Heartland Institute. Along with Patrick Moore and Jay Lehr, who probably both need no introduction here, he wrote a ‘climate change primer’ for Heartland last year. He also just wrote a risible piece for WUWT. He seems to wear his ideological baggage on his sleeves.
It apparently took him 6 years to get this rubbish published. To be honest, it doesn’t surprise me that it ended up in an open access Frontiers journal. Imho I don’t rate any of the Frontiers journals very highly. This one has an IF apparently of 1.31. More of a bottom-feeder than a high flier. Moreover, as I wrote a couple of years ago with two colleagues, there is a lot of concern over the push for open access as a money-making business model rather than as a conduit for sound, repeatable science.
Don’t be surprised when this paper will sink without a trace.
Well, PF did move his ‘so called’ Cone of Nonsense (or CoN) backwards from circa 2000AD …
to circa 1945AD …
S-o-o-o-o-o-o-o-o, why stop at 1945AD? I think the CoN should be moved back even further, say to PI, circa 1750AD. But why stop there? I highly recommend moving the CoN backward 4.6 billion years!
Rats, can’t even read a PF graph. 1945AD is more like 1958AD. Makes such a BiG difference in the CoN though. 😉
In the limit as dT approaches zero, at T = 0, the slope of the CoN approaches infinity. Very interesting limiting BC: the error rate is at its maximum, infinity, at T = 0!
According to Pat Frank, “I was really glad they chose Carl Wunsch. I’ve conversed with him in the rather distant past, and he provided some very helpful insights. His review was candid, critical, and constructive.
I especially admire Davide Zanchettin. He also provided a critical, dispassionate, and constructive review. It must have been a challenge, because one expects the paper impacted his work. But still, he rose to the standards of integrity. All honor to him.”
I do not understand how that is possible that either of those scientists would offer “constructive” reviews for a paper whose central thesis is absolutely ridiculous. Since I know people who know Carl, I may ask one of them to check in on him and find out if Pat Frank is accurately reporting the content of the reviews…
MMM: They probably finished off with an Oh Henry 4:25.
Wunsch is definitely a force when it comes to climate dynamics related to the ocean and his papers over the last 50 years provide lots of insight. He apparently retired in 2013 from MIT.
I’m changing my screen name to TokedOutDude …
“Don’t be surprised when this paper will sink without a trace.”
Don’t be surprised if the conclusion of this paper: “whatever impact CO₂ emissions may have on the climate cannot have been detected in the past and cannot be detected now” escapes the echo-chamber and gets recited somewhere as fact. I lump Pat’s nonsense in the same toilet bowl as Monckton’s feedback bs. It’s bs, but it’s plausible enough to the uninformed to be ammo in another front in the doubt-mongering war. Look how hard Watts is pushing it.
So important this crap is immediately and brutally strangled at birth.
Pingback: Climate Intelligence [sic] Foundation - Ocasapiens - Blog - Repubblica.it
Thanks ATTP for the opportunity. Will ease up on my angst. The post on the exoplanet looks interesting. Regards.
Didn’t last long
angech wrote (at WUWT)
Of course it isn’t deflection, it is a fundamental flaw in Frank’s argument. The last line is sheer hubris.
This whole argument is literally just about basic Probability and Statistics, combined with some high-school level of confusion about averages.
Angech… this isn’t some big revolutionary paper. It’s not even really “interesting”. It’s what you would get if the person who struggled in high school math managed to get enough of an audience that he can convince some of them that the teacher is wrong.
I… I don’t know how to break it to you, bud. We’re not talking about Relativity here; this isn’t advanced tensor calculus with Lorentzian manifolds. It’s not “advanced” math. This subject matter comes from literally the easiest undergraduate math classes you can take, after calculus. And the side you’re on — they’re flubbing it.
I’m not saying that you should just accept what we’re saying as an article of faith. Rather, make sure you have a real, solid understanding of how probability works first. Like, make sure you could correctly do the homework problems in the textbook — and then go take a hard look at Pat Frank’s work. If you do that, then I think the flaws will be glaringly obvious.
~signed, a math major who still uses this stuff on a daily basis
I think that when you’re looking at global warming denial, the issue is that those behind it aren’t really doing science. ** (I know… some may be doing real science… but keep this thought out there.)
They are trying to sell the idea that everything we know is wrong… somehow.
Since they aren’t engaged in science, they aren’t targeting scientists or even speaking with them. It’s all about getting something to the public.
To that end, it doesn’t have to be right, it just has to look right. Joe public doesn’t speak or use math in any fashion. Joe public sees graphs looking bad. Joe public thinks there may be something to this. Oh look, it came from a reputable source… (?)
The ‘gold standard’ is to get something peer reviewed in a journal. If the work is really bad, junk journals fit that bill. If you don’t use this material, then it’s pretty hard to tell the difference between a good journal and a bad one.
By and large I run most sources of material through the wringer: Who wrote it? What is their motivation? Are there detractors, and why? Who vetted the work? Is it publicly available? How new are the ideas?
Global Warming denial tends to fail all over the place with that set of tests. I have found a few think tanks that weren’t bad. But that just leaves you with their built in bias which is that if its private, it doesn’t have to present (or even try to) all the pertinent facts. In real science you get it drummed into your head that you need to list anything significant, and discuss errors.
~signed an electrical engineer who is required by law to apply science in his work.
Pingback: 2019: A year in review | …and Then There's Physics