Nic Lewis’s latest estimates

This is going to be a rather lazy post, as I’ve been at the beach all day with the kids and don’t have much energy. I was just interested in whether or not anyone had had a chance to look at Nic Lewis’s latest paper (Implications of recent multimodel attribution studies) which is due to appear in Climate Dynamics. It seems to be mainly an update on his 2014 paper in which he uses an Objective Bayesian approach to infer climate sensitivity.

I tried working through Nic Lewis’s latest paper, but there was lots of discussion about Bayesian analysis, and as much as I’d like to claim that I understand Bayesian statistics, I don’t really – well, not at a level that allows me to work through Nic Lewis’s paper. The basic results, however, are illustrated below.

[Figures: probability density functions for Nic Lewis's ECS and TCR estimates. Credit: Lewis (2015)]

So, the ECS and TCR ranges are not unreasonable, but they are somewhat lower than many other estimates. I'll make a couple of quick comments.

  • As I understand it, this is a basic energy-balance approach and so cannot capture all the complexities of our climate; for example, it assumes that feedbacks are linear. To be fair, Nic Lewis does discuss some of this at the end of his paper.
  • The IPCC ECS likely range is 1.5K – 4.5K. At the recent Ringberg meeting, it appeared that many regard the ECS range as probably being between 2K and 3.5K. Even though Nic Lewis's results are reasonable, they still seem to be at odds with what most other experts regard as likely. His results suggest that the ECS is more likely to be below 2K than above, while many others seem to regard it as more likely to be above 2K than below.
  • This is where I might potentially embarrass myself, but my understanding is that one of the strengths of Bayesian statistics is that you can incorporate prior knowledge. Nic Lewis uses an Objective Bayesian approach which – as I understand it – means that he regards his prior assumption as objective. However, our basic prior knowledge is that ECS is probably above 2K. That his method produces a result suggesting the ECS is probably below 2K might suggest that some kind of physically motivated prior would be preferable to one that is regarded as objective. This seems to be roughly what James Annan is saying here (see the toy sketch below this list).
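
To make that point concrete, here is a minimal toy sketch in Python. This is emphatically not Nic Lewis's actual method: the Gaussian 1.8 ± 0.6 constraint on the feedback parameter, the grid, and everything else are made-up illustrative numbers, chosen only to show how the choice of prior alone can shift an ECS posterior.

    import numpy as np

    # Toy example: one likelihood for the feedback parameter lambda = F_2x / S,
    # combined with two different priors on the sensitivity S.
    F2X = 3.7                          # forcing for doubled CO2, W m-2
    S = np.linspace(0.3, 10.0, 2000)   # ECS grid, K
    dS = S[1] - S[0]
    lam = F2X / S                      # implied feedback parameter, W m-2 K-1

    # Pretend the observations constrain lambda to 1.8 +/- 0.6 (Gaussian).
    like = np.exp(-0.5 * ((lam - 1.8) / 0.6) ** 2)

    for name, prior in [("uniform in S", np.ones_like(S)),
                        ("uniform in lambda", 1.0 / S**2)]:  # Jacobian |dlam/dS|
        post = like * prior
        post /= post.sum() * dS        # normalise on the S grid
        cdf = np.cumsum(post) * dS
        print(name, ": median ECS =", round(float(S[np.searchsorted(cdf, 0.5)]), 2),
              "K, P(ECS < 2K) =", round(float(cdf[np.searchsorted(S, 2.0)]), 2))

Same likelihood, different prior, noticeably different medians and tail probabilities – which is essentially what the arguments about priors in the comments below are about.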

Anyway, that’s all I was going to say. If anyone else has any thoughts, or has actually worked through the paper and understands it better than I do, feel free to present them through the comments.


151 Responses to Nic Lewis’s latest estimates

  1. I think that is the one I linked to in the post, or are you suggesting that I shouldn’t have? TBH, I’ve read through the example he gives in the post, and it’s still not quite clear to me. I shall have to try reading it again. I’ll get it eventually, I hope 🙂

  2. This one isn’t about Climate Sensitivity, but does quite nicely illustrate an example where an objective priors will fail to give the correct result.

  3. David Young says:

    http://julesandjames.blogspot.com/2009/09/uniform-prior-dead-at-last.html

    Just so we don’t fall prey to confirmation bias in our selective reading of the science or selecting only the bits we like.


  4. David Young says:
    Just so we don’t fall ..

    Youngy, figure out your hydrodynamics yet?

  5. This Lewis guy is not using effective CO2. Ploink … into the circular bin.

  6. DY,
    If someone had argued in favour of uniform priors, your comment might make some kind of sense. Since no one – here – has, it just seems typically churlish. This also seems remarkably ironic,

    Just so we don’t fall prey to confirmation bias in our selective reading of the science or selecting only the bits we like.

    Unless, of course, you were referring to yourself?

  7. dikranmarsupial says:

    James Annan has (yet) another relevant post here:

    http://julesandjames.blogspot.co.uk/2014/04/objective-probability-or-automatic.html

    Just because a prior is objective, doesn’t necessarily mean it is better, especially if it contradicts existing prior knowledge.

    Hope you had a good day at the beach!

  8. dikran,
    Thanks, it was a good day. I must start to put a proper name to my links, rather than using “here”. The one you highlight is good, but it’s the same one I linked to in my third comment 🙂

    I was actually looking at Nic’s ECS PDF, shown in the figure in the post. For the pink curves, there is a roughly 17% chance of an ECS less than 1K, and a 5% chance of an ECS less than about 0.7K. This just seems physically implausible. How would we explain the greenhouse effect or Milankovitch cycles if the ECS were this small? It could be (probably is) state dependent, but it seems unlikely that it would be this strongly state dependent, and – if anything – we might expect it to be slightly higher in a warmer world than in a cooler world.

  9. dikranmarsupial says:

    (i) Good! (ii) Oops! (iii) Indeed, as I said to Prof. Tol on the second Annan blog post, statistics requires a combination of good intuition and mathematical rigor. Mathematical objectivity is not a license to ignore common sense. Personally, in this case I think the correct thing to do is to use a subjective prior to encode what we think we know about climate sensitivity and then see how that is modified by the observations. I am an objectivist Bayesian by inclination, but there is nothing wrong with using a subjectivist approach where it is the best solution to the problem, provided you are clear about the justification for the prior. There may even be some problems where a frequentist approach might be acceptable! ;o)

  10. I have had two (or perhaps more, depending on interpretation) lengthy discussions with Nic at CA on the value of the “objective Bayesian” method in the analysis of physical systems like the Earth system. I remain fully convinced that it has very little objective justification. It’s objective only in a very limited technical sense that has little relevance for problems of this type.

    The objective Bayesian method is a poor man’s choice, to be applied when nothing else is available and something must be chosen to proceed. In the case of physical systems, there’s always something better that can be used. Furthermore, the objective Bayesian method is not unique at all, but gives answers that depend strongly on the other methods used in the analysis. It’s also possible to choose the step where the method is applied in different, equally plausible ways, in some cases getting very different results.

  11. Andrew Dodds says:

    Terrible innocence here, but surely we could just use a range of different priors, of different derivations, and test the sensitivity of the estimates to the priors?

    And surely, if the result is highly sensitive to the priors (where priors are at least remotely plausible), that would suggest a big problem with the method for this case.

  12. Andrew,
    There is the Annan & Hargreaves (2006) paper that considers a couple of different expert priors. It was a response to the use of uniform priors.

  13. The most commonly discussed issue concerns the plausibility of a prior that’s uniform in climate sensitivity (with a possible sharp cutoff at some high value). All the graphics in IPCC reports reflect such a prior, either explicitly or implicitly. James Annan argued against that. My preference is a prior uniform in feedback strength rather than in climate sensitivity; Annan’s proposals are not identical to that, but go in the same direction. The behavior of Nic Lewis’ “objective” prior is, again, at least qualitatively similar.

    Based on the above, Nic’s prior may well be reasonable, but his argument is not good. That his prior has a qualitatively similar cutoff at high ECS to mine is not accidental; the same mathematical relationships lead to it in his case as in my preferred subjective prior. It’s not known on a more quantitative level, however, whether the further properties of his prior are plausible or not. It’s derived from assumptions that may well be seriously biasing.

    My view is that all the assumptions must be formulated for the physical system considered in a way whose physical basis is understandable. The “objective” rule fails on that count. Nobody has any idea of the physical meaning of that rule. It’s derived from the way measurements are done, but that’s determined by very different factors than those that are used to understand the behavior of the Earth system.
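
    To spell out the mathematical relationship alluded to above (a standard change of variables, nothing specific to Nic’s method): with S = F_{2x}/\lambda, a prior that is flat in the feedback strength \lambda corresponds to

    p(S) = p(\lambda) \left| \dfrac{d\lambda}{dS} \right| \propto \dfrac{F_{2x}}{S^2},

    i.e. a density on S that falls off quadratically at high sensitivity, which is where the cutoff at high ECS comes from.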

  14. dikranmarsupial says:

    Pekka, I think there is a useful distinction between objectivist Bayesianism and objective priors. Objectivist Bayesianism rejects the idea that Bayesian probabilities are necessarily subjective beliefs, holding that they can be regarded as objective states of knowledge (Jaynes is very good on this). “Objective priors” are often used to mean priors that are intended to be uninformative, or minimally informative, in some sense, but IMHO it really isn’t a good name for them (“reference priors” seems preferable, to me at least). An objective Bayes inference with highly informative priors is still just as objective, and in many cases rather more sensible. Being an engineer, I think it is a good idea to incorporate expert knowledge where it exists, but the use of minimally informative priors can still be useful in establishing a “lower bound” on what we can infer.

  15. The best name that I have seen used for the “objective” priors is, IMO, rule-based prior.

    That name conveys well that such priors are formed by first fixing a rule and then applying it to each particular case. That removes the freedom for subjective choices made to (intentionally or otherwise) affect the prior to better serve the goals of the individual. That’s a virtue in several applications, but it’s not a good argument in the scientific search for the most truthful conclusions.

  16. One might argue that if Nic Lewis is going to continue publishing papers suggesting a non-negligible chance of an ECS close to – or below – 1K, then he will have to provide some kind of physical motivation for such a scenario.

  17. dikranmarsupial says:

    indeed, the hierarchy is physics > statistics >= chimps pulling numbers from a bucket

  18. I’m glad you said that. I got jumped on the last time I suggested that physics trumps pure statistics 🙂

  19. dikranmarsupial says:

    As a statistician (of sorts) I ought to be able to get away with it, although I did forget the ;o)

    Fortunately there is a fair bit of room between the physicists and the chimps we can usefully occupy!

  20. The fundamental strength of the Bayesian approach is that it recognizes explicitly the limitations of statistics and statistics based inference.

    The prior comes from outside of the method, i.e. from the subject science in the case of science. No conclusions can be drawn without some knowledge or assumptions about the prior.

  21. Pekka,

    The prior comes from outside of the method, i.e. from the subject science in the case of science. No conclusions can be drawn without some knowledge or assumptions about the prior.

    As I said, I have no particular expertise in Bayesian statistics, but that was certainly my impression of the fundamental strength.

  22. dikranmarsupial says:

    @Pekka, well put – a combination of good statistics and good physics is what is required for most observational inference problems.

  23. Paul S says:

    One glaring issue is that Gillett et al. 2013 provide their own observationally-constrained TCR estimate simply by applying the GHG regression coefficients to the known TCRs of the models, with additional uncertainty relating to the efficacy difference between all-GHG and CO2-only. The result is a range of 0.9-2.3K and a mean of 1.6K. I can’t see the methodological benefit of introducing a simple energy-balance-model reenactment in order to estimate TCR when the simple translation makes perfect sense in terms of the logic of the scaling approach.

    Surely the difference between these TCR estimates should have been discussed in the paper, since they derive from exactly the same data? With this in mind, is the main result of the paper clear evidence that the EBM method is biasing results low?
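
    For concreteness, the “simple translation” is just TCR_constrained = scaling factor × model TCR. A toy Monte Carlo version of that propagation, in Python, with placeholder numbers standing in for Gillett et al’s actual regression coefficient and model TCRs, would look something like this:

        import numpy as np

        # Sketch of the scaling approach: an observationally constrained TCR is
        # beta * TCR_model, where beta is the GHG regression (scaling) factor.
        # All numbers below are placeholders, not Gillett et al. (2013) values.
        rng = np.random.default_rng(0)
        tcr_models = np.array([1.3, 1.5, 1.6, 1.8, 2.0, 2.1, 2.2, 2.4])  # K
        beta_mean, beta_sd = 0.9, 0.2   # multimodel-mean GHG scaling factor

        beta = rng.normal(beta_mean, beta_sd, size=(100_000, 1))
        tcr_constrained = beta * tcr_models        # broadcast over the models
        lo, med, hi = np.percentile(tcr_constrained, [5, 50, 95])
        print(f"constrained TCR: {med:.2f} K (5-95%: {lo:.2f}-{hi:.2f} K)")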

  24. Paul S,
    Doesn’t that then answer one question. If Nic Lewis’s TCR estimates are around 20-30% lower than that if the models used by Gillett et al. (2013) doesn’t that imply that the basic energy balance method can easily underestimate climate sensitivity by 20-30% (as suggested in Ringberg). To put it differently, if the best fit model to the data has a higher TCR than the simple EBM method would suggest, then there is something that the EBM method is missing. Okay, I don’t think I’ve explained that very well, but maybe you get what I mean.

  25. 1. Equilibrium will always remain a hypothetical.

    2. What is the estimate on temperature variance if atmospheric constituents remained constant?

    Images from ‘Physics of Climate’ (Peixoto and Oort)

  26. Paul S says:

    In this case I’m just looking at TCR, where there is about 10-15% difference. To me, what Lewis has done with this paper is the kind of approach I’d use for bias testing a method, and the result appears to indicate a 10-15% low bias.

    I think ECS is much more complicated. Analyses I’ve run suggest the Otto/LewisECS doesn’t provide much of a constraint on a model’s AndrewsECS (the method used for diagnosing CMIP5 model ECS). Finding an Otto/LewisECS of 1.68K and TCR 1.4K is quite comfortably compatible with an AndrewsECS of 3K. But equally it could mean an AndrewsECS of 1.68K. I don’t think there’s a simple bias story there.

    Do you know what the 30% number entails? I think accounting for probable OHC and surface warming underestimates would produce about a 30% bump but that’s more about input than method.

  27. Paul S,
    Thanks. This is a good point

    To me, what Lewis has done with this paper is the kind of approach I’d use for bias testing a method, and the result appears to indicate a 10-15% low bias.

    The 20% – 30% was partly just me being lazy and partly because my recollection of tweets during the recent Ringberg meeting was that the general view was that EBM methods tend to underestimate ECS by about 20-30%. I think that was probably because a basic physics-type calculation would suggest an ECS > 2K and a basic EBM-type calculation might suggest an ECS below 2K.

  28. BBD says:

    FFS Eddie, your fig. 2 is so obsolete it’s not even funny.

  29. BBD,

    Old doesn’t necessarily mean obsolete ( and you can tell I’m old, because only and old person says that ).

    But, here are some similar looks (though not idealized and not back to earth’s creation):

  30. TE,
    Where do the plots come from? At first glance, they don’t make sense to me, but maybe I don’t quite understand what you’re plotting. You seem to be suggesting that there’s lots of power at long periods, and little power at short periods, which seems the wrong way around. The y-axes are, however, different in the two plots, so I’m not quite sure what you’re actually showing.

  31. Here’s the AR4 discussion – evidently moved in the AR5 pdfs.

    https://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch9s9-4-1-3.html

  32. So that’s consistent – CO2 would be a long term influence.
    But also have to include what nature throws in which appears to be greater on the century scale than the biennial.

  33. TE,
    Okay, but those include the anthropogenic forcings. I had thought it was the internal variability only. Not quite sure what your point is. The models compare well with observations?

  34. BBD says:

    TE

    Old doesn’t necessarily mean obsolete

    It does in the case of your fig. 2. Ain’t telling you a third time.

  35. BBD says:

    Replace all but the very earliest section of that cartoon with this:

    Source: Zachos et al. (2008)

  36. > You seem to be suggesting that there’s lots of power at long periods, and little power at short periods, which seems the wrong way around…

    No, it’s what you expect.

  37. BBD says:

    You cannot get an informative power spectrum from an obsolete palaeotemperature reconstruction, although to be fair, P&O’s does pick up the 100ka, 40ka and 20ka orbital cycles.

  38. No, it’s what you expect.

    Is that because it includes the anthropogenic forcings (which I hadn’t realised) or am I just being particularly dim – or both 🙂

  39. It’s nothing to do with the anthro forcing; you always expect the spectrum to look red. The classic for this is Hasselmann: http://onlinelibrary.wiley.com/doi/10.1111/j.2153-3490.1976.tb00696.x/abstract

  40. BBD says:

    Turb

    1. Equilibrium will always remain a hypothetical.

    So what? Quasi-equilibrium will do. There was quasi-equilibrium during the LGM and during the late pre-Industrial Holocene.

  41. William,
    Thanks, but the abstract says

    In the usual Statistical Dynamical Model (SDM) only the average transport effects of the rapidly varying weather components are parameterised in the climate system. The resultant prognostic equations are deterministic, and climate variability can normally arise only through variable external conditions. The essential feature of stochastic climate models is that the non-averaged “weather” components are also retained. They appear formally as random forcing terms. The climate system, acting as an integrator of this short-period excitation, exhibits the same random-walk response characteristics as large particles interacting with an ensemble of much smaller particles in the analogous Brownian motion problem. The model predicts “red” variance spectra, in qualitative agreement with observations.

    So, does this mean that the long-period variability is typically externally forced, with internal variability producing the short-period variability? Or am I still confused?

  42. Okay, I think I kind-of get it now. This part seems to suggest that even if it is forced by short-timescale “weather” the slower components of the system will still produce a power spectrum that peaks at long periods (low frequency)

    The variability of climate is attributed to internal random forcing by the short time scale “weather” components of the system. Slowly responding components of the system, such as the ice sheets, oceans, or vegetation of the earth’s surface, act as integrators of this random input much in the same way as heavy particles imbedded in an ensemble of much lighter particles integrate the forces exerted on them by the light particles. If feedback effects are ignored, the resultant “Brownian motion” of the slowly responding components yields r.m.s. climate variations – relative to a given initial state – which increase as the square root of time. In the frequency domain, the climate variance spectrum is proportional to the inverse frequency squared. The non-integrable singularity of the spectrum at zero frequency is consistent with the non-stationarity of the process. The spectral analysis for a finite duration record yields a finite peak at zero frequency proportional in energy to the duration of the record.

  43. Yes. You see the same kind of thing in Milankovitch-type spectral analysis, but with peaks imposed for the orbital cycles of course. When people try to see if peaks in that kind of data are significant, they need to fit red noise as the background, not white.

  44. It’s interestingly counter-intuitive, though – I think. In a turbulent cascade, the driving happens at large scales and the turbulence cascades to smaller scales, eventually being dissipated at very small scales. In the climate, it appears to be driven at small scales and amplified – by the slow components of the system – to large scales. Presumably, the difference here is that the feedbacks act to stabilise the system, rather than some dissipation of the turbulence itself.

  45. [Mod: DY] the drive-by gets whacked by this one.

    Isaac Held said in his blog what we always knew:


    I would also claim that these turbulent midlatitude eddies are in fact easier to simulate than the turbulence in a pipe or wind tunnel in a laboratory. This claim is based on the fact the atmospheric flow on these scales is quasi-two-dimensional. The flow is not actually 2D — the horizontal flow in the upper troposphere is very different from the flow in the lower troposphere for example — but unlike familiar 3D turbulence that cascades energy very rapidly from large to small scales, the atmosphere shares the feature of turbulence in 2D flows in which the energy at large horizontal scales stays on large scales, the natural movement in fact being to even larger scales. In the atmosphere, energy is removed from these large scales where the flow rubs against the surface, transferring energy to the 3D turbulence in the planetary boundary layer and then to scales at which viscous dissipation acts. Because there is a large separation in scale between the large-scale eddies and the little eddies in the boundary layer, this loss of energy can be modeled reasonably well with guidance from detailed observations of boundary layer turbulence. While both numerical weather prediction and climate simulations are difficult, if not for this key distinction in the way that energy moves between scales in 2D and 3D they would be far more difficult if not totally impractical.

    [Mod : DY] said this was all impossible to solve numerically, justified apparently because he works with airplanes at Boeing.

  46. Rob Nicholls says:

    Dikran Marsupial: “indeed, the hierarchy is physics > statistics >= chimps pulling numbers from a bucket”…”Fortunately there is a fair bit of room between the physicists and the chimps we can usefully occupy!” That made me smile.

  47. David Young says:

    Web, You are quite confused I’m afraid.

    1. This has nothing to do with airplanes and is really just standard fluid dynamics and, just as importantly, numerical analysis, a field that is, I believe, under-appreciated by GCM builders and runners.
    2. The spectral plots TE showed are typical of 3D chaotic noise.
    3. 2D is indeed computationally easier, but there are plenty of pathologies there too. There is vortex shedding, bifurcations, multiple solutions, etc. Eddy viscosity models disagree a lot on things like a backward-facing step bubble length and are probably all wrong. Sometimes 2D is harder, sometimes easier. We do both kinds of modeling and most people do, as all 3D turbulence modeling is really based on 2D correlations anyway.
    4. Held is only talking about mid-latitude circulation, which is only part of the climate. Tropical convection is ill posed and Held himself showed how sensitive it can be to the size of the computational domain.
    5. The planetary boundary layer is indeed a big problem as it’s pretty thin and not well resolved. There is a lot of controversy about its effect on temperatures measured at the surface. For example, you can find opposite views about whether irrigation cools (daytime) or warms (nighttime) the climate. Vortex shedding off mountains is completely unresolved at their grid resolution.

    These difficulties are not insurmountable, but it is a fact that the typical truncation error in GCMs is almost certainly a lot bigger than the temperature anomaly that is sought. That makes it really a mess in which it’s difficult, if not impossible, to separate out various sources of error. What we have found is that to have any hope you need to try to isolate and eliminate sources of error. If you can’t distinguish between truncation or temporal errors and subgrid model errors, it’s pretty much a pseudo-scientific activity.

    [Mod : redacted]

    I would suggest some reading in the literature. Snippets from blogs are not really conducive to anything but belligerent and profound ignorance.

  48. verytallguy says:

    DY,
    first

    If you can’t distinguish between truncation or temporal errors and subgrid model errors, its pretty much a pseudo-scientific activity.

    then

    I would suggest some reading in the literature. Snippets from blogs are not really conducive to anything but belligerent and profound ignorance

    A suggestion. Perhaps rather than posting snippets on blogs which, as you so reasonably point out, encourage belligerence and profound ignorance, you could usefully publish your findings that GCMs are pretty much a pseudo scientific activity, for the technical reasons you outline above, in a reputable journal.

    If you can demonstrate that GCMs are as profoundly useless as your blog snippets imply, this would be ground-breaking. It should not be difficult either, given your evident confidence in your mastery of the field.

    Until published though (or can you provide a citation?), leading by example and exhibiting less belligerence might be worthwhile?

  49. I second this

    If you can demonstrate that GCMs are as profoundly useless as your blog snippets imply, this would be ground-breaking. It should not be difficult either, given your evident confidence in your mastery of the field.

  50. See what I mean? The condescension is absolutely dripping from the Boeing genius.

    For whatever reason the guy does not want to understand that water can slosh back and forth across a large basin, independent of small disturbances. And that is what ENSO is — a large-scale sloshing behavior. His argument is like saying you can prevent sloshing in a bucket by attaching a vibrator to the bucket while you are carrying it.

    No doubt that he is smart, and this is what we are dealing with when faced with these genius-level denialists who make drive-by appearances in social media. They spout off their vaunted wisdom while catching lots of gullible readers unaware. You know it’s funny that the deniers that are taking a crack at solving ENSO have at least an appreciation for making advances via trial-and-error experimentation, and not simply a blank dismissal based on experiences with wind-tunnels.

    I think that DY is a coward if he doesn’t join the Azimuth Forum and make his challenges heard there. There are a handful of others like him that have the same modus operandi, including Curry’s favorite — the infamous Tomas.

  51. niclewis says:

    Paul S

    “Gillett et al. 2013 provide their own observationally-constrained TCR estimate simply by applying the GHG regression coefficients to known TCRs of the models, with additional uncertainty relating to efficacy difference between all-GHG and co2-only. The result is a range of 0.9-2.3K and mean 1.6K.”

    That is incorrect. Gillett et al (2013) derived their 0.9-2.3 K TCR range and 1.6 K mean by taking the mean estimated TCR of the 9 models used and multiplying it by the uncertainty bounds and mean estimate for a single multimodel average GHG regression coefficient (scaling factor), not by “GHG regression coefficients”.

    The more natural approach, which your use of the plural implied was used, of applying each model’s own mean estimate GHG regression coefficient to its own TCR implies (excluding one very poorly constrained outlier at each end) a 5-95% range of around 0.9-1.95 K, which then needs widening somewhat to allow for other uncertainties. That is very compatible with my study’s TCR range of 0.75-2.15 K using Gillett et al’s results. And my median TCR estimate of 1.39 K based on Gillett’s results is pretty close to the median of his observationally constrained TCR estimates for each model, which is ~1.45 K (and almost identical whether the multimodel average GHG scaling factor is used or those for individual models).

    My paper explains why I did not derive a TCR estimate simply by applying multimodel or average estimated GHG regression coefficients to model TCR estimates.

  52. Nic,
    Apart from the odd word, I’m not sure how this differs from what Paul S said

    That is incorrect. Gillett et al (2013) derived their 0.9-2.3 K TCR range and 1.6 K mean by taking the mean estimated TCR of the 9 models used and multiplying it by the uncertainty bounds and mean estimate for a single multimodel average GHG regression coefficient (scaling factor), not by “GHG regression coefficients”.

    Was it just too difficult for you to start with “Actually, what Gillett et al (2013) did was…”, rather than “That is incorrect”? In some utterly pedantic sense, you are probably correct, but it doesn’t engender any sense that you have any great interest in an actual discussion. As appears to be the norm, you pop up to point out some very specific error in what someone else has said, and then disappear, only to reappear when you’ve found another very specific error in what someone else has said. Feel free to prove me wrong, but I don’t think I’ve ever seen an interaction involving you that doesn’t revolve around you highlighting all the minor errors in what others have said, while ignoring all the points that they’re actually trying to make.

    You could prove me wrong by responding to some of the points made in the various comments following this one.

    Oh, and have you yet acknowledged that you were wrong to criticise Marotzke & Forster as you did, and maybe apologised for implying that they had made a schoolboy-like error? I haven’t seen it, if you have, but it would seem to be something that you should be considering.

  53. Willard says:

    > My paper explains why […]

    This is incorrect. The author explains in the paper why, if indeed he does. Papers may contain explanations, but do not explain. Not yet.

    ***

    > That is very compatible

    As opposed to “just a bit compatible”?

  54. BBD says:

    Nic Lewis

    Your results are just as incompatible with palaeoclimate behaviour as they were last time.

    This should tell you something.

  55. BBD says:

    No, that’s wrong. TCR *not* ECS.

    Your results are just as incompatible with palaeoclimate behaviour as they were last time.

    This should tell you something.

    Appalling cold. Not up to commenting.

  56. Nic’s TCR values seem plausible. It is indeed the ECS result that seems to suggest that the lower values are more probable than seems likely. As I mentioned earlier, I would like to hear Nic present a physically plausible argument that justifies his ECS PDF.

  57. I was just considering our current situation. We’ve warmed by around 0.85C and have a planetary energy imbalance – today – of 0.6Wm-2. We can use the following calculation.

    {\rm ECS} = \dfrac{3.7 \Delta T}{\Delta F - \Delta Q},

    and,

    \Delta T = \dfrac{\Delta F \lambda_o}{1 - f},

    where \lambda_o is the no-feedback sensitivity, to show that a reasonable estimate for the feedback response (including the Planck response) would be around -1.4Wm-2K-1 (Soden & Held give around -1.21Wm-2K-1).

    That suggests that the warming in the pipeline is around 0.4C. That would suggest a transient response – today – of 0.85C and an equilibrium response – to 400ppm – of 1.25C. The transient response is, therefore, around 70% of the equilibrium response – which, I think, is similar to what is suggested by climate models. If you look at Nic's table, the ratio is more like 0.83. So, one might argue that both best estimates can't be physically plausible, unless we think that the ratio between the TCR and ECS can be as high as 0.83.
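
    Spelling that bookkeeping out in a few lines of Python (same numbers as quoted above; the -1.4Wm-2K-1 total feedback is the estimate from this comment, not a measured value):

        # Energy-balance bookkeeping using the numbers quoted in the comment.
        dT, dQ = 0.85, 0.6   # warming to date (K), current imbalance (W m-2)
        lam = 1.4            # magnitude of total feedback (incl. Planck), W m-2 K-1

        pipeline = dQ / lam       # committed warming at constant forcing, K
        dT_eq = dT + pipeline     # equilibrium response to today's forcing
        print(round(pipeline, 2), "K in the pipeline")          # ~0.43
        print(round(dT / dT_eq, 2), "transient / equilibrium")  # ~0.66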

  58. FourEcks says:

    “Formula does not parse”

    Have you accidentally put Monckton’s irreducibly simple climate model in by mistake?

  59. FourEcks,
    Very good 🙂

    No, and I have now fixed it. The next formula is essentially Monckton’s irreducibly simple model, though. It’s fine if you want to estimate something like the actual ECS.

  60. niclewis says:

    ATTP,
    Yes, but the “odd word” changes the meaning and the resulting TCR range.

    “You could prove me wrong by responding to some of the points made in the various comments following this one.”

    OK. The fact that my study’s results show a material amount of probability at ECS levels of below 1 K, whilst some other evidence suggests that is unlikely, seems irrelevant to me. The normal scientific method involves individual studies reporting estimates based on their own data and analysis. Meta-analysis studies can then combine separate estimates of the same unknown parameter(s) from different studies.

    As for paleoclimate studies, some of them show significant probability at low ECS levels. E.g., Hargreaves et al (2012) GRL had a similar 5% uncertainty bound to that in my paper.

    As for Pekka’s views on Bayesian priors in relation to this paper, see a critique of them (not by me) here: http://climateaudit.org/2015/06/02/implications-of-recent-multimodel-attribution-studies-for-climate-sensitivity/#comment-760298

    “Oh, and have you yet acknowledged that you were wrong to criticise Marotzke & Forster as you did, and maybe apologised for implying that they had made a schoolboy-like error?”

    No. I wasn’t wrong, although (as I wrote in my critique) the M&F paper had various flaws, not just the non-exogeneity (circularity) issue, and it may possibly not have been the most important one. Pekka made hand-waving arguments that the M&F forcing estimates were accurate despite the non-exogeneity, so the regressions weren’t biased. I think not. The the period I analysed in my critique (1951-2012), M&F’s forcing estimate for the bcc-csm-1-m model version was almost 30% higher than for the bcc-csm-1 version; in reality the two versions have the same forcings. However, another time I would probably be gentler on the authors. And another time I don’t think they would issue such an aggressively-worded press release.

    By coincidence, Lucia has just published an article lambasting MF15 over another issue, here: http://rankexploits.com/musings/2015/how-to-obscure-by-reducing-power-of-a-test-marotzke-and-forster/
    She concludes: ‘All in all: there are tons of blips in there that make the paper unsuited to supporting a conclusions like “The claim that climate models systematically overestimate the response to radiative forcing from increasing greenhouse gas concentrations therefore seems to be unfounded”.’

  61. On the critique of my views at CA, my conclusion is that nothing substantial has been presented. More or less everybody has admitted explicitly that no so-called “objective Bayesian” analysis is truly objective. During the later stage of that discussion I asked for more specific and concrete arguments, but no-one presented any. In my view the critique boiled down to essentially nothing.

    The same has been repeated on a lesser scale in my exchange with Paul_K in the latest thread at CA. Paul_K made statements somewhat similar to those I criticized earlier. I asked for clarification, but got back only arguments that do not address my main point at all, only statements on the other parts of the analysis, where I do not have any disagreement.

    The fundamental fact that no Bayesian analysis is objective remains true, and most importantly it’s highly relevant in this case. The kind of arguments Nic presents may turn out to be very severely erroneous. As I have written, that’s not clear, however, as some features of his prior are plausible, and the role of the additional assumptions hiding in his prior has not been analyzed sufficiently as far as I know.

  62. Nic,
    Well, kudos for coming back.

    OK. The fact that my study’s results show a material amount of probability at ECS levels of below 1 K, whilst some other evidence suggests that is unlikely, seems irrelevant to me.

    I disagree, very strongly. A complicated statistical analysis that produces results that appear somewhat inconsistent with our physical understanding of a system requires some explanation. Your result suggests that feedbacks are likely to be small overall (i.e., likely to produce an ECS less than 2K). That isn’t what the physical evidence suggests. You can’t simply appeal to some complicated statistics and dismiss physical climatology.

    No. I wasn’t wrong, although (as I wrote in my critique) the M&F paper had various flaws, not just the non-exogeneity (circularity) issue, and it may possibly not have been the most important one.

    Well, I think you were wrong on multiple levels. Firstly, the tone of the Climate Audit post was poor and unprofessional. If you want to behave like a typical online blog commenter, that’s your right. You don’t, however, have the right to be taken seriously if you do. Also, you claimed an obvious and trivial error. It clearly is not obvious and trivial, or else we would have all agreed. That you may have found another issue does not change this. The non-exogeneity issue also makes me think you don’t understand this topic as well as you would like people to think you do. If the paper that estimated the forcings produced a reasonable estimate for those forcings, then the fact that they are – by definition – external means that there is no circularity problem. That they used the model temperatures to make that estimate does not make it so. You need to show that the paper that estimated the forcings did not produce a reasonable estimate for the forcings, not simply point out that they also used the temperature when doing so.

    By coincidence, Lucia has just published an article lambasting MF15 over another issue, here:

    Well, I’ve only just seen that, but there’s a reason people refer to what Lucia writes as “word salad”. On first glance, this doesn’t appear to be any different.

  63. Willard says:

    > The fact that my study’s results show a material amount of probability at ECS levels of below 1 K, whilst some other evidence suggests that is unlikely, seems irrelevant to me.

    It just so happens that Nic wrote something with Marcel Crok for the GWPF, called Oversensitive:

    http://www.thegwpf.org/content/uploads/2014/02/Oversensitive-How-The-IPCC-hid-the-Good-News-on-Global-Warming.pdf

    Sometimes, you’re just lucky.

    ***

    > As for Pekka’s views […]

    Look, a squirrel wrote this yesterday!

    ***

    > I wasn’t wrong, although (as I wrote in my critique) the M&F paper had various flaws, […]

    No, but look, another squirrel wrote this today!

  64. Willard says:

    > “word salad”.

    A better word is Eli’s parsomatics. The practice comes from a long tradition in the auditing sciences:

    http://neverendingaudit.tumblr.com/tagged/parsomatics

  65. Lucia’s post just seems bizarre. She appears to be suggesting that Marotzke & Forster are essentially testing the following claim

    The claim that “climate models systematically overestimate the response to radiative forcing” is that models overestimate the response to radiative forcing. That is: the models might predict too high trends when radiative forcing is positive and high, and likewise would predict trends that are too low when radiative forcing has a large negative value. When radiative forcing is very small, the error would exist, but would be very small. When radiative forcing is equal to zero, the model mean will have no bias.

    and that they didn’t really do so, hence their suggestion that the above claim is wrong is in error. Essentially, Lucia seems to be suggesting that if there is some bias, that it’s magnitude will increase as the change in forcing increases and that there should be no bias if there were no change in forcing. So, she seems to think that to test this you should remove periods when the trend is small, since that would correspond to periods when the change in forcing was small. However, this ignores that the suggestion in Marotzke & Forster is that the mismatch is largely because of internal variability, which does not mean that a small trend has to correspond to a period when the change in forcing was small. Anyway, that’s as much as I can manage. Maybe Nic can explain why he seems to think it’s a good post?

  66. Willard says:

    From the parsomatics’ Mecca:

    > MF purported to support that claim using a number of figures.

    Scientists do not always purport to support a claim, but when they do, they use a number of figures.

    So here’s one:

  67. Of course, if Nic doesn’t trust me with respect to Marotzke & Forster, he can always consider James Annan’s view. James says

    Just to expand on it a bit, if MF got the “correct” values for forcing, α and κ then it wouldn’t matter where these numbers came from.

    Of course, James does go on to say

    However, there will be some uncertainty/inaccuracy in the estimates they have derived, and these did come from the temperature time series, and will lead to some circularity. So the question is really whether these inaccuracies are big enough to matter.

    which is a more interesting issue, but doesn’t qualify as an obvious, trivial, circularity error.

  68. Paul S says:

    Nic lewis

    That is incorrect. Gillett et al (2013) derived their 0.9-2.3 K TCR range and 1.6 K mean by taking the mean estimated TCR of the 9 models used and multiplying it by the uncertainty bounds and mean estimate for a single multimodel average.

    In practice this doesn’t appear to make any difference, if it is what they did. The mean of the 8 usable individual model constrained central TCR results is 1.6K and the median about 1.5K, with a 5-95% range of 1.0-2.5K. One of the 9 models gives a nonsense result and can’t be used*, but your choice to discard another is not clearly justifiable. Did you exclude that model’s result from your Gillett AW numbers? I didn’t notice that detail.

    * Since you refer to it as a low-end outlier, presumably you received information from the authors about what the central TCR estimate was? It’s so poorly constrained it’s off the edge of the figure.

  69. Willard says:

    Speaking of Pekka, I think we can safely replace Nic’s:

    I wasn’t wrong.

    with

    I ‘m not sure that I follow Pekka’s arguments.

    http://www.climate-lab-book.ac.uk/2015/marotzke-forster-response/#comment-448941

    That Nic claims not being wrong does not mean he claims being right.

    Or is it purport to claim?

  70. David Young says:

    VTG, There is only so much time each of us is granted and climate is not a high priority for me as there is plenty of progress to be made in more scientifically interesting areas.

    We have a string of publications that are somewhat relevant but rather technical. One last summer and another this summer. There are a couple of others to follow. The frustration in our group is that the community is really pretty resistant to negative results and the literature is quite affected by positive results bias.

    GCM climate modeling is just a giant mess that seems to be not even defended seriously anymore on rigorous grounds. In any case, Gerry Browning has already tried to do some of what you suggest over the last 20 years. His advisor was, I think, the one who discovered the sound wave cure for weather models, around the mid 1970s. Gerry’s work is, I believe, rigorous and correct and has never been rebutted by anyone.

    My take away from the last 20 years of work is what the Economist, Science, and numerous mainstream publications seem to be discovering — the profound prejudices inherent in the modern scientific system. When you are immersed in it, and more importantly must keep the soft money flowing to keep your job, you tend to be an apologist. More senior people are often more honest and more credible I think.

    Webby, your name calling and personal tripe is of no relevance and [snip].

  71. DY,
    I’m not going to bother responding to you any more. Your comment is just condescending and about as lacking in self-awareness as it is possible to be.

    GCM climate modeling is just a giant mess that seems to be not even defended seriously anymore on rigorous grounds. …..

    My take away from the last 20 years of work is what the Economist, Science, and numerous mainstream publications seem to be discovering — the profound prejudices inherent in the modern scientific system.

    As you yourself say, there’s only so much time, and wasting it engaging with someone who can say what you’ve just said is not worth doing.

  72. In reply to ATTP, niclewis said on June 10, 2015 at 7:27 pm:

    “”Oh, and have you yet acknowledged that you were wrong to criticise Marotzke & Forster as you did, and maybe apologised for implying that they had made a schoolboy-like error?”

    No. I wasn’t wrong, although (as I wrote in my critique) the M&F paper had various flaws, not just the non-exogeneity (circularity) issue, and it may possibly not have been the most important one.”

    One of your claims (evidently one of your major claims since the term “circularity” was in the title of your article) was that Marotzke & Forster (2015) (M&F) made a purely mathematical (as in algebraic) mistake, this being a circular reasoning mistake in the (underlying algebraic) construction of M&F in the form of regressing a variable on itself.

    I proved in
    https://andthentheresphysics.wordpress.com/2015/01/31/models-dont-over-estimate-warming/#comment-48402
    on February 18, 2015 at 10:33 am in the thread under the post “Models don’t over-estimate warming?” that the construction of M&F is perfectly valid algebraically, since, for any mapping m in any group, we can on m construct a *valid* composition of mappings on m with m being the outer mapping and with the inner mapping being a k-ary mapping for some k > 1, where one of those k argument variables of the inner mapping is the output variable in m. (I remind you that the real numbers form a field, whose elements form an additive group and whose nonzero elements form a multiplicative group. So group theory is fundamental here. A field of characteristic 0 implies an infinite set. I included that restriction of characteristic 0 to allow for infinitely many cases in my proof.)

    That is, in a nutshell, I proved this:

    One of the most basic theorems in group theory implies the validity of the algebraic construction of M&F, and since you deny the validity of this construction, you, by modus tollens, deny the truth of one of the most basic theorems in group theory.

    This theorem is that for any a,b in a group G (and this includes any noncommutative group) there exist unique x,y satisfying a = xb and a = by, these unique solutions being x = ab^{-1} and y = b^{-1}a. This theorem allows us to construct bijective unary functions f(x) = a = xb and g(y) = a = by. And this allows us to obtain the surjective binary function (or surjective binary operation, if you wish) itself over the group by taking the union of all the bijective unary functions obtained from letting b run through all of G in either f or g. This theorem and its implications give us tools to construct these compositions of mappings I talk about in which the inner mapping is a k-ary mapping for some k > 1.

    Another way of looking at it: The underlying group theory imposes the following: Your particular claim of a circularity problem in the M&F construction (this alleged circularity problem in the form of regressing a variable on itself) holds *only if* the inner mapping in the composition of mappings in question is a unary mapping. But it’s not a unary mapping. It’s a binary mapping. And so, by modus tollens, your claim of a circularity problem doesn’t hold.

    By the way: Do not come back in reply with a bunch of statistics, since that would be irrelevant – the underlying algebra of the additive group of the real numbers and the multiplicative group of the nonzero real numbers trumps statistics as to what is algebraically valid in these groups. Your claim of a circularity problem as a purely mathematical mistake is actually an (abstract) algebraic claim on this additive group (additive since the binary inner mapping in the composition of mappings in question from M&F uses the operation of addition). That is, it’s a claim that they got the underlying (abstract) algebra wrong. But I proved that that they didn’t and that the claim that they did is the purely mathematical mistake of not recognizing that the inner mapping of that composition of mappings in question is binary, *not* unary.

    Take any formula in mathematics, science, or engineering on which we know it is legitimate to do regressions on two of the variables in the formula. Anyone who is fluent enough in (abstract) algebra should be able to see that we can *validly* construct that which you claim is an invalid circular construction (but by group theory, it’s a *valid* construction). That is, by group theory, we can on this regression construct from the formula a *valid* composition of mappings on that regression with the regression being the outer mapping and with the inner mapping being a k-ary mapping for some k > 1, where one of those k argument variables of the inner mapping is the output variable in the regression. (Number k is the number of argument variables in the inner mapping [called binary for k = 2], these variables taken from the formula.) By group theory, your claim of invalidity via alleged circularity is therefore invalid. (And again, in particular, since one of the most basic theorems in group theory implies the validity of that which you claim is invalid, by modus tollens you claim to be false one of the most basic theorems in group theory.)

    (Final note. David Young at this comment
    https://andthentheresphysics.wordpress.com/2015/04/10/andy-lacis-responds-to-steve-koonin/#comment-53088
    on April 12, 2015 at 9:57 pm, wrote,
    “ATTP, I was surprised Nic didn’t add a correction to his post saying that he may have been wrong about the circularity…”
    in reply to ATTP’s
    “I’m not aware of Nic Lewis withdrawing his claim of a trivial and obvious statistical mistake.”
    at
    https://andthentheresphysics.wordpress.com/2015/04/10/andy-lacis-responds-to-steve-koonin/#comment-53087
    on April 12, 2015 at 9:57 pm. Some even on “the other side” against mainstream climate science seem to have understood how it is that the purely mathematical mistake that exists here is the claim itself that there is a purely mathematical mistake via circularity in M&F.)
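
    (For reference, the group-theory fact invoked throughout the comment above is just the division property, which can be stated compactly as

    \forall\, a, b \in G \;\; \exists!\, x, y \in G: \quad a = xb, \;\; a = by, \qquad x = a b^{-1}, \;\; y = b^{-1} a. )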

  73. Paul S says:

    Looking again I think the bias could be considered slightly greater than indicated previously because the Gillett EIV AW (attributed warming) number used by Nic Lewis appears to be slightly larger than the mean and median of the 8 viable individual model AW results, and should therefore reasonably be expected to produce a higher TCR.

    Delving a bit deeper, the ratios of TCR to AW in Lewis’ central results are nearly identical for both the Gillett and Jones numbers, at about 1.6. This ratio is within the envelope of the individual TCR/AW model ratios in the Gillett paper, but at the low end of the 1.4-2.4 range, and well below the mean 1.9 and median 2.0. It’s clear Lewis’ uncertainty ranges in TCR do not take into account this full uncertainty in AW->TCR translation, nor is any justification provided for why the central value must reside at the low end of this range.

  74. Eli Rabett says:

    David Young is again his charming self. Somewhat less amusing is his fundamental misunderstanding of GCMs. Steve Easterbrook has a short introduction which played here last year, and which the savant Young might profit from. Eli has shortened it to something that bunnies with David’s attention span might understand: “GCM: Explicative value is strong, predictive only in the long run.”

    As Steve points out, GCMs are not needed to predict what will happen if we keep increasing greenhouse gases: it will get hot, and there will be major damage. This was known even 120 years ago. GCMs are useful for getting long-run answers to questions about what will happen if we do something about it. Moreover, because those who work with them know the strengths and weaknesses of each, interpreting the results requires expertise; naivety is not always amusing, but is most often wrong.

  75. Eli,
    Indeed, I think I’ve tried to point some of that out to DY in the past.

    Also, it would be interesting to get Nic’s take on Paul S’s most recent comments.

  76. Joshua says:

    ==> ““ATTP, I was surprised Nic didn’t add a correction to his post saying that he may have been wrong about the circularity…””

    I missed that. Kudos to David – although I am surprised that David would be surprised….particularly since he said later in the same comment:

    ==> “But this behavior, while not what I prefer, seems to be the norm in climate science. ”

    Which, while I commend David for being willing to ask Nic to be accountable for being tribalistic, does seem to suggest how difficult it is for David (like everyone else) to control for his own biases.

    Why would anyone be surprised by Nic not being accountable for tribalistic behavior, particularly when it is the norm?

  77. With long-term aGHG and medium-term ENSO behaviors well-accounted for by physics models, all that is left is to account for volcanic eruptions. The other medium-term to longer-term LOD variations with correlation to 40-60 year global temperature cycles will also likely become better understood, so volcanoes are the only prediction problem.

  78. GCM: Explicative value is strong, predictive only in the long run
    Well, I’d agree that the longer the run, the greater the GHG load so the more the influence should be apparent.
    But as I pointed out above, natural variance increases in the long run also.
    So attribution may not be any clearer and predictions are less, not more certain.

  79. It will get hot,
    OK, temperatures should increase ( natural variability notwithstanding ),
    but can’t get away without asking, “how much?”

    and there will be major damage.
    Does summer cause damage?

  80. JCH says:

    Ever seen irrigated cotton plants die as seedlings?

  81. anoilman says:

    And endless summer would really suck. Just ask California.

  82. Joseph says:

    , natural variance increases in the long run

    What exactly do you mean by “natural variance?” Solar is the only one that I am aware of that can have a long term influence.

  83. Ever seen irrigated cotton plants die as seedlings?
    Is that why they grow cotton in the north to avoid the warmth?

    And why cotton production fell as global temperatures rose?

  84. BBD says:

    Don’t be obtuse, Turb.

    Plants – especially those acclimated to hot summer conditions – have a narrow thermal range. Exceed it and yields plummet. This will be happening later this century and global food security is very likely to take a big hit.

    When Pollyanna is starving, she goes quiet.

  85. BBD says:

    but can’t get away without asking, “how much?”

    Oh there is never any progress with you and by God it is tedious.

    Given plausible best estimates for sensitivity: too much, unless substantive emissions-reduction policy is set in motion now.

  86. BBD says:

    But as I pointed out above, natural variance increases in the long run also.

    So attribution may not be any clearer and predictions are less, not more certain.

    I don’t think this is correct. Forcing increase sustained on the centennial scale or longer will outweigh natural variability.

  87. Joseph,

    What exactly do you mean by “natural variance?” Solar is the only one that I am aware of that can have a long term influence.

    What’s he referring to is this. If I understand this properly, what this shows is that the amplitude of century-scale variability, can be larger than the amplitude of decadal-scale variability. However, that still means (as shown by Palmer & McNeall (2014)) that the decadal-scale internally-driven trends (oC/decade) will still be larger than the century-scale internally-driven trends. Therefore – as BBD points out – internal variability can mask anthropogenically-driven warming on decadal scales, but not on century scales (or not as easily as on decadal scales).

  88. JCH says:

    The summer before the Cotton Bowl where Kansas State played Arkansas, irrigated cotton plants died as seedlings. I met these farmers (at the Cotton Bowl). They had never seen anything like it.

    One interesting thing, a fan behind us from Arkansas overheard our conversation, and he had the same thing happen. He raised shrubs and flowers for nurseries. His father was there. They had been in the business since right after WW2. Irrigated plants dying in the heat.

    The short-term dryness was most acute in the Coastal Bend area, where at least one county experienced a total failure of its cotton crop.

  89. JCH says:

    66 million trees died in Harris county that summer.

  90. anoilman says:

    Turbulent Eddie: I’m flattered that you think it’s OK to spend your money wildly buying imported food. That’s your current plan, right? Many US crops are now moving across the border, which can only mean that you Americans are going to have to import them.

    The fact is that food production globally is expected to pick up, but that’s not related at all to global warming. There are a lot of efforts to increase food production, particularly in the third world. But in the meantime, you plan to import more food.

    There’s currently a push within Canada to prevent the export of water to the US, and instead to sell you what that water is used to manufacture: food. This ends badly for the US. The expected mega-droughts will be crippling.

    Hey! Have you ever lived somewhere with water restrictions? When I was in South Africa they were in the middle of a drought. We had to share the bath water between everyone in the family. The water was dirty grey by the time I got in. Do you think that’ll happen in California?

  91. anoilman says:

    Here’s more current data on cotton:
    http://www.usda.gov/oce/forum/2015_Speeches/Cotton.pdf

    It looks like farmers are spending lots and lots of money adapting to US temperature shifts. The global outlook looks better, but that’s mainly because the Chinese want more clothes.

  92. The summer before the Cotton Bowl where Kansas State played Arkansas, irrigated cotton plants died as seedlings. I met these farmers (at the Cotton Bowl). They had never seen anything like it.

    One interesting thing, a fan behind us from Arkansas overheard our conversation, and he had the same thing happen. He raised shrubs and flowers for nurseries. His father was there. They had been in the business since right after WW2. Irrigated plants dying in the heat.

    The short-term dryness was most acute in the Coastal Bend area, where at least one county experienced a total failure of its cotton crop.

    That’s different – now you’re talking about drought.

    People like to wave their hands and talk about drought in terms of global warming and say
    “well, evaporation will increase, so the soil will dry out.”

    But that’s not what happened at all in 2011.
    Texas that year experienced a profound lack of rainfall.
    And if you examine the storm tracks, you’ll see that fewer storm systems
    (mid-latitude cyclones) passed through Texas.
    This occurs because there are always an effectively infinite number of wave-pattern configurations (of all wavelengths) possible for the atmospheric circulation at any given global temperature.

    Sometimes those configurations mean Texas drought ( 2011, The Dust Bowl, the 1950s ).
    Sometimes those configurations mean Texas floods ( May 2015 and earlier years ).
    It has been happening long before there was a Texas.

    Precipitation is multifactorial – you must have moisture, but you must also have convergence.
    Moisture, on a global basis, is modeled to increase, so it’s certainly not a constraint.
    Convergence is a dynamic component, known to be unpredictable much beyond a week or two,
    so discussions about precipitation and global warming are mostly idle speculation.

  93. This Isaac Held posting has a good visualization of a standing wave pattern.

  94. BBD says:

    Turb

    That’s different – now you’re talking about drought.

    But I wasn’t.

    And later this century we will see an increase in the frequency of summer temperature extremes *and* of droughts. Think ‘expanding Hadley Cells’.

    Poor Pollyanna then.

  95. BBD says:

    And Turb…

    That’s different – now you’re talking about drought.

    No, JCH wasn’t either:

    irrigated cotton plants died as seedlings.

    And:

    Irrigated plants dying in the heat.

    Irrigated plants don’t die of drought, Eddie.

    But on you go, with the old denialist two-step. It’s tedious Eddie.

  96. Irrigated plants don’t die of drought, Eddie.

    Evidently these did.
    Temperatures are higher during summertime droughts because the soil has a reduced heat capacity from lack of rainfall.
    But it is not valid to infer the converse:
    higher temperatures do not cause drought.

    Think ‘expanding Hadley Cells’.

    Or, think expanding monsoons.

  97. David Young says:

    So I trumped the Rabbet and read Isaac’s post. It is pretty balanced and follows a middle course, acknowledging the convection and cloud problems for example. I don’t strongly disagree with it, even though I doubt that GCMs are one of the greatest achievements of science. What I said technically is unaffected by it. So far as I can see, what has happened here at this blog is that no technical point I made was addressed, much less contradicted. There was a lot of objection to the “tone” of more general statements about science, which are of course very common statements in real fields of science, and in the Lancet for example. There was a lot of name-calling and abusive language that was left to stand. Then there was the obligatory reference to a superficial video. Climate science is such a wonderful field, and its blog defenders are just so great at keeping things civil.

  98. anoilman says:

    Lucifer/Turbulent Eddie: Picking a single year as your example is called cherry-picking. It means you are ignoring all the evidence in order to make a statement. It’s not a credible way to discuss things.

    Did you notice the lack of increase in cotton production when you look at current data instead of your old data? Or is that not really the kind of point you wanted to make?

    No increase in cotton was not exactly what you were going for, I assume. And in the hottest years ever.

  99. Joshua says:

    And there I thought that David was going to explain why he was “surprised” that Nic would behave in a fashion that “seems to be the norm.”

  100. anoilman says:

    I’m feeling hungry… how about a baked Alaska? Anyone want some? Oops, it’s a bit overdone!
    http://earthobservatory.nasa.gov/IOTD/view.php?id=85932

  101. Eli Rabett says:

    TE: Higher temperatures do not cause drought.

    Drought has a front end, less precipitation and a back end, more evaporation. Yes Eddy, higher temperatures can cause drought.

  102. The denialists also have no clue as to how to model fossil fuel reserves – take a look at how my oil shock model and dispersive discovery analysis are being used to estimate the long-term outlook for crude oil production.

  103. DY,

    So far as I can see, what has happened here at this blog is that no technical point I made was addressed much less contradicted.

    No, what happened (at least as far as I can remember) is that there was agreement about some of your technical comments, but disagreement about the significance of those comments. That a GCM cannot self-consistently model turbulence does not immediately mean that it is useless, or that all those who use them are idiots. What you appear incapable of understanding is that presenting something that is factually correct does not immediately imply that what you interpret from those facts is correct.

    There was a lot of objection to the “tone” of more general statements about science, which of course are very common statements in real fields of science and in the Lancet for example.

    No, there was an objection to someone as biased as you appear to be, pontificating about bias in others.

    There was a lot of name calling and abusive language that was left to stand.

    Really? Are you new to the online debate about climate science? Have you never commented elsewhere? Strange, as I’m pretty sure I’ve seen a David Young complaining about – and insulting? – me elsewhere on blogs. Is that someone else?

    Then there was the obligatory reference to a superficial video. Climate science is such a wonderful field and its blog defenders are just so great at keeping things civil.

    And you’ve been a paragon of virtue? Some give and take might help, but – as I said above – I’ve pretty much had enough. I can’t see much point in repeating our discussions over and over again. You should bear in mind that me disagreeing with your general view that GCMs are a complete and utter mess, does not mean that I think they are the ultimate climate science tool. Your apparent inability to see nuance just makes such discussions largely pointless.

  104. BBD says:

    Turb

    Evidently these did [die of drought].

    Read. The. Words:

    Irrigated plants dying in the heat.

    What does ‘irrigated’ imply wrt availability of water to crop?

    That’s the third time now. It was tedious at iteration #1. Desist, please.

  105. JCH says:

    It was a drought. They irrigated from moment one. Because it was a drought. Droughts are nothing new to old Texas farmers.

    Try again, you bloomin’ ostrich… what was new to the Texas and Arkansas farmers was that their irrigated plants died. When it rains, a drought ends; when water sprays out of a nozzle, the drought ends too, as far as the plant is concerned.

    Because it was also the summer of seemingly never ending 100-plus-degree days.

  106. BBD says:

    Let’s have a word from Michael Tobis, speaking of Texas burning in the heatwave of 2011 and more besides:

    The pervasive nature of climate change exacerbates many other risks. Failure to account for those other risks occurs for similar reasons to the increasingly obvious failure to account for climate risks. Combined disasters combine worse than additively.

    This is how it hits the fan.

    Pollyanna burning.

  107. AOM, yup, even the Bakken oil is showing a sustained drop in production levels that may be prolonged. No doubt the oil scientists who devised these approaches were geniuses, but ingenuity can only go so far against a natural-resources wall.

    Interesting that the auto-tune algorithm used on singers was apparently invented by the Exxon scientist Andy Hildebrand. To audio purists that is evil, but some people consider it an overall positive.

  108. Drought has a front end, less precipitation and a back end, more evaporation.
    Yes Eddy, higher temperatures can cause drought.

    No.

    Evaporation is greatest just before the drought (because there is water to evaporate)
    and decreases as the drought proceeds, because there is little left to evaporate.

    The 2011 Texas and the current California droughts were clearly caused by a lack of precipitation, and
    that lack of precipitation was clearly caused by fluctuations of the circulation, which will always happen and are not predictable.

  109. anoilman says:

    Drip irrigation was developed to slowly and constantly drip water to the plant roots. Since you’re not spraying water everywhere on the surface, it reduces evaporation.
    https://en.wikipedia.org/wiki/Drip_irrigation

    However, by definition it is a more expensive and energy-intensive operation. And if you talk to local farmers in Canada, impossibly expensive: we have gophers, and they chew pipes. That renders many types of farming difficult or even impossible up here.

    I sure hope you don’t have a mega drought in the US. Yup. Not that I don’t appreciate the denialist efforts to export jobs and squander money on essentials.

  110. anoilman says:

    Turbulent Eddie: I read your post as backing Eli. So, I guess, thanks.

    Not all fluctuations in circulation are random. California’s drought is tied to global warming;
    http://news.stanford.edu/news/2014/september/drought-climate-change-092914.html

    In this case, it comes from the now-wandering Jet Stream, which is being rather persistent in how it flows over California. Not so random or unpredictable.

    I sure hope the whole global warming trend ends before the expected North American mega droughts occur.

    Didn’t California flood in the last El Nino? That might give them some respite. Though oddly, many global cycles are changing. (See previous video on Jet Stream.)

  111. BBD says:

    Turb

    See Dai (2013):

    Historical records of precipitation, streamflow and drought indices all show increased aridity since 1950 over many land areas. Analyses of model-simulated soil moisture, drought indices and precipitation-minus-evaporation suggest increased risk of drought in the twenty-first century. There are, however, large differences in the observed and model-simulated drying patterns. Reconciling these differences is necessary before the model predictions can be trusted. Previous studies show that changes in sea surface temperatures have large influences on land precipitation and the inability of the coupled models to reproduce many observed regional precipitation changes is linked to the lack of the observed, largely natural change patterns in sea surface temperatures in coupled model simulations. Here I show that the models reproduce not only the influence of El Niño-Southern Oscillation on drought over land, but also the observed global mean aridity trend from 1923 to 2010. Regional differences in observed and model-simulated aridity changes result mainly from natural variations in tropical sea surface temperatures that are often not captured by the coupled models. The unforced natural variations vary among model runs owing to different initial conditions and thus are irreproducible. I conclude that the observed global aridity changes up to 2010 are consistent with model predictions, which suggest severe and widespread droughts in the next 30–90 years over many land areas resulting from either decreased precipitation and/or increased evaporation.

  112. BBD says:

    And then there’s Cook et al. (2014):

    Global warming is expected to increase the frequency and intensity of droughts in the twenty-first century, but the relative contributions from changes in moisture supply (precipitation) versus evaporative demand (potential evapotranspiration; PET) have not been comprehensively assessed. Using output from a suite of general circulation model (GCM) simulations from phase 5 of the Coupled Model Intercomparison Project, projected twenty-first century drying and wetting trends are investigated using two offline indices of surface moisture balance: the Palmer Drought Severity Index (PDSI) and the Standardized Precipitation Evapotranspiration Index (SPEI). PDSI and SPEI projections using precipitation and Penman-Monteith based PET changes from the GCMs generally agree, showing robust cross-model drying in western North America, Central America, the Mediterranean, southern Africa, and the Amazon and robust wetting occurring in the Northern Hemisphere high latitudes and east Africa (PDSI only). The SPEI is more sensitive to PET changes than the PDSI, especially in arid regions such as the Sahara and Middle East. Regional drying and wetting patterns largely mirror the spatially heterogeneous response of precipitation in the models, although drying in the PDSI and SPEI calculations extends beyond the regions of reduced precipitation. This expansion of drying areas is attributed to globally widespread increases in PET, caused by increases in surface net radiation and the vapor pressure deficit. Increased PET not only intensifies drying in areas where precipitation is already reduced, it also drives areas into drought that would otherwise experience little drying or even wetting from precipitation trends alone. This PET amplification effect is largest in the Northern Hemisphere mid-latitudes, and is especially pronounced in western North America, Europe, and southeast China. Compared to PDSI projections using precipitation changes only, the projections incorporating both precipitation and PET changes increase the percentage of global land area projected to experience at least moderate drying (PDSI standard deviation of ≤−1) by the end of the twenty-first century from 12 to 30 %. PET induced moderate drying is even more severe in the SPEI projections (SPEI standard deviation of ≤−1; 11 to 44 %), although this is likely less meaningful because much of the PET induced drying in the SPEI occurs in the aforementioned arid regions. Integrated accounting of both the supply and demand sides of the surface moisture balance is therefore critical for characterizing the full range of projected drought risks tied to increasing greenhouse gases and associated warming of the climate system.
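
    That PET-amplification effect is easy to see in a toy calculation (a sketch only – the real PDSI and SPEI algorithms fit proper distributions; every number below is invented):

      # Toy "SPEI-flavoured" index: standardize precipitation minus potential
      # evapotranspiration (P - PET), so drying can come from less rain OR
      # from more evaporative demand. Invented numbers, not the real SPEI.
      import numpy as np

      rng = np.random.default_rng(1)
      months = 1200                                      # a century of months
      P = rng.gamma(shape=2.0, scale=30.0, size=months)  # precip, mm (no trend)
      PET = 50.0 + np.linspace(0.0, 15.0, months)        # demand, mm (rising)

      D = P - PET                                        # moisture balance
      index = (D - D.mean()) / D.std()                   # z-score stand-in

      # Precipitation is unchanged, yet rising PET alone drags the index down:
      print("first decade:", round(index[:120].mean(), 2),
            " last decade:", round(index[-120:].mean(), 2))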

  113. BBD says:

    And specific to the USA, Cook et al. (2015):

    In the Southwest and Central Plains of Western North America, climate change is expected to increase drought severity in the coming decades. These regions nevertheless experienced extended Medieval-era droughts that were more persistent than any historical event, providing crucial targets in the paleoclimate record for benchmarking the severity of future drought risks. Here, we use an empirical drought reconstruction and three soil moisture metrics from 17 state-of-the-art general circulation models (GCMs) to show that these models project a significantly drier later half of the 21st century compared to the 20th century and earlier paleoclimatic intervals. This desiccation is consistent across the majority of models regardless of the employed moisture balance variable, indicating a coherent and robust drying response to warming despite the diversity of models and metrics analyzed. Notably, future drought risk will likely exceed even the driest centuries of the Medieval Climate Anomaly (1100-1300 CE) in both moderate (RCP 4.5) and high (RCP 8.5) future emissions scenarios, leading to drought conditions without precedent during the last millennium.

  114. Eli Rabett says:

    TE:

    Estimates of average statewide evapotranspiration for the conterminous United States range from about 40 percent of the average annual precipitation in the Northwest and Northeast to about 100 percent in the Southwest. During a drought, the significance of evapotranspiration is magnified, because evapotranspiration continues to deplete the limited remaining water supplies in lakes and streams and the soil.

    http://geochange.er.usgs.gov/sw/changes/natural/et/

    Wanna have another try??

  115. “There are, however, large differences in the observed and model-simulated drying patterns. Reconciling these differences is necessary before the model predictions can be trusted.”

    Talk about attribution of current droughts to global warming is stupid.

    In an attempt to make climate science meaningful or relevant to the public, some folks lean on the science that is most uncertain: attribution of extreme events to AGW.
    That’s a mistake.

  116. anoilman says:

    Steven Mosher: I agree.

    You’re better off looking in regions where expected shifts in climate are more extreme and looking for your fingerprints there, i.e. Arctic temperature shifts.

  117. So DY is yakking it up elsewhere about this here AT blog:


    He simply cannot entertain any view about science, no matter how mainstream, that implies there is anything important that is wrong. Since this is an emotional response, it blocks real thought once it is engaged.

    Yet, he doesn’t realize that there are many “heretics” on this blog, myself included. The issue is that DY has a POLITICAL problem that he cannot come to terms with. On the other hand, non-political types, i.e. realists, have no problem taking information from all sides and synthesizing it as applicable to the problem at hand.

    Like seasoned comedians, you steal bits from the best of them — if people laff, who cares where the joke comes from?

  118. BBD says:

    Talk about attribution of current droughts to global warming is stupid.

    Nor did I do so explicitly, Steven.

    My comments pointed to the shape of things to come.

  119. Estimates of average statewide evapotranspiration for the conterminous United States range from about 40 percent of the average annual precipitation in the Northwest and Northeast to about 100 percent in the Southwest.

    Right – 100% in the SW, because it can’t evaporate any more than that.

    Evaporation is greatest when the amount to evaporate is greatest and decreases thereafter.

  120. BBD says:

    Turb

    So what are you saying wrt the three studies linked above?

  121. KarSteN says:

    @Steven Mosher: Uncertainty doesn’t mean you shouldn’t try. Attribution is all about probabilities. If the uncertainty is too high, or the probability of a modified likelihood of an event too low, you have still learned something. It’s a mistake not to try. Dismissing an entire branch of the sciences out of hand isn’t particularly constructive, is it?

  122. So DY is yakking it up elsewhere about this here AT blog:

    Oh dear, is DY upset and complaining about me elsewhere? What a surprise. That he appears incapable of correctly interpreting what I say doesn’t instill confidence in his other interpretations. Oh well, ClimateBallTM is alive and well.

  123. Oilman,

    There are many problems with Francis’s ‘wandering jet’ theory, but you have to consider the assumptions.

    Francis postulates that Arctic Amplification is reducing the gradient, which either weakens, displaces, or otherwise alters wave patterns. And of course the gradient does determine the strength and, seasonally at least, the average latitude of the jet stream.

    1. But does Arctic Amplification really reduce the relevant temperature gradient? The Arctic warming is greatest nearest the surface and decreases with height. This is significant, because the ‘jet streams’ are not at the surface, but around 200 mb. In fact, were the tropical upper-tropospheric hot spot actually to have materialized, the pole-to-equator temperature gradient at 200 mb would dramatically increase, not decrease (a rough thermal-wind sketch follows at the end of this comment).

    Also, the temperature gradient is in part formed by fluid flow in addition to determining fluid flow.

    Also consider that the gradients vary substantially from winter to summer, by amounts much greater than any century-scale change might be, and again, we’re not even sure what sign the gradient change might take.

    2. “Wandering Jets” are normal and necessary! In the SW US, like most of the sub-tropics, there is not a lot of precipitation from the statistical average flow. In fact, if circulation were locked at ‘average’ most places would not receive any precipitation. Ever. But the circulation deviates specifically because the jet streams and the attendant wave patterns wander.

    3. The largest factors contributing to the wave patterns are the large-scale gradient set by the seasonal orbit and the orientation of the mountains and oceans – things that won’t change.

    4. ENSO events are fluctuations of wave patterns and they’re quite evident throughout the paleo record. Unless one is suffering from confirmation bias for the cause, not much need to take Francis any further.
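
    To put a rough number on point 1, here is the crude thermal-wind sketch promised above (illustrative values only – the Coriolis parameter, layer temperature, depth, and gradient are all assumed, not taken from any study):

      # Crude thermal-wind estimate of jet shear from a meridional temperature
      # gradient: du/dz = -(g / (f * T)) * dT/dy. All values are assumed,
      # illustrative mid-latitude numbers.
      g = 9.81        # m s^-2
      f = 1.0e-4      # s^-1, mid-latitude Coriolis parameter (assumed)
      T = 260.0       # K, layer-mean temperature (assumed)
      H = 10_000.0    # m, depth of the layer below jet level (assumed)

      def jet_shear(dT_dy):
          """Vertical shear of the zonal wind, per the thermal-wind relation."""
          return -(g / (f * T)) * dT_dy

      # A 40 K poleward temperature drop over 5000 km: dT/dy = -8e-6 K/m
      du = jet_shear(-40.0 / 5.0e6) * H
      print(f"wind increase across the layer: {du:.0f} m/s")  # ~30 m/s

      # The sign question in numbers: surface Arctic warming weakens the
      # low-level gradient, while a tropical upper-level hot spot would
      # strengthen the gradient aloft - opposite effects on the jet.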

  124. Joshua says:

    This is why ANY negative statement about science, even if it comes from scientists themselves causes an emotional response.

    Indeed. ANY negative statement about science and everyone here just gets soooooo emotional!

    willard won’t like this, but lol!

  125. Willard says:

    Seems that Nic was right all along in the case of radiocarbon dating:

    I’m afraid that it is you and the rest of the subjective Bayesians who are wrong about the radiocarbon dating case.

    http://climateaudit.org/2015/06/02/implications-of-recent-multimodel-attribution-studies-for-climate-sensitivity/#comment-760385

    Has anyone thought of starting radiocarbon speed dating services?

  126. Wow, I thought that James Annan’s post illustrated very nicely that Nic’s approach could quite easily give the wrong answer.

  127. BBD says:

    Tinder —> carbon?

  128. Joshua says:

    Did Nic ever respond to James’ post?

  129. Willard says:

    > Did Nic ever respond to James’ post?

    No idea, Joshua. However, Nic never really addressed Radford Neal’s comment on CA, which contained this bit:

    Nic: We agree that the posterior PDF produced by use of Jeffreys’ prior may look artificial.

    The posterior PDF produced by use of Jeffreys’ prior doesn’t just look “artificial”. It looks completely wrong. I think this is the most crucial point. Your example isn’t one that should convince readers to use Jeffreys’ prior because it gives exact probability matching for credible intervals. It’s an example that should convince readers that Jeffreys’ prior is flawed, and probability matching is not something one should insist on. Could there possibly be a clearer violation of the rule “DON’T INVENT INFORMATION”? The prior gives virtually zero probability to large intervals of calendar age based solely on the shape of the calibration curve, with this curve being the result of physical processes that almost certainly have nothing to do with the age of the sample.

    Statistical inference procedures are ultimately justified as mathematical and computational formalizations of common sense reasoning. We use them because unaided common sense tends to make errors, or have difficulty in processing large amounts of information, just as we use formal methods for doing arithmetic because guessing numbers by eye or counting on our fingers is error prone, and is anyway infeasible for large numbers. So the ultimate way of judging the validity of statistical methods is to apply them in relatively simple contexts (such as this) and check whether the results stand up to well-considered common sense scrutiny. In this example, Jeffreys’ prior fails this test spectacularly.

    I think you would maybe agree that Jeffreys’ prior is not to be taken seriously, given that you say the following:

    Nic: … think of the posterior PDF as a way of generating a CDF and hence credible intervals rather than being useful in itself. I agree that realistic posterior PDFs can be very useful, but if the available information does not enable generation of a believable posterior PDF then why should it be right to invent one?

    http://climateaudit.org/2014/04/17/radiocarbon-calibration-and-bayesian-inference/#comment-547957

    Nic let Hu and Nullius cover for him, in that case. Just like he does with RomanM about his circularity argument, incidentally.
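
    Radford’s “don’t invent information” point is easy to reproduce with a toy calibration curve (entirely invented – not real radiocarbon data). For a measurement y ~ N(mu(theta), sigma) with constant sigma, the Jeffreys prior is proportional to |mu'(theta)|, so any plateau in the curve gets essentially zero prior mass:

      # Toy sketch of the radiocarbon objection: with y ~ N(mu(theta), sigma),
      # Jeffreys' prior is proportional to |mu'(theta)|, so flat stretches of
      # the calibration curve are all but ruled out a priori. Invented curve.
      import numpy as np

      theta = np.linspace(0.0, 1000.0, 10_001)  # calendar age (arbitrary units)
      # Assumed curve: decreasing overall, with a plateau around theta ~ 400-600
      mu = -theta + 100.0 * np.tanh((theta - 500.0) / 100.0)

      dmu = np.abs(np.gradient(mu, theta))      # |mu'(theta)|
      prior = dmu / dmu.sum()                   # discretized Jeffreys prior

      plateau = (theta > 400.0) & (theta < 600.0)  # 20% of the calendar range
      print("prior mass on the plateau:", round(prior[plateau].sum(), 3))
      # -> about 0.06: a fifth of the ages gets ~6% of the prior, purely
      #    because of the curve's shape, not anything known about the sample.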

  130. Joshua says:

    I wonder if David is surprised that Nic didn’t really respond to Radford.

  131. Willard says:

    David may be even more surprised that Nic didn’t comment at Eli’s at the time:

    While I was making you believe that Philosopher King was talking, I searched the Internet (which Plato anticipated in his Phaedo) and found this video lecture [1], by Michael… Jordan. Clicking on the titles of the slides makes them appear.

    It’s a slam dunk.

    http://rabett.blogspot.com/2013/02/on-priors-bayesians-and-frequentists.html

    [1]: http://videolectures.net/mlss09uk_jordan_bfway/

  132. Eli Rabett says:

    Yes, TE, dead is dead, even in the desert.

  133. Eli Rabett says:

    As the discussion with Pekka and James showed, the problem with Nic and radiocarbon is he had no idea of what was being measured and the problems in acquiring that data. There is no physics in NicWorld.

  134. BBD says:

    There is no physics in NicWorld.

    Yes but static[s].

    🙂

  135. Mal Adapted says:

    Steven Mosher

    Talk about attribution of current droughts to global warming is stupid.

    In an attempt to make climate science meaningful or relevant to the public, some folks lean on the science that is most uncertain: attribution of extreme events to AGW.
    That’s a mistake.

    Not any more so than attributing a death from lung cancer to the deceased’s cigarette habit. Weather is the local and present manifestation of climate. How hard is it for the public to understand that AGW has altered the probabilities for extreme weather events? We’ve all heard it said:

    “AGW loads the dice.”
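
    The loaded-dice metaphor is cheap to quantify (a sketch with invented numbers, not an attribution result): shift a Gaussian summer-temperature distribution warm by one standard deviation and watch what happens to the old 2-sigma heat extreme.

      # "AGW loads the dice": shift a Gaussian temperature distribution warm
      # by 1 sigma and compare the odds of exceeding the old 2-sigma extreme.
      # Illustrative numbers only - not an attribution calculation.
      from math import erf, sqrt

      def p_exceed(threshold, shift=0.0):
          """P(T > threshold) for T ~ N(shift, 1), in units of sigma."""
          return 0.5 * (1.0 - erf((threshold - shift) / sqrt(2.0)))

      before = p_exceed(2.0)              # ~0.023 in the old climate
      after = p_exceed(2.0, shift=1.0)    # ~0.159 after a 1-sigma warm shift
      print(f"before: {before:.3f}  after: {after:.3f}  "
            f"ratio: {after / before:.1f}x")   # roughly 7x more frequent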

  136. Joshua says:

    Mal –

    ==> ” How hard is it for the public to understand that AGW has altered the probabilities for extreme weather events? We’ve all heard it said:

    “AGW loads the dice.””

    Most people are content to live their lives without a particular interest in climate, science, probabilistic assessments of risk, etc. The probabilities of extreme weather events are not a priority relative to earning a living, which new cell phone to get, etc. And indeed, as a matter of probabilities, it is extremely unlikely that any individual is going to have their lives directly, personally affected by the increase in probabilities of extreme weather.

    People, in general, are not particularly good at assessing risk on a long time horizon; the related tendencies are not unique to climate change.

    My guess is that beating people over the head with statistics is unlikely to have much effect relative to influences of the economy and short-term weather patterns.

    Mosher’s finger-wagging about what is “stupid” is rather typical blogospheric, identity-aggressive, self-aggrandizing point-scoring, but it does relate to an important underlying question about whether a focus on extreme weather might be a sub-optimal approach to gaining support for policies to address climate change.

  137. Joshua says:

    Sorry –

    “..beating people over the head with statistics…” was a stupid turn of phrase on my part.

    My point was, rather simply, to question whether a focus on extreme weather is sub-optimal. I don’t see actual evidence for a blowback effect that is often claimed by “skeptics,” but just because something isn’t counterproductive doesn’t mean that it isn’t sub-optimal.

  138. Mal Adapted says:

    Joshua,

    I respectfully beg to differ. Weather is how “most people” will experience AGW first hand. And if they aren’t personally affected by extreme weather, they’ll read about people just like them being left homeless by floods after record rainfalls. When they notice that crops are withering while the mercury soars, they will ask why it’s so hot. And though probabilistic assessments of risk may not interest them, loaded dice are a metaphor the lottery-ticket-buying public can easily grasp. It’s correct to say that AGW has made some weather extremes more likely, and I predict that people will make the connection when they see it happening.

  139. Joshua says:

    Mal –

    I experience weather on a regular basis. In my life experience, I can’t tell that there’s been an overall change in the weather. I can understand if someone shows me a graph of change in temperature patterns over the span of my lifetime, but the total change is likely to be relatively small, and not something I can feel experientially; it is an abstract kind of understanding, of the sort that falls by the wayside when I need to get my root canal done, or deal with my father-in-law repeating the same questions over and over because of his dementia, or squash the tortoise beetles that are eating my tomato plants.

    I have experienced extreme weather just a couple of times in my lifetime. As such, I have no way of knowing whether my experiences reflect some kind of increased probability. Again, my understanding is necessarily an abstract one. I’m not saying that people can’t “grasp” these concepts – but the “grasping” doesn’t affect their lives on a day-to-day basis. Add to that that many people’s views on the issue are influenced by their political predisposition to accept the “expert” opinions that jibe with their own political orientation and to reject those “expert” opinions that don’t.

    This is the lay of the land. As I understand the science, the time when people are likely to experience an unambiguous signal of climate change – on a scale that they can “feel” it within their own experiential framework, on a time scale of their own life span – is likely to be at least another 50 or 100 years out, and more likely more.

    Again, the patterns of how people deal with risk on long time horizons are fairly common. Thinking that climate change will somehow be unique seems unrealistic to me.

  140. Willard says:

    Breaking news:

  141. anoilman says:

    Willard, the difference between that and Turbulent Eddie is that that’s cool! I’d do that to my roof in a heartbeat.

  142. anoilman says:

    Turbulent Eddie: Let me know if you can find any evidence to support your claims. I’d be interested if you can find anything of value.

  143. Pingback: Physically plausible? | …and Then There's Physics

  144. Brian Dodge says:

    The statement “The Arctic warming is greatest nearest the surface and decreases with height. This is significant, because the ‘jet streams’ are not at the surface, but around 200 mb. In fact, were the tropical upper-tropospheric hot spot actually to have materialized, the pole-to-equator temperature gradient at 200 mb would dramatically increase, not decrease” shows a lack of understanding of the Hadley-Ferrel-polar cell circulation which drives the formation of the jet streams (among other things).

    “Airflows aloft that last for several days or longer tend to achieve an approximate geostrophic balance. The main example is the global jet streams, which flow aloft (roughly 12 km altitude) from west to east. The poleward pressure gradients that sustain them are generated by the warm and buoyant tropical air below the jet level.”
    http://www.nature.com/scitable/knowledge/library/where-do-winds-come-from-100578316

    Figure 4 at http://www.amnh.org/learn/climate/Resource7 shows that the polar jet is isolated from the tropical tropospheric hotspot by the downwelling branch of the Hadley circulation and the horizontal lower flow of the Ferrel circulation. The signal of the “hotspot” has recently emerged from the noise – see “Atmospheric changes through 2012 as shown by iteratively homogenized radiosonde temperature and wind data (IUKv2)”; Steven C Sherwood and Nidhi Nishant; Environ. Res. Lett. 10 054007 doi:10.1088/1748-9326/10/5/054007; http://iopscience.iop.org/1748-9326/10/5/054007/article – but it’s apparently smaller than models predict.

    The “turbulent eddies” around high pressure and low pressure centers transport heat from the tropics toward the poles; as the “warm and buoyant tropical air” transported poleward collides with the polar front the jetstream is generated.

    The tropical tropospheric hotspot will lessen the lapse rate and lower the convective available potential energy driving the upward limb of the Hadley circulation, and ultimately decrease the energy available to “push” the jetstream; Arctic amplification of global warming lessens the difference between the parcels of “warm and buoyant tropical air” and the polar cell, lessening its “pull” on the jetstream. Like the Mississippi River flowing down a low gravitational gradient, the jetstream driven by a lower energy gradient is meandering more, the Rossby waves are propagating more slowly, and that slow-moving system in Texas dropped enough rain last month to supply eight eight-ounce glasses of water to every person on earth every day for the next 10,000 days (a quick sanity check of that figure follows at the end of this comment).

    The hotspot is also inextricably tied to negative feedback to global warming; models that overestimate the hotspot underestimate climate sensitivity; arguments against the hotspot are arguments for higher climate sensitivity and higher risks of global warming.
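
    That rain figure survives a back-of-envelope check (assumed inputs: a 2015 world population of about 7.3 billion, 29.57 mL per US fluid ounce, and the roughly 35 trillion US gallons widely reported as Texas’s May 2015 rainfall):

      # Back-of-envelope check of the "eight 8-oz glasses for 10,000 days"
      # claim. The population, ounce size, and reported statewide rainfall
      # total are assumptions, flagged in the lead-in above.
      GLASS_M3 = 8 * 29.57e-6          # one 8 oz glass, in m^3
      PEOPLE = 7.3e9                   # assumed 2015 world population
      DAYS = 10_000

      demand_km3 = 8 * GLASS_M3 * PEOPLE * DAYS / 1e9     # the claim, in km^3
      reported_km3 = 35e12 * 3.785e-3 / 1e9               # ~35e12 US gallons
      print(f"claim implies ~{demand_km3:.0f} km^3; "
            f"reported ~{reported_km3:.0f} km^3")
      # ~138 vs ~132 km^3 - same ballpark, so the claim is about right.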

  145. Pingback: What a surprise …. not | …and Then There's Physics

  146. Pingback: Prospects for narrowing ECS bounds | …and Then There's Physics

  147. Pingback: A bit more about clouds | …and Then There's Physics
