## Nonlinear feedbacks

I’ve written a number of posts about the energy balance models (EBMs) used by Nic Lewis and thought I might write one more (sorry SB 🙂 ). In a previous post, Victor asked what I thought the reasons were for the difference between EBM climate sensitivity estimates and the GCM estimates. Quite why a climate scientist would ask me I don’t know, but that didn’t stop me from attempting to answer.

One possible reason is that EBMs assume that feedbacks are linear. In other words, they assume that the radiative response to changes in temperature will be the same in the future as it has been in the past (or, rather, that the relationship between the change in temperature and the change in feedback is fixed). GCMs make no such assumptions, and I became aware today (H/T Richard Betts) of a paper that attempts to quantify how feedbacks vary with time in GCMs. The paper is The dependence of radiative forcing and feedback on evolving patterns of surface temperature change in climate models by Andrews, Gregory & Webb. What they did was run a suite of GCM simulations in which CO2 is instantaneously quadrupled and then held constant while the simulation evolves for 150 years. The abstract nicely summarises the basic result:

Experiments with CO2 instantaneously quadrupled and then held constant are used to show that the relationship between the global-mean net heat input to the climate system and the global-mean surface-air-temperature change is nonlinear in Coupled Model Intercomparison Project phase 5 (CMIP5) Atmosphere-Ocean General Circulation Models (AOGCMs). The nonlinearity is shown to arise from a change in strength of climate feedbacks driven by an evolving pattern of surface warming. […] We also demonstrate that the regression and fixed-SST methods for evaluating effective radiative forcing are in principle different, because rapid SST adjustment when CO2 is changed can produce a pattern of surface temperature change with zero global mean but non-zero change in net radiation at the top of the atmosphere (~ -0.5 W m-2 in HadCM3).

So, in models where the sea surface temperature (SST) is allowed to change, the SST can change in such a way that the feedbacks are nonlinear. If I understood the paper properly, this was primarily due to changes in cloud feedback. The figure below illustrates the main result, where the term $\alpha$ is determined using the equation

$N = F + \alpha \Delta T,$

with $N$ the radiative imbalance and $F$ the change in external forcing. Since this is a quadrupling-of-CO2 experiment, ECS would then be $-F/(2\alpha)$, with $\alpha$ here being negative, unlike in other similar studies. The top panels and the bottom-left panel in the figure below illustrate how the temperature changes as the system returns to equilibrium; it is clearly not linear. Similarly, if you compare estimates for $\alpha$ from the first 20 years of the simulations with estimates from the last 130 years, they’re clearly not the same ($\alpha$ is smaller in magnitude, and hence the ECS higher, in the latter period than in the former).
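To make the window dependence concrete, here is a minimal numerical sketch. This is my own illustration; the toy model, the time-varying feedback parameter, and all numbers are made up, not taken from Andrews et al. A one-box energy balance is integrated with a feedback $\alpha$ that weakens over time, and $\alpha$ is then estimated by regressing $N$ against $\Delta T$ over the early and late parts of the run:

```python
import numpy as np

# Toy abrupt-4xCO2 run with a time-varying feedback parameter.
# All numbers are illustrative, not taken from any GCM.
F = 7.4                                   # 4xCO2 forcing, W m^-2
years = np.arange(150)
# Feedback weakens (becomes less negative) as the run evolves:
alpha = -1.3 + 0.4 * (1.0 - np.exp(-years / 30.0))   # W m^-2 K^-1

# One-box ocean: C dT/dt = F + alpha * T  (simple Euler integration)
C = 8.0                                   # heat capacity, W yr m^-2 K^-1
T = np.zeros(150)
for t in range(1, 150):
    T[t] = T[t - 1] + (F + alpha[t - 1] * T[t - 1]) / C
N = F + alpha * T                         # top-of-atmosphere imbalance

def ecs_estimate(T_win, N_win):
    # Regress N against T: slope estimates alpha, intercept estimates F.
    slope, intercept = np.polyfit(T_win, N_win, 1)
    return -intercept / (2.0 * slope)     # ECS for a single doubling

early = ecs_estimate(T[:20], N[:20])      # first 20 years
late = ecs_estimate(T[20:], N[20:])       # last 130 years
# The late window yields the larger ECS estimate, even though each
# regression individually looks like a reasonable linear fit.
```

Because $\alpha$ drifts, the fitted slope depends on which window you choose, which is essentially the effect the paper quantifies: a linear fit to the early decades implies a lower sensitivity than the full run does.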

Figure 1 from Andrews et al. (2014)

Similarly – as shown in the figure below – if you compare ECS estimates using the first 20 years of the simulation with estimates using the last 130 years, you find that the latter period tends to produce higher estimates than the earlier one.

Figure 2 from Andrews et al. (2014)

So, what does this all mean? Well, one thing is that it may give a reason why climate sensitivity estimates using EBMs are different to those using GCMs (there are additional reasons, but I’ll ignore them here). The former assume that feedbacks are linear, while the latter indicate that they might not be. Does this mean that feedbacks are nonlinear? Not necessarily, but it does indicate that they might be. Since EBMs assume that they’re linear, they certainly cannot be used to argue that they aren’t.

As Richard Betts pointed out on Twitter, it also illustrates why EBMs are not really a method that can be used to narrow the uncertainty in future warming. If they assume that feedbacks are linear and they turn out not to be, then the EBM estimate will be wrong, however narrow one has managed to make the uncertainty interval. This doesn’t mean that EBMs are not useful, but does mean that anyone arguing that we should use them because they believe that they’re more robust than other estimates, is ignoring that they rely on an assumption that may not be correct. If anything, we have evidence to suggest that this assumption will indeed not be correct, and so it would seem that anyone discussing EBM estimates should at least be willing to acknowledge this.

This entry was posted in advocacy, Climate change, Climate sensitivity, Science. Bookmark the permalink.

### 70 Responses to Nonlinear feedbacks

1. austrartsua says:

All models are wrong, some are useful.

2. John Hartz says:

If we only had a time machine!

Good post. Thank you.

3. “Quite why a climate scientist would ask me I don’t know”

I am a non-climatologist, I know a little about non-climatic changes in the climate record. What I know about climate change mainly comes from blogging. The only reason I can participate in the climate “debate” is that the level of reasoning is so extremely low that almost no expertise is needed and because the misinformation is mostly obvious for anyone with a science background who is willing to look.

One reason for me to switch from physics to meteorology was the unbearable atmosphere in physics, where most seem to feel the need to act to be the smartest on every topic. A little more humbleness could improve communication and avoid errors.

And I am a big fan of citizen scientists. Thus I see no reason not to ask someone more knowledgeable for information just because they are not a climatologist. We should not allow the mitigation sceptics to also give the term citizen scientist a bad name. Citizens help in making observations (station data and phenological data) and in digitising written climate records, and some even do work very similar to that of the professionals in their free time. That is great, especially when citizens do work that professionals can hardly do in the current institutional environment, for example because it does not lead to enough scientific articles per year.

4. Victor,
Ahh, I’ve always seen you as a climate scientist, but it is a bit of a catch-all term. I’m trying to decide whether you’re right about this:

unbearable atmosphere in physics, where most seem to feel the need to act to be the smartest on every topic.

Yup, probably.

A little more humbleness could improve communication and avoid errors.

Indeed.

5. anoilman says:

… then there’s the Seagull Scientists. They swoop in, squawk at everyone, crap all over everything, and leave. Any similarities to some people around here are purely coincidental. 🙂

But yes, there’s no reason Joe Public can’t learn more or even participate. There are deficiencies in education that some simple explanations can help with. (That might make an excellent article, Anders…) For instance: What are filters, and how are they applied? What are error margins, and how are they used? Also, many bits of work are quite labour intensive; aid on that front is invaluable. (Canada recently threw out several libraries of documents, including environmental records. It was too much to digitise before the cutbacks hit.)

It’s usually been my experience that there are plenty of people (even professors) at universities willing to talk and help out. For me, I just need a good starting point to begin reading up on something.

6. Steve Bloom says:

Back around 2000 when I first started discussing climate on the ‘net, a very common denier meme was “how can it reach ~3C by 2100 if it’s only warming by ~half that right now?”. The answer then was the same as the answer now to those who claim that EBM results have long-term significance.

7. Willard says:

I am a non-climatologist too, Victor. We’re almost brothers!

8. John Hartz says:

Speaking of climate models…

So I was a bit surprised to read the exchange between Dr. Holdren and Representative Stockman, which suggested that at best we couldn’t explain the science and at worst we scientists are clueless about ice ages.

We aren’t. Nor are we clueless about what is happening to the climate, thanks in part to a small fleet of satellites that fly above our heads, measuring the pulse of the earth. Without them we would have no useful weather forecasts beyond a couple of days.

These satellite data are fed into computer models that use the laws of motion — Sir Isaac Newton’s theories — to figure out where the world’s air currents will flow, where clouds will form and rain will fall. And — voilà — you can plan your weekend, an airline can plan a flight and a city can prepare for a hurricane.

Satellites also keep track of other important variables: polar ice, sea level rise, changes in vegetation, ocean currents, sea surface temperature and ocean salinity (that’s right — you can accurately measure salinity from space), cloudiness and so on.

These data are crucial for assessing and understanding changes in the earth system and determining whether they are natural or connected to human activities. They are also used to challenge and correct climate models, which are mostly based on the same theories used in weather forecast models.

This whole system of observation, theory and prediction is tested daily in forecast models and almost continuously in climate models. So, if you have no faith in the predictive capability of climate models, you should also discard your faith in weather forecasts and any other predictions based on Newtonian mechanics.

Wobbling on Climate Change, Op-ed by Piers J Sellers, New York Times, Nov 11, 2014

9. JH,
Yes, I read that. Very good.

10. John Hartz says:

ATTP: Katharine Hayhoe posted a link to Sellers’ op-ed on her Facebook page. I also did so on the SkS Facebook page.

11. John Hartz says:

ATTP: Given the amount of poppycock heaped on GCMs, what they are and how they work needs to be better explained to the general public. Because the people responsible for their care and feeding don’t seem to have time to do so, we need to pick up the slack.

12. BBD says:
13. John Hartz says:

BBD: Thanks for reminding me about ATTP’s prior post on models. The problem is that articles like that one are too few and far between.

Lest there be any misunderstanding, my observation is meant to apply across the board of pro-climate-science websites, including SkS.

14. Andrew Dodds says:

John h –

We have a time machine. Goes forward 24 hours a day.

15. anoilman says:

Choosing to ignore Global Warming models is just another form of cherry picking. Is geological modeling somehow better? What about nuclear collision modeling? The oil industry does both to build oil wells, and they’re heavily dependent on statistics. Oh, and they aren’t completely successful.

The one thing that seems elusive to model is economics. Why are they at the table?

Weather predictions are great for testing the atmospheric part of global models. People claiming that climate projections are wrong because of some simplifications here or there, or because the numerical solution of the flow equations is imperfect by definition, have the burden of proof against them. That is not impossible, but highly unlikely given the success of numerical weather prediction.

However, many other parts of the climate system are not tested in weather predictions: the oceans, the cryosphere (ice), the vegetation, etc. Thus the fact that weather prediction works is not, in itself, sufficient to show that climate projections are reliable.

Great to hear that Willard is joining me, we need more people studying non-climatic changes. I expect that 2015 will show this.

17. John Hartz says:

Meanwhile, back in the real world…

For the third month in a row, global temperatures reached record territory according to newly available data from NASA. And if one global temperature record isn’t enough, the Japanese Meteorological Agency also provided new data on Friday that showed the warmest October on record.

NASA, Other Data Show Globe Had Warmest October by Brian Kahn, Climate Central, Nov 14, 2014

18. anoilman says:

Victor, most weather projections are made by meteorology technicians. It’s a 2 year diploma in Canada, and they constantly move around. (So they aren’t even familiar with local trends.)

19. Andrew Dodds says:

Aom –

In my more cynical moments, I suspect that the reason that we don’t have good models for economics is that such models would give the ‘wrong’ answers, politically speaking.

20. BBD says:

This sounds like an interesting recent paper on feedbacks with implications for nonlinear response: Donohoe et al. (2014) Shortwave and longwave radiative contributions to global warming under increasing CO2.

PNAS introduces the paper as follows:

Significance

The greenhouse effect is well-established. Increased concentrations of greenhouse gases, such as CO2, reduce the amount of outgoing longwave radiation (OLR) to space; thus, energy accumulates in the climate system, and the planet warms. However, climate models forced with CO2 reveal that global energy accumulation is, instead, primarily caused by an increase in absorbed solar radiation (ASR). This study resolves this apparent paradox. The solution is in the climate feedbacks that increase ASR with warming—the moistening of the atmosphere and the reduction of snow and sea ice cover. Observations and model simulations suggest that even though global warming is set into motion by greenhouse gases that reduce OLR, it is ultimately sustained by the climate feedbacks that enhance ASR.

Well well well. Who knew?

Abstract:

In response to increasing concentrations of atmospheric CO2, high-end general circulation models (GCMs) simulate an accumulation of energy at the top of the atmosphere not through a reduction in outgoing longwave radiation (OLR)—as one might expect from greenhouse gas forcing—but through an enhancement of net absorbed solar radiation (ASR). A simple linear radiative feedback framework is used to explain this counterintuitive behavior. It is found that the timescale over which OLR returns to its initial value after a CO2 perturbation depends sensitively on the magnitude of shortwave (SW) feedbacks. If SW feedbacks are sufficiently positive, OLR recovers within merely several decades, and any subsequent global energy accumulation is because of enhanced ASR only. In the GCM mean, this OLR recovery timescale is only 20 y because of robust SW water vapor and surface albedo feedbacks. However, a large spread in the net SW feedback across models (because of clouds) produces a range of OLR responses; in those few models with a weak SW feedback, OLR takes centuries to recover, and energy accumulation is dominated by reduced OLR. Observational constraints of radiative feedbacks—from satellite radiation and surface temperature data—suggest an OLR recovery timescale of decades or less, consistent with the majority of GCMs. Altogether, these results suggest that, although greenhouse gas forcing predominantly acts to reduce OLR, the resulting global warming is likely caused by enhanced ASR.
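The OLR-recovery timescale in that abstract can be caricatured with a linear-feedback toy model. This is my own back-of-envelope sketch, not the paper’s framework, and the numbers are invented: write the imbalance as $N = F - (\lambda_{LW} - \lambda_{SW})T$, so the OLR anomaly is $-F + \lambda_{LW} T$ and recovers to zero when $T = F/\lambda_{LW}$. With an exponential approach to equilibrium, the recovery time then depends only on the ratio of the two feedbacks:

```python
import math

def olr_recovery_time(lw, sw, tau):
    """Years for OLR to return to its pre-forcing value.

    lw:  longwave (Planck-like) restoring feedback, W m^-2 K^-1 (> 0)
    sw:  positive shortwave feedback, W m^-2 K^-1 (0 < sw < lw)
    tau: e-folding timescale of the temperature response, years

    Solving T_eq * (1 - exp(-t/tau)) = F / lw with T_eq = F / (lw - sw)
    gives t = tau * ln(lw / sw); the forcing F cancels out.
    """
    return tau * math.log(lw / sw)

tau = 30.0  # illustrative response timescale, years
strong_sw = olr_recovery_time(lw=2.0, sw=1.0, tau=tau)   # ~21 years
weak_sw = olr_recovery_time(lw=2.0, sw=0.2, tau=tau)     # ~69 years
# A strong shortwave feedback lets OLR recover within decades, after
# which continued energy accumulation must come through enhanced ASR.
```

This reproduces the qualitative spread in the abstract: models with robust shortwave feedbacks see OLR recover in a couple of decades, while weak-shortwave-feedback models take far longer.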

21. anoilman says:

Andrew, I think that’s just cynicism. Market changes are impossible to predict. Trends are sorta kinda easy to spot, but they can be bumpy and subject to whims or shifts.

For instance, no computer model could predict a sudden upswing in Solar Panels, and that is because there are many reasons driving the decisions to buy them. From altruistic (enviros), to safety (less supply runs for the military), to fashion (it’s good to be green), to a desire to consume less (no fuel input), to pollution taxes, to price competitiveness.

On the other hand, reading the relevant materials would lead you to the conclusion that solar will do well, especially in the long term, but that it will be a tough road for those involved due to competition.

In short, large scale market models are as silly as “Pattern Recognition in Physics” (yes that’s a dig on deniers), because you actually need to understand the underlying causes of what is happening in order to see where it will lead.

22. Richard Erskine says:

Surely, economic models are irreducibly dependent on human behaviour: humans and their transactions are the atoms and interactions of economics, and any non-linearities are also, at root, caused by humans. Whereas the climate’s atoms and interactions are just that, and follow established laws of physics. Of course, scenarios for the future depend on human decisions, but once these decisions are made, the unfolding of the physics and all those non-linearities is based on physics/earth science, not the fickleness of human decisions. Oh, and to compound matters… if you lay all the physicists in a row… you DO reach a conclusion.

23. izen says:

@-Richard Erskine
“Surely, economic models are irreducibly dependent on human behaviour:”

Probably not.
While individual sentient intentionality is the basic component of economic systems, it is as with atoms, molecules and oceans: how hydrogen and oxygen interact, and how the resulting water behaves in bulk, reveals emergent properties that are not reducible to the component parts.

I suspect the problems with economic modelling are not just the irrational unpredictability of individual behaviour, but also the ideological bias in reifying the emergent properties that are ascribed causative status in any model.

While the ‘invisible hand’ is a concept from economics that has been stolen by evolutionary biology as a metaphor for emergent evolutionary processes, it may be that economics is irreducibly complex, indicating it was ‘designed’.
Not necessarily intelligently.

24. Richard Erskine says:

Fair comment, Izen. Complexity etc. may be off topic, but what is so beautiful in science is how complexity emerges out of the simplest low-level interactions. In economics, it is not clear we can characterize things as simple even at the lowest level.

Getting back to models… in a mind-blowing way… models of climate and economics seem now to be becoming linked (crudely, through scenarios). We need ultimately to find out how we can co-evolve better outcomes for the planet and economies. Complexity squared?

Since some here seem to understand economic models (well, better than I do), does anyone know if there is an economic model that actually models economic growth? My simple understanding is that they normally assume some kind of growth and then consider how various factors (climate change, for example) may influence that underlying growth. This seems, in some sense, more like a linear perturbation analysis and, if so, presumably they’re largely unable to really capture any non-linearities.

My understanding is that the difficulty of understanding and modelling human behavior is the largest single factor that makes economic modelling so difficult. Individuals do not act as independent actors whose differences and individual decisions could be modelled analogously to a gas that consists of interacting molecules. People behave as groups in ways that are essential for the economy. They use and spread information in complex ways, etc.

The whole dispute between Keynesians and other schools of thought is really about the behavior of people. We also have scientists like Kahneman, who have studied behavioral economics (and in his case got the Nobel prize for that).

ATTP, Economic growth is a major field of study in economics and in economic modeling.

27. Pekka,
Ahh, yes, I wasn’t suggesting that economists don’t study economic growth; my comment was a little too general. I was thinking more of things like IAMs, which try to understand the implications of climate change. As I understand it, they don’t self-consistently model economic growth; they simply assume some kind of underlying growth and then attempt to understand how various other factors will influence this growth. I may of course be wrong, in which case feel free to point that out. The underlying growth assumption could also be based on other models, so may be more than a simple assumption.

28. Andrew Dodds says:

Ok, to clarify.. I’m going from this sort of reportage:

And from my own observations as a computer modeller: if I were modelling the economy, I’d start from individual psychology and predicted economic behavior – which you have at least a chance of determining by experiment – and try to build that up into bigger systems based upon large numbers of parameterised agents, with those parameterisations based on the original research. As far as I can tell this hasn’t been done, at least not to completeness. (At which point anyone may feel free to point out exactly how wrong I am.)

This would be a seriously hard exercise, but done well it would allow you to determine the effects of – as above – subsidising solar panels. It would also have the effect of being politically neutral.

Whereas the kind of top-down ‘modelling’ that often seems to characterize economics (Reinhart and Rogoff’s spreadsheet being a particularly appalling example) has no real-world basis, like trying to model climate without rooting your model in basic physics. This means that it’s very easy to come to the conclusions that you want – just as, indeed, many climate skeptics throw away physics and concentrate on applying near-random formulae to selected timeseries.

And this is the problem. If someone came up with a model that had even modest skill in modelling the economy, it would have huge political implications. If we could say with high confidence that – for instance – tax rises would be more effective than spending cuts at reducing the deficit, an awful lot of influential people might get upset.

29. Joshua says:

==> “My simple understanding is that they normally assume some kind of growth…”

Probably something you already know, but related to your question (and Pekka’s response): there has (in the past, if not necessarily anymore) been an assumption of “utility maximization” among rational actors, which I think boils down to an assumption of growth.

30. Joshua says:

==> “Whereas the kind of top-down ‘modelling’ that often seems to characterize economics (Reinhart and Rogoff’s spreadsheet being a particularly appalling example) has no real-world basis, like trying to model climate without rooting your model in basic physics. ”

I think of a cartoon I once saw where two economists are looking at a blackboard filled with complex formulas and one says to the other, “The formula is perfect. Now all we need to do is eliminate humans and it will predict the future.”

31. Willard says:

> the difficulty of understanding and modelling human behavior is the largest single factor that makes economic modelling so difficult.

The second one may be that to model human behavior economists might need to revise most of their theories.

32. Rachel M says:

Whereas the kind of top-down ‘modelling’ that often seems to characterize economics (Reinhart and Rogoff’s spreadsheet being a particularly appalling example) has no real-world basis, like trying to model climate without rooting your model in basic physics.

I’m reminded of that joke about economists who see something work in practice and say, “But does it work in theory?”.

33. Andrew Dodds says:

Willard –

There’s been some darkly hilarious cases in the UK over energy prices, with the politicians and their ‘economic advisers’ basically saying that the people are to blame for their high prices because they are not changing suppliers enough. I can see the Rational Agent Reeducation Camps being prepared as we speak.

34. anoilman says:

Andrew Dodds: That is no joke. We invariably have these libertarians showing up and blaming all the world’s problems on the chosen culprit of the day.

i.e. all jobs are being exported because of environmental regulations. Which is full-on BS beyond the pale. Imported products generally must meet local national regulations, and reducing costs by switching to cheaper regulations (i.e. switching from RoHS back to cheap old lead) would in fact trigger an expensive and lengthy design cycle (circuit-board design is dependent on the manufacturing technology), which results in the product being impossible to sell.

More on topic to what you said: in Alberta we deregulated the energy market (supply, transportation, and billing are all separated) so that everyone was free to choose their supplier. Most people saw a huge increase in their bills because there are now more players. Part of the deregulation is a ban on collective negotiation (no libertarians complained that day), which would have leveraged things in favour of communities.

And coming full circle: rational market behavior is confusing at best. All participants are expected to act rationally; however, what one participant considers rational is not necessarily rational to another. In fact it’s downright irrational, and therefore impossible to predict.

35. Willard says:

> There’s been some darkly hilarious cases in the UK over energy prices […]

Citation needed.

36. jsam says:

“There’s been some darkly hilarious cases in the UK over energy prices”
http://blogs.ft.com/off-message/2013/10/21/the-real-big-six-the-problems-with-britains-energy-market/

It is astounding how much attention and effort the UK government has had to apply to try and make a market almost work. You’d almost be forced to conclude that the natural outcome of a free market is oligopoly or monopoly.

37. John Hartz says:

For a thoughtful discussion about climate change and the future sans the science jargon, check out Lindsay Abram’s interview of science journalist Gaia Vince about life on a transformed planet.

Humanity’s epic planetary facelift: Climate change, mass extinction and the uncertain future of life on earth by Lindsay Abrams, Salon, Nov 15, 2014

In my opinion, Lindsay Abrams is one of the leading journalists in the US covering climate change.

38. ATTP,
I don’t think that anyone would have included growth modelling in IAMs, and I don’t think that would be wise. IAMs may include too many things even without it. It’s often better to study different parts of the whole separately rather than in a single model, when each of the parts contains independently large uncertainties.

People have studied how climate change might affect growth using highly aggregated models; some of that work was discussed on this site a couple of months ago. I mentioned in that connection some work done by Thomas Sterner and, if I remember correctly, others brought up some other research. The work of development economists also covers issues similar to the question of how climate change and climate policies affect future well-being, which is a more general notion than economic growth, but contains growth as one factor of well-being.

That brought to my mind that in some analyses the rate of growth largely cancels out. Higher wealth may lead to a larger economic equivalent of damages, when more valuable assets are affected. On the other hand, higher wealth may reduce the present value of the future damages through discounting or assumed improved resilience. Whether one of the factors dominates, or the cancellation turns out to be almost complete, depends on the model and assumptions used in the analysis.

39. guthrie says:

But it is also important to remember that economists and their theories can be falsified by events. A UK example is monetarism, tried by the Thatcher government for a few years, but it didn’t work. Earlier forms of Keynesianism didn’t cope well enough with the 1970s, and we can show now that austerity policies and trickle-down economics don’t work. However, whether they work or not is irrelevant to the actual political situation.

Pekka’s post at 9:37 brings up a point related to Ridley, Lomborg and other politically motivated science deniers. They are generally cornucopians, believing that economic growth due to wonderful clever humans will make us all much richer in the future, which will translate into the ability to magically repair all problems caused by climate change. However, if things like land and certain resources useful in the economy are very expensive, then despite being richer we won’t actually be able to afford to do anything. In the UK this can be seen with housing, a lack of which leads to much higher prices. We are richer than 30 years ago, but the cost of housing is much higher, meaning we are spending a great deal of money on it.

40. anoilman says:

I wonder if intelligence, or the lack thereof, is in any of the economic models…
http://www.skepticalscience.com/how-sapiens-in-the-world-of-high-co2-concentration.html

41. Eli Rabett says:

IAMs use a discount rate to model growth of the economy. Tol, for example, threw a fit at the Stern Review’s use of a low discount rate, and there were huge shenanigans about the Yohe-Tol report for the Copenhagen Consensus. You can have any outcome you want at the discount restaurant.

42. anoilman says:

I rather found this useful;
(Eli, did you influence that guy? He has Otters!)

We should stop pretending this is an economic science to be argued over by experts, and realize that this is about morals and ethics and talk about it as such.

As I’ve said, it’s one thing to see starving people on TV, and it’s quite another to see that and know that you intentionally inflicted it on them for your own benefit.

We had a discussion on discount rates here some time ago. The problem is that looking at the discount rate from different perspectives gives different results. In theoretical papers, scientists have concluded that a single constant discount rate leads to contradictions: longer-term considerations justify a lower discount rate than shorter-term considerations, but I don’t think that a self-consistent picture has emerged.

A discount rate that’s not too low is the best guide for comparing alternatives that differ in the short term but have the same long-term outcomes. Such a discount rate may, however, lower the present value of long-term effects far too strongly.

My own conclusion remains that very long term cost-benefit analyses are of little value. One indication of that is the impossibility of fixing the discount rate accurately enough. Another reason is that we cannot determine any better the values we should discount. What is the worth of an analysis that multiplies extremely uncertain numbers by extremely uncertain coefficients and then sums up the terms?

To make more meaningful comparisons between the decisions that we can make now, we need better methods. The near-term part of the consequences may be handled by discounting; the long-term effects cannot, but they must be included in some way. That way must also result in numbers that can be compared with the short-term part.

44. Andrew Dodds says:

It occurs to me that the whole concept of a discount rate is bound up with the mostly imaginary concept of money itself. For example:

£1000 in cash kept under the mattress would have lost the vast majority of its value over a century – you’d be far better off spending it earlier, so it has a high discount rate.

£1000 worth of agricultural land a century ago – the land would still produce food (actually more), so would you ever be better off liquidating?

Yet if we took the price of all agricultural land 100 years ago and applied a discount rate of 5% to its monetary equivalent, we would conclude that by now it was worthless and we didn’t need food. Likewise, if we effectively monetise the entire economy and apply a discount rate, we can justify pretty much any level of environmental destruction.

Sense/gibberish? Not sure.
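The 5% example is easy to check with one line of arithmetic; a quick sketch (the amounts and rate here are illustrative only, not an economic claim):

```python
def present_value(amount, rate, years):
    # Standard constant-rate discounting of a fixed future amount.
    return amount / (1.0 + rate) ** years

# GBP 1000 of value a century from now, discounted at a constant 5% per
# year, is worth under GBP 8 today -- which is the point of the comment
# above: a constant discount rate makes almost any sufficiently distant
# asset or damage look negligible.
pv = present_value(1000.0, 0.05, 100)
```

Run over longer horizons still, the present value shrinks towards zero, which is why the choice of discount rate dominates long-term cost-benefit analyses.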

45. anoilman says:

Pekka: I truly dislike reducing everything to a number. By definition ethics go out the window, and you reach some truly awful conclusions. It makes sense not address Global Warming because those who are most affected are produce the least GDP. So… let them burn.

It also conveniently hides numerous serious problems behind a number. Like species extinction, and the mythical belief that food production will happily shift closer to the poles. (It won’t.) It also seems to assume that people want to lose their homes and move away from where they’ll get flooded. (Anyone ask them?)

In short, Global Warming has a very real human component. Even right now, people are being financially liquidated by Global Warming.

Andrew: Money is an abstract concept. I think David Suzuki was right when he said that perhaps we should be charging not only for the cost of production of goods and services, but also their environmental impacts. (I say full value upfront… until we have a decent appreciation for what it costs later on…) Right now, we only act when the costs are too blatantly obvious.

46. AOM,
When a rational choice is made, one alternative is judged as superior. Ordering alternatives is almost equivalent to assigning them numerical values. I know that this cannot be done in a unique way. Different people do not reach the same values, even when they are highly competent and have access to the same sources of information.

I would consider it really important that people can argue about the choices, looking at various arguments, and that they can tell each other why they consider one factor more important than another, and also describe how much more important. That’s not possible without quantification, i.e. without something essentially equivalent to introducing numbers.

It’s essential to avoid the trap of considering those factors that can more easily be quantified to be more important. This error is very common, and perhaps the reason for your dislike of the idea of presenting the conclusions as numbers. My solution is giving numbers also for the difficult-to-quantify factors, on a scale that reflects their importance. Such numbers may be very inaccurate, but that indicates real, irreducible uncertainty in deciding what the best choice is.

But what is the alternative, if we wish to compare the importance of various factors in order to decide what the best choice is, when all factors are taken into account? If we cannot find a clear best choice, we can at least find choices that are not clearly inferior to any of the others.

I cannot see or even imagine any alternative for reaching the two goals:
– Finding a choice that we cannot improve on.
– Communicating our views to others and comparing with their preferences.

The only alternative seems to be to forget rationality.

47. Meow says:

On the other hand higher wealth may reduce the present value of the future damages through discounting or assumed improved resilience.

I’m glad you used the word “assumed” there. I have great difficulty with the idea that richer societies are more resilient than poorer ones. If a rock similar to the K-T asteroid hit earth, the remaining humans (if any) would survive as hunter-gatherers and pillagers of the remains of technological civilization, but that civilization itself would certainly fall.

48. Windchasers says:

Yet if we took the price of all agricultural land 100 years ago and applied a discount rate of 5% to its monetary equivalent, we would conclude that by now it was worthless and we didn’t need food. […]

Sense/Gibberish? Not sure..

Sorry, but I’m going to go with “gibberish”.

Discount rates are used to compare present and future values. If I want to figure out how valuable a piece of farmland is to me today, I can look at how much its products will sell for, today, tomorrow, next year, 10 years from now, etc. But the food produced 10 years from now is less valuable to me today than food produced today, because I could sell the food I produced today and invest it in other meaningful projects, projects with a positive growth rate.

Think of discount rates as the inverse of investment growth rates. I can use a compound growth rate to tell me what my $10 today will be worth in 100 years if, say, I invested it in the stock market. (It’s straightforward: the formula would be $10*(1+r)^100.) Whereas discount rates tell you what $10 in 100 years would be worth to you today.

Anyways – we can’t do what you did in your quote, and take the present value of future cashflows from the past, and then claim it means the present is worth nothing. That’s an inappropriate use of discount rates.
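The compound-growth and discounting formulas above can be sketched in a few lines (Python; the 5% rate and the $10 amount are just the illustrative numbers from the comment, not recommendations):

```python
def future_value(pv: float, r: float, n: int) -> float:
    """What an amount pv invested today at annual rate r grows to in n years."""
    return pv * (1 + r) ** n

def present_value(fv: float, r: float, n: int) -> float:
    """The inverse: what an amount fv received n years from now is worth today."""
    return fv / (1 + r) ** n

r, n = 0.05, 100
print(future_value(10, r, n))   # $10 today grows to ~$1315 in 100 years
print(present_value(10, r, n))  # $10 received in 100 years is worth ~$0.076 today
```

Note that `present_value(future_value(x, r, n), r, n)` gives back `x`, which is the sense in which discounting is the inverse of compound growth.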

49. matt says:
50. Pierre-Normand says:

Pekka Pirilä wrote:

“When a rational choice is made, one alternative is judged as superior. Ordering alternatives is almost equivalent to assigning them numerical values.”

I think that’s only true for a narrow subclass of rational choice situations where it has already been established that maximizing some outcome (according to some already defined metric or ordering) is the goal to be achieved. And even in those cases, the task of defining the relevant item to be maximized may have been the hardest part of some broader prior practical problem.

Whenever the quantity that is to be maximized isn’t prejudged (assuming something indeed must be maximized at all), the main task isn’t to order alternatives, but rather to discover what relevant alternatives exist that are so much as worthy of rational consideration. Only then may an ordering task ensue, in some cases.

“I know that this cannot be done in a unique way. Different people do not reach the same values, even when they are highly competent and have access to the same sources of information.”

It’s worse than that, since people don’t really have access to the same information or “data” when they aren’t even able to properly conceptualize it. Hence, whatever practical paradigms condition the way they can pick up practically relevant information (or give weight to practically relevant considerations) shape the way in which they frame the very problems that need to be addressed. Hence, practical deliberation among multiple agents largely consists in promoting and explaining practical paradigms — and thereby, hopefully, opening up other people’s eyes to values (needs, obligations, etc.) that they may not already have been able to pick up on.

This is quite similar to scientific disputes in the theoretical domain, where what is to be believed rather than what is to be done is at issue. There aren’t neutral empirical domains of raw data that competing theories can strive to provide the best explanation for. Hence there can’t generally be a ranking of scientific theories according to brute ‘empirical adequacy’ either (which isn’t to say that all theories are equally good — but rather that standards of empirical (predictive and explanatory) adequacy are in some way internal to them).

“I would consider it really important that people can argue about the choices, looking at various arguments, and that they can tell each other why they consider one factor more important than another, and also describe how much more important. That’s not possible without quantification, i.e. without something essentially equivalent to introducing numbers.”

I agree with the first sentence but not the last. Oftentimes, the dispute can be resolved without quantification when one way to frame the practical problem has been disclosed to be wicked, immoral, confused, etc. Likewise, to pursue the previous practical/theoretical analogy, scientific theories can be jettisoned when they are found to embody misconceptions, lack predictive ability, etc. When this happens, what previously was conceived as supporting “data” for them can be seen to have been misapprehended, improperly analysed, cherry picked, etc. Likewise in the practical domain — when one comes to give up on some misconceived practical paradigm — some preferences or desires that were thought to be rightfully maximized can thereafter be seen to be preferences not worth having, all things considered.

51. Pierre-Normand,

My comments are written with climate policy in mind. Climate policy is an exceptionally difficult issue for any approach, because we face there something no one has learned to manage based on past experience. Therefore we are really dependent on the analyses. If some factors are really determining, it must be possible to show that in a quantitative analysis that may use input derived from ethical considerations. If that cannot be done, then the argument is not convincing. Then it must be accepted that others may legitimately disagree not only on the ethical principles, but also on the validity of the conclusion in spite of accepting the ethical principles.

To convince those who disagree with your conclusions, you must be able to quantify the chain of arguments. Otherwise it remains only your opinion, which you cannot support by anything other than your beliefs.

You write:

Oftentimes, the dispute can be resolved without quantification when one way to frame the practical problem has been disclosed to be wicked, immoral, confused, etc.

Of course we see people present weak arguments that can be dismissed on logical grounds or as being dependent on ethical approaches rejected by most, but that does not solve the problem. Excluding some claims does not leave only one possible alternative. More quantitative arguments are needed to avoid choosing one of the worst among those that cannot be dismissed without any quantification.

52. Pierre-Normand says:

Pekka,

I am unsure how the mere complexity that inheres in the practical decisions relating to climate policy makes arguments that aren’t primarily based on quantification subjective. I grant that quantification of some features of the problem (and justifying attempts at maximization of some outcomes) may be part of an argumentation that favors some specific policy. I am rather arguing that the choice among the values to be assessed, promoted or maximized, and also the determination of the options that must be rejected as impermissible on categorical grounds (because they are illegal, unjust, immoral, etc.), often is a matter of disclosing the right practical paradigm, and this often must occur together with, or oftentimes prior to, quantification.

I view the excessive focus on the idea of ordering potential policy options in respect of outcomes, according to some preferential metric, as a harking back to an obsolete empiricism. The rejection of this empiricism in the theoretical domain has been motivated by the realization of the essential theory-ladenness of experience. But I think a similar empiricism has been more stubborn in practical spheres such as ethical, economic or political theory, since values are wrongfully viewed as being essentially subjective (or culturally relative), and practical solutions therefore are viewed as outcomes of negotiations among actors that have equally valid (subjective) preferences that can’t be rationally criticized. However, while it may be true that, on account of irreducible individual differences, and cultural, historical and institutional peculiarities, one must tolerate some degree of value pluralism, there remains a broad enough conception of justice, basic human needs, and other universal human values to provide a ground for rational criticism of preferences and (putative) values. My main point remains, then, that such rational criticism often must take the form of an elucidation of a practical paradigm.

This isn’t something very esoteric. It’s just the idea that in the same way that many sorts of empirical data only are visible on the condition that one masters some relevant set of theoretical concepts, which make up a theoretical paradigm, likewise, for one to correctly assess a problem of public policy (or any practical problem whatsoever) one needs to *understand* the relevant set of values that can be brought to bear on it. It may be the case that political actors (e.g. different nations, or different parties within one nation, etc.) can’t agree on the relevant paradigm and must negotiate some form of compromise. However, it is often the case that some of the actors inhabit a genuinely wicked or irrational paradigm, and the primary focus of the political discussion ought to be to bring them to realize that there are better paradigms available. Since this concerns the very definition of value, it comes prior to the idea of ordering policies in point of promoting agreed-upon values.

53. P-N,

We seem to be discussing different issues. You are describing situations where quantitative arguments are not applicable or are not needed, while they are essential in the situations that I discussed.

Both types of issues are real and important.

54. Pierre-Normand says:

Pekka wrote: “We seem to be discussing different issues. You are describing situations where quantitative arguments are not applicable or are not needed, while they are essential in the situations that I discussed.”

Quite the contrary, I granted that quantitative issues can be applicable to many practical problems, and that certainly is the case for complex problems that pertain to climate policy. I rather insisted that *even* in practical cases where quantification and linear ordering of outcomes is possible and is a sensible thing to do, there always are scores of assumptions tacitly made by all the participants in the debate regarding what the salient features of the problem are: those features that relate to human value and are worthy of consideration. Hence people who argue from incompatible and/or partly incommensurate practical paradigms will arrive at practical judgments (e.g. proposed policies) that cannot be ranked linearly according to outcome in an agreed fashion.

It seemed to me that Anoilman wasn’t simply opposing the idea that quantification is required in assessing policies either, but rather the idea that practical choices reduce to a rational activity of ranking alternatives in a unique linear ordering of desirability. This may work with classical utilitarianism and some modern forms of consequentialist ethics. It is inimical to most other contemporary (and ancient) ethical theories that aren’t grounded anymore in 18th-19th century empiricism regarding value and happiness.

In short, I am disputing your apparent suggestion that the determination of salient and practically relevant values is *not* a rational process, while quantification and ranking of outcomes or proposed policies is the sole or even the main terrain of application of practical rationality between political disputants. I rather view the exercise of practical rationality as a dialectical process whereby human values and basic needs (and our conceptions of them) on the one hand, and policies and institutions that seek to promote or protect them on the other hand, are assessed back and forth in light of evolving and competing practical paradigms. I also granted that since there is little prospect of all political actors settling into a unique practical paradigm, room often must be made for acceptable compromises (lest we never act), and hence proposed quantifications of perceived benefits and harms will likely be part of the negotiation process.

55. P-N,
I have not written anything that you say you dispute in my writing – nothing. All my statements on the necessity of quantitative comparisons go totally outside your counterarguments. We are really writing almost exclusively about different issues.

56. Pierre-Normand says:

Pekka, I was specifically targeting your claims, which I strongly disagree with, that:

“When a rational choice is made, one alternative is judged as superior. Ordering alternatives is almost equivalent to assigning them numerical values.”

Followed by your dismissive suggestion that hard-to-quantify factors merely introduce *uncertainty* into the practical problem. And finally your claim that:

“The only alternative seems to be to forget rationality.”

These claims strike me as expressing a misunderstanding of the very nature of practical rationality, and threaten to muddy the science/policy interface somewhat. When assessing not just the degree of desirability of outcomes (along some metric), but also whether they are desirable *at all* (in light of the policies that must be enacted to produce those outcomes), or practicable, or moral, or rational, there often arise considerations, or tacit assumptions, that *are* rational (or suitable targets for rational criticism), but that occur prior to quantification.

Things that are hardest to quantify oftentimes are things that we are most certain about (especially regarding impermissible actions, crimes, etc.), and quite rationally so, and only are hard to quantify because it is pointless to do so in practice. Many things must be conceived clearly, and broadly understood, prior to quantifying them. We only quantify, in view of ranking them in order of desirability, options that are *viable* (permissible, etc.) relative to a practical context, and practical contexts only are assessed from the standpoint of practical paradigms.

Participants in the climate policy debates often have incommensurate views of the very practical context of the debate because they are assessing it from incommensurate practical paradigms. In the particular case of AGW skeptics, as I suggested earlier, not only their view of the practical context, but their very practical paradigm is liable to be contaminated by their inadequate understanding of climate science — i.e. their adhesion to an inadequate theoretical paradigm. The cure for that must largely bypass quantification of outcomes in point of desirability, though, of course, quantification is a sine qua non for understanding the underlying science.

57. P-N,

Note that I’m discussing only the comparison of real decisions, not any discussion of a more general nature. Nothing of your criticism seems to apply to these comparisons.

A real decision may concern an actionable policy or an investment or some other comparable action. Background discussion that’s less directly connected to the real decisions is a separate issue. At that level many more considerations are possible and allow for rational argumentation, but I’m not discussing those, while all your criticism relates to this level. This is the difference between our comments. Therefore my comments and your criticism do not meet at all.

58. Pierre-Normand says:

Pekka,

I had been discussing real decisions too. Investment decisions or enactments of actionable policies indeed are typical acts of practical rationality. I quite understood this discussion to be about climate policy. While my main arguments were laid out in terms of exercises of practical rationality quite generally, what is true of mammals is true of elephants. While your focus may have been narrower, I don’t see why it wouldn’t fall squarely under the ambit of my (or of anoilman’s) criticism. I had the narrower topic in mind, as a target, while making the general arguments.

You earlier had (quite rightfully) suggested that discount rates have a diminishing usefulness for longer term outcomes, owing to irreducible disagreement among actors about how to define them (while I myself think they tend to suffer from this problem already for the short-to-mid term). But you also later seemed to suggest that much of the problem stemmed from *uncertainty* about correct discount rates or the value of outcomes, and this is something other than mere rational disagreement about the values of outcomes or of their predictability.

In the same vein you also suggested that since discount rates are of limited practical use, then other methods ought to be devised so as to enable quantification of long term outcomes and enable relevant comparison with short term costs and benefits. This strikes me as a programmatic attempt to supply more epicycles to deficient rational choice theories that are modeled closely on classical utilitarianism, that imports several of the latter’s theoretical and ethical shortcomings, and that obscures core features of practical rationality tacitly understood by ordinary rational agents who aren’t in the grip of some theory.

59. P-N,
There’s essentially one single argument behind the point I have had in mind in my responses to you. It is:

Only one set of decisions can ultimately be applied, because we have only one world (in a sense I say that we do not live in many parallel universes). If that set is based on rational decision making, one set of arguments has been picked as superior or at least as good as any of the others. For this reason we can assign numerical valuations to this set of decisions and to the alternatives, and the valuation of the chosen set must be at least as high as that of all alternatives.

I didn’t say anything about the actual measure. Furthermore, I stated that the value is not unique, in the sense that different people are likely to give different valuations. I have also explicitly accepted that making the valuation is likely to involve subjective judgment. Thus the valuations determined by different people differ both due to their different (ethical) values and due to their different judgments of the outcomes of the alternatives.

I wrote also that making a rational judgment of the alternatives is essentially equivalent to giving numerical values to the alternatives. That implies that the values are not necessarily specified beyond the implied valuation unavoidable in reaching a conclusion rationally. Even people who do not want to discuss any numerical values imply something about them when they present their preferred choice.

When the decision making is rational, it must involve attempts to determine the consequences of the alternatives. The alternatives are likely to result in many differences in expected outcomes. Taking many factors rationally into account means that some weights are given to each of the differences – either fully explicitly, or implicitly. Again we meet issues that must be quantified in some way for the comparison to be rational.

I have proposed that discounting is useful in the aggregation of short term effects. The main reason for that lies in the opportunity costs. Postponing the use of some resources for one purpose allows those resources to be used for something else. Some of the alternatives produce positive returns at some rate. Excluding such alternatives leads to an opportunity cost that includes some rate of return, and leads therefore to a need for discounting. This was also one side of the argument in the Weitzman-Gollier puzzle. (As I wrote, I’m not entirely happy with their resolution, because I think that it leads to new problems.)

In my reply to AOM I emphasized that the necessity of making the comparison quantitative must not mean that easily quantifiable factors will be emphasized more than equally or more important factors that are difficult to quantify. It’s, however, a fact that two different factors cannot be compared rationally without making them commensurable in some way. When there are many such factors it becomes important to give them explicitly some numerical values.

60. Pierre-Normand says:

Pekka,

I don’t think this single argument is valid. It is true that if we rationally select one consistent set of arguments (or set of decisions that those arguments support) then we can, if we wish, invent numerical valuations such that no other set of arguments (deemed inferior, misguided, or simply less convincing or less probably right) has a higher valuation. That doesn’t begin to show that quantification was necessary while initially selecting the superior set.

You also are arguing that valuation of alternatives must be possible on pain of them not being rationally comparable. But what is needed for practical rationality to engage is for arguments that favor some alternatives over other alternatives to conceptually bear on them. The relevant concepts can be categorical rather than quantitative. Rational preference of A over B need not be grounded in any quantification at all. That’s just a special case of rational deliberation when there already is a well-defined value (or set of values) such that A is deemed preferable to B because it promotes it (them) more. Granted, that may be a way of construing practical rationality that economists have favored because they enjoy putting price tags on everything.

More generally, the practical problem is to determine in *what* respects our practical predicaments (our circumstances — what it is feasible for us to do, and what is liable to happen to us) rationally bear on broad values, commitments, needs and obligations that we already have (or ought to appreciate having though we may have lost view of some of them). It is to discern the salient features of our circumstances that call for action. Different views of what features are salient yield, correspondingly, different ‘values’ to be prioritized. Convincing another party, or discovering for oneself, what features ought to be seen as salient (what are the urgent problems, what ought our priorities to be, etc.) constitutes a large portion of the work of practical deliberation and it is either co-occurring with, or it may even largely preempt, the quantificational aspect of deliberation.

61. If another rational way of reaching a conclusion is valid then a properly quantified way of arguing cannot contradict it. If a properly quantified argument contradicts a conclusion then it’s not based on strong rational argument.

If no quantification can be made then different people can legitimately disagree on the conclusion, and we cannot rationally resolve the disagreement.

That does, of course, not mean that decisions cannot be made; it means only that the side that has more power wins, without means of convincing the other side that the decision is otherwise justified.

62. anoilman says:

Both of you are obviously very precise thinkers. I think this will go on forever. Possibly near the end you’ll find you’re in violent agreement. 🙂

63. Pierre-Normand says:

Pekka wrote: “If another rational way of reaching a conclusion is valid then a properly quantified way of arguing cannot contradict it. If a properly quantified argument contradicts a conclusion then it’s not based on strong rational argument.”

This talk of “proper quantification” presupposes the validity of your “basic argument” that I took issue with. So, it seems question begging. I can still say that your modus ponens is my modus tollens. True, if some “properly quantified” argument is sound, then there can’t be another genuinely rational way, which doesn’t primarily rely on quantification, that reaches an incompatible conclusion. Hence, if there *is* such a rational way, then the “properly quantified” argument that purports to invalidate it must be unsound. Either that, or, as I have suggested, the ordering of the practical options in point of desirability (or rationality) *depends* on prior arguments that aren’t primarily quantitative in nature. It is this dependence, then, that will supply the ranking with its validity, and not the merely derivative quantification.

“If no quantification can be made then different people can legitimately disagree on the conclusion, and we cannot rationally resolve the disagreement.”

This indeed gets to the core of our disagreement. There are very many ways in which a consideration can rationally bear on an intention, plan, or actionable policy, apart from quantifying the consequences or the options themselves. For instance, one can argue that an action is illegal or that a policy is unconstitutional, unjust or immoral. This is different from arguing that (e.g.) policies that are constitutional have a higher value than policies that aren’t. When such a case of unconstitutionality is shown, the burden is transferred to the policy advocate to show either that the policy *isn’t* unconstitutional (as claimed by the opponent) or that the situation is so urgent that the constitution ought to be repealed, amended or bypassed. (This can also be done through reflecting on the core values and principles expressed by the constitution.)

Well, this is just a counterexample that you may think isn’t relevant to policy negotiations that occur through evaluating options within the bounds of legality or constitutionality (for the nations involved). But there also remains my own ‘basic argument’ (which I haven’t noticed you engage with): that prior to there being quantification of alternatives, there ought to be some level of agreement on the values that primarily ought to be promoted (and the basic human needs that ought to be provided for, the citizens’ rights that ought not to be infringed upon, etc.) before it can even be possible for one to rank options at all. And securing such agreements can be a genuine exercise of practical rationality, construed in a much broader sense than decision theorists do.

“That does, of course, not mean that decisions cannot be made; it means only that the side that has more power wins, without means of convincing the other side that the decision is otherwise justified.”

There are more ways to convince someone that a course of action is irrational than quantifying the consequences likely to accrue from following that course. And, again, even in those cases where quantification is centrally relevant to the rational decision process, the main *rational* arguments may concern *which* features of the practical situation ought to be emphasized in light of prior shared commitments, values and obligations. This may be difficult to see for modern empiricists, who have a tendency to relegate everything that pertains to value and (basic) preference to the subjective realm, rather in the way Hume, Bentham and Mill did.

Only if Hume were right that “reason is and ought to be the slave of the passions” would it be the case that the irrational passions of those who hold more power would be preferentially fulfilled when the disagreement gets to the level of core values. But I think this modern empiricist practical philosophy is obsolete, while contemporary forms of consequentialism haven’t succeeded in overcoming it and rather are keeping it alive with the use of epicycles in order to mimic sounder principles of practical reason.

64. P-N,

I have had a feeling of the type AOM proposes for much of the discussion, but so far we are not there.

I have mentioned values and their important role early in this discussion.

We both unavoidably have some hidden assumptions that are needed to fill the gaps in the arguments. At the level of net discussion their role in the whole remains large. I may give the concept of quantification a wider meaning than others have in mind. If we look at a case where we have two arguments that are both valid but point in opposite directions, my way of thinking is that telling their relative importance always implies quantification. That’s an essential part of my claim that quantification is unavoidable in rational decision making, where we have in practice always contradictory goals.

You say that there are more ways to convince someone that a course of action is irrational, but I continue to believe that I would find explicit or implicit quantification to be part of every argument based on rationality. We might agree on everything else but maintain our disagreement on whether quantification is involved. That by itself is more semantics than substance, but there’s more substance in my next conclusion: that making the quantification (which may be implicit and hidden) as explicit as practical is likely to be useful in very many situations. That requires, however, that explicit quantification is used only as a tool, not as a straitjacket.

65. Brian Dodge says:

Does the exponential increase in total column water with increasing temperatures[1] outweigh the log concentration GHG effect, resulting in a larger (nonlinear) water vapor amplification as the temperature rises?

Lindzen proposed that increasing sea surface temperature would increase low cloudiness at the expense of high (cirrus) clouds. A blogger with the handle Tallbloke has helpfully pointed out that infrared radiation gets absorbed within microns of the surface of the oceans, so that energy from infrared forcing preferentially partitions into evaporation rather than heating deeper layers of the oceans – which should amplify the water vapor feedback compared to other forcings, e.g. changes in shortwave radiation, which penetrates and deposits energy deeper.

Essentially all the water vapor in the atmosphere is in the first 11 km (the troposphere), but only 75% of the CO2 is there. Satellite observations of lightning show that scattering in clouds increases the effective path length through clouds by nearly a factor of 5 [2] – the same AGW effect as multiplying the CO2 concentration by 5. Also, the relative humidity in clouds is ~100%, higher than the average RH in the atmosphere, which will also act to increase water vapor amplification; neither of these effects has been addressed by Lindzen.

A further effect of increasing cloudiness is to move the cloud cover towards a continuous layer. An individual isolated cloud will scatter some light out the sides, and it will escape to space even if emitted at shallow angles above the horizon. If another cloud is close by, it can intercept some of that light, re-scattering it and adding additional path length and probability of absorption. The limit is when clouds coalesce into a continuous layer, which results in even longer effective path lengths. [3]

I have oversimplified this analysis – for instance, the lapse rate means that water vapor isn’t spread uniformly throughout the troposphere; an equivalent path length at sea-level temperature and pressure is on the order of 2.5 km. Because of declining atmospheric pressure, the equivalent sea-level path length for CO2 is smaller also, but I haven’t done those calculations.
To model this fully, competent climatologists would need to know how cloud properties vary with altitude; be able to model the genesis, development, and vertical distribution of clouds; validate the model against field measurements; and predict the changes as the globe warms. There are no doubt regional differences as well, e.g. due to Hadley circulation, tropical-polar temperature gradients, land-water differences, orographic lift, and so on.
I wonder whether the combination of these effects results in nonlinearities in water vapor feedback and an amplification of the effects of increasing CO2?

[1] “Equation 1 therefore predicts increases in water vapour of order 6–7%/K for temperatures close to the surface assuming fixed relative humidity; further discussion is given by O’Gorman and Muller ( 2010 ). Observations and modelling indicate that column integrated water vapour, averaged over sufficiently large scales, increases approximately exponentially with atmospheric temperature (Raval and Ramanathan 1989 ) as predicted by ( 1 ).” http://www.met.reading.ac.uk/~sgs02rpa/PAPERS/Allan12SG.pdf

[2] “… an additional time-delay due to the scattering of light through the clouds. This scattering delay was determined to be ~138 µs (representing an additional 41 km path length) for light from CG events…” https://shareok.org/bitstream/handle/11244/384/3028812.PDF?sequence=1 {(41 + 11)/11 ≈ 5. Since 25% of the CO2 is above the troposphere, the effective path-length increase would be equivalent to ~3.75.}

[3]”As a result, an event occurring within a planar cloud will appear brighter when viewed from directly above than if it were observed elsewhere; also, those photons reaching the zenith will have been on average delayed quite a bit, in that they traverse the plane for some time before being redirected. These horizontally extended clouds therefore give rise to wave forms with extended tails, whereas spherical clouds yield wave forms with a more abrupt cutoff.” http://www.forte.lanl.gov/science/publications/1999/Light_1999_1_Monte_Carlo.pdf
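The path-length arithmetic in footnote [2], and the claim that multiplying path length is equivalent to multiplying concentration, can be sketched with the Beer-Lambert law (transmittance T = exp(-k·c·L)); the 41 km and 11 km figures are from the footnote, while the absorption coefficient below is an arbitrary illustrative value.

```python
import math

# Check of the footnote [2] arithmetic: the scattering delay adds ~41 km
# of path to a nominal ~11 km tropospheric depth.
direct_path_km = 11.0
extra_path_km = 41.0
multiplier = (direct_path_km + extra_path_km) / direct_path_km
print(f"effective path-length multiplier: {multiplier:.1f}")  # ~4.7, i.e. ~5

# The footnote's rough column scaling: only ~75% of the CO2 sits below
# the tropopause, so 0.75 * 5 gives the quoted ~3.75.
print(f"column-scaled multiplier: {0.75 * 5:.2f}")

# Beer-Lambert: T = exp(-k * c * L), so multiplying the path L by 5 is
# indistinguishable from multiplying the concentration c by 5.
k, c, L = 0.1, 1.0, 1.0  # arbitrary units
t_long_path = math.exp(-k * c * (5 * L))
t_high_conc = math.exp(-k * (5 * c) * L)
assert math.isclose(t_long_path, t_high_conc)
```

This is only the optical-depth identity; it says nothing about where in the column the extra absorption occurs, which matters for the feedback.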

66. A blogger with the handle Tallbloke
Tallbloke doesn’t even agree with the existence of the greenhouse effect, so I wouldn’t take what he says particularly seriously.

Lindzen is probably wrong. At the moment (Soden & Held 2006, for example) the net feedback response is positive: water vapour (positive), lapse rate (negative), clouds (small positive), and surface albedo (small positive).
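For a sense of scale, here is a back-of-envelope sketch using approximate multi-model mean feedback parameters of the sort reported by Soden & Held (2006); the numbers are rounded and vary by model, and the ~3.7 W/m^2 forcing per CO2 doubling is the standard approximation.

```python
# Approximate multi-model mean feedback parameters (W m^-2 K^-1),
# rounded from values of the kind in Soden & Held (2006).
feedbacks = {
    "Planck": -3.2,         # basic blackbody response
    "water vapour": +1.8,   # positive
    "lapse rate": -0.8,     # negative
    "clouds": +0.7,         # small positive
    "surface albedo": +0.3, # small positive
}

net = sum(feedbacks.values())
print(f"net feedback parameter: {net:.1f} W m^-2 K^-1")

# Equilibrium warming for a CO2 doubling (~3.7 W m^-2 forcing):
forcing_2xco2 = 3.7
ecs = -forcing_2xco2 / net
print(f"implied climate sensitivity: {ecs:.1f} K per doubling")  # ~3.1 K
```

With these rounded values the non-Planck feedbacks sum to +2.0 W m^-2 K^-1, which is what “net feedback response is positive” means here.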

67. All energy transfer from the oceans to the atmosphere (or directly to space) occurs from the top few microns. Evaporation and conduction originate from the molecules at the surface, while IR radiation is both emitted and absorbed within a depth of hundreds or thousands of intermolecular distances. Emission of IR is always stronger than absorption, because the emissivity of liquid water is more uniform than that of water vapor; thus IR always cools rather than warms the surface. Looking at only one component, as in the reference to Tallbloke, is likely to lead to totally wrong conclusions.

The issue of Surveys in Geophysics, where the paper of Allan was published (Vol 33, Issue 3-4, July 2012) is a special issue that contains many interesting articles (28 articles in all) on Observing and Modeling Earth’s Energy Flows by well known authors like Palmer, Trenberth, Gregory, Forster, Soden, Schwartz, and Stevens. One of the articles by O’Gorman et al on energetic constraints on precipitation was mentioned in a recent blog post of Isaac Held.

68. Pierre-Normand says:

Pekka Pirilä wrote:

“[…] If we look at a case where we have two arguments that are both valid but lead to the opposite direction, my way of thinking is that telling their relative importance implies always quantification. That’s an essential part for my claim that quantification is unavoidable in rational decision making, where we have in practice always contradictory goals.”

If we have contradictory goals, then we will likely disagree on the way in which outcomes must be quantified. Further, how does one apply quantification to arguments for the reasonableness of the goals themselves? Rather than emphasizing that deep practical disagreements stem from incompatible core goals (which I acknowledge may happen in many cases) that must be weighted against one another, in the spirit of fair compromise (which may be fine), I think it may also be quite helpful to attend to a logical feature of practical reasoning that has been called ‘defeasibility’ (originally a term of legal theory).

In the theoretical domain, deductive arguments are indefeasible. If a conclusion can be validly derived from true premises, then the argument is sound and the conclusion is therefore true. In that case, the addition of further true premises can’t defeat the conclusion of the original argument and can only be compatible with it (and also, of course, with the original true premises). In contrast with deductive theoretical arguments (about empirical truths), practical arguments that aim to justify actions (intentions, plans, policies, etc.) on the basis of (1) some general wants and/or imperatives, and (2) the empirically ascertained means to achieving them, are *defeasible*. If the conclusion of such an argument (e.g. that we ought to do A) is drawn on the basis of the premises that (1) we want U, V, W, etc. and that (2) doing A is a reasonable way to satisfy *those* goals, then the addition of further premises (e.g. that there is a better way B to achieve the given goals, or that there are other significant wants X, Y, etc. that are frustrated, or not optimally satisfied, by the proposed action) can defeat the previous conclusion. In summary, the conclusion that rationally followed from the narrow set of premises doesn’t rationally follow from the wider set. This is the essence of fallibility.

So, one proposal to cope with the defeasibility problem, and to account for the soundness of practical arguments, is to assume that a sound practical argument ought to be complete and hence should rationally aim to incorporate all our wants and consider all the possible ways of satisfying them. I think the mistake in this unreasonable demand is the purported parallel with theoretically sound deductive arguments. I would rather adapt from Anthony Kenny (without quite agreeing with his full analysis) the suggestion that practical arguments are better seen as structurally analogous to theoretical arguments *to the best explanation*. And those arguments *are* indeed defeasible. The conclusion of such an inductive argument isn’t a proposition that deductively follows from observations but rather a theory (or theoretical ‘paradigm’) that best accounts for the observations. Such an argument is defeasible just because of the well-known underdetermination of theory by experience, on the one hand, and the open possibility of gathering new experimental data such that better theories are called for, on the other. Once this parallel is clear, it becomes easier to accept the essential defeasibility of practical arguments. One is then more alert to the fact that disagreements in the practical domain can be objectively resolved through appeal to a wider set of wants.

So far, this broad account of (defeasible) practical reasons also seems to mesh quite well with your insistence on the necessity of quantification in order for various practical premises (i.e. those that pertain to goals, not those that pertain to means) to rationally bear on one another with a common measure. But I think the above step inspired by Kenny’s discussion (in the 5th chapter of his book Will, Freedom and Power, Blackwell, 1975) is only a first step in the right direction and is still a bit flawed. What is illuminating is the parallel with the theoretical *inductive* case, but this parallel must be pursued a bit further.

Once we attend not only to the underdetermination of theory by experience but also, more importantly, to the essential theory-ladenness of experience (often associated with Kuhn, but also strongly endorsed by Popper), then it is clear that not only do new premises have the power to defeat an inductive inference, but so does the construction of a better theory. Of course, in many cases it’s a combination of both empirical and theoretical inquiry that leads to scientific progress. But the main point of the analogy is that the truth of any purported ‘data’ is liable to be completely overturned by some good theoretical argument.

So, back to the case of (defeasible) practical reasoning, I am claiming that, likewise, proposals for a better — or the rational defense of an existing — practical paradigm can overturn the reasonableness of some prior wants, goals, or imperatives that were articulated within an inferior paradigm. The disputation of such premises of practical reasoning on the basis of a practical paradigm is no less a rational activity than is the criticism or advocacy of a theory, in the light of which some data can not only be better explained, but also dismissed, or newly disclosed when it wasn’t even previously in view. And also, analogously with the theoretical case, theoretical progress can result entirely from better conceptualization while quantitative arguments come later and as a result of the deployment of the better concepts. The overturned ‘data’ or ‘wants’, or paradigm, can be seen to have been categorically wrong.

There is a third proposal for improving this view of practical reasoning further, which is to dispense altogether with the idea of defeasibility and replace it with the idea of fallibility. But that is quite enough for now.

69. Pierre-Normand says:

“…This is the essence of fallibility.”
Sorry. I misspoke there. This is rather the essence of *defeasibility*.

70. Brian Dodge says:

Like a stopped clock, even a tall denialist bloke can occasionally be correct; you just need an accurate clock, and an independent reliable source, to tell when that occurs. I find it “helpful” and amusing to point out when such an accidental epiphany can be used to contradict other denialist arguments. Tallbloke is qualitatively correct that shorter-wavelength radiation warms deeper layers of the ocean. If we consider the effect on the Gulf Stream of an additional watt/m^2 of forcing from GHG back radiation (IR), or alternatively from solar radiation (VIS + IR), they would result in different temperature profiles in the water, different amounts partitioned into latent heat of water vapor versus sensible heat in the water, and different dynamics of heat transport to the pole. More heat in the water from increased insolation, as opposed to GHG IR, might result in more Arctic ice melt from underneath. A larger proportion of heat from GHG IR going into evaporation might be expected to cause more Arctic ice melt from the top and faster albedo feedback from ponding, combining for a faster and more quickly accelerating (i.e. nonlinear) fall in Arctic ice – and an acceleration of Greenland ice melt.

“We find that during this time period the mass loss of the ice sheets is not a constant, but accelerating with time, i.e., that the GRACE observations are better represented by a quadratic trend than by a linear one, implying that the ice sheets contribution to sea level becomes larger with time.”
“The best fitting estimate for the acceleration in ice sheet mass loss for the observed period is −30 ± 11 Gt/yr2 for Greenland and −26 ± 14 Gt/yr2 for Antarctica. This corresponds to 0.09 ± 0.03 mm/yr2 of sea level rise from Greenland and 0.08 ± 0.04 mm/yr2 from Antarctica.”
Increasing rates of ice mass loss from the Greenland and Antarctic ice sheets revealed by GRACE, I. Velicogna, Geophysical Research Letters, 13 Oct 2009; DOI: 10.1029/2009GL040222
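The quadratic-versus-linear point in the quote can be illustrated with a least-squares fit to a synthetic mass-loss series; the -30 Gt/yr^2 acceleration is the Greenland figure quoted above, while the -137 Gt/yr baseline trend and the noise-free series are illustrative assumptions, not GRACE data.

```python
import numpy as np

# Synthetic, noise-free Greenland-like mass series: a constant trend
# plus the quoted acceleration. The -137 Gt/yr trend is an assumption.
years = np.arange(2003, 2010, 0.1)
t = years - years[0]
accel = -30.0                              # Gt/yr^2, from the quote
mass = -137.0 * t + 0.5 * accel * t**2     # Gt

lin = np.polyfit(t, mass, 1)
quad = np.polyfit(t, mass, 2)

# Recovered acceleration = 2 * the quadratic coefficient:
print(f"fitted acceleration: {2 * quad[0]:.1f} Gt/yr^2")  # ~ -30.0

# The quadratic fit captures the curvature that the line misses:
res_lin = mass - np.polyval(lin, t)
res_quad = mass - np.polyval(quad, t)
print(f"linear-fit RMS residual:    {np.sqrt(np.mean(res_lin**2)):.1f} Gt")
print(f"quadratic-fit RMS residual: {np.sqrt(np.mean(res_quad**2)):.2f} Gt")
```

The acceleration term is exactly what a linear trend cannot represent, which is why the GRACE observations are “better represented by a quadratic trend than by a linear one.”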

It is known that high clouds have a positive effect (even Lindzen agrees ;>). If low clouds increase the effective photon path length by a factor of 3-5 in the lower atmosphere, where the concentration of GHG molecules is highest (due to higher atmospheric pressure and the fact that relative humidity is ~100% in clouds), then the “small positive” feedback from clouds may be larger than expected. Especially since even Tallbloke knows IR forcing increases absolute humidity by preferentially partitioning energy into evaporation – ;>)