Abandoning the idea of an “optimal pathway” for climate policy

Since I recently had a post about criticising economic models, I thought I would highlight a recent paper by Jonathan Koomey and colleagues suggesting that we should abandon the idea of an “optimal economic path” for climate policy. I’m certainly not an expert on this, so it might be best to read the paper, or Jonathan’s blog post.

My basic understanding is that this is a response to a view that we can use economic models (typically, integrated assessment models, or IAMs) to determine the optimal emission reduction pathway. This would be the path where mitigation, or abatement, costs balance the benefits of reducing emissions. If we were to do more to reduce emissions, then these models would suggest that the incremental benefit of these extra emission reductions would be smaller than the incremental cost of achieving them.
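To make that marginal logic concrete, here is a minimal numerical sketch. The functional forms and numbers are purely illustrative assumptions, not taken from any actual IAM:

```python
# Minimal sketch of a cost-benefit "optimum": abate until the marginal
# cost of the next tonne equals the marginal benefit of avoiding it.
# All curves and numbers below are illustrative assumptions.
import numpy as np

abatement = np.linspace(0, 1, 1001)             # fraction of emissions abated
marginal_cost = 200 * abatement**2              # $/tCO2, rises with effort (assumed)
marginal_benefit = 50 * (1 - 0.3 * abatement)   # $/tCO2, nearly flat (assumed)

# The "optimal" level is where the two marginal curves cross.
i = np.argmin(np.abs(marginal_cost - marginal_benefit))
print(f"'optimal' abatement ~ {abatement[i]:.0%} at ~ ${marginal_cost[i]:.0f}/tCO2")
```

Beyond that crossing point, each extra tonne abated costs more than the damage it avoids, which is all the “optimum” means here.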

This paper is suggesting that there is a fundamental problem with this basic assumption. As I understand it, the paper is making a couple of related arguments. One is that there isn’t necessarily a single optimal pathway: “[t]here are many possible paths with comparable societal costs, and we can choose paths we prefer.” Also, it’s suggesting that any assessment of an optimal pathway is very sensitive to the initial conditions.

If we were to determine some optimal pathway and then spend some time doing more than it suggests we should to develop and implement alternative energy sources, the optimal pathway would change. You might argue that we would then have followed a less than optimal pathway, but once this has happened, we can’t really turn back the clock and go back onto the original pathway. That, at least, is roughly my understanding of what is being suggested (as an aside, I think one of the issues with trying to determine an optimal pathway is defining the baseline, which can change with time).

Again, I’m not particularly familiar with the details of this type of work, so may well misunderstand what is being suggested. I’m also not claiming that what this paper is suggesting is correct. I will admit, though, that I’ve also had a couple of problems with what seems to be implied by these optimal economic pathways. One is simply that they do sometimes seem to suggest that it would be optimal to follow a pathway that has a good chance of leading to a substantial amount of warming.

The other is that I don’t think any model (scientific, or economic) should define what we should do. They can certainly provide useful information, but any decisions that are made should also be influenced by our values and what we think is right and wrong, which cannot – in my opinion – be determined by an economic analysis alone.


136 Responses to Abandoning the idea of an “optimal pathway” for climate policy

  1. Dan Riley says:

    My naive impression is that economists should put more effort into uncertainty and sensitivity analysis.

  2. Dan,
    I vaguely recall a paper that tried to look at how well economic models would have done at predicting, for example, economic growth over the second half of the 20th century. IIRC, they didn’t do very well, but I can’t seem to find the paper.

  3. Ben McMillan says:

    I feel like these IAM-optimisation exercises have anyway faded a bit in real-world prominence. So e.g. they are conspicuously absent from the discussion about 1.5C vs 2C.

    The early optimization results look pretty much discredited, and the “improvements” after the fact that are more consistent with reasonable policy settings don’t really inspire confidence. If the method allows you to get whatever answer you like, it isn’t really predicting anything.

    I think part of the point that Koomey is making is that local optimisation is not enough, because a local optimum may not be a global optimum. This is partly just a technical complaint about IAMs (because in principle you could find the global, rather than local optimum) but also about economists avoiding models with concavities like economies of scale because they are more interested in mathematical neatness than accurately representing the real world. You can’t discover a remote optimal path by using local prices.
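A toy illustration of that local-vs-global point (the cost landscape here is entirely made up):

```python
# Sketch: with non-convexities (e.g. economies of scale), a local
# optimiser can get stuck in the optimum nearest the status quo.
# The cost landscape is an arbitrary assumption for illustration.
import numpy as np
from scipy.optimize import minimize

def total_cost(x):
    # Shallow minimum near the status quo (x ~ 0.1) and a deeper
    # one after a costly transition (x ~ 0.9).
    x = x[0]
    return 0.5 * (x - 0.1) ** 2 - 0.4 * np.exp(-30 * (x - 0.9) ** 2)

for start in (0.0, 1.0):
    res = minimize(total_cost, x0=[start], bounds=[(0, 1)])
    print(f"start={start:.1f} -> x*={res.x[0]:.2f}, cost={res.fun:+.3f}")
# Starting near the status quo finds only the shallow local optimum;
# the globally cheaper path is invisible to purely local search.
```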

  4. Economists and petroleum industry executives are not going to think this sounds like a good idea. It is, however, an excellent idea. There is no optimal path. I think that is the case for the reasons cited and also because we don’t understand the global climate system well enough to identify the “optimal” path even if the other factors were fully fleshed out.

    At some point soon, I think a lot of people who have been on the fence about the climate crisis are going to jump off the fence in favor of significant action. The current warm ENSO season is providing a look at the future in front of us. Lots of folks are finding the weather to be too hot and too stormy and too unpredictable now.

    We may soon reach the point where the US GOP party begins to embrace green energy and aggressive action on emissions. When that happens, the USA will move on from the talk the talk stage to the walk the walk stage of climate crisis response. When the GOP does embrace the action, they will probably blame the Dems for delaying the action on the crisis by just talking about action. That complaint will have some basis because the current Dem administration has continued to sell off petroleum reserves even after candidate Biden promised to stop that practice.

    Should be ok. Just looks a little ugly this summer. Temps are high, but it’s relatively cool in Cascadia. My grapes, kiwis and hops are doing great. The two mini split heat pumps I added this year are keeping us comfortable inside. And I am putting the finishing touches on an outdoor shower under our grape trellises today. Almost everybody in the household is looking forward to using this new water installation once I open the valves.

    M and I watched a pretty encouraging PBS special about cooling the planet. I think it was this one: https://youtu.be/PeYJTluQ5tM

    Daily CO2
    Jul. 22, 2023 = 422.64 ppm
    Jul. 22, 2022 = 417.1 ppm

    CH4
    March 2023: 1920.74 ppb
    March 2022: 1908.97 ppb

    Cheers

    Mike

  5. wmconnolley says:

    An “optimum pathway” is clearly a bad idea; just ask Hayek. Which is presumably why Nordhaus doesn’t say it. Attempting to balance costs and benefits, though, is clearly a good idea, which is why Nordhaus says that.

  6. WMC,
    I can’t quite work out if you’re being pedantic. Firstly, the word is “optimal” and it appears in many places in his 2018 paper. Is your issue with the word “pathway”? His paper seems to use “case”, “scenario”, and “policy” after “optimal”, so “pathway” seems reasonable to me, unless I’m missing the point you’re trying to make.

    https://pubs.aeaweb.org/doi/pdfplus/10.1257/pol.20170046

    If you’re actually trying to make a broader point, you could make it, rather than being somewhat dismissive. YMMV, of course.

  7. wmconnolley says:

    The problem is the word pathway, which implies planning to an unreasonable level; Hayek should have been a clue. Making up words and putting them in the mouths of people you disagree with is not a valid form of argument.

  8. WMC,
    Okay, sigh (this reminds me of a discussion I once had with Lucia). I’m not sure if you’re criticising the paper (which, I think, used “path”) or my blogpost, which does use “pathway”, but if the latter, it was just a word I was using to try and explain what I was getting at. I was using it in the sense of “scenario” and was clearly (I thought) referring to the idea that there is some cost-benefit optimum. I wasn’t trying to make up words and put them in the mouths of others, I was just writing a quick blog post on a Sunday afternoon about a paper I happened to have come across the day before.

    FWIW, my preference in these kinds of discussions is to put a bit of effort in, try and work out what someone is actually trying to say, clarify things if necessary, and maybe be a little bit charitable. Again, YMMV and if you can’t be bothered, that’s obviously fine. Unfortunate, maybe, but it is what it is.

  9. wmconnolley says:

    > maybe be a little bit charitable. Again, YMMV and if you can’t be bothered

    Well, you could try that yourself? I’m referring to your source, JK.

    I think substituting the word pathway for scenario is an error. It appears to be JK’s error, though it is also possible that N has used the P-word in some work that for some reason JK doesn’t choose to cite. I think you should have noticed JK’s error.

    I’m judging JK based on the blogpost; I don’t have access to the paper. Phrases like ‘Many economic modelers believe that there is an “optimal economic path”’ or “a single optimal economic path” don’t really work if you replace “path” with “scenario”.

    “Path” is dirigiste; it is probably a natural word for JK, but would be (I hope) not natural for N.

  10. Well, you could try that yourself? I’m referring to your source, JK.

    Fair enough.

    Nordhaus does indeed use the word “path” in his paper. He doesn’t quite put it after “optimal”, but he does seem to regard the various emission scenarios as “paths”. However, my understanding of the argument in JK’s paper is that there isn’t necessarily a single cost-benefit optimum, partly because there may actually be more than one (not sure if this is correct, or not) and partly because of the sensitivity to initial conditions.

    For example, as I understand Nordhaus’ 2018 paper, his cost-benefit optimum is a scenario that would lead to 3.5C +- 1C of warming. However, it seems that many today think that our current policy pathway is heading towards something like 2.7C +- 1C. This might imply that we’ve done too much and shouldn’t have done so, but given where we are now, it would seem that the cost-benefit optimum is no longer one that would lead to 3.5C +- 1C of warming (maybe this is wrong, but I think this is the kind of thing that JK’s paper is implying with regards to sensitivity to initial conditions).

  11. wmconnolley says:

    Yeah, I’d just got there too. Perhaps “path” wasn’t a great hill to die on. Nonetheless, if you read the paper (https://elischolar.library.yale.edu/cgi/viewcontent.cgi?article=2261&context=cowles-discussion-paper-series) as well as the headline, the “path” element fades out. It is explicitly said that the policy options considered are limited, and optimal appears in quotes. I think JK errs in thinking that N believes there is a single identifiable path; this is a crude mischaracterisation of N.

  12. WMC,
    Sure, and one could argue that the cost-benefit IAMs are simply providing some information that we could use to inform policy options (implement a carbon tax, for example). However, I think there is an aspect of this that implies that there is some kind of cost-benefit optimum, or an optimal carbon tax that would lead to this cost-benefit optimum.

    This is what the paper claims:

    This commentary focuses attention on underlying ideas about “optimal paths” that are in our view not widely enough understood and are often unstated, namely that,

    1. there IS a single unique optimal path to solving the climate problem,
    2. this path exists independent of human choices, and
    3. society can discover this path in advance through better data collection, analysis, and logical thinking.

    I certainly don’t know enough about the field to know if this is really what underlies the concept of a cost-benefit optimum, but cost-benefit IAMs seem to at least be implying something like this.

    If this isn’t what is actually intended, then doesn’t JK’s point have some merit (we shouldn’t really think there is a single optimum) even if he is potentially mischaracterising the ideas underlying cost-benefit analyses?

  13. Willard says:

    Vintage 1992, an optimal transition path for controlling greenhouse gases:

    Source: http://stephenschneider.stanford.edu/Publications/PDF_Papers/OptimalScience1192.pdf

    There are 18 *optimal* in that 5-pager, so there are more than 3 *optimal* per page, about one per column. Economists might have a hard time letting go of optimal talk.

  14. I think WMC’s objection was to “path” or “pathway”, rather than to “optimal”.

  15. Willard says:

    Well, the two concepts go hand in hand. A path is an immediate interpretation of a curve in a graph that evolves in time: the ideal trajectory that basically maximizes the solution to a bunch of equalities. It would be hard for economic optimisation not to give us what looks like paths or pathways.

    Those who argue that we ought to drop the use of a Business as Usual scenario ought to understand the idea that we ought to drop the usage of an optimal path simpliciter. An optimal path is always relative to a model. As soon as we rely on many models, there cannot be one and only one single optimal solution. Unless the models are equivalent, which they seldom are.

    So at best we got something like very rough ballparks. This is fine for very broad guidelines, as long as we factor in the uncertainties. Witness how Bill’s optimal solutions evolved over the decades.

  16. wmconnolley says:

    I think all of JK’s points 1-3 that you quote are misrepresentations of what N is saying, and I think that reading the reffed paper makes this clear. “there IS a single unique optimal path to solving the climate problem” is particularly wrong. At least, as a representation of N. It’s probably a view you could take from a naive reading of the more mathematical papers.

    N’s key point, which is from the 1992 paper but I quote from his Nobel address, is “Perhaps we should aim to limit the global temperature increase to 2°C, or even more ambitiously to 1½°C. However attractive a temperature target may be as an aspirational goal, the target approach is questionable because it ignores the costs of attaining the goals. If, for example, attaining the 1.5°C goal would require deep reductions in living standards…, then the policy would be the equivalent of burning down the village to save it. If attaining the low-temperature path turns out to be easy, then of course we should aim for it. These points lead to an approach known as cost-benefit analysis…”.

  17. Just to clarify something, JK was claiming to present the assumptions underlying the development of cost-benefit optimums in general, with N being a classic example.

    I think what you quote from N, in general, makes perfect sense. Clearly we don’t want to do something that does more harm than good. I would hope most people would agree. On the other hand, this doesn’t immediately imply that the cost-benefit optimum that comes from an IAM is necessarily correctly representing an actual optimum, partly because different IAMs produce different results. It may be that each analysis produces the “correct” cost-benefit optimum given the modelling assumptions, but I don’t think one can claim that these analyses are converging in some way. In fact, I think there are some that suggest the Paris targets are indeed consistent with a cost-benefit optimum (which doesn’t mean they’re right, just that these analyses do seem to be quite sensitive to the assumptions).

    So, if you think JK’s paper is “wrong” because it mischaracterises the assumptions underlying these cost-benefit analyses, that’s obviously fine. I still think that some of what it suggests (in particular the sensitivity to initial conditions) is worth thinking about.

    I would also argue that people can assess risk in different, but also valid, ways. It may be perfectly reasonable to be concerned about aiming for the Paris targets because of the risk that it may lead to deep reductions in living standards. On the other hand, others may regard the risk of doing less and potentially undergoing irreversible warming of > 3C as a risk they would rather we avoided. I’m not suggesting that there is some “right” answer, just that I don’t think any modelling will alone allow us to determine this (not that anyone is necessarily suggesting exactly this, but it does seem to be implied by the suggestion that “[t]hese points lead to an approach known as cost-benefit analysis …”).

  18. wmconnolley says:

    > this doesn’t immediately imply that the cost-benefit optimum that comes from an IAM is necessarily correctly representing an actual optimum

    Indeed, and I don’t think N ever asserts that. All he’s trying to show is that there’s a cost-benefit tradeoff, and providing one way to think about it; to show you the kind of parameters that tradeoff depends on.

    Skipping lightly over stuff we agree closely enough on for govt work…

    > suggesting that the Paris targets pass the cost-benefits test

    It isn’t possible to say that, without specifying how you’ll meet those targets. Obviously, it can’t be: the target alone provides no estimate of costs. Looking at that paper, it is using DICE, so I think it is making the original DICE assumptions: that the targets will be reached in an economically efficient manner. That isn’t the path we’re on.

  19. WMC,

    Looking at that paper, it is using DICE, so I think it is making the original DICE assumptions: that the targets will be reached in an economically efficient manner. That isn’t the path we’re on.

    Indeed, I wasn’t suggesting that we’re on that path, just that you seem to be able to find analyses that suggest that the Paris targets do (or did) pass a cost-benefit test and others that say they don’t (or didn’t). In a sense, I think this is one of the points being made in the JK paper. Even if there is a single scenario that would be optimal, unless you actually follow it in some way, the cost-benefit optimum will probably change. It could presumably work both ways. It may no longer be possible to meet the Paris targets in a way that means the costs don’t exceed the benefits, but we do seem to be on a path that suggests that the cost-benefit optimum suggested in Nordhaus’s 2018 paper is not the cost-benefit optimum of today.

  20. Willard says:

    From the horse’s mouth:

    The DICE model is a dynamic optimization model for estimating the optimal path of reductions of GHGs (8). The basic approach is to estimate the optimal path for both capital accumulation and reductions of GHG emissions in the framework of the Ramsey model of intertemporal choice (9, 10). The resulting trajectory can be interpreted as the most efficient path for slowing climate change given inputs and technologies; alternatively, the trajectory can be interpreted as a competitive market equilibrium in which externalities or spillover effects are corrected with the use of the appropriate social prices for GHGs.

    Both interpretations come with their problems. Taking the second one seriously may have led Bill to ask for expert advice. This in turn led to a model that increased the cost of carbon with each iteration of the exercise.

  21. Everett F Sargent says:

    So a Nobel for cost-benefit analyses? Pretty much all CE’s are formally trained in CB analysis as undergrads. Which pretty much sums up my opinion of giving out a Nobel in the so-called field of economics to begin with.

    Maybe the rest of Earth’s species should do economics too.

  22. Joshua says:

    Climate change = covid origins = climate change.

    “There are people saying it could be natural variability,” he said. “Absolutely we can’t conclusively rule it out yet. But it’s very unlikely.”

    https://www.abc.net.au/news/2023-07-24/antarctic-sea-ice-levels-nosedive-five-sigma-event/102635204

  23. Let’s say you are “optimizing” with a cost-benefit analysis, where the optimum you are seeking to maximize for is the *minimum* *SUM* of the net present values of both future damages from climate and abatement costs (i.e., costs associated with decarbonization).

    **AS NORDHAUS HIMSELF SHOWS IN HIS NOBEL LECTURE** (Slide 7👇) those combined NPV costs are *LOWER* in the “T ≤ 2° avg over 200 yrs¹” scenario than under the “Optimal” **scenario**.

    WMC is correct. “Optimal” refers to specific stipulated **scenarios**.

    There’s an interesting recent podcast by Pete Irvine and Jesse Reynolds, “Challenging Climate”, where they interview Richard Tol. The hosts ask about Nordhaus, and Tol – no timid, unassuming piece of work himself! – readily, almost affectionately, describes Nordhaus as fairly arrogant and as someone who has little time for people who aren’t minimally versed in the work he – and many others! – have been doing. He’s written several very accessible books on the topic, review papers, given lectures, etc.

    He’s not much concerned if a bunch of people on twitter are incorrectly saying *Look, Nordhaus says in his Nobel lecture that the optimal temperature is 3.5°C!!!* (Which he *doesn’t*.)

    One might notice as well – perhaps in a bit of a stubborn nod on this point – that in the 2023 paper updating DICE, in the ***SCENARIOS*** section (Section V. “Scenarios to evaluate”) for *INPUT INTO* the DICE model, he has changed the name of the “Optimal” *SCENARIO* to “Cost-benefit optimal (C/B optimal)”.

    And SAYS in this regard:

    “[Note that this scenario was called “optimal” in earlier versions. We have added the modifier “cost-benefit” to emphasize that it relies on monetized impacts and uses standard economic approaches to welfare maximization. It differs from other approaches that rely on precautionary or threshold-avoidance principles.]”

    By the way, in Jonathan Koomey’s blogpost, he shows a graph where the marginal abatement cost of the next tonne of CO₂ emissions intersects with the marginal damages of the next tonne of CO₂, and, if I recall correctly, neatly has them intersecting at “a least cost optimum”.

    I’m going to perhaps write more about this – maybe there’s more in the paper itself – but this is seriously problematic. First of all, the shape of the curves is very misleading – the real-life marginal damages curve is (vexatiously) *VERY* flat. This is a straightforward ramification of the transient climate response to (cumulative) emissions, which implies that each tonne increases the marginal (temperature) damage by about 1/2.5 trillionᵗʰ of the temperature change that has already occurred. Or, 0.00000000000045°C (a quick back-of-envelope check of this figure is sketched at the end of this comment). Whereas the marginal abatement cost curve rises quite steeply as we move towards *eliminating* carbon from the production of, say, steel, chip-grade silicon, aviation, etc. (Further, Koomey – in the blog, anyway – seems to put words in Nordhaus’ mouth, suggesting that the “optimal price of carbon” is at this intersection, and pointing to a quote from Nordhaus just above this claim as evidence. But Nordhaus *ACTUALLY* says there that the optimal carbon price is equal to the *damages* of the marginal tonne of emissions, i.e., the social cost of carbon (SCC).)

    This is *VERY IMPORTANT* for a number of reasons, but notably because the stylized “intersection” plot Koomey shows – which is derived from similar standard plots in the environmental economics literature for *FLOW* pollutants – *never actually solves for zero carbon emissions* (unless in the unrealistic case of *marginal* abatement costs also being $ZERO).

    Nordhaus addresses this dilemma if you look further into DICE. (One of the more controversial things that Tol concludes, *conversely to Nordhaus*, is that we *ARE* likely to ultimately “solve for” a small *but non-zero* level of ongoing CO₂ emissions – and he says this fully aware and comprehending of the science behind the requirement of net-zero CO₂ emissions for stabilizing temps. You can agree with him or not, but one might be surprised at one of the key reasons he concludes this, which essentially coincides with an argument very, very popular amongst the most extremely agitated climate activists.🤷)

    I’m getting away from my original point, which is that “optimal” as Nordhaus is using it refers to specific, stipulated *scenarios*, not pathways. These stipulate things such as international cooperation/participation on mitigation being “optimal” – i.e. 100%, with a harmonized carbon price or similar. If real-life conditions are assumed to be worse than that ideal, then the cost of abatement goes up, and yadda, yadda through to welfare-maximization.

    I’m done commenting for the moment. Perhaps more later.

    ¹ meaning a scenario with some overshoot of 2.0°, but subsequently some deliberate anthropogenic intervention to accelerate carbon drawdown after net-zero emissions such that the 200 yr average temperature is limited to ≤2°C.
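As a back-of-envelope check on the per-tonne warming figure above (assuming a TCRE of roughly 0.45°C per 1000 GtCO₂; the exact value is uncertain):

```python
# Warming attributable to one marginal tonne of CO2 under a TCRE-style
# linear relationship. The TCRE value used here is an assumption.
TCRE = 0.45 / 1e12   # deg C per tonne of CO2 (~0.45 C per 1000 GtCO2)
print(f"warming per marginal tonne ~ {TCRE:.1e} C")
# ~4.5e-13 C, i.e. of order 1/(2.5 trillion) of a degree per tonne --
# which is why the marginal damage curve for CO2 is so flat.
```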

  24. Ben McMillan says:

    Honestly, Koomey is making some pretty simple points here about global vs. local equilibrium (among other things); it is a pity that the conversation has turned to defensive quibbling about Nordhaus’ exact choice of words.

    By the way, DICE doesn’t calculate a minimum-NPV optimum: it is maximising a welfare function, which is subtly different (largely due to inequality aversion).

  25. “By the way, DICE doesn’t calculate a minimum-NPV optimum: it is maximising a welfare function, which is subtly different”

    👆CORRECT!

  26. Rust,
    Yes, I do get that what Nordhaus is presenting is a cost-benefit optimum scenario, but I also get why some interpret this as Nordhaus suggesting that the optimal temperature will be 3.5C (although I must admit to still not being entirely sure what the difference is between a “scenario” and a “pathway”).

    However, as Ben suggests, it still seems that Koomey is making some reasonable points. If the cost-benefit optimum depends on the scenario, can we just implement – or aim for – a different scenario? Also, as you point out in your comment, if real-life conditions don’t follow the ideal, then things change.

    So, what I realise I’m now confused about with regard to all of this work is what is it actually telling us about the real world (i.e., how do you take the results of a cost-benefit IAM and use it to inform real-world decision making?).

    It also seems odd that the response to the confusion on social media (for example) is to be dismissive, rather than to put some effort into clearing up this confusion. I realise that Nordhaus may well have tried to do so through his writings, but clear explanations of this seem a bit lacking. I do have some sympathy, as it must be quite frustrating, but I’m also aware of how climate scientists would have been criticised if they’d simply dismissed critics of their work for simply being very confused (as many were).

  28. Everett F Sargent says:

    Just remember … just say no … just do it … you got opposable thumbs … they took our jobs … if you did understand things then you would see it for the house of cards that it is.

  29. Just to try and clarify one confusion. I had interpreted “path” or “pathway” to be the trajectory that emerges from the model, given the assumptions in the model and interventions that the model assumes are implemented. I wasn’t interpreting it as being designed, simply as being what emerges.

    So, it’s not so much that we’re trying to find the optimal path, but that this is what emerges from a cost-benefit analysis. Others, it seems, interpret it as what we’re trying to follow, rather than what we end up following.

    As far as scenario goes, that seems to be what I might think of as the model assumptions, or maybe the initial conditions (or maybe both). So, the cost-benefit optimum that emerges from a model then depends on the assumed scenario (i.e., international cooperation/participation on mitigation being “optimal”). One obvious question would then seem to be how sensitive this is to the assumed scenario.

  30. Everett F Sargent says:

    The more assumptions it takes to build something the less likely it is to stand the test of time.

  31. Willard says:

    > among other things

    My favorite argument of the paper is that actual choices open up future options. Investing in renewables drove their cost down in a manner even the most techno optimist could not foresee. Such surprises cannot be anticipated very well, for if they could be, they would already be priced in.

    Jonathan & al also quoted from the Nobel lecture. In it, Bill clearly states that Integrated Assessment Models (IAM) help calculate carbon prices, and that an optimal policy prices carbon to its exact social cost. Since the tax changes as years go by, an optimal pathway is created simply by going from optimal price to optimal price. There is little else needed for Jonathan et al’s argument to hit home.

    When Bill refers to this optimum as a purely economic solution, he does seem to intimate independence from human agency. The whole exercise presumes that this kind of calculus can and should inform economic agents and markets. So I am not sure which misrepresentation Jonathan & al committed, or how it would counter their demonstration that IAMs can only be lousy cost predictors.

  32. Rust,
    I was hoping to better understand this

    I’m going to perhaps write more about this – maybe there’s more in the paper itself – but this is seriously problematic. First of all, the shape of the curves is very misleading – the real-life marginal damages curve is (vexatiously) *VERY* flat. This is a straightforward ramification of the transient climate response to (cumulative) emissions, which implies that each tonne increases the marginal (temperature) damage by about 1/2.5 trillionᵗʰ of the temperature change that has already occurred.

    I had once thought that the SCC was largely scenario independent because of a comment in one of Chris Hope’s papers. However, he was arguing (IIRC) that this was because the logarithmic relationship between concentrations and warming compensated for the non-linear dependence of damages on warming.

    However, this is wrong because – as you note – warming depends linearly on emissions. My understanding is that the damages due to an extra tonne of CO2 are determined by comparing two scenarios that differ only by that one tonne of emissions. Hence, if we start with a scenario with rising emissions, we’d expect this to be much bigger than in a scenario with rapidly declining emissions (I think, at least).

    Hence, it’s not clear to me that Jonathan’s illustration is all that misleading, but I may misunderstand your point.

    I don’t know how to link pictures; I was talking about the schematic in his blog about his paper.

    I’ll try this and see if it saves me some words.

  34. Rust,
    Yes, that worked, but I’m still not convinced that it’s all that misleading.

  35. Ok, it worked.

    So, this is the classic MAC vs MD curves, and for a flow pollutant, you keep reducing emissions from left to right until the marginal damages are no longer above the marginal cost of abatement. Which intersects at “Least Cost Optimum” on Koomey’s plot.

    But it leaves you still emitting some “acceptable amount” of the pollutant, where the marginal abatement cost of any additional emissions reduction exceeds the damages avoided.

    But note that the slopes/shapes of the curves are roughly symmetrical. Yes, it’s a schematic, but if the curves *aren’t* symmetrical, it can make it very difficult to get to the edge cases.

    Stock pollutants like CO₂ tend to have relatively flat marginal damage curves vs the steeply falling one above.

    If I recall correctly, Weitzman did some of the early work on these situations in the 1980’s.

    I need to pause for a second to look at how Koomey has oriented his version.

  36. I may mess this up slightly, but if I do it’s because *I’ve* made an error.

    But if you have a situation where the MAC curve is quite steep – which the mitigation research tends to indicate it is, at least after some low-hanging fruit – but the MD is fairly flat (or even sloping *slightly* up?) for the reasons I suggested related to TCRE, then the intersection occurs far too far to the left, and you end up with far too high a level of residual emissions.

    (and always remember, there’s a third “marginal” involved in the background – the marginal *benefit* of the marginal tonne of emissions – which can be *very* large and can make paying a small fee to cover the *marginal* damages of that emission seem quite trivial)

    Nordhaus solves for this by employing a “backstop technology”, where the MAC curve *also* flattens out when it gets deployed (think CDR) and as long as the MD eventually intersects the backstop technology abatement cost, you eventually get zero emissions.

    But if the MD curve itself was very steep and the MAC flattish, we’d get to very low residual emissions far quicker.
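A small sketch of how these shapes interact, with every curve and number assumed purely for illustration:

```python
# Steep MAC capped by a "backstop" cost, versus a flat MD curve:
# where the flat MD sits relative to the backstop decides whether the
# cost-benefit rule leaves residual emissions or drives them to zero.
import numpy as np

abated = np.linspace(0, 1, 100_001)                   # fraction of emissions cut
backstop = 300.0                                      # $/tCO2 backstop cost (assumed)
mac = np.minimum(20 * np.exp(5 * abated), backstop)   # steep MAC, then capped (assumed)

for md in (120.0, 350.0):                             # flat marginal damages (assumed)
    frac = (mac < md).mean()                          # abate every tonne with MAC < MD
    print(f"flat MD = ${md:.0f}/tCO2 -> abate ~{frac:.0%} of emissions")
# MD below the backstop leaves a large residual; once MD exceeds the
# backstop cost, abatement goes all the way to zero emissions.
```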

  37. Now, some people say “that’s the wrong way to look at this – we should just set the emissions target and discover what it costs to achieve it”… and, in fact, that’s largely what the Paris Agreement and the NDC’s are doing.

    But it doesn’t mean the dilemma of these marginal dynamics isn’t still operative in the real economy. 🤷

    And these are largely the tradeoffs that the CBA-IAM’s were designed to surface and explore.

    Just another aside, but Nordhaus was very explicit that the main uses of DICE were for descriptive analytics, not prescriptive policy.

    Yes, he had some favoured policy prescriptions.

    But he was also skeptical about slotting in certain objectives or assumptions that he didn’t think were warranted under real-world decision making.

    It’s easy enough to get DICE to solve for practically any pathway you’d like, depending on how you parameterize it.🤷

  38. Steven Mosher says:

    Now, some people say “that’s the wrong way to look at this – we should just set the emissions target and discover what it costs to achieve it”… and, in fact, that’s largely what the Paris Agreement and the NDC’s are doing.

    no. There is no optimal pathway.

    that means whatever you do will be wrong or right depending.

    so, just decide how much you want to spend and discover damages and benefits along the way.

    i look at it through this Lens.

    in building a weapon system, offensive or defensive, it’s impossible to predict the future or the optimal mix of forces.

    so, you decide what you can afford to spend. then spend it all diversifying your basket of goodies

  39. Ben McMillan says:

    Koomey is using figure 1 to recall in schematic terms how cost-benefit analysis works. The point is not to argue for this particular shape to these curves, but to criticize the idea that the mitigation cost curve really exists (in some time-invariant, path-independent form) at all. Or at least, that knowing the shape of these curves in advance might be impossible. And there might be multiple local optima, with the conventional approach leaving us stuck in the one near the status quo.

  40. Willard says:

    > you decide what you can afford to spend.

    That reminds me of a story:

    If I remember correctly, at one point, the technical analyst drew a support line on a chart and said to the fundamental analyst that it was a good time to buy and the price won’t go below the price level he drew.

    “You are wrong”, said the fundamental analyst.

    The expert technical analyst obviously disagreed and showed how the price has reacted from that support level before.

    The fundamental analyst then picks up his phone and sells around 2 million quantity of the security they were analyzing.

    As soon as the big sell order was placed, the price made a big move in the downward direction and easily crossed the support level that the technical analyst so confidently drew.

    The expert technical analyst was left in shock and made a complete fool of himself in front of the expert fundamental analyst.

    We ain’t in a pure world of observers.

  41. Rust,

    Just another aside, but Nordhaus was very explicit that the main uses of DICE were for descriptive analytics, not prescriptive policy.

    Indeed, and I think this is a perfectly reasonable thing to do. I don’t think this is well understood, though, amongst those who claim he thinks 3.5C is optimal, and amongst those who use these kinds of analyses to argue that the impact of CC will be small even along a high-emission pathway.

  42. Steven,

    no There is no optimal pathway.

    that means whatever you do will be wrong or right depending.

    so, just decide how much you want to spend and discover damages and benefits along the way.

    Isn’t that essentially what the Koomey paper is suggesting?

  43. Rust,
    Thanks for the explanation. I must admit to now being even more confused by this

    Stock pollutants like CO₂ tend to have relatively flat marginal damage curves vs the steeply falling one above.

    If I have a flow pollutant, then if the emissions are constant, the warming stabilises. If we then do some abatement, there will be some costs and initially the damages will reduce much more than the cost. However, there will come a point where the incremental cost is greater than the incremental reduction in damages (where the curves cross) and it will no longer be cost effective to do more abatement. Emissions would then stabilise at that new level and, for a flow pollutant, so would warming.

    However, for a stock pollutant this isn’t what happens. Any level of emission will lead to continued warming, so should then lead to increasing damages. So, how does one define the cross-over point? Is there some maximum timescale (i.e., until 2100, for example) or am I missing something?
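A minimal sketch of the stock-vs-flow distinction being drawn here (all numbers assumed):

```python
# Flow pollutant: impact tracks the emission *rate*, so constant
# emissions give a constant impact. Stock pollutant (CO2): warming
# tracks *cumulative* emissions, so constant emissions keep warming.
import numpy as np

emissions = np.full(100, 40.0)        # constant 40 GtCO2/yr for 100 yr (assumed)
tcre = 0.45 / 1000.0                  # deg C per GtCO2 (assumed TCRE)

flow_impact = emissions / emissions[0]          # flat at 1.0 throughout
stock_warming = np.cumsum(emissions) * tcre     # keeps rising

print(f"after 100 yr: flow impact {flow_impact[-1]:.1f}x (stable), "
      f"stock warming +{stock_warming[-1]:.1f} C and still rising")
```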

  44. verytallguy says:

    “just decide how much you want to spend and discover damages and benefits along the way.”

    The issue with this is that damages are committed decades in advance.

    Alternative view: decide how much damage you are prepared to tolerate and spend to limit to that.

    Second alternative: admit fossil fuels are finite, and design policy taking that into account.

  45. Ben McMillan says:

    The effective discount rate sets the timespan of interest; so the model is comparing incremental damages vs mitigation over the next 20 years (given an effective discount rate of ~4%).

    Eventually damages rise to the point where it is worth employing the CCS-like ‘backstop’ technology to prevent further warming (around 5C or so?).
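Some illustrative arithmetic on what a ~4% effective discount rate does to far-future damages:

```python
# Present-value weight of $1 of damages arriving N years from now,
# at a 4% effective discount rate (illustrative arithmetic only).
rate = 0.04
for years in (20, 50, 100):
    weight = 1 / (1 + rate) ** years
    print(f"damages {years:3d} years out are weighted at {weight:.0%}")
# ~46% at 20 yr, ~14% at 50 yr, ~2% at 100 yr: damages arriving late
# this century barely register in the optimisation.
```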

  46. verytallguy says:

    If the effective discount rate is 4% surely you’re essentially ignoring damages, as pretty much all damage will be >20 years in the future of any relevant decision today?

  47. VTG,
    Yes, that’s probably the case.

  48. Willard says:

    There is an old paper that argues that we should distinguish between prescriptive and descriptive discount rates:

    Source: https://www.nber.org/system/files/working_papers/w18301/w18301.pdf

    To start with something we can observe makes sense, but since the point is to change behavior, to follow what finance folks do may not suffice to increase social welfare.

  49. Willard says:

    As for backstops, Bill had em all along. Here is how the 1992 paper ends:

    [W]e have examined five different goals or approaches to GHG control: no control, an economic optimization, geoengineering, stabilization of emissions, and stabilization of climate. Among these five, the rank order (from a purely economic point of view) at the present time is geoengineering, economic optimum, no control, emissions stabilization, and climate stabilization. The advantage of geoengineering over other policies is enormous, although this result assumes the existence of an environmentally benign geoengineering option. The policies of no controls, the economic optimum, and emissions stabilization have impacts that are less than 1% of discounted consumption. Climate stabilization would appear enormously expensive

    That ought to put his “the most efficient path for slowing climate change given inputs and technologies” into perspective. In the middle of the woods, economists are often given can openers to open their cans. In fairness, a spoon would do.

  50. Each life is equal, and I’m not just talking ’bout a single species either.

  51. Speaking of optimal and non-optimal: “I don’t consider myself very alarmist. In some sense it’s not fruitful,” Peter Ditlevsen, a professor of physics and climate science at the Niels Bohr Institute in Copenhagen, told Live Science. “So my result annoys me, in some sense. Because it [the window for possible collapse] is so close and so significant that we have to take immediate action now.”

    https://www.livescience.com/planet-earth/climate-change/gulf-stream-current-could-collapse-in-2025-plunging-earth-into-climate-chaos-we-were-actually-bewildered

  52. Willard says:

    Following the “optimal pathway” tag led me to a comment that seems to make the inference-we-are-not-supposed-to-make:

    One thing that’s really interesting about the 2 C target is that it was created by William Nordhaus back in 1975 as an ‘initial first guess’ yet he’s moved well beyond his initial first guess. The recent results of Nordhaus’ DICE model suggest that a 2 C target is undesirable relative to a 2.5 C target.

    Source: https://andthentheresphysics.wordpress.com/2017/05/19/emission-pathways/#comment-96232

    When someone says that A implies B and that B implies C, it’s really hard not to infer a willingness to say C. The inference from the consequences of an optimal pricing scheme to its desirability appears quite natural.

    For history’s sake, here is the conclusion to the 1975 paper:

    To summarize, we have indicated what the efficient program for meeting certain carbon dioxide standards is in a long-term energy model. These indicate that for reasonable standards (limited to between a 50 percent and a 200 percent increase in the atmospheric concentration) the program appears feasible. Moreover, it is a program which requires no changes in the energy allocation for the first two 25 year periods, and only in the third period, centering on 2020, do modifications in the allocation take place. These modifications take the form of reducing the fossil fuel use in the non-electric sector, and replacing it with non-fossil fuels.

    Moreover, it appears that the efficient programs have rather high implicit shadow prices on carbon dioxide emissions but that the total effect on energy prices and the total cost of meeting the energy bundle of goods is relatively small. It appears that a rise in the final price level for energy goods of in the order of 10 percent is the range of estimates for the three programs investigated here.

    Subject to the limitations of the model used here, then, we can be relatively optimistic about the technical feasibility of control of atmospheric carbon dioxide. If the control program is instituted in an orderly and timely way, the world energy system can adapt to controls of the magnitude examined here without serious dislocations. It remains to be determined what a set of optimal controls would be, and how these controls could be implemented.

    Source: https://pure.iiasa.ac.at/id/eprint/365/

    To ponder on 2020 in 1975 looks funny now that we’re past it.

  53. Everett F Sargent says:

    History? Well history, in general, tells us quite a lot about our most likely future behaviors (nee selves). For instance, I see this thread as just another blame game. As in, (editorial) we don’t like those economic justifications, but we want to be bound to something that sure looks like so-called homo sapiens economical theories, just not IAM’s that give higher temperatures that (editorial) we don’t like. Blame is timeless.

    Economics is basically about fat rich people or even lower middle class and above people. And property and resources. You got no money, no property or no resources, well then, you don’t count for much in our methods ’cause $$$.

  54. I am going to come back to this, and I am looking for something I can point to that perhaps both saves me time and does a better job than I’ve done above.

    But the point I was getting at was indeed related to the Weitzman Theorem as to the efficacy of using prices or quantities as regulatory instruments depending on whether the pollutant was a stock pollutant or a flow pollutant. And it does indeed relate to the relative steepness of the slopes of the MD vs MAC curves. And – I *think* the idea has more generalizable insights than just for the choice of policy instruments, although I am still considering how to convey this with respect to Koomey’s paper – which I have worked through a bit more now.

    In any event, the point about the relative flatness of a stock pollutant MD curve vs that for a flow pollutant is captured here:

    If the marginal benefit cost curve is steeper (less steep) than the marginal abatement cost curve, then mistakes with quantity (price) instruments are less costly than mistakes with price (quantity) instruments. This is the Weitzman Theorem.

    Climate change is driven by the stock of emissions. That implies that the marginal impacts of climate change do not change much if emissions are reduced or increased by a little. The effect of a change in emissions is dampened by the stock of emissions. In other words, the benefit cost curve is shallow. The marginal costs of emission reduction do, however, vary with emissions. Therefore, for a stock problem like climate change, mistakes with a price instrument (tax) are less costly than mistakes with a quantity instrument (tradable permits). In other words, mistakes with the quantity of emissions do not matter much. After all, climate change is driven by global emissions, accumulated over decades and centuries. Mistakes with the price of emissions do matter, as the costs of emission reduction directly affects people and companies. The regulator should therefore levy a carbon tax, rather than create a market for emission permits.

    Now, I’m not mentioning this to relitigate the “prices vs quantities” policy instrument debate (especially since I think it was Weitzman who also showed that either *can* achieve the same outcome, albeit less or more efficiently, depending on the problem).

    But rather to focus on the importance of the (relative) shape of the curves.

    For what it’s worth – though I haven’t fully thought this through yet – I *think* Koomey’s argument essentially boils down to (or at least depends on) a claim that the MAC curve is much flatter than generally assumed (*and* has other attributes – potential for induced technological change, path dependency, etc. – that support this but also have other policy implications).

    I’ll maybe have more to say about that later.
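A toy simulation of the Weitzman result quoted above, with linear marginal curves and made-up slopes (not any published calibration):

```python
# Weitzman "prices vs quantities" in miniature: with uncertain abatement
# costs, compare the expected welfare loss of a tax vs a cap when the
# marginal damage (MD) curve is flat vs steep. All numbers are assumed.
import numpy as np

rng = np.random.default_rng(0)
c = 2.0                                  # slope of MAC in abatement q (assumed)
d0 = 10.0                                # MD intercept (assumed)
theta = rng.normal(0.0, 1.0, 100_000)    # cost shocks unknown to the regulator

for b, label in ((0.2, "flat MD (stock pollutant)"), (8.0, "steep MD")):
    q_cap = d0 / (c + b)                 # cap set from *expected* curves
    p_tax = c * q_cap                    # tax set the same way
    q_tax = (p_tax - theta) / c          # firms abate until MAC = tax
    q_opt = (d0 - theta) / (c + b)       # ex-post efficient abatement
    loss_cap = 0.5 * (c + b) * (q_cap - q_opt) ** 2
    loss_tax = 0.5 * (c + b) * (q_tax - q_opt) ** 2
    print(f"{label}: E[loss] tax={loss_tax.mean():.3f} vs cap={loss_cap.mean():.3f}")
# Flat MD: the tax loses far less than the cap. Steep MD: the ranking
# flips -- exactly the slope comparison in the quoted theorem.
```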

  55. Rust,
    When Weitzman wrote that, do you know if the understanding that warming scaled linearly with emissions was well accepted? As I mentioned earlier, I think that there was a conclusion in one of Chris Hope’s papers (which I couldn’t find) suggesting that the SCC did not depend on the underlying scenario since the logarithmic relationship between concentrations and forcing (warming) compensated for the non-linear damage relation. I think that is now wrong, which may then also influence the shape of the MD curve when taken into account (unless the discount rate then compensates for this, or dominates).

  56. Russell says:

    Besides doing macroeconomic damage to the EU’s 1.9 trillion dollar tourist economy, extreme heat has oddly afflicted Miami Beach. Sea surface temperatures above 100 F (101.2 was the peak at the Manatee Bay oceanographic buoy) have led to tourist flight from the sands.

    Summer sunbathers might as well head for Death Valley as risk hyperthermia from a cooling dip.

    https://vvattsupwiththat.blogspot.com/2023/07/the-heritage-foundation-inherits-wind.html

  57. SCC continues to rise significantly through time under the assumption of a TCRE-like linear relationship between cumulative emissions and temperature, both because *damages* are modeled as non-linear with respect to temperature, and because GDP is expected to grow (so there is more to damage, as it were).

    You can see this pretty simply in Myles Allen’s paper (which includes a simple excel implementation of his stab at a simplified DICE-style model he built from first principles… [first time Myles said he’s used excel, by the way!]). This is clearly using a TCRE-like module for converting emissions to temps.

    I don’t think I am really following your larger point.🤷

    https://www.nature.com/articles/nclimate2977
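A rough sketch of that point: with TCRE-linear warming per tonne, damages quadratic in temperature, and growing GDP, a simple present-value calculation gives an SCC that rises through time. Every input below is an assumption for illustration (this is not Myles Allen’s model or DICE):

```python
# Toy SCC: present value of the marginal damages caused by one tonne of
# CO2 emitted in year t, under assumed warming, GDP, and damage inputs.
import numpy as np

years = np.arange(2020, 2201)
T = 1.2 + 0.02 * (years - 2020)            # assumed warming path, deg C
gdp = 100e12 * 1.02 ** (years - 2020)      # assumed world GDP, growing 2%/yr
delta = 0.005                              # damages = delta * T^2 * GDP (assumed)
dT = 0.45 / 1e12                           # TCRE warming per tonne (assumed)
r = 0.04                                   # discount rate (assumed)

def scc(t_emit):
    m = years >= t_emit
    marginal_damage = 2 * delta * T[m] * gdp[m] * dT   # d(damage)/dT * dT, per yr
    return np.sum(marginal_damage / (1 + r) ** (years[m] - t_emit))

for t in (2020, 2050, 2100):
    print(f"SCC({t}) ~ ${scc(t):.0f}/tCO2")
# The SCC grows several-fold over the century: a hotter baseline and a
# larger economy both scale up the damage done by each marginal tonne.
```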

  58. I’m not sure I have a larger point 😁 It was mostly just whether or not the Weitzman relation still holds if we take into account the linear relationship between emissions and warming.

  59. Willard says:

    Since this is a thread about Bill:

    The Dismal Theorem depends upon some special assumptions. First, it is necessary that the value of the utility function tends to minus infinity (or to plus infinity for marginal utility) as consumption tends to zero. This first condition holds for all CRRA [constant relative risk aversion] utility functions with α > 1 , but not for many other utility functions. Second, it is necessary that the (posterior) probability distribution of consumption has “fat tails.” The fat tails for the distribution of consumption means that the probability associated with low values of consumption declines less rapidly than the marginal utility of consumption increases.

    https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1330454

    As I read it, the tail needs to be “very” fat and grow “very” fast for the theorem to apply. If we do not bind our calculus, we’d spend inordinate resources to meet every infinitely improbable threat. This is where the infamous killer asteroid counterargument against tackling AGW kicks in.

    That being said, Bill takes the theorem seriously, for it underlines that very rare and consequential events do indeed happen. He cites the market crashes of 1987, 1998, and 2008, which he estimates as 23-sigma events. Contrarians may dispute the real color of these so-called “swans.” For everything except braggadocio, they should look quite black.

  60. Infamous? Please illuminate me if you don’t mind (with a link). Thanks in advance.

    Three 23-sigma events in 21 years? Black Tuesday = 666666666666666666- sigma?

  61. Willard says:

    If you take historical time series, Everett, the variance for one day in equity markets is about one percent. As for the *But Asteroid Strike* argument:

    The lack of attention to pandemics by U.S. presidents on the eve of the biggest may also reflect a lack of attention to researching pandemics and pandemic preparation by us academics. In 2013 Nick Bostrom lamented of issues of truly existential risk, “it is striking how little academic attention these issues have received compared to other topics.” For my 2015 talk I collected data from that year on the number of published papers on the familiar topic of climate change and compared that to four emergent and extraordinary risks: asteroid impact, global pandemic, super volcano and the discovery of extraterrestrial life. That data is shown in the figure below.

    https://rogerpielkejr.substack.com/p/catastrophes-of-the-21st-century

    In doubt, search for Roger Junior and a Climateball keyword.

    He got em all.

  62. Ben McMillan says:

    The last 20 years have pretty neatly illustrated the point that Koomey has been making in various forms for some time: a whole bunch of things that looked very expensive and “hard to abate” now have a low-carbon option that is about as cheap as the traditional fossil option (solar/wind/EVs), because we were willing to pay the transition costs.

    We seem to be in a situation where one-off transition costs (mostly, learning how to do things) are more important than ongoing costs. That makes discussing marginal costs largely beside the point. Also, it is pretty much impossible to know how much transformative change will really cost until you are a fair way down the learning curve. Deep uncertainty means that even if there is some ‘optimal pathway’ given perfect knowledge, it is impossible to know in advance, and thus practically irrelevant.

    So, the early money spent on mitigation is really about exploring the solutions landscape, not directly aimed at carbon reduction. That is quite a different strategic game to the one DICE is playing.
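A quick sketch of the learning-curve dynamic Ben describes (Wright’s law), with an assumed 20% learning rate and made-up starting values:

```python
# Wright's law: unit cost falls by a fixed fraction for every doubling
# of cumulative deployment. Starting cost, deployment, and the 20%
# learning rate are all assumptions for illustration.
cost, deployed = 100.0, 1.0        # $/unit and GW deployed at the start
learning_rate = 0.20               # cost falls 20% per doubling (assumed)

for _ in range(10):
    print(f"{deployed:6.0f} GW deployed -> ${cost:6.1f}/unit")
    deployed *= 2
    cost *= 1 - learning_rate
# Ten doublings (~1000x deployment) cut costs by roughly 90% -- the
# "one-off transition cost" buys a permanently cheaper technology.
```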

  63. Everett F Sargent says:

    I don’t think that people behave very rationally during financial crises. As in, it’s my money, my territory, my life, the banks close when, stuff it in a mattress now or jump out the window of a rather tall building type thinking (see don’t worry be happy above for a very deep academic discourseful discussion).

    In other words, market runs are bound to occur on at least decadal time scales as history so illustriously teaches us. So there, so much for ludicrous sigma concoctions. :/

    No further comment as to you know who, or their mumblings, as they are the WTFUWT? of so-called academia. 😀

  64. Everett F Sargent says:

    Remember the I in IAM’s, as you don’t have to get the details right, just guess the right integrated outcome as there will be a so-called integrated outcome by default (one of those D’oh! moments for you deeeeeeeeeep thinkers). Kind of like Roulette, cover the table and someone wins and becomes famous for their deeeeeeeeeep insights. :/

    Not a new idea by any means, but moving humanity out of the elements, so to speak, is my preferred mitigation solution.

    My IAM suggests that homo sapiens peaks at 12B and after serious climate catastrophes is reduced to 6B by 2300 when temperatures reach 10C above PI. We lose most of the global south in the process, but who cares, they weren’t worth anything anyways, global north economically speaking. After a few millennia, after the ice sheets are gone and geoid rebound occurs, the new Eurotrash countries of Greenland and Antarctica emerge, become global superpowers, cause WWIII, at which point, the remaining mutated homo sapiens devolve into mammalian cockroaches. Kind of an antithesis to the movie A.I.

    That’s my Hollywood script anyways, needs a lot more work, but we all are on strike right now because those AI’s … well they … they took our jobs! 😀

  65. Bit off-topic as far as Koomey goes, but under the broader popular category of “everyone is doing everything wrong”… There were a couple of references above about (older) IAM’s not using the newer understanding that ΔT is proportional (linear) to cumulative emissions.

    Anyway, lo and behold, I am reading through a hot-off-the-presses screed about IAM’s by someone who loathes everyone, especially the damn economists who never consult with climate scientists to get their models right!!!

    And, as I said, lo and behold, he appears to botch *his own model* from the get-go by claiming that ΔT is linear with atmospheric CO₂ **concentrations**.👇

    [“Physician, heal thyself,” I think the saying goes?]

    I don’t *think* (haven’t confirmed yet) this gets him into too much trouble because he doesn’t seem to ultimately use CO₂ in his model (emissions or concentrations!🤔). [But he definitely seems to get into other trouble… Oh, and I think I just read him claim that because there are no decarbonization technologies capable of reducing emissions, the only solution is to ban all fossil fuels by 2035 and muddle on with the remaining ~25% non-fossil energy and collapsed economy and *Damn the economists!!!*…]

    Anyway, this guy is always amusing because he gets *SO* outraged and unhinged by others’ work, while his own is always littered with howlers and examples of exactly what he accuses others of doing.🤷

    My next comment will be back on topic! – but this seemed to fit (a little).

  66. Rust,

    There were a couple of references above about (older) IAM’s not using the newer understanding that ΔT is proportional (linear) to cumulative emissions.

    To be clear, I wasn’t quite sure about this, but my memory (I will look for the paper) is that Chris Hope once claimed that the SCC is scenario independent because the logarithmic dependence of warming on concentrations compensates for the non-linear relation between warming and damages. This seems to suggest that his model was assuming that a fixed pulse of emissions would increase the concentration by a fixed amount, rather than a fixed pulse of emissions leading to a constant amount of warming.

  67. Okay, I found the paper. In Section 3.4 it says

    The social cost of carbon does not vary between the baseline A2 scenario and the ‘550 ppm’ scenario; its mean value is $(2000) 43/tC under both scenarios, with a 5-95% range of $10 to $130/tC, reflecting several non-linearities in the chain of causality between emissions and discounted impacts that tend to offset each other. This finding is rather counter-intuitive and is a strong argument for using an integrated assessment model, as neither a scientific nor an economic model would likely pick it up.

    The reason why this is true is not straightforward. It is caused by the interplay between the logarithmic relationship between radiative forcing (i.e. the global warming effect) and concentration (which will tend to make one extra tonne under the A2 scenario cause less impacts), the non-linear relationship of impacts to temperature (which will tend to make one extra tonne under the A2 scenario cause more impacts), and discounting (which will tend to make early impacts more costly than late impacts)

    Click to access eprg0720.pdf
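
    Since the offsetting non-linearities are easier to see with numbers, here is a minimal sketch. The 5.35 ln(C/C₀) forcing formula is the standard Myhre et al. (1998) approximation; the climate sensitivity and quadratic damage function are my own illustrative assumptions, not PAGE’s actual parameters.

      import numpy as np

      # Standard logarithmic forcing approximation (Myhre et al. 1998).
      def forcing(conc, c0=280.0):
          return 5.35 * np.log(conc / c0)          # W/m^2

      # Illustrative linear climate response: ~3 C per doubling of CO2.
      def warming(conc, k=0.8):
          return k * forcing(conc)

      # Illustrative damage function, growing non-linearly with warming.
      def damages(temp):
          return temp ** 2

      for c in (400.0, 700.0):
          dT = warming(c + 1.0) - warming(c)       # warming from one extra ppm
          dD = damages(warming(c + 1.0)) - damages(warming(c))
          print(f"{c:.0f} ppm: extra warming {dT:.4f} C, extra damages {dD:.4f}")

    The extra warming per ppm falls at higher concentrations while the extra damages per degree rise, so the two effects partly offset – which is the flavour of the cancellation the paper describes.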

  68. Yes, I have little doubt that as we go back in time, increasingly more of the climate economics papers will have “scientific” assumptions that will seem “problematic”. The above would be an example.

    But, directly related, would be papers that didn’t explore getting net CO₂ emissions down below a residual tail of, say, 6 GtCO₂ yr⁻¹. Or examine more fully some of the damages expected at more modest warming, say, 1.5°C.

    But of course some of this would have been impossible for the economists to have known… because the scientists they were consulting with didn’t know these things at the time.

    I’m not excuse-making – I’ll get to my point in bringing this up.

    But Chris Hope’s paper above is from 2007, and I remember, at that time and for some time after, the convolutions being made to convert equilibrium temp responses to operationally useful transient temp responses.

    But it wasn’t until 2008-09 that the very first papers showing the linear response between cumulative emissions and temperature appeared. And one of *those* papers – Solomon, et al, 2008 – doesn’t actually *state* this. Because, in Pierre Friedlingstein’s telling, they themselves didn’t recognize it at the time, and it only became clear later that they had *also* identified this relationship, as had the Allen, et al, Meinshausen, et al, and Matthews, et al groups – quasi-independently – at roughly the same time. (Matthews, et al., 2009, notably for purposes here: “The proportionality of global warming to cumulative carbon emissions”.)
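
    (For reference, the proportionality those groups identified is what we now call the TCRE: \Delta T \approx \kappa \, E_{cum}. If I have the AR6 numbers right, \kappa was later assessed as likely 1.0–2.3°C per 1000 GtC, i.e. roughly 0.27–0.63°C per 1000 GtCO₂.)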

    And, of course, this same work has a separate implication/result that hadn’t been recognized before: that to stabilize temps, we’d need to achieve *net zero* CO₂ emissions. Prior to this, “we” were aiming for ~80% reductions. And UNFCCC Article 2 – the treaty governing all the COP’s, Kyoto, Copenhagen, the IPCC itself I think?, etc. – explicitly said that the objective we were trying to achieve was stabilization of atmospheric *concentrations* of CO₂. Which *did* imply that we could keep emitting some substantial amount of CO₂ and equilibrate at some concentration *AND* some temperature – which goes (somewhat) to Chris Hope’s point that the *temp* impact of a marginal *emission* eventually declined (and, in fact, seemed to go to zero🤷, at least as far as temps).

    And, of course, the scientific research on damages well below 2.0°C really exploded in *response* to the Paris Agreement somewhat catching a lot of people off-guard and setting a 1.5°C stretch goal. My own impression is that the subsequent elevation of 1.5°C to seemingly *the* goal (instead of “pursue efforts” while committing to “well below 2°C”) was in turn a response *to* the flurry of new research post-2015 examining lower-temp damages more closely (and finding disturbing things).

    So, *still* not my ultimate point!, the economists could only be expected to be working with the science at hand at the time they were publishing. Which, even then, was not an unblemished track record by the economists.

    But – finally! – my point.

    *NOW* you have a number of researchers – some with *serious* axes to grind – making practically insane revisionist assessments as if what is known in 2023 was *known* in, say, 2007. It simply wasn’t.

    That’s no excuse either, but it’s worth keeping front of mind with some of these critiques. (And I will just note that I am *not* referring here to Koomey – although the cost declines in, say, solar were something very few people, least of all economists, foresaw happening in, again, say, 2007.)

    So, I tend to cut a little more slack, given some lived experience with what was occurring *at the time* and not just retrospectively finger-pointing and blame-storming.

    Which brings me back to my *actual* *point* for having brought this up.

    This appalling new work I was looking at last night (again, 👇)

    whines on, and on, and on about how “neoclassical” economists did not consult with climate scientists (and he names people like Marshall Burke, Sol Hsiang, Gernot Wagner, Simon Dietz and a host of others – and it is just laughable).

    But what climate scientists did *HE* consult who told him that temperature is linearly proportional to atmospheric CO₂ *concentrations*???

    Now, as I said, he doesn’t actually pay the price for this mistake, for two reasons: for the period of extrapolation in question, logarithmic appears “close enough” to linear that he gets away with it, *AND* he basically goes on to define an “emissions-less” damage function where temperature seems to just rise with time irrespective of emissions. (Which is *another* a-scientific assumption, but it means any of his emissions→concentrations→temp chain basically just drops out, as far as I can tell.)

    And *this* sort of vindictive-yet-error-prone stuff just drives me bonkers! “I accuse my enemies of having the temerity to not consult with climate scientists!” But then – or at least it appears to me! – that Neither. Does. HE!!!

    And there appears to me to be *another* whopper in this report.

    He seems to calibrate his damage function on the NOAA weather disasters time series (non-inflation-adjusted, by the way, which seems problematic in itself).

    So, this is about 0.7% of global GDP presently.

    And then he fits a quadratic, a logistic, an exponential, and I believe a function similar to that of his nemesis, the Kramer to his Newman, the Moby Dick to his Ahab.

    And, presto, *100%* of global GDP is lost by as early as 2060.

    But hang on a minute!

    The NOAA disasters damages are measuring actual *damages* – mostly property damages – and expressing them for *context* as a % of global GDP. They’re not an estimate of how much GDP *itself* is damaged.

    So, for instance, with the global property market worth about $550 trillion in 2020 (and overall assets around $1,600 trillion) and global GDP at $85 trillion, *even* disasters that destroyed “100% of GDP”-worth of property in a year would leave about 85% of the property ok ($85T of damage against $550T of property is only ~15%). I know this would be horrific, but the takeaway is that he’s not comparing the same things – while accusing others of being sloppy.

    Now, he’ll just bluster something like “well, even still, my methodology can’t be any worse than that of *these* criminals and frauds!”.

    And his non-peer-reviewed work (“Gatekeepers! Help! Help! I’m being repressed!”) will go (quasi)viral on social media and everyone will go “wow! I can’t believe the other economists did all these terrible things he said!”🙄

  69. By the way, *if* temperatures really *were* linear with atmospheric CO₂ concentrations, I think it’s the case that we’d expect to begin to see *cooling* were we to reduce emissions by about (just) 50-60% – since natural sinks currently take up roughly half of our emissions, concentrations themselves would begin to fall at that point.

    Alas, were we to *actually consult with climate scientists*, we’d find that this *isn’t* the case.🙄

  70. By the way #2.

    That regression he’s done on temperature and CO₂ is not on the CO₂ concentration *perturbation*, but on the concentration itself.

    Which implies a global temperature at the depths of the last Ice Age about 2.8°C cooler than 1900.

    And he consulted with *climate scientists* about this “well-known” relationship?🙄

  71. Everett F Sargent says:

    The economies-of-scale argument has always been there for solar/wind/EVs. Economies of scale for anything have always been there.

    My favorite? Containerships!

    There are people here who are purposefully ignorant of something that has been known for literally millennia.

    That means at least you, RNS. :/
    https://en.m.wikipedia.org/wiki/Economies_of_scale

  72. Everett F Sargent says:

    The forest-for-the-trees argument suggests that details are irrelevant to the big picture. That means that the uncertainties are much greater than, say, any details of the curve bending between temperature and CO2.

    Grasping at straws here. :/

    And seriously, where are we now, at say 1.1-1.2C above PI, and you are still complaining of damages at only 1.5C above PI. We may for the 1st time pass 1.5C this year. 1.5C game over. :/

    IEA said coal emissions just broke the all-time record for 2022 and will do so again this year and next. :/

    We will never get to net flatline, let alone net one half of current emissions. Therefore, in time, we will pass 2C above PI.

    The word you all should be using is LATENCY. That means that the current system simply can’t adjust fast enough, because the economies of scale can’t possibly be implemented in the timeframe you all want to magically happen, from your living rooms, your armchairs, your talking and your writings.

    Sorry for my boots on the ground thinking.

  73. Oh, so there “are people here” – especially me! – who weren’t aware of Wright’s Law or Moore’s Law or learning curves?

    [“Impressive. Very nice.”, if the American Psycho gif doesn’t work.]

    Can you tell all of “us people” your story about razor blades, Uncle Everett? I’ll bet only the select few with open minds have even ever heard of those learning curves! Can you? Please?

    [Not that anything I said above says I didn’t understand that. You have no idea what I am “purposefully ignorant” of or not.]

  74. Willard says:

    Wright’s Law sure gets the mojo of Mr. Volts flowing. In an old podcast episode, he was touting this paper:

    The projections shown correspond to scenarios with the most aggressive climate policies and highest rates of technological innovation, i.e., those that produce the highest rates of key green technology deployment and the most optimistic cost declines. Nonetheless, their projected costs have been consistently much higher than historical trends. The inset of Figure 3A gives a histogram of all 2,905 projections of the annual rate at which solar PV system investment costs would fall between 2010 and 2020, as reported by nine separate IAM teams in the AMPERE modeling comparison project.41 The mean value of these projected cost reductions was 2.6%, and all were less than 6%. In stark contrast, during this period, solar PV costs actually fell by 15% per year.

    This makes it clear that it would have been a bad idea to treat these projections as conditional forecasts. By contrast, the stochastic experience curve method produces reliable conditional forecasts of known accuracy (and a published forecast of 2020 solar costs, made in 2010 using the deterministic version of Wright’s law, was indeed far more accurate than any of the IAM or IEA projections made at the time). One of our goals in this paper is to illustrate how such forecasts are useful for planning the energy transition. (Note that IAM and IEA projections are better for mature incumbent technologies such as fossil fuels, but their projections for solar PV, wind, batteries, and electrolyzers have systematically underestimated deployment and overestimated costs.).

    Source: https://www.sciencedirect.com/science/article/pii/S254243512200410X

    The upshot is that IAMs are OK for sure things. To guesstimate the potential rewards of long shots, learning curves may be a must. The episode *Learning curves will lead to extremely cheap clean energy* is still available:

    https://www.volts.wtf/p/learning-curves-will-lead-to-extremely

    The page also links to the transcript.
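
    For the curious, Wright’s law itself fits in a few lines. A sketch with made-up numbers – the 20% learning rate is illustrative, not a fitted value:

      import math

      # Wright's law: cost falls by a fixed fraction (the learning rate)
      # for every doubling of cumulative production.
      def wrights_law(c0, cumulative, cum0, learning_rate=0.20):
          doublings = math.log2(cumulative / cum0)
          return c0 * (1.0 - learning_rate) ** doublings

      # Five doublings of cumulative capacity at a 20% learning rate
      # leaves costs at about a third of the starting value:
      print(wrights_law(c0=100.0, cumulative=32.0, cum0=1.0))  # ~32.8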

  75. Everett F Sargent says:

    RNS, you are obviously here saying stuff. I thus conclude things based on things you have said above.

    Oh have a nice day.

    Me getting your attention? I win, 😀

  76. Everett F Sargent says:

    Oh and the Koomey paper is a big nothingburger. Can it address the penetration of renewables and the digging up of mountains of battery acids?

    That at least would have made for a more interesting paper anyways.

    I am of the general opinion that there are always unintended consequences, and those follow an inverse relationship to economies of scale. Meaning? When you do anything that affects the environment globally, expect there to be unintended consequences also at global scales.

  77. Willard says:

    > Me getting your attention? I win

    Glad you admit that you are trolling, Everett.

    Perhaps you could comment on this bit:

    The NOAA disasters damages are measuring actual *damages* – mostly property damages – and expressing them for *context* as a % of global GDP. They’re not an estimate of how much GDP *itself* is damaged.

    Looks like quite a consequential blunder for an economist to make.

  78. Everett F Sargent says:

    Willard,

    So all of a sudden I purportedly am defending economists? Meanwhile, I appear to be the only one here who bluntly wants to discount completely anything that economists have to say about climate change mitigation. Sheesh. :/

  79. Everett F Sargent says:

    Oh and there are economists that you all appear to agree with and economists that you all appear to disagree with. Blame game stuff.

    Meanwhile, I love to hate all economists equally. 😀

  80. Everett F Sargent says:

    To be very clear, I am 100% for mitigation, yesterday’s, today’s and tomorrow’s.

    The problem is that mitigation is not happening fast enough. Perhaps one to two orders of magnitude not fast enough.

  81. Willard says:

    Nay, not to worry, Everett. Your I-fart-in-your-general-direction comments are loud and clear, just as are your Reviewer Two stances on just about every contribution to this blog. I asked you to evaluate an important point Rust made so that you think twice before dismissing those who make an effort to write constructive comments.

    While comment sections are outlets to voice frustration, something gets lost when every topic becomes a sounding board to amplify ultimate despair in humanity. As a drive-by remark, it is fine. As a technique to constantly get attention, it is not.

    Thank you for your understanding.

  82. Everett F Sargent says:

    And you miss my larger point entirely. Time and history are on my side, perpetual wishful thinking not so much. :/

  83. Okay, this is maybe instructive.

    Yes, “Empirically grounded technology forecasts and the energy transition”, Way, et al, 2022, is a *VERY* nice piece of work – *VERY* encouraging to confirm these insights so empirically, *VERY* instructive insofar as identifying which sorts of production processes are prone to steep technology learning curves *AND* how we can perhaps foster more to get on steeper pathways.

    Yes! Impressive! Very nice!

    But! Are people aware that the *SAME AUTHORS*, who mostly work out of the same Oxford Smith School with Myles Allen, point out that they don’t think the main “fast technology-production curve” processes they’ve identified can attack more than about 75% of CO₂ emissions? That we probably *DO* need some sort of backstop technology, which likely *won’t* have this characteristic? And hence *WHY* the point where the marginal cost for *these* sorts of abatement falls below marginal damages is actually *CRITICAL* – *NOT JUST* to getting the bulk of emissions eliminated cheaply (which *is!* *VERY IMPORTANT*) but to getting all the way to (net) ZERO CO₂ emissions?

    Here👇 is a screenshot I took where co-author Matthew Ives was presenting the results of Way, et al., pre-publication.

    Note that the white arrow is *not* intended to connect the point about “carbon capture” to the lower black line labeled “Decisive Transition”.

    Rather, what he was highlighting at this point in the talk was that the encouraging results about the “fast learning technology” in Way, et al., *STILL* left about 25% of current CO₂ emissions (*just* CO₂ emissions!) unaddressed.

    If I recall correctly, at this point in his talk, he was starting to bridge to the next speaker (who happened to be Myles Allen) because they (Way, et al.) had realized their pathways could only get us so far.

    So, when I say “instructive” – not that anyone asked that I *be* instructive!🤷- (and I could add constructive as well)… What I am saying is more along the lines of “Yes, Koomey. Yes, Nordhaus. Yes, Way. Yes, Allen.”, etc.

    All these people are looking at the problem from a different angle and making contributions, and it is bewildering to me the idea that people need to seemingly spend *so* much time attacking everything that has gone before in order to make their (barely) new points.

    And it assumes malfeasance and/or incompetence on the part of others that is simply stunning.🤷 Yet is often brazenly stated by people who clearly aren’t all that familiar with what it is they are attacking.🤷

    [ok, I have to qualify that *I myself* upthread was “attacking” someone’s research, but that person is making a *career* out of personal attacks on others, and his own work is demonstrably *terrible* and invariably riddled with errors. Not so much an actual economist but a combination 🤡 – Ellsworth Toohey – “Subscribe to my Patreon!” huckster who is just asking for it.]

  84. Willard says:

    > All these people are looking at the problem from a different angle and making contributions, and it is bewildering to me the idea that people need to seemingly spend *so* much time attacking everything that has gone before in order to make their (barely) new points.

    Words of wisdom, Rust. Words of wisdom.

  85. Everett F Sargent says:

    That graph is per annum emissions and LULC (most here would already understand that). Should not the stalled transition arrow be pointed at the portion of the green line that flatlines? Where it is pointed now is in no way stalled; in fact, it looks worse (in slope) than BAU (RCP 8.5).

  86. izen says:

    @-W
    “While comment sections are outlets to voice frustration, something gets lost when every topic becomes a sounding board to amplify ultimate despair in humanity.”

    I think this may be a rather glib misreading of the attitude some of us have.
    While we are aware that we will not live long enough to see how this process of climate change plays out, we have lived long enough to recognise the inherent inertia of human societies and economies.
    It is possible to see the advantages of mitigation and the effects of economies of scale, but also to strongly suspect that the rate at which they can actually be achieved has little to do with the mathematical formalism of economic forecasts. Every societal change takes longer, and is less impactful, than predictions describe. Change also always triggers a swing back against it a few decades later, as those who were young before the change become old and powerful enough to want a return to the ‘good old days’.

    It is not an ‘ultimate despair in humanity’ that we express, but the lived experience of distrust of economists and econometricians; a knowledge of the slowness of significant change, and of the cyclic pattern of political action. It is the ultimate inadequacy of human society rather than any deep depression about the fate of humanity.

  87. Willard says:

    > It is the ultimate inadequacy of human society rather than any deep depression about the fate of humanity.

    This is a distinction without a difference, Izen. Whatever interpretation is conferred on that kind of epilogue, it extends beyond economics or econometrics. It has the power to turn every exchange into a fixed point. When that happens, something is amiss.

    AT asked Rust to clarify himself. He obliged. That is all there is to it.

    No more playing the ref, please.

  88. Ben McMillan says:

    The “surprise” of some kinds of radically different technology actually being pretty competitive with (or far cheaper than) the status quo is pretty much a regular feature of new tech of any sort. Exactly the same thing happened with the Montreal Protocol (where the economists also thought the alternatives would be rather expensive). In some sense the whole industrial revolution is predicated on the discovery of a new, cheaper source of energy.

    Basically, “nobody ever comes up with anything competitive with fossil fuel” is a radically pessimistic point of view, but it’s baked into the way conventional IAMs work. The first thing you do is make up a convex mitigation cost curve that makes any move from the status quo expensive, and increasingly so the further you go from the status quo – i.e. assuming the status quo is some kind of (externality-ignoring) optimum, not just a product of historical happenstance.

    If it is in fact cheaper to make a radical, not marginal change, then discussion about marginal costs/damages is a red herring.

    Conventional IAMs inevitably tell you to look for low-hanging fruit, and to delay action as long as possible. But, really, we need pretty much the whole tree in short order. Technology research and deployment (as well as things like sustainable city design and lifestyle change) is a slow game, so moving as soon as possible to capture the center of the board is usually a far superior strategy.

  89. izen says:

    @-BM
    “If it is in fact cheaper to make a radical, not marginal change, then discussion about marginal costs/damages is a red herring.”

    Cost is not the only factor. IAMs make the implicit assumption that the economic form of society will remain. Any changes will be within the current setup.
    Radical technological advances can alter things rapidly; the prime example would be the replacement of horses with cars between 1900 and the 1920s. That also changed the social forms of life, with a radical shift towards consumerism. It derived from the application of assembly-line procedures to vehicles. When the same process was applied to other things, the consumerist society was born.

    To significantly reduce CO2 emissions will require an equally profound change in the way society and its economic basis are set up. While this could come from a technological breakthrough such as cheap room-temperature superconductors, it would still require a significant change in the structure of society.
    Whatever future path is followed, humanity will survive, if only as scattered bands of hunter-gatherers or local farming groups.
    It is the social and economic patterns that will have to radically change, either as a response to serious global warming, or in mitigation of it.

  90. Ben McMillan says:

    Izen: Vanilla DICE assumes a ‘benevolent world dictator’ who ‘optimises welfare’ (in my view, incompetently) so I guess I would argue this is already rather different to the current state of affairs.

    Over the course of the last 100 years, and across different nations, there have been pretty wide variations in the way societies and economies have been organised. I think I’d argue that some of these have been much more successful than others in providing and promoting sustainable lifestyles and economies so maybe the change needed is not that radical.

    For example, in general, EU nations have much lower footprints in various ways than Australia. But even within Australia, (state-vs-state, dense inner suburbs vs. sprawl) there are big variations in personal impact, car-dependence, and how extravagantly energy is used. I guess I feel like the economic system is not so radically different in, say, Germany, compared to Australia; these are both, roughly speaking, capitalist economies but with large public sectors. And that this differs substantially even within a country is an indication that there are other levers than the overall economic system.

    South Australia has, for example, rapidly gone to a dominantly renewable electricity system, whereas Queensland is still dominated by coal. I agree that this has little to do with cost, but more about attitudes and politics. But are these societies/economies really so radically different?

  91. Nathan says:

    “South Australia has, for example, rapidly gone to a dominantly renewable electricity system, whereas Queensland is still dominated by coal. I agree that this has little to do with cost, but more about attitudes and politics. But are these societies/economies really so radically different?”

    NewsCorp… Our political system is heavily influenced by Murdoch media. In 2010, the Labor Party (yes, they spell it wrong) won Government with the Greens holding the balance of power. That Government (with the Greens) introduced an emissions trading scheme and set about attempting to develop renewables as alternatives. It worked for the couple of years it was in play.
    The opposition and the media (almost all Murdoch-owned) responded with a campaign to ‘Axe the Tax’ (the Emissions Trading Scheme).
    The Labor Party lost the next election, and only just won power back last year. They won’t do it again, because of the influence of the media.

  92. Ben McMillan says:

    I guess there is a mix of positive and negative news about Australia: like Norway, some internal dynamics are positive for climate, but they continue to expand fossil fuel exports.

    For example, the current federal government have committed to an 82% renewable electricity target by 2030, and several states/territories have banned gas connections in new houses.

  93. Off topic a bit but: “The study, which asked 2,000 Norwegian adults how they felt about the climate crisis, found the link to activism was seven times stronger for anger than it was for hope. The effects were smaller for other actions, but fear and guilt were the best predictors of policy support, while sadness, fear and hope were the best predictors of behavioural change.

    On average, people reported having fairly mild feelings about the planet heating.”

    https://www.theguardian.com/environment/2023/aug/21/anger-is-most-powerful-emotion-by-far-for-spurring-climate-action-study-finds?utm_term=64e48e2a1dd57181ed0a82b5917976a1&utm_campaign=USMorningBriefing&utm_source=esp&utm_medium=Email&CMP=usbriefing_email

    I think that’s right. People, on average, have fairly mild feelings about global warming. That changes rapidly if you are on vacation in Lahaina on the wrong day, but, yes: fairly mild feelings generally.

    Here in the PNW we have a bit of cooling onshore flow with a side of smoke. Overall not too bad. I had a crazy idea and decided I wanted an outside shower under all my grapes this summer. I was thinking of something rather basic, a shower head with hot and cold water, but I ended up building a cedar shower room with two seats. All out of native cut western red cedar. Most of it is slab leftovers with live edges put together shingle style. It’s fabulous. I have to lean the ladder against it today and start harvesting grapes. The vines are full and the grapes are ripe.

    Wishing fairly warm thoughts to you all,

    Mike

  94. JP says:

    Further to my last post asking how long a trend needs to be for it to be considered significant: would you do such an analysis by taking a trend prior to the change and one after, encompassing the entire period, and if the slope is different then it would signify a true change? But wouldn’t any change in the trend line, even if short, always result in a change in the overall trend, even if by a very small amount? I think I’m confusing myself. Better to wait for knowledgeable people to chime in.

  95. JP,
    Again, it depends on the uncertainty/significance. Let’s imagine you take the period prior to the change and estimate a trend of x_1 \pm y_1, and then take the trend over the whole period and get x_2 \pm y_2. If x_2 < x_1, then one might conclude that there has been a change if x_2 + y_2 < x_1 - y_1 (i.e., the trends plus/minus their uncertainties don't overlap). If not, then you would probably conclude that the trend over the full time period is still statistically consistent with the earlier trend.
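
    If it helps, here is a minimal sketch of that comparison in code, using synthetic data (and note that ignoring autocorrelation, as this does, understates the real uncertainties):

      import numpy as np
      from scipy import stats

      def trend(t, y, nsigma=2):
          # OLS trend and nsigma uncertainty (autocorrelation ignored).
          res = stats.linregress(t, y)
          return res.slope, nsigma * res.stderr

      rng = np.random.default_rng(0)
      t = np.arange(1970, 2024)
      y = 0.018 * (t - t[0]) + rng.normal(0.0, 0.1, t.size)   # steady trend + noise

      x1, y1 = trend(t[t < 1998], y[t < 1998])   # trend prior to the putative change
      x2, y2 = trend(t, y)                       # trend over the whole period
      consistent = not (x2 + y2 < x1 - y1 or x1 + y1 < x2 - y2)
      print(f"early {x1:.4f} +/- {y1:.4f}, full {x2:.4f} +/- {y2:.4f}, consistent: {consistent}")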

  96. Bob Loblaw says:

    I followed up JP’s question on the other thread; I’ll follow up here, too.

    Fitting a linear regression has the obvious assumption that there is indeed a linear aspect to the data. The slope of the regression is only an estimate of that true linear slope, though – and the uncertainty in the regression coefficient tells you how far off the true slope the estimate is likely to be. Just like calculating a mean from a sample gives you an estimate of the mean.

    And, like calculating a mean – where a larger sample will reduce the uncertainty in the estimate of the mean – a larger sample in the regression (more time) should also reduce the uncertainty in the estimate of the slope. Assuming, of course, that the true slope is not changing.

    There are ways to do “change point analysis”. Some of this is discussed here at ATTP, in the comments to Ken’s update on the Skeptical Science Escalator:

    The Escalator

    If you see many change points through visual analysis, you end up with a complex “statistical fit” that actually has very little physical meaning. Willis Eschenbach does a masterful job of botching it royally on that thread. I refer you in particular to my rebuttal of Willis’ change point analysis in this comment on that thread.

    [Source:] https://andthentheresphysics.wordpress.com/2023/02/02/the-escalator/#comment-215359

  97. Bob Loblaw says:

    [Fixed. Just add something at the beginning of the line to prevent WP from interpreting your links. -W]

  98. Joshua says:

    I had a similar question seven years ago at Judith’s blog (which I stumbled upon again after someone led me back to some stuff she posted that she’s never been held accountable for).

    If a tree falls in the forest….

    If the “pause” ends, and is subsumed by a longer-term trend of increasing SATs, is it really a pause? Or was it merely noise within a longer-term signal?

    IMO, what makes sense is to look at the longer term (post-industrial) trend, in terms of a number of metrics (SLR, OHC, SATs, etc). To the extent that the “pause” in SATs (as just one metric) was/is meaningful, it will show up in the longer-term trend by virtue of reducing the rate of increase in the longer-term trend.

    End of the satellite data warming pause?

    I never got a great response there… and your responses here have been a bit too mathy for me to follow. Could you try again in a less mathy way?

  99. Joshua says:

    Also, did “da pawz” change the longer term trend? Is it too early to tell? If it’s not too early and it didn’t change (lower) the trend, then was it really a pawz?

  100. Bob Loblaw says:

    “… your responses here have been a bit too mathy for me to follow. Could you try again in a less mathy way?”

    Who is “you” in this sentence, and exactly which responses are you referring to?

  101. Joshua,

    Could you try again in a less mathy way?

    I’ll try :-). If you have a noisy dataset, then it’s probably not possible to tell, initially, if there has been some change in trend because you won’t know if it’s simply variability, or real. However, as we saw with the “pause”, if you wait long enough then either the long-term trend will end up consistent with the trend prior to the period of the trend change, or the long-term trend will actually change, indicating that there was a change in trend (i.e., the trend for the full time series differs from that of the trend prior to the period of the change).

  102. Joshua says:

    Bob –

    Both your answer and Anders’.

    So his less mathy version seems to answer my first question – that my intuition was reasonably close.

    And so then the remaining question is whether “da pawz” that got “skeptics” so excited was just noise, or whether it’s too soon to say (in a mathy consistent way).

  103. Joshua,
    I’m not sure that we’ll ever really “know” what happened during the period called the “pause”. When you look at ocean heat content, the climate system clearly continued to accrue energy at a rate that seemed largely unchanged. However, only a few percent of the energy being accrued heats the surface and lower atmosphere. Hence, it’s quite possible that internal variability produced a period where this region did warm slower than it had been. However, it seems clear that if this was the case, it was short-lived and has had little impact on the long-term trend. If anything, the long-term trend appears to be higher now than it was before the period referred to as the “pause”.

  104. Joshua says:

    Anders –

    Seemed to me that the difference between this putative “pause” and what could fairly be called noise was always in question – because (as you’ve pointed out in the past) there was no accompanying mechanistic explanation for how the ongoing processes of global warming had “paused.”

    I think in the real world, usually when we think something has paused, it’s distinguishable from noise.

    Is it possible that it was really a pause but we just haven’t found the mechanism? I guess. But I think if someone wants to call it a “pause” then they need to come up with an explanation for why it’s not just “noise”.

    That’s particularly true when people like Judith didn’t merely just say that it was a “pause” in one relatively minor metric, but went before Congress to equate a temporary slowdown in a long-term trend of increase in one relatively minor metric to a “pause in global warming.”

  105. Noise and natural variability seem to fit in the same category. We need big data to talk about trends. Cherry-picking and small data sets are a great way to lead folks astray, and an easy way to spot bad-faith actors. Maybe we can educate stupid, but I can’t think of anything to do with bad faith except ignore them or block them.

  106. Bob Loblaw says:

    Joshua: The most enjoyable answer: “It depends”.

    Noise, how much data is needed to detect a trend, or a change in trend, etc. are not characteristics of statistics. They are characteristics of the data. The statistics are only trying to describe the data. The question is “how much data do I need with this data set?”, not “how much data do I need with this test?”

    Let’s take a simple example. The proverbial two-sided coin. A “fair” coin with heads on one side and tails on the other should fall 50:50 heads or tails over a long period of time. Can we test for fairness? How many coin flips do we need to do to be (almost) certain the coin is not fair?

    If the coin is rigged so that it falls 90:10, then it is going to be pretty clear that it is not fair after a few dozen tosses. That big an effect is easy to see. If the coin is rigged to 55:45, then it is going to take a lot more flips to see that 55:45 ratio with confidence. Even with a fair coin, the 55:45 ratio may show up for a short time.

    Now, let’s take the case where you know the coin is not fair, and a long sequence of flips has demonstrated the 55:45 ratio pretty strongly. Then someone says “I think I fixed the coin – it’s now 50:50”, but only has a few flips showing the 50:50 ratio. Is the coin really fixed?

    First of all, you don’t test the recent short sequence against 50:50. You are asking “did we fix it?”, which means “did we change its behaviour from 55:45?”. So, the test you need to do is whether the recent period is different from the 55:45 ratio that is “expected” from the long-term (observed) ratio. After all, what you did to fix it may have actually made it worse, not better. Or maybe you over-corrected and made it 45:55. You may need a lot of flips to see if the trend has changed. If it is now 45:55, you can probably see that as a significant change with fewer flips than you would need if you are testing to see if it is 50:50.

    Now, if the altered coin was doing 90:10, then it won’t take long to see if it has changed to 50:50. On the other hand, if you only “fixed” it to 85:15, then detecting that will take more flips.
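
    If you want to put rough numbers on “a few dozen” versus “a lot more flips”, a textbook normal-approximation power calculation will do (a sketch, not a substitute for a proper test):

      from scipy import stats

      def flips_needed(p_true, p_null=0.5, alpha=0.05, power=0.95):
          # Approximate number of flips for a binomial test to detect
          # that a coin with true heads-rate p_true differs from p_null.
          z_a = stats.norm.ppf(1.0 - alpha / 2.0)
          z_b = stats.norm.ppf(power)
          num = (z_a * (p_null * (1 - p_null)) ** 0.5 +
                 z_b * (p_true * (1 - p_true)) ** 0.5) ** 2
          return round(num / (p_true - p_null) ** 2)

      print(flips_needed(0.90))   # ~14 flips for a 90:10 coin
      print(flips_needed(0.55))   # ~1300 flips for a 55:45 coin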

    Does that help?

  107. Joshua says:

    Bob –

    Sure, I think that helps with the more generic question. I’ll try paraphrasing: There’s no universal answer and the difference between noise and a trend depends on the attributes of the data (and I’d add maybe other relevant aspects of context?).

    So then getting back to the matter at hand.

    What could we have said 20 years ago, with the data we had at that time (and other aspects of context known at that time), that would have set up the conditions to allow us to distinguish the putative subsequent “pause” from noise? And would those conditions be something that we could assess at this point in time, or would we need to wait longer?

    It seemed to me that even talking about a “pause” was ceding an (un-earned) language misdirection to “septics.” Not unlike talking about “lockdowns.”

    The more I think about it, the escalator seems to really nail it.

  108. Bob Loblaw says:

    OK. Let’s look at that “noise” question.

    By one definition, “noise” is just “what we have left over after we look at the trend”. In statistical terms, we have the “residuals” from the regression. The part that the trend does not “explain” (in a statistical sense).

    That does not mean that we can’t explain it at all. For this paper, I might want to talk about the trend, but the residuals are where we’ll probably find all the interesting stuff that might lead to the next paper. That does not mean that the trend is not important, though.

    Or, I might already know what sort of things are causing non-linear effects. In global temperatures, we get El Nino cycles, volcanoes, solar variation, etc. And we know that these will cause departures from a smooth increase in global temperature.

    Twenty years ago? Not sure. Twelve years ago? We knew (Foster and Rahmstorf) that if you account for ENSO and volcanic effects, the linear trend is much clearer: Tamino’s blog presents this, with links to the paper.

    The Real Global Warming Signal

    Not all “noise” is random, and not all “noise” is inexplicable. Could this have been communicated better at the time? Maybe, but certain parties will jump on any out-of-context wording they can to spread FUD.

    The one very predictable item is that each step in the escalator will see screenshots like this one popping up all over denialville.

  109. Joshua says:

    It always seemed illogical to me to say that

    (1) You accept unequivocally that adding ACO2 adds heat to the climate and

    (2) You don’t doubt that “natural variability” notwithstanding, ACO2 increases the global temperature above the background climate dynamics and

    (3) you’re only questioning whether the IPCC overestimates the magnitude of that increase yet

    (4) you think there’s been a “pause” in global warming, despite that ACO2 is increasing.

  110. verytallguy says:

    Joshua, re “the pause” and did it affect the long term trend.

    Taking trend from 1970 as “long term”, and “the pause” as being from 1998…

    https://www.woodfortrees.org/plot/gistemp/plot/gistemp/from:1970/to:1998/trend/plot/gistemp/from:1970/to:2022/trend

    What do you think?

  111. Willard says:

    The argument is only made complete by appealing to ignorance:

    (5) Something we don’t know might be at play.

    (6) This implies we are not so sure about our current understanding of climate.

    (7) Until we have direct evidence, anything goes.

    It’s like hearing a virtual agent crash into pieces in the next room right after you heard two kids fight. You can’t say which kid broke the agent. You can’t even be sure that a kid broke it. All you heard is kids fighting, then a crash.

    Could be the cat. Could be a hacker who made your agent self destruct.

    Lots of theories.

    As a point of clarification, let’s bear in mind that contrarians could deny Da Paws in the acceleration of warming without denying the warming itself. There could be an increase without an increase in the increase itself.

    It’s all simpler to express with a graph. Take the S&P from 1950 to 2016:

    https://upload.wikimedia.org/wikipedia/commons/c/c0/S_and_P_500_daily_linear_chart_1950_to_2016.png?20160221032042

    The overall trend (and its acceleration) ought to be clear for everyone including permabears. Tech bros ought also to concede that there were many downtrends inside this uptrend. Take the two peaks from 2000 and 2008, right before well known market crashes. Connect them together: you got yourself a Paws.

    The main difference between reading this graph and a climate chart is the underlying knowledge. We know that markets are (more or less) random walks. The secular trend may be temporary. We don’t know if stocks will continue to perform the way they do. But we sure know that dumping CO2 in the atmosphere like there’s no tomorrow has an effect.

  112. verytallguy says:

    Willard, re. the graph. My bold.

    “…the historical data for whaling tell us that an exponential rise of the prices is not the only feature of the post-peak market. The prominent feature is, rather, the presence of very strong price oscillations. We can attribute these oscillations to a general characteristic of systems dominated by feedback and time delays. Prices are supposed to mediate between offer and demand, but tend to overcorrect on one side or another. The result is a succession of demand destruction (high prices) and offer destruction (low prices)”

    http://theoildrum.com/node/3960

  113. JP says:

    Thanks Anders, Bob, and everyone who tried to help me understand. It’s going to take a little longer for me to digest all of it; I’m starting from a very low level of math understanding. Anders talked about 2 sigma or 3 sigma; learning what that is is next on my “to do” list. Thank god for the internet. I also followed the link to the “escalator” thread _ very interesting discussion. Again, thanks. Love your blog. I also love Tamino’s. Whenever I visit his or yours, I feel my IQ rising; then I venture into places where deniers congregate, and after reading a few comments my IQ drops again. Hahahaha.

  114. Bob Loblaw says:

    …then I venture into places where deniers congregate, and after reading a few comments my IQ drops again.

    There is a cure for that. Hopefully it is just a temporary Paws. A hiatus, so to speak.

  115. Joshua says:

    Speaking of the difficulty in finding optimal pathways:

  116. JP says:

    “There’s a cure for that.” Don’t go to those places? Yeah, I know; it’s a case of curiosity not killing the cat, but killing brain cells. And then if you start engaging with any of them, that’s something else; if you had any hair on your head to start with, by the end of the engagement there’s a good chance there’s no hair left after you’ve finished tearing it out in frustration.

  117. Ken Fabian says:

    Joshua, sulfate aerosol cooling appears to present a dilemma for rapid emissions reductions but I think that is illusory.

    As a simple thought experiment I considered the global temperature impacts from a hypothetical big lot of coal burning starting at once. A near-immediate cooling effect would start and reach its maximum quickly – days to weeks. The warming from enhanced greenhouse would start at zero and progressively rise. At some point the warming would equal the cooling and the combined effect would be zero. Beyond that the warming influence exceeds the cooling effect. Now stop the coal burning and there is a near-immediate loss of the cooling effect, with rapid warming equal to it. The enhanced greenhouse stops warming further but doesn’t go away.

    It seems clear that sustaining the aerosol production can’t keep up over the long run, as attempting to do so (with more and dirtier fossil fuel burning) increases the enhanced greenhouse. Warming will “win” in any scenario but one where the coal burning stops early enough, before the warming rises enough to equal the cooling. Sustaining the fossil fuel burning so that the aerosols suppress the warming will just result in more warming.
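
    A toy version of that thought experiment, with purely illustrative numbers:

      import numpy as np

      # Aerosol cooling is fast but fixed while burning continues;
      # CO2 warming accumulates and persists after burning stops.
      years = np.arange(80)
      burning = years < 40                        # burn coal for 40 years, then stop
      aerosol = np.where(burning, -0.5, 0.0)      # near-immediate cooling, gone at once
      co2 = np.cumsum(burning) * 0.03             # warming ratchets up and stays

      net = co2 + aerosol
      print(f"year 5: {net[5]:+.2f} C, year 39: {net[39]:+.2f} C, year 41: {net[41]:+.2f} C")
      # Net cooling early on; warming eventually wins; and stopping abruptly
      # unmasks the accumulated greenhouse warming almost immediately.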

  118. Chubbs says:

    One argument from skeptics we don’t hear much anymore: “scientists are overemphasizing aerosol effects and thereby overemphasizing climate sensitivity”. One of the many areas of uncertainty that isn’t turning out better than expected.

  119. Joshua says:

    Ken –

    I don’t understand this:

    > The warming from enhanced greenhouse would start at zero and progressively rise.

    Isn’t the point that where we’re starting (with the reduction in pollution) is where we are now, which is not zero? So any loss in cooling from aerosol reduction is effectively warming that gets added on to the increased warming from increased emissions?

  120. Joshua,
    I think Ken was presenting a hypothetical in which there was initially no warming and then we started burning coal and emitting CO2 and aerosols.

  121. Joshua says:

    Well, I doubt anyone would suggest that we keep polluting so as to minimize warming, but it does highlight how complicated all the tradeoffs are. Unfortunately, our policy systems get more dysfunctional as the tradeoffs get more complex.

  122. russellseitz says:

    Back when The Energy Crisis was a thing, sulfur emissions were the main argument for not switching to coal-fired thermal plants.

  123. Susan Anderson says:

    Last year, per the IMF, subsidies for oil, coal, and natural gas totaled $7 trillion—about $13 million per minute

    If Corporations Bore the True Cost of Their Emissions, They’d Owe Trillions: And that doesn’t even include “downstream” carbon spewing. https://www.motherjones.com/environment/2023/08/corporate-subsidies-true-cost-carbon-emissions-science-study/ – also, Climate Desk collaboration, https://www.climatedesk.org/

  124. To Susan: I know that seems like a lot of money, but I am told that it is crucial to supporting our way of life and defending the important infrastructure that keeps our lights on. Both political parties seem to agree on this, even though they don’t agree on much else, so this must be true.

    It seems wrong to me, but I don’t know much.

    I did recently listen to this podcast and wondered about our future. The good news is that climate warming is not our fault. The bad news is that it could still create a bit of discomfort. Anton thinks he will need to get a bigger fan.

  125. Steven Mosher says:

    The other is that I don’t think any model (scientific, or economic) should define what we should do. They can certainly provide useful information, but any decisions that are made should also be influenced by our values and what we think is right and wrong, which cannot – in my opinion – be determined by an economic analysis alone.

    Unless you have a closed-form solution, the chances of happening on the optimal pathway, and not merely some local minimum, are next to nil.
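
    A toy illustration of the local-minima problem – the bumpy “cost landscape” below is made up, purely to show that a local optimiser ends up in whichever valley it starts near:

      import numpy as np
      from scipy.optimize import minimize

      def cost(x):
          # A bumpy cost landscape with many local minima.
          x = float(np.ravel(x)[0])
          return 0.1 * x ** 2 + np.sin(3.0 * x)

      for x0 in (-3.0, 0.0, 4.0):
          res = minimize(cost, x0)
          print(f"start {x0:+.1f} -> x = {res.x[0]:+.2f}, cost = {res.fun:.3f}")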

  126. Steven Mosher says:

    If Corporations Bore the True Cost of Their Emissions, They’d Owe Trillions: And that doesn’t even include “downstream” carbon spewing.

    please, this kind of logic merely prevents finding a solution.

  127. Dan Hughes says:

    “If Corporations Bore the True Cost of Their Emissions, They’d Owe Trillions:”

    Corporations do not have, never have had, and never will have trillions. Corporations do not print money. Corporations’ sole source of income is their customers.

  128. Steven Mosher says:

    sbm.
    https://www.iisd.org/articles/press-release/world-governments-hit-record-high-usd-17-trillion-fossil-fuel-support

    Getting trillions in support does not mean they have trillions.

    Jesus, my ex-wife got millions in support and she’s broke.

    Simple fact: you can’t get blood from a turnip.

    Exxon:

    Balance sheets are easy; my pops taught me how to read them when I bought my first stock.

    https://finance.yahoo.com/quote/XOM/balance-sheet/

    https://finance.yahoo.com/quote/BP/balance-sheet/

    Let me explain:
    the figures are in thousands, so they have 100s of billions in assets –
    think “things worth money”.
    They have 100s of billions in liabilities –
    think “bills and debts they owe”.
    Net, they have 100 billion or so.

    When it’s all said and done, if you met BP in a dark alley and stole all their property,

    you’d end up with maybe 100 billion,
    which you might be able to liquidate for 10 cents on the dollar.

    who is going to buy the assets of a busted business?

  129. Steven Mosher says:

    2 April 2022, London – Just 20 of the world’s biggest oil and gas companies, including the likes of Shell, Exxon and Gazprom, are projected to spend $932 billion by the end of 2030 developing new oil and gas fields, according to new analysis of Rystad Energy data by Global Witness and Oil Change International.

    so 20 companies over roughly 6 years = $932 billion total,

    i.e. they spend roughly $8 billion apiece per year on exploring.

    And by the end of 2040, this figure grows to an even more staggering $1.5 trillion,

    over the 16 years 2024-2040, it’s $1.5 trillion, averaging roughly $5 billion per company per year.

    Hint: this $1.5 trillion is not available to you today.

    https://finance.yahoo.com/quote/BP/cash-flow/

  130. Willard says:

    > who is going to buy the assets of a busted business?

    We call them vultures.

  131. russellseitz says:

    Vultures, says your link, strive:

    ” to identify assets that have been irrationally oversold below fundamental value, or where a positive turnaround is predicted.”

    Reserve values are volatile.

    If energy storage thrives, the Seven Crones’ cash-cow herd will thin, and cost projections will strand expensive deep oil plays from China to the Rockies.

  132. Willard says:

    That is the model: buy the mines (or assets) for cheap from a company in restructuring, thereby escaping health, pension, and environmental obligations; take out huge loans to keep the mines going; pay yourself and your executives handsomely from those loans; and then, when the mine goes under anyway, pay yourself additional bonuses for “managing” your own bankruptcy and walk away richer than you started.

    https://www.vox.com/energy-and-environment/2019/7/9/20684815/coal-wyoming-bankruptcy-blackjewel-appalachia

    Something similar happened in the UK in the 80’s. The “success” rate of these recipes isn’t exactly high. Buying distressed assets carries lots of risk, risk that sometimes becomes reality. As long as there are markets, there will be vultures. The Public needs them the same way nature does: it gets to deal with everything else that remains.
