I came across an interesting paper by Frances Moore and colleagues that considers [d]eterminants of emissions pathways in the coupled climate–social system. In the context of climate science, models that consider both the climate and society tend not to be coupled. For example, global climate models will use emission, or concentration, pathways as input, but these will be pre-defined and will not be influenced by the resulting climate change. Similarly, economic models might use a simple climate model to estimate damages, or to do some cost-benefit analysis, but the latter essentially determines the optimal pathway and doesn’t really self-consistently couple the climate-social system.
This new paper seems to be the first, or one of the first, that couples the climate-social system. There are quite a large number of factors, but essentially the model estimates the response to various factors and how that might then influence emissions, and – consequently – climate change. For example, as alternatives become cheaper, their uptake will increase. Similarly, as social norms change, this might influence people’s behaviour in ways that influence emissions. There could be the implementation of policies and laws that will also have an influence. Additionally, our perception of climate change might also directly influence people’s behaviour and the implementation of new policy/laws.
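To make that feedback structure concrete, here is a deliberately toy sketch of the kind of two-way coupling being described: warming shapes policy, policy curbs emissions, and emissions drive warming. This is not the paper’s actual model; every parameter value below is invented purely for illustration.

```python
# Toy coupled climate-social loop. This is NOT the Moore et al. model;
# all numbers are invented to illustrate the feedback structure only.

emissions = 40.0     # GtCO2/yr, roughly current global emissions
warming = 1.2        # degC above pre-industrial, approximate
policy = 0.0         # abstract policy-stringency index

TCRE = 0.00045       # degC of warming per GtCO2 emitted (approximate)
POLICY_GAIN = 0.05   # how strongly perceived warming drives policy (invented)
ABATEMENT = 0.02     # fractional emissions cut per unit of policy (invented)
GROWTH = 0.01        # baseline emissions growth absent policy (invented)

for year in range(2022, 2101):
    warming += TCRE * emissions                      # emissions drive warming
    policy += POLICY_GAIN * warming                  # perceived warming drives policy
    emissions *= (1 + GROWTH - ABATEMENT * policy)   # policy curbs emissions
    emissions = max(emissions, 0.0)

print(f"2100: warming ~ {warming:.2f} C, emissions ~ {emissions:.1f} GtCO2/yr")
```

Even in this crude sketch, emissions rise, peak, and then decline as accumulating policy outweighs baseline growth, which is the qualitative shape the paper’s clusters display.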
They then run a large suite of models, sampling the various parameters, to produce a set of outputs that they then group into categories. The main results are shown in the figure below.
The basic result is that a large fraction of their model runs suggest emissions will peak in about 2030, and then fall sharply, leading to warming of about 2.3°C by 2100. There are also some where emissions fall more sharply and warming is closer to 2°C, and others where emission reductions are delayed and warming is closer to 3°C. Overall, most of their models suggest warming of between 2°C and 3°C, but the overall range is from 1.8°C to 3.6°C.
This, however, doesn’t include the full range of climate sensitivity and other possible climate feedbacks, so it can’t quite rule out warming above 3.6°C. However, this does seem to be another paper suggesting that the most likely trajectories lead to warming of between 2°C and 3°C, but that we can’t yet rule out that warming could be kept below 2°C, or that it might exceed 3°C. In some sense this is positive (we can still limit warming to below 2°C) but also somewhat concerning (we can still follow a trajectory that could lead to > 3°C of warming).
From a modelling perspective, this does seem very interesting. The model relies on a large number of parameters that may not be easy to precisely define, but it is still good to see attempts to develop coupled models that try to self-consistently determine the evolution of the climate-social system.
Determinants of emissions pathways in the coupled climate–social system, Moore et al. (2022), Nature.
I wonder if this model does the human part regionally and accounts for technology cost curves.
Why is there so much overlap between science denial propaganda and attacking renewables? They even smeared renewables in individual countries, like why do they care if the UK with ~1% of global population builds some wind farms? I think it’s because every bit of renewable deployment makes them cheaper for others, so the next state will think “actually it’s not that expensive now, so we’ll do something”, and eventually “why are we paying so much more for fossil fuels when a renewable grid is cheaper?”
Effectively, we’ve *already* seen the effect of “delayed recognition”, and the amount of future warming and therefore human suffering is already far higher than it needed to be.
As I understand it, they do include a reinforcing feedback where “small initial deployments, possibly driven by subsidies or regulatory requirements, lower costs and enable further deployment.” So, I think they are trying to include how cost reductions would then lead to further implementation.
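That deployment–cost feedback is often described by a learning curve (Wright’s law), where unit cost falls by a fixed fraction for every doubling of cumulative deployment. A minimal sketch, with illustrative parameter values not taken from the paper:

```python
import math

def wright_cost(cumulative, c0=100.0, x0=1.0, learning_rate=0.20):
    """Unit cost under Wright's law: cost falls by `learning_rate`
    (here 20%) for every doubling of cumulative deployment.
    All parameter values are illustrative, not from the paper."""
    b = math.log2(1.0 - learning_rate)   # negative experience exponent
    return c0 * (cumulative / x0) ** b

# Each doubling of cumulative deployment cuts cost by 20%:
for doublings in range(5):
    x = 2 ** doublings
    print(f"cumulative {x:2d}x -> unit cost {wright_cost(x):6.2f}")
```

With a 20% learning rate, cost falls from 100 to 80 after one doubling and to 64 after two, which is why early (even subsidised) deployment can make later deployment self-sustaining.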
I am looking at the graph and I think it suggests the following:
Emissions peak around 2030, maybe a little before, on the Aggressive Action path.
Emissions peak around 2035 on the modal path.
Emissions peak around 2045 on the technical challenges path.
Emissions peak around 2050 on the delayed recognition path.
Emissions peak around 2090 on the little and late path.
Does that seem like an accurate understanding of the various pathways in the graph?
Yes, that’s probably a fair description of the figure. However, it is worth bearing in mind that the paper highlights that 48% of the runs had emissions peaking in the 2030s and then declining, to give warming of about 2.3C by 2100.
Just a preliminary gut-response prior to reading the paper. (Glad to see it is freely available!) It will be interesting to see how my attitude changes after reading it.
I’m reminded of Hari Seldon, a character in Isaac Asimov’s “Foundation” series, who founded, developed and refined an entire field of study with respect to predicting the behavior of regions and subregions of populations in these fictional stories.
While I completely agree that the domain of human social and personal behavior is a very important feedback factor in the domains of Earth, Sol, Air, Water, Ice, and Life (and all of their interacting interfaces, which are as important as the domains themselves), I frankly don’t believe we have the interwoven fabric of science and experimental results, woven well into the existing base of science, needed to have much confidence here. I have, however, long wished for something like this to be included into the fabric, so that the very important human social behavior could be reasonably applied, as it must be if one is to gain a fuller view of how the important interactions and feedbacks will play out.
Thanks for the catch! I’ll read it. But I’m initially very skeptical, because I haven’t seen the development of a serious science regarding human behavior and social systems, one that has developed a clear set of theories tested through extensive experimental results. So I question it. Is this just tea-leaf reading, completely disconnected from the unified tapestry of science as a whole and outside of it? Or is this the result of applying very good theory, bolstered well by experimental results, and brought into the field of climate science? I doubt it. But I don’t know, either. So I really do appreciate the chance to read this!
Indeed, I think there are reasons to be skeptical. I would regard this as a very interesting thing to try, but I would also regard it as something very difficult to model. They do use various other studies to inform their model parametrisations, but this is still one of the first papers to do this, so I would be very cautious of accepting their results.
On the other hand, it is another study suggesting that we’re probably heading for somewhere between 2C and 3C and has a somewhat different methodology to some of the other studies.
do you think these predictions/models that show 2 to 3C include the rebound in heating that will happen as we reduce our particulate pollution cooling? I have seen suggestions that the particulate pollution cooling is in the 0.3 to 0.5C range at this moment in time. I have also seen some speculation that our current accumulated warming includes several tenths of a degree that happened rather recently thanks to changes in emissions and fossil fuel use that have reduced our level of particulate pollution enough to produce that amount of rebound heating. (Hansen’s Faustian bargain)
suggest edit for accuracy: “The basic result is that a large fraction of their model runs suggest emissions will peak in about 2030” probably should read “in the 2030s” rather than “about 2030”.
You’re right, it’s in the 2030s, rather than in about 2030.
As I understand it, this should be included, since they should be considering all of the emissions (i.e., CO2, methane, aerosols, etc) but I’m not completely sure.
This is a bit of a tangent – although it does relate to the climate sensitivity – but the 2020 sensitivity paper by Sherwood, et al., An Assessment of Earth’s Climate Sensitivity Using Multiple Lines of Evidence, which cut off the high-end of likely climate sensitivity (as did, largely consequently, the IPCC AR6 WGI)…
Has that had any knock-on in terms of constraining the aerosol sensitivity?
As in, if the aerosol masking is high, it implies that the effective climate sensitivity (S) would also have to be high to explain observed temperatures, etc. But, conversely, since if the high-end for estimated S is constrained, presumably so must be the aerosol forcing, no?
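The reasoning here can be sketched with simple energy-balance arithmetic: an effective sensitivity S ≈ F_2x·ΔT/(ΔF − ΔN), where ΔF includes the (negative) aerosol forcing, so stronger aerosol masking implies a higher inferred S, and constraining one constrains the other. All numbers below are round illustrative values, not assessed estimates:

```python
# Toy energy-balance estimate of effective climate sensitivity,
# illustrating the aerosol-forcing / sensitivity link.
# All values are round illustrative numbers, not assessed estimates.

F2X = 3.9      # forcing from doubled CO2 (W/m^2), approximate
DT = 1.1       # observed warming since pre-industrial (degC), approximate
F_GHG = 3.0    # greenhouse-gas forcing (W/m^2), approximate
N = 0.8        # planetary energy imbalance (W/m^2), approximate

def sensitivity(f_aerosol):
    """Effective sensitivity S = F_2x * dT / (dF - dN) for a given
    (negative) aerosol forcing."""
    f_total = F_GHG + f_aerosol      # net forcing including aerosol masking
    return F2X * DT / (f_total - N)

for f_aer in (-0.4, -0.8, -1.2):
    print(f"aerosol forcing {f_aer:+.1f} W/m^2 -> S ~ {sensitivity(f_aer):.1f} C")
```

Running this shows S rising from roughly 2.4°C to roughly 4.3°C as the assumed aerosol forcing strengthens from −0.4 to −1.2 W/m², which is the symmetry the comment is pointing at: capping the high end of S also disfavours very strong aerosol masking.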
Have there been new consensus estimates/research for aerosol sensitivity since Sherwood, et al., 2020? (I’m assuming any would not have been published in time for the IPCC AR6 WGI cutoff?)
And although the “but Hansen!” drumbeat is that there’s been a significant acceleration in global surface temperatures due to a fall in aerosols, isn’t there a bit of a diminishing return? As in, if aerosols are falling, you can’t get them to fall the same amount twice, only further?
I am not as up to speed on aerosols as I should be, but there just seem to me to be some inconsistent asymmetries in how “but aerosols!” gets trotted out.
As far as I can tell from AR6, they’ve slightly narrowed the range of aerosol forcing and slightly reduced the best estimate, when compared to AR5. However, it doesn’t seem all that different. In AR6 it seems to be something like from -1.2 W/m^2 to about 0 W/m^2, with a best estimate around -0.4 W/m^2.
Interesting paper. Humans are hard to model, though. The jury is still out on how quickly evidence of climate change will spur action.
In 2016, Presidential science advisor John Holdren gave a West Wing speech hailing behavioral science as a new governmental tool for mitigating Anthropocene climate by changing human behavior.
But some weeks after signing the Behavioral Science Insights Executive Order, President Obama spoiled the effect of John’s stemwinder by remarking that:
“The American people don’t cotton to being ruled.”
Biden’s people seem to be rediscovering this for themselves.
Ah but, Jon, psychohistory didn’t work. Those unpredictable humans, not behaving as expected even en masse. Give me a population of nice, predictable CO2 molecules any day of the week. That’s why they needed the Second Foundation to pull strings behind the scenes and keep things on track.
And (spoiler alert) even that wasn’t enough. It needed humanoid robots disguised as humans pulling the string-pullers’ strings, directed by what we’d now call a benevolent AI following the Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm. Indeed Seldon’s wife was one, presumably there to keep him and the cover story on track.
In what I consider to be a merging of the Robots and Multivac stories, humanity eventually uploads itself into what has become a cosmic AI (see The Last Question). Maybe we behave logically there, but The Matrix…
And even that great predictor of the future presumably envisaged his supercomputer as having lots of vacuum tubes. I do wonder if VAX was a play on Multivac and whether the DEC engineers were Asimov fans.
Every field should have a Zeroth Law, even if it’s invented retrospectively (that one is more akin to a mathematical axiom, in that you need it to make the others work, but I’m not sure you can prove it experimentally, other than by the observation that in the non-quantum, non-relativity world, the others do indeed work).
This seems more interesting than the vague statistical projection from Pielke et al. but ultimately it’s also an exercise in trying to predict our decision-making. To me the main purpose of scenarios is to inform decision-making so it seems kind of bizarre to me that people seem to be trying to inject into those discussions methods which presume to know what decisions we’ll make. Of course, it’s worth understanding that policy doesn’t happen in a vacuum, and what is politically feasible or desirable is influenced by events.
Unless I’m missing something, the left-hand graph above appears to suggest a high likelihood that an effective global carbon tax will increase from its current value of ~$5 to hundreds of dollars within the next five years. That seems incredibly unlikely to me.
The Sherwood et al. 2020 sensitivity result was partly informed by a major assessment of aerosol forcing by Bellouin et al. 2020, which also formed the basis of the AR6 aerosol estimates.
Bellouin et al. 2020 found a much stronger best estimate for present day aerosol forcing than reported by AR5. This is in large part why Sherwood et al. found a much higher ECS in their energy balance instrumental period test than previous climate sensitivity papers looking at the instrumental period, since those used the AR5 aerosol estimate.
Thanks, Dave, for the nice trip through some of your resonating thoughts! Enjoyed!
Yeah, nature is, if nothing else, a very consistent teacher. I’ve mentioned that here, before. Large number population statistics pretty much ensures that outcome. But many seem to imagine that if their emotions are strong enough and sufficiently engaging of others then it will all serve as motivation to find solutions. And everything will be right with the world. While stories can serve to motivate humans, nature doesn’t actually listen. It just moves on, consistently. We must listen to nature. And not expect the reverse.
There are a number of SciFi stories about uploading to a cloud-AI. In fact, I just finished yet another one called The Singularity Trap by Dennis E. Taylor. It’s an interesting twist and I can recommend it.
There is some depth to what you are talking about — the 0th law. I need to think more about what you are considering. The gestalt you may be pointing towards hasn’t quite yet arrived for me, though I can perceive some depth. And I’m interested. But I just need to give it some time before something inside me may precipitate. And yes, I see a glimmer. But the fullness eludes me. Or perhaps I’m reading too much. Time will tell.
I’ve siphoned down the accepted manuscript. (Not the published version, as it is through a paywall.) It’s a long paper! If you know of subsequent commentary on the paper, or of related follow-ons, I’d appreciate the pointers. This is going to be a long read.
I take your thoughts about both “methods which presume to know what decisions we’ll make” and the very unlikely suggestion that the effective global carbon tax will rise by two orders of magnitude in 5 years. I do wish I could see how you arrived at that last part, though. I’m still struggling through the Moore et al paper and there are many references to other papers I need to get and read. Their “tuning” alone is going to take me some serious time to gather up. So any clues you’ve garnered from your own reading would be appreciated.
Three of the five curves “flat-line” at the absolute maximum “policy stringency” by 2040 at the latest, and largely earlier than that. None of that seems likely to me unless the authors see some quite serious human disasters taking place in a very short time from now, followed by global emergency actions. The chart doesn’t appear to well reflect the observed political trajectory so far, and this adds to my skepticism. But I’m ignorant. So what do I know?
Yes, there is a circularity to some of this. If you’re using models to inform decision making, then it’s hard to see how you can then also predict the decisions that will be made in these models. In some sense, this model was trying to present self-consistent outcomes, so it is interesting from that perspective. However, I have always been slightly bothered by suggestions that this type of work should highlight what is possible, and what isn’t, because that runs the risk of becoming a self-fulfilling prophecy. If scholars claim that 1.5C is no longer possible when it technically still is, this work could then lead people to give up, making 1.5C no longer possible.
…the very unlikely suggestion that the effective global carbon tax will rise by two orders of magnitude in 5 years. I do wish I could see how you arrived at that last part, though.
It’s really just looking at the “policy stringency” figure shown above, plus the supplementary information states that parameter ‘can be thought of as the magnitude of the tax on carbon emissions’. It’s not clear that the authors intend the “policy stringency” value to equal US dollar carbon price one-to-one, but a value of 300 is broadly similar size to US dollar carbon price levels reached in SSP scenarios with equivalent temperature change (i.e. 3.4 level scenarios). So I would suggest the “policy stringency” values can be considered roughly equivalent to an effective global carbon price dollar value.
There is also a spreadsheet supplied with the paper with year-by-year exact figures. It shows that the green, yellow and black lines are associated with 95% of the Monte Carlo runs performed. The fastest policy development of those clusters exceeds 200 in 2025 and the slowest is just short of 200 in 2027, from values of 5 and 3 respectively in 2021.
So, this appears to indicate a 95% chance that an effective global carbon price will be roughly at or above $200 in 2027. That seems absurd to me.
The supplementary information also states that policy is limited by an arbitrary maximimum [sic] value of 300.
My guess is they put this limit in because the model produces even more absurd results without it.
There are lots of ways these kinds of prediction exercises could be useful if they can really distinguish which pathways have reasonable probability of taking place. Or what strategies might get us a good outcome.
If you are building a sea wall, or investing in energy infrastructure, it is critically important to have a reasonable prediction, not a conditional projection, of how things might turn out on long timespans.
As a world, we face stark choices and can shape the future, and projections of future outcomes for various policy selections make sense. But as individuals, and even at the national level, we are at the mercy of everyone else’s choices, and it is perfectly reasonable to try to predict the choices others might make, in order to plan how best to respond.
For some purposes you want a projection: “there will be no shortages as long as people don’t panic buy toilet paper”. And sometimes you want a prediction: “people are going to panic buy toilet paper, so there’s going to be a shortage”.
And so now we come full circle.
In one part-time field of mine, a very widely applied tool starts by looking at the open-loop response of a system and then we often will *design* a negative feedback (or positive feedback or some interesting combination of them) in order to achieve the desired closed-loop system response that we need to achieve some purpose. This is all done in s-space, though of course, we can also see what that looks like in the time domain using inverse Laplace.
One aspect of all this is that these are extremely well-understood tools that have a century of direct application experience and the entire highly interwoven and unified field of mathematics behind them. There is very little new or novel and we understand most of the pitfalls and boundaries of application when we apply them to specific circumstances in front of us. It’s almost boilerplate. And it just works right every time, assuming you’ve got all the important details in hand.
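For readers outside that field, the open-loop versus closed-loop distinction can be illustrated with a first-order plant under proportional negative feedback. The gains and time constants below are arbitrary illustrative values, and this uses a plain Euler integration in the time domain rather than the s-space machinery described above:

```python
# Open-loop vs closed-loop step response of a first-order plant
# G(s) = K/(tau*s + 1) under proportional negative feedback.
# All gains and time constants are arbitrary illustrative values.

K, TAU = 2.0, 5.0      # plant gain and time constant
KP = 4.0               # proportional feedback gain
DT, T_END = 0.01, 60.0 # Euler step and simulation length

def simulate(feedback):
    """Return the output at T_END for a unit step input,
    with or without proportional negative feedback."""
    y, t = 0.0, 0.0
    while t < T_END:
        u = 1.0 - (KP * y if feedback else 0.0)  # feedback subtracts KP*y
        y += DT * (K * u - y) / TAU              # tau * dy/dt = K*u - y
        t += DT
    return y

open_loop = simulate(False)   # settles near the open-loop gain K = 2.0
closed_loop = simulate(True)  # settles near K/(1 + K*KP) = 2/9, and faster
print(open_loop, closed_loop)
```

Closing the loop changes both the steady-state gain (to K/(1 + K·KP)) and the effective time constant (to τ/(1 + K·KP)), which is exactly the sense in which the closed-loop result depends sensitively on the feedback block: a poorly characterised feedback can dominate the response of an otherwise well-characterised plant.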
The problem I see here isn’t the idea of “closing the loop”: taking a planetary system that includes Earth, Air, Sun, Ice, Water, and Life (not human), where the internal feedback systems of life itself are the key to understanding how the system supports a highly chemically active atmosphere with oxygen, and now including human behaviors as part of the fuller interaction space, so that we can make closed-loop predictions that are better than those made without human behavior. The problem is that I frankly don’t believe we understand enough to know that the results would be “improved,” especially since we simply do NOT have the sufficiently developed science-knowledge needed to properly describe the feedback component so that the overall system response is understood *better*.
That’s the problem.
Translated to that above-discussed part-time area of mine, this would be the equivalent of taking an open-loop description developed by a regularly improving understanding of climate due to the extremely hard work, both theoretical and as a function of experimental results, and jamming in a feedback block which has almost no rigorous field of study behind it and seeing what comes of the closed-loop results. As the closed-loop result radically depends upon both the open-loop description (which I have *some* confidence in) as well as the feedback description (about which I have almost *zero* confidence), the results would be essentially meaningless. It would be a serious error to consider such efforts as anything more than a parlor game.
I mentioned this earlier. When *and if* there ever comes to pass a serious understanding of large scale human behavior such that it could be incorporated into such simulations, and if and only if that understanding is well-tethered into the existing base of other science fields (which are all very highly interwoven and unified in order to create their strengths and which is the essential difference between science activities and … say… tea-leaf reading), then I might feel that there was some better chance the closed-loop analysis would be worth a darn. But because of the extreme sensitivity of the closed-loop results to the feedback itself and because I don’t believe we have the knowledge yet to characterize it anywhere near close enough to be useful, I have no confidence at all in any attempt to apply some “human behavior feedback” to the existing knowledge in climate science.
This in no way means that one should not play those parlor games because sometimes gaming leads to insights and those may create needed research initiatives. But even here I would not think that the better way to approach improving our knowledge would be exclusively through such game-playing. Instead, we need to seriously engage an understanding of large-scale non-linear human behavior models *before* we start having any confidence in applying them to a feedback needed to close the loop. It should be developed outside of climate science but within science, generally, so that it is highly unified with science and then, when prepared well enough and proven to be sufficiently useful, brought back into climate science for these purposes. I think it would be a mistake to believe that developing a subset field that only copes with human behavior in response to climate alone is likely to produce a stand-alone knowledge of its feedback function. More likely, it creates a twisted and distorted result that is bent by the process itself used to create it. And I feel the likelihood of getting good results that way is poorer.
Well, that’s my engineering piece for now.