I’ve been doing some reading, and listening to podcasts, to try and better understand longtermism. It’s currently topical because of its association with effective altruism (EA) and the collapse of the FTX crypto exchange. Some have been very critical of longtermism, regarding it as extremely dangerous, and there have been some responses to these criticisms.
At its simplest, longtermism seems to suggest that humans could be around for a very long time and that we should consider how what we do today might impact humans of the future. In particular, we should pay attention to potential existential risks, which could eliminate huge numbers of future humans. Examples are nuclear war, the development of bioweapons, and artificial intelligence.
Some of this seems perfectly reasonable. Yes, we should consider potential existential risks. Yes, we should think – in general – about how what we do today might impact people in the future. However, some of the extensions of this seem bizarre. Yes, if we consider very, very long timescales, there could well be a huge number of humans who will end up being alive.
However, considering in any detail how what we do today might impact humans in the very far future seems somewhat nonsensical. The system seems far too complex to have any confidence in any such analysis. So many factors will influence the existence of humans in the far future that there seems little reason for us to worry too much about it now.
Also, if we consider long enough timescales, there must be existential threats over which we will have little control (super-volcano, massive asteroid/comet, etc). Yes, in the far future we may have the technology to deal with such risks, but these should still set some kind of risk baseline (let’s not get too worried about risks that are comparable to that of these natural threats).
The bigger issue seems to be the possibility that longtermism may be used to justify actions today on the basis of them being of benefit to future humans. For example, not helping some people today because one is prioritising the very large number of humans who will be alive in the future, or even doing something that may harm people today on the basis of it helping many more people in the future.
Of course, I’ve only just started trying to understand longtermism, so may well not have fully grasped what’s being suggested. What struck me when I read this response to the criticism was that there seem to be a number of strands to longtermism, some of which seem more reasonable than others. Some of the response to the criticism seemed to involve suggesting that that wasn’t what their version of longtermism implies. If so, maybe it’s best not to identify with a simplistic label if others who do so have views you object to.
I’m going to stop there. Would be interested to know what other people think. There’s clearly a lot I haven’t considered, and I haven’t even really mentioned effective altruism, which seems to be strongly associated with longtermism.
Something that I have been considering is the ethics of how what we do today might influence the existence of humans, or the number of humans, in the far distant future. Of course, we may do things today that will influence the future trajectory that humans follow and that could substantially change the total number of humans that come into existence. However, it seems to me that the priority should be people who are alive today and their right to choose to have, or not have, children.
Other than thinking about potential existential threats and things that influence the near-term future (climate change, environmental crisis, etc) I can’t see any real reason why we should be too bothered today about how what we might do could influence the existence of humans in the very far distant future.
In some sense, every decision we make influences the future trajectory, unless you think everything is fully deterministic, in which case it’s already pre-determined and we shouldn’t worry either.
It seems to me to be a bit of a false dilemma, we should be doing something about today’s problems, but that doesn’t mean we can’t also do something about tomorrow’s problems or next century’s problems or the problems of the next millennium. The difficulty is in working out the balance. I don’t have a problem with longtermism except for those who think it means we should effectively ignore the problems we have today.
It seems ironic that it is a bit like the issue of discounting, where some are happy to focus on growth now because of technological advances (vapourware) that mean the problems that will result can be dealt with more economically by future generations. For some, both discounting and longtermism seem to have the common factor of finding a rational argument for maintaining their wealth now (although buying Twitter may not have been a good move from that perspective ;o). I’m not sure that has much to do with altruism even if it is portrayed that way.
It is also ironic that discounting has the effect of downweighting the value of future lives (and those far away we will never meet) and longtermism seems to value them more, and yet both are arguments for worrying less about current suffering.
The Golden Rule seems a good start. Put yourself in the position of someone in 1000 years’ time and see how you would want people to act today. I suspect the first thing would be a treaty to get rid of weapons of mass destruction, followed by doing something about climate change. If it was me, I wouldn’t want them spending a lot of money on preventing asteroid collisions at the expense of the majority of the population being in poverty.
“Of course, we may do things today that will influence the future trajectory that humans follow and that could substantially change the total number of humans that come into existence. “
I don’t think that would be a good measure of success. There ought to be more to life than existence. Trouble is the proper measure of success depends on values and probably can’t be measured even if we could articulate them. Is a thousand happy people better than a million in misery?
Yes, I agree that we should be able to consider problems we face today as well as how what we do now might impact the future. Of course, as you say, the balance will depend on our values.
Yes, this does indeed seem to be the case. Judgements that are made about discounting and longtermism both often seem to be based on justifying maintaining some kind of status quo, rather than a genuine desire to do what is “right” (although I realise that this is non-trivial to define).
I also noted the apparent similarity between discounting and longtermism, even though they should be in opposition.
Indeed. I was mostly just highlighting how decisions we make today could have a big impact on how many people come into existence in the far distant future. I don’t really see a good reason why we should worry too much about this.
I think it would be a good idea if we (as a species) could live within the resources available, so future population sizes are something we should be thinking about, but I think it is more a near term issue that will have been resolved in the distant future as culture adapts to technological change (especially medical advances). At least I hope so (although I am reminded a bit of the documentary “Idiocracy” ;o)!
To me the Golden rule is more a show stopper than a good start.
Try this at home. Say you want to eat a croissant. If you eat it, someone else will not eat it. If you apply the Golden rule, nobody will ever eat the croissant. This applies in general, whenever symmetry breaks, but the problem is at its worst when there are six croissants. Don’t ask me why. It just is.
(Any resemblance with recent biographical events is merely fortuitous.)
Consumption comes with an unfairness that deontological ethics may never be able to resolve. Yet suppose it could. That is, assume that we could act in such a way that is fair and square with everyone including ourselves. Everyone, past, present, possible, future. Also assume omniscience – we know what everyone (I mean *every* one) wants or needs. It is all the same. We are like every single being.
In principle, this framework could only help in cases where there is one best solution, in which the choice is not indifferent, with clear outcomes, etc. In other words, we would be in the perfect world of decision theory.
Now, for the kicker – has anyone really tried to decide anything using decision theory?
Return to the croissant case. You decide that your spouse wants to eat the croissant more than you. They, in turn, decide *you* want to eat the croissant more than you. And so on and so forth.
Hence why there is always one donut left at the workplace, at least until someone is alone and tells themselves – the hell with the Golden rule!
PS: If you are reading this, Simon, we all know it is you who eats the last donut. We leave it for you.
“if we consider long enough timescales, there must be existential threats over which we will have little control (super-volcano…) … let’s not get too worried about risks that are comparable to that of these natural threats”
Modulating global radiative equilibrium may be energetically more ambitious than considering how to manage volcanic eruptions, especially in the light of climate-change-driven research on advancing the scale and scope of geothermal energy extraction.
Considered as energy resources rather than forlorn threats, magma chambers and rifts from Italy to Indonesia and California might pay for their own long-term control with geothermal fields sited to maximize heat transfer from dangerous hot spots. We have the interesting precedent of Iceland using sea water spraying to freeze and divert advancing lava flows to save parts of Heimaey from incineration or burial.
The hard part is reimagining the grid as a vehicle for underground geoengineering.
Such a non sequitur as I have never seen before. Very rich people putting their own private wealth into whatever THEY think will help the less fortunate. You know, like the Gates Foundation. New lamps for old. Place very mad emoticon here.
It still amounts to selfishness and greed by any other name.
Remember, no one here will ever be in this private sector longtermism crowd. We all are just a bunch of moronic consumers adding to the wealth of the richest people, or some such.
There has never been, nor will there ever be, enough money to go around. Period. Full stop.
You only get rich if things are very unfair to begin with in the first place. D’oh!!!!!! There is every reason to think of better alternatives as that leads to neartermism! The exact opposite of longtermism. Doing the most/best now, rather than procrastination of the very rich private sector into the very far/distant future.
My cynicism has just now reached an all time new low. Only Musk placing billions of people on Mars would make it go lower!
Boo Longtermism, boo! 😦
Hooray Neartermism, hooray! 🙂
It seems to me that concern for 5 to 7 generations makes sense. Our current trajectory and big model planning looks like it will produce big trouble for 2 generations from now. That should not be acceptable. I am loath to mention taxes because I am an American and there is nothing more unAmerican than taxation, but I think we may need to reinstate a tax table that has a top rate that is back at or near the Eisenhower top tax rate.
I wonder if there is an optimal timescale for future planning?
Politicians think in 5-year election cycles. Accountants depreciate assets over 20 years. Climate scientists tend to think 80 years ahead to 2100. Longtermists are thinking billions of years ahead.
Personally I tend to think out about 200 years. For me the critical limit is that we have about 200 years until our resources run out. We have to establish permanent and self-sustaining access to space resources before then or be stuck on the surface of the planet indefinitely.
Longtermists are thinking trillions of years ahead.
There, fixed that for you!
10^54 versus 10^58 virtual, or real, humans, or some such, oh the humanity of The Matrix and humans being controlled totally by an alien species that thought of the idea of longtermism billions of years ago from the opposite side of the Milky Way galaxy!!!
The 1st existential risk are these longtermists, and as we all know, philosophers are the 1st to go when humanity is facing existential risks. 🙂 Oh the inhumanity of losing so much future human potential as to lose philosophers of all stripes, or maybe not so much. 😀 😀 😀
The proliferation of existential threat inflation will never deter some people from demanding more.
Looks like the universe will die in 10B years or so:
Longtermism may die of longwordism long before that.
I would like to think we should be thinking out to around 200 years. To my thinking we do appear to have duties of care to people now living for impacts our choices and actions will have over their lifetimes, which takes us halfway there. But if we can manage that much planning based on foresight we will be better placed for the century that follows it.
Mostly I’ve encountered Longtermism in the form of Space Colonisation/PlanetB type thinking – what I call Grand Space Dreams – and I have to say I remain very doubtful of the viability of space colonies by intent for anything apart from activities that contribute significantly to the parent economy. There is no separate space economy and using resources in-situ to reduce the costs of projects in space is not the same as those projects being self supporting, as in being income positive. And at this point I think any commercial space enterprise based around utilising space resources rather than Earth ones will be trying very hard to NOT include any need for astronauts; including them will greatly increase the complexity and costs and shift the primary focus from the commercial activity itself to the safety and comfort of the astronauts.
I think successful colonies are historically less a matter of planned intent to have colonies as emergent outcomes of outposts developing successful trade in resources with and within the parent economy. And I don’t see any such opportunities in moon or Mars, where the fundamental basics of survival – the bits that Earth provides for free or at very low cost – require a comprehensively capable advanced industrial economy to sustain.
Sustaining a healthy, wealthy Earth economy looks to me to be the essential ingredient for making some of these longer term possibilities possible. Pushing ahead with Mars colonies before we are capable of it won’t make it happen faster; no matter the successes of space programs as R&D hothouses I see the totality of R&D as being the essential source of the needed progress and breakthroughs. The desire to colonise space isn’t going to go away by failing to support premature attempts and diverting our greater efforts into fixing more near term Earth based problems.
This is cool:
The concept of moral trade appeals to me more than another three-syllable ism-word.
” … and there have been some responses to some responses to these criticisms.”
or archived here …
Rich people being such rich smelling turds.
Obviously, longtermism is in no way related to STEM, as their math skillZ are just so atrocious, 10^14, 10^18, 10^45, 10^54 and 10^58, or only 44 orders of magnitude. Whereas the math should be normalized by the then existing population of human lives, so equating one billion lives today with one billion lives in a zillion years makes no sense whatsoever, not even if one marginalizes today’s billion by an extremely small, say 10^-20 coefficient of vast future human numbers a zillion years from now.
Dogma, secular or religious, is no different than, say, a totalitarian worldview, in my honest opinion. It’s the end of the philosophy as we know it (and I feel fine) …
Willard, if one accepts that:
” humanity only has about 1 billion years left unless we find a way off this rock. That’s because the Sun is increasing in brightness by about 10 percent every billion years.”
one should be horrified by the moral hazard of delaying geoengineering’s start, for the path is long and the learning curve is steep.
Has the American Philosophical Society nominated the Space X chaps for the Peace Prize yet ?
From this anti-EA article …
See also …
Straight out of The Terminator sci-fi franchise, where superintelligent AIs take over the world. So sophomoric is their ideology that AIs will dominate our human futures that we no longer have the means to colonize space, call it Our Manifest Destiny to Space Colonialism.
The Matrix, where we live perfect virtual lives but are actually merely batteries for more advanced intelligent AI machines. This ideology is simply the creation of Hollywood for crying out loud.
FWIW, Bostrom comes from physics and Ord worked on oracle machines.
These are not POMO chaps.
You would think sustainability would be important to Longtermers. Good luck switching energy sources every couple of hundred years.
As a Cornucopian technophile, I have greater confidence/optimism in the ability of technology to improve the lives of present and future humans.
I also think the time horizon for consideration of the future should be roughly 200 years.
With that in mind, I think this issue could be best approached by looking at the past and seeing what lessons it might offer.
If Oppenheimer decides not to work on the Manhattan Project…
If religious and moral thought turns against the institution of slavery earlier…
If Wakefield decides not to publish….
The list could easily be extended and could perhaps replace ClimateBall as a parlor game. (Happy holidays, willard.)
But of course, the focus of this excellent venue is climate change and longtermism with a time horizon of 2-300 years is the lens most habitues of ATTP seem to have adopted.
Much of the friction between y’all and myself stems from my desire to keep today’s distressed more firmly in mind as we allocate scarce resources to projects that may benefit future generations.
I think the best thing we can do for future generations is learn and get smarter. While we are so engaged, I think we should feed the poor, house the homeless and comfort the afflicted. Sorry I’m so old-fashioned.
Happy holidays to all.
These so-called chaps are mostly sexist, mostly racist, mostly conservative, mostly older, but almost always white males or their coddling white female counterparts.
Some things never really change, now do they?
Will MacAskill – so sexist that he took his wife’s name. He argues that men should change their names:
I wasn’t implying they were, as I haven’t finished reading the Aeon piece, but pop Apocalypticism is all over the board. As Aeon notes early on:
” young climate activist Xiye Bastida summed up this existential mood in a Teen Vogue interview in 2019, the aim is to
‘make sure that we’re not the last generation’,
because this now appears to be a very real possibility.
The SpaceX & Starship chaps hoopla is Stockholm Fever-worthy relative to the death of the sun because the journey of a thousand light years begins with a single parsec.
Everett: What an Earthist AND astrophobic thing to say!
Especially given interstellar socialism’s recent debut in The Nation
A lot of people seem to be suggesting that ~200 years is a sensible timescale to consider. I mostly agree. It’s about the timescale over which someone can have some understanding of family history and is the timescale over which you might expect your grandchildren’s children to be alive.
There are, maybe, some exceptions. We’re aware that emitting CO2 into the atmosphere is likely to lead to elevated atmospheric CO2 concentrations that will last for thousands of years. Of course, one could argue that if we focus on the impacts that will probably occur in the next ~200 years and make decisions that will benefit those who will be alive in the next ~200 years, it would also be the kind of decisions that would benefit those who will be alive in ~2000 years.
One thing that struck me when listening to an Ezra Klein podcast with William MacAskill is that many of the examples he gave of why we might want to consider the impact on people in the far distant future is that it seemed to be things that could also impact people in the near future, and hence we might want to avoid these impacts for their benefit, rather than for the benefit of those living in the far future.
I guess one could construct a scenario where we do things that benefit people now but would have a very negative impact on future humans, but it seems tricky to construct a scenario where people would benefit for the next ~200 years but then suddenly start to suffer. Even if you consider environmental issues, we’re already in a position where we’re being guided by the impact these issues would have now, or in the near future, rather than being in a position where we’re slowly degrading the environment so that we can ignore it for the next couple of hundred years.
Or perhaps instead of just feeding or giving medicines to the poor people of the world we give them a job instead. Moving them up the chain much sooner than would otherwise be possible. Joining the ranks of positive GDP growth or some such.
And no, a billion people a zillion years from now is not the exact same thing as a billion poor people today. Like I said above, they are just philosophers, worth literally nothing for their free thoughts, who are in no way, shape or form STEM literate.
I think ATTP posts this stuff mostly because he thinks they are absurd, but does not wish to say so. Like Curry saying “very interesting” or some such food for thought.
I, on the other hand, call them as I see them, including every logical fallacy that you, or anyone else for that matter, can think of.
It is like arguing with a fool, bystanders never know who the real fool is. You might want to move on now and join Jordan Peterson for all that I could care, Willard.
Their will, Russell. Their will.
The argument ought not to be complex. The priority is to stay in the game. Instances of risk to be wiped out accumulate and end up becoming a certainty. Basic risk of ruin, a concept that served professional gamblers and traders at least since the dawn of casinos and exchanges. It is a sound guideline, but just like the golden rule, we should not get carried away with it.
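The way repeated exposure to a small risk of ruin accumulates toward near-certainty can be made concrete with a quick back-of-the-envelope calculation. This is only a sketch: the 1% per-century figure below is an illustrative assumption, not an estimate of any real existential risk.

```python
# Survival probability under repeated independent exposure to a small
# per-period risk of ruin: P(no ruin after n periods) = (1 - p)^n.
# Even a small per-period risk compounds to near-certain ruin over
# long enough horizons, which is the gambler's "risk of ruin" point.

def survival_probability(per_period_risk: float, periods: int) -> float:
    """Probability of avoiding ruin across n independent periods."""
    return (1.0 - per_period_risk) ** periods

p = 0.01  # assumed 1% chance of ruin per century (illustrative only)
for centuries in (1, 10, 100, 1000):
    prob = survival_probability(p, centuries)
    print(f"{centuries:5d} centuries: P(survival) = {prob:.4f}")
```

At these assumed numbers, survival over 100 centuries is already well under 50%, which is why "staying in the game" dominates almost any payoff calculation over long horizons.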
Abiding by Dogen’s Law, i.e. Do Not Be a Jerk, cannot provide a complete decision procedure. One (not overly technical) problem with such a rule is that it only sets a limit. To think about the longest term possible is all well and good, but what does it tell you exactly about your daily routine? Nothing much.
As far as I can see, this looks a lot like rebranding stoicism for a more sanguine market filled with less testosterone-brimming bros. What does not create light creates its own darkness, say. It can lead to interesting meditations.
Reconnecting with the Iroquois cannot be that bad:
Seven generations is not far from the 200 years that has been evoked in the thread already. There might be something inherently human in thinking for posterity. That is why what some of these kids say makes me think they are alright.
Still, that kind of branding can also attract the Less Wrong crowd and quirky billionaires. This indicates to me that mistakes will be made. We can almost predict that eugenics advertisement is not far down the line.
Speaking of which, my new favorite podcast is called If Books Could Kill. Latest episode is about The Population Bomb:
Ehrlich is an entomologist, so technically a STEM guy, right? Kidding. I’m on Everett’s side for once. Classism reeks out of that crap.
> including every logical fallacy that you, or anyone else for that matter, can think of.
I could think of fallacies, Everett, but could you? I usually don’t, for fallacy theory sucks and X seldom follows Y unless the argument is closed under deduction.
As a recent Climateball episode made me find back:
The fact that Lupron is used against sex offenders does not imply that it is bad to use to treat autism, right?
Never forget that the guy who built the first gas chamber must have known about chemistry, and that anyone who would cry about guilt by association ought to meditate on using not-a-STEM as a denigration marker.
To extend my game a little further, if we want to shape our decisions to any extent based on a 200 year horizon, perhaps we should look at the state of the planet 200 years ago.
What decisions in 1822 could have prevented civil wars in the U.S. and Italy just 40 years later? That was the year that Simon Bolivar liberated Quito and the Greeks had their own war of independence.
What could have been done to hasten the end of slavery? That was the year that The American Colonization Society landed in what would become Monrovia.
1822 might well be remembered as the year that Charles Babbage finished work on his Difference Engine. What potential decisions could have received support or dismissal by proper usage of it?
Would prognosticators generally have been able to envision 2022? That decade did see the opening of the Santa Fe Trail and the Erie Canal. Could people foresee freeways, train and air transport?
The Rosetta Stone was deciphered–could smart folks (perhaps taking advantage of the Difference Engine) have prefigured modern cryptography?
Global life expectancy was 29 years. Could people then have predicted the Methuselah-like life spans of today?
I like this game.
I think the argument here is that it isn’t worth considering timescales longer than ~200 years. It isn’t that we should be focusing on 200 years into the future when we make decisions. Personally, I think we should mostly be focussing on what would help people who are alive today, with some consideration for the longer term future.
ATTP, I think I understand that. What I’m trying to communicate is that a) it’s hard to know what will benefit our descendants that far out and b) that using the past to calibrate our efforts isn’t entirely stupid.
And… as of course you remember from my myriad comments here, I’m all in on directing the bulk of our spending towards those suffering in the here and now.
Okay, fair enough, but I think that is similar to the points that others have made.
Here could be a more fruitful analogy. Suppose you want to plan for retirement. You look around and you find the 4% rule. According to that rule, your retirement fund ought to have enough so that you could subtract from it 4% each year and never lack any income for the rest of your life. If you want to have 40K per year, that means you need 1M.
Simple, right? Well, that is when things start to get interesting:
The TL;DW is to plan for 2% instead.
Interestingly, what isn’t considered in the episode is to live with less. In my previous research on the altruists, I saw that Ord gave away all his income above 18k pounds. He even downgraded it to 16K. The arithmetic of the proposal would then be quite simple. You will not need to save 2M if you learn to be frugal enough to live on half of what you have. You will need the same as the optimist. And if you are lucky and your capital accrues more than you dream of, you can always give more.
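The arithmetic of the two rules can be sketched as follows. This is a toy calculation of the fixed-withdrawal-rate rule discussed above, not financial advice; the spending figures are just the ones used in the thread.

```python
# Capital needed under a fixed safe-withdrawal-rate rule:
# capital = annual_spending / withdrawal_rate.

def capital_needed(annual_spending: float, withdrawal_rate: float) -> float:
    """Retirement capital required to fund a given annual spend at a given rate."""
    return annual_spending / withdrawal_rate

print(capital_needed(40_000, 0.04))  # 4% rule: 1,000,000
print(capital_needed(40_000, 0.02))  # 2% rule: 2,000,000
print(capital_needed(20_000, 0.02))  # frugal spend at the 2% rule: back to 1,000,000
```

Which shows the point in the comment above: halving the withdrawal rate doubles the required capital, but halving spending cancels it out again.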
I see two problems with this line of thinking. First, being told to consume less is no fun. Our parents conditioned us to dislike it. Puritanism is unsexy. Second, few like to think about retirement. Most outsource the problem. So the idea that thinking about the future is what will make people tick seems counterintuitive.
Perhaps all this will remain a geek hobby.
“1822 might well be remembered as the year that Charles Babbage finished work on his Difference Engine. What potential decisions could have received support or dismissal by proper usage of it?”
I suspect virtually none. Perhaps you were thinking of his Analytical Engine, which was never much more than vapourware.
I’m not sure looking at history is a good method of calibration as we as a species wield considerably more power to affect the world than we did – it isn’t commensurate.
The Two Century / 7 generation rule does seem a wise point of departure in the case of terrestrial threats, as it coincides with the length of the Age of Discovery: populations can and do migrate anywhere on Earth on that time scale.
Absent superluminal transport, but given relativistic time compression as V converges on c, the Nation authors asking the post-Fermian question
“where are all the United Federation of Socialist Planets starships?”
ought to have considered the possibility that instead of benignly conquering the universe, they went the way of [ Insert name of defunct Socialist Republic, Editorial Collective , or Maximo Lider here ], in which case SETI should switch focus to SRECAT:
the Search for Red-Shifted Extraterrestrial Corporate Advertising Transmissions.
I think there is good reason to believe we can develop messaging and public policy that would be attractive to many people and would involve consuming less. Describing these ideas as stepping away from mindless consumerism and having more time for family, or time to find our true calling instead of simply trying to move faster on the economic treadmill might be more productive than public policy messaging that tells people they must consume less and live like puritans. Same policy and outcomes, different framing and values.
I don’t want to be overly pessimistic, but societies that are so easily — even eagerly — gulled by such transparent frauds as cryptocurrency, Musk, Trump, Johnson, etc. may simply be incapable of long-term thinking, at least under present management.
> Much of the friction between y’all and myself stems from my desire to keep today’s distressed more firmly in mind…
I’m thinking you just can’t help yourself. Am I right?
Consider that you are no more desirous of keeping today’s distressed firmly in mind. That you’re not different in that regard.
How much would that change what you write in your comments?
What would it be like to be less sanctimonious?
Been following these issues and ideas for a while, and in general I agree with those who are most critical, finding a lot of cloistered academic thought and excuse making. Avoiding dealing with today’s problems today seems merely an excuse exercise for the acquisition of wealth and power for their own sake. [Everett Sargent, thanks for the clarion message on that.] I’m not up to writing coherently about it at the moment. Some extinction events are outside our power to prevent, so hubris won’t help. AI, again, cloistered etc.
But I did have a head-scratching moment at anybody thinking we have 200 years, let alone the consensus here. Here, if the image comes through, is what I think about that (though it’s not the planet, but human continuity on it, that is at risk).
Like counting on being able to move to Mars. Though I’d say trillions. Speaking of trillions, did y’all know we’d invested $1.5 trillion in coal in the past two years, mostly from banks that are part of the net zero alliance?
Tom: “Much of the friction between y’all and myself stems from my desire to keep today’s distressed more firmly in mind as we allocate scarce resources to projects that may benefit future generations.”
The first paragraph written by someone other than ATTP:
Dikran: It seems to me to be a bit of a false dilemma, we should be doing something about today’s problems, but that doesn’t mean we can’t also do something about tomorrows problems or next centuries’ problems or the problems of next millennium. The difficulty is in working out the balance. I don’t have a problem with longtermism except for those who think it means we should effectively ignore the problems we have today.
Magma, sadly I don’t think that is being overly pessimistic.
To me the Golden rule is more a show stopper than a good start.
my proposal was
if doing X can be a categorical imperative AND
not doing X can be a categorical imperative, then X is not an ethical issue
professor rawlsian didnt like it
I think the argument was more that if we are going to consider some kind of long timescale, then maybe a realistic one might be something like 200 years. As far as I can see, most things that could harm people in the far distant future are things that would probably also harm people now, or in the near future. So, it’s not clear why we need to really consider those in the far distant future when thinking about how we might deal with these various issues.
Our ability to predict the outcome of our actions becomes more uncertain the further we look into the future, which also makes 200 years a reasonable limit (if we actually need a threshold) – at least from a utilitarian (if not a Kantian) perspective.
There’s been some interesting discussion on this in various places recently, including Crooked Timber.
The question of how to value welfare in the long-term future has, as DM noted, also been ‘tackled’ by economists in the form of ‘discounting’. In some way the whole justification for discounting is to avoid absurd conclusions, where the needs of infinite future cumulative generations make the needs of those alive today seem unimportant; longtermism just embraces these absurd conclusions and blasts off with them into space.
There is solid and long-standing support for the idea that we should (and do) somehow ‘discount’ the far-future, even if not through the simplistic ‘discount-rate’ approach (which leads to the opposite absurd conclusion of completely ignoring the very-long term).
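As a toy sketch of what that simplistic discount-rate arithmetic actually does (my own illustrative numbers, not anyone’s actual analysis):

```python
# Standard exponential discounting: the present value of one unit of
# welfare delivered t years from now, at a constant annual rate r.
def present_value(future_value, rate, years):
    return future_value / (1 + rate) ** years

# Even a modest 3% rate renders the far future almost weightless,
# which is the "opposite absurd conclusion" of ignoring the very-long term.
for years in (10, 50, 200, 1000):
    print(f"{years:>4} years out: {present_value(1.0, 0.03, years):.2e}")
```

At 3%, a unit of welfare 200 years out is worth well under 1% of a unit today, and 1000 years out it is effectively zero.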
First, ‘discounting’ clearly resembles the actual ethics that human individuals and societies exhibit and profess. Longtermists are looking at what we should do if we followed a radically different code of ethics, but we do not, so that seems a bit moot. Their first job is to somehow change everyone’s system of ethics.
Second, the far future may not actually exist at all: at some point, humans at least will become extinct, and this may be essentially inevitable. If demise is inevitable, then humans as a whole have to cherish each moment of their society’s existence, just as individuals cherish what life they have left. More generally, longtermism runs straight into the problem that the possible long-term futures of humanity are both extremely diverse, and practically speaking, unknowable; grounding a mathematical formalisation of ethics around a specific science-fictional narrative of the far-future is a doomed project. ‘Discounting’ takes a pragmatic approach of focusing on shorter timescales where outcomes are more knowable.
Obviously, a serious approach requires some kind of compromise, where we dedicate some (probably small) fraction of effort to the very long-term, especially trying to avoid doing irreparable damage to nature, and reducing risks of human extinction as well, but still focusing most efforts on the needs of the current and the next few generations.
I’m sympathetic to the idea of considering the very-long term future, but I seem to come to radically different ideas about what that means practically than the rocket-bros: things like preserving cultural artefacts, long-term robust storage of knowledge, seed- and bio-banks, and avoiding species extinctions. Part of that is just a question of values; I’m not convinced some utopian human future awaits that far outweighs that of the diverse system of ecosystems that still exist on earth. We have little knowledge what far-future societies will value, but at the very least we should pass on the (natural and cultural) legacy that was given to us in as intact a form as possible.
Yes, I agree.
What’s interesting, as I think Dikran was highlighting earlier, is that people seem to use what appear to be entirely contradictory philosophies to justify similar actions. For example, set a large discount rate which justifies not doing/spending too much now. Prioritise future humans as a justification for not doing too much for today’s humans. I realise that the latter may be an overly simplistic interpretation of some of what has happened, but it does seem that people are using these various “philosophies” to basically justify a status quo that suits them.
ATTP in Douglas Adam’s book “Dirk Gently’s Holistic Detective Agency”, the backstory of one of the characters (Gordon Way) is that they founded a software company (Way Forward Technology) that made his fortune with a single program called “Reason”:
Reason allows users to specify in advance the decision they want it to reach, and only then to input all the facts. The program’s task was to construct a plausible series of logical-sounding steps to connect the premises with the conclusion. The only copy was sold to the US Government for an undisclosed fee.
I think a bootleg version was made available in the early days of the WWW and has been circulated more and more widely as the WWW has grown! ;o)
I also agree with Ben. I have no end of computer media that can no longer be read because the hardware is no longer available. I don’t think any of it really counts as cultural artefacts (except the Petrarchan sonnet I wrote about a lost bicycle pump), but a lot of our digital history is being lost rather rapidly.
FWIW, deontologists ought to fear no consequence as long as they abide by their principles, including death. They only go by what they judge right. Uncertainty is less of a concern to them for they more or less bootstrap themselves by the sheer power of their reason. Consequentialists go the opposite way: they would only fear when outcomes are unclear. Hence they often portray themselves as having on hand an immaculate calculus of costs and benefits, as if that was a gimme. Uncertainty might be managed with an indifference rule, say a coin toss. So death is not a big issue to them either.
I never understood how either side would succeed in completing their grocery shopping. Perhaps the framework they offered was never meant for practicality. They amplify two important conditions, which we could call consistency and fruitfulness. Both are fine, as far as generalizations go. And since we are on theoretical ground, we could even try to optimize for both at the same time, or choose our fighter according to circumstances.
Economists tend to see utility as some kind of consequence calculus. A deontological way might be like how Anton Chigurh proceeds in No Country for Old Men. He often asks his victims: if following your rule led you here, what worth was it to you? Anton offers survival no discount. This might be a fair question. In the long run we are all dead, pace John Maynard. Trying our best to make the longest run possible makes sense. At least longtermists got that right. Or do they?
Survival can only be a good thing up to a point, as Victor Venema suggests in his last post. What worth would it be to suffer for all eternity? This is the very definition of Hell! To turn survival into a moral imperative might very well be how biological life succeeded in making it so far. But as a society, to make the world the best for the longest while remains the best of the best to shoot for. I would not care being more wrong if that obtains.
Is less worst dot com taken?
I was very recently diagnosed with prostate cancer. And no, I am not taking Lupron.
I am sorry, Everett, and hope it was detected at an early stage.
Turns out that all the CO2 we have been emitting is just what the doctor ordered to head off the next ice age (based on Milankovitch Cycles – we have about 50,000 to 80,000 years). How lucky! We couldn’t have done better long term thinking if we had actually planned it out.
My latest brain fart on longtermism is this: there have been several rather major mass extinction events in the long ago (say, 10^4 to 10^8 years ago). We are, after all, just another species, with environmental conditions that will remain completely out of our control, perhaps for as long as we are on this particular rock called Earth.
But I ask you, why are longtermists so selfish as to only consider the here-and-now human species (and their apparent do-nothing approach)? And if one were to go down the science fiction route, there are perhaps millions to trillions (or more) of other sentient species populating the universe, some perhaps existing for billions of years already. And no, I don’t have a problem with space being so vast as to make travel to other star systems at anything below a few percent of light speed virtually impossible.
So what is longtermism’s stance on alien hordes doing the exact same thing (as I obliquely noted above with my The Matrix comment) and wiping out humanity or enslaving us forevermore?
Longtermism = Selfishism by my math
When does a philosophy go so off the rails that those thinkers of that particular philosophy are almost exclusively funded by the selfish rich people of the world who will most benefit by keeping their vast hordes of cash to themselves?
I’m talking to you Musk and Bezos and their ilk! This is indeed all about science fiction.
Billionaire proliferation may lead where corporations fear to tread.
Three hundred years ago Ben Franklin started a perpetual compound-interest savings account with the intent of funding the future of American philanthropy. After only a century and a half the bank was still in business, but a Massachusetts court ordered it to liquidate the account lest longtermists end up with all the currency in the Commonwealth.
Let’s hope vacuum cleaner Dyson’s immortal Delaware corporation doesn’t start taking Dyson Spheres seriously.
Here is a possibly less apocryphal account of Franklin’s experiment, in which the $8800 invested was paid out on schedule after 200 years, with $6,500,000 in interest:
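For what it’s worth, those figures imply a fairly ordinary rate of return; a quick back-of-envelope check (assuming annual compounding, and treating the quoted numbers as exact):

```python
# Franklin's bequest: $8,800 principal, paid out ~200 years later with
# roughly $6,500,000 in interest on top of it.
principal = 8_800
final_value = principal + 6_500_000
years = 200

# Solve FV = P * (1 + r)^t for r.
implied_rate = (final_value / principal) ** (1 / years) - 1
print(f"Implied average annual return: {implied_rate:.2%}")  # about 3.4%
```

So the famous payout needed nothing more exotic than two centuries of uninterrupted compounding at a very ordinary rate.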
Some of the above did not come out quite the way as I meant it to …
“But I ask you, why are longtermists so selfish as to only consider the here-and-now human species (and their apparent do-nothing approach)?”
Longtermists are mostly not thinking about the here and now; it is their followers who appear to usurp this philosophy to justify their own, IMHO rather selfish, ends today.
Anything above, say, a few hundred years is merely science fiction; as Tom suggests/mentions above, in 1822 who could have envisioned the world as it exists today?
I actually hold an opposite view to longtermism: that helping today’s poor or less fortunate, if done in the right way (define “right way” as bettering their lives and their offspring’s lives), will improve the lives of all of humanity going forwards.
We are currently a very wasteful species: wrecking the environment, creating species extinctions, polluting the air, water and land (both short and long term). The list is endless. But hey, let us keep on doing nothing, as long as some of humanity can live on forever on other planets we can trash, and then move on further to trash the entire universe eventually.
May I suggest an alternative multisyllabic as a ground for an ethical way of being? Longdistancism, i.e. not discounting present living people who live in other countries/warzones/a long way away/out of state, etc. AFAICT longtermism is just a sci fi version of Utilitarianism, taken to its logical extreme, and singularly unattractive to boot. In a nutshell; let real, present others suffer more now so that other unknown others might suffer less at some point = c- at best.
200 years is fine as a hypothetical marker point for discussion but I feel no moral degeneracy for being more concerned for my friends’ children and their likely future happiness (lol) given our current pathway. For me, anything which suggests we don’t radically change something right now is dodging.
The Franklin Fund indeed exists, Russell:
It is not related with Franklin-Templeton Investments:
But Franklin-Templeton is indeed connected to John Templeton:
which in turn is connected with his own Foundation:
The Foundation provides mitigated benefices to society.
Sorry to hear this. Hope it was diagnosed early.
We’ve probably already delayed the next glacial inception by at least 50000 years. We probably don’t need to keep emitting CO2 for this reason.
When we are weighing the costs and benefits of CO2 emissions, we can now add the greening of the planet as a benefit, plus delaying the next ice age! I feel that fossil fuels don’t get their benefits properly accounted for. Longer life span, heating and cooling, more food (from greening) with less water (because of more CO2) and delayed ice age – not bad. Of course, both the benefits and costs of more CO2 emissions decline because the effects of CO2 emissions are non-linear. Oh well – that also factors into the weighing of the costs and benefits. Less benefit of going from 420 ppm to 560 ppm, but also less cost!
I would like to see us hit 560 ppm of CO2 in the atmosphere just so we can compute the actual realtime ECS and TCR. I feel that this would be useful feedback for the modelers. After that we can slow down CO2 emissions to fine tune future ice ages.
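As an aside, the attraction of 560 ppm is that it is exactly one doubling of the preindustrial ~280 ppm, so under the usual logarithmic-forcing simplification the equilibrium warming at that point just *is* the ECS. A rough sketch (the ECS value of 3.0 is purely illustrative, not an actual estimate):

```python
import math

# Usual simplification: equilibrium warming scales with log2 of the
# CO2 concentration ratio; the coefficient is the ECS (deg C per doubling).
def equilibrium_warming(ppm, ecs=3.0, preindustrial_ppm=280.0):
    return ecs * math.log2(ppm / preindustrial_ppm)

print(equilibrium_warming(560))  # one full doubling -> equals the ECS
print(equilibrium_warming(420))  # ~420 ppm today is ~0.58 of a doubling
```

The catch, of course, is that the realtime warming observed at 560 ppm would reflect the TCR, not the ECS, since the oceans lag equilibrium by centuries.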
The Cornucopian/Technophilic (err… my) view of the future goes something like this:
Things that can’t go on forever, don’t. Exponential growth tails off to 5% at some point. That is the boundary limit for technology. Within that limit almost everything is not only possible, but likely–if we choose it.
But steps toward one choice are steps away from another. Choosing wisely is important. Blockchain is an advance. Crypto currencies not so much. Some form of artificial intelligence is almost upon us, although not in the sci-fi sense of the word. But weapons directed by AI is not a good idea, something that is as true for vehicles as it is for weapons.
New toys will appear. Some will be useful. But our hope lies locked in biotechnology, with genetic modification offering nutrition for all, control of hereditary disease, revival of recently extinct species, delaying of ageing, cure of many diseases (and our only real shot at delaying senescence) and potentially much more, including new weapons to combat climate change.
Interplanetary travel will be a real thing in this century–interstellar, not so much. The number of known species on this planet will skyrocket–not because of faster evolution, but for better tools for finding and categorizing them. They will all be very small.
There is one thing above all others that will bring this future closer. That is peace. I wish it comes soon, for you and all others on this planet.
In practice, we use fossil fuels to provide energy and there is a cost to doing so, which is paid by the consumer (let’s ignore subsidies for now). However, there is a clear understanding that there are external costs associated with the use of fossil fuels that isn’t included in the price. One way to deal with this is to estimate this external cost and include it in the price. Most estimates suggest that this cost is positive (i.e., there are additional costs associated with the use of fossil fuels that are not included in the price we currently pay).
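The mechanism described there is just addition, but it can be made concrete (all numbers invented purely for illustration):

```python
# Pigouvian-style correction: add the estimated net external cost of a
# unit of energy to its market price, so the consumer faces the full cost.
def corrected_price(market_price, external_costs, external_benefits=0.0):
    return market_price + external_costs - external_benefits

# Hypothetical numbers: $0.12/kWh at the meter, $0.04/kWh of estimated
# climate and health damages, $0.005/kWh of claimed external benefits.
print(f"{corrected_price(0.12, 0.04, 0.005):.3f} $/kWh")
```

The whole debate below is then about the relative sizes of the two external terms; the point above is that most estimates put the net of them firmly on the cost side.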
What do you mean by this?
Yep. I am happy to pay the net of external costs AND benefits of fossil fuels. But a very careful analysis does need to be done to get the right price for the external costs AND the right price for the external benefits. Only if both are adequately accounted for is the pricing going to be accurate. I feel that most of the attention has been paid on external costs and not enough to external benefits (of fossil fuels).
If we could persuade the Templeton Foundation to park its assets as shrewdly as Doctor Franklin, by the time of the Second Coming, it might have enough money for a down payment on new sun.
Thanks for your concerns. Unfortunately it was not caught soon enough, according to my urologist. It does make me wonder if my GP was doing PSA screening tests, though: I ran into some severe circulatory issues a little over a year ago, at which point I had a PSA of 30, when 5 is normally considered borderline. My parents divorced when I was about 6 years old, so I never knew my father’s (or his family tree’s) history with this form of cancer, which means I could never tell my GP about that side of my family tree. I would strongly suggest that all men beyond a certain age, say 50, get a PSA test at least annually, if not twice a year.
I do have a gallows sense of humor though. I stopped paying taxes, that way I could cheat death! 🙂
I’m 69, and no one lives forever. So I will join the other 100 billion or so former human lives that nobody knows about today.
What makes you think very careful analyses are not being done? As I understand it, the analyses do consider benefits and costs and conclude that the net effect is a cost. I would also argue that we should factor in irreversible risks, but that’s a slightly different issue.
Please name some positive externality or social benefit of fossil fuel consumption that is not due simply to the energy contained therein.
As someone approaching a similar age, though without a similar diagnosis, at least so far, I appreciate your outlook.
I suspect ‘longtermism’ is a philosophical distraction from the problems of the present and our personal mortality.
Our real outlook is about 70 years minus our age.
But given the very limited influence any individual, whatever their wealth and/or power, has on the systemic processes of a global human society of 8Bn, we can only really affect our own and our close friends’ and families’ situations.
We discuss the larger and long-term problems not because we have any direct influence (although it is reassuring to think we do), but because, like the weather, it is useful to know when we might need an umbrella.
May your future be as pleasant and positive as is possible.
“Please name some positive externality or social benefit of fossil fuel consumption that is not due simply to the energy contained therein.”
EFS, I’m very sorry to learn of your diagnosis. Your last paragraph, I think, shows appropriate humility toward our place in the cosmos:
I, for one, have appreciated your insights here and elsewhere. I’ll be doing the atheist’s equivalent of praying for you. Yet in the largest scope, the ultimate mediocrity of our lives, our families, our countries, and our species is inescapable, in spite of all our hopes to be remembered:
(Thomas Gray, Elegy Written in a Country Churchyard)
Izen, I presume you were smiling when you wrote that. To be clear, I’m asking about benefits external to the price petrochemical producers pay for feedstock.
I thought Izen was channeling Benjamin Braddock in The Graduate.
Mr. Sargent, we have exchanged comments on various weblogs in the past–not sure if you remember me. I wish you the very best.
I am sorry to hear that news, Everett. I am not convinced I will make another orbit of my favorite star, but I am limping around and awaiting the spring with great anticipation. My affairs are in order. My comments will tail off shortly after my demise if things go as expected. I will be 70 tomorrow, so good long life. I have cheated death a few times, but we have remained on good terms nonetheless.
I am very interested in the experience of dying. I plan to follow the lead of one of my heroes, Aldous Huxley, and take a large dose of a hallucinogen if it is clear I am likely to be enjoying my last day dragging this meatsuit around.
I feel sorrow when I read about a person like Victor Venema who expires at a relatively young age. It feels like a robbery that impacts us all. Old guy dies after enjoying good, long life? Raise your glasses, friends. Slàinte Mhath. Here’s to Victor and all our friends who have gone ahead. May we meet again on some distant, beautiful shore.
“To be clear, I’m asking about benefits external to the price petrochemical producers pay for feedstock.”
I suspected as much, but the ambiguity of the enquiry allowed for the answer.
After all, plastics have been a major and unexpected source of advancement within human society. Imagine a world where everything now made of, or containing, plastic instead had to use metal or wood. With the added benefit that, as plastic, fossil fuel is not only useful but can be sequestered in landfill without the damage from CO2, until bacteria evolve an efficient method of breaking it down. Until then, like wood in the Carboniferous era, it remains a secure geological store of carbon.
Much of the friction between y’all and myself stems from my desire to keep today’s distressed more firmly in mind as we allocate scarce resources to projects that may benefit future generations
Well, I wouldn’t presume to speak for “all”, but purely for myself, it stems from your consistently false claims that your position is consistent with the IPCC, and your dissembling when this is pointed out.
Your belief in your own mind reading skills is something, though.
and the suggestion that Tom simply wants to do good things, unlike the rest of us who, by implication, don’t.
“There is one thing above all others that will bring this future closer. That is peace. I wish it comes soon, for you and all others on this planet.”
Tom, you havent been paying attention, there are several WWIII scenarios that are much closer today than 1 year ago.
Some form of artificial intelligence is almost upon us, although not in the sci-fi sense of the word. But weapons directed by AI is not a good idea, something that is as true for vehicles as it is for weapons.
i dunno, AI for weapons is pretty basic and well understood.. did my first AI threats in the 80s… simple predator prey pursuits work
AI for art is getting scary good, try DALL-E
here are some samples
mouse over to see natural language prompt
willard and marx play hockey
Longtermism is the view that positively influencing the long term future is a key moral priority of our time. It’s based on the ideas that future people have moral worth, there could be very large numbers of future people, and that what we do today can affect how well or poorly their lives go .
questions. can i have moral obligations to people who wont be born in my lifetime?
can i have moral obligations to a lump of cells that isnt a person yet?
do future people have a moral obligation to me.
if i can do unto you, but you cant reciprocate can we have a moral relationship.
if you cant return the favor what good are you?
can i morally make you indebted to me
or for Tom and rawlsians, can i choose to favor the unknown future generations
at the expense of the least among us today.
more practically can i leave my wealth to future generations of unrelated people and leave nothing to my children?
1. VTG, I do hope you don’t speak for all. I do try and comment on climate using the IPCC as a guide–but not a bible. Have a happy new year.
2. ATTP, I certainly would like some evidence for the claim that I think/feel/believe any or all of you do not want to do good things. How disappointing.
3. Steve, it is not a lack of respect for AI’s technical capabilities that inspires my caution about their use as weaponry or unsupervised transportation.
4. Steve again, the concept of stewardship is both old and established. ‘Leave the campsite cleaner than you found it’ was drilled into me in another century. It wasn’t new then. And future people have more value than as simple users of the campsite I just vacated. And people have been dealing with this issue for a long time. The relatively recent tool of discounting is discussed above.
I don’t think anyone is suggesting you bequeath your entire crypto wealth (any left?) to The Long Now Foundation, worthy as that organization is. Take care of your family first, of course. But you can kick in a couple of bucks to some outfit looking to make the future a better place, can’t you?
Oh–and VTG (and apparently ATTP), to be more specific and long-winded about ‘the source of friction between myself and y’all,’ for more than a decade I have been fighting the institutional attempt to prioritize a pan governmental approach to global warming that is fixated on eliminating emissions, an approach that heavily discounts the concept of adaptation.
I understand the importance of lowering emissions and the consequences of failure to do so. But people living today often suffer and die because we don’t adapt to present conditions, and more will suffer and die if we do not help them adapt to future conditions. Steve discusses one point of view upthread. I disagree with his philosophy as presented, although I have also seen him advocate dealing with the present before addressing the future, something I once characterized as Build For the Past but Remember the Future (https://cliscep.com/2017/10/07/building-for-the-past-remembering-the-future/).
As I remarked on a recent post of yours, I do not want to take one penny away from the sums devoted to mitigation. I just want them matched by monies spent on adaptation.
Given the season, I’m not that interested in some kind of “food fight”. However, you talk about “Much of the friction between us” being associated with your “desire to keep today’s distressed more firmly in mind as we allocate scarce resources to projects that may benefit future generations.” I don’t think the friction is associated with “us” objecting to your “desire to keep today’s distressed firmly in mind”.
Have a happy holiday season, ATTP.
I don’t think this does heavily discount adaptation. Firstly, the climate will continue to change until human emissions get to ~zero, hence a focus on eliminating emissions. Secondly, this requires some kind of recognition that everyone (for want of a better term) needs to do this, even if it would probably be good if some did so before others. Hence, this probably requires some kind of global “agreement”, either informal, or formal. Finally, adaptation mostly doesn’t need this kind of global agreement. Investing in resilience can be done locally, even if there are good arguments for the developed world to help the developing world.
So, the “pan governmental approach” (to use your term) focussing on eliminating emissions is mostly because this is what will probably be needed to stop global warming, not because it won’t also be important to develop resilience and reduce vulnerabilities.
I have recently heard that some people don’t like the term “resilience” but I don’t know what other term to use.
Have a happy holiday season too.
It’s a good term. It would be a better reality.
I agree with this, but I don’t think the problem is some focus in environmentalism that is stopping this, even if this is a convenient excuse that some will make. There is nothing stopping political leaders from deciding to invest in infrastructure, for example, that would improve resilience for people living in their countries. The problem is probably that this isn’t simple and that it isn’t a political priority.
I agree. Hence my meager attempts to lift the topic to a high level of discussion.
Tom: “2. ATTP, I certainly would like some evidence for the claim that I think/feel/believe any or all of you do not want to do good things. How disappointing.”
response to the answer:
Tom: “Have a happy holiday season, ATTP.”
Seems like it may have been one of those rhetorical questions.
FWIW I also have no problem with adaptation, especially as our societal stupidity means it will be required, even for issues where mitigation was possible and/or more efficient. It is another of those false dilemmas.
“Steve, it is not a lack of respect for AI’s technical capabilities that inspires my caution about their use as weaponry or unsupervised transportation.”
Oddly enough, mine is. The “Good Old Fashioned AI” (e.g. predator-prey models) is fine. The new generation of AI that people are getting excited about now, e.g. GPT-3, is far more worrying, because it is effectively a bullshit machine. It has no understanding of what it is talking about – it is just a language model, and it doesn’t care whether what it says is true or logically coherent. The idea of *that* being in military technology *is* worrying, specifically because of the technical capabilities it lacks but that users are likely to assume it has.
In which Moshpit lectures us on longtermism, as if we can’t find suitable and more authoritative references elsewhere.
In which Tom lectures on mitigation vs adaptation, and I go, well if sea level rise exceeds say one meter per century, there is not enough money on planet Earth to keep us where we are now, and as humanity has always done before, we simply move and rebuild.
Lessons learned? Always move to high ground, whatever that high ground is, always.
As to the outright selfishness of longtermism, we are still at the chemical rocket stage of development, for about the last century or so. Talking about longtermism is about as useful as talking about space ejaculation, which is just such a jerk off thing to do if you were to ask me. 😀 😀 😀
We don’t even know how dangerous interstellar space is in terms of things like cosmic rays, as we are currently stuck in the heliosphere; maybe we need some kind of fusion shielding chambers. Yet we have these STEM-illiterate putzes babbling on about 10^14 to 10^58 fake and real humans somewhere way beyond millennia, to millions, billions and even trillions of years from now.
Such ignorance, such absurdities I have never seen before, such snake oil medicine men. Yet they do have their very shortsighted adherents, like Musk, like Bezos (but not like Gates), gotta build their spaceships, to hell with the present, the future is ours.
“Put yourself in the position of someone in 1000 years time and see how you would want people to act today. I suspect the first thing would be a treaty to get rid of weapons of mass destruction, followed by doing something about climate change. If it was me, I wouldn’t want them spending a lot of money on preventing asteroid collisions at the expense of the majority of the population being in poverty.”
knowing what we know, what advice would you give to people 1000 years ago?
avoid the whole Oil thing? forget the science shit,
dont do math itll just lead to AI and nukes..
can you put yourself in the other guys shoes, living 1000 years in the future or in the past?
and if you cant can you have any basis for moral judgement or intuition
doesnt the whole concept of moral intuition require that we be able to do this
longtermism seems immoral on its face
Here is another way to think of this, I call it normalization, and damn what a concept.
At some point in the long ago, humanity passed one million and then one billion people. If we were to weigh one million or one billion lives today against those historic milestones, the point is this: remove that first million, or that first billion, and humanity would not be around today to jerk off to longtermism.
That is a formal proof that eight billion lives today are way more important than eight billion lives a trillion years from now, when there are supposedly 10^13 to 10^58 real and FAKE human lives.
Thanks for the gift, Mosh. Not sure who is Karl and who is me. Perhaps a good thing.
I think your questions could be appeased by considering their moral aspect. In the end, you are trying to act so that you can live with yourself. The relationship to cultivate is about you and the person you want to be. It does not involve any contract between you and possible lumps of cells you would never know.
Artificial agents lack fiduciary duty. I find it hard to give my money to a guy who runs high frequency algorithmic trading, for *nobody* knows why the trader does what it does, including the one who programmed the system. There is an opaque layer of rules that emerges, and this creates a legal difficulty.
If the fund manager can give me a good description of the strategy they try to implement, then I could lend my trust. After all, if problems emerge I could sue the owners of the machines, at least in principle. In reality it is impossible to do business with them without signing some renegading clause. There are situations where that guarantee is not enough. There are also situations where there is no one to sue.
Robots owned by absentee landlords nobody has ever seen or could ever reach – that is the scenario I find most frightening.
Just saw this. Perfect!
[While I was wondering whether to weigh in further (I have actually tried to digest AI etc., as The New Yorker presents a lot of thoughtful material, usually ahead of the curve, so I wouldn't be quite as half-baked as I often am), this popped up. I think it also provides a little cognitive dissonance to the idea that we think about 200 years as a way to distance ourselves from the gnawing anxiety of knowing how bad things are and how much worse they are getting.]
I think your proof suffers from logical impossibility. Suppose humanity rests on Adam and Eve. Their survival becomes crucial in any ethical framework that values humanity, including longtermism. The imperative to survive matters at all time scales, not just the long term. One could not be a longtermist and be OK with Adam and Eve going extinct.
In return, here is how we could improve your argument. Suppose Adam and Eve have two choices. One is to collect a biological bootstrap of the species in case of emergency. (Insert your favorite sci-fi scenario. One could argue that nature comes with one, as shown by the multiple extinctions.) The other is to make sure that their kids will have the capacity and the incentives to perpetuate the species.
So either Adam and Eve can relive new beginnings ad libitum, or they and their children can live happily ever after. The first option comes with the most robust survival warranty. The second option comes with the happiest ending. If we can trust fairy-tale wisdom, I believe humans would collectively prefer the second option.
Yet longtermism *should* prefer the first option. If survival rate is the unique priority, that is. Perhaps we are only discussing a naive version of the doctrine. I cannot conceive of *any* version of longtermism that alone would be able to distinguish between these two kinds of cases.
I believe this thought experiment captures your concerns. It is a roundabout way of saying that life needs to be worth living. Otherwise, what would life be worth? This is basically a truism. Survival just is not enough. Which goes to show that thought experiments are only good at expressing what normal people can express more directly.
On that point, we are on the same side. Philosophers should keep this kind of exercise to themselves. Still, I prefer them to formal proofs. The more I grow old the more I prefer vivid stories to sealed tight derivations.
How are predator-prey models classified as AI? Or is it that neural network style AI can be used to “solve” the nonlinear differential equations that are the basis of many predator-prey models?
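For what it's worth, the classical predator-prey models are just pairs of coupled nonlinear ODEs; nothing about them is AI. A neural network might be trained to approximate their solutions, but the model itself is plain math. Here is a minimal sketch of the standard Lotka-Volterra equations integrated with forward Euler (the parameter values are illustrative, not fitted to anything):

```python
# Lotka-Volterra predator-prey dynamics: two coupled nonlinear ODEs.
#   d(prey)/dt = alpha*prey - beta*prey*pred
#   d(pred)/dt = delta*prey*pred - gamma*pred
# Parameter values below are purely illustrative.

def lotka_volterra_step(prey, pred, dt, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4):
    """Advance the populations by one forward-Euler step of size dt."""
    d_prey = alpha * prey - beta * prey * pred
    d_pred = delta * prey * pred - gamma * pred
    return prey + dt * d_prey, pred + dt * d_pred

def simulate(steps=1000, dt=0.01, prey=10.0, pred=5.0):
    """Return the full trajectory as a list of (prey, pred) tuples."""
    history = [(prey, pred)]
    for _ in range(steps):
        prey, pred = lotka_volterra_step(prey, pred, dt)
        history.append((prey, pred))
    return history

traj = simulate()
```

No neural network is needed to solve this; a simple numerical integrator suffices. The "AI" framing usually enters only when such models are fit to data or approximated by learned surrogates.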
Way ahead of you on A/E; the Catholic Church condones incest! There never were just two, never, ever.
Refusing the thought experiment is not a way to refute it, Everett. Mentioning Adam and Eve only helps remind us that humans have had that kind of question in mind for a long while.
Two cases: survival induced by bio-tech bootstrap, or survival induced by eternal happiness. A longtermist either has to choose the first or has no means to distinguish the two. This is at best absurd. At worst, it is frightfully dangerous.
So there you have it. Longtermism posits humanity as sacred, yet appears to be the most inhumane ethical system humans could ever imagine.
Is it not what you wanted to prove?
I do not think in terms of humans alone. I think in terms of species; even humans are made up of more than one species, as DNA shows us. So I do not have any sort of humanist agenda. Heck, normal evolution alone will change us tremendously on the time scales under discussion with longtermism.
I consider longtermism a selfish, only-one-species-must-survive scenario. I believe that some species will survive, perhaps ones much more thoughtful and meaningful and wiser than Homo sapiens.
I really do not have much more that I can say, except maybe this… imagine two thousand years from now a new religion dominates humanity, called… wait for it… any moment now… Longtermism. There is no witness to its roots except for so-called gospels and something called the XXX Apostles, who are the people who invented this secular philosophy, along with others who were not present and who wrote such nonsense in the first place, say a little over one hundred years after the so-called facts. It was absurd in the past and it is just as absurd now. Humans are a selfish lot in my book. Why do you think I call myself a misanthrope?
I am not quite sure who plays Mary, Joseph or the Baby Jebus though!
“Two cases: survival induced by bio-tech bootstrap, or survival induced by eternal happiness.”
I don’t have a clue what you mean there, in either case, bootstrap or eternal …
Steve:”Some form of artificial intelligence is almost upon us, although not in the sci-fi sense of the word. ”
It is getting harder to distinguish between the writing of bad editors and chatbots trained by reading their magazines.
I take back my dorky so-called proof above if that is any help. Something always seems to get lost in translation between us. I thus bow before your superior intellect for whatever reason, because at some point, I no longer care enough to further support my POV.
Quite right, Everett, on so many counts. We may work on the survival and well-being of our species on any time frame that we select, but it is a foolish pursuit if we fail to understand our species’ place within an ecosystem where we can thrive.
Maybe we survive and thrive. Or we survive and struggle. Or we struggle and go extinct. The planet that is our home is likely to remain capable of supporting life in some manner from now until our lovely star expands and turns the surface of our world extra crispy.
All gonna be fun. Some wise soul said, don’t fear the reaper.
Have a good day,
> I consider longtermism a selfish, only-one-species-must-survive scenario.
Indeed, Everett. I dig that. Many reject humanism outright as speciesist, e.g.:
If you reject humanism, there is no need to analyze longtermism "from the inside". A stronger argument can be obtained by assuming what longtermism assumes and showing that it leads to some kind of absurdity. This is what I am trying to do.
Bootstrapping refers to a process similar to what happens when you turn on your computer, but for a species, in this case humans. Longtermists could preserve our genomes so that if the human species ever risks getting wiped out, there is a way out of it. Some kind of biological bootstrap. Sci-fi details do not matter here.
I thought about this because shooting for the longest term possible does not imply continuity. Human life could disappear for a while. As long as our tech-bros make sure it resurfaces, they should be cool with that, no?
This does not sound very humane. If longtermism, as a form of humanism, leads to inhumane conclusions, then so much the worse for longtermism.
It’s not a clear-cut refutation, as it’s not illogical, but it’s close enough for me.
Come to think of it, here is a more vivid way to illustrate how silly longtermism is:
Suppose Bjorn is a salmon longtermist. That is, Bjorn wants salmon to live forever. His plan is twofold. First, to preserve their genetic code. Second, to make sure salmon live very long term in that kind of environment:
Something tells me that Bjorn is the opposite of being most excellent toward salmon.
Survival is not enough. The fear of an ultimate death is a highway to hell.
Judith Curry seems to favor Taoism as a long-term alternative to longtermism:
“Judith Curry seems to favor Taoism as a long-term alternative to longtermism:”
How fortuitous that JC’s yang is an approach with a much larger frame than the narrow control of the IPCC yin. (-/s)
In Daoism the yin and yang are subsumed in a unity, choosing either is an error.
But all such western interpretations of very ancient Eastern philosophies are about as meaningful as a Chinese take on Plato.
Perhaps JC meant that while the IPCC is concerned with the mere material effects of climate change, she considers the abstract, conceptual aspects…
Epoch Times, the Falun Gong successor to The Washington Times, seems to be on the same page of the climate playbook as JC: it runs Patrick Moore’s GWPF screeds unedited.
Sabine Hossenfelder on longtermism:
Personally I think intelligence is likely to make us a non-arthropodial cockroach and extinction shouldn’t be our primary concern.
I tried to warn Judy, and was rebuffed
Almost a consensus. Since she offered no mediagraphy, I am not sure it includes Wang 2012:
Great effort is being made in doing justice to the Eastern traditions.
For a review of Wang 2012: https://www2.kenyon.edu/Depts/Religion/Fac/Adler/Writings/Wang%20-%20Yinyang.pdf
So, post-normal science and wicked problems. A recipe for something, just not entirely sure what (confusion, mostly, as far as I can see).
Just for the record:
Judith deleted my critique of her self-sealing definition of a COVID “consensus.”
She said she wouldn’t provide citations to support her taxonomy (out of spite?).
Then she said she’d delete any more of my comments that didn’t have citations, and she continues to moderate out any comments I write that include citations showing that her taxonomy is false (e.g., evidence that, contrary to her claim, public health officials noted early on that there was a marked age stratification in COVID mortality outcomes).
This from someone who wants to expand our embrace of scientific uncertainty, and looks to social media as a marketplace of ideas and home for “free speech absolutism.”
Funny thing is I agree with Judith in principle on a number of concepts, but it’s sad she’s so deeply and obstinately embedded in her selective, motivated reasoning.
Because freshmen are impressionable, I used to teach this:
The woods decay, the woods decay and fall,
The vapours weep their burthen to the ground,
Man comes and tills the field and lies beneath,
And after many a summer dies the swan.
Me only cruel immortality
Consumes: I wither slowly in thine arms,
Here at the quiet limit of the world,
A white-hair’d shadow roaming like a dream
The ever-silent spaces of the East,
Far-folded mists, and gleaming halls of morn.
Alas! for this gray shadow, once a man—
So glorious in his beauty and thy choice,
Who madest him thy chosen, that he seem’d
To his great heart none other than a God!
I ask’d thee, ‘Give me immortality.’
Then didst thou grant mine asking with a smile,
Like wealthy men, who care not how they give.
But thy strong Hours indignant work’d their wills,
And beat me down and marr’d and wasted me,
And tho’ they could not end me, left me maim’d
To dwell in presence of immortal youth,
Immortal age beside immortal youth,
And all I was, in ashes. Can thy love,
Thy beauty, make amends, tho’ even now,
Close over us, the silver star, thy guide,
Shines in those tremulous eyes that fill with tears
To hear me? Let me go: take back thy gift:
Why should a man desire in any way
To vary from the kindly race of men
Or pass beyond the goal of ordinance
Where all should pause, as is most meet for all?
A soft air fans the cloud apart; there comes
A glimpse of that dark world where I was born.
Once more the old mysterious glimmer steals
From thy pure brows, and from thy shoulders pure,
And bosom beating with a heart renew’d.
Thy cheek begins to redden thro’ the gloom,
Thy sweet eyes brighten slowly close to mine,
Ere yet they blind the stars, and the wild team
Which love thee, yearning for thy yoke, arise,
And shake the darkness from their loosen’d manes,
And beat the twilight into flakes of fire.
Lo! ever thus thou growest beautiful
In silence, then before thine answer given
Departest, and thy tears are on my cheek.
Why wilt thou ever scare me with thy tears,
And make me tremble lest a saying learnt,
In days far-off, on that dark earth, be true?
‘The Gods themselves cannot recall their gifts.’
Ay me! ay me! with what another heart
In days far-off, and with what other eyes
I used to watch—if I be he that watch’d—
The lucid outline forming round thee; saw
The dim curls kindle into sunny rings;
Changed with thy mystic change, and felt my blood
Glow with the glow that slowly crimson’d all
Thy presence and thy portals, while I lay,
Mouth, forehead, eyelids, growing dewy-warm
With kisses balmier than half-opening buds
Of April, and could hear the lips that kiss’d
Whispering I knew not what of wild and sweet,
Like that strange song I heard Apollo sing,
While Ilion like a mist rose into towers.
Yet hold me not for ever in thine East:
How can my nature longer mix with thine?
Coldly thy rosy shadows bathe me, cold
Are all thy lights, and cold my wrinkled feet
Upon thy glimmering thresholds, when the steam
Floats up from those dim fields about the homes
Of happy men that have the power to die,
And grassy barrows of the happier dead.
Release me, and restore me to the ground;
Thou seëst all things, thou wilt see my grave:
Thou wilt renew thy beauty morn by morn;
I earth in earth forget these empty courts,
And thee returning on thy silver wheels.
The point was to explain to them that immortality was a curse,
and that only death gives life meaning.
Further, that they should not assume that “lasts longer” = “better”,
and that it might be better to die young in a brilliant fashion,
to be a shooting star, so to speak.
As far as humanity goes, we’ve had a pretty good run.
I am really enjoying the Eastern thoughts about the afterlife.
The meaning of your life depends on the death you choose.