Longtermism, or How to Get-Out-of-Caring While Feeling Moral and Smart (pestemag.com)
59 points by colinprince 9 days ago | 26 comments





In a Twitter thread about a poll asking whether people would press a button that gives them a million dollars but kills a random person somewhere in the world (and whether or not respondents identify as EAs), Eliezer Yudkowsky had this to say:

> It's going three-quarters of the way from deontology to consequentialism and then stopping, is what it is. I'm actually quite serious there; I don't know how to use a million dollars to save the world, and barring that, I'm not throwing away my deontology for it.

The poll: https://twitter.com/Aella_Girl/status/1592598275366195201?s=...


I've also heard a response to this along these lines (paraphrased from imperfect memory):

"That's effectively what billionaires do, pushing that button over and over again and the random people are those that die from poor safety conditions in factories (because it meant more money for the billionaire), or by users of their products that were known to faulty but it would cost too much to do a recall, or because as workers they weren't given inadequate health benefits, or people near a factory got cancer or some other illness from poisoned land and water (because proper toxic waste disposal is expensive) and on and on they push that button over and over and over again."


I have to remark how peculiar it was to see this article almost immediately throttled off the front page despite trending votes and a low comment ratio. The idea that consequentialism might not be the moral high ground it's claimed to be might cause some in Silicon Valley to bristle.

I imagine a lot of people would press the button for selfish reasons; a million dollars is a big deal.

It's hard to know, which is what makes it a compelling thought experiment.

I'd say I have a pretty consequentialist mindset in the abstract, but without a figurative or literal 'gun to my head' I'm not sure I would extinguish a life. I'd have to wake up to that for the rest of my life. I also think that if people knew such a button existed, they'd seek to destroy it before it hurt them or people close to them.


I'm going to add a pretty cynical/self-centered take.

I do think most people would have some level of self-anguish. However, it would not be a million dollars' worth of self-anguish, especially considering it's someone you've never met and would never have met. (Honestly, I would probably forget about it most of the time.)

It also wouldn't match the anguish of many, many years of corporate golden handcuffs. When I think of a million dollars I don't think "ah, cool"; I think "this is a significant fraction of what it takes to buy my freedom".


One criticism that immediately springs to mind is the equivalence drawn between the trillions of people yet to come and the billions alive now.

Ok, let's say we play in this sandbox and accept that we need to build some level of thinking about future generations into our policy and politics, and that we are capable of knowing anything about them. It seems to me we should be applying a heavy discount to each successive generation: we may be able to intelligently guess at our children's lives and what would be good for their wellbeing, but by the time we get to our great-great-grandchildren, maybe it's just sheer fucking hubris to think we can know anything about their world, what they need from us, or whether they will be living on Earth, have harnessed free energy, or even exist.
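
A rough sketch of what such a per-generation discount could look like, assuming (purely for illustration) a geometric factor of 0.5 per generation; the 0.5 is an arbitrary choice, not a claim about the right rate:

    # Weight generation n by discount**n, where discount stands in for how much
    # predictive confidence survives each generational hop (illustrative value only).
    def generation_weight(n, discount=0.5):
        return discount ** n

    for n in range(6):
        print(f"generation +{n}: weight = {generation_weight(n):.3f}")
    # By the great-great-grandchildren (n = 4) the weight is already down to ~0.06,
    # i.e. their hypothetical welfare counts for little in today's decisions.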

So these EA people can make up models of the future all day long, but those models are pretty clearly an exercise in using fancy math to support some pet conclusion.


Isn't demanding a high discount rate itself an exercise in using fancy math to support a pet conclusion? Although future generations are different from us, so too are current potential beneficiaries of charity; and though charity aimed at future generations can certainly have unanticipated effects, so too can charity aimed at contemporaneous people.

My general ethic around philanthropy is to acknowledge uncertainty, and that most people are grasping around in the dark trying to do the best thing. Despite that, it's likely best to limit consumption for personal benefit and direct the difference toward the things you think are most likely to do good. When people inevitably come to different conclusions about what that is, that's not a defect; it lets us cover all our bases.


As sib notes, this is sort of trying to beat them at their own game. I think the better move is to reject the proposition that far-future population estimates/EV calculations of various projects are meaningful at all, on the grounds that the further out we project, the more sensitive those calculations become to errors or noise in the initial inputs (aka the butterfly effect). We have ample evidence that we (humans) are great at short-term predictions, okay at medium-term predictions, and pretty shit at long-term predictions. The longtermists presume to be an exception to this rule, but they should not be given so much credit. (Also, I don't think our mathematical edifices should be considered separate from us, as they are still artifacts/products of human reasoning, though I know this is a whole other can of worms.)
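
A toy illustration of that sensitivity, using the logistic map (a standard chaos example, not anything specific to longtermist models): two projections started 0.1% apart end up effectively unrelated after a few dozen steps.

    # Two trajectories of the logistic map x -> r*x*(1-x), started 0.1% apart.
    # In the chaotic regime (r = 3.9) the gap grows until the "forecasts" diverge completely.
    r = 3.9
    a, b = 0.5000, 0.5005  # initial conditions differing by 0.1%
    for step in range(1, 31):
        a = r * a * (1 - a)
        b = r * b * (1 - b)
        if step % 10 == 0:
            print(f"step {step:2d}: a={a:.4f}  b={b:.4f}  |a-b|={abs(a-b):.4f}")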

I've seen many criticisms of longtermism, but one I don't think I see often enough is the stunning egotism of it all. "I, the main character, can predict and plan for the benefit of people in the future, but those NPCs I'm harming today (or their descendants) have no ability to do good themselves." It's a very negative and/or dismissive way of thinking about one's actual fellow humans. Also, in the way that it prioritizes individual action over systems of action, I'd say it's very American.

Personally, when I want to think about the long term I think of those systems (as did the Founding Fathers BTW) so even the name "longtermism" is a bit of a cruel joke. It's not really about the long term so much as rationalizing current action.


It's frustratingly difficult to come up with an elevator pitch level counter-argument to longtermism and related arguments.

"These ideas lead to behaviors that are both cruel and ridiculous" is the objection that gets often made. But naturally, that's rejecting a position based on it's conclusion, a fallacy these types will panting to name as soon as this argument leaves someone's mouth (though I feel the objection is reasonable and needs something more).

So, the argument I'd go with is that all these reasonings basically involve multiplying tiny-numbers-with-big-variations by large-numbers-with-big-variations, resulting in ... complete garbage ... a distribution pseudo-estimate with no validity [1].
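
A quick sketch of what that multiplication does, with made-up lognormal inputs (every parameter here is arbitrary, chosen only to show how wide the product's spread gets):

    import math, random

    random.seed(0)

    def lognormal_samples(mu, sigma, n=100_000):
        return [random.lognormvariate(mu, sigma) for _ in range(n)]

    # "Tiny number with big variation": something probability-like around 1e-6,
    # uncertain over a couple of orders of magnitude.
    tiny = lognormal_samples(mu=math.log(1e-6), sigma=2.0)
    # "Large number with big variation": something future-population-like around 1e15.
    large = lognormal_samples(mu=math.log(1e15), sigma=3.0)

    products = sorted(t * l for t, l in zip(tiny, large))
    n = len(products)
    print(f"5th percentile:  {products[int(0.05 * n)]:.3e}")
    print(f"median:          {products[n // 2]:.3e}")
    print(f"95th percentile: {products[int(0.95 * n)]:.3e}")
    # The 5th and 95th percentiles differ by several orders of magnitude,
    # so a point estimate of the product carries almost no information.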

The further point I'd add is that aside from dealing with crises whose extent is moderately calculable, such as climate change, the activity which would contribute to humanity's future is ... actually improving humanity's qualities - increasing the general sensibleness of people, our knowledge of science and the scientific method, our tendency to protect each other, the cohesiveness of our communities, our caution in avoiding destroying things we don't understand - Chesterton's fence, etc.

Which is to say, also, that the longtermists, the Bostroms, etc., don't understand that truly hard problems require not equations calculating the odds but a general flexibility and caution in dealing with the world. Why humans can solve the car-driving problem and robots still can't, etc.

[1] J.M. Keynes, in "The General Theory of Employment", articulated a similar view: "By uncertain knowledge, I do not mean merely to distinguish what is known for certain from what is only probable. The sense in which I am using the term is that in which the prospect of a European war is uncertain . . . There is no scientific basis to form any calculable probability whatever. We simply do not know. There is a world of difference between low-probability events drawn from the tail of a known statistical distribution and extreme events that happen but had not previously been imagined."


Sabine Hossenfelder did a good video on this topic recently https://www.youtube.com/watch?v=B_M64BSzcRY

Clearly, the problem with this book is that the long term isn’t long enough. The end game is the sun goes nova, the earth burns, and is followed by the heat death of the universe, completely nullifying all of their math.

So we shouldn't try to improve our lives as much as we can, since we will all die anyway?

I am sure there's a Greek word for this fallacy, but Finnucane is being facetious. They're just arguing that this longtermism is not long-term enough, and that if we really, really, really cared about the long term, we would argue with the collapse of the universe in mind.

It's a critique that reasoning which stops at arbitrary points coincidentally aligned with someone's interests is just a tool to enforce their point of view, not a virtue of the argument itself.


We should, and we should avoid causing suffering where we can. We should even dream about a better future. But we can't get there unless we accept that reality is what is in front of our faces right now.

Congratulations, you've rediscovered nihilism.

Novas happen in binary systems. The Sun is not large enough to undergo a supernova. It is an isolated star that will simply fade away as a white dwarf, barring any unforeseeable collisions with other stars.

https://astronomy.com/magazine/ask-astro/2020/09/what-will-h...

I think people commonly mistake exploding novas for expanding red giants. It's understandable.

In a few billion years when the sun goes red giant the habitable zone will move. And the earth will be engulfed. Those things the grandparent said were correct.

Yes, it will eventually go white dwarf. Those have habitable zones, too.

https://www.the-spaceship.com/white-dwarf-habitable-zone/

Assuming we don't escape the solar system, it will still take some serious engineering to follow the habitable zone.

There's that pesky climate crisis and the current anthropogenic extinction event we need to live through to get to those problems, though.


There is a red giant phase that may evaporate Earth https://www.scientificamerican.com/article/the-sun-will-even...

As long as people still need marketing or sales tricks to survive, humanity just keeps going down to hell itself. There is no way to recover.

Here's a thread from MacAskill addressing some criticisms of the book, including those raised in this article:

https://twitter.com/willmacaskill/status/1583098954601750528


It's hard to take this seriously. It's more of a criticism of utilitarianism (and pretty much all moral reasoning) than longtermism.

Utility is very much a thing, "commensuration" is very much a necessity. You/your country/the world have finite resources that are insufficient for the task at hand. You must choose some option (not choosing is a choice) and what option you choose reveals your moral inclination. At big enough scales, this sometimes just means saving some lives at the expense of others. That's just life.

Also, the implication that math is just some way to "argue" pet theories and not one of the best tools we have to understand and manipulate the world... is just insulting. (There are surely people who use it this way, but there is no intellectual tool that can't be used this way.)


Longtermism/William MacAskill is what happens when free market capitalism meets moral philosophy. If you assume the free market is efficient, then you must also admit that there should be a philosophical line of thought that would champion certain billionaires (namely, those sponsoring it) as the paragons of morality of our times. No other line of moral argument fits this description better than Longtermism.

I've had some long conversations with Effective Altruists (both self-proclaimed and not), and my takeaway is that Longtermism is a cancer EAs need to grapple with and defeat, lest their entire philosophy of doing good effectively be hijacked by rich thugs who want to feel good about themselves.

Edit: and for those who are not sufficiently convinced that "EA" had enough information to be suspicious: https://www.semafor.com/article/11/18/2022/effective-altruis...


> when free market capitalism meets moral philosophy

I wish people would stop using the phrase "free market capitalism" because capitalism as we have it today has proven itself to be the enemy of free markets. People need to know that we came to a fork in the road and took the wrong path.

Also, the "meets moral philosophy" part reminds me a bit of the equivalent phenomenon in the religious sphere - prosperity gospel - has also become a thin rationalization for actions and beliefs actually quite contrary to older more-authentic teachings. There's a lot of overlap between the two, and probably a lot more parallels to be drawn.


Longtermism reminds me of the Soviet version of communism, where the problems of today were justified in the name of "the bright future".


