
There are so many people that could come into existence in the future if humanity survives this critical period of time - we might live for billions of years, our descendants might colonize billions of solar systems, and there could be billions and billions of times more people than exist currently. Therefore, even a very small reduction in the probability of realizing this enormous good will tend to outweigh even immense benefits like eliminating poverty or curing malaria, which would be tremendous under ordinary standards.

It's really an interesting moral discussion - it's like an extension of the classical "sacrifice one person to save five" dilemma, but I really can't agree with his flippant assertion that our moral obligation to the unborn future billions eclipses our obligation to help our contemporaries. And he's not even talking about preserving the planet for the future generations, he's merely concerned with them being born in the first place.

Also, his argument has the same shortcoming as the "sacrifice" dilemma: We cannot know for sure that a certain action will have a certain outcome - or that any action taken was actually the cause of the outcome.



"unborn future billions" sounds so... abstract.

Think of Petrov: http://lesswrong.com/lw/8f0/existential_risk/

We effectively credit him with saving the world. If it weren't for him, most of us might not be around. I consider him a hero. From a moral perspective I would rather be him than Bill Gates, who has greatly reduced suffering and disease in the modern world. Just because of the impact on the future.


http://rt.com/news/soviet-nuclear-petrov-stanislav-221 (Retired Soviet officer rewarded for averting nuclear war) (2012-FEB-25)


> Also, his argument has the same shortcoming as the "sacrifice" dilemma: We cannot know for sure that a certain action will have a certain outcome - or that any action taken was actually the cause of the outcome.

I don't see how this is a shortcoming.

If I am leading a company, I cannot know for sure that a certain action will have a certain outcome – or that any action taken will actually be the cause of a given outcome. That's not going to prevent me from doing my best to build a good product and make a profit.

In the same way, the uncertainty inherent in reducing existential risk shouldn't prevent us from doing our best to reduce it.


That's the problem of reasoning on timescales of civilizations, yeah? You can't really pay attention to transient effects and still make meaningful policies.

For example, consider how many people died mining coal in the past couple of centuries, or how many natives died when colonizing forces spread diseases to them or outright committed genocide.

Sure, it's awful, but would the world really be a better place if America were limited to some traders on the East coast? Or if England had never really gotten the Industrial Revolution thing figured out?

We can't really reason in human terms when talking about the species writ large.


We would be able to reason about it if we had the other historical outcome to compare against. But since we don't, we go with "winners don't get judged".

So, as usual, the whole debate will be settled by brute force, and whoever wins will get the right to talk the way you do.


> how many natives died when colonizing forces spread diseases to them or outright committed genocide.

Are there really people who think the Native American genocide was somehow worth it? That's despicable. Tell me, because I'm not sure: do you classify the genocide as a "transient effect" or as "meaningful policy"?

> [...] would the world really be a better place if America were limited to some traders on the East coast?

Iraq war, anyone? That's how the USA makes the world a better place.


Just like there's a time value of money, there should be a time value of people. The unborn future billions should be discounted to the present value so that we can accurately compare them. What's the discount rate?


You're pushing the idea of the discount rate beyond its underlying assumptions. Short-term temporal discounting is premised on two concepts: 1) A pseudo-psychological principle that people value things more in the present than in the future; 2) An economic principle rooted in the assumption that there are alternative investments available for any given expenditure.

(1) is problematic because: a) it's not true at the scales we're talking about; and b) it's intrinsically tied to how someone existing in the present values future benefits. It takes a very narrow view of "utility" to argue that only the value to those in the present is relevant.

(2) is problematic because the assumption isn't necessarily true. Imagine a world where there are no investments that yield a consistent return such that an investment at time T0 yields exponentially increasing wealth at time T0+T. In such a world the second principle provides no reason to engage in temporal discounting. At the scales we're talking about, this second principle starts to break down. You can't assume that an investment in the present will continue to yield returns indefinitely into the future if your actions result in there being no future humans.

I think a more apropos basis for a guiding principle in this area is the observation that, in absolute terms, the productivity of society grows exponentially. A single person 1,000 years from now will produce much more, in absolute terms, than a single person today. If we define our metric more objectively, as something like the sum of all production over the existence of humanity, then the proper course of action is the one that preserves as many future lives as possible, even at the expense of present lives, because most production will happen in the future.
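A toy illustration of that last point, with a made-up 2% annual productivity growth rate assumed purely for the sketch:

    # Toy model: per-person output after t years of compounding growth, and
    # the share of a millennium's total output produced after year 100.
    # The 2% growth rate is an illustrative assumption, not a measured figure.
    growth = 0.02

    def output(t):
        return (1 + growth) ** t

    print(f"Output per person after 100 years:  ~{output(100):.0f}x today's")
    print(f"Output per person after 1000 years: ~{output(1000):.2e}x today's")

    first_century = sum(output(t) for t in range(100))
    millennium = sum(output(t) for t in range(1000))
    print(f"Share of the millennium's output produced after year 100: {1 - first_century / millennium:.6%}")
    # ~7x, ~3.98e+08x, and ~99.999998% -- almost all of the production is in the future.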


It has to be very high, because our uncertainty about the future is absolutely enormous. I'd put the discount rate even higher than the monetary discount rate, which with fairly standard numbers is already effectively 0 in 20 years.


It takes a ridiculous discount rate to become effectively 0 in 20 years. At 5%/year over 20 years, (1-.05)^20 = 35.8%; at 10%/year over 20 years, (1-.1)^20 = 12.2%.

Still, 100 years is often considered a reasonable limit on such things, as (1-.05)^100 = 0.6%.
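To make the arithmetic concrete, here is a minimal sketch in Python using the same rates as above (the figures above use the (1-r)^t decay form; the standard present-value convention 1/(1+r)^t gives slightly larger factors; the function names are just illustrative):

    # Discount factors after t years at an annual rate r, two conventions:
    # the (1 - r)**t decay used above and the standard present-value
    # factor 1 / (1 + r)**t.
    def decay_factor(r, t):
        return (1 - r) ** t

    def pv_factor(r, t):
        return 1 / (1 + r) ** t

    for r, t in [(0.05, 20), (0.10, 20), (0.05, 100)]:
        print(f"r={r:.0%}, t={t:>3}: decay={decay_factor(r, t):.1%}, pv={pv_factor(r, t):.1%}")

    # r=5%, t= 20: decay=35.8%, pv=37.7%
    # r=10%, t= 20: decay=12.2%, pv=14.9%
    # r=5%, t=100: decay=0.6%, pv=0.8%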


It's an interesting idea. Here are some counterarguments:

http://lesswrong.com/lw/n2/against_discount_rates/


But note that most commenters there accept discounting based on risk, uncertainty, opportunity cost, etc. - which are exactly the grounds on which most ordinary people claim to ignore the future, even on this very Hacker News page, where you would think people would understand and accept things like expected value and probabilistic reasoning.


A lot of those bases break down completely when you're talking about these time scales. Consider opportunity cost. It is not sensible to spend $50 to save $100 20 years in the future, because even at a fairly low rate of return the opportunity cost of that $50 now is over $100 in 20 years. Note that such thinking is unavoidably rooted in the idea that there will be a consistent rate of return over the period in question. That assumption isn't necessarily true if you're talking about the possibility of present decisions increasing the risk of wiping out humanity in the future.
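To put rough numbers on that opportunity-cost point (a toy sketch; the 4% return is an assumption picked as a "fairly low" rate, not a figure from the article):

    # Toy check: $50 invested today vs. $100 received 20 years from now,
    # assuming a steady 4% annual return (an illustrative assumption).
    rate, years = 0.04, 20
    future_value = 50 * (1 + rate) ** years
    print(f"$50 at {rate:.0%} for {years} years grows to ${future_value:.2f}")
    # -> $50 at 4% for 20 years grows to $109.56
    # ...but the comparison presumes the return stream survives, i.e. that
    # there is still an economy (and a humanity) around to collect it.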

23% of all goods and services made since 1AD were made between 2001 and 2010: http://www.economist.com/node/21522912

If a similar pattern holds true, then the opportunity cost, in absolute terms, of a decision that results in fewer future humans is absolutely staggering.


I feel these moral rules go out the window the moment you're faced with a life and death scenario. When we're talking about this theoretically, it's easy to say a billion people in the future are more important, but the moment you realize you could die, you say screw those people, I'm gonna save my life first.


> And he's not even talking about preserving the planet for the future generations, he's merely concerned with them being born in the first place.

It's odd to give that much moral weight to the potential future existence of unborn people centuries from now; how does that not favor simply having as many babies as possible?


Not to mention the fact that any person alive today is potentially the root of a vast family tree of countless millions of those future unborn, each of whom is potentially the root of another vast tree, and so on... Ill effects suffered by these people may well be passed on through the generations (sins of the father, so to speak). Or worse, if they do not make it to procreation, entire continents of future lives will be snuffed out. The moral weight of this absurd postulation, if anything, swings toward the past rather than the future.


That moral weight given to hypothetical future people is something I have just now put a name to (thanks!), and the more I think about it, the more places I see it, and the creepier it gets.

But on a purely biological note, natural selection favors just that: those who make more babies than anyone else, who then go on to make more babies than any of their contemporaries, etc etc etc.


Natural selection is a shitty basis for ethics.


History has proved that any ethic is ineffective by itself and needs to be backed by brute force - force that is unethical towards others.


Pretty much. That's why I find this valuation system, which so grossly over-emphasizes future people at the expense of current ones, so creeeeeepy.



