Hacker News

I've never understood this problem. To me, it seems that since you've defined a minimum "worth living" amount of happiness and unbounded population, it makes complete sense that the answer would be that it is better to have lots of people whose life is worth living rather than fewer. Is it not tautological?

Like it seems like you have to take "worth living" seriously, since that is the element that is doing all the work. If it's worth living, you've factored in everything that matters already.




If you pack the whole problem into a definition of "worth living", then you're right. But the premise is that there is a range from extreme misery through neutral through extremely happy. The repugnant conclusion is that it is better to have many people in a state that is barely above neutral.


I'm not the one packing it, the setup of the problem does it. "Barely above neutral" means you've picked an acceptable state. And then we are supposed to consider that acceptable state "repugnant"?


There's a comparison. If the scale goes from -100 to +100, the conclusion is that if we have 8 billion people in the world with average happiness of +10, it is better to immiserate them in order to have 80 billion with average happiness +1.01.

It's not that the acceptable state of 1.01 is repugnant, it's that the conclusion seems counterintuitive and ethically problematic to many people, as it suggests that we should prefer creating a massive population of people who are barely happy over a smaller population of people who are very happy.
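The arithmetic driving the example above can be sketched in a few lines, assuming the simple total-utilitarian sum (population times average happiness) that the thought experiment uses; the worlds and numbers here are just the ones from the comment, not anything canonical:

```python
# Total-utilitarian comparison from the example above.
# World A: 8 billion people at average happiness +10
# World B: 80 billion people at average happiness +1.01
total_a = 8_000_000_000 * 10      # 80,000,000,000
total_b = 80_000_000_000 * 1.01   # 80,800,000,000

print(total_b > total_a)  # True: summing utility prefers the barely-happy world B
```

So on a pure sum, world B edges out world A even though every individual in it is far worse off, which is exactly the comparison people find repugnant.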


I guess I just don't understand how, if your axioms are 1) X is an acceptable level of happiness and 2) more people are better than fewer, it is in any way surprising or problematic to end up with infinite people at happiness X.

Perhaps people don't see that (2) is a part of the premise?


It's more that, after seeing the result of starting with those premises, they don't like the two premises anymore. It would be like me really liking the experience of eating potato chips all day right up until the point that I discovered they had a lot of adverse health effects. I might no longer like eating them as much.


Because 1 is not one of the axioms. The axioms are 1) There is a range of experience between worst possible misery and best possible happiness and 2) more people who are just barely happy is better than fewer people who are much happier.

I don't understand why you're insisting on a binary distinction of acceptable vs. not acceptable. With that assumption there is no repugnant conclusion.


1 is one of the axioms because a binary cutoff is built into the premise.


I may have taken you a little too literally when you wrote that you didn't understand the problem. Perhaps what you're saying is that the conclusion is not repugnant to you and that the conclusion is neither counterintuitive nor ethically problematic.

Consequently you believe that it is better for a large number of people to exist in a state barely better than misery than for a smaller number of people to experience a greater degree of happiness.

Fair enough.


I suppose that is a fair characterization. I would say that I still think it's tautological. Obviously it's a synthetic situation that involves infinity, so real-world applications are difficult to evaluate.

But I just don't get why people see it as an ethical dilemma – the conclusion is a perfectly sensible outcome of the setup. The conclusion is just a restatement of the premise – a maximization of population over a maximization of happiness. That's why it seems tautological to me: the math of it is perfunctory and reveals nothing. If you cared about maximizing happiness more than population, you would have to modify the setup. The trade-off is built into the premise.
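The point that the trade-off lives in the objective function can be made concrete: swap the thing being maximized and the ranking flips. This is a minimal sketch using the numbers from upthread; the choice of "total" vs "average" as the two objectives is my own illustration, not something fixed by the original thought experiment:

```python
# Populations and average happiness from the example upthread.
pop_a, avg_a = 8_000_000_000, 10.0    # world A: fewer, happier people
pop_b, avg_b = 80_000_000_000, 1.01   # world B: many barely-happy people

def total_utility(pop, avg):
    return pop * avg

print(total_utility(pop_b, avg_b) > total_utility(pop_a, avg_a))  # True: total prefers B
print(avg_a > avg_b)                                              # True: average prefers A
```

Under the total, world B wins; under the average, world A wins. The "repugnant" result is baked into picking the first objective rather than the second.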



