What should we agree on about the Repugnant Conclusion? (cambridge.org)
23 points by yboris on April 14, 2021 | 95 comments



See also, Wikipedia article on the Repugnant Conclusion:

https://en.wikipedia.org/wiki/Mere_addition_paradox

Edit: Personally, I don't find it intuitive at all that adding more people to the world inherently makes the world better. So I guess my view would be closer to averagism. The Wikipedia article 'Population ethics' states that averagism "has never been widely embraced by philosophers, because it leads to counterintuitive implications said to be 'at least as serious' as the repugnant conclusion". But… which implications? The Wikipedia article 'Average and total utilitarianism' gives an example: "More counter intuitively still, average utilitarianism evaluates the existence of a single person with 100 hedons more favorably than an outcome in which a million people have an average utility of 99 hedons." But far from being counterintuitive, this strikes me as 'obviously true'.


>The Wikipedia article 'Average and total utilitarianism' gives an example: "More counter intuitively still, average utilitarianism evaluates the existence of a single person with 100 hedons more favorably than an outcome in which a million people have an average utility of 99 hedons." But far from being counterintuitive, this strikes me as 'obviously true'.

Sounds to me like abstract-to-the-point-of-absurdity BS.

E.g., that an isolated junkie with an endless supply of heroin, or a person hooked up to some "happiness" drip, is better than a community/civilization of a million almost-equally-contented people.

Like the "repugnant question" itself, this is a theoritical absurdity, too abstract to have any real value as an ethical experiment.


> Like the "repugnant question" itself, this is a theoritical absurdity, too abstract to have any real value as an ethical experiment.

Utilitarian ethics require that people inhabit thought experiments like this. It's dissociation. One ignores the particulars of the event in question, forfeiting context and substituting detached "objectivity".

I think it's a mistake to try to purge contingency from questions of morality. Moral questions turn on context, no matter how effective a social technology the idea that they don't may be.

This is not to say utilitarianism is bankrupt, but should somebody demand one's stance on some variant of the trolley problem, "I refuse to negotiate with terrorists" is not a bad answer, imo


It's interesting that you say that when we can clearly see trolley-problem-style questions affecting real engineering decisions. For example, take self-driving cars. We can take it as a given that at some point such a car will be in a situation where it will likely collide with a bystander or else risk the lives of its passengers. At that point, which lives should it prioritize?

Or alternatively, consider the question raised by an international study: if such a car were in a situation where it would either collide with a child or with an elderly person, which outcome would be preferable? Interestingly, different cultures give markedly different answers to that question.

Every moral event has particulars and context, but it's impossible to take in that context at the stage of designing systems, so we need a way of analyzing and reasoning about them in the abstract. Do you disagree with this?


You've posed the trolley problem to someone explicitly refusing to answer it!

I'm joking :') thanks for the thoughtful comment. I agree that we see utilitarian thought experiments influencing engineering decisions. But as you imply, there is no perfect answer to the trolley problem. Wander far beyond difficult-to-dispute trivia like "it's less bad for fewer people to die" and it rapidly becomes an exercise in justification rather than critique.

Also, as you state, it's impossible to predict every possibility in an engineered system's deployment environment. We cannot preclude unknown unknowns. Imo this should motivate caution. Why should we develop technology so quickly and haphazardly if we don't understand its consequences? Uncritical utilitarianism can too easily collapse the is-ought distinction. "Hypothetically, what should we do if X?" tends to snowball, like a self-fulfilling prophecy. Technological forms in the present haunt the future insofar as they render their (always-contingent) existence permanent in the human mind. We plan cities around the automobile because it carries techno-cultural inertia, not because it best promotes human health and well-being.

So I don't dispute your argument, taken out of context. Engineered systems require utility functions: this is inescapable, I agree. Groups of humans must come to some consensus on how to construct these, and different groups will reach different conclusions.

I question the premise, and the long chain of assumptions that lead to it.


I agree fully on the tendency for toy philosophical problems to devolve into bikeshedding and useless arguments over irrelevant particulars. I also agree that ignorance of unknown harms and second-order effects deeply harms the potential good that engineered systems can do.

I worry, however, that selectively applied caution leads to equally unfavorable results in the opposite direction; to wit, it encourages sticking to the status quo. It is often complex to the point of impossibility to predict the wider effects of technological or social changes. The requirement that we understand those consequences would, if applied consistently, basically halt any technological progress. As distasteful as it is to admit it, we learn much more quickly by failure than success, and often the quickest, cheapest, and most thorough way to understand the consequences of a technology is to implement it.

In other words, while I agree that implementing new technology without sufficient caution is often disastrous, I make note of that camel's nose "sufficient." The cost of excessive caution is that certain ideas (cars, fossil fuel energy, destructive economic models, etc) are allowed to continue far past the point where we can implement better alternatives. The cost of trying to do better is some measure of failure; there will always be some cost, some of it measured in human lives, in trying to improve the state of the world.


> As distasteful as it is to admit it, we learn much more quickly by failure than success, and often the quickest, cheapest, and most thorough way to understand the consequences of a technology is to implement it.

I totally agree. This is a fundamental paradox. As you state it's impossible to fully understand the consequences of any technological development. We can't learn without taking risks.

> The cost of excessive caution is that certain ideas (cars, fossil fuel energy, destructive economic models, etc) are allowed to continue far past the point where we can implement better alternatives.

I'm not sure about your use of "caution" here. I don't think fossil fuel extraction persists because we are too cautious (i.e., too risk-averse) to pursue alternatives. This would imply that biogeochemical realities are absent from our risk-benefit analysis, leading us to conclude burning carbon is less risky than other options. This may have been the case 50 years ago but "climate consciousness" is well within the mainstream now.

I would argue instead that humans are uniquely skilled at ignoring unpleasant potentialities. This is socially useful: we coordinate our activities by genuflecting to fictions. This was a better strategy when we were incapable of terraforming the planet. Perhaps we can salvage it, but insulating ourselves from consequences as we seem so keen to do is a great way to walk off a cliff before the loss function kicks in. If you feed a system too-thoroughly sanitized data it will not form an accurate world model.

> trying to improve the state of the world.

I would submit that most existentially threatening technologies were born out of a desire to do just this. It seems the arc of development finds its way to entrenchment, if not the trenches, with little to no regard for inventors' intentions. As you say, perverse economic incentives don't help.

I don't mean to condemn tool-making wholesale. But if we're going to build tools, imo they must be easy to dismantle and supersede. This principle is not so uncommon in software but the opposite tends to obtain more generally. An atmosphere full of carbon is not an easy thing to deconstruct, nor are stockpiles of nuclear weapons, or soils exhausted by years of improper care in pursuit of yield-maximization, or car-centric urban megalopoli.


>I'm not sure about your use of "caution" here.

I was alluding to caution in reference to fossil fuels specifically because nuclear power causes fewer deaths per unit of energy generated than any fossil fuel (and, interestingly, than hydroelectric, if only because of the catastrophic failure of the Banqiao Dam), so the extra scrutiny it receives in comparison is unjustified in my opinion. Over 100 people are killed in car crashes every day in the US alone, which helps put the potential value of self-driving cars in better perspective (although, of course, the switchover to self-driving cars opens us up to the possibility of malicious actors suborning the entire fleet; a worthwhile hypothetical to consider, I think you'll agree, given the... lackadaisical information security standards both IoT and vehicle manufacturers adhere to). The example of economic models is more speculative, but I could well imagine any potential new system being derided as unproven or too risky even as our current one encourages and rewards looting the planet and poisoning the communal well.

To your point on ignoring unpleasant potentialities, I couldn't agree more. The optimists have always led the way, and as long as there is more commons to translate into immediate success, it shall be so. We are living in a time of whalefall, with a surplus of resources sufficient to support a maximally extractive lifestyle for perhaps one generation more, before it is gone forever. I have grown increasingly disillusioned with our potential to avert the coming global ecological and subsequent economic catastrophe; it seems that our Neros would rather fiddle while Rome burns.

>But if we're going to build tools, imo they must be easy to dismantle and supersede.

An interesting point, and one that I never considered before. My assumption is generally that for physical goods, robustness and long lifetimes are strongly favored over upgradability. Or rather, that "ease of transition away from" is basically never considered in the design or proliferation of new tools. That's a difficult point; is a meme that provides a path to a better meme more adaptive than one that locks the target in? Can it be made so?


> I was alluding to caution in reference to fossil fuels specifically because nuclear power

I follow you more clearly now; yes, I agree. Maybe nuclear power gets undue scrutiny because its failures are rapid and local. I think I prefer this to an object so diffuse in space and time that no one can point to it ("how can the earth be warming if it's snowing"). We are too likely to disbelieve or ignore anything that doesn't slap us in the face. A caveat: any waste disposal proposal which assumes continuity of governance or ecological predictability over any significant timescale is DOA imo. Even deep final repositories assume we can predict geologic activity half a million or more years into the future. I am not a geologist, but this seems a gamble.

Yes, and the situation re: self-driving cars seems similar. We entertain death behind the wheel daily but hand-wring over the particulars of hypothetical autonomous vehicle accidents. It seems there is a preference (at least in the US) for injury to come with a human face attached rather than at the banal whim of an algorithm, but fleetwide hijacking is a different story. I wonder if offline AVs are feasible. Here too the present may preclude a more desirable future: e.g., we may feel compelled to connect navigation computers to the internet to avoid roadway congestion, but in an alternate reality perhaps we did not encircle ourselves with asphalt and invite a chariot powered by the fumes of the long dead into every home. As they say, "you're not in traffic..."

> We are living in a time of whalefall, with a surplus of resources sufficient to support a maximally extractive lifestyle for perhaps one generation more, before it is gone forever. I have grown increasingly disillusioned with our potential to avert the coming global ecological and subsequent economic catastrophe; it seems that our Neros would rather fiddle while Rome burns.

I feel much the same way. An economist turned tree farmer I once spoke to said: "We have failed to develop cultural tools of sufficient sophistication to manage the tsunami that is our technological inventiveness."

> My assumption is generally that for physical goods, robustness and long lifetimes are strongly favored over upgradability. Or rather, that "ease of transition away from" is basically never considered in the design or proliferation of new tools.

Good point; I hadn't considered this in depth. I agree re: physical tools. The trend toward disposability or frequent upgrades is the wrong direction. Industrial processes and "cultural tools" (i.e., social technologies, or memes as you say), on the other hand, have more destructive potential than a manual implement, and longer lifespans.

> is a meme that provides a path to a better meme more adaptive than one that locks the target in? Can it be made so?

This is a fascinating question and I don't know the answer. I'm struck by the fact that one can substitute "gene" for "meme" and with a fair bit of confidence say yes. If life had "locked the target in" prior to the oxygenation of the atmosphere presumably we would not exist today.

An open question in plant biology, incidentally, is whether crop trait plasticity is itself a meta-trait we can pursue[0], instead of e.g. breeding or engineering germplasm for specific regions or conditions (although for some forms of plasticity the developmental basis is not well understood). I think of it a bit like the difference between a painstakingly programmed expert system and a neural net. We will likely need analogous reserves of sociocultural plasticity to survive on a planet growing more temperamental by the year, although the engines of financial and industrial metastasis do not appear so amenable to this.

[https://www.frontiersin.org/articles/10.3389/fpls.2020.00546...]


>For example, take self-driving cars. We can take it as a given that at some point such a car will be in a situation where it will likely collide with a bystander or else risk the lives of its passengers. At that point, which lives should it prioritize?

Those which the community deems acceptable, based on its historical ideas, tendencies, morals, etc. (and adjusted to taste later on, e.g. when the victims pile up, or don't).

And that will solidify into a set of laws.

There won't be a pick based on some "perfect" abstract theoretical answer to this or that thought experiment.


I know it's not your intent to present one, but I love moral puzzles and you have just generated a delicious one.

The core of your comment is that the "decisions" of the car should be based on the community to which it is sold; i.e. a car sold in, say, France should prioritize saving the lives of women, while one sold in Saudi Arabia should save the lives of men (for information on preference data, see [0]). Further, that there should be laws mandating these cultural standards such that (I assume) it would be illegal to sell or manufacture cars not in accordance with said standards.

Two scenarios immediately spring to mind. What if I do not agree with the standards, either communal or legal? For example, Burmese standards strongly prefer inaction of the car; what if I consider that wrong, and that there would likely be fewer deaths if the car was allowed more autonomy in its decision-making? Would I be morally culpable if I was the car's programmer and my boss asked me to make the car less responsive (i.e. in my opinion more likely to kill people) so that it could be sold in Burma? Other countries are more indifferent to the car trying to save more lives; would I be a murderer for designing a car for those markets?

The second scenario is the more obvious one; your argument hinges on community-derived morals being superior to abstract, thought-experiment morals. But that's not a given. There are communities that don't consider certain people as people. There are communities that tolerate infanticide and slavery. There are communities that, in short, have pretty objectively abhorrent moral philosophies. Sometimes those communities have legislative power in a nation. If we concede that the ethics of autonomous systems should be derived from the specific morality of particular groups in power, we will likely end up with some extremely harmful systems. This obviously goes beyond the initial question of self-driving cars into matters of surveillance, social control, AI-driven weaponry, and so on. I just find it important to ask whether you believe that, as an engineer, your own moral beliefs should be completely irrelevant when designing a product - that the manufacturers of Agent Orange or Zyklon B carry no culpability for the use of those products.

[0]: http://moralmachineresults.scalablecoop.org/


>Utilitarian ethics require that people inhabit thought experiments like this.

Sure, utilitarian ethics might require this.

But moving a level up, utilitarian ethics themselves are not required. Especially in this form.

Historical ethics have always been utilitarian - in the sense of serving the preservation/goals of the community (or leaders) that established them.

But precisely for that reason, they were developed not based on abstract utility (based on ill-defined terms like "good", "happiness", etc), but on concrete goals, beliefs, etc, and under the influence of concrete desires and historical feedback and struggles ("emergent/evolutionary" would be a better term).

The modern, eponymous "utilitarian ethics" is an ahistorical, naive version of ethics, where everything is reduced to abstract utility discussions and mind games.


I agree. All that follows is wild speculation, but...

Maybe the utilitarianisms of history maintained concrete referents in part because the notion of "utility" worked somewhere in basement brain. When we pulled it shrieking into the light of consciousness it died and we buried the body under a heap of cope born of the hope that ethics could be reduced to a formalism.

(Of course clever and unscrupulous people quickly figured out how to weaponize this as well)


A junkie doesn't necessarily have high hedons, depending on how you define hedons (they're somewhat ambiguous).


On the other hand, I have always found the simplistic utilitarianism you described to be just too trivialized to use for any worthwhile conclusions. In particular, your last example assumes that "utility" adds up linearly. I claim that "utility is a linear quantity" should sound preposterous to anyone who has worked in physics, biology, engineering, or some other form of study of dynamical systems. This is the "spherical cow in a vacuum on a frictionless plane" of philosophy.

Edit: My message, devoid of tone, sounds glib and antagonistic. My bad! I actually find your line of reasoning important and interesting, even if I consider it "obviously" flawed.


I completely agree; it's very oversimplified. Utility is not a scalar, and to the extent averagism is correct, the "average" is not a literal arithmetic mean.

As just one counterexample, I think there is some moral or at least aesthetic value in a breadth of life experiences existing. If the aforementioned one person or one million people were the only people in the universe, the million people would have a much wider breadth of experience, leading to a more interesting society. On the other hand, if we assume a total population of billions, I'd say the significance of this factor dwindles to near nil.

Also, I believe utility maximization is only one of many moral precepts in the first place.


Yes, not only is naive utility-maxing preposterous, it can also launder questionable ethical decisions: "for the greater good", etc. Unexamined consequentialism breeds infinite self-justification, imo

As far as I can tell claims to objectivity in ethics tend to translate to "I'm happy to theorize situation X so long as X never happens to me." In the last century this attitude begot a professional class of social engineers working in tandem with state security apparati to optimize for... what exactly?


If you could painlessly kill half the world's population while also rewriting any memories and traces of the past so that they never existed, would you consider this a morally neutral action?


I never understood that Thanos reasoning, clearly he underestimates exponential growth. Killing 50% has you back where you started in 1-2 generations. How useless. Second, even though I don't think limiting population growth is a solution (science is), Thanos should have calculated the size of the minimal gene pool needed for the survival of a population and gone for that: a much lower number that would have given him more bang for his (stupid) buck.

In any case, I resent the idea that we should stop or limit procreation. All you others can do that, but my offspring will live among the stars and I will maximize their chances of getting there. And don't even get me started on selective procreation; it has been tried.

Edit: perhaps this response is more towards the general sentiment in this thread, not the OP.


> In any case, I resent the idea that we should stop or limit procreation. All you others can do that, but my offspring will live among the stars and I will maximize their chances of getting there. And don't even get me started on selective procreation; it has been tried.

Personally, I'm not on the "the world needs fewer people" bandwagon either. While I'm not an expert, I've heard that the world population is expected to level off around 2100 at around 10.9 billion, not that much higher than the current 7.8 billion. So we will never get to some Malthusian point where there isn't enough food to go around. Meanwhile, at a small scale, adding one person does marginally increase the cost of feeding everyone and marginally increase pollution, but that person will also contribute to society and the economy.

When humanity someday goes to space, perhaps the population will start growing again, but if so, it will be precisely because there will be more resources for them.

In other words, the effect on society of having kids seems to be pretty neutral in practice. Given that, I have no reason to oppose your desire to spread your genes, assuming that's what you meant. Personally, I don't care whether the ones living among the stars have my genes or somebody else's, but that's just me.

I do think it would be nice to maximize the chance of humanity as a whole someday living among the stars, but increasing the population would not necessarily help with that.


> I never understood that Thanos reasoning, clearly he underestimates exponential growth.

This is a common thread in discussions of overpopulation. One particularly egregious example I've seen is where people object to immortality[1] based on overpopulation concerns. As in this case, killing literally everyone (after they reproduce) only gives you a couple of generations of breathing room. Exponential growth is really really unintuitive.

[1] I didn't say it was a particularly realistic discussion


> never understood that Thanos reasoning, clearly he underestimates exponential growth. Killing 50% has you back where you started in 1-2 generations

Thanos’s reasoning makes even less sense when you consider he removed 50% of all life; removing half the fish and half the birds at the same time as removing half the people leaves you no better off even immediately post-snap


Also, if Thanos had paid more attention in graph theory class, he might have been able to do a better job of minimizing grief than just randomly killing half the vertices. He probably could have found subgraphs that were not very connected to other parts and eliminated them completely.


To accomplish his stated goal in the MCU (as opposed to the goal of impressing his girlfriend), he should have gone for maximum grief, along with preserving those islands of isolation since those islands are more likely to be the cultures that don’t engage in the fantasy of unlimited exponential growth.


DeathStar considered more moral than Thanos


Well in the comics he was supposedly just trying to impress Death, a much more reasonable motivation.


Just remember that rabbits kill themselves through starvation long before they invent boats to get to the next island full of resources to consume.

Controlling population growth is a far preferable option to mass murder either by actively killing people Thanos-style, or passively killing people Malthus-denier style.


No, but I don't think it carries to not letting them be born. My stance on killing being wrong is that it destroys the freedom of an existing person to decide whether they want to continue to experience life.


I should say my view is closer to averagism when questions of utility start getting involved. But I believe utility is just one moral precept among others. Murder is disallowed no matter how much it might increase others' utility. And murder is murder regardless of pain, memories, or traces. On the other hand, choosing to have fewer or no children is easily justified. So is pursuing outcomes, such as education, that could cause people to choose to have fewer children on average.


No because this take assumes the crappy neural net in your head that creates grossly inaccurate models from which you draw conclusions is giving you something like a viable perspective on reality under which such a drastic action makes sense. In other words, it disregards neuropsychological humility.


If we started with the people who wanted to do this, we'd end up killing a lot fewer people overall.. :D


It's not directly saying that adding more people makes things better; it's just saying that we take world A and world B and compare them. This video gives a bit of a feel for the comparison steps that lead you to the repugnant conclusion: https://www.youtube.com/watch?v=vqBl50TREHU


Yes, I'm being a bit sloppy by saying "adding more people". (Although, to be fair, the most obvious way to influence the future population that is morally acceptable is choosing whether or not to add people to it by having children, or influencing others making that choice. Removing people by killing them is not acceptable, for reasons beyond the scope of utilitarianism.)

I diverge from the video's argument at 1:10 when she says "it's hard to see how it could be worse". To me, a world with lower average happiness is intuitively worse.

I do think there are some aspects where a higher population is better. For example, if there is a wide breadth of life experiences in the world, it makes for a more interesting society. Also, a higher population increases the chance of humanity surviving in the long term. But these effects tend to be logarithmic. A world with one billion people has some advantages over a world with only one million people (though it has some disadvantages too). But if the choice is between a world of 1 billion or 2 billion people, as posited in the video, the breadth of society, and humanity's chance of surviving, are about the same in both. That leaves the higher average happiness of World A as the most significant difference, making it preferable.


Would you prefer to live in a world where you had a 100% chance to be as happy as possible, or one where you had a 51% chance to be as happy as possible and a 49% chance to be not quite unhappy enough to commit suicide? I know where I stand, and I think that the "not worse" step presented in the video is a leap of logic taken without any justification.


> But… which implications?

On some naive interpretation of averagism, a very efficient way to raise average happiness is to just kill the unhappy. Or sterilize them.

And I agree that averagism is more intuitive, at least to me. It does lead to the conclusion that we should discourage making kids and focus on improving the happiness of a small population, but to some people, saying "have fewer kids" is some sort of genocide apologism for some reason...


Usually “have fewer kids” leads directly to “who gets to have kids”, and the world does not have a great track record on forced abortions and sterilizations affecting all subgroups of people equally.


Adding more people to the universe does give life a much greater chance to spread, and offers greater resilience to planet-destroying events.


I do believe there is some moral value in the survival of the human race, or life generally. But the extent to which that applies is very situational.

For example, as a concrete scenario that could cause the world population a few decades from now to be a few million lower than it would otherwise be, consider an education or economic development initiative in a poor country that causes its residents to want fewer children on average. This would not meaningfully hurt life's chance to survive or spread. If anything, it would slightly reduce the likelihood of various non-human species being driven to extinction due to habitat loss or pollution or whatnot. So I would consider the reduction in population due to such an initiative to be morally neutral, or positive to the (small) extent that it saved species or improved others' quality of life. Meanwhile, the education or economic development itself would of course be morally positive.


Maybe I misunderstood, but it seems to me that even if you bite the bullet and accept their assumption that happiness can be quantified, it's still nonsense.

In particular, they quantified happiness, but they conveniently seem to have forgotten to quantify unhappiness. Sure, if you add more people, the total amount of happiness grows, but it's not clear at all how much the total amount of unhappiness would grow in comparison. (And the note that each newly added life is "barely worth living" suggests each individual contributes a lot more unhappiness than happiness to the total.)

Alternatively, you could stick with a single number and use positive values for overall happiness and negative values for overall unhappiness. This still won't work:

Let's assume N is the total number of people and h(i) is the happiness of person #i in "hedons" (don't ask me...).

N is a natural number > 0 and h(i) is... some number.

Then their conjecture says, in effect, that:

lim(N -> +infinity) sum(i = 1 to N) h(i) = +infinity.

As every math student can explain to you, that's not how this works. Even if each h(i) were a real number strictly > 0, the sum wouldn't necessarily grow to infinity. If elements keep having negative values - not a chance.

The whole thing only works if you constrain each h(i) to be greater than some fixed positive number - or if you define h(i) as a count of discrete "hedons" of which most people have at least one. But that's picking a model to support your conclusion.
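
To make that concrete, here is a minimal Python sketch (the h(i) values and the partial_sum helper are made up purely for illustration):

    # Partial sums of per-person "hedons" h(i) need not diverge to +infinity.
    def partial_sum(h, n):
        """Sum h(i) for i = 1..n."""
        return sum(h(i) for i in range(1, n + 1))

    # Strictly positive but shrinking values: the total approaches 1, not infinity.
    shrinking = lambda i: 0.5 ** i
    print([round(partial_sum(shrinking, n), 3) for n in (10, 100, 1000)])
    # -> [0.999, 1.0, 1.0]

    # Mixed-sign values (some lives add more unhappiness than happiness):
    # the running total can even fall as N grows.
    mixed = lambda i: 0.01 if i % 2 else -0.02
    print([round(partial_sum(mixed, n), 2) for n in (10, 100, 1000)])
    # -> [-0.05, -0.5, -5.0]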


Academic philosophers will never agree on anything because modern academic philosophy lacks a method to do so, and the incentives are in standing your ground and doing performative arguing instead of reaching for truths, agreements, compromises or consensus.


Academic philosophers have consensus on many things. That doesn’t make them correct, merely a reflection of what a societal group thinks.

The notion of agreement or compromise is antithetical to philosophy, so I’m not sure how that would be relevant. That is the domain of politics.


Charitable interpretation is quite important in philosophical argumentation; it could be construed as a form of emotional compromise, and is certainly about avoiding unnecessary disagreement.


That’s true, but I’d categorize it more as a social factor affecting philosophers and not philosophy itself. But of course we would get lost in endless debates about this.


Does a philosophical hermit ever choose to not come back, and yet remain philosophizing? I doubt it.

I agree that the social aspects are not the apparent point of philosophy though. We can clearly benefit from exploring ideas without attempting to unify society as a whole.


Can you name one thing there is a consensus on, in academic philosophy?


There is a philosopher survey organized by David Chalmers

Results: https://philpapers.org/surveys/results.pl


Hilarious that nothing has more than 81% agreement. And here again, the same thing happens:

I did not know Newcomb's box problem, so I googled it; I find that Newcomb and the people who resolved it are actually all scientists (the problem is vague and there are two ways to fill in the gaps, leading to an apparent paradox). Yet philosophers are split in their answers.


https://old.reddit.com/r/askphilosophy/comments/es98yo/has_a...

Adding to that, you’ll also find that most philosophers are atheists / non-theists while most theologians are theists, which is why I said a consensus mostly reflects social groups and not any final say on the topic.


"Possible worlds exist" -> Damn what a breakthrough

Yes, I agree that once you remove all the theists from the domain and are ready to call out people who do theology badly disguised as philosophy, the field becomes a bit clearer.


That isn’t what I said nor is it the conclusion I’m drawing. The field becomes “clearer” because people who are not atheists then self-filter themselves out, don’t become philosophers, etc. It has nothing to do with the legitimacy or illegitimacy of theology.

Which is exactly why you should be skeptical of philosophical consensus on social issues. Philosophers are subject to the same social pressures and effects as any other human group.


Well, as if non-academic "internet philosophers" are better at agreeing or reaching any form of truth. I'm very skeptical about that.


I certainly hope that the "internet-philosophers" include some academics as well.

No I am more talking about people who do philosophy "accidentally" as part of their jobs: scientists or engineers mostly. And on many philosophical questions, yes, most specialists in a field will tend to reach more agreement than academic philosophers.

To take an example I know, I think you will find far more agreement on things like the nature of qualia or the relevance of Searle's Chinese room amongst AI researchers than you would among academic philosophers.

Nowadays, many philosophical questions benefit from a scientific background that academic philosophers just lack. Some philosophical positions are simply wrong or come from a misunderstanding solved decades ago by scientists.


Yeah, but what you are saying now is more that academic/professional non-philosophers find it easier to reach agreement on particular philosophy-adjacent questions from their field of expertise. That is totally possible, but I think it is not the same as your original statement.


So the most ethically optimal population is one in which adding new members will cause life to no longer be worth living to at least the same number of other members. The effects of each new person tend to push another person into a suicidal state.

I think the problem here is in using happiness as the thing to be optimized in ethics. A better measure may be the odds of survival. A maximally large but miserable population may have lower adaptability, and thus survival potential, than a somewhat smaller but happier one.

The most ethical population becomes the one that can increase its own evolutionary fitness the most efficiently, and that is not one so resource constrained that growth creates equivalent death.

Optimizing for survival, we optimize for the population's potential to explore its fitness space, a kind of computation. If you design a computer cluster to make this calculation, you would choose a smaller number of highly capable computers rather than a larger number of barely functional ones.

Optimizing for survival is something like optimizing the cumulative parallel computational capacity of a population rather than its cumulative happiness. The fact that it may imply greater average happiness might just be a happy side effect.

It would also tend to lead to a larger population in the long run, and thus a greater cumulative happiness in the fullness of time, considering future generations ... which would make the two measures eventually equivalent.


I think your reasoning is sound, but one problem I see with your suggestion to optimize for odds of survival is that it would reward societies that most would consider incredibly unethical.

As a simplistic example, imagine a scenario where 10,000 of the richest, most intelligent, and most physically fit wipe out the rest of the human race and establish a high-tech self-sustaining town from which they live full happy lives and conduct research on how to face existential threats to the species. Reproduction is allowed, but every cohort of children goes through a Hunger-Games-style test when they reach a certain age to keep the population stable and select for the fittest children.

Wouldn't that be a more survival-optimized society? Would it be a society you'd want to live in?


Say those 10k are elite supercomputers, Newtons and Einsteins, and the rest of us average out as five year old smart phones. If those smart phones are still effective computers, then there is negative utility to deleting them. We want both.

But we want a high enough average so that the supercomputers can still reach most of their potential, and the average is not crushed by the desperate. I think that means that six orders of magnitude more of lesser computers can still contribute massively to the calculation, compared to the 10k elite. It still implies a very large population, just not a barely functioning one.


I think where your analogy breaks down is that unlike computers, distributed computation on human brains scales badly[0], which is why my example limits the community to 10,000.

[0] https://en.wikipedia.org/wiki/Dunbar%27s_number


I don't think that applies, because the size of the network is not limited by the number of connections of a node. If our intelligence were limited by the number of dendrites per neuron, there would be little use for so many more neurons than that.


>> So the most ethically optimal population is one in which adding new members will cause life to no longer be worth living to at least the same number of other members.

I can't see how that follows. The paradox is comparing possible worlds which have differing average happiness and total populations. It doesn't follow that there exists some way to transform each world so that it matches the other. You aren't killing off the excess population in B to somehow reach A, or adding population to go the other way. The paradox exists because you are comparing the happiness of a person who doesn't exist in one possible world with his (realized) happiness in another.

There could be any actual relationship between population size and happiness but the paradox would still exist because you wouldn't be able to order the desirability of some possible worlds.


The paradox is a false paradox because of the assumption that Happiness(A+B) = H(A) + H(B), for which I see no good reason.


Indeed this is all of utilitarianism debunked in a nutshell. Even the concept of aggregate utility is incoherent, because it requires unjustifiable assumptions about intersubjective utility comparisons. It's not surprising that it leads to perverse conclusions if you dial the framework to 11, because it doesn't make much sense even in the small. The only thing aggregate utility does is put a veneer of objectivity over the subjective social preferences of the person doing the supposed aggregate utility calculation (one way you can view this is as a manifestation of Arrow's theorem).


My repugnant conclusion is that these ivory tower sophists need to get new jobs because they are outputting low quality research.


Had to look it up:

"In Derek Parfit’s original formulation the Repugnant Conclusion is stated as follows: “For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living” (Parfit 1984)."

Befine "better"? Who said that "the more the better"?

More people are better than less people (in such a notion) was never a part of historically/socially derived ethics.

And who said "all other things" could be "equal"? That's a 'perfectly spherical cow' level BS.


My main problem is with the premise that as long as it meets some minimum level of happiness, then it's a life worth living. That means that you keep getting a bigger (wider) happiness area by adding those almost miserable people.

But what if the threshold wasn't whether they'd rather commit suicide? What if there was no threshold? Suppose that you measured the difference between the max happiness anyone could possibly achieve and the actual happiness of the individual.

Subtract the "missed out happiness" from the happiness level, and do this for each individual that has ever existed through the course of the population's history.

If the average happiness is below half of whatever the max happiness is, then you add a negative value each time you add a person. This decreases the population/history net happiness.

When things are good, then adding one person will have a positive effect on the population/history net happiness. Just like in real life animal populations, when things get too dense for a space and its resources, then net happiness decreases, so the optimal value lies in some middle sweet spot.

Since we're calculating the value for the entire history of the population, killing unhappy people won't increase the population's net happiness value. Only adding more people will increase or decrease it.
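
If it helps, here is a toy Python formalization of that rule (my own sketch; MAX_HAPPINESS and the example populations are made-up values):

    # Toy formalization: each person ever born contributes
    # happiness - "missed out happiness" = 2*happiness - MAX_HAPPINESS,
    # which is positive only when someone is happier than half the maximum.
    MAX_HAPPINESS = 10.0

    def contribution(happiness):
        missed_out = MAX_HAPPINESS - happiness
        return happiness - missed_out

    def net_history_score(everyone_who_ever_lived):
        return sum(contribution(h) for h in everyone_who_ever_lived)

    sparse_and_happy = [9.0] * 1_000          # small, very happy population
    dense_and_marginal = [4.0] * 1_000_000    # huge, barely-content population

    print(net_history_score(sparse_and_happy))    # 8000.0 (each new person helps)
    print(net_history_score(dense_and_marginal))  # -2000000.0 (each new person hurts)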


The logical solution is to fund space travel. The Earth is small, while Space is essentially infinite. Having a universe, or even solar system, full of human beings seems preferable to artificially limiting their population in order to fit Earth’s constraints.

I also personally think such a “positive” attitude toward human life is probably necessary in order to avoid civilizational nihilism. This is the fundamental challenge of the modern era.


> The logical solution is to fund space travel.

Why? We have the technology and resources now to house, clothe and feed most of the world's population. The main reason we do not is that we choose "deserving" and "undeserving" populations and individuals.

As our technology improves further and the global population begins to decline this will become even more evident. Space exploration is great. But suffering is a choice we collectively make every day.


I’m not really sure how to understand your comment. I am replying to the linked article about the repugnant conclusion.

Housing, clothing and food are not the only things necessary for maximizing happiness. Even if every human being had these, there would still be differences of happiness.


And space exploration solves happiness for a significant portion of the human population? If we're talking about minimizing the repugnant conclusion we're generally talking about resources and how they're distributed.


Well, because space travel would enable more resource acquisition and more humans to then use those resources productively to be happy. If we limit ourselves to Earth, we quickly run out.

Basically the old argument about expanding the size of the pie vs. fighting over the current one.


Let's be real though. There are enough resources now. There is really no reason to believe that if the resources of another world fell under Earth's control, those resources would be used to enrich more people than would be exploited to extract them. And in a situation where these new resources would be behind the most significant barrier to entry in human history, they would almost certainly be controlled by the wealthy.

That's why I'm saying that at this point, it's not about how much, it's about us making a positive choice to orient society so that the resources we do have are used to enrich as many human lives as possible. Without that, you're just creating even more wealth for the elite.


I don't think this is true or likely at all. Space is pretty much infinite and while it may take a few hundred years (or even a thousand) for personal space travel to be possible, that's nothing on a longer timeline.


Can someone correct me if I'm wrong? This paradox comes from trying to determine which population is better by taking averages of the happiness of individuals in each population. Then they do some weirdness to take special cases into account?


Suppose happiness can be quantified, e.g., 1 billion people each have 10 happiness, for a total of 10 billion happiness.

Then 10 billion people with 1.1 happiness each yield 11 billion total happiness. They are, collectively, happier.

And 100 billion people with 0.111 happiness each are, collectively, even happier.

And 1 trillion people with 0.01111 happiness each are, collectively, even happier.

Etc, etc.

The ‘repugnant conclusion’ is that this sequence never ends. You can argue for ever higher populations based on total happiness even though individual happiness approaches zero.
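
As a minimal Python sketch of the sequence (granting, purely for the sake of argument, that happiness is a summable number; the figures just continue the pattern above):

    # Population grows tenfold at each step; the total creeps upward while
    # per-person happiness collapses toward zero.
    steps = [
        (1_000_000_000,       10.0),
        (10_000_000_000,      1.1),
        (100_000_000_000,     0.111),
        (1_000_000_000_000,   0.01111),
        (10_000_000_000_000,  0.0011111),
    ]

    for population, per_person in steps:
        total = population * per_person
        print(f"{population:>18,d} people x {per_person:<10.7f} each = {total:>14,.0f} total")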

It’s obvious nonsense.


I think the numbers are just a way of thinking about it. Consider how you would map that to actual experience.

Consider two sentient species that evolved on two different planets in two different galaxies. One species decides to prioritize minimum happiness: the result is that they have 1 billion people, each of whom live the fabled lives of gods: everyone has constant access to physical and intellectual pleasures (food, music, whatever), is completely free of disease, war, and so on. Nobody ever thinks of suicide because they're just so blissfully happy and satisfied all the time. But to achieve this, they limit their population such that it never grows past 1 billion.

Another species decides to maximize aggregate happiness. They expand to have the maximum number of their species possible while maintaining a positive "happiness balance" for each person. Each person's life is hard, full of pain and toil; it's also rewarding, full of joy and relationships; but the reward is only ever fractionally better than the sorrow. Obviously that's an average: people's lives always have ups and downs, and during the "down" times, people begin to contemplate suicide. People may spend as much as 45% of their lives feeling that life isn't really worth living. But there are far more of them -- they've expanded to every available liveable location nearby, and continue to grow. There are trillions of them already, and as they continue to expand throughout their galaxy, there will be trillions upon trillions more.

So which species has made a better choice? When concretely embodied like this, I don't think the question is a nonsense one at all.

The only part I think might be considered nonsense is the assumption that you can always have one more and still have an average/minimum positive utility. Consider a generation spaceship (i.e., a spaceship designed to take several generations to travel between stars) with enough replicators to generate food for 1000 people to be sated. Some people might consider 1200 usually-hungry-but-otherwise-happy people better than 1000 always-sated people, but 3000 always-starving people must be far worse; and 100,000 people living on the calories of 1000 people is clearly physically impossible.


So one person with 1 happiness and one person with 0 happiness combine to make two people with 0.5 happiness?

Or a billion people with one happiness each are almost exactly equal to one person with a billion happiness and a billion people with 0 happiness?

It's not so much repugnant as ridiculous.


> It's not so much repugnant as ridiculous

Exactly. Philosophers should be required to take courses in abstract algebra and topology before being allowed to discuss utilitarianism.


But this only makes any sense if you accept that, if you could somehow quantify happiness, it would be a quantity on which the operation of summation has meaning.

But happiness is not even a quantity, it is a direction. It is not a destination, it is a journey. Those who arrive at a destination where they think they will find happiness, without having a new destination to seek, inevitably become stagnant and miserable. You can't define a linear sum on such a phenomenon.


Yes. There is definitely a sense in which every Polio or Malaria free person increases collective happiness, but the ‘repugnant conclusion’ is flat Earth levels of ignorance.


It’s intended to be obvious nonsense as a means of getting people to think about metrics and whether the sum of a metric over a population is a good metric for that population.

The same thing goes for means, weighted means, and so forth. The classic joke being that when Bill Gates walks into a homeless shelter, everyone is a millionaire on average.
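
A tiny illustration of the joke, with made-up net worths, just to show how a single outlier drags the mean while the median barely moves:

    from statistics import mean, median

    net_worths = [200, 50, 0, 120, 80, 10, 0, 300, 40, 15]   # shelter residents
    print(mean(net_worths), median(net_worths))               # 81.5 and 45.0

    net_worths.append(100_000_000_000)                        # Bill Gates walks in
    print(mean(net_worths), median(net_worths))               # ~9.09 billion and 50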


I think the issue here is that they're imputing the goodness of a population from total or average happiness, or some other trick to make their happiness calculus work.


Can't there be a concept of negative happiness?


Definitely! ‘Imaginary’ happiness as well to allow for periodicity. But before all of that the linearity assumption [H(a+b)=H(a)+H(b)] needs to be restricted to particular local domains, and explicitly disclaimed in general.


A good explanation I found since I couldn't wrap my head around this just reading about it: https://www.youtube.com/watch?v=vqBl50TREHU


In my own amateur, philosopher-can't-get-to-sleep, what-should-I-think-about sort of thinking, I "solved" this by stating that when considering global outcomes you must use statistics, and that not just the sum of the distribution but also its mean, variance, and (in particular) lower quartile matter.

So for me the repugnant conclusion fails because there's some point where it's unfair to the person least well off on the planet to add an additional person, as they will as a result fall below some bar for quality of life.
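
A rough Python sketch of that kind of evaluation (my own toy version; the welfare numbers and the quality-of-life floor are made up):

    import statistics

    QUALITY_FLOOR = 2.0  # made-up minimum acceptable welfare for the worst-off

    def evaluate(welfare):
        welfare = sorted(welfare)
        lower_quartile = welfare[len(welfare) // 4]  # crude 25th percentile
        return {
            "total": sum(welfare),
            "mean": statistics.mean(welfare),
            "variance": statistics.pvariance(welfare),
            "lower_quartile": lower_quartile,
            "meets_floor": lower_quartile >= QUALITY_FLOOR,
        }

    small_and_happy = [8, 9, 7, 8, 9, 8]
    huge_and_marginal = [0.5] * 100_000

    print(evaluate(small_and_happy))    # modest total, but meets the floor
    print(evaluate(huge_and_marginal))  # enormous total, fails the floor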

I'm not a philosopher, but I can't see the holes in this argument.

It also applies to "eating meat allows cows to live so is good" -- well, no, because you have to dig into the details of what the distribution of welfare looks like for those cows.


I think this is what is meant by "life barely worth living" in The Repugnant Conclusion: whatever the lowest quality of life permissible for life to be worth living. This is what you call "some bar for quality of life".

The claim is that if everyone is above this quality of life, then when the number of those living is sufficiently large, it is a better world than one with 10 billion individuals living amazing lives.


What a funny thing to be ethically concerned about. Conflict is not over (will never be), anyone taking ethically and not functionally motivated positions on demographics will simply be replaced by evolution.


Imagine that the existence of a person now meant that another 10 people would not exist in the future. What conclusion do you come to then?


That either we have reached new levels of being able to predict the future or someone is just making stuff up.


Negative utilitarianism resolves this "paradox" and is probably the only objectively harm reducing philosophical system in existence.


Negative utilitarianism claims that it is better not to exist at all than to live 1,000 years of pure bliss but also experience one mildly annoying mosquito bite.

Seems very counterintuitive to me.


A genocide with equal representation of all types of people is just as reprehensible as the other option.

So if you're contemplating bumping off 50% of the population because of reason 'x' -- but not because of some '..ist' or '..phobe' reasons -- you're still as big a scumbag as that guy with the funny mustache and stupid ideas.


Utilitarians utilitarianizing utilitarianly.

It would be funny, if these folks didn’t wield so much power.


It is very sad that many people in power are not utilitarian - they do not want "the greatest good for the greatest number" but instead want something more selfish.


The actual title is "What Should We Agree on about the Repugnant Conclusion?", and the text doesn't suggest general agreement on it at all.


Yeah that's bad. Fixed now.

Submitted title was "Philosophers Agree on the Repugnant Conclusion". Submitters: please don't do that! This is in the site guidelines: "Please use the original title, unless it is misleading or linkbait; don't editorialize."

https://news.ycombinator.com/newsguidelines.html



