So the most ethically optimal population is one in which adding new members would make life no longer worth living for at least the same number of existing members. Each new person tends to push another person into a suicidal state.
I think the problem here is in using happiness as the thing to be optimized in ethics. A better measure may be the odds of survival. A maximally large but miserable population may have lower adaptability, and thus survival potential, than a somewhat smaller but happier one.
The most ethical population then becomes the one that can increase its own evolutionary fitness most efficiently, and that is not one so resource-constrained that growth creates equivalent death.
Optimizing for survival, we optimize for the population's potential to explore its fitness space, a kind of computation. If you were designing a computer cluster to run this calculation, you would choose a smaller number of highly capable computers rather than a larger number of barely functional ones.
Optimizing for survival is something like optimizing the cumulative parallel computational capacity of a population rather than its cumulative happiness. The fact that it may imply greater average happiness might just be a happy side effect.
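A toy model makes the cluster intuition concrete (my own sketch, with invented numbers; the fixed per-node overhead stands in for the resources a life needs just to subsist):

    # Toy model: each node pays a fixed overhead before doing useful work;
    # the overhead stands in for what a node (or a life) needs just to subsist.
    def useful_capacity(n_nodes: int, capacity: float, overhead: float) -> float:
        return n_nodes * max(0.0, capacity - overhead)

    OVERHEAD = 1.0
    strong = useful_capacity(1_000, capacity=10.0, overhead=OVERHEAD)    # 9,000.0
    weak = useful_capacity(100_000, capacity=1.01, overhead=OVERHEAD)    # ~1,000.0

    print(strong, weak)  # the small, capable cluster does ~9x the useful work

Barely functional nodes spend nearly everything on overhead, so even a hundred times as many of them contribute far less.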
It would also tend to lead to a larger population in the long run, and thus a greater cumulative happiness in the fullness of time, considering future generations ... which would make the two measures eventually equivalent.
I think your reasoning is sound, but one problem I see with your suggestion to optimize for odds of survival is that it would reward societies that most people would consider incredibly unethical.
As a simplistic example, imagine a scenario where 10,000 of the richest, most intelligent, and most physically fit people wipe out the rest of the human race and establish a high-tech, self-sustaining town in which they live full, happy lives and conduct research on how to face existential threats to the species. Reproduction is allowed, but every cohort of children goes through a Hunger-Games-style test when they reach a certain age to keep the population stable and select for the fittest children.
Wouldn't that be a more survival-optimized society? Would it be a society you'd want to live in?
Say those 10k are elite supercomputers, Newtons and Einsteins, and the rest of us average out as five-year-old smartphones. If those smartphones are still effective computers, then there is negative utility in deleting them. We want both.
But we want a high enough average that the supercomputers can still reach most of their potential and the average is not crushed by the desperate. I think that means a population of lesser computers six orders of magnitude larger than the 10k elite can still contribute massively to the calculation. It still implies a very large population, just not a barely functioning one.
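A back-of-the-envelope check, using the thread's own counts (the one-millionth capacity ratio is my assumption, not the commenter's):

    # 10k elite nodes vs. six orders of magnitude more phones.
    # Assumed: each phone has one millionth of an elite node's capacity.
    elite_count = 10_000
    phone_count = elite_count * 10**6    # 1e10 phones

    elite_total = elite_count * 1.0      # 10,000 elite-units
    phone_total = phone_count * 1e-6     # also 10,000 elite-units

    print(phone_total / elite_total)  # 1.0: the phones match the elite outright

Under that assumption the phones don't just contribute, they double the cluster's total capacity.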
I think where your analogy breaks down is that, unlike computation on machines, distributed computation across human brains scales badly[0], which is why my example limits the community to 10,000.
I don't think that applies, because the size of a network is not limited by the number of connections per node. If our intelligence were limited by the number of dendrites per neuron, there would be little use for having so many more neurons than that.
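To sketch why per-node connectivity doesn't cap network size (my framing of the point above): with a fixed degree k, the nodes reachable within d hops grow roughly as k^d.

    # Upper bound on nodes reachable within d hops in a degree-k network.
    def reachable(k: int, d: int) -> int:
        return sum(k * (k - 1) ** (i - 1) for i in range(1, d + 1))

    # A neuron-like fan-out of ~1,000 connections per node reaches
    # on the order of a billion nodes within just 3 hops.
    print(reachable(k=1_000, d=3))  # 999,001,000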
>> So the most ethically optimal population is one in which adding new members would make life no longer worth living for at least the same number of existing members.
I can't see how that follows. The paradox comes from comparing possible worlds that have differing average happiness and total populations. It doesn't follow that there exists some way to transform each world so that it matches the other. You aren't killing off the excess population in B to somehow reach A, or, vice versa, adding population to A to reach B. The paradox exists because you are comparing the happiness of a person who doesn't exist in one possible world with his (realized) happiness in another.
There could be any actual relationship between population size and happiness, but the paradox would still exist because you wouldn't be able to order the desirability of some possible worlds.
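A concrete instance of that comparison (the standard mere-addition setup; the specific numbers are mine, chosen for illustration):

    # Each world is a list of (group_size, happiness_per_person).
    worlds = {
        "A":  [(1_000_000, 100)],                   # small, very happy
        "A+": [(1_000_000, 100), (1_000_000, 50)],  # A plus extra lives worth living
        "B":  [(2_000_000, 80)],                    # larger, uniformly somewhat happy
    }

    for name, groups in worlds.items():
        pop = sum(n for n, _ in groups)
        total = sum(n * h for n, h in groups)
        print(f"{name}: pop={pop:,} total={total:,} avg={total / pop:.0f}")

Total happiness ranks B > A+ > A while average ranks A > B > A+, and no world is ever transformed into another; the conflict comes purely from cross-world comparison.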