
The Morality Apocalypse: An Open Letter to Bill and Melinda Gates - jal278
http://blog.joellehman.com/index.php/2014/01/morality-apocalypse-an-open-letter-to-bill-and-melinda-gates/
======
Aqueous
Every attempt to compel humanity towards some higher degree of morality, or to
accelerate this process, either through force or through law or by throwing
money at the problem, fails. Our morality is increasing, but slowly. It is a
developmental process. Steven Pinker's observation that violence has actually
declined over the course of human history, despite anecdotal evidence to the
contrary, attests to this fact.

Unfortunately, like the stages of economic development, there are no shortcuts
here. The moral development of our species will be exactly proportional to our
development of moral feelings, like empathy, and no faster. And since this
kind of psychological change - from generation to generation - happens
exceedingly slowly, there's no way around the fact that our moral development
is also exceedingly slow. But slow is better than non-existent.

~~~
bedhead
I think the mistake was comparing the speed of silicon with the speed of
biology.

------
rejschaap
This letter manages to rub me the wrong way three times before it even begins.

First, the title: why is this addressed to Bill and Melinda Gates? What they
spend their money and (more importantly) their time and energy on is wholly
their own business. I'm of the opinion that their work is most admirable and
important.

Second, the graph compares the magnitude of morality with the magnitude of
technological progress over time, which just doesn't make any sense. The term
'morality apocalypse' is equally ridiculous and over-the-top; I was half
expecting to find the term 'war on immorality' somewhere in the letter, which
would have been quite ironic.

Finally, the TLDR seems to suggest we ought to stop or slow down technological
progress because 'we can't handle it', which might be the most immoral thing to
do, seeing how many people's lives have been saved by advances in technology
and medicine. Instead of holding back progress on the technology front (which
is futile anyway) we might do better increasing morality by simply educating
people (probably making use of technology). Explain to the toddler what the
consequences of using the flamethrower are; it is not impossible.

------
herbig
Something about the idea of an "open letter" always bothers me. I don't know
who Joel is, or why I should trust that this issue is important. The coming of
a "morality apocalypse" is pure speculation, with no data or even anecdotal
examples to back it up, and seems straight out of 60s Cold War MAD paranoia
with little to add since then.

~~~
sarreph
Open letters are only ever used correctly by those with enough power/influence
to address their audience as a public entity.

Therefore, the main reason those lacking power, in this case Joel, write
'open letters' is the hope of gaining attention from others peeking at a
letter addressed to a private individual.

------
jerf
I think this is a very interesting question, but there's a pretty strong
tendency in the human psyche to assume that the answer is more or less to
replicate $MY_CLAIMED_MORALITY out to the rest of humanity, which this essay
does nothing to avoid. ($MY_CLAIMED_MORALITY stands in contrast to
$MY_ACTION_MORALITY, i.e., how one actually behaves, a difference which this
essay, to its credit, _does_ discuss.) Fiddling with morality may have
large potential benefits, but it can also cause catastrophe, yes, even worse
than what we have now.

"Increasing empathy" sounds great, right? But it can cause excessively local
decision making brought on by a particular case of acute pain, while missing
the greater good that may be done elsewhere. It may allow a morality-parasite
leader to come to power to manipulate everybody via their increased empathy to
do something awful. (Indeed, I'm sure many people here have found that even in
current society, there are certain areas in which you must deliberately reduce
your empathy; you simply cannot afford to be manipulated by every picture of a
starving child you come across, or you'll go broke... and with no
guarantee that your donations are doing anything but lining a pocket
somewhere, if you're too busy giving to do background checks on who you're
giving to....) And of course it's not a knob; what "increases empathy" in the
lab may in the field "increase empathy _for my tribe_", as we are so wired
for that.

Tribalism is bad, right? Perhaps so, but "let's just turn down the tribalism"
does not clearly work. We are tribal for a reason. Unless you can flip the
switch all at once, you run the risk of the selfless a-tribals getting
parasitized by the still-tribal, meaning it may not be stable. It may not be
stable even if you _could_ flip the switch all at once; you still have to
worry about morality parasites.

To say nothing of what happens once you have the power to twiddle with human
morality, and then those of altered morality start twiddling the knobs
themselves, which would be inevitable.

Yes, it is likely that some morality changes will have to occur for human
survival... no, it probably is not the case that you can predict them
trivially now, no, it probably isn't as simple as "Why can't we all just love
each other?", and I very strongly suspect that if we could get a preview of
that new morality, we'd all probably find it _repulsive_ in some way... it's
simply inconceivable that the answer is as easy as taking somebody's modern
morality (which does not come from any magically better morality source, it's
as broken as everybody else's) and stamping it out on everyone else.

~~~
jal278
I agree completely that this is a delicate issue; my main hope is to raise
some awareness that the disconnect between technological progress and progress
in morality/ethics deserves greater research focus. I am not in favor of
blindly imposing anyone's particular and likely flawed understanding of
morality universally, but I do think this could be an urgent issue for our
species' survival.

~~~
maroonblazer
1) At the risk of seeming trite, I liken this to your typical change
management program. That is, simply rolling out a new tool across the
organization and expecting people to use it in accord with the vision under
which it was conceived is unrealistic. People need to be educated and, depending on the
circumstances, given incentives for using the new tool. Ask anyone who's been
involved in one of these projects on a large scale and they'll tell you it
often takes longer to realize the behavior change than it takes to develop the
tool.

2) The field of moral psychology tackles these very types of problems, by,
among other things, surfacing our moral contradictions and forcing us to
consider why seemingly equivalent moral outcomes are not considered so (take
the Trolley Problem as one, now nearly cliched, example). I.e., I think there's
more work being done in this domain than the author perhaps gives credit for.
That said, I don't disagree that more could be done.

------
gpcz
The author says that scientific and technological progress in certain fields
is outpacing human morality, but he or she advocates advancing science and
technology toward better understanding human morality. Presumably, this would
entail understanding and modifying the chemistry of our bodies to eliminate
the "baser animalistic drives" caused by evolution. However, if our species is
so immature, how do we know that our modifications are moral? How do we make
sure the people wielding this awesome power aren't immoral themselves?

~~~
joshuacc
C.S. Lewis's _The Abolition of Man_ takes this question and turns it into a
rather full thought experiment.

------
msluyter
It's unclear to me that "moral progress" is well defined, as it seems to imply
that there's some universal, generally accepted notion of an ideal morality
which we can aspire to. And yet I seriously doubt that we could find any
agreement on what that could be.

~~~
dcre
Martha Nussbaum has an interesting paper on this very topic, if you're
inclined.

[https://lawreview.uchicago.edu/sites/lawreview.uchicago.edu/...](https://lawreview.uchicago.edu/sites/lawreview.uchicago.edu/files/uploads/74.3/74_3_Nussbaum.pdf)

------
brudgers
Noting that technical progress and morality begin diverging at the origin, and
assuming that the graph is accurate, then the gap is just part and parcel of
human existence. Analogous to Isaac Bashevis Singer's observation that, "We
must have free will - we have no choice," it could be said that we must have a
morality gap.

But even more charitably, if such a gap exists it has been known for two and a
half millennia.

 _This, said Theuth, will make the Egyptians wiser and give them better
memories; it is a specific both for the memory and for the wit. Thamus
replied: O most ingenious Theuth, the parent or inventor of an art is not
always the best judge of the utility or inutility of his own inventions to the
users of them. And in this instance, you who are the father of letters, from a
paternal love of your own children have been led to attribute to them a
quality which they cannot have; for this discovery of yours will create
forgetfulness in the learners’ souls, because they will not use their
memories; they will trust to the external written characters and not remember
of themselves. The specific which you have discovered is an aid not to memory,
but to reminiscence, and you give your disciples not truth, but only the
semblance of truth; they will be hearers of many things and will have learned
nothing; they will appear to be omniscient and will generally know nothing;
they will be tiresome company, having the show of wisdom without the reality._

-- _Phaedrus_, Plato.
[http://oll.libertyfund.org/simple.php?id=111](http://oll.libertyfund.org/simple.php?id=111)

~~~
jal278
While the gap between moral and technological progress may be well known, the
argument here is that it becomes increasingly dangerous as technological
progress accelerates relative to moral progress.

I think it would be a huge mistake to assume that the gap is an inherent part
of human existence. Our morality is a product of our brain and our culture. We
can potentially use technology to remedy parts of our brain or our brain
chemistry to enhance morality.

Of course, this in itself is a dangerous moral undertaking, but it is a
possibility, one that perhaps needs greater research attention.

~~~
brudgers
The zeitgeist of each civilized age is that the world is on the highway to
hell. The quote from Plato wasn't accidental: the Athenians condemned
Socrates. Though something from Confucius or Rousseau would also have worked.

However, leaving literary and cultural arguments aside, the argument that
natural selection has not or will not create a good balance for the human
ecological niche seems to show no more faith in science than an appeal to a
divine creator. Likewise, the idea that humans are somehow exempt from natural
selection in a way which allows them to shape it, similarly relies on some
notion of our holding some unique biological status.

If you're selling brain state materialism, then you lose traditional notions
of morality and culture. That's just the price of radically redefining
traditional notions of the mind.

Even more to the point, the idea that morality cannot cope needs to show why
the Hobbesian view - developed during a time when the movable type printing
press enabled spreading Protestantism to violently play itself out in the
English Civil War - is not the correct path. In other words, the morality gap
depends on the absence of political progress - an argument difficult to wage
in light of modern history and the diminishing of colonialism, slavery and
monarchy, and the rising of the idea that there are universal human rights and
more recently pressure for their extension to other species.

------
jal278
While I fix my cache:

TLDR: Don’t give a toddler a flamethrower.

In more detail: The letter makes an argument for allocating more funds towards
solving less immediate causes of human suffering. In particular, one emerging
problem is that we as a species may face a morality apocalypse: Our species’
level of moral responsibility may become laughably insufficient to manage
increasingly powerful technologies. We may become like a toddler with a
flamethrower, resulting in suffering on a massive scale.

More detail yet: Technology allows us to impact the world more drastically,
and can as easily be used for good as it can for evil. Technology is growing
at an accelerating rate, while moral progress is plodding. Already our
technological power outstrips our ability to use it responsibly (e.g. are we
morally developed enough as a species to be entrusted with nuclear weapons?).
A mistake would be to view morality as a fixed part of the human condition —
there may be technological ways to enhance empathy or decrease our species’
tendency towards greed, revenge, and moral flexibility under duress. Without
intervention to remedy our morality (perhaps through technological means),
humanity may be at significant risk for horrific outcomes as our technical
abilities more drastically eclipse our moral ones.

~~~
ZenoArrow
Are you the author of the article?

For now, I'm assuming yes. Aside from the questions on how to define good
morals, and how such morals are best passed on, the main issue I had with the
article is that it seemed to be a case of 'when you have a hammer, every
problem looks like a nail'. Have you considered that technology may, in this
case, not be the best tool for the job? I'd say the arts and our common
culture have more room to be a positive influence on our collective
consciousness than the application of some sort of clinically derived
morality-inducing technology. Humanity doesn't need to be "fixed", it needs
room to grow. Look at what holds back that room to grow and you'll have a much
clearer picture of what can be done.

------
smackay
The gap between technological capability and the ability to deal with the
consequences illustrates a problem with our collective intelligence and the
ability or inability to contain the more volatile members of our species when
they make use of technology for their own selfish gain.

Morality (the desire to render many shades of grey into black or white
decisions) is not going to save you. The only solution that morality has to
offer at this point is to limit technology or attempt to put a halt to further
progress until humanity has had a chance to catch up. To a certain extent this
is being tried already in various conservative movements around the globe -
from opposing stem-cell research to limiting the effectiveness of vaccination
programmes.

Instead I'd put my money on creating more effective institutions and better
levels of organization in order to limit any adverse effects until human
intelligence is able to progress at a similar pace.

------
mathattack
Very good questions to ask, though I'm not sure it's a valid premise.

Can we measure or track morality the way we track technical progress? How
could we really know that we're on a "right" or "wrong" track on morality? We
could say, "If we kill less of each other, we are on the right track," but how
do we know that what might appear a small moral roadblock isn't setting us
towards progress? For example: at first blush Hiroshima looked awful, though
even there we had some ambiguity as generals defended national interest. But
what if the experience of Hiroshima allowed us to keep our wits and not
escalate Korea, Vietnam or Afghanistan into a nuclear war? There's no way to
know if we made moral progress.

But I do like the question, which is, "How can we make sure we're increasing
our ability to get along with each other, as we develop better weapons to hurt
each other with?"

------
brudgers
As a picture the graph may be worth a thousand words, but in this case all of
them are nonsensical.

Seriously, what the fuck is the graph supposed to mean? Is it really trying to
show the deltas of technology and morality? How is that supposed to be
measured? Or is it some aggregate amount of technology compared to an
aggregate amount of morality? And even if it is, how was that measured?

On the other hand, those who have suggested that what is meant by morality is
unclear, are mistaken. It is very clear. It is a traditional Abrahamic view -
eating from the tree of knowledge is evil. And though we could just blame Eve,
it doesn't go with the pseudo-scientific presentation.

So let's blame the Gateses. Gateses took our precious Netscape. Nasty dirty
sneaky Gateses. We hates the Gateses.

------
nickmain
Have you considered using Ethics rather than Morality?

I hear the word morality and immediately assume the speaker/writer is arguing
from a religious point of view.

~~~
9999
"It was morality that burned the books of the ancient sages, and morality that
halted the free inquiry of the Golden Age and substituted for it the credulous
imbecility of the Age of Faith. It was a fixed moral code and a fixed theology
which robbed the human race of a thousand years by wasting them upon alchemy,
heretic-burning, witchcraft and sacerdotalism." -- H.L. Mencken

Morality is the wrong word to use. It is also highly subjective. Whose
morality is this guy even talking about? What does it mean to increase
morality? Mencken again:

"Immorality is the morality of those who are having a better time. You will
never convince the average farmer's mare that the late Maud S. was not
dreadfully immoral."

------
kstenerud
Morality is one area that scares the bejesus out of me.

Conservative moralists argue that we must prohibit something because it's
potentially dangerous, while liberal moralists argue that we must encourage it
because it's potentially beneficial. Absolutists argue that morality cannot,
or must not change, much to the chagrin of the relativists. And all this
before we even get into the culturally directed moralities!

Compounding the problem is everyone's surety that THEIR morality is superior,
with some even suggesting that their axiomatically superior morality be
imposed upon others, by law, or even by force, if necessary.

------
TheMagicHorsey
WTF is moral progress? How do you measure morality and put it on a graph? What
the fuck is this guy even talking about?

Am I the only idiot that doesn't understand? How does this get voted to the
front page?

~~~
cjbprime
> WTF is moral progress? How do you measure morality and put it on a graph?
> What the fuck is this guy even talking about?

I'm sure you could begin to answer this if you tried harder. Global decrease
in violence would be a good start at a proxy to measure, see e.g.
[http://online.wsj.com/news/articles/SB1000142405311190410670...](http://online.wsj.com/news/articles/SB10001424053111904106704576583203589408180)

A more subtle example involves measuring the "expanding circle" -- the degree
to which we're willing to empathize with people who are not like us. We've
made clear advances towards showing empathy and consideration for e.g. black
people, women, and gay people, all relatively recently in human history.

------
jokoon
People would be surprised by how blurry the constructs of society really are.
Science really IS easier compared to politics. You don't need scientific
advancements, but they're welcome too, and in the end they matter a lot.

Marketing technological advances while aiming to help the poorest is, I think,
the best way to show everybody how technologies can really help us rather than
enslave us. Technologies must be designed with political goals too. You could
naively say the iPhone is creating more political problems than it solves, and
you'd be right. What Bill Gates does is create incentives for research the way
NASA did. And I think it's awesome.

I guess that's what makes Bill Gates smarter in the end. You'll moan about
antitrust laws and go watch JOBS, but today the people who make a difference
are businessmen who know their basics of science.

If you're unhappy about politics, don't tell Bill Gates; that's not what he
does. His giant financial resources won't help solve political problems.

Maybe he could pour money into making some media communications projects about
science and studies just like Al Gore did, but again, I doubt Bill Gates
really has the proper connections to do this.

------
Udo
It is not a matter of morality. Suppose you have 100 billion robots, each
programmed _not_ to press a world-ending button. Some of them _will_ press the
button. It's a matter of statistics and the moral framework of the other
robots doesn't even enter into the equation. That's exactly the situation
we're in.
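
To put rough numbers on it (a back-of-the-envelope sketch in Python; the
per-robot failure rates are made up purely for illustration, not measured):

    # Chance that at least one of N agents fails, given a tiny,
    # independent per-agent failure probability p (hypothetical values).
    import math

    N = 100_000_000_000  # 100 billion robots
    for p in (1e-9, 1e-12, 1e-15):
        # P(no failure) = (1 - p)^N, computed via exp/log1p for stability
        p_none = math.exp(N * math.log1p(-p))
        print(f"p = {p:g}: P(at least one press) = {1 - p_none:.6f}")

Even at one-in-a-trillion odds per robot, the aggregate chance of a press is
nearly 10%; at one in a billion, it is a near certainty.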

Moral progress has _always_ trailed behind our capabilities, and if I may say
something heretical here, maybe it's also been _driven_ by technological
progress.

From the moment the first knife was used to attack a fellow human being, our
capability to do harm has always been greater than our moral inhibition to do
so. And it still is like this, by the way, with the humble knife. It seems
we've never mastered the art of handling it responsibly; people are still
being stabbed to death.

However, I think it's also important to point out that we're dealing with
outliers. Knife murder is not something a person typically does during the
average day, nor is being a bomber, or a Bond-type villain researching
world-ending microbes. At the same time, though, it's important to recognize
that the potential of these outliers to do harm to large numbers of people is
only going to grow. Concocting the next deadly plague that might kill millions
is now within the grasp of determined lone crazy people. It's kind of
astonishing how little the media has caught on to this profound game changer
that has been decades in the making.

We live in an age where a single person could very well wage war against a
huge group of (possibly defenseless) people. That's why the morality argument
here is so weak: even supposing an ethically perfect society, we'll never have
the statistical certainty to exclude destructive outliers.

I don't have a solution for this dilemma, of course, but it's not as "simple"
as merely complaining about the slow pace of our collective morality; or, more
insidiously, calling for technological regression as a measure to uphold the
common good. The growing disproportional power individual crazy people are
wielding makes a reasonably good explanation for the causes behind the Fermi
paradox, so I don't think we'll get past this any time soon - at least not
until reasonable technological protection is available that doesn't at the
same time enslave the civilization it's supposed to protect.

In closing, I think there's hope. The cold war was a first testing ground for
the whole effect, and the fate of the world rested repeatedly on the moral
actions of individuals. I'm still somewhat incredulous that it all worked out
in the end. By the same token, even though inflicting mass casualties is now
definitely within the reach of single people, there's a distinct absence of
world-ending catastrophes, and more importantly: there's even an absence of
people who actually tried to do it.

We might be OK, we might not. But it's not down to large-group morality
somehow playing catch-up with technology. The call to artificially inhibit
technology "until we're ready" seems a downright unethical proposition to me.
It could be many, many generations until things like religious delusions
finally die out, but you can be absolutely certain that will _never_ happen in
a world that is artificially tech-restricted. I thoroughly believe the only
way forward is, you know, _the way forward_.

~~~
coldtea
> _It is not a matter of morality. Suppose you have 100 billion robots, each
> programmed not to press a world-ending button. Some of them will press the
> button._

Am I missing something? If they are (a) robots, (b) each "programmed not to
press a world-ending button", then none will.

~~~
theandrewbailey
What if they were all programmed differently?

What if one malfunctioned such that a part or appendage fell off and hit the
button?

What if one bumped into another that was standing next to the button and
accidentally pressed it?

~~~
coldtea
Well, what if there was no robot at all then, and someone accidentally rested
his coffee cup on the button?

------
AndrewKemendo
_there may be technological ways to enhance empathy or decrease our species’
tendency towards greed, revenge, and moral flexibility under duress._

Congratulations OP, you have reasoned yourself into transhumanism! To me the
argument the OP described is one of the best ways to come to human-
augmentation advocacy because it becomes obvious that we should be using
technology to improve our capabilities.

To delve more deeply into transhumanism from this perspective, I would caution
against getting preoccupied with current transhuman threads, which are largely
based on prosthetics and physical limitations.

Rather, I would look more towards nootropics and other technologies that help
maintain or improve brain function and technologies which help _reduce
biases_. In my opinion there is not enough being done to help the average
person make better decisions on a daily basis, though there are some things.
The fact that this is becoming a discussion point is great!

edit: I should also mention that the field of behavioral economics positively
emphasizes the idea of "nudging" people toward better decisions, largely
through design.

------
nashashmi
Unfortunately, the entire conversation is based on the premise that morality
evolves at the species level, and not at the level of the individual. And this
is both a damning statement about humanity and a heretical statement against
religion and history.

Morality was always inside the human. No evolution needed here. But from an
evolutionary perspective, the evolution of morality was the defining factor
that made us drastically different from other species.

Then what causes some people to be Buddha, Jesus, Moses, or Muhammadﷺ, while
others are ignorant and vicious? Morality needs to be brought back to health
and vigor after being subdued by social or political conflict. This is often
termed enlightenment.

And religion has always been a powerful vehicle and force for establishing
morality in individuals - a vehicle that arguably carries divine rule, or an
evolutionary mechanism refined through trial and failure.

------
alimoeeny
OK, I understand the problem being proposed, but what is the solution here?
What solution is being suggested by the author? I understand that we are
gaining more and more power without correspondingly good understanding and
self-control to use it, but what can the Gateses do to help?

------
jotm
Not exactly morality and not exactly an apocalypse, IMO. It seems that our
brains can't keep up with our technology - for example, 200 years ago, one
person could realistically learn the workings of most existing technologies;
nowadays, it's literally impossible even if you specialize.

You need to work in large groups, yet living as individuals, it's hard to keep
track of what you create.

And that's the problem - maybe the Gates foundation could invest in mapping
the brain and figuring out how to improve it.

Once we're smart enough to understand more of the implications of our
technologies and ways of life, morality will be easy to fix.

------
undoware
ABD in ethics here. I can't believe someone actually drew a line on a chart
and labelled it 'ethics'. This is a category mistake, like accusing a shade of
blue of murder, or asking the Pythagorean theorem on a date.

If you're going to use ridiculous oversimplifications, please at least use the
ones that have been vetted by people who have actually given the matter a
decade or two of thought. It's not like there's a shortage of candidates
either. But whatever it is, ethics is not a quantity, and it is certainly not
a quantity over time.

By the way, Pythagorean Theorem says hi.

~~~
undoware
...in the future, it might also help to remember that models of
morality/ethics are approximately as hard to get right as cryptosystems --
harder, if you factor in the inherent squishiness and inexactitude of the
subject matter. As with crypto, so with ethics: trust your libs before you
trust your midnight flights.

------
JackFr
What is meant by "moral progress"? Is it the illusion (or, less
argumentatively, the perception) that we living today are somehow more
enlightened than those in the past?

Such a thing cannot be objectively defined.

------
FrankenPC
I understand the sentiment. What I don't think a lot of people understand is
that people like the Gates have probably gone over all this at least in their
minds. And they've come to the same conclusion everyone else has: we have to
evolve out of it. Unfortunately, evolution is a terribly inefficient way to
deal with the problem. Maybe genetic engineering to remove the lizard brain
from the next generation of humans might work.

------
Zebra20
"On the other hand, the burden of the intelligent and caring with ample
resources..." The white man's burden? Are you sure this isn't satire?

------
mareofnight
I enjoyed your essay, and I think it's good that you're getting people talking
about this.

I don't think we can expect to get the same sort of moral improvement we do in
the fastest-moving areas of technology by throwing technology at the problem
of morality. We're still not all that far along in learning how to treat
mental illnesses, and using medicine to make people more moral would probably
be similarly difficult. That doesn't necessarily mean it's not a solution,
just that we shouldn't be surprised if building a solution of that sort turns
out to have a difficulty more like curing PTSD than preventing childbed fever.

I also think that making humans act more morally probably has more to do with
helping us recognize and think clearly about moral decisions, than making us
care more. One of the reasons the Gates foundation is special is that they're
trying to help people _efficiently_. The way human minds are built makes it
hard to realize that it's important to be efficient at doing good, rather than
just to make an effort.

I guess this is coming from someone whose empathy dial is turned up to eleven
anyway, though, so it could also be that different people have different
problems to overcome to be more moral. The problems I had were not forgetting
about causes of suffering just because they don't happen to be staring me in
the face at the moment, and then focusing on the right things.

The website LessWrong has some interesting writing about why humans suck at
charity
([http://lesswrong.com/lw/aid/heuristics_and_biases_in_charity...](http://lesswrong.com/lw/aid/heuristics_and_biases_in_charity/)),
and learning how to reason better in general is the main thing they write
about. The main effects the stuff I read there has had on my morality-related
actions are that I take my career more seriously (I don't actually care that
much about more money for me, but I do care about more resources for improving
the world), and I worry less about small stuff (I accept inefficiencies in my
own life if fixing them doesn't look worth the effort, and don't get
distracted by projects that would help people but only a small amount).

Some of what you're talking about here - the idea of giving flamethrowers to
children - sounds similar to the idea of existential risk, though on a smaller
scale. If you don't already know of it, you may find it relevant.

------
jchung
Highly speculative. Nothing concrete proposed.

~~~
mynameishere
Seriously. It's a known issue, almost a cliche, and has been for a while. If
somebody _had_ some suggestions, they'd get to the point quickly. Let me re-
write it with some (bad) ideas:

------------------------------------------------------

Humans evolved with sticks and stones, and their resulting evolved sense of
morality cannot be trusted with guns and bombs and nuclear weapons, etc.
Therefore I propose the following.

First, as stopgap measures:

1. Putting female hormones in the world's water supply.

2. Decommissioning all nuclear power plants and all machinery capable of
refining uranium.

3. Placing every military under the direct control of the United Nations.

Second, as an eventual permanent solution:

1. Neutering all violent criminals on their first offense, as well as all
past violent offenders. Neutering all other criminals who have been identified
as having antisocial personality disorder. Since this will be required in all
countries, the authority to do this will be placed with the United Nations.

2. An unlimited budget will be allotted for the purpose of researching and
enacting a method for mining the entire planet's reserve of uranium and
rendering it inoperable or unobtainable.

~~~
coconutrandom
Wow, I didn't see that you wrote "(bad)" when I first skimmed it. Bit of a
shock.

------
captainmuon
I think it is not so much a question about morality, but about our societal
structures.

In the past few centuries, human history has been driven by technological
advances. But the major defining development of the 21st century will not be
technological, but social. Society will either radically change for the
better, or for the worse.

~~~
humanrebar
What was the defining development of the 20th century? I can think of many big
ones that weren't technological in nature, for example the causes of the world
wars.

~~~
captainmuon
There were too many to pick out a single one - ubiquitous electrification,
running water, modern logistics, modern medicine (esp. antibiotics)...

The thing with the political developments is that they were IMHO dependent on
previous technological and economic developments.

Fascism was only possible in a modern nation-state. In a feudal world, the
peasants didn't care much which king ruled over their land; they didn't
identify much with their nation. That changed when everything grew closer
together. I think new means of communication and transportation greatly helped
with building a national identity (and later were used for propaganda). The
rise of capitalism and of bourgeois/civil society were also necessary factors.
Why do I say this was driven by technology? Because the beginning of modern
capitalism is inseparable from the industrial revolution. In the end, of
course, it's not just a causal chain, but a very complicated development.

Anyway, what I was trying to say is that when people have hopes for the
future, they often think of great technological breakthroughs, continuing the
progress of the 20th century: a cure for cancer, fusion energy, robots and
AI.... I think the biggest breakthroughs will be in another field: How can we
make a global economy that is not so riddled with crises? How can we
distribute wealth better? Can we build a society without money? How do we make
better decisions as a society (e.g. the political system in the US is clearly
broken right now)? How do we ensure the freedom and liberty of everybody? How
do we prevent wars? Etc., etc.

There have been a bunch of ideas in this field over the last few centuries,
some better, some not - constitutional monarchy, the republic, democracy
(parliamentary, or council), socialism, communism (the dozen or so ideas and
systems that called themselves such), the social market economy, the stuff
they're doing in China right now, and so on. In the last few decades, the
consensus seemed to be that these are mostly ideologies, and that ideology is
dead and capitalism+western democracy has won. Now we are seeing our system
fail slowly, too, and we seem pretty helpless about it. It won't help to throw
ever new technologies at it. But I think by learning from the mistakes of past
societies, but also from what they got right, we should be able to gradually
develop a new kind of society that ensures prosperity, freedom, fairness and
peace.

I hope this was not too incoherent to understand :-)

------
aaronem
This is a very sensible and well-reasoned, if ridiculously windy[1], essay
which has its basis in one completely erroneous assumption, which is that it's
possible to answer, save after the fact, a question like "[A]re we morally
developed enough as a species to be entrusted with nuclear weapons?"

That's a very simple question to answer, but you can only do so in hindsight.
Did we exterminate ourselves with nuclear weapons? If we did, then we weren't
morally developed enough to be entrusted with them; if we didn't, then we
were.

And even in that, there's another bogus assumption, which is hidden behind the
word "entrusted". No one _entrusted_ the human species with nuclear weapons;
there was no event in which God, or angels, or sufficiently advanced aliens,
descended from on high, amidst clouds of incense or rocket exhaust, and
bequeathed unto us the knowledge of what happens when you squeeze a ball of
uranium very tightly. It may seem absurd to point this out, but only until you
consider that "entrusted", in implying a higher power which did the
entrusting, implies also a higher power which could _reverse_ that decision;
there being no such power, it is impossible to reason accurately about the
morality of any technological development when such reasoning involves the
implicit assumption that there is such a power.

[1] Those familiar with my comments on HN may be astonished to see me calling
_anything_ "windy", much less ridiculously so. In my defense I can only say
that I reserve the use of that adjective and its adverb for cases of extreme
provocation, the nature of which should be obvious to anyone attempting to
wade through the text I've so described.

~~~
6d0debc071
_> And even in that, there's another bogus assumption, which is hidden behind
the word "entrusted". No one entrusted the human species with nuclear weapons;
there was no event in which God, or angels, or sufficiently advanced aliens,
descended from on high, amidst clouds of incense or rocket exhaust, and
bequeathed unto us the knowledge of what happens when you squeeze a ball of
uranium very tightly._

---------------------

From the perspective of the vast majority, scientists entrusted us with the
knowledge of how to make nuclear weapons. They could have kept quiet on that
score.

~~~
aaronem
The perspective of the vast majority has not the slightest bearing on what is
or is not true.

You imply either that scientists are angels, or that every particle physicist
is a member of a world-girdling conspiracy, which exists as a sort of caucus
which decides, presumably by some highly precise and objective method, what
every member shall or shall not do. Neither is even remotely plausible. Try
again, please.

~~~
6d0debc071
I simply imply that people are responsible for their actions. People were
given the knowledge to make nuclear weapons by those smarter than them.

The fact that others may very well have eventually given them the same is
neither here nor there, and your weird obsession with requiring it be angels
and aliens does nothing to alter that. If you assume many groups of aliens,
each species of which might have given out nukes, then you're in the exact
same position as our theoretical physicists - 'of course humans weren't
entrusted with nuclear weapons, after all, aliens aren't all part of a great
alien conspiracy....'

The two scenarios are functionally analogous but for the fact that ALIENS is
pasted into the second.

You're just dancing with semantics.

~~~
aaronem
And you aren't?

The point of involving angels and aliens is to make clear the absurdity of
assuming some superhuman agency which handed down the knowledge from on high.
Scientists are human beings, too, with all the fallibility and lack of
prescience that is our common heritage, and the fact that "others may very
well have eventually given them the same" is the very crux of the matter at
hand.

From our perspective, with the benefit of hindsight, it is trivially obvious
that the Third Reich made no significant progress toward the attainment of
nuclear weapons, and that such weapons were not necessary to guarantee the
Allies a victory in Europe. Do you think that was trivially obvious in 1942?
It was nothing of the sort, and for all those who worked on the Manhattan
Project could know, their efforts were absolutely vital to their nation's
successful prosecution of the war. In such circumstances, what you would no
doubt consider a principled refusal to participate in the project would, in
fact, constitute moral cowardice of the highest order, on the part of anyone
who found Allied victory in any sense preferable to Axis. They could not have
known, after all, that the Third Reich had no realistic hope of developing an
atom bomb, given the time and resources available to them, and judging them on
the basis of what we know now, seventy years after the Third Reich's abject
defeat and dissolution, is purely foolish.

All of this goes to demonstrate your basic error, which is to consider
"scientists" a single, unified category which can, or can be expected to, act
as a whole in whatever it believes, again as a whole, to be the best interests
of mankind. This produces a sort of "us and them" mentality, which implicitly
elevates your "scientists" category to the level of the superhuman actors
whose existence my "angels or aliens" formulation makes explicit. It is this
mentality which permits you to expect knowledge and behavior of scientists
which you do not expect of _humans_, and it is precisely this mistake the
original author makes.

~~~
6d0debc071
Humans are fallible, but some humans are massively more fallible than others.
Humans lack foresight, but some humans lack a lot more foresight than others.
Viewing humanity as a whole, without respect to local variations in morality
and other qualities, over-simplifies things.

You have a group of individuals who come across a power that the majority
lack; they also happen to be a lot better educated than the average, and their
choice is whether to give away the secret of that power. Perhaps they're not
well equipped to make that choice, but that's neither here nor there in
framing the basic question - and the answer to that question can then be used
to form other questions. If they turn out to be ill-equipped to make the
choice, then that's at least something that can be worked on. How do you
improve the qualities you'd want of a decision maker in this regard?

That's a much more useful, realistic line of thinking to go down than just
shrugging and going 'All humans, we should expect the same knowledge and
behaviour.' The behaviour you expect of a randomly chosen human should be
different from the behaviour you expect of a semi-randomly chosen human from a
specific subset. Heck, even the arguments you can have - you can't talk about
probability with most people; they're not going to understand what you mean
when you say 'standard deviation' or anything like that. Most people do not do
much thinking.

You don't need scientists to be superhuman. You just need them to be a group
you can start asking questions about - a group you can get an angle of attack
on the problem through. If they turn out to be average or substandard or
whatever in some particular area, then there are ways that can be addressed.

~~~
aaronem
Given the way you keep ignoring my argument, I don't see this going anywhere
useful, so I'm going to curtail my further participation. I'd like to say it's
been a pleasure, but...

~~~
6d0debc071
From my point of view I'm not ignoring your argument. It's a shame we can't
understand one another, but thank you for your time anyway.

------
bonemachine
 _TL;DR_ Nice, planet-saving thoughts and all, if only the original author
could put a bit of markup around the key takeaway points. For example, the
following:

 _The main thesis is that it may be possible to apply science and technology
to understand and improve human morality, and that this is a most critical
human endeavor._

------
dminor14
Paradoxically, the power to make humans more moral could also be used to make
them less moral. And since, in the beginning, we wouldn't be moral, these
technologies would actually have the opposite effect!

------
dharmach
Part of the issue is that many societies, even the ones in a pretty
comfortable position, operate in "survival mode", where power is perceived as
cool and having more is always better.

------
j2labs
"It has become appallingly obvious that our technology has exceeded our
humanity." – Albert Einstein

------
sarreph
Yes, the global populace is immensely immoral.

What is your solution?

------
valtih1978
It is not something new. We all know the free-market mantra that _too big_
companies are not viable. I also remember that the central point of Carl
Sagan's "Cosmos" was that it is technology that dramatically reduces the
probability of intelligent life (and that is why we cannot find any aliens
around). Even our owners (the friends of the Gates couple) remind us of the
same dangers of overpopulation in the face of resource depletion,
[http://www.youtube.com/watch?v=bI0fnRbhHFo](http://www.youtube.com/watch?v=bI0fnRbhHFo).
What is new to me is the linear growth of morality. Why?

I see that people are simply becoming more ignorant, on the basis that they do
not need to study nature and technology because technology already provides
everything to them; furthermore, they defend their ignorance by saying that
technology is so far ahead of morality that we need to harmonize our
relationships better rather than learn (that is, advance) the technology.
Modern people do not understand that you cannot distinguish between good and
bad, and decide what to do with your technology, if you are illiterate.

Maybe the letter's author refers to linear IQ growth. OK, but this also
concerns me, because we have stifled natural selection with our advances in
medicine and improved quality standards, which allow almost anybody to survive
and reproduce, whereas only 2 in 10 reproduced as recently as 100 years ago.
Since we did not replace natural selection with an artificial one, the
biological quality (aka our genome) of new generations is degrading (you
believe the opposite, right?),
[http://biology.stackexchange.com/questions/7686](http://biology.stackexchange.com/questions/7686).
I propose to solve this problem not by letting the inferior comrades die, but
by paying for the services with the right of reproduction: if you are
genetically unhealthy and thus take more from society than you give to it, you
should leave producing people to the others, those who maintain the
civilization and advance the technology. This is in your interest.

The article speaks a lot about possible misuse of technology. But the most
astounding fact is that the civilized nations have the technology to organize
comfortable life efficiently, yet use it to kill nature on a daily basis and
do not notice it!

But I can tell you how you personally are guilty of abusing the technology
every day: technology allows you to burn all the precious resources very fast,
and this is considered a good thing. The freedom culture and technology give
you cheap gasoline, a car and a house, and I bet you are absolutely sure that
it is good to have a personal house, separate from the others, and to waste
more, because more waste is more consumption, a better economy and a greener
world. And yet you hate consumerism. However, it is this car-and-house-based
infrastructure that costs the most, to you and to nature. Once you condense
the population into apartments in multifloor buildings, as was practiced in
the Eastern Bloc, for instance, resource consumption (land, transportation,
heating, lighting, and building/maintaining the infrastructure) drops 10-100
times, and we'll fit into the ecological footprint. This is how we can make
the life of 10 billion sustainable. The 10-100x savings are achieved because
average distances shrink dramatically, which by itself allows huge savings,
but additional savings come from using more effective (i.e. public) systems of
heating and transportation - sharing resources as consumers - which
contributes another order of magnitude. You can even compost in a New York
City apartment,
[http://sustainability.stackexchange.com/a/2402/476](http://sustainability.stackexchange.com/a/2402/476).
We can even use trains for inter-city travel instead of planes: trains consume
comparatively little energy, whereas our favorite planes are paragons of
inefficiency. This way we could reduce carbon emissions, save fuel and stop
global dimming,
[http://en.wikipedia.org/wiki/Global_dimming#Probable_causes](http://en.wikipedia.org/wiki/Global_dimming#Probable_causes).
Yet we won't, right? There are certainly bad people who can use terrible
technology for evil, and we'd better write Bill Gates about them. We have a
right to be independent, self-reliant, live in a private house and burn a fair
amount of gasoline.

I like the communist preaching telling us that we should stop consumerism,
greediness and the other drivers of capitalist hell, and turn to sympathy,
concern for others, even unfamiliar ones, and a rationally planned social
environment. I even think robots should do the same: cooperate rather than
fight for their egoistic interests. What I do not like is that you remove from
the Isaac Asimov interview the part predicting that by 2014 we'll have
conscientious people who will stop wasting lands (compare
[http://www.nytimes.com/books/97/03/23/lifetimes/asi-v-fair.html](http://www.nytimes.com/books/97/03/23/lifetimes/asi-v-fair.html)
and what you have in
[https://news.ycombinator.com/item?id=6995644](https://news.ycombinator.com/item?id=6995644)).
I see quite the opposite of what the futurist saw. Our passion for having a
house is as strong as ever. It is expedited by the technology and the housing
bubble; we build ever more cottages and urban sprawls (enjoy their sights,
[http://en.wikipedia.org/wiki/Urban_sprawl](http://en.wikipedia.org/wiki/Urban_sprawl)).
So, when you speak about some adversaries who will use technology for bad, can
I look at you? I always look at you, because you are that criminal.

Do you keep up with morality enough to understand what I am talking about? If
you do, ask the Gates couple to free the green lands from your houses and
roads, moving all activity into dense, 3D cities.

------
AhContraire
Before the 1960s, when God was in schools, neither Blacks nor Whites had to
lock their doors.

Before the 1960s, there were no iron bars on windows, no deadbolt locks, no
car, home or business alarms, and no security cameras in homes, businesses or
city-wide.

Those are the facts, especially for the atheists and agnostics.

~~~
woofyman
And black people were lynched and denied voting rights. Women were
second-class citizens. Gays were beaten by the police. Those were the days!

