

Three Worlds Collide (Eliezer Yudkowsky) - ugh
http://lesswrong.com/lw/y5/the_babyeating_aliens_18/

======
gjm11
A bit of background may be in order: the main point of this story (aside from
just being a good story) is to get readers to think about how one ought to go
about interacting with other intelligent agents whose value system is
_radically different_ from one's own. See, e.g.,
<http://lesswrong.com/lw/tn/the_true_prisoners_dilemma/> for an explanation of
why those radically different values make a difference.

And a bit of background to that bit of background: Yudkowsky is interested in
the question "Suppose it turns out, in the not-outrageously-distant future, to
be possible to make machines whose general-purpose intelligence matches or
exceeds our own; and suppose it turns out that once you've got that, ordinary
technological progress plus the ability of those machines to do their own
machine-designing leads rapidly to machines _vastly_ smarter and more powerful
than we are. How do we avoid this scenario playing out in a way that we find
abhorrent?"
(Sample, over-simple failure mode: we try to make sure that our
superintelligent machines work for the benefit of the human race by teaching
them what happy and unhappy people look like and telling them to make there be
more happy people and fewer unhappy ones; they slaughter the entire human race
and fill the universe with lots of little dolls that look just like really
happy people.)

Well, that's at least partly a question about how to deal with clashes of
values between very different sorts of intelligent agent: in this case, human
beings and superintelligent AI machines.

~~~
memetichazard
_(Sample, over-simple failure mode: we try to make sure that our
superintelligent machines work for the benefit of the human race by teaching
them what happy and unhappy people look like and telling them to make there be
more happy people and fewer unhappy ones; they slaughter the entire human race
and fill the universe with lots of little dolls that look just like really
happy people.)_

A sample, less simple failure mode, also written by Yudkowsky:
<http://lesswrong.com/lw/xu/failed_utopia_42/>

~~~
nova
That's actually not such a bad failure mode, I think.

------
bumbledraven
I'm probably making all kinds of Bayesian reasoning errors by coming to this
conclusion, but I'm beginning to think Eliezer's true calling is writing SF.
This one is superb - entertaining and enlightening - but I didn't find the
ending(s) very powerful. For a story of Eliezer's that starts slow and
finishes strong, check out "That Alien Message", a disturbing and inspiring
parable about the true meaning of superintelligence:
<http://news.ycombinator.com/item?id=980876>

~~~
revorad
I agree, EY's writing is simply brilliant. He would probably love to be an SF
writer, but he's doing what needs to be done. See my comment here -
<http://news.ycombinator.com/item?id=1301534>

------
rms
Great story. If you like this, you may like Eliezer's Harry Potter fan
fiction!
[http://www.fanfiction.net/s/5782108/1/Harry_Potter_and_the_M...](http://www.fanfiction.net/s/5782108/1/Harry_Potter_and_the_Methods_of_Rationality)

~~~
ugh
The fan fiction appearing today on HN actually caused me to submit Three
Worlds Collide. (So, if anyone ever wonders why HN sometimes seems to be
monothematic…)

------
pronoiac
In case you didn't notice & click on the "in 8 parts" links, the True Ending:
<http://lesswrong.com/lw/yb/true_ending_sacrificial_fire_78/>

------
jarin
Surprisingly good story. I really liked how he put humanity in both positions
of being technologically/morally superior and then inferior.

~~~
memetichazard
When I originally read it, the symmetry was blatantly obvious and I kept
waiting for at least one of the humans to remark on it: that they desire to
impose their ethics on the Babyeaters yet reject having the others' ethics
imposed on them...

But SF can also be social commentary. Not necessarily intentional, but it
could plausibly be read as an allegory for multiple current
situations/viewpoints.

