
Rationality: From AI to Zombies - rayalez
https://intelligence.org/2015/03/12/rationality-ai-zombies/
======
xianshou
This is a collection of Yudkowsky's blog posts on LessWrong, and it's making
the rounds now because it forms the basis for most of Harry's rationalist
insights in the extremely entertaining and recently finished Harry Potter and
the Methods of Rationality. (The last chapter of HPMoR went up on Ultimate Pi
Day - 3/14/15 - last week.) I picked it up on the advice of some friends, and
while the editing is less than perfect, it is overall an impressively well-
organized and cross-linked treatise on how to reason.

At the heart of the book is the assertion that rationality is not about
justifying why we are _right_, but thinking about how we might be _wrong_, and
paying careful attention to any evidence that puzzles us or could convince us
to change our minds. The resulting philosophy is a combination of the
falsification principle and the scientific method, applied to your own
character. Highly recommended.

~~~
Strilanc
Given that this compilation was published on March 11th, and hpmor finished on
March 14th, I don't think it's making the rounds now because of the timing of
hpmor. Their finish/release dates are simply close together.

~~~
maaku
The release dates were purposefully close together.

~~~
jtheory
I don't have inside knowledge, but yeah, it seems extremely likely -- the
sequences have been available for years, after all.

~~~
ciphergoth
They didn't wait for the end of HPMoR to publish this book - they made an
extra effort to have it ready in time!

------
rkachowski
That Eliezer Yudkowsky is a pretty smart guy. Against all prior expectations
(lol), I am now reading Harry Potter fan fiction and enjoying it.[1] The
LessWrong sequences are poignant and concise and pushed me down the road of
rational thought.

I look forward to getting some of these books on my shelf.

[1] [http://hpmor.com/](http://hpmor.com/)

------
Kutta
I'm happy to see this in the preface (where EY looks back and details what he
thinks he did wrong):

"Today I would write more courteously, I think. The discourtesy did serve a
function, and I think there were people who were helped by reading it; but I
now take more seriously the risk of building communities where the normal and
expected reaction to low-status outsider views is open mockery and contempt.

Despite my mistake, I am happy to say that my readership has so far been
amazingly good about _not_ using my rhetoric as an excuse to bully or belittle
others. (I want to single out Scott Alexander in particular here, who is a
nicer person than I am and an increasingly amazing writer on these topics, and
may deserve a part of the credit for making the culture of _Less Wrong_ a
healthy one.)"

~~~
gwern
For those interested in Scott: his LW posts can be found at
[http://lesswrong.com/user/Yvain/submitted/](http://lesswrong.com/user/Yvain/submitted/)
, his old blog at
[http://squid314.livejournal.com/](http://squid314.livejournal.com/) , his new
blog at [http://slatestarcodex.com/](http://slatestarcodex.com/) , and a
tumblr for shorter things at
[http://slatestarscratchpad.tumblr.com/](http://slatestarscratchpad.tumblr.com/)

If you're not sure where to start, a number of his writings have been received
well on HN:
[https://hn.algolia.com/?query=slatestarcodex&sort=byPopulari...](https://hn.algolia.com/?query=slatestarcodex&sort=byPopularity&prefix&page=0&dateRange=all&type=story)

~~~
pja
gwern, Scott makes a specific point of not linking to his blog from his LJ in
his penultimate LJ post - he might prefer that you didn’t make that link here?

~~~
gwern
He went back and hid the more sensitive LJ entries, IIRC, so I don't think it
should be a problem.

------
Houshalter
There is an ebook here
([http://lesswrong.com/lw/72m/an_epub_of_eliezers_blog_posts/](http://lesswrong.com/lw/72m/an_epub_of_eliezers_blog_posts/))
where you can read it unedited as it was originally written. I don't know if
anyone would want to do this, but it's how I read them. I imagine a lot of the
content would be edited down or out, especially the standalone stuff. That's
not necessarily a bad thing though.

~~~
ciphergoth
I have edited that post to recommend the book over my ePub. You can always
read my compilation after finishing the book, if you want to read the articles
the book leaves out.

------
emiliobumachar
I am currently reading, and loving, Yudkowsky's introduction to Quantum
Mechanics from the blog:
[http://lesswrong.com/lw/pc/quantum_explanations/](http://lesswrong.com/lw/pc/quantum_explanations/)

Does anyone know whether that sequence has made it to one of these books?

~~~
pyrois
I quite liked his introduction! Be aware, however, that he dramatically
overstates some things. For example, he claims that the many-worlds
interpretation is obviously correct. This is basically wrong: it may be
correct, but it certainly isn't obviously correct, and many scientists whose
entire research is dedicated to quantum mechanics would disagree that many-
worlds is correct at all.

There is a long list of alternative interpretations[0], with very little
evidence to support or contradict any of them (at least, any of the ones still
around; some have fallen by the wayside).

[0]:
[https://en.wikipedia.org/wiki/Interpretations_of_quantum_mec...](https://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics)

~~~
hawkice
I think it's fair to say that it's "obvious" when optimizing for the same
things Yudkowsky does. For instance, any issue having something to say about
the Born rule seems to be his dump stat, if you will. This makes sense,
because in the paper where Max Born introduced the rule, he suggests that it
indicates only one possible correct interpretation (hint: it was not Many
Worlds).

That all being said, no one really disagrees about the predictions of the
math, and at least many-worlds invites fewer completely ridiculous
misinterpretations by laypeople, so both ends of the spectrum are tolerable
to me.

------
maaku
For those who are interested, I will be moderating an online reading group
covering the book here:

[http://lesswrong.com/r/discussion/lw/lx0/rationality_from_ai...](http://lesswrong.com/r/discussion/lw/lx0/rationality_from_ai_to_zombies_online_reading/)

------
hunnypot
I'll just say that "intelligence.org" is as heavy-handed as Google buying the
.dev TLD.

~~~
jtheory
I don't think I could bring myself to be a "Bright" for that reason (see the-
brights.net)...

Vague counterpoint -- this is an ebook collecting essays from lesswrong.com,
which is a bit more subtle as names go.

~~~
adamzerner
> I don't think I could bring myself to be a "Bright" for that reason (see
> the-brights.net)...

A name is enough of a reason on its own?

~~~
jtheory
Sure.

That is, my understanding of reality is the same, regardless. I think the
goals of "The Brights" are laudable, generally. But I didn't register on their
website, and wouldn't really want my photo and identity featured as a Bright,
because the name just doesn't feel right to me.

How you identify yourself makes a difference to how you're judged, and hence
the influence you can bring to bear on the world.

Or, you know, if I someday gain a bit more public recognition and clout, it
may start to matter then how I self-identified back in 2010.

------
danbmil99
Careful. Yudkowsky and LW have a dark side (google Roko's Basilisk).

Rationality applied to ethics is a sticky wicket. You have to subscribe to
some metric of "good" in order to weigh and compare outcomes. EY's definition
of good is ruthlessly utilitarian, and that has some extreme, nay dire
consequences for certain thought experiments.

Consume this philosophy at your own risk.

~~~
zyxley
Yudkowsky's "good", and that of his fans, also tends to be naively utilitarian,
ignoring modern development of that philosophy (see his whole "it's better for
one person to be tortured for 50 years than for a very large number of people
to each get one speck of dust in their eye" thing, for example).
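For anyone unfamiliar with the setup: the argument rests on naive additive aggregation of harm across people. A toy sketch of that comparison (the per-event disutility values below are made-up placeholders, and the actual number in the thought experiment, 3^^^3, is far too large to represent at all):

```python
# Toy illustration of the naive additive utilitarian comparison behind
# the torture-vs-dust-specks thought experiment.
# All disutility values are made-up placeholders, not anyone's real numbers.

TORTURE_DISUTILITY = 1e9   # assumed total harm of 50 years of torture
SPECK_DISUTILITY = 1e-6    # assumed harm of one dust speck in one eye

def total_harm(per_event_harm, num_events):
    """Naive additive aggregation: harms sum linearly across people."""
    return per_event_harm * num_events

# With "merely" 10^18 people getting specks, the linear sum already
# exceeds the torture; 3^^^3 people would dwarf it beyond comprehension.
print(total_harm(SPECK_DISUTILITY, 10**18) > TORTURE_DISUTILITY)  # True
```

Scope insensitivity, in this framing, is the intuition that the right-hand side stops growing once the number of specks gets "unimaginably large", whereas the linear sum keeps growing without bound.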

~~~
TeMPOraL
Check out that topic more carefully. As far as I remember, it was a toy
example pointing out where the naive application of utilitarianism goes
bonkers, and from what I can tell, Eliezer doesn't actually subscribe to
"torture over dust specks".

~~~
zyxley
He does. He says so explicitly in the comments.

> I'll go ahead and reveal my answer now: Robin Hanson was correct, I do think
> that TORTURE is the obvious option, and I think the main instinct behind
> SPECKS is scope insensitivity.

~~~
strnfcpy
To remove the SPECKS, you would have to change some law of nature or some
property of matter, which would have other consequences for the amount of
suffering in the world. As you cannot reliably predict or calculate those
consequences, you cannot solve this problem.

