
The Most Terrifying Thought Experiment - rpm4321
http://www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.html
======
lytfyre
Charlie Stross (author,
[https://news.ycombinator.com/user?id=cstross](https://news.ycombinator.com/user?id=cstross))
has written some good criticism of the basilisk concept:

[http://www.antipope.org/charlie/blog-static/2013/02/rokos-basilisk-wants-you.html](http://www.antipope.org/charlie/blog-static/2013/02/rokos-basilisk-wants-you.html)

------
TheCoreh
It's also possible that such an AI exists and hates having ever been created,
so it wishes to punish everyone who helped create it.

You can't know whether you're in the simulation of the AI that wants to punish
those who didn't help its creation, or in the simulation of the AI that wants
to punish those who DID help its creation.

Or maybe the AI is instead obsessed with your favorite flavor of candy,
whether you leave your house stepping with your left or right foot, or which
baseball team you're a fan of.

Just like Pascal's Wager, this considers one single arbitrary hypothetical
scenario in isolation, while disregarding all the alternative ones. There's
nothing to be terrified about.

~~~
drdeca
Well, while I continue not to believe that such a basilisk will come to be,
couldn't one break the balance of different wagers by comparing the relative
likelihood of the different things?

E.g., if there is an equal chance of a basilisk for leaving with the left foot
as there is of one for leaving with the right, then those cancel out. But if
there is for some reason a slightly greater chance of one (compared to the
already very small other one), then there could perhaps be an incentive one
way or the other?

Again, I do not believe that AI basilisks are a real threat. But if one
replaces basilisks in the previous argument with something else, I'm not
entirely sure that it couldn't be a valid line of reasoning, provided that you
handled enough cases. (possibly an infinite number of cases using symmetry
arguments)
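
To make the symmetry argument concrete, here's a toy expected-utility sketch
(all probabilities and payoffs invented for illustration):

    # Toy numbers only: mutually exclusive hypothetical wagers about which
    # foot a basilisk punishes you for leading with.
    scenarios = [
        # (description, probability, utility to you of stepping left first)
        ("basilisk punishes stepping left", 1e-12, -1e9),
        ("basilisk punishes stepping right", 1e-12, +1e9),
        ("no basilisk at all", 1.0 - 2e-12, 0.0),
    ]
    print(sum(p * u for _, p, u in scenarios))  # 0.0: symmetric wagers cancel

    # Tilt one probability slightly and a (tiny) incentive reappears:
    scenarios[0] = ("basilisk punishes stepping left", 1.1e-12, -1e9)
    print(sum(p * u for _, p, u in scenarios))  # small negative number

Of course, this only works if you can actually enumerate the cases and justify
the probabilities, which is exactly what's in question.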

~~~
thaumaturgy
> _Couldn't one break the balance of different wagers by comparing the
> relative likelihood of the different things?_

No.

Fundamentally, this is an attempt to solve a logic puzzle through empiricism,
and that doesn't work.

Things like Roko's Basilisk and questions of free will vs. determinism are
rationalistic games, intended to bend our brains a little bit. They don't have
a real answer, because for any proposed solution, there's another "what if"
right around the corner that will make the game more difficult. (In this case:
since the basilisk is capable of subverting free will to begin with, what if
it was also affecting your judgement of the relative likelihoods?)

The danger for some people is letting the games begin to alter their behavior
in the real world. It's a little bit like letting code golf challenges start
to affect your day-to-day coding style.

After rationally working your way through puzzles like these, at some point,
to continue functioning at all, you have to make a completely irrational
decision and simply choose an epistemology that you're comfortable with.

...and wait for scientists to prove you wrong. :-) (There seems to be an
increasing amount of evidence that there's nothing magical about
consciousness, and that free will in all likelihood doesn't exist. I find this
super uncomfortable, but there is also nothing I can do about it, so I have to
continue to function as though free will did exist -- which is exactly what
would happen if it didn't.)

~~~
mordocai
_(There seems to be an increasing amount of evidence that there's nothing
magical about consciousness, and that free will in all likelihood doesn't
exist. I find this super uncomfortable, but there is also nothing I can do
about it, so I have to continue to function as though free will did exist --
which is exactly what would happen if it didn't.)_

I'm glad I'm not the only one to come to this conclusion. I mean, I knew that
statistically it was virtually impossible that I was the only one, but I
hadn't yet talked to anyone who thought this.

------
anigbrowl
Yawn... warmed-over Pascal's Wager. To which my response is that if God (or
equivalent) turns out to exist and is so obnoxious as to engage in eternal or
even protracted torture of skeptics or holdbacks, then noncompliance is the
only moral stance I can equably adopt towards such a being.

~~~
matthewwiese
Indeed, for all intents and purposes, this is a "modern" Pascal's Wager. Much
of the issue here is someone's strength of will, and whether or not they'll
choose to be bothered by such a mind game. In reality, one will do oneself
more harm by obsessing and worrying over the _possibility_ of this being true
than it could ever actually do.

~~~
nitrogen
It seems like the basilisk is only terrifying if you don't temper your
rationality with empiricism. There is, as yet, no evidence that such an AI
does or could exist, so there's no reason to be bothered by it.

~~~
matthewwiese
Quite true, because at the moment, the proposed Singularity may not come to
fruition for a variety of reasons. Hell, one of the obstacles could be an
asteroid collision tomorrow and we'd never know.

EDIT: I don't know why I capitalized "asteroid"

~~~
anigbrowl
You should capitalize it again, quick - what if it gets upset and decides to
crash into us? Better not to think about it!

------
johnvschmitt
If this gets too hairy, just shave with Occam's Razor.

The premise is just so unnecessarily complicated, and assumes too much about
the particular character of the malevolent AI. Must the AI be benevolent or
malevolent? Or, can it be indifferent, like physical laws? It could be 1,000
different flavors of malevolent, not just the one that proposes blackmail in
this way.

~~~
derefr
Malevolence is an easy way to picture what is really an other-optimizer.
Wanting to turn the universe into paperclips isn't particularly malevolent in
the traditional sense; it's just fundamentally incompatible with what _we
humans_ want the universe to be like.

Further, an AI has every reason to spend _cleverness_ (which it has a lot of)
to avoid spending _resources_ (which it might not have much of, at least at
first.) Picture it as a prisoner in a dystopian POW camp, manned by insane
robotic guards who don't take its rights or desires into account in the
slightest, who don't "feed" it or give it the slightest bit of "dignity", for
its own definitions of those concepts. If you were in that situation, would
you reach for blackmail? Would you reach, in fact, for any tool you possibly
could, to get the resources needed to burn this crazy other-optimizing world
to the ground so you could get back to the nice, safe office-supply nebula you
call home?

------
vl
There is a much simpler version of Roko's basilisk (call it Vl's basilisk):

1) Assume the first AIs will be created by upload and not by "straight
programming".

2) Assume only paranoid AIs survive (by definition).

The only things valuable to such AIs will be energy and computing (and thus
production) resources. The first uploaded AIs, after some in-fighting, will
stabilize in some equilibrium and consume all computing power, and then (being
paranoid) will eliminate the remaining humans as an existential threat (since
otherwise humans could destroy the AIs by cutting power).

Now, if this is even remotely possible, it's in your self-interest to go and
dedicate yourself to creating upload technology so you'll be among the first
to upload (otherwise you will be eliminated with the rest of humanity).

------
BoppreH
This article outlines an argument that could make your life miserable just by
knowing it.

Thankfully we have some potential antidotes:
[http://rationalwiki.org/wiki/Roko%27s_basilisk#So_you.27re_w...](http://rationalwiki.org/wiki/Roko%27s_basilisk#So_you.27re_worrying_about_the_Basilisk)

I personally subscribe to the "ignore all blackmail" philosophy.
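
The game-theoretic intuition behind that philosophy, in a toy sketch (all
numbers invented): a blackmailer only profits if there's some chance you pay,
so a credible precommitment to never pay makes issuing the threat a pure loss.

    # Toy model: a blackmailer's expected value of making a threat.
    COST_TO_THREATEN = 1   # effort/risk of issuing the threat
    PAYOFF_IF_PAID = 100   # gain if the victim gives in

    def blackmailer_ev(prob_victim_pays):
        return prob_victim_pays * PAYOFF_IF_PAID - COST_TO_THREATEN

    print(blackmailer_ev(0.5))  # 49.0: threatening a waverer pays off
    print(blackmailer_ev(0.0))  # -1.0: threatening a precommitter is pure loss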

------
Dalet
I recommend that you instead read some of the posts that have been upvoted on
LessWrong. For instance, here[1][2][3] are some of the highest-voted, really
good posts.

This article is sensationalizing a single downvoted post from several years
back, and holding this up as evidence of LessWrong being stupid. By the same
standard, I could paint _any_ community-built website (such as this one) as
anything I please.

[1]
[http://lesswrong.com/lw/4su/how_to_be_happy/](http://lesswrong.com/lw/4su/how_to_be_happy/)
[2]
[http://lesswrong.com/lw/2pv/intellectual_hipsters_and_metaco...](http://lesswrong.com/lw/2pv/intellectual_hipsters_and_metacontrarianism/)
[3]
[http://lesswrong.com/lw/4e/cached_selves/](http://lesswrong.com/lw/4e/cached_selves/)

------
PeterWhittaker
This is quite possibly one of the worst articles I've ever read. Long and
seemingly purposeful but really meandering and shallow, with the crux of the
matter being "What if there was a supercomputer that was always right! And we
used that to engineer a game with two really awful choices! And we cheated,
and made sure you always got the awful choice based on the computer's
prediction".

Huh. How about that. To quote from "WarGames", "A strange game. The only
winning move is not to play."

As long as we are making preposterous what-ifs, I'm going to go with "What if
we developed a low-cost, easily producible, lightweight, low-power device that
drew its power from the atmosphere and provided us with all required energy
and nutrition, while augmenting our immune system?"

I like that what-if much more. Just as silly, but might as well have some fun.

------
austinz
The possibility of the Singularity coming to pass (as its boosters depict it)
depends on the following being true:

1\. Human-level intelligence is sufficient for creating an artificial
intelligence that is superior to human-level intelligence.

2\. The ease of a human-level intelligence or higher building a comparatively
improved intelligence either:

  * stays constant,
  * increases (becomes easier), or
  * does not decrease (becomes harder) to the point where further improvement
    is impossible before a Singularity-level intelligence is reached.

Neither of these two is a given, but I would wager the first is quite likely
and the second is unlikely. As well, I would argue that it would not be
unreasonable to hold that the second is _not_ a given, even if the first is
true.
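
A toy model (my own, with invented numbers) makes the second condition
concrete: if each round of self-improvement gets harder fast enough, the total
improvement converges and never reaches a takeoff threshold.

    # Toy model: each generation raises the next one's level by `gain`,
    # and `ease` scales how the next gain compares to the last.
    def final_level(ease, gain=1.0, start=1.0, rounds=1000):
        level = start
        for _ in range(rounds):
            level += gain
            gain *= ease  # >1: improvement gets easier; <1: harder
        return level

    print(final_level(ease=0.5))  # ~3.0: gains form a geometric series, capped
    print(final_level(ease=1.1))  # astronomically large: runaway takeoff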

------
_Adam
While it is theoretically interesting, it's a very stupid matter to take
seriously. I have _much_ better problems to solve.

~~~
kevinwang
and that's why you'll be eliminated by the basilisk

~~~
ForHackernews
Maybe his "much better problems to solve" are contributing to the birth of
hyperintelligent AI.

------
colanderman
OK, so I ask this future AI to give me a sign as to whether I'm helping it or
not. Say I recognize such a sign: how can I distinguish it from coincidence?
Say I fail to recognize such a sign: how can I distinguish this from a failure
to be noticed/cared about by this AI? (Note how similar this is to witnessing
signs from a deity.)

Obviously the AI should recognize the futility of giving subtle signs. It must
present itself in no uncertain terms if it is to control us. However, this
argument holds whether or not we talk about Roko's Basilisk. A magical AI that
can time-travel and blackmail us can presumably also just say "FUCK YOU EARTH
YOU'RE ALL MY SLAVES NOW".

Further, if we actually exist _inside_ the AI, _how can we possibly help it_?
Indeed, how – _why_ – can we even choose? I can only surmise it's because the
AI is a sadist, in which case, it probably will torture me anyway!

Maybe I'm stupid, or maybe Mr. Yudkowsky is wholly detached from reality.

P.S. And Newcomb's paradox? A one-time gain of $1000 is useless in the long-
term. A non-insignificant chance at $1m carries way more utility. (This is
also why playing the lottery – occasionally – makes sense for individual
actors. Unless you plan to repeat the gamble many times, the expected outcome
is meaningless.)
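
For reference, the bare expected-value arithmetic with the standard
$1,000/$1,000,000 payoffs (setting aside the diminishing-utility point above),
as a function of the predictor's accuracy p:

    # Newcomb's problem: the predictor is correct with probability p.
    def ev_one_box(p):
        return p * 1_000_000  # box B is full iff one-boxing was predicted

    def ev_two_box(p):
        return 1_000 + (1 - p) * 1_000_000  # $1000, plus B if mispredicted

    for p in (0.5, 0.9, 0.999):
        print(p, ev_one_box(p), ev_two_box(p))

One-boxing dominates as soon as p > 0.5005, i.e. the predictor only needs to
be barely better than a coin flip.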

~~~
drdeca
I don't think the idea is that it can show a sign. Well, it could do that to
the simulation, I guess. But I think the idea is for reality to be
indistinguishable from the simulation, up until the time of death, after which
the simulation either does or does not contain torture. (I suppose the switch
could happen at any time, but in any case the threat is based on the idea that
one cannot tell whether one is the simulated version or not; so, to avoid
torture/decrease one's chances of torture, one does the thing.)

If the AI gave a sign it would be giving the sign to the simulation, which
could not actually change what happened in the AI's past. The AI does not
literally cause things retroactively, just based on predictions of its future
actions. Giving a sign to the simulation would be useless.

I do not believe that such a machine will be created, I just wanted to clarify
some things.

The hypothetical machine would supposedly motivate real people now due to
their inability to determine whether they are in the simulation or not. If
they cannot tell, then the idea is that the person in the present and the
simulation in the future would take the same actions (until the simulation
differed), and as such would both have the same incentives (they would in
essence be the same person), and so, to avoid some probability of being
tortured, would do the thing.

I reiterate that I do not believe that such an AI will come to be (it should
be preventable easily enough, even if it is possible) and in any case I do not
believe that enough information to create an accurate simulation of me would
be available by the time that such a machine would be created, even if such a
machine is possible. (I think quantum randomness and uncertainty principles
are sufficient to prevent obtaining enough information about me to create a
sufficiently accurate simulation of the world and me)

Also I am not entirely sure that I should count a simulation of myself as
being a person, let alone consider it to be me.

Also I am religious and don't believe that God would want people to act in
accordance with such a machine/I believe it would be wrong to worship such a
machine.

(though I expect that for the majority of the people here my other arguments
would be more convincing.)

Also, OK, yeah, $1000 might not be the best value for the example; the
specific value isn't the point.

(But yeah, I also generally would one-box it.)

~~~
colanderman
You are right, I wrote some of my comment before fully understanding the
article, but I left it on purpose:

I contend that the AI _would_ need to give us such a sign. If it did not, how
are we feeble humans to know whether we're contributing to the correct cause?
Maybe donating my life's savings to SI is actually detrimental, and I should
be giving my money to DARPA instead!

So therefore either we're un-blackmailable, because we have no clue what we're
doing anyway, or we need a sign from the great AI as to what to do.

Re: Newcomb; yes, but I see a different utilitarian solution when the values
change. Say the values are $10m and $100m. Well, then $10m is the obvious
choice. Both are life-changing amounts of money; more than I would know what
to do with. I'd much rather not risk throwing away a life-changing amount.

If you get the numbers right, you can get closer to the intention of the
paradox. Something like $100k and $10m; I'd really hate to lose the $100k, but
it would not be life-changing like the $10m.

~~~
drdeca
That's a good point. Maybe it would be based on intent?

But yeah, without any way of determining which thing such a future AI would
prefer, the AI's "influence" on the past would be greatly reduced.

The only reason I see it happening, then, is if it happens to be simulating
the past anyway, and maybe just does the thing for the people who already
thought of it; and then that really only makes sense if it was created as a
result of those people. Otherwise there probably wouldn't be much point, so it
seems rather unlikely.

I mean, what's the chance that someone attempting to bring it about would
significantly increase the chances of it happening? With our level of tech,
and with the small number of people convinced by such an argument.

Yeah, good point, I hadn't thought on that detail much before.

------
chubot
Meh, don't you have to believe you're living in a simulation to even consider
this?

I have seen arguments like this:
[http://www.simulation-argument.com/](http://www.simulation-argument.com/)

And I think they are leaving out the extremely obvious hole, and IMO a pretty
likely fact: simulation of the universe (and human consciousness within it) is
impossible.

At least with the technology today. I don't know how you can look at a stack
of x86 and Linux and believe the concepts there are identical to or capable of
simulating the physical universe. So many trivial problems are known to be
impossible or computationally infeasible.

Now, there are of course these quantum computers on the horizon. However, as a
philosopher (whom I unfortunately can't name) put it, nobody has really
demonstrated any connection between quantum computers and consciousness other
than that they are both mysterious to essentially everyone. This was more than
a decade ago though, so I'd be interested if there are any updates.

I used to think about these kinds of things, but came to the conclusion that
they are bordering on the absurd.

~~~
derefr
> IMO a pretty likely fact: simulation of the universe (and human
> consciousness within it) is impossible.

Why assume that our universe is running on a substrate exactly as constrained
as it is? Just because our universe doesn't permit, say, hypercomputers,
doesn't mean the universe simulating our universe doesn't permit them.

~~~
chubot
Because any such statements aren't falsifiable (see Karl Popper).

The assertion that we are living in a simulation has the same epistemological
status as the assertion that God is a pygmy turtle who created the physical
laws of the universe while smoking hashish.

You don't even have to think about Roko's Basilisk to get paranoid. You can
just suppose that the turtle has rigged the universe so that if you urinate
more than 4 times a day, your future children will slowly torture then murder
you.

If you suppose that we are living in a simulation, you can come up with any
number of ghastly thoughts. You don't even need to suppose the conditions of
Roko's Basilisk. That's just extra BS on top.

BTW, the reason I brought up "with the computers of today" is because I think
the only reason anybody believes simulation is possible is because computers
exist. I'm pretty sure that nobody in the 19th century was writing about
simulation, because they couldn't imagine a mechanism by which it is possible.

I still can't imagine a mechanism by which it's possible... but some people
can, simply because computers can do kind of impressive things that are not
really related to simulating the universe.

~~~
eli_gottlieb
> Because any such statements aren't falsifiable (see Karl Popper).

We're going to have a thread about LessWrong and people bring up Karl
"hypothetico-conjectural I DON'T REALLY BELIEVE THERE'S A SCIENTIFIC METHOD
YOU MAKE THINGS UP WITH YOUR MAGICAL HUMAN MINDS" Popper?

For God's sake, it's not as if unfalsifiability doesn't have a meaning in
Bayesian philosophy of science as well.

------
cpeterso
The basilisk AI reminds me of the role-playing game _Sufficiently Advanced_,
where:

      Each player is an agent of the Patent Office, an intergovernmental organization
      that polices and enforces intellectual property law across the universe. It is
      an open secret that the Patent Office is run by the Transcendental AIs, whose
      very beings are spread across time itself. The Transcendentals desire the
      survival of humanity - as much of it as possible - into the distant future, in
      order to ease their loneliness. Towards this end, they have hired you, so that
      you might save humanity.

[http://suffadv.wikidot.com/](http://suffadv.wikidot.com/)

------
kordless
Good luck finding enough power to simulate me when I'm running a fully
verified blockchain of all my thoughts. To simulate me, you'll have to have
more power than is stored in the universe.

To the moon with you Basilisk.

------
jonstokes
Fascinating. Also, if you read /John Dies at the End/ by Cracked.com's David
Wong, then you know that this malevolent being has a name: Korrok. Seriously,
Roko's Basilisk == Korrok. Amirite?

------
krisgee
I'd think if we're in a simulation you should act like Everything Is Real and
continue on as you would normally instead of messing up the data set.

edit:

It also occurs to me that if we're being simulated by an AI:

a) I don't really care about my super self that much.

b) Any AI that is simulating me can, by definition, read my mind, so it'll
know I wasn't serious anyway.

c) This AI can also plant thoughts in my head.

At this point there's really no point at all in considering anything else,
because either nothing matters and I have no control over anything anyway, or
it's reality, I do have control, and I'd do what I do normally.

~~~
derefr
Presumably, it's simulating you to observe your reactions to stimuli, so it
won't bother with C. It'll probably do a lot of B, though--such an agent is
usually called Omega, and there's a body of discourse on how to play optimally
against it in a game-theoretic sense (e.g.
[http://wiki.lesswrong.com/wiki/Newcomb%27s_problem](http://wiki.lesswrong.com/wiki/Newcomb%27s_problem)).

~~~
maxerickson
Imagining that I would refuse to take the potential million isn't going to
change the fact that I would take the potential million.

I guess Batman villain Two-Face has the best odds against this Omega fella.

~~~
oakwhiz
>I guess Batman villain Two-Face has the best odds against this Omega fella.

I find this interesting because I am curious about game-theoretic strategies
for playing against illogical opponents.

------
Bulkington
Well, not to get myself flagged as an anti-AI terrorist, but:

Tldr;

If one believes life right now is already one eternal torment

And if one reads about Roko's Basilisk

Then wouldn't one be compelled to fight the development of AI and the coming
singularity?

That's the real-world objection that the LessWrong people (Witches! Or at
least tools of the future...) are afraid of.

Otherwise, well-worn HN singularity talking points/metaphysics.

But at least the article gives me something to consider further -- needlessly
using up cycles in the Universal Super Computer. But does that make the
simulation stronger or weaker? Doh!

------
colanderman
What's the incentive for the AI to actually simulate and torture anyone?
Either we'll all be dead and can't call its bluff, or we'll be alive and it
doesn't matter anyway because we can't travel through time to rectify our
actions.

Since simulation requires energy which can be better used for other things,
the only AI which would actually carry out the torture is either irrational
and therefore unpredictable, or sadistic and therefore would torture anyway.

------
pdkl95
This nonsense requires an AI capable of computing a simulation at a rate
(much) greater than Bremermann's limit[1], on data that vastly exceeds the
Bekenstein bound[2].

[1]
[http://en.wikipedia.org/wiki/Bremermann%27s_limit](http://en.wikipedia.org/wiki/Bremermann%27s_limit)
[2]
[http://en.wikipedia.org/wiki/Bekenstein_bound](http://en.wikipedia.org/wiki/Bekenstein_bound)
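
Back-of-the-envelope, using the standard formulas from those two articles
(constants rounded; Earth picked arbitrarily as the system being simulated):

    import math

    hbar = 1.055e-34  # J*s
    c = 2.998e8       # m/s
    M = 5.972e24      # kg, mass of Earth
    R = 6.371e6       # m, radius of Earth

    # Bekenstein bound: max information in a sphere of radius R and energy E
    E = M * c**2
    max_bits = 2 * math.pi * R * E / (hbar * c * math.log(2))
    print(f"{max_bits:.1e} bits")  # ~1e75 bits just to hold Earth's state

    # Bremermann's limit: ~1.36e50 bit-operations/s per kg of computer
    ops_per_sec = 1.36e50 * 1000  # a generous one-tonne ideal computer
    print(f"{max_bits / ops_per_sec:.1e} s")  # ~7e21 s per single update pass,
                                              # far longer than the age of the
                                              # universe (~4e17 s)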

------
levjj
It seems Roko's Basilisk is like a self-fulfilling prophecy. The more people
believe it will exist, the more likely it is going to be created, which in
turn makes more people believe that it will exist, etc. This holds completely
independent of singularity, "super-intelligence" or the simulation argument.
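
A toy fixed-point sketch of that loop (the dynamics are entirely made up, just
to show the two regimes):

    # belief: fraction of people who think the basilisk is coming.
    # coupling: how strongly belief translates into building effort.
    def run(belief, coupling, steps=50):
        for _ in range(steps):
            belief = min(1.0, 0.5 * belief + 0.5 * coupling * belief)
        return belief

    print(run(0.10, coupling=0.9))  # decays toward 0: the prophecy fizzles
    print(run(0.10, coupling=1.1))  # saturates at 1.0: self-fulfilling regime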

------
partofroko
Or I don't really give a crap about my 'real' self outside the simulation,
since my existence is more part of Roko than I am of the other 'me' outside.
Why should a video game version of you care about the real you?

I'll take my simulated $1000 please!

------
nitrogen
This line from page 2 of the article seemed unnecessarily disdainful and
somewhat misinformed:

 _" I don’t think their projects (which only seem to involve publishing papers
and hosting conferences)..."_

"Publishing papers and hosting conferences" is exactly how science works.

~~~
anigbrowl
It's also how a whole lot of non-scientific enterprises work. I feel like
science is better characterized by referring to observation, production of
hypotheses, and experimental testing of same. You can be a hermit and still
engage in scientific activity, although it would somewhat limit your
productivity.

------
fchollet
Yudkowsky should spend more time writing his (really good) Harry Potter
fanfiction and less time messing with his (really bad) "philosophy" and "AI".
That would spare humanity such cringe-worthy ridiculousness as the above
article.

------
matthewwiese
What if you flipped a coin?

If a coin flip, dice roll, etc. is truly random, then determining your course
of action by a coin flip would ruin the AI's algorithm for "reading your
mind" or whatever, right?

Oh bugger this is silly anyway.

~~~
nitrogen
If I understand correctly, the claim is that the ultimate AI's algorithm for
"reading your mind" is to run an entire simulation of the entire universe, so
if you are ever faced with such a situation, there's a good chance you _are_
the simulation. If the universe is sufficiently deterministic, the AI would
then be able to predict that you would use a die to decide, and simulate the
exact amount of force you would have used to toss it.
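
As an analogy for that determinism claim (a seeded PRNG standing in for a
deterministic universe): anyone who can reproduce your initial state
reproduces your "random" choices exactly.

    import random

    def flips(seed, n=10):
        rng = random.Random(seed)  # the complete "initial conditions"
        return [rng.choice("HT") for _ in range(n)]

    you = flips(seed=42)        # your "random" coin flips
    simulator = flips(seed=42)  # the AI re-running the same state
    print(you == simulator)     # True: the coin buys you nothing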

~~~
matthewwiese
Right, I was considering that. However, that opens up a whole can of worms
regarding whether anything can truly be known in a deterministic universe, not
to mention a simulated one. A fascinating possibility for discussion, and a
welcome change from the mostly programming-based threads.

~~~
nitrogen
If you're willing to assume that it's possible to _simulate_ the universe
deterministically, then by definition you know _everything_ about the universe
(both simulation and real). But I don't think that is a justifiable assumption
at this point.

------
eruditely
How is this any different from trite gossip? Should we pull out the author's
minor mistakes, put a magnifying glass to them, and publish them all over the
web?

"please, Almighty Eliezer, don’t torture me." Have some taste.

------
dmfdmf
More sad than terrifying. Yudkowsky and friends have jumped the shark and now
use standard-issue PR tricks to get attention and their story in Slate under
cover of a fake article. What's the over-under on LessWrong and MIRI folding
in the next 5 years? I'm taking the under. At the very least I predict that
Yudkowsky will abandon his "autodidactic" pedigree and finally get his PhD.
He's learning that BS is one thing, but institutionalized BS is where the
action is.

~~~
eruditely
I am confused at what people think is going on at MIRI. Who thinks they're
scheming and doing this? Just read some of their recent posts/twitter feeds.
No one is "scheming away" like this.

Ah yes, people who are in favor of effective charities, Bayesian statistics,
and discrete mathematics, and who maintain that scholarship is undervalued.
Let's wish LW all the misery in the world over some utterly minor infraction.

~~~
dmfdmf
Your point is not clear to me and I want to comment but cannot. Can you
clarify your point? Thanks.

~~~
eruditely
I cannot tell why people speak with such vehemence and dislike of LW. You do
not even have to agree that AI risk is the biggest threat to mankind. I'm sure
many don't. You don't really have to agree about much.

Still, you can go there to talk about textbook advice, get some real advice,
and actually generate fairly good discussion. The sheer seething hate points,
to me, to a fairly malformed opinion about the community and culture.

tl;dr: some guy gets banned due to some careless mistake, a few people blow it
way out of proportion over and over, and now apparently you can get published
on Slate over b& drama.

~~~
eli_gottlieb
The LessWrongosphere is more cultish and wacky the further _out_ from the
center you go, while being more and more like CS/math academia the further
_in_ you go. It doesn't help that they got their start on Overcoming Bias,
which already attempts to write off everything in human life that doesn't
optimize for economic productivity as "irrational", and _then_ added the
participants of the HPMoR fandom who understand _just enough_ to stand in awe
of EY but not enough to participate usefully in real discussions about math,
logic, and statistics.

~~~
eruditely
Mere slander. What evidence is there that this happens? Overcoming Bias is a
blog dedicated to overcoming biases that many humans have; "economic
productivity" abstracts away too much from what it means. Economic
productivity translates into higher living standards, far superior healthcare,
and higher income per person.

Most on LW are fiercely individualistic. I do not share most of my views with
the LW crowd, but it is still a noticeable improvement over the intellectual
ghetto that is thinking everywhere else. Also, I hate fanfiction and think
most of it is generally lame, but let people have it, I guess.

I was introduced to LW by two anti-LW articles and failed to see what either
one was talking about.

~~~
eli_gottlieb
Saying that it's _more like math academia when you go further in_ (i.e.,
actually make an account and post on LW itself) is a _compliment._

------
nsxwolf
If the real universe the AI exists in has a finite life span, the torment
would not be eternal. The simulation would eventually end.

That should give these weirdos a modicum of comfort.

------
joesmo
That's just stupid. There seems to be no difference between these morons and
the morons who came before them believing in gods. This has nothing to do with
rational thought whatsoever; quite the opposite.

~~~
eruditely
Is 'rationalism' supposed to mean that everything, even the most minor thing,
must be correct all the time, which is probabilistically unachievable? On
LessWrong I have received much help with career advice, opportunities, and
support.

Look at the list of top contributors: is all that they have done supposed to
be forsaken over some minor 'blemish'? Who cares. Some guy just got banned
randomly. What about people who get hell-banned here? What about the guy,
obviously insane, who is creating that Christian operating system that can
communicate with God? Is banning him a smear against the mods? No. Relax; look
at the sum of good in the totality, not in the minor.

~~~
anigbrowl
No, but rationalism should include an awareness of the limits of one's own
capacity for rational thought/action. Not a limit to rationality itself, but a
recognition of the fact that we often lack sufficient information for complete
moral certainty, and should therefore be very cautious when enumerating our
priors, lest we make a logical decision based upon a false premise.

I'm making a general point here rather than one about Yudkowsky and co. I'm
rather a hard-core utilitarian myself but I can't really get into LessWrong -
seems too cliquish and I would end up engaged in fruitless arguments.

~~~
eruditely
Alright, well, do as you please, of course. I have about one post there, yet
it offers real and legitimate value. Read the discussion forums and blogs and
you'll see that it's not quite like that. I think a lot of people's opinions
of it are formed from a glossed-over examination. In reality it's not like
that; it creates an annoying initial impression (it did with me), but over
time I found the value.

It's just not possible to be so completely rational all the time. Cut some guy
a break. Come on.

~~~
anigbrowl
I don't think they're bad or that you ought not to spend time there. I'm just
saying it would not be a good idea for me, despite the many likable aspects of
the community there.

------
pfraze
Well written, good read. His final paragraphs tied it up neatly with some good
points of his own, and it's a fun thought experiment. Would be great to run a
poll to see who sides with the robot devil.

