
The Difference Between Rationality and Intelligence - frostmatthew
http://www.nytimes.com/2016/09/18/opinion/sunday/the-difference-between-rationality-and-intelligence.html
======
charlieflowers
I have known many clearly intelligent people who held irrational opinions
quite strongly. For example, huge believers in specific, concrete stock market
predictions based on some Fibonacci-sequence nonsense, to the point of
actively losing six figures through multiple failed predictions over a
decade+ and still believing.

So clearly, they're distinct.

But the question about the bank teller doesn't seem like the strongest
indicator. It seems more like a parlor trick that relies on our "system 1"
brain short circuiting before our "system 2" kicks in.

Better indicators would be ones that explore in-tribe vs. out-tribe perception
of facts, or that attempted to quantify one's awareness of one's own blind
spots.

~~~
thaumasiotes
> the question about the bank teller doesn't seem like the strongest
> indicator. It seems more like a parlor trick that relies on our "system 1"
> brain short circuiting before our "system 2" kicks in.

The question about the bank teller is a parlor trick that exploits normal
language processing, specifically the Gricean Maxim of Relevance (
[https://en.wikipedia.org/wiki/Cooperative_principle#Maxim_of...](https://en.wikipedia.org/wiki/Cooperative_principle#Maxim_of_Relevance)
). The maxim specifies that if somebody takes the trouble to point something
out to you (say, that someone you've never met used to be a very politically-
active left-winger), what they pointed out must be relevant to the
conversation.

It's basically another exercise in
[https://xkcd.com/169/](https://xkcd.com/169/) -- the experimenters aren't
grasping the lesson that "communicating badly and then acting smug when you're
misunderstood is not cleverness".

~~~
Artoemius
> if somebody takes the trouble to point something out to you (say, that
> someone you've never met used to be a very politically-active left-winger),
> what they pointed out must be relevant to the conversation.

Even if the brain believes that all things that are pointed out are relevant,
that still doesn't explain this problem with bad communication.

There are two statements, P and Q, and you have a question of what's more
probable: statement P or the conjunction P&Q. The experimenter (communicating
badly) makes you believe that Q has a very high probability. However, it is
still logically impossible for P&Q to be more probable than P.
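The logical point here is easy to check with a tiny simulation (the probabilities below are invented for illustration; nothing here comes from the article's study):

```python
import random

random.seed(0)

# Invented probabilities: P ("bank teller") is rare, while Q
# ("feminist") is made very likely, mimicking what the vignette
# suggests about Linda.
N = 100_000
p_P, p_Q = 0.05, 0.90

count_P = count_PandQ = 0
for _ in range(N):
    P = random.random() < p_P
    Q = random.random() < p_Q
    count_P += P
    count_PandQ += P and Q

# Every P&Q case is also a P case, so no matter how likely Q is,
# the conjunction can never come out more frequent than P alone.
assert count_PandQ <= count_P
print(count_P / N, count_PandQ / N)
```

Pushing p_Q all the way to 1.0 only makes the two frequencies equal; the conjunction still never wins.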

~~~
thaumasiotes
I relate the following discussion from memory:

"Long ago", a psychologist did the following experiment on children of
different ages: he'd lay out one line of M&Ms with broadly uniform spacing,
and another line, longer, with the same spacing, and ask the child "which line
has more M&Ms?" After they correctly indicated that the longer line had more
M&Ms, he'd adjust the spacing between the M&Ms in the shorter line such that
it was now longer. Then he'd ask again, "which line has more M&Ms?" He found
that, below a certain age (I hope this finding was approximate), children
would indicate the longer (and sparser) line as now having more M&Ms, and
above the threshold age they would point to the correct, denser line.

His published papers were viewed as establishing a major threshold in child
development, where children started to be capable of assessing reality
independently. "Not so long ago", someone decided to replicate this landmark
finding.

The twist was, in the replication, instead of asking the children "which line
has more M&Ms now?", they said "OK, now you can take one of the lines and eat
all of the M&Ms that were in it".

And the threshold effect disappeared completely. Children of all ages chose
the line with numerically more M&Ms, regardless of linear extent.

When the problem is _your experimental design_ , you're not warranted in
drawing conclusions. You've already demonstrated that you're not qualified to
speculate on them.

> that still doesn't explain this problem with bad communication.

> There are two statements, P and Q, and you have a question of what's more
> probable: statement P or the conjunction P&Q. The experimenter
> (communicating badly) makes you believe that Q has a very high probability.
> However, it is still logically impossible for P&Q to be more probable than
> P.

Your objection only makes sense if (1) human-to-human communication consists
of messages that precisely specify their intended meaning, and (2) people
believe that (1) is true. (1) and (2) are both false.

------
dmfdmf
> people tend to make decisions based on intuition rather than reason.

I think these types of article have a too-narrow definition of rationality. I
think that emotions, feelings and intuition are subconscious processes but
still part of the faculty of reason. The mistake is to think that rationality
= conscious-logical-deduction, but anyone who has solved some complex problem
knows the experience of working on a problem for days, weeks, months or even
years and then one day the solution just hits you, usually when you aren't
actually working on the problem.

~~~
paulddraper
Are you suggesting that reason include intuition?

I don't believe that would ever be a helpful definition of the word.

---

Reason is conscious; intuition is subconscious.

Of course, they're related: reason can train intuition, and intuition can
catalyze reason. But I disagree that a conversational distinction is
inappropriate.

~~~
dmfdmf
I'm not suggesting it, I outright stated it in my comment. My point is that
_potentially_ logical connections can (or must) be made subconsciously and
that that is an aspect of how the faculty of reason works.

Here is an example you experience every single day and even _right now_ if you
type a reply to my comment. If you want to express an idea in words you don't
consciously and methodically select words from a dictionary, no. The words
come automatically with nary a thought, you just issue the command to summon
the words to express your idea.

Without this ability you (and more specifically reason) could not function at
all, ergo, intuition and subconscious processes must be a component of reason,
reason being our faculty for processing information.

~~~
eternalban
I would think no rational being would argue that conscious psychological
phenomena are disconnected from subconscious phenomena. It is eminently
rational to note that we do not have a complete grasp of the inner workings of
human consciousness and _mind_.

> Here is an example you experience every single day and even right now if you
> type a reply to my comment. If you want to express an idea in words you
> don't consciously and methodically select words from a dictionary, no. The
> words come automatically with nary a thought, you just issue the command to
> summon the words to express your idea.

> Without this ability you (and more specifically reason) could not function
> at all, ergo, intuition and subconscious processes must be a component of
> reason, reason being our faculty for processing information.

To claim an equivalence between reason and intuition based on that reasoning
would necessarily require that you also include _irrational_ conscious
behavior. Which I hope will show you that your position is, in effect, an
empty proposition that sheds no light on the matter.

Intuitive thought processes and rational thought processes, in my mind, have
the same relationship as _music_ and _mathematics_. Clearly there is a common
ground, but they are distinct.

Less whimsically, one may arrive at a decision based on intuition that is
subsequently validated by events. However, this arrival does not permit a
retracing of steps to guide future decisions. Of course it is possible to
extract a _general law or principle_ from the _experience_, but that is not a
given.

Reason, on the other hand, is the application of extant, and formulation of
new, systemized understanding. However, that should not be taken as a measure
of the _superiority_ of reason to intuition, for it is equally true that
intuition can at times give us wings to go where reason cannot take us.

------
kijin
Intelligence is raw processing power. Rationality is correct software. A fast
CPU can run buggy programs just as quickly as it can run correct programs.

Unless you're just trying to show off your Passmark scores, you're better off
running a correct program on a Raspberry Pi than a buggy program on a quad-
Xeon rig.

Many intelligent people waste their brain cells trying to argue that the moon
landing was fake, vaccines cause autism, etc. just as often as powerful
computers waste electricity running malware. That doesn't make them any less
intelligent. They're just really good at executing poorly written software as
quickly as possible, which often helps disguise the fact that their software
is a pile of crap in the first place.

------
nobrains
I would argue that the test was irrational:

"(A) Linda is a bank teller or (B) Linda is a bank teller and is active in the
feminist movement. Eighty-five percent of the subjects chose B".

Many people could have assumed that being in set (B) automatically means set
(A) = NOT set (B).

Set (B) being more precisely defined, it can easily be assumed that anyone in
set (B) is not in set (A). The participants can easily assume that a person
can be in only 1 set (no rules were defined).
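Under that exclusive reading, the comparison subjects actually make is a different one, and it can genuinely favor (B). A sketch (all numbers invented, independence assumed):

```python
import random

random.seed(1)

# If subjects read option (A) as "bank teller and NOT a feminist",
# then for a Linda whose description makes "feminist" very likely,
# (B) really is the more probable option under that reading.
N = 100_000
p_teller, p_feminist = 0.05, 0.90

a_exclusive = b = 0
for _ in range(N):
    teller = random.random() < p_teller
    feminist = random.random() < p_feminist
    a_exclusive += teller and not feminist
    b += teller and feminist

# The "wrong" answer wins under the exclusive interpretation.
assert b > a_exclusive
print(a_exclusive / N, b / N)
```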

~~~
csallen
So what you're saying is the participants went with their intuitive feeling
about how the question might be structured as opposed to applying deliberate
and effortful analysis/reasoning to understand what was being asked.

~~~
Dylan16807
Such reasoning would not allow reading the question-asker's mind, so no. You
can rationally arrive at either interpretation of the question.

~~~
csallen
As you yourself pointed out, the idea that A = !B is an assumption. It's
not supported by anything on the page whatsoever. In fact, it's explicitly
rejected by what's on the page. Simply reading and reasoning about the
answers, which are completely unambiguous, would lead one to conclude that B
is a subset of A. Therefore it must be false that A = !B.

There's no mind reading necessary, only rational thought.

Listening to internally-generated assumptions without bothering to fact-check
them via explicit reasoning is intuition.

~~~
Dylan16807
Language is naturally ambiguous and dependent on context. It's not lack of
rationality that causes people to interpret questions in different ways. You
can only show lack of rationality in a situation like this if they provide the
wrong answer _and_ understand the question.

~~~
csallen
The point isn't that people aren't _capable_ of being rational. If they read
the question carefully and think hard about it, of course they'll be able to
figure it out. But that takes a lot of conscious effort.

The point is that people aren't often willing to put in that effort, so they
default to intuition. What's more, they might not even realize they're doing
this.

~~~
Dylan16807
But there is a huge difference between 'intuition about conversation' and
'intuition about the state of reality'. The former is not tied to rationality
as much as practice at fighting through obtuse language. The latter is much
closer to telling you whether people are thinking rationally by default.

Compare
[https://news.ycombinator.com/item?id=12524799](https://news.ycombinator.com/item?id=12524799)
this comment, where the kids already know which pile has more candy, but they
don't understand what the weirdly-formatted question is asking.

~~~
csallen
Interesting experiment. Based on thaumasiotes' description, I don't see any
problem with the experimental design, only with the conclusion drawn from it.
_Something_ changed between the younger and older kids' ability to understand
the question that was asked, and that in and of itself is interesting.

I object to the characterization of these experiments as "communicating badly"
or relying on "obtuse language". The questions asked in both experiments are
exceedingly straightforward and unambiguous. If "which is longer" and "which
is more likely to be true" fall below the bar for designating a question as
confusing, then so does every other question imaginable.

Conversation is an integral part of the "real world", and it's almost always
_far_ more ambiguous than the examples we're examining. Yet we're still expected
to parse and decipher conversation in order to survive in society: to thrive
at your job, to vote for the best candidate, to get through school, etc.

It's in no way useful to design an experiment that attempts to avoid
conversation in a world that runs on conversation. There exist people who can
consistently pick the correct answer in all of these experiments, no matter
how the question is worded. And there exist people who will get it wrong
unless the question is worded perfectly. Who is more rational? Who is better
at grasping "the state of reality", as you put it? Who would you rather have
on your team at work?

~~~
Dylan16807
There are interesting things you can ask about why people would understand or
not understand a question.

But if they don't understand it, you can't conclude anything at all about
whether they know the answer.

It doesn't matter if a question is formatted in a "straightforward" way.
Nothing is very straightforward to a small child, and in normal conversation
"A or A^B" is usually a mistake.

Honestly, people say things they don't mean all the time. A literal
interpretation of conversation has a lot of potential to hurt you. It's also
irrational. The rational goal is to figure out the most likely intended
meaning(s). It's not to figure out what the meaning would be in a
counterfactual world where people use language in a more logical way.

> There exist people who can consistently pick the correct answer in all of
> these experiments, no matter how the question is worded. And there exist
> people who will get it wrong unless the question is worded perfectly. Who is
> more rational?

If people figured out that the experimenter was asking a nonsensical and
purely logical question, then that's a valuable skill. But it's not really
connected to whether they are generally rational or irrational. It's also not
connected to whether they understand basic probability.

> Who is better at grasping "the state of reality", as you put it?

In this experiment, you can't tell. When it comes to the probabilities about
Linda, people's understanding is a total mystery if they didn't know what the
question wanted.

You can figure out how they interpret questions on one axis, but you would
need other tests to figure out why. They might be more rational, or they might
be more literal, even to an irrational level.

> Who would you rather have on your team at work?

Depends on why they got the answer right.

~~~
csallen
I think we disagree on two big things, and the rest is fluff.

First, I still don't think it's fair to characterize the questions ("which is
more likely" and "which has more M&Ms") as "nonsensical and purely logical." I
wouldn't even call the questions atypical. We contend with simple questions
like this all the time in society. The intent behind both questions is
unambiguous, and there are no reasonable alternative interpretations, unless
you assume the question asker is simply mistaken.

Second, we're on different pages about what the experiment is evaluating. I
don't claim that the people who answer incorrectly are less _capable_ of
rationality, nor that they don't understand probability. What I believe is
that they are less likely to recognize _when it's best_ to apply their
logical toolset. As a result, they're more likely to use intuition to answer
questions that are best answered via careful analysis. This appears exactly
like failing to understand the question itself.

To wit: The younger kids used the length of the line to answer "which has more
M&Ms", but the older kids immediately recognized that it's better to use a
more in-depth analytical tool: counting. I hypothesize that the younger kids
are simply not as good (yet) at knowing when to apply this tool. Intuition is
easier and less effortful, thus it's the default unless we make ourselves
think harder. This pattern extends to adults, too. How many people think we
should be tougher on crime because it "feels" like there is more violence
today than ever before, yet don't even consider that they need to look at some
actual numbers to justify this conclusion?

~~~
Dylan16807
It's specifically a question of "which is more likely, A or A^B" that is
weird. The typical "which is more likely" has non-overlapping categories. It
doesn't even need to be a mistake. If I ask whether you want a sandwich or a
burger, it's obvious that 'sandwich' actually means 'non-burger sandwich'.

I'm claiming that they are not using intuition instead of analysis. Or at
least, you can't tell that from their answers. There are logical reasons to
interpret the question as non-overlapping sets. They could be performing a
very careful analysis and still pick the second option.

~~~
csallen
I only disagree with your last sentence. There's no way to perform a careful
analysis of option A vs option A+B while maintaining the assumption that they
are non-overlapping sets. Simply reading the answers proves that assumption
wrong. Thus, it seems way more likely that the explanation is a _lack_ of
thinking/analysis... i.e. intuition. People are answering the question they
expect instead of the question that's in front of them, because that's easier:
[http://lesswrong.com/lw/9l3/the_substitution_principle](http://lesswrong.com/lw/9l3/the_substitution_principle)

But you are right, we can't know that from their answers alone. There are
numerous possible reasons for picking the wrong answer, so further
experimentation is required if we really want to know why.

------
cmrx64
If this interests you, check out [https://intelligence.org/rationality-ai-
zombies/](https://intelligence.org/rationality-ai-zombies/).

~~~
davidgerard
If that interests you, ask for a chart of the real-world successes of
LessWrong readers compared to non-LW readers.

Also about the mosquito nets versus AI dialectic in Effective Altruism.

~~~
JoshTriplett
> Also about the mosquito nets versus AI dialectic in Effective Altruism.

There's a reasonable debate here about short-term and long-term thinking. "How
to save the most lives the most quickly" is not the only reasonable altruistic
goal to optimize for. "How to save the most lives over the long-term, and save
them permanently rather than just delaying a current inevitable" is worth
consideration. And there are already far more people willing to support
mosquito nets, and far too few willing to support AGI research, or SENS for
that matter.

~~~
davidgerard
"AGI research" in practice means "give money to MIRI", whose track record of
results on pretty much _any_ measure is less than impressive.

It is really (and literally) "donate to stuff that demonstrably works" versus
"donate to MIRI, with its terrible track record, to do something supported
primarily by Pascalian arguments."

c.f. [http://www.vox.com/2015/8/10/9124145/effective-altruism-
glob...](http://www.vox.com/2015/8/10/9124145/effective-altruism-global-ai)

Yudkowsky may have (probably) coined the phrase "effective altruist", but
people who aren't living sci-fi dreams are in EA now, and asking rather
pointed questions.

Never confuse "Effective Altruism" and "altruism that is effective". Whatever
"effective" actually means in the given context.

Getting back on-topic, there's still no evidence - e.g., a track record of
results - that anything within a mile of MIRI/LW is actually any good at all
for real-world effectiveness, and there are things like mosquito nets versus
AI as evidence against it. LW sells a sort of "rationalitiness": it sure
_feels_ like rationality.

------
skybrian
I wonder sometimes whether rationalization came before logic. Perhaps using
logic in a rigorous way (as in a mathematical proof) evolved out of persuasive
techniques that are intended to get people to believe you, regardless of
whether you're right?

Looking for logical flaws in arguments might have arisen as defense against
persuasive techniques.

~~~
slachterman
You'd probably be interested in the research of Hugo Mercier and Dan Sperber
on this very point: [http://www.nytimes.com/2011/06/15/arts/people-argue-just-
to-...](http://www.nytimes.com/2011/06/15/arts/people-argue-just-to-win-
scholars-assert.html) People Argue Just to Win, Scholars Assert - The New York
Times

------
dsacco
This is really interesting to me. I've always associated rationality with
intelligence. Naturally this leads me to wonder how people with high
intellectual aptitude have strangely inconsistent opinions.

But after reading this article, intuitively it makes sense to me. I'm eager to
read the linked study for further detail. I feel as though the researcher's
holistic approach to quantifying rationality (decoupling rationality into IQ
and "RQ") is more precise than assuming the two correlate.

One of the things I'd be very curious about, assuming this study reproduces
well, is whether or not it can be reproduced for other aptitude tests. We
already have EQ, which will make three measures if this catches on. Can we
keep decoupling traits from intelligence to measure them more granularly and
accurately? What else could we move into its own category?

~~~
bmh_ca
The capacity for abstract thought and the decision to classify it are two
different concepts, but easy to conflate. As we form classifications, myelin
sheaths form around our neurons and abstracting concepts - even if apparent
to others - becomes more challenging.

~~~
socmag
> "myelin sheaths form around our neurons and abstracting concepts"

Roll me one while you are at it

~~~
Dylan16807
That is not how that "and" works.

'Myelin sheaths form around our neurons. -and- Abstracting concepts becomes
more challenging.'

~~~
bmh_ca
Ty

~~~
socmag
Maybe I missed your point, but as far as I am aware myelination is a very good
thing.

~~~
bmh_ca
Myelination is a double-edged sword. We see in judo practitioners the dramatic
increase in reflexes and competitive advantage after substantial practice. On
the other hand, similar repetition in other areas can lead to difficulty
constructing other pathways. I also believe myelination is responsible for the
gradual decrease in appreciation of one's favourite songs.

In other words it can make things faster, but limit capacity for change and
reduce broader neurological firing.

Sorry I haven't explained it very well - on the run with a phone.

I would strongly encourage a read of Pocket Guide To Interpersonal
Neurobiology by Daniel Siegel, who touches on the neurobiology and broader
aspects of the impact in an accessible narrative.

The kernel of what I seem to recall driving my original comment was that
myelination occurs at deep levels during what we call the formative years,
but if not adequately developed, people can be or become susceptible to
various negative psychological traits such as addiction, susceptibility to
brainwashing, poor decision making, and lack of critical thinking. As Max
Planck put it, people don't change their minds to accept an idea; rather,
people die and everyone that's left just accepts the idea as true. A large
part of what he was getting at is, I believe, a direct consequence of
myelination - namely the formation of faster but hard-to-change pathways in
the brain.

I hope that explanation helps.

------
emmelaich
I often muse on the difference between the

    
    
        logical
        rational
        sensible
        wise
    

Throw in

    
    
        freethinker
        humanist
        skeptic
        atheist
    

and a few more.

In too many cases, people conflate all of these.

------
dogma1138
Credit Suisse made a write up on some of the elements of RQ.

[https://doc.research-and-
analytics.csfb.com/docView?language...](https://doc.research-and-
analytics.csfb.com/docView?language=ENG&format=PDF&source_id=csplusresearchcp&document_id=1048541371&serialid=mofPYk1Y4WanTeErbeMtPx6ur0SCIcSlaZ7sKGPdQQU%3D)

It's based on a confidence calibration test available here:
confidence.success-equation.com

------
tvanantwerp
I wish there were more info about this computer training to minimize cognitive
bias. Wouldn't mind trying that out.

The distinction between rationality and intelligence (I usually think about it
as rationality and logic) is an important one that is rarely made. Economists
especially do a bad job of this, despite being the sort of people who should
know better. Thanks to cognitive biases, limited information, and costs
associated with getting past those two hurdles, people are rarely logical. But
we are typically rational; we make the best decision available _given_ our
beliefs, evolved instincts, limited available information, etc. That can
result in illogical behavior, but hardly unpredictable/erratic behavior.
Understanding that people are maximizing utility based on a function that
includes lots of tricky psychological factors--not just pure logic--is helpful
in empathizing with others.

~~~
tamana
Rationality is logical behavior.

------
ilaksh
Interesting. Had not realized rationality was learnable.

Another leading contributor to opinions that others may consider to be
irrational is existing beliefs. I am not a scientist but I am guessing belief
systems are a bigger factor than irrationality in most cases where we
generally label others as being irrational or unintelligent.

------
hoodwink
Where can I get my hands on this computer training?

------
js8
I don't think they should call it "rationality"; it's kind of a loaded term. I
think it would be better to call it "critical thinking".

Edit: Not sure why I was downvoted. I referred to this part:

"R.Q. would measure the propensity for reflective thought — stepping back from
your own thinking and correcting its faulty tendencies"

That's almost a perfect definition of what _critical thinking_ is - being
critical to your own thought.

------
sheeshkebab
Is that game available for download? (Missing: the pursuit of Terry Hughes)

------
known
[https://en.wikipedia.org/wiki/List_of_cognitive_biases](https://en.wikipedia.org/wiki/List_of_cognitive_biases)
are irrational

------
brightshiny
I'd draw a distinction between _intellect_ and _intelligence_. It's quite
possible to have a powerful intellect and still make poor decisions, i.e. be
stupid.

Intelligence is about making good decisions. Intellect can aid in that, but
it's not the only factor, and after a certain level it wouldn't even be the
primary factor.

------
programminggeek
Irrationality rules the world.

------
wonkaWonka
The article presents the following experimental premise:

    
    
      “Linda is 31 years old, single, outspoken 
       and very bright. She majored in philosophy. 
       As a student, she was deeply concerned with 
       issues of discrimination and social justice, 
       and also participated in antinuclear 
       demonstrations.” 
    
      Then they asked the subjects which was more 
      probable: 
    
      (A) Linda is a bank teller 
    
      or 
    
      (B) Linda is a bank teller and 
          is active in the feminist 
          movement.
    

This example is bullshit, for several reasons.

We have several known personality attributes provided in the example, and all
of them relate to the subjective opinions of an individual and are designed to
provide belief in the potential for associations with similar political
alignments.

Then we're provided with 2 choices, and each choice couples a previously
unknown detail to the individual described, regarding occupation, with no
opportunity to exclude the occupation.

We are asked to make an assumption about the individual, based on previously
provided information, and guide the formation of our assumption with
intuition.

According to _The Letter Of The Law_, the example asks the respondent to
parse _probability ONLY_, and then penalizes according to the transitive
property of equality, because technical interpretations of probability state
that, when information has not been previously presented, an individual trait
in isolation is more probable than a coupling of rare traits.

    
    
      Which one is more probable? Oh wait, you're 
      wrong because you misinterpreted our 
      context-sensitive definition of the word 
      "probable." You lose.
    

According to _The Spirit Of The Law_ , the example appears to present the
respondent with a set of details, and prompt the respondent with a request to
parse the details and demonstrate a display of reading comprehension, such
that they show they've observed the relevant details, and drawn a conclusion
by associating multiple cultural norms of common political alignment.

    
    
      With the presentation of choice B, and
      based on other personal details about
      Linda, do you feel it likely that Linda 
      could be a feminist? Keep in mind that 
      we've offered clues to her political 
      alignment, and these may play a role 
      in the correct answer.
    

The example reads as:

    
    
      Given: [0, A, 2, C, 4]
    
      Is [E] likely?
    
      or
    
      Does [E, 6] make more sense?
    
    

But claims to present:

    
    
      Given: [0, A, 2, C, 4]
    
      Which is most probable?  
    
       > [$]
    
      or
    
       > [$, 99]
    

So, the example is an experiment in providing a loaded question, and then
changing the context of expected interpretation, and then declaring proof that
people are prone to misinterpretation.

The example is like asking someone if they were happy about who won The World
Series, and then telling them you're not inviting them to a soccer game,
because of their opinions on baseball.

The example the authors have provided is designed to promote assumptions,
without providing adequate context for expectations, and that is fucking
stupid.

~~~
rainsford
Your reaction is exactly the reason the question was constructed that way. A
lot of the available information would lead you to conclude that the provided
details were important and that they support a specific option being more
likely.

But if you rationally consider the options, it's apparent that option B can't
possibly be more likely than option A, regardless of the information
presented, because option B is by definition a subset of option A. It is not
possible for Linda to fit option B but not option A, so option B can't
possibly be more probable.

The fact that it's a leading question designed to promote assumptions is not a
flaw; it's the whole point of the experiment. Even intelligent people are
supposed to be led to the wrong conclusion because they try to analyze all the
available information. But rational people are supposed to recognize that the
presented information is irrelevant and that they can pick the right answer
even if they don't know anything about Linda.

In the interest of full disclosure, I'll mention that I had exactly the same
reaction regarding the quality of the question. It was only after some
consideration that I realized this may have been intentional on the part of
the people conducting the experiment.

~~~
jimmaswell
Some amount of people probably assume that asking "What's more likely, A, or A
and B" intends to ask for a comparison between A^~B and A^B, not simply A and
A^B, in which case it would be an error in communication rather than an error
in rationality.

------
ilovecookies
Post nytimes on hackernews...hmmf

The problem set was poor in terms of intelligence and rested on a false
assumption of a rationality/intelligence dichotomy. Of course it's more likely
that she is a bank teller than a bank teller plus (insert whatever). This is
probability theory 101. But hey, it is nytimes you're reading, so what would
one expect anyway.

I was more thinking in these lines:

How rational is it for someone to be outspoken against nuclear energy when there hardly exists any real, viable renewable alternative to substitute for that energy source to begin with? Do it like Germany and rely on importing oil from Russia?

~~~
paulddraper
> there hardly exist any real valid renewable alternative

Most people don't know that. Exaggerations like
[http://thesolutionsproject.org/](http://thesolutionsproject.org/) make them
think that wind/solar/tidal could put more than a little dent in energy.

~~~
sinxoveretothex
It is possible, at least in principle, to transition to only using solar for
example.

Musk gave the example, when unveiling the Powerwall, that to power the whole
of the US would only require the whole Texas Panhandle to be covered in panels
25% efficient.

I remember doing the calculation even with the current 15% efficiency and it
came out that if each person has something like 9 m^2 of solar panels on their
roof, the whole energy demand of the US is covered.

Obviously, that's a shit ton of panels and batteries and we can't just pack
them all on the Panhandle for example and it's entirely possible that nuclear
is much cheaper (I don't know where to get the numbers to run this
calculation).

But it is possible. I think a better criticism is that it's too risky and too slow an avenue in comparison to the risk of climate change.

~~~
paulddraper
> the whole of the US would only require the whole Texas Panhandle to be
> covered in panels 25% efficient

Are we building Dyson spheres next?

I believe largish orders of mass-market solar panels (currently ~15%
efficient, as you mentioned) are about $10/sq ft, but let's make that $5 for
sake of argument. So $218k for one acre.

The Texas Panhandle is 16.6 million acres. So assuming solar becomes nearly
twice as efficient for half the cost, the solar panels alone for this venture
would be $3.6 trillion. I hesitate to guess what construction, installation,
batteries, or infrastructure for 16 million acres would add.
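For anyone who wants to check the arithmetic, here is the same back-of-envelope in Python (the $5/sq ft price and 16.6 million acres are the figures assumed above, not market data):

```python
SQFT_PER_ACRE = 43_560        # definition of an acre
price_per_sqft = 5            # $/sq ft, the halved "for sake of argument" price
panhandle_acres = 16.6e6

cost_per_acre = price_per_sqft * SQFT_PER_ACRE
total_cost = cost_per_acre * panhandle_acres

print(f"${cost_per_acre:,.0f} per acre")          # ≈ $218k
print(f"${total_cost / 1e12:.1f} trillion total") # ≈ $3.6 trillion
```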

\---

So I concede that given best-case economics, impractical funding, and
technology that doesn't exist, it could perhaps be done.

More of a XKCD "What If?" scenario than a serious solution.

~~~
sinxoveretothex
$3.6 trillion is a lot of money if you intend to buy it all in bulk at once today, but comparatively less over, say, 10 years. US GDP is around $18 trillion/year; 2% of GDP is a lot, but it's certainly realizable.

Besides, it is somewhat meaningless to look at the price of solar panels
without a reference. As I stated above, I have no idea how to estimate the
cost of nuclear. I also don't know if the new kind of reactor (like traveling
wave reactors) are anything more than a concept (it seems like breeder
reactors are pretty much at the level of prototypes today).

I think the important point is what the price difference is, rather than whether a one-time purchase of solar panels would cost an amount too large to fit in one's wallet.

~~~
paulddraper
$3.6 trillion is only a _portion_ of just the _materials_ cost. It's like the
price of lumber, as compared to the total cost of building a house.

Elon chose that particular spot because the economics of solar are geography-dependent. The infrastructure costs of distributing 100% solar would be astronomical.

I think you could pat yourself on the back if you figured out how to make it
work for only 20x the cost of just the panels.

\---

> As I stated above, I have no idea how to estimate the cost of nuclear

There is extensive data about nuclear costs, both estimated and empirical.

In 2010, the U.S. Energy Information Administration estimated total capital
costs for nuclear power at $5.4k per kW.
[http://www.eia.gov/oiaf/beck_plantcosts/index.html](http://www.eia.gov/oiaf/beck_plantcosts/index.html)
(This is a rather safe estimate -- the EIA reported inflation-adjusted costs
for actual, real-life plants built in the 1960s of $1.5k per kW. But I'll use
the higher estimate.)

The U.S. used 3.9 trillion kWh last year, twenty percent of which is already
nuclear. So that's $1.9 trillion of capital for 100% of U.S. electricity to
come from nuclear.
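The nuclear figure works out as follows; a sketch using only the numbers quoted in this comment (3.9 trillion kWh/year, $5.4k/kW, and the 20% that is already nuclear):

```python
HOURS_PER_YEAR = 365 * 24

annual_kwh = 3.9e12           # U.S. electricity use, per the comment
avg_load_kw = annual_kwh / HOURS_PER_YEAR   # ~445 million kW average
capital_per_kw = 5_400        # EIA 2010 estimate, $/kW
non_nuclear_share = 0.8       # 20% of generation is already nuclear

capital_needed = avg_load_kw * capital_per_kw * non_nuclear_share
print(f"${capital_needed / 1e12:.1f} trillion")   # ≈ $1.9 trillion
```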

\---

At this point, coal makes the most economic sense of course. And the U.S. has
enough for the next couple centuries.

~~~
sinxoveretothex
> In 2010, the U.S. Energy Information Administration estimated total capital
> costs for nuclear power at $5.4k per kW.

Matches what I found here:
[http://www.eia.gov/forecasts/capitalcost/xls/table1.xls](http://www.eia.gov/forecasts/capitalcost/xls/table1.xls)
($5,530/kW) (page with full report here:
[http://www.eia.gov/forecasts/capitalcost/](http://www.eia.gov/forecasts/capitalcost/),
release date 2013)

Thing is, that is literally more expensive than photovoltaic at around
$4,000/kW (same source). Even the operation and maintenance is about 3.5 times
as expensive (~$93 vs ~$26).

I must say that I did not expect the evidence to be in my favour here (as
evidenced by my initial comment).

> Elon chose that particular spot because the economics of solar are
> geography-dependent.

Yes, the Sun delivers more energy at lower latitudes:
[http://energy.gov/maps/solar-energy-potential](http://energy.gov/maps/solar-energy-potential)

> The infrastructure costs of distributing 100% solar will be astronomical.

> I think you could pat yourself on the back if you figured out how to make it
> work for only 20x the cost of just the panels.

What's nice about solar is that it can be very localized (with the efficiency
loss described above).

I agree that distributing all the power required for the whole of the US from
the Panhandle would require… interesting distribution architecture.

Going by the solar potential map, I guesstimate the Panhandle average to be
about 525 Wh/ft^2/day and the worst parts of Washington and Oregon to be about
350 Wh/ft^2/day. So let's assume a national average of 440, which means we
need about 120% of the panels needed before. Actually, let's bump that up to
140%, both because my number is rather rough and because the population of the
US is mostly on the coasts.

Now, what do you expect the other costs to be? Probably batteries. Going by a
somewhat sketchy graph on the Internet, I'll approximate the energy use as
constant throughout the day, with a 2x peak just after the sun sets. So let's
assume that one third of the energy needed through the day doesn't need to be
stored at all; it's used as it's generated.

Going by:
[http://www.eia.gov/beta/MER/index.cfm?tbl=T02.01#/?f=A&start...](http://www.eia.gov/beta/MER/index.cfm?tbl=T02.01#/?f=A&start=200001)
the US used about 100,000 trillion BTU => 100E3 * 1E12 BTU * 0.293071 Wh/BTU =
29 quadrillion Wh, AKA 29 trillion kWh (10 times your figure for some reason).

So 29E12 kWh * 2/3 * 1/365 = 53E9 kWh of storage needed

Now, Tesla sells the Powerpack for about 2.6 M$/4MWh. 53E9 kWh * 2.6 M$/4E3
kWh = 34.5E12 $ AKA 34 trillion $.
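As a sketch, the storage sizing above in code (all inputs are this comment's assumptions: the 29 quadrillion Wh usage figure, 2/3 of daily use stored, and Tesla's quoted $2.6M per 4 MWh):

```python
annual_kwh = 29e12                       # from the EIA BTU conversion above
stored_fraction = 2 / 3                  # 1/3 is used as it's generated
daily_storage_kwh = annual_kwh * stored_fraction / 365

price_per_kwh = 2.6e6 / 4_000            # Powerpack: $2.6M per 4 MWh => $650/kWh
total_cost = daily_storage_kwh * price_per_kwh

print(f"{daily_storage_kwh / 1e9:.0f} billion kWh of storage")  # ≈ 53
print(f"${total_cost / 1e12:.0f} trillion")                     # ≈ 34
```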

I don't know what the economies of scale would look like on producing ~13
million of those 4 MWh Powerpacks, but I'm pretty sure it would make the cost
substantially lower (besides, those numbers are not for bulk orders).

As for panels, going by your numbers: 10 $/ft^2 = 10 $ * (3.3 ft/m)^2 ≈ 109
$/m^2. Assuming 15% efficient panels, 420 Wh/ft^2/day * (3.3 ft/m)^2 * 0.15 ≈
686 Wh/m^2/day. So 109 $/m^2 / 0.686 kWh/m^2/day ≈ 159 $ per kWh of daily
output.

30E12 kWh/year / 365 = 82E9 kWh/day, times 159 $/(kWh/day) = 13 trillion $.

Cross-checking against your capital-cost figure: 30E12 kWh / (365 * 24 h) *
4000 $/kW = 13 trillion $, so the two estimates agree. (My first pass
multiplied the annual kWh directly by the daily rate and came out 365 times
too high, at 4.77 quadrillion $.)

That said, the 10$/ft^2 figure is for small residential installations
([http://solar-power-now.com/solar-panel-cost-per-square-foot/](http://solar-power-now.com/solar-panel-cost-per-square-foot/)),
so a really large installation should come in well below these numbers anyway.

Assuming a more conservative 10-fold reduction for the price of batteries,
we'd get 13+3.4 trillion $ = 16.4 trillion $.

Going by the EIA report, nuclear would be about 5/4 the cost/kW of solar
(presumably without storage), which is about on par with the price of storage
+ solar: (13+3.4)/13 ≈ 5/4
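Putting the whole comparison in one place, as a sketch of this thread's bottom line: it naively compares the EIA capital $/kW figures and applies the assumed 10x bulk discount to storage, and like the rest of the thread it does not account for capacity-factor differences.

```python
annual_kwh = 29e12
avg_load_kw = annual_kwh / (365 * 24)          # ~3.3 billion kW average

solar_panels_t = avg_load_kw * 4_000 / 1e12    # ~$13T at $4,000/kW
storage_t = 34.5 / 10                          # $34.5T storage with a 10x discount
nuclear_t = avg_load_kw * 5_530 / 1e12         # ~$18T at $5,530/kW

print(f"solar + storage: ${solar_panels_t + storage_t:.1f} trillion")
print(f"nuclear:         ${nuclear_t:.1f} trillion")
```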

> The U.S. used 3.9 trillion kWh last year, twenty percent of which is already
> nuclear. So that's $1.9 trillion of capital for 100% of U.S. electricity to
> come from nuclear.

Yes, but they may be at the end of their life:
[http://www.eia.gov/tools/faqs/faq.cfm?id=228&t=21](http://www.eia.gov/tools/faqs/faq.cfm?id=228&t=21)

\---

At any rate, I don't have much of an issue with nuclear, at least the newer
breeder designs, since breeders produce very little waste and the new designs
are presumably very safe. But obviously, all costs being equal (assuming no
error in my calculations), I'm 100% in favour of renewables over nuclear.
Renewables don't have the international concerns linked to exporting nuclear
reactor technology, for one, and that's a serious drawback of nuclear honestly.

~~~
paulddraper
Thanks for the additional info.

------
cJ0th
> Then they asked the subjects which was more probable: (A) Linda is a bank
> teller or (B) Linda is a bank teller and is active in the feminist movement.

I don't see how either guess is generally less likely than the other one. I'd
say it depends rather on your mental framework whether you choose A or B.
Therefore, I'd argue a frequentist would rather choose answer A, whereas a
Bayesian would be inclined to choose answer B.

The difference between rationality and intelligence imo lies in how you deal
with unknown unknowns. A person who is merely rational wouldn't consider them,
whereas an intelligent person would have the ability to deal with them in some
way.

Take the infamous Westboro family for example. In Louis Theroux's movie they
came across as rational human beings, assuming that their world view is
correct. Indeed, they appeared to have a happy family life. If everyone would
(want to) follow their rules, then the world would probably work. They,
however, fail to realize that they haven't found a magic bullet and that other
people are able to lead equally fulfilling lives. Their attempts at "saving"
these people are bound to be a futile waste of time, and thus I wouldn't
consider them intelligent.

~~~
mehwoot
_I don 't see how either guess is generally less likely then the other one._

How could A possibly be less likely than B?? The occurrences of B are a strict
subset of A, there is no possible way B could happen more frequently.

~~~
cJ0th
Maybe I am a bit confused but I just meant to imply that A and B could be
considered equally probable if you're a Bayesian who has absolute confidence
that she's the kind of person who is active in the feminist movement.

~~~
bonoboTP
100% now is 100% forever, in the Bayesian framework.

If a Bayesian agent is absolutely certain that she is active in the feminist
movement, it would mean that _absolutely no evidence_ could convince it of the
contrary. Even if we learn that she was kidnapped and forced to work in, say,
Saudi Arabia as a bank teller where she's forbidden from feminism, even then,
the Bayesian agent would _have to_ still stay at 100% confidence that she's
active in feminism.

