
Why Smart People Are Stupid - mshafrir
http://www.newyorker.com/online/blogs/frontal-cortex/2012/06/daniel-kahneman-bias-studies.html
======
lmkg
The research is using SAT score as a proxy for general intelligence... I
wonder if this sort of heuristic short-cutting actually correlates with test-
taking ability more than it correlates with intellgence.

A lot of "test-taking" training basically consists of saving time by training
_away from_ full reasoning, in favor of cheap-and-good-enough heuristics.
Furthermore, those heuristics are over-fitted to the particular problem types
on standardized tests. I wonder how much of this study is actually measuring
their ability to trigger test-taking instincts on problem types they're not
designed for.

~~~
rstevenson542
It is ridiculous to suppose that one's score on a multiple choice test is an
accurate measure of innate ability or real-world intelligence.

The SAT, ACT, IQ tests, and all standardized test like them are socially
constructed concepts that ATTEMPT a method of measuring intelligence.
Intelligence (in the real world) reaches far beyond one's abilities to answer
multiple choice reading comprehension, basic math and writing. Not to mention
that problem solving in the real world has no time constraints.

Beethoven would not have gotten a perfect score on his SAT's. However, we all
can attest to his innovation, creativity and musical genius. How can a
multiple choice test measure the creative abilities of people like Sir Richard
Branson, Steve Jobs, or Pablo Picasso?

The idea that "smarter people... were slightly more vulnerable to common
mental mistakes" is a nonsensical conclusion. These findings are completely
worthless.

~~~
achompas
Don't worry, you're both right. No self-respecting researcher uses SAT scores
as proxies for intelligence in the social sciences.

~~~
ajross
The study detailed in the linked article does exactly that...

------
DavidWoof
For the types of testing that he's doing, I suspect he's measuring boredom
more than anything else, especially since he's testing largely in a university
setting. Intelligent people are accustomed to being bored with endless entry-
level evaluation exams, and at first glance this looks like it's just one more
of them. And because the stakes here are so low (essentially zero), lots of
people will just fly through without really reading and analyzing the
question.

What he's seeing isn't something new, it's something so old that it's part of
popular culture: the absent-minded professor syndrome. It's the stereotype of
the brilliant physicist forgets what he's supposed to buy at the supermarket
because he's thinking about their quantum properties. Analytic people are
horrible at things that don't interest them.

Pay the students $50 for each correct answer, and there's not a doubt in my
mind that the results will be the complete opposite of what he's seeing now.

~~~
unfocused
I also agree with your comment. I did both questions as fast as I could and
only got the first wrong and the second right. It reminded me of a brilliant
Civil Engineering professor I once had who was showing us his notes on the
projector (the kind with the light bulb and magnifying glass over head) and
someone asked him if he could turn off the lights. The student meant the
lights, as in the classroom lights so he could see the projection better. The
professor turned off the projector instead :)

~~~
keithpeter
I saw a pound coin and a 10p coin (UK) and got the first one wrong, I saw a
bar graph with a lot of doubling bars, and saw the long history of doubling
and got the second one right. I saw the money by denomination, and not as a
quantity perhaps. Interesting.

~~~
cheatercheater
Just a sec.. where are you seeing that?

~~~
keithpeter
Sorry, I was describing the visual images I get when solving problems like
this, the article is pure text.

~~~
unfocused
Hmmm. Interesting that you point that out. I think if we were presented with a
math formula (symbols), we'd have aced it.

I think the reason why my wife did it so fast is because it was fed to her as
text only - in fact, I just read it to her. She's a lawyer and she's much
better at interpreting text than most people.

------
gojomo
The bat-ball and lily-pad questions are 2 of the 3 questions on a short test
called the 'Cognitive Reflection Test' (or 'CRT') meant to measure whether
people make the effort to think beyond the obvious (but wrong) answer.

By using those examples, after its headline, this article seems to imply
smarter people do worse on these CRT questions. But that is _not_ what I've
read elsewhere -- which is that the CRT is positively correlated with other
quantitative measures of intelligence (including IQ scores, SATs, and high-
school/collegiate grades). 'Smart' people (by those measures) do tend to do
better on the CRT.

And if you read this article carefully, you see that while it uses these two
CRT questions as examples of tricky questions, when it discusses the results
about awareness-of-bias not helping alleviate bias, it isn't necessarily
saying smart people do worse on those two CRT questions. It's a bit muddled in
what it's saying, and reviewing the linked abstract doesn't help much either.
The paper is evaluating some very specific things under the umbrella term
'cognitive sophistication', which might not map to what we usually call
'smart' or even 'test-smart'.

BTW, I personally think the CRT may be especially useful for evaluating
software/systems proficiency. The bat-ball question probes understanding of
algebra; the lily-pad question probes understanding of geometric growth (and
someone accustomed to powers-of-2 will find it easier); the third question
probes understanding of parallelism and projected-rates-of-work.

That third question happens to be:

"If it takes 5 machines 5 minutes to make 5 widgets, how long would it take
100 machines to make 100 widgets?"

(A software person might also think of it as: "If it takes 5 cores to compress
5 GB in 5 minutes, how long would it take 100 cores to compress 100 GB?")

~~~
archgoon
>>(A software person might also think of it as: "If it takes 5 cores to
compress 5 GB in 5 minutes, how long would it take 100 cores to compress 100
GB?")

I don't think that most compression algorithms are Embarrassingly Parallel
(<http://en.wikipedia.org/wiki/Amdahl%27s_law>), so it's not clear to me the
software scenario is equivalent, or are you saying that the jobs come in 5
gigabyte chunks?

~~~
gojomo
You could raise the same objections to the original 'machines'/'widgets'
formulation. (There are always economies or diseconomies of scale.)

Typically such questions imply the unstated assumption, "answer to the same
level of precision/abstraction as the question itself, and assume you have all
the info needed to give an answer". With that assumption both questions should
be answerable.

Compression tends to be pretty parallelizable, if by nothing else than
choosing to break the input into separate chunks. You might lose a little bit
of efficiency in output size – more restarts, each compressor has less global
information – but those don't mean slowdowns (and in a few contrived
situations might even mean speedups at the cost of size). See for example
'pigz' and 'pbzip2'.

If I were asking this question, I'd accept the 'rough, assuming perfect
parallelization' answer as correct-enough in the spirit and level-of-precision
implied by the question. If the answerer brought up the difficulties in
assuming perfect parallelizability or specific to compression algorithms or
choice of inputs, that'd be worth some extra credit, and would trigger
followups along the lines of, "how would those factors affect the size?" and
"what bottlenecks might you expect?" and "could it ever be faster when split
among more machines?"

------
tokenadult
Link to the study linked in the article (PubMed prepublication abstract):

[http://www.ncbi.nlm.nih.gov/pubmed?term=west%20stanovich%20m...](http://www.ncbi.nlm.nih.gov/pubmed?term=west%20stanovich%20meserve)

The psychologist Keith R. Stanovich is quite controversial among other
psychologists precisely because he writes about what high-IQ people miss in
their thinking, but his studies point to very thought-provoking data and
deserve to be grappled with by other psychologists. I have enjoyed his full-
length book What Intelligence Tests Miss

[http://yalepress.yale.edu/YupBooks//book.asp?isbn=9780300123...](http://yalepress.yale.edu/YupBooks//book.asp?isbn=9780300123852)

which meticulously cites much of the previous literature on human cognitive
biases and other gaps in rationality of human thinking.

And here is the submitted article's link to a description of the Need for
Cognition Scale:

<http://www.liberalarts.wabash.edu/ncs/>

~~~
jerf
The book mentioned, "Thinking, Fast and Slow", despite the boring title, is
quite good. If you're the sort to hang around lesswrong.com it won't blow your
mind, those less exposed to those ideas will find it fascinating (and probably
a bit more accessible in book form).

------
Jun8
"Although we assume that intelligence is a buffer against bias—that’s why
those with higher S.A.T. scores think they are less prone to these universal
thinking mistakes..."

This fallacy is at the heart of the matter. Intelligence and resistance
against bias are only loosely correlated. Such resistance comes not from
intelligence but from careful study and mental exercise, e.g. looking at
various important ethical and philosophical arguments and analyzing them.

This is like saying all large people are strong. There is _some_ dependance
but a smaller gym-fly can kick a slacker giant's ass. The sad thing, while it
is obvious that you have to exercise your body to be healthy and strong, the
fact that the same is quite through fro your brain is often overlooked.

~~~
nileshtrivedi
Isn't resistance against bias a very requirement for considering someone
intelligent? What exactly is intelligence if not the ability to think clearly?

To me, this looks like a definition game. Smart/stupid is a black & white view
of looking at it and hence, misleading. As one overcomes his primitive biases,
we call him smart, even though he remains susceptible to other biases.

In other words, people aren't smart or stupid. People's actions are smart or
stupid in a particular situation.

~~~
quanticle
Kahneman divides our thinking into two subsystems: type 1 and type 2. Type 1
thinking is fast, intuitive, unconscious thought. Most everyday activities
(like driving, talking, cleaning, etc.) make heavy use of the type 1 system.
The type 2 system is slow, calculating, conscious thought. When you're doing a
difficult math problem or thinking carefully about a philosophical problem,
you're engaging the type 2 system. From Kahneman's perspective, the big
difference between type 1 and type 2 thinking is that type 1 is fast and easy
but very susceptible to bias, whereas type 2 is slow and requires conscious
effort but is much more resistant to cognitive biases.

Traditionally, "intelligence" (as colloquially defined) has correlated with
type 2 thinking. So, a reasonable conjecture would be that people who are
better at type 2 thinking would use it more and, therefore be less vulnerable
to bias. However, this research shows that even those who are very good at
type 2 thinking (as measured by their SAT scores and NCS scores) are even more
vulnerable to cognitive biases. This is a deeply counter-intuitive result. Why
is it that people who have a greater _capacity_ to overcome bias have a
greater _vulnerability_ to bias?

~~~
meepmorp
> Why is it that people who have a greater capacity to overcome bias have a
> greater vulnerability to bias?

Overconfidence. If you've become accustomed to thinking of yourself as being
better able to avoid cognitive bias, you come to be confident in your
abilities, to the point where you (perhaps unconsciously) think of yourself as
not susceptible to biases.

~~~
quanticle
That's certainly one possible explanation. Another possible explanation is
that their brains are just faster _in general_ , so that even though their
type 2 systems are faster than others', their type 1 systems are faster yet
and manage to override even more consistently than in others. In any case, I
don't think it's something that's "obvious" or "expected" by any means, and I
do think that it should bear further investigation.

------
keiferski
Intelligence is overrated as a metric, from the get-go. Being smart doesn't
mean anything - accomplishing something, whether that be writing a book,
founding a company, making a new scientific discovery, sculpting a
masterpiece, etc., is a much better metric.

Unfortunately everyone seems to be hung up on the "idea" of being smart, as if
having a high IQ somehow constitutes an accomplishment.

~~~
beersigns
I'm on the same page. IQ is property of the genetic dice roll, not something
that a person earns. Tangible results based measurements seem more
appropriate. Pure intellect is the raw material & needs to be refined/applied
to be useful.

edit: grammar

~~~
upquark
The rest of your personality traits that allow you to accomplish great things
(diligence, perseverance, focus, empathy, etc), as well as external factors
such as being born at the right place and the right time, are also arguably a
genetic / environmental dice roll.

~~~
beersigns
I'd argue all of those areas you list are far more likely to be improved over
the course of a lifetime than sheer intelligence. Pure intellect is pretty
much set at birth, or at the least the ability to improve it isn't
statistically significant. I'd argue that genetic dice roll and environmental
dice roll are quite different things as well. You have FAR more ability to
change your environmental situation than your genetic one. Does everyone have
an equal chance to alter their environment? No, but who said life is fair?

------
pjscott
If you'd rather not just accept your current level of cognitive bias, the web
site Less Wrong has a bunch of articles by and for people trying to become
less wrong about things. Anecdotally, I've noticed that people I know via the
Less Wrong community tend to be decidedly less full of crap than average, so
it seems to work. For example, here's a series of articles on the subject of
avoiding excessive attachment to false beliefs, which I found to be generally
entertaining and insightful:

[http://wiki.lesswrong.com/wiki/How_To_Actually_Change_Your_M...](http://wiki.lesswrong.com/wiki/How_To_Actually_Change_Your_Mind)

Any of those articles are a good place to start, so don't be intimidated by
the amount of stuff there.

~~~
coffeemug
_"Logic is the beginning of wisdom, not the end" - Spock_

Anecdotally I found that the Less Wrong community tends to be decidedly _more_
full of crap than average. In the same vein as spiritual materialism[1], many
people that engage in a bias witch-hunt seem to be falling prey to "logical
materialism", where the whole exercise turns into people deluding themselves
into thinking they're somehow "better" than others because they're less full
of crap than average.

It's good to know thyself, but it's no use if your knowledge isn't tempered by
wisdom, and you're not going to get that by reading blog posts about cognitive
biases online, no matter how good the posts.

[1] <http://en.wikipedia.org/wiki/Spiritual_materialism>

~~~
Cushman
It's nice to know I'm not the only one who thinks this. I think it's really
telling that a community which is ostensibly concerned with the science of
achieving desires spends so much time focused on the "problem" of akrasia.

It's also heartbreaking to see intelligent people getting so excited about
ideas like cryonics and personality uploading. I mean, they're interesting
things to talk about, but a lot of people on LW seem to actually think they
might get to live forever. It's kinda sad.

~~~
ableal
> actually think they might get to live forever

And they aren't horrified at the prospect of that being possible?

Not much thinking going on there, I suppose.

~~~
D_Alex
Why would you be horrified at the prospect of living?

I suspect it is _you_ that needs to put a bit more thought into this matter.

~~~
D_Alex
Oh wow! A downvote! So, the message I'm getting is that a) it is appropriate
to be horrified at the prospect of living; and furthermore b) asking why is
inappropriate...?

This is seriously weirding me out.

~~~
Jach
The message I've taken from this entire subthread is "try to laugh rather than
shake my head." The behavior makes a lot of sense if I frame it in status
signaling (which helps make sense of so much that I suspect it of being too
broad a framework). My own comment on the topic of living is: so it may be
physically impossible to live _forever_ , I think shooting for even a "mere"
200 years is doable and would be fucking awesome. At least we're not dogs,
they get less than two decades.

------
dools
I'm reading the book Priceless by William Poundstone which discusses the work
of Kahneman and Tversky (among others) in great detail as it relates to the
psychology of pricing (excellent read, btw).

This is O.T from what the article is saying but mildly O.T (meaning on-topic)
and I'd love to hear HN's opinion on this.

One of the problems presented in Priceless is:

Would you rather $3,000 as a sure thing, or an 80% chance of $4,000 and a 20%
chance of nothing

versus:

Would you rather a $3,000 loss as a sure thing, or an 80% chance of losing
$4,000 and a 20% chance of losing nothing.

The erroneous path that most people take, in the eyes of these researchers, is
that they set their base reference point at the sure thing, ie. they say "well
the $3,000 is a sure thing so I can assume I have it".

If you do that, then your answers are different:

In the first instance you keep the $3,000 (because it becomes an 80% chance of
winning $1,000 versus a 20% chance of losing $3,000).

In the second instance you go to court (because it's an 80% chance of losing
$1,000 versus a 20% chance of winning $3,000).

However if you don't "rebase" your reference point, then you would make the
same decision in both cases - that is you would take the 80% of $4,000 bet
because it's "worth" $3,200.

As much as I realise what they're saying and they say it's statistically
incorrect to do this, it really seems to me the most sensible way to make the
decisions (which is, I guess, exactly what they're saying right? I'm human,
ergo fallible to this kind of illusion).

The thing that kills me is this: if this is a one time thing, I'd rather be
sure of the $3,000. If I'm buying and selling these bets all day, then sure I
should take the $4,000 at 80% because even if I lose this round, the next time
I take the bet will make up for it (ie. law of large numbers).

But what this problem doesn't address is how _often_ I get this opportunity?
Depending on my circumstances, $3,000 could be a life changing opportunity,
ie. if I "win" $3,000 or $4,000, my circumstances are essentially the same so
I should always go for the sure thing. If I lose $3,000 or $4,000 I'm equally
screwed, so I should take the risk and try and win in court.

What am I missing?

~~~
kristopolous
I'm trying to follow.

So you are saying that with the following scenario:

1\. +3000 at P100 or +4000 at P80

2\. -3000 at P100 or -4000 at P80

Depending on the context, the answer is basically universally different if you
say "1 then 2" as opposed to "just 1" or "just 2"?

~~~
dools
The issue is that the answers are (yes basically) universally different
regardless of whether both options are provided, but that statistically
speaking they are identical.

The foible of humanity that makes the answers different is that we first
"rebase" our expectations to the 100% chance, rather than considering our
current position to be the baseline.

~~~
kristopolous
Yes, statistically they are identical. However, the fact that people consider
the significance of the objects and the context of the decision isn't a fault
of human intellect.

Really, stating otherwise is a fault of human analysis. It's a game theory
problem I think (likely a variation of the stag hunt:
<http://en.wikipedia.org/wiki/Stag_hunt>). It's a paradox only because
Armchair mathematical intuition fails to explain it.

~~~
roel_v
No because the stag hunt family of problems are about trust in other people,
the situation in the OP is not. It's about risk aversion. Whether or not that
is a 'fault of human intellect' is a matter of definition and up for debate,
it's not as clear-cut as you make it out to be.

~~~
kristopolous
Ok thanks. I am no specialist in this by any means. But I do know enough to be
cautious when dealing with these types of problems. Glad you clarified.

------
crazygringo
This explains, when you look in your git repository for who created a bug...

When you find it and it's by someone else, it was obviously a stupid, idiotic
error that you would never make.

When you find it and it's your own, it was obviously an understandable mistake
that anybody could have made.

Particularly if you consider yourself a great coder.

:)

~~~
pjscott
You're more likely to go back and notice the stupid broken code you wrote,
because it disproportionately comes back to bite you later. The good code just
works, so you don't notice it as much. Classic selection bias.

------
sageikosa
Got the ball and bat one right, and the lily pad one. I must not be as smart
as I hoped :-(

I think it comes down to having a value system where you'd rather be wrong and
corrected (even if you have to do it yourself), as opposed to always
projecting yourself as"perfect". Once you accept you aren't perfect, its
easier to work towards perfecting what you've got.

~~~
planetguy
Ah, that's the author flattering the reader. If the author had picked a
genuinely tricky example for the first one it would have turned many readers
off.

Of course a question like the old bat and ball one is ridiculously simple,
_after_ you've been warned that many people get it wrong and hence that you
should probably stop and think for a few seconds before blurting out the first
answer that pops into your head. Do it without that warning and it's easier to
get it wrong.

~~~
dllthomas
When I first encountered these, without the "it's going to be tricky" priming,
I must admit I got the bat and ball one wrong until I revisited it. The lily
pad one I got right with no deep thinking - my explanation is that experience
working with bits primed me for treating "doubles" more correctly. That's post
hoc and anecdotal, of course, so take it for whatever, but I thought it
interesting...

------
jakeonthemove
Well, at least people are collectively smarter today compared to 100 years ago
- the percentage of people who can answer those questions correctly has gone
up considerably :-)...

Also, I just _hate_ these kind of questions - they've always been used to
prove that I'm stupid by those who knew the answers, and they're not solving
anything useful - I need the problem to solve something I care about in order
for my brain to fully focus on it and "do the math"...

~~~
moron
These questions are not designed to demonstrate intelligence if you get them
right. They are designed to show how our cognitive biases trip us up, often
without us even noticing. You're not stupid if you get them wrong, you're
completely normal.

------
Jabbles
FYI the tallest tree in the world is ~ 116 m or 379 feet.

<http://en.wikipedia.org/wiki/Hyperion_(tree)>

~~~
Splines
That question sort of bothered me - I have no idea how big a tree is, and even
if I saw one, I have no reference for how many feet it is without doing some
fairly exhaustive mathematics (and at the scale of "largest redwood", I'd
likely be wrong). Given _some_ information about redwood trees, _of course_
people are going to use that information in the subsequent guess. They're not
going to imagine a redwood tree, then imagine a building next to it, then
count the floors and estimate the height. Or estimate the girth, then guess a
height/girth ratio that makes sense given the composition of a tree, and then
estimate the height. They're lazy.

If I asked you, "Will a frooble fit in my pocket/empire states building?", and
then asked you to estimate the average size of a frooble, you'd certainly take
into account my earlier question.

See <http://lesswrong.com/lw/k3/priming_and_contamination/> for some better
examples. IMO, the more insidious form of anchoring is contamination (vs
sliding adjustment).

------
cyclic
For all of the high and mightiness of this article, this bugged me:

In a lake, there is a patch of lily pads. Every day, the patch doubles in
size. If it takes 48 days for the patch to cover the entire lake, how long
would it take for the patch to cover half of the lake?

If a lilypad is 20 square inches (which is probably conservative), and you
started with 1 lilypad, after 48 days of doubling it would cover 1.4MILLION
square miles. That is 44 times the surface area of Lake Superior.

I get the point of the question, but if you're trying to play "gotcha" on
people, at least ask a reasonable question.

~~~
ableal
It was 20 days when I was a kid.

Inflation, I suppose.

------
ilaksh
My takeaway is that smart people are in fact fairly dumb, in other words even
fairly bright specimens of homo sapiens make stupid mistakes and irrational
decisions quite often because of this shortcutting.

I also think that on the other hand those types of shortcuts are actually
probably very useful aspects of our human intelligence.

I think that within 50 years or so we will see new species/upgraded humans or
AIs that actually don't have those problems, because they will have built-in
checks and alternative types of intelligence that rely on those shortcuts
less.

------
Sander_Marechal
I highly recommend reading "Predictably Irrational" by Dan Ariely if you
thought this article was interesting. It covers exactly this subject and makes
for fascinating reading. I picked up the book about a week ago via some other
post linked here on HN and I'm loving it.

[http://www.amazon.com/Predictably-Irrational-Revised-
Expande...](http://www.amazon.com/Predictably-Irrational-Revised-Expanded-
Edition/dp/0061353248/ref=sr_1_1?ie=UTF8&qid=1339561700&sr=8-1)

------
arihant
Got the lily pad one. I still cannot believe I said 10 cents on the first one
with a completely non-sarcastic chuckle.

This article reminds me of pg's reasons to have a co-founder to avoid being
delusional. Better be proven wrong on the inside than on the outside.

edit: Although on second thought, I think this bias theory probably extends to
organizations as well. Probably that's why big companies sometimes can't see
the obvious which a startup does.

------
gwern
Full text: <http://dl.dropbox.com/u/5317066/2012-west.pdf>

------
b1daly
Well I got both those questions right by following the heuristic that the most
obvious answer off the top of my head would not be the answer.

To my mind on any test that was supposed to be hard the appearance of any
obvious an answer triggers me to check for the proverbial trick question.

On the other hand, most brain puzzler type questions that get discussed on HN
(for example interview questions at Google) I find to be damn hard. I can't
imagine that "smart" people would do worse than "stupid" people on truly hard
problems. I guess that is the area of bias being pointed to in the OP.

------
Jordan_N
All of this is covered (much better)in Kahneman's 'Thinking, Fast and Slow'.

------
numeromancer
Here's a good mental test for the author:

When you're done with an article for the "Frontal Cortex" section, read it
aloud to yourself and smack yourself in the head with a frozen herring for
every time you use the word "we", "us" or "our" in your article. If you have a
headache when you're done, burn the draft and rethink the whole thing, b/c
your article obviously suffers from a "smug we" bias.

------
sin7
I don't see why, when smart people are trained to be lazy, researchers are
surprised that smart people are lazy.

~~~
terangdom
I am surprised. I thought cognitive traits had high correlation. I did not
expect laziness would factor into it so much.

------
antithesis
Just a couple of weeks ago we had an article about why smart people don't
think of others as stupid (<https://news.ycombinator.com/item?id=3984894>),
and now they're stupid themselves? I'm puzzled.

~~~
terangdom
A careful reading will tell you that the article you refer to is normative not
descriptive.

------
astrofinch
_This finding wouldn’t surprise Kahneman, who admits in “Thinking, Fast and
Slow” that his decades of groundbreaking research have failed to significantly
improve his own mental performance. “My intuitive thinking is just as prone to
overconfidence, extreme predictions, and the planning fallacy”—a tendency to
underestimate how long it will take to complete a task—“as it was before I
made a study of these issues,” he writes._

Doesn't Kahnman distinguish between intuitive and deliberate thinking? So it
could be possible to think better by distrusting our intuitions and
deliberating more, right?

------
marquis
Well, the ball one I've heard before and the lilypad is obvious if you've been
exposed to biology. Is this not more a matter of education not being applied
to real-world cases and relying on theoretical teaching?

~~~
saraid216
What, exactly, does the lilypad one have to do with biology...?

~~~
wtracy
Population growth follows an exponential curve.

------
harrup8
There is a simple cure for smart people to not be stupid, they can detect
those errors and bias easily in other people thinking, but not in their own
thinking because introspection doesn't work. So the cure is to play as if you
were the actor in a theater, that is pretend that you are not yourself when
you are thinking. You should imagine you are thinking as a known stupid person
and by miracle you get smart and not so stupid.

~~~
dmak
Easier said than done.

------
tetha
This isn't big news for me. It took me about 7 years to understand two courses
of high school while finishing my master of science. If you ask me the right
question or try to teach me just the right matter in just the right way, a
donkey will get it sooner than I do, and I'm talking about possibly years
sooner. I might just not see the problem or I might think the wrong way, I
don't know. There are things I just don't get.

------
javert
New Yorker articles (that get posted here, anyway) always have some sort of
take on things that attempts to bring down the good. Same for The Atlantic.
I'm not saying they're always _false_ , but there is a certain kind of thing
that these publications are interested in, and it's a kind of thing that makes
me feel dirty---or as if they're trying to make me feel dirty. Anyone else
noticed this?

------
akandiah
What did you think of first when you read about the bat and the ball problem?
Also, what's your background (e.g. CS, Maths etc.)? As someone who has a
relatively strong background in maths, I quickly saw the outlines of a simple,
algebraic substituion problem. I'm quite interested in how people analyze
problems, so I'd love to see how the HN community approached this.

------
farinasa
I think this is more an issue of the English language. English is not a good
way to speak math or logic. In the bat and ball question I mistakenly (and I'm
guessing everyone who got it wrong) ignored the word "more". That word
represents an operator and is therefore crucial to the question, but is
extremely diminutive in terms of English language.

~~~
sageikosa
Possibly, I immediately substituted one dollar plus ball for bat, then saw I
had 2 balls and a dollar making up $1.10. Or 5 cents a ball. I honestly didn't
pull out the algorithm sheet, it just sort of worked itself out in my head
before I had time to think: what are they looking for...

Not to say I don't have biases, just not for word-number problems.

Poker on the other hand is another matter, I still chase straights and flushes
in games with wild-cards, even though I _know_ those hands are almost
worthless.

------
dutchbrit
I answered the question correctly, which I'm sure the majority here probably
did too. When someone asks a simple mathematical question, I always seem to
give it more thought, since I always know it must be a dodgy question. Four
years ago, I'd probably have answered the question incorrectly. But the
baseball question is an obvious mindfuck.

------
Karunamon
So because I'm suffering from a deep case of the derp today, how are the first
guess answers to those questions wrong?

~~~
alaskamiller
Since most people don't get calculus, they use simple math to shortcut to
answers that sound right.

~~~
jere
Calculus? These problems require elementary school arithmetic or, being
generous, basic algebra.

~~~
alaskamiller
Think it through.

The first problem is operational and the second problem is change on a slope.

People use what they know to deal with problems, so facing these two they use
basic math (subtraction and division), being ignorant of higher-level math
concepts such as algebra or calculus. The answers they come up with _appear_
right to their known level of logic.

If you were the type of person who got to learn about higher-level math
concepts, and are the studious type who double-checks answers, then these two
problems seem condescendingly trivial.

~~~
jere
I see it differently. The whole point of the article is that these problems
affect _smart people_.

>For one thing, self-awareness was not particularly useful: as the scientists
note, “people who were aware of their own biases were not better able to
overcome them.” This finding wouldn’t surprise Kahneman, who admits in
“Thinking, Fast and Slow” that his decades of groundbreaking research have
failed to significantly improve his own mental performance. “My intuitive
thinking is just as prone to overconfidence, extreme predictions, and the
planning fallacy”—a tendency to underestimate how long it will take to
complete a task—“as it was before I made a study of these issues,” he writes.

It has nothing to do with knowledge of higher-level mathematics, because these
problems are easily solvable with arithmetic. Lacking calculus doesn't kill
you on this problem. The source of confusion is an intuitive gut feeling that
you've already arrived at the right answer, plus laziness.

I've even read about that damn bat and ball problem and it STILL tripped me up
this time. I could easily have double checked my answer, but I wanted to read
the article. Even a child knowing nothing other than addition could get it
right with a little bit of trial and error. I hope after admitting that you
see that I _don't_ find the problems condescendingly trivial.
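That trial-and-error approach is easy to sketch (a hypothetical Python illustration, working in whole cents to avoid floating-point issues):

```python
# Brute-force the bat-and-ball problem the way a child might:
# try every ball price until both conditions hold.
def solve_by_trial(total=110, difference=100):  # amounts in cents
    for ball in range(total + 1):
        bat = ball + difference        # the bat costs exactly $1 more
        if bat + ball == total:        # together they cost $1.10
            return bat, ball
    return None

print(solve_by_trial())  # (105, 5) -- the ball is 5 cents, not 10
```

No algebra required, only addition and a willingness to check the guess.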

Personally, I found the second problem much easier... probably because
programmers have a better intuitive grasp of powers of 2. Bringing in slope is
stretching it a bit. Working in reverse from the completely covered lake, it
should be obvious that going back one day halves the lily pads. However, I
could imagine how someone more familiar with linear processes would get the
wrong intuitive result.
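Working backwards from the covered lake fits in a couple of lines (a minimal sketch, assuming the classic framing in which the patch covers the lake on day 48):

```python
# The patch doubles every day and covers the lake on day 48.
# Intuition suggests half coverage takes half the time (day 24);
# undoing one doubling shows the correct answer is day 47.
def day_half_covered(full_day=48):
    # One day before full coverage, the patch was exactly half the size.
    return full_day - 1

print(day_half_covered())  # 47
```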

>The answers they come up with appear right to their known level of logic.

A studious habit sure, but checking your answer isn't a higher level math
skill.

------
melvinmt
Where can I find more of these questions?

------
taylorbuley
I studied "Choice & Behavior" at Penn -- the names Kahneman and Tversky were a
common refrain. If you're looking to self-teach, my prof Jon Baron has a great
course outline online: <http://www.sas.upenn.edu/~baron/p153.html>

------
jpwagner
do we all think about the same thing at the same time or does Jonah Lehrer
read HN religiously?

i _just_ watched that talk a couple of days ago because it was posted here:
<http://news.ycombinator.com/item?id=4082308>

------
njharman
Comments here reinforce the research.

------
karolist
These sorts of questions always put me into "hold on, think about it" mode, and
statements like "Your first response is probably to take a shortcut" are
simply not true for me. I'm actually more prone to over-thinking a problem
than to giving a quick wrong answer.

------
unlinear
If the lily pad patch started at a mere 1 square inch, then after the 48th
doubling the pond would have to be (check my math) about 70,000 square miles,
or roughly seven times the size of Lake Erie. Those lilies would be consuming
a serious amount of CO2 during that last doubling!
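The estimate holds up; a quick check (a sketch assuming a 1-square-inch patch and 48 doublings):

```python
# Check the estimate: a 1 sq in patch after 48 doublings, in square miles.
sq_inches = 2 ** 48                    # 281,474,976,710,656 square inches
sq_inches_per_sq_mile = 63_360 ** 2    # 1 mile = 63,360 inches
sq_miles = sq_inches / sq_inches_per_sq_mile
print(round(sq_miles))  # 70115 -- about 70,000 square miles
```

At roughly 9,910 square miles, Lake Erie would fit into that about seven times over.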

------
nadam
Both those questions are trivial and I answered them correctly. This is in
line with the article's conclusion: I don't consider myself very smart. I
mean, I had some moderate successes at math competitions in my childhood, and
I am a reasonably good programmer, but I am not very smart. I even failed the
on-site Google interview.

But here is the problem with the article: the people who I consider smarter
than me (in the mathematical/IQ sense) also answer these kinds of questions
correctly. This includes my friend working at Google, some research
mathematicians who I know from math forums and who won serious math
competitions as children, etc... These questions are really, really trivial.
The research mathematician I know does not even make mistakes on questions 10x
more tricky or hard; it is scary how he never makes mistakes and how
incredibly fast he thinks. Something seems to be wrong with this study.

------
EricDeb
I remember the SAT as being more about checking one's first reaction to a
problem. They often try to trick you with the obvious answer. The GMAT and GRE
were quite similar. I would often have to stop myself from taking shortcuts.

------
vain
A bat and ball cost a dollar and ten cents. The bat costs a dollar more than
the ball.

    
    
      bat + ball = 1.10
      bat = ball + 1
      2bat + ball = ball + 1.10 + 1
      2bat = 2.10
      bat = 1.05
      ball = 0.05

~~~
debacle
I parsed that as:

    
    
        bat = 1.1
        ball = 1.1  
        bat = ball + 1
    

It confused the hell out of me.

~~~
saraid216
I immediately switched to generic variables X and Y, rather than keeping the
original names. It helps.

------
grandalf
I think that smart people are also prone to falling for headlines like this.
The reality is much more complex.

------
sonicaa
The last line, "The more we attempt to know ourselves, the less we actually
understand," worries me a bit.

------
carsongross
Because they spend all day on HN?

------
Mordor
Taking more shortcuts as I get older too - a constant battle to stop and
think...

------
DannoHung
English is a terrible language for formalism: News at 11.

------
namank
Because they don't ask enough questions.

------
wissler
No, the research didn't show that "we do this" or "our approach is that" or
"humans aren't rational" -- what the research showed is that the _typical_
person does this or that.

A similar experiment where people draw the wrong conclusions is the Milgram
experiment. Yes, most people are obedient to authority figures and do what
they are told. But not _everyone_ acts that way.

This research likes to sweep the best human beings under the rug, as if being
virtuous is not something to try to emulate, but is something to hide. This
explains why the majority of people act the way they do. Perhaps if they were
taught that their "we're only human" vices are not the ideal to emulate,
perhaps if the best that humanity had to offer were put forth as the ideal
instead, then these lesser human beings who make up the majority would become
what they might be and ought to be.

~~~
roguecoder
Animals are less irrational than humans. Children are less irrational than
adults. Why do you assume rationality is better, rather than maladaptive?

~~~
wissler
I don't "assume", I induce. See Newton's rules of reasoning. Pay particular
attention to Rule IV.

It is clear from many examples that rationality gives us the utmost ability to
adapt, prosper, and survive over the long term. And there is _no_ example that
truly leads in the other direction. (There are many perverse definitions and
applications of "rationality" that seem to trick some people into thinking it
does lead in the contrary direction).

~~~
cheatercheater
> I don't "assume", I induce. See Newton's rules of reasoning. Pay particular
> attention to Rule IV.

u so smaht

~~~
wissler
Your deranged sarcasm eloquently sums up what's wrong with our educational
system: kids are taught that their opinions matter, regardless of how idiotic
they are. Well the truth is, your uninformed opinion doesn't matter.

------
planetguy
Now _that's_ the kind of headline I'd give to my article if I wanted it to
reach the top of the HN front page.

------
moron
I notice this all the time, all over the place. It drives me nuts, to the
point that I am now extremely skeptical of what we call "intelligence".
Taleb's "The Black Swan" really opened my eyes to this. He talks a lot about
how we reason in ways that do not correspond to reality.

I don't know what right is, but I know the way we currently think about
intelligence is wrong.

~~~
farinasa
Agreed. IQ is a terrible method of evaluation as well. For instance, if you
practice taking IQ tests, I guarantee you will get better at answering those
questions. And even if you gauge only the first attempt, how do you know the
test-taker's daily job wasn't very similar to those exercises? In fact, IQ
tests are very much like building software to work on only one architecture
and then hoping it runs on others. They really only judge speed and
word/number pattern recognition. This encourages shortcuts and is hardly
applicable to any practical situation.

------
phene
I find the abundance of "See? Smart people are actually dumber than I am!"
posts amusing.

