

Thinking, Fast and Slow (2011) - joubert
http://www.nytimes.com/2011/11/27/books/review/thinking-fast-and-slow-by-daniel-kahneman-book-review.html?pagewanted=all

======
CountBayesie
For anyone interested in Bayesian Statistics it is worth noting that ET Jaynes
(imho the arch-Bayesian) disagrees with Kahneman's idea that "people reason in
a basically irrational way". Jaynes died long before "Thinking, Fast and Slow"
was published, so his critique is based on Kahneman and Tversky's early work
on the subject.

Kahneman and Tversky's critique of Bayesian analysis is basically: if more
data should override a prior belief, why do people's opinions increasingly
diverge as more data comes in? For example, we have 24-hour news media
throwing information at us, and people only seem to be more divided
politically. If we reasoned in a Bayesian manner, our opinions should
converge, which they clearly do not.

Jaynes' answer is really fascinating and is covered in the chapter "Queer Uses
for Probability Theory" from 'Probability Theory: The Logic of Science'.
Basically, Jaynes argues that we are never really testing just one hypothesis.
He gives an example of an experiment designed to prove ESP, and points out
that no matter how low a p-value the experiment reports, if you have a strong
prior belief that ESP does not exist, the evidence won't convince you. He
argues this is because you actually have other hypotheses with their own
priors: the subject is tricking the experimenters, there is an error in the
experimental design, the people running the experiment are intentionally being
deceptive, etc.

He then shows that if your prior belief in ESP is sufficiently lower than your
prior belief in these alternative hypotheses, not only will further evidence
fail to convince you of ESP, it will actually increase your belief that you
are being lied to in some way. So while Jaynes agrees that these priors may be
irrational, our reasoning given new information is completely rational.
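Jaynes' argument is easy to check numerically. Here is a minimal sketch (the
priors and likelihoods are made-up numbers for illustration, not from the
book): each above-chance session is strong evidence against the "pure chance"
hypothesis, but since a deceptive experimenter explains the data just as well
as ESP does, the posterior mass flows to deception, not ESP.

```python
# Three hypotheses about a subject who keeps beating chance in an ESP test:
#   esp:     ESP is real
#   deceive: trickery or experimental error
#   chance:  the subject is just lucky
# Illustrative priors: ESP is believed far less likely than deception.
priors = {"esp": 1e-6, "deceive": 1e-3, "chance": 1 - 1e-6 - 1e-3}

# Probability of one more above-chance session under each hypothesis.
likelihood = {"esp": 0.9, "deceive": 0.9, "chance": 0.05}

def posterior(n_sessions):
    """Posterior after n above-chance sessions, by Bayes' rule."""
    unnorm = {h: priors[h] * likelihood[h] ** n_sessions for h in priors}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

for n in (0, 5, 20):
    print(n, posterior(n))
```

After 20 sessions, "chance" is essentially ruled out, but the posterior odds
between ESP and deception are still the prior odds (1000:1 against ESP), so
the "evidence for ESP" has mostly convinced you that you are being deceived.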

~~~
yellowstuff
This is interesting, but it seems like a pretty minor thread in Kahneman and
Tversky's body of research. The review highlights a number of experiments that
show how actual human reasoning differs from maximizing utility. The critique
of Bayesian analysis isn't mentioned in the article, and I don't recall it
being in the book.

If you have any interest in how people make decisions then "Thinking, Fast and
Slow" is worth reading.

~~~
conistonwater
> The review highlights a number of experiments that show how actual human
> reasoning differs from maximizing utility.

The conventional definition of utility is pretty strict: for example, it must
be a function of the final outcome only, and there is no model uncertainty
(which would go under the heading of ambiguity aversion instead). So maybe it's
not _that_ surprising that a really specific narrow concept doesn't describe
all of human behaviour and needs to be extended. But since you can extend it
sufficiently to describe some interesting behaviours, is it really necessary
to focus specifically on the utility function, and not the other things that
people might be maximizing?

~~~
yellowstuff
Sorry, I don't really understand your comment. Assuming that people maximize
utility is a useful model for certain tasks. Kahneman's work shows that
people's decisions differ systematically from any kind of rational
maximization, and are explained better when you allow for biases such as
anchoring, loss aversion, and substituting a hard question for a related easy
question.

~~~
conistonwater
My point is that utility is a rather narrowly-defined concept, so if you find
a situation where people don't seem to be maximizing _any_ utility function,
one of the possibilities is that the concept of a utility function is too
narrowly-defined. Things like anchoring, loss aversion, ambiguity aversion can
all be modelled: the only thing you lose is the name "utility function". Maybe
the utility function needs to depend on the entire history of states (for loss
aversion), or maybe the question being asked is subject to uncertainty, or
there is a fundamental amount of model uncertainty. All of those can be
modelled in probabilistic terms.

So if rational maximization means that people have a utility function that
they maximize, then yes, rational maximization is not what people do. But that
is partly the fault of how the definition of utility was chosen.

~~~
RockyMcNuts
Von Neumann and Morgenstern showed that, as long as people's preferences over
choices satisfy a few consistency axioms (completeness, transitivity,
continuity, and independence), there is a utility function (cardinal utility)
they are maximizing.

What Kahneman and Tversky observed is that people don't even choose
consistently. It depends on how the choices are presented. For instance,
whether the subject frames an outcome as a loss or a smaller-than-expected
gain. No matter how you define a utility function, it will not always be
maximized. So, it's not a question of defining the function less narrowly. You
can present two games with mathematically identical sets of outcomes and
people consistently rank the outcomes differently.
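The framing effect can be made concrete with prospect theory's value function.
Below is a sketch using the median parameter estimates Tversky and Kahneman
reported in 1992 (alpha = 0.88, lambda = 2.25); the exact numbers matter less
than the asymmetry: because outcomes are valued relative to a reference point,
the same $50 difference feels far worse as a loss than it feels good as a
gain.

```python
# Prospect-theory value function with Tversky & Kahneman's (1992)
# median parameter estimates. x is measured relative to a reference point.
ALPHA = 0.88   # diminishing sensitivity for both gains and losses
LAMBDA = 2.25  # loss aversion: losses loom larger than gains

def value(x):
    """Subjective value of outcome x relative to the reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** ALPHA

# The same objective $50 difference, framed two ways:
gain_frame = value(50)    # a $50 gain relative to expecting nothing
loss_frame = value(-50)   # a $50 loss relative to expecting $100 more
print(gain_frame, loss_frame)
```

The loss frame is weighted |lambda| = 2.25 times more heavily than the gain
frame, which is exactly the kind of frame-dependence that no single utility
function over final outcomes can reproduce.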

Anyway, it's a very good and important book, and doesn't have much to do with
Bayesian statistics.

[Ninja-edited since HN doesn't let me respond further below...if you can show
the outcomes K & T observed are in fact consistent with a more broadly defined
utility function, then you too can win a Nobel prize!]

~~~
conistonwater
> What Kahneman and Tversky observed is that people don't even choose
> consistently. It depends on how the choices are presented. For instance,
> whether the subject frames a choice as a loss or a smaller-than-expected
> gain. So, no matter how you define a utility function, it will not always be
> maximized.

I disagree, I think you are assuming that people accept questions at face
value and unfailingly trust the experimenter. Then equivalent but differently-
stated problems would be equivalent, and you would reach that conclusion.

But when people use heuristics, those heuristics are grounded in their
experience, and are like a prior on the meaning of the question. Stating the
same question in two different ways and getting different answers means either
that there is no utility function, or that the "utility function" depends
(through model uncertainty, for example) on the exact phrasing of the
question.

My point is that these discussions are very closely tied to the kinds of
assumptions you make about how people reason, what is rational, and what
inputs the utility function has. Kahneman and Tversky got around this problem,
I think, by doing something eminently reasonable: postulating a clear and
unambiguous definition of a utility function. But the concept of "rationality"
is richer than that, so the conversation should not stop there.

~~~
freyr
> _postulating a clear and unambiguous definition of a utility function. But
> the concept of "rationality" is richer than that_

The word "rationality" may be ambiguous, as most words describing anything
complex are, but the authors attempted to provide a clear model and work
within those bounds. When we begin discussing the ideas informally, and using
terms in a broader and more colloquial sense, then we're at fault if the
results have become muddied.

The authors demonstrated a reasonable utility function, one which most people
upon reflection would agree is logical, and demonstrated that people do not
consistently act in a way that maximizes that function.

We can always move the goalposts, and claim that if people appear to be acting
irrationally it's because we simply don't understand their concept of
rationality (or the more complex function they're maximizing). But that seems
rather circular; it would be nice to hear examples of a richer concept of
rationality, in the context of the authors' experiments, that might explain
seemingly inconsistent behavior.

------
MichaelGG
>bold claim that humans are fundamentally irrational

That's bold? Didn't Kahneman and Tversky demolish the idea of humans being
rational, empirically, in the 70s? Is there any solid evidence of humans being
rational, or even an evolutionary explanation of why that would happen?

Do we expect other animals to have accurate probability processing psychology,
too?

~~~
henrikgs
It's bold in the sense that most economic models that regard consumers as
rational actors become irrelevant or plain wrong. Everyone knows it is a
simplification and only partly true, but claiming it's dead wrong much of the
time was quite bold.

------
manmukh
Probably the most important book I've read. I'm always trying to put the
concepts he discussed into practice (often unsuccessfully). Thinking about
what you're thinking is hard!

------
bradleyjg
Some very interesting material in this book, but I found it hard to get
through. It can be very repetitive. I know that's a well-known teaching
technique, but it doesn't make for entertaining reading.

~~~
frankosaurus
Kahneman's talk at Google summarized the first 2 chapters pretty well:
[https://www.youtube.com/watch?v=CjVQJdIrDJ0](https://www.youtube.com/watch?v=CjVQJdIrDJ0)
I strongly recommend it for anyone who doesn't have time to read the book.

As for the book itself, I agree with some others' postings here -- it's
absolutely fantastic. It changed the way I think about human cognition and
education... not to mention machine learning, politics, and marketing.

------
utternerd
This was an absolutely fascinating read and has given me a fundamental
understanding of human cognition. This helped with other books as well, such
as the "Lean Start-up" and "Willpower: Why we do what we do." It even helped
me further understand some concepts behind Jeff Sutherland's Scrum book.
Highly recommended.

------
emmanueloga_
"Thinking, Fast and Slow is a best-selling 2011 book by Nobel Memorial Prize
in Economics winner Daniel Kahneman [...]. The book's central thesis is a
dichotomy between two modes of thought: "System 1" is fast, instinctive and
emotional; "System 2" is slower, more deliberative, and more logical. The book
delineates cognitive biases associated with each type of thinking, starting
with Kahneman's own research on loss aversion. From framing choices to
substitution, the book highlights several decades of academic research to
suggest that people place too much confidence in human judgment."

[http://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow](http://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow)

------
andrewstuart2
Bread and what?? My System 1 had a mild freak out until my System 2 suggested
I Google it.

For the record, I think the blank is "butter."

~~~
faitswulff
My own System 1 tried "bread and wine" and "bread and vinegar" before throwing
its hands up in the air at the author. I'm glad I'm not the only one who was
thrown off by this.

~~~
keithpeter
My system 1 went for "bread and circuses". I've had a political kind of day
(in the small p sense of having to take care of organisational business).

------
delbel
Just FYI, the Nobel Prize in Economics isn't actually one of the original
Nobel Prizes; it was established by Sweden's central bank in 1968.
[http://en.wikipedia.org/wiki/Nobel_Memorial_Prize_in_Economi...](http://en.wikipedia.org/wiki/Nobel_Memorial_Prize_in_Economic_Sciences)

~~~
gmac
... for some values of 'real'. It's certainly prestigious nonetheless.

~~~
daniel-cussen
It's the same amount of money, and that's half of what makes a prize
prestigious, I reckon, along with how old it is and who else has won it.

------
farslan
This book is like a textbook, and there is tons of stuff you might want to
try. However, I still couldn't finish it due to the high density of
information it contains.

------
SixSigma
We live life forwards but reflect backwards.

