

Who You Are - four
http://www.nytimes.com/2011/10/21/opinion/brooks-who-you-are.html

======
disgruntledphd2
Kahneman and Tversky are legends in psychology, but they didn't change the
fundamental way we view ourselves. The notion of humans as super-rational
utility maximisers was entirely an economic model, and it was that model they
attacked. It was well known (for over twenty years before their seminal 1974
Science paper) that humans were poor at probability and utility judgments.

What they did do was force the economics profession to face up (somewhat) to
these issues, and their contribution to loss aversion, and to prospect theory
more generally, is a significant advance.

That being said, their dual-process models are about as predictive as those of
Freud (which is to say, not at all). It's currently a really active phase of
research, so I suppose I can thank them for making it easier for me to get
funding.

They also did not invent priming, though they made heavy use of it. Likewise,
framing effects were well known before them, going back at least to Asch's
1951 study of conformity in judgements of line length.

To sum up, Kahneman is an amazing scientist, but this reporter does not appear
to know what he is talking about.

~~~
klbarry
What are the best books available on the subject of predicting human behavior,
if you don't mind me asking? Advanced or technical books are fine; scientific
journals would be over my comprehension, I'm afraid.

~~~
jasonshen
One great book on behavioral economics that's based on a lot of research and
experimental data is [http://www.amazon.com/Judgment-Managerial-Decision-
Making-Ba...](http://www.amazon.com/Judgment-Managerial-Decision-Making-
Bazerman/dp/0471178071)

It came highly recommended by a Stanford professor and is one of the most
"meaty" books on the topic.

~~~
joelhaus
Animal Spirits is probably a lighter read (it's still on my to-read list), but
I've been impressed when hearing one of its authors, Robert Shiller, speak:

[http://www.amazon.com/Animal-Spirits-Psychology-Economy-
Capi...](http://www.amazon.com/Animal-Spirits-Psychology-Economy-
Capitalism/dp/0691142335/)

He discusses animal spirits in various videos on YouTube:
[http://www.youtube.com/results?search=Search&resnum=0...](http://www.youtube.com/results?search=Search&resnum=0&oi=spell&search_query=Robert+Shiller+Animal+Spirits)

------
albertsun
David Brooks has a really terrible track record of horribly misinterpreting
social science research and drawing completely unfounded conclusions from it.
He's been taken down by academics many times over it, most memorably (for me)
here <http://languagelog.ldc.upenn.edu/nll/?p=478>

------
Jun8
"Most of our own thinking is below awareness."

Indeed! Minsky once said that consciousness is the brain's debug trace.

~~~
mp01cnb
Have been reading a very interesting book on a related subject:

On Being Certain - Robert A. Burton

------
benrpeters
This article reminded me of my undergrad econ classes. I understand that it
was just undergrad and we were learning a basic tool set. But it seriously
scares me when I remember how my classmates and I (some of whom are on Wall
St) took class after class that drilled supply/demand graphs premised on
rational, utility maximizing populations into our heads. Whether the theories
in this article are oversimplified or not, they do provide an important
counterweight against anyone who thinks that they can reliably predict
people's decision-making. I hope econ textbooks are evolving to reflect the
growing marriage between econ and neuropsych.

------
guimarin
I agree with the basic tenets of this article. Yes, it's true that K&T were
moving the model into the 'economic sphere', but I think you cannot overstate
the importance of this. Behavioral finance/economics coming back into cog.
psych. and cog. neuro. is absolutely earth-shattering. The money dictated the
research, and now that research is FINALLY being applied back where it
belongs. I can't wait for these ideas, and those of choice
designers/specialists/researchers, to make it into 'machine learning' and
'weak AI'. If ever there was a subject that was full of shit from the
beginning with regards to how people actually think, and that needs to be
re-architected from the ground up, it's that one. Also, someone needs to start
listening to other Princeton researchers like Eldar Shafir on these topics.

~~~
disgruntledphd2
It actually has made it into machine learning. See the works of Gigerenzer on
heuristic decision making, where he shows that simple heuristics outperform
complex statistical models unless the amount of data is really large. It blew
my mind when I saw it, and a good paper to start with is here:
[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.130...](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.130.7949&rep=rep1&type=pdf)

That being said, almost anything Gigerenzer has written in the last five years
is extremely relevant to this topic.
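For a feel of what those fast-and-frugal heuristics look like, here's a toy
"take-the-best" comparison in Python. The cue data below is invented for
illustration, not taken from the paper; in the actual research, the cue
ordering comes from measured cue validities:

```python
# Toy sketch of a Gigerenzer-style "take-the-best" heuristic: compare two
# objects on binary cues in order of validity, and let the first cue that
# discriminates decide. Cue data here is invented for illustration.

def take_the_best(a, b, cues):
    """Return "a", "b", or "tie" for the paired comparison of dicts a and b."""
    for cue in cues:
        if a[cue] != b[cue]:
            return "a" if a[cue] > b[cue] else "b"
    return "tie"  # no cue discriminates

# Which of two cities is larger? Cues ordered by (assumed) validity.
berlin = {"capital": 1, "has_team": 1}
bonn = {"capital": 0, "has_team": 1}
print(take_the_best(berlin, bonn, ["capital", "has_team"]))  # prints: a
```

Part of why such a heuristic generalizes well from small samples is visible
even in the sketch: it ignores everything past the first discriminating cue,
so there is very little for it to overfit.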

------
craze3
Anyone else feel that the article was exceedingly lacking in substance,
especially for an NYTimes.com article?

~~~
hugh3
I thought that the article was exceedingly lacking in substance, typically for
an nytimes.com article.

~~~
chubot
NYTimes makes some mistakes and publishes some junk, but they also have great
content. It's been uneven. What's better?

I couldn't get my NYTimes this weekend, so being a Bay Area resident I got the
SF Chronicle. The writing there is for grade school kids. Not trying to be
funny, but it was sad.

(Don't get me started on the Economist...)

------
Pynkrabbit
It definitely makes a lot of sense. Just try speaking with someone about
politics or religion. Even if you conclusively prove that the other person's
views are not based in fact or reason, they will refuse to acknowledge you are
right, and then usually get mad and stop talking to you. Humans are most
certainly not 'rational beings'. Our thinking is constantly biased by our
formative experiences and our environment.

~~~
bermanoid
_Humans are most certainly not 'rational beings'. Our thinking is constantly
biased by our formative experiences and our environment._

It goes far deeper than that, too. Our raw pattern matching sensitivity is
cranked to the max at a very low level, and this hypersensitivity to perceived
order ricochets throughout the entire system of data processing that our brain
engages in.

We see patterns _everywhere_ , whether they're real or not, and we have
trouble unseeing them even once we know for a fact that the data is random, or
that the pattern fails. Statistically speaking, we're a freaking mess, we're
constantly pulled towards the wrong answers, we never have good estimates
about how reliable our inferences are, it's just an all around bad scene.

And yet the combination of all of these seriously flawed pattern inferences
leads to a creature that, all said and done, makes pretty damn useful
predictions about a lot of things, even if the details of how those
predictions get made are all wrong. This is surprising, since typically in
statistics when we use algorithms that are too optimistic or sensitive we end
up with pure garbage. If I had to guess, humans end up implementing something
like the reverse of a typical boosting algorithm, in that we take a bunch of
too-strong pattern-recognizing subunits and then put them together into
something that pits them against each other to become more robust against
mis-prediction. But I don't have any data to back up that assumption, or any
clear idea how it might work - which is, I guess, a perfect example of exactly
the kind of mental stupidity that we're so commonly driven by.

~~~
anirudhjoshi
_"If I had to guess, humans end up implementing something like the reverse of
a typical boosting algorithm, in that we take a bunch of too-strong pattern
recognizing subunits, and then put them together into something that pits them
against each other to become more robust against mis-prediction"_

"Ensemble methods" seems to be what you're talking about. (
<http://en.wikipedia.org/wiki/Ensemble_learning> )

It's the application of many models, put together to produce one signal that
predicts the future more accurately.

I believe the Netflix challenge was won using ensemble methods acting in
concert.

 _"Our final solution (RMSE=0.8712) consists of blending 107 individual
results. Since many of these results are close variants, we first describe the
main approaches behind them. "_

( PDF paper:
[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.142...](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.142.9009&rep=rep1&type=pdf)
)

QIM, a large hedge fund that trades futures, also uses the same approach.

 _"In more direct language, Woodriff uses a statistical technique called the
ensemble method, which is a way of mining data to produce something akin to
the wisdom of crowds. A bundle of computer models, each searching for patterns
in different ways, are linked together to produce a consensus statistical
prediction—a sort of prediction by algorithmic committee. Scientists use the
method to help predict ozone levels, for example. Woodriff uses it to help
predict where futures markets are headed over a 24-hour period. His
predictions are derived from four basic bits of historical pricing
information: the open, close, high and low of specific markets.

Rishi Narang, whose Telesis Capital is a longtime investor in QIM, says other
fund managers use similar methods and techniques. "The core idea is not so
magical," Narang says. "It is how he puts it together. Getting the program
correct is very challenging."_

( [http://www.absolutereturn-alpha.com/Article/2361672/QIMs-
Jaf...](http://www.absolutereturn-alpha.com/Article/2361672/QIMs-Jaffray-
Woodriff-The-monk-in-managed-futures.html?ArticleId=2361672) )
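The core blending idea is small enough to sketch in a few lines of Python.
Everything below is made up for illustration; a real blend (like the
107-result Netflix one) weights and stacks its members far more carefully:

```python
# Toy ensemble blend: average the predictions of several simple models.
# The three "models" are invented stand-ins for independently trained
# predictors of roughly y = 2x.
models = [
    lambda x: 2.0 * x,
    lambda x: 1.9 * x + 0.3,
    lambda x: 2.1 * x - 0.2,
]

def blend(x):
    """Simplest possible blend: the unweighted average of all predictions."""
    return sum(m(x) for m in models) / len(models)

print(round(blend(3), 2))  # prints: 6.03
```

Real blends typically learn a weight per member (e.g. by regressing the true
values on the members' predictions over a holdout set) instead of averaging
uniformly.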

~~~
joshhart
Boosting, which he mentioned, is an ensemble method, so I assume the parent is
familiar with them.

Ensemble methods incorporate multiple weak classifiers and work to make them
stronger. I think the parent was thinking of the reverse of this, although
that idea seems pretty alien to me.

~~~
bermanoid
Yes, I'm familiar with ensemble methods, I use them a lot for classification.
But those are not really what I'm thinking about (I'm still groping towards
concrete ideas here, so forgive me if the following is a bit vague). Perhaps
my saying "the reverse of boosting" is not really an accurate way to put this,
in retrospect, so let me clarify.

Ensemble methods typically take several distinct (either by method or
training) weak learners and combine the predictions to get one strong hybrid
by smoothing, averaging, or otherwise combining the results. They are still
vulnerable to overtraining, though, and they're not very good at generalizing
from small amounts of data because the individual weak learners don't learn
from each other or from context.

My theory is that we might be able to get rid of the ensemble and tolerate
massive overtraining without detriment if instead of merely combining results,
we took a recursive approach and let the classifier use its output as input at
another level. My thought is that overtraining on some patterns could be
mollified by the ability to recognize error due to overtraining as a pattern
at a different depth of recursion.
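To make that a bit more concrete, here's a deliberately silly Python sketch of
"output as input at another level". Everything in it is invented; it's only
meant to illustrate the shape of the idea, not to be a working learner:

```python
# Toy sketch of feeding classifier outputs back in as input at another level.
# All the rules are invented; only the structure matters.

def base_predictors(x):
    # Deliberately "overtrained" base rules, each memorizing a different quirk.
    return [x > 2, x % 2 == 0, x in (1, 4, 9)]

def second_level(outputs):
    """Treat the pattern OF base outputs as new input: a majority vote, plus
    a confidence score derived from how much the subunits agree."""
    votes = sum(outputs)
    decision = votes >= 2
    confidence = abs(votes - len(outputs) / 2) / (len(outputs) / 2)
    return decision, confidence

print(second_level(base_predictors(4)))  # prints: (True, 1.0)
```

The overtraining-detection part of the idea would need the second level to be
trained on where the first level errs, which is closer to stacking or
error-correction than the simple vote above.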

This obviously would not be generally applicable to weak learners, it would
only apply to a particular subset of learners, and that's where my thoughts
get a lot muddier and more speculative.

My really wild speculation: in the limit, if you set something like this up in
the right way, you might be able to come up with an efficient approximation to
Solomonoff induction as restricted to the subset of patterns that you're
actually exposed to, rather than over the entire set of possible inputs. If
I'm correct about that, it would enable staggeringly effective learning within
a domain, as long as the domain itself displayed patterns that had some sort
of underlying order.

But I don't have any codez to show, or really anything more than a hunch at
this point, so don't take me too seriously. :)

------
kristianp
There is also an article consisting of an excerpt from Kahneman's book here:

[http://www.nytimes.com/2011/10/23/magazine/dont-blink-the-
ha...](http://www.nytimes.com/2011/10/23/magazine/dont-blink-the-hazards-of-
confidence.html?ref=magazine&pagewanted=all)

(Empty) HN discussion of it: <http://news.ycombinator.com/item?id=3141022>

------
FD3SA
"We are players in a game we don’t understand."

We've had the opportunity to understand the game ever since Darwin published
the Origin of Species. Yet, even Darwin himself struggled with the
ramifications of what we truly are (and aren't) after his mind-numbing
discovery. The truth is far too devastating for the majority, and it is this
fact that divides us. A brain can only be of three dispositions: one that
understands reality, one that refuses to, and one that doesn't. A subset of
the last is a brain which simplifies a complex, poorly understood reality into
one that is far easier to grasp. This last one is where the majority find
comfort.

Human motivation is frighteningly simple if looked at objectively, and it is
this truth that we hide from ourselves at all costs to preserve our sanity.

~~~
shadowfox
I must admit that I didn't quite get what you said here :( Would you care to
elaborate?

------
InclinedPlane
Relevant: [http://www.smbc-
comics.com/index.php?db=comics&id=2095#c...](http://www.smbc-
comics.com/index.php?db=comics&id=2095#comic)

------
pak
Yet another entry in the long list of pop psychology books. It seems like they
all gear up on one or two navel-gazing insights that just about anybody can
intuitively identify with (You have a slow, rational side and a fast,
emotional side! Doesn't that explain everything?) and then try to run as far
as they can with the implications of this overly dumbed-down hypothesis.
Carefully cherry-picked statistics from the millions of social phenomena and
psychological experiments taking place around the world are sprinkled into the
narrative to keep you engaged. (Side rant: all of which have their
methodological details conveniently obscured to prevent your critical thinking
from kicking in, and you are extremely lucky if the sample size is provided,
much less any attempt at a p-value or other discussion of statistical
significance. Nope, it's usually just "Amazingly enough, 89% of ...")

Example: the silly birdie vs. bogey data presented in this little article.
Great, people fear bogeys more than they want birdies, and perhaps it
ties back into some aspect of your central hypothesis. But how many other
oversimplified statements about human nature could I "prove" with this
example? Probably hundreds. Maybe it's a completely rational strategy on the
part of the golfer, since their experience has taught them that the
(emotional|physical|mental) effort required to sink a birdie putt is not as
productive in the long-term as at least making par on every hole. That kind of
alternative thinking doesn't matter though, so we simply move to the next
experiment and supportive conclusion. Repeat ad infinitum, until we've
fulfilled the length requirement for a novel.

No, I did not enjoy Freakonomics (can you tell?).

~~~
karolist
I agree with you, but I wonder if the article was like that from Daniel
Kahneman himself or from the journalist. Having dealt with the latter, I
really understand how good they are at bending what you said.

Also, if Daniel were a pop scientist, would he be given a Nobel Prize in his
field? Yes, Obama comes to mind, but still.

~~~
pak
I wouldn't try to make any comment on the overall value of his research, and I
am sure his academic writing is much more rigorous. But I really think these
"universal secrets of the mind" books, written for a general audience, go too
far in clouding original and critical thought by presenting such a slick,
skewed narrative. They're sold as guides to better thinking but wind up
inducing the opposite. It feels dishonest.

