What they did do was force the economics profession to face up (somewhat) to these issues, and their contribution to loss aversion and prospect theory more generally is a significant advance.
That being said, their dual-process models are about as predictive as Freud's (which is to say, not at all). It's currently a really active phase of research, though, so I suppose I can thank them for making it easier for me to get funding.
They also did not invent priming, though they made heavy use of it. Likewise, framing effects were well known before them, going back at least to Asch's 1951 study of conformity in judgments of line length.
To sum up: Kahneman is an amazing scientist, but this reporter doesn't seem to know what he's talking about.
That is not an economic model; it is a straw man of the economic "model," which states that, en masse and over time, humans behave rationally where their interests are involved. (I put "model" in quotes because even that is really only one aspect of how economics models human behavior.)
As many "critics" of economics have pointed out, and as Surowiecki reported in The Wisdom of Crowds, the masses often make better decisions than an individual because, among other reasons, their errors tend to cancel out.
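The error-cancellation point is easy to demonstrate with a toy simulation (my own illustrative numbers, nothing from Surowiecki): give many individuals independent, zero-mean noise around a true value, and the crowd's mean lands far closer to the truth than the typical individual does.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0   # the quantity the crowd is estimating
NUM_PEOPLE = 10_000

# Each individual's guess is the true value plus independent noise
# (stdev 20, so individuals are typically off by ~16).
guesses = [TRUE_VALUE + random.gauss(0, 20) for _ in range(NUM_PEOPLE)]

crowd_estimate = statistics.mean(guesses)
avg_individual_error = statistics.mean(abs(g - TRUE_VALUE) for g in guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)

print(f"average individual error: {avg_individual_error:.2f}")
print(f"crowd (mean) error:       {crowd_error:.2f}")
```

The crowd only wins this decisively when the errors are independent; correlated biases (everyone anchored on the same headline) do not cancel.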
It came highly recommended by a Stanford professor and is one of the most "meaty" books on the topic.
Discussing animal spirits in various videos on youtube: http://www.youtube.com/results?search=Search&resnum=0...
Indeed! Minsky once said that consciousness is the brain's debug trace.
On Being Certain - Robert A. Burton
That being said, almost anything Gigerenzer has written in the last five years is extremely relevant to this topic.
I couldn't get my NYTimes this weekend, so being a Bay Area resident I got the SF Chronicle. The writing there is for grade school kids. Not trying to be funny, but it was sad.
(Don't get me started on the Economist...)
It goes far deeper than that, too. Our raw pattern matching sensitivity is cranked to the max at a very low level, and this hypersensitivity to perceived order ricochets throughout the entire system of data processing that our brain engages in.
We see patterns everywhere, whether they're real or not, and we have trouble unseeing them even once we know for a fact that the data is random or that the pattern fails. Statistically speaking, we're a freaking mess: we're constantly pulled toward the wrong answers, and we never have good estimates of how reliable our inferences are. It's just an all-around bad scene.
And yet the combination of all of these seriously flawed pattern inferences leads to a creature that, when all is said and done, makes pretty damn useful predictions about a lot of things, even if the details of how those predictions get made are all wrong. This is surprising, since in statistics, algorithms that are too optimistic or too sensitive typically produce pure garbage. If I had to guess, humans implement something like the reverse of a typical boosting algorithm: we take a bunch of too-strong pattern-recognizing subunits and pit them against each other so that the whole becomes more robust to misprediction. But I don't have any data to back up that assumption, or any clear idea how it might work - which is, I guess, a perfect example of exactly the kind of mental stupidity we're so commonly driven by.
"Ensemble methods" seems to be what you're talking about. ( http://en.wikipedia.org/wiki/Ensemble_learning )
The idea is to combine many models into one signal that predicts more accurately than any single model would.
I believe the Netflix Prize was won using ensemble methods.
"Our final solution (RMSE=0.8712) consists of blending 107 individual results. Since many of these results are close variants, we first describe the main approaches behind..."
( PDF paper: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.142... )
QIM, a large hedge fund that trades futures, also uses ensemble methods.
"In more direct language, Woodriff uses a statistical technique called the ensemble method, which is a way of mining data to produce something akin to the wisdom of crowds. A bundle of computer models, each searching for patterns in different ways, are linked together to produce a consensus statistical prediction—a sort of prediction by algorithmic committee. Scientists use the method to help predict ozone levels, for example. Woodriff uses it to help predict where futures markets are headed over a 24-hour period. His predictions are derived from four basic bits of historical pricing information: the open, close, high and low of specific markets.
Rishi Narang, whose Telesis Capital is a longtime investor in QIM, says other fund managers use similar methods and techniques. "The core idea is not so magical," Narang says. "It is how he puts it together. Getting the program correct is very challenging."
( http://www.absolutereturn-alpha.com/Article/2361672/QIMs-Jaf... )
Ensemble methods incorporate multiple weak classifiers and work to make them stronger. I think the parent was thinking of the reverse of this, although that idea seems pretty alien to me.
Ensemble methods typically take several distinct weak learners (distinct either by method or by training data) and combine their predictions into one strong hybrid by voting, averaging, or otherwise blending the results. They are still vulnerable to overtraining, though, and they're not very good at generalizing from small amounts of data, because the individual weak learners don't learn from each other or from context.
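As a concrete toy illustration of the voting idea (my own made-up task and learners, not any production system): three crude one-rule classifiers, each mediocre on its own, beat all of their individual accuracies when combined by majority vote.

```python
import random

random.seed(0)

# Toy task: label a point (x, y) in [-1, 1]^2 as 1 if y > x, else 0.
def true_label(x, y):
    return 1 if y > x else 0

# Three deliberately weak learners: each uses a single crude rule, so each
# is right more often than not, but none is very accurate on its own.
weak_learners = [
    lambda x, y: 1 if y > 0 else 0,        # ignores x entirely
    lambda x, y: 1 if x < 0 else 0,        # ignores y entirely
    lambda x, y: 1 if y - x > 0.5 else 0,  # right idea, biased threshold
]

def ensemble_predict(x, y):
    # Majority vote over the weak learners' predictions.
    votes = sum(learner(x, y) for learner in weak_learners)
    return 1 if votes >= 2 else 0

points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20000)]

def accuracy(predict):
    return sum(predict(x, y) == true_label(x, y) for x, y in points) / len(points)

for i, learner in enumerate(weak_learners):
    print(f"weak learner {i}: {accuracy(learner):.3f}")
print(f"majority vote:  {accuracy(ensemble_predict):.3f}")
```

The vote helps here because the learners make their mistakes in different regions; if they all failed on the same inputs, combining them would buy nothing.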
My theory is that we might be able to get rid of the ensemble and tolerate massive overtraining without detriment if instead of merely combining results, we took a recursive approach and let the classifier use its output as input at another level. My thought is that overtraining on some patterns could be mollified by the ability to recognize error due to overtraining as a pattern at a different depth of recursion.
This obviously would not be generally applicable to weak learners; it would only apply to a particular subset of learners, and that's where my thoughts get a lot muddier and more speculative.
My really wild speculation: in the limit, if you set something like this up in the right way, you might be able to come up with an efficient approximation to Solomonoff induction as restricted to the subset of patterns that you're actually exposed to, rather than over the entire set of possible inputs. If I'm correct about that, it would enable staggeringly effective learning within a domain, as long as the domain itself displayed patterns that had some sort of underlying order.
But I don't have any codez to show, or really anything more than a hunch at this point, so don't take me too seriously. :)
And current mood. I have no numbers on this, but I am pretty sure that people with chronic pain tend to have more negative thoughts than the average person.
(Empty) HN discussion of it: http://news.ycombinator.com/item?id=3141022
We've had the opportunity to understand the game ever since Darwin published the Origin of Species. Yet even Darwin himself struggled with the ramifications of what we truly are (and aren't) after his mind-numbing discovery. The truth is far too devastating for the majority, and it is this fact that divides us. A brain can only be of three dispositions: one that understands reality, one that refuses to, and one that doesn't. A subset of the last is a brain which simplifies a complex, poorly understood reality into one that is far easier to grasp. This last one is where the majority find comfort.
Human motivation is frighteningly simple if looked at objectively, and it is this truth that we hide from ourselves at all costs to preserve our sanity.
Example: the silly birdie vs. bogey data presented in this little article. Great, people want birdies more than they don't want bogeys, and perhaps it ties back into some aspect of your central hypothesis. But how many other oversimplified statements about human nature could I "prove" with this example? Probably hundreds. Maybe it's a completely rational strategy on the part of the golfer, since their experience has taught them that the (emotional|physical|mental) effort required to sink a birdie putt is not as productive in the long term as at least making par on every hole. That kind of alternative thinking doesn't matter, though, so we simply move to the next experiment and supportive conclusion. Repeat ad infinitum, until we've fulfilled the length requirement for a novel.
No, I did not enjoy Freakonomics (can you tell?).
Also, if Daniel were just a pop scientist, would he have been awarded a Nobel Prize in his field? Yes, Obama comes to mind, but still.