Hacker News
Why Is Behavioral Economics So Popular? (nytimes.com)
208 points by malshe 6 days ago | 144 comments

I blame Freakonomics. It's a great podcast (and book), but it occasionally pushes some shoddy science.

Also, TED and TEDx, for popularizing pseudo-intellectual garbage.



.... and basically the idea that telling a good story is more important than representing the truth when talking about science.

Freakonomics etc made us fall in love with that particular kind of story where the outcome is kinda counter-intuitive and surprising and clever all at the same time.

For example - the legalised abortion and crime effect. I'm sure you can think of plenty more.

I think these stories are like crack to "rational" people. They have all the thrill of a good conspiracy theory, feel rebellious and anti-establishment, yet they're backed up by "real science". And you get to sound clever when you tell your friends about them. I know I'm a sucker for them.

These stories become like memes that get reposted over and over.

There was a "study" about drug testing welfare applicants in Florida a few years ago where the authors claimed that it only caught a handful of people and cost more than the money saved.

I looked up the numbers and found that applications took a historic plunge when the law went into effect, and an equally large increase after the law was suspended. It was a huge multi-stage funnel where the study only looked at one stage after thousands of applicants had already self-selected for drug-freeness. Applications went down by thousands, and payouts by millions.

Basically if you hired a data scientist who told you this about some funnel you were analyzing, like a purchase or sign-up flow, you would fire this person within five minutes. But people repeated this gotcha story everywhere.


This does not support your claim.

I found a reference that 1600 applicants declined the mandatory testing from July-October 2011. Meanwhile 7000 applicants underwent testing, of which 1% failed.

I haven't been able to find monthly figures for before, during, and after -- so I, too, would like to see kwillet's data.


This is heavily slanted, but I believe their numbers are factual.

>feel rebellious and anti-establishment

I wonder what this is and why it is so prevalent (and maybe how it made our species/specific cultures successful).

You can list a large number of groups that wrap up their identity and value systems around being opposed to or having knowledge that counters their perception of "the majority". It is both incredibly attractive and a way to signal your value by going against the grain in some way or another, in some field or another.

Some philosopher/psychologist/neurologist/evolutionary biologist could probably do a very compelling analysis of the human need to rebel or obey.

One of the societal ills of the 21st century is that we're running out of big clear cut things to rebel against but are continuing to rebel with the same intensity while directing it somewhat aimlessly all over the place, and now mostly just at each other's rebellion itself. Political parties in America these days are less anchored in particular issues and more about believing strongly in something your opponent is against and making sure your team wins.

"One of the societal ills of the 21st century is that we're running out of big clear cut things to rebel against but are continuing to rebel with the same intensity while directing it somewhat aimlessly all over the place, and now mostly just at each other's rebellion itself."

Yeah, but the key phrase is clear cut. We have plenty of big institutions, and one can see them do terrible things. It's just that they fade into each other, and so challenging any single one becomes a dubious prospect. Health care is a mess, but blaming, say, insurance companies in particular is problematic. The environment and global warming are huge problems, but blaming single institutions misses the point, etc.

Moreover, large institutions are adept at harnessing issues-as-social-signal and directing them according to their interests. One can fight all this but such a fight doesn't follow the path of least resistance. And boy, are people trained, today, to follow that path, whether in rebellion or conformity.

> Health care is a mess but blaming say, insurance companies in particular, is problematic.

I think this attitude is the key problem: fuck blame. Seriously, who cares? There absolutely are concrete things we can do about healthcare and climate change and so on, but not if we collectively think that blaming this or that institution is going to motivate people to fix the problem.

Someone has to pay for any change that happens - for example if a company is dumping toxic waste into the environment, $100M is going to be lost from their pockets if they have to stop (cleanup + price of proper disposal). That $100M could come from anywhere in the economy, so unless you want Amazon warehouse workers to start paying in to an industrial waste disposal subsidy system you better start constructing arguments that look something like, "the perpetrator should pay to clean it up."

What we have are complex problems instead of simple ones.

For example, slavery up to 150 years ago vs. segregation up to 50 years ago vs. subtle often unconscious racism today.+ Fighting to abolish slavery is a lot simpler than fighting for racial equality. It is not that there are not problems, but there are not so many obvious problems with obvious solutions today.

+ I'm trying to make a broad point here not perfectly depict the history of racism in America, forgive any unjust implications

Stopping global warming seems very simple - just limit CO2 production. The main thing is that institutions are complex, and so even simple change requires complex processes.

I would strongly disagree that any plan to limit CO2 production would be simple.

The good news here is we're all going to find out very soon just how hard it might be.

Simplest are carbon credits.

Being intelligent is the most prized human trait, so being seen as "stupid" hurts our egos really bad.

Being contrarian in this manner is proof that you're smarter than everyone else, plus you get to take a jab at the pretend smarty-pantses. I bet this same effect is why political opinions are so difficult to change.

It's probably as simple as this: being the archetypal "rebel" helps your chances of getting laid a small bit, as the opposite sex likes a vicarious thrill.

"I rebel — therefore we exist." - Albert Camus

"we're running out of big clear cut things to rebel against"

Or we're getting much better at the PR processes that muddy up any public discussion of problems and you lack the history education to know how rare actual clear cut historical political problems were.

Do you really think it is more likely that everyone else is virtue signaling than that you have a status quo bias from living in a culture that pushes that message? (And I'm guessing you are quite successful as well.)

When has the majority been right about any complex issue anyway? What a bunch of reactionary anti-intellectual nonsense.

Indeed, many of the exact claims are not to be trusted, though on the other hand keeping this sort of mentality/thinking framework when analyzing the world around us can still be helpful. Putting it as "conspiracy theory, rebellious and anti-establishment" is really quite a mischaracterization. I'd say quite the contrary: those who sit at the top of society, of the "establishment", definitely think in very different ways than the population. We all know we need to look beyond what's being mass-produced and fed to people on the surface to understand how a lot of things truly work. You may be one of the minority who are closer to the truth, but you're not alone in your understanding of the matters. This is very different from some bonkers conspiracy theory that is dreamt up out of nowhere.

Basically, being "rebellious, anti-establishment" means denying/rejecting everything about how the society works and searching for an alternative. Being "smart" means understanding exactly how the society actually works instead of how it's "claimed" to work under propaganda and falsehoods, and maybe using the mechanism to your advantage as much as possible. They are very, very different things.

The Tipping Point came out five years earlier.

Also, I think it's important to note that these stories are not actually anti-establishment, even if they have such an appearance; they don't offend any powerful interests. In fact, they're often quite the opposite (consider that Malcolm Gladwell wrote an entire book arguing that being disabled or growing up poor or whatever else is actually an advantage! The people most likely to take exception to that thesis do not have bylines in the New York Times, nor do they have the editor's personal phone number)

See also: anything by Malcolm Gladwell

I think "pattern addiction" is definitely a thing. Human beings are constantly searching for order and patterns. Getting other people to buy into your pattern/narrative/whatever is probably as old as the human race, but in the last 100-150 years we've gotten really fucking good at tricking each other on a massive scale. I think this is why people seem to intuitively understand that political media manipulation, advertising, junk science, etc. are related phenomena. They're all about hijacking the pattern recognition capacity of human beings for personal gain. It's essentially a form of parasitism.

The books were more about finding common patterns and behaviors among seemingly disparate markets than exploring counter-intuitive outcomes. Like how you can spot cheating by looking at whether latter outcomes match former. There's only really one chapter about counter-intuitive outcomes, and that's the one on parents and education, and that's not even counter-intuitive.

You say all that like there's a better way of learning about how the world works that doesn't involve getting another university degree...

>For example - the legalised abortion and crime effect. I'm sure you can think of plenty more.

source on why this is shoddy science?

e.g. https://en.wikipedia.org/wiki/Legalized_abortion_and_crime_e... or https://www.economist.com/finance-and-economics/2005/12/01/o...

> It was a good test to attempt. But Messrs Foote and Goetz have inspected the authors' computer code and found the controls missing. In other words, Messrs Donohue and Levitt did not run the test they thought they had—an “inadvertent but serious computer programming error”, according to Messrs Foote and Goetz.

> Fixing that error reduces the effect of abortion on arrests by about half, using the original data, and two-thirds using updated numbers. But there is more. In their flawed test, Messrs Donohue and Levitt seek to explain arrest totals (eg, the 465 Alabamans of 18 years of age arrested for violent crime in 1989), not arrest rates per head (ie, 6.6 arrests per 100,000). This is unsatisfactory, because a smaller cohort will obviously commit fewer crimes in total. Messrs Foote and Goetz, by contrast, look at arrest rates, using passable population estimates based on data from the Census Bureau, and discover that the impact of abortion on arrest rates disappears entirely. “I am simply not convinced that there is a link between abortion and crime,” Mr Foote says.

>In 2005 Levitt posted a rebuttal to these criticisms on the Freakonomics weblog, in which he re-ran his numbers to address the shortcomings and variables missing from the original study. The new results are nearly identical to those of the original study. Levitt posits that any reasonable use of the data available reinforces the results of the original 2001 paper.[10]

>The effect of legalized abortion reported by Donohue and Levitt (2001) is largely unaffected, so that abortion accounts for a 29% decline in violent crime (elasticity 0.23), and similar declines in murder and property crime. Overall, the phase-out of lead and the legalization of abortion appear to have been responsible for significant reductions in violent crime rates."

According to your link, it doesn't appear to be shoddy science.

That was not a rebuttal against the pointing out of the software error, but to the Lott and Whitley criticism.

The rebuttal to the Foote and Goetz criticism of a programming error is this:

> Donohue and Levitt subsequently published a response to the Foote and Goetz paper.[12] The response acknowledged the mistake, but showed that with different methodology, the effect of legalized abortion on crime rates still existed.

That's troubling to me. They had a hypothesis, and when the experiment didn't confirm that hypothesis, they simply tried a different methodology that might still show it. That feels like moving the goalposts to me.

It's also representative of the negative feeling I had all the time when reading their book: after all their praise of rigorous scientific methods and of not confusing correlation with causation, they follow that up with making those same mistakes themselves. (And I guess the reason that rubbed me wrong is because that is interjected with continuous praise all the time of Levitt. That in turn probably makes me more likely to criticise, so perhaps you should take my criticism with a grain of salt...)

Wow, you cherry-picked parts of that out and left out the part where they admit their science was wrong and say there is no way to prove/disprove. Re-read the last paragraph of that 2005 criticism section to see the authors admitting it was bad.

Sounds like you're the one cherry picking by focusing on only one part of the link the OP posted.

Sadly that extends to everything. Story and emotion trump facts.

Over the short term. Over the human term.

Over the long term, facts win out because facts tend not to change. People will moan and complain and twist and excuse and blame, but facts just don't give up being facts. Facts don't get tired or change their mind. It might take 10 years or a hundred or a thousand. Truth is patient like that.

Unfortunately, that doesn't really solve any problems in the short term before people start dying.

> Story and emotion trump facts.

Pun intended?

Every story is a twist. "This one thing you thought you knew isn't actually true". Great for telling friends in the pub but misleading through selection bias as they don't tell all the stories where things aren't different to what was expected.

Malcolm Gladwell is also very good at pushing interesting, well-told stories with a spurious scientific basis.

I agree. Although Freakonomics feels like an exploration of the weird and unexpected, while Gladwell feels like he is handpicking studies to push the bigger narrative of a book.

TED and TEDx had great content in the first few years, when profs were able to present their lifetime's work.

Once they burned through those folks, they turned to cranks/homeopaths/etc

It sucks what happened with them. TED/TEDx talks were awesome, and I used to be able to recommend that people watch any and all of them.

Until I started seeing stuff like this https://www.youtube.com/watch?v=w8J5BWL8oJY

lol i skipped to the end and instantly felt bad for the audience even with my audio off

my god, that is terrible! I generally think there is good content in the TED talks, but I now won't even look at the TEDx talks because regionally they do have to turn to the cranks.

Haha, that's even worse than the TED rocks parody...

OMG. That's garbage.

Even in the first years, they suffered from the simple fact that you can't present something correct and useful in a short popular talk with no prerequisite shared knowledge. It's "empty calories" that makes it feel like you learned something, but it's really just entertainment. Neil Postman in "Amusing Ourselves to Death" regards these types of talks as more pernicious than things like reality TV that don't claim to provide any useful information.

Would you also apply that to say MinutePhysics on YouTube?

Recommend buddying-up with some friends and subscribe to Great Courses on ROKU if you want some additional intellectual stimulation. Runs around US$180/annum but app channel can be shared among (I believe) 5 other remote sites. I subscribe, use it on three separate ROKU boxes, and allowed registration from online acct to a relative living a couple of miles away and a retired neighbor who was pining for some mental stimulation - also got her away from the damned news channels :)

The style ended up being their signature, after burning through that content, leading up to this piece of satire:


Though from a public speaking point of view, it's well worth studying (both actual TED talks and this satire) to help make legitimate talks more polished. There's an exchange of emotions that goes on during great talks, and that really helps drive home points.

Once these talks became more about the delivery style than actual substance, it kinda devalued the whole TED branding.

TEDx is where it all went wrong. Anyone could give a TEDx talk, and that gives us garbage of the sort that you have linked.

TED talks were never the best source of information, but at least they weren't actively misleading.

TEDx on the other hand.....

The most impactful Ted talk I ever saw was on tying shoes, so I’d say I appreciated the range of presentations. I can’t really remember any other talks but that one changed my life.

TED and TEDx aren't the same thing

Oh my, how can anybody think that they are in any way affiliated!? /s That's just a cop-out. Both are equally dubious, sugarcoated garbage.

Gladwell’s whole schtick is shooting an arrow and then drawing the bullseye around where it lands.

you blame them for what exactly?

Behavioral economics is applying knowledge of human behaviour to economics. There is no need to go in the opposite direction: economic analysis applied to human behaviour.

Biases are not always errors. They can be cognitive shortcuts and optimizations that may be reasonable heuristics.

There Is More to Behavioral Economics Than Biases and Fallacies http://behavioralscientist.org/there-is-more-to-behavioral-s...

> A widespread misconception is that biases explain or even produce behavior. They don’t—they describe behavior. The endowment effect does not cause people to demand more for a mug they received than a mug-less counterpart is prepared to pay for one. It is not because of the sunk cost fallacy that we hang on to a course of action we’ve invested a lot in already. Biases, fallacies, and so on are no more than labels for a particular type of observed behavior, often in a peculiar context, that contradicts traditional economics’ simplified view of behavior.

>The conversation around biases is almost uniformly negative: they screw up our decision making, or undermine our health, wealth, and happiness. However, biases evolved with us, and for good reasons...

It's widely acknowledged in behavioral economics that biases have their uses. In fact, usually the phrase used to describe them is "Heuristics and Biases"[1].

It's first and foremost a heuristic -- a reasonably good way to generate good behavior. Secondarily, in certain specific situations, it causes non-optimal behavior.

[1] ( https://www.amazon.com/Heuristics-Biases-Psychology-Intuitiv... )

Exactly, discovering economic biases is like discovering optical illusions. The existence of optical illusions doesn't mean that human sight is always flawed, only flawed in certain edge cases.

"All models are wrong, but some are useful." -- statistician George E.P. Box

The guy whom Tversky and Kahneman really hated, Gerd Gigerenzer, had a very good answer to a lot of their work. He pointed out that:

A critic of the work of Daniel Kahneman and Amos Tversky, Gigerenzer argues that heuristics should not lead us to conceive of human thinking as riddled with irrational cognitive biases, but rather to conceive rationality as an adaptive tool that is not identical to the rules of formal logic or the probability calculus. He and his collaborators have theoretically and experimentally shown that many so-called cognitive fallacies are better understood as adaptive responses to a world of uncertainty—such as the conjunction fallacy, the base rate fallacy, and overconfidence.[4]


His books are well worth a read.

Nassim Taleb also has a similar criticism in that he says that Kahneman and Tversky essentially said that humans don't act according to theoretical rules but instead have their own heuristics that have been derived from dealing with an uncertain world throughout history:


Rationality is somewhat of a new concept to humans. It's somewhat surprising that economists took it so literally for that long and put any real faith in it. The assumption of rationality does make the math a LOT easier, and we're doing better now about the irrationality aspects, but that spurt of faith in rationalism was quite interesting.

For the large majority of our history, we've not been rational in the least. HN had a good discussion [0] on the Medieval Mindset [1] earlier in the summer. For ~1000 years in the Medieval period, the main mindsets were not rationality vs. irrationality, right vs. less-wrong vs. wrong, etc. But more Pious vs Impious, Cruel vs Kind, The Ideal vs the Real, etc. The people were no less people, but their heads weren't ours.

What economic theories will come next, what new ways of thinking, what new mindsets? We're so focused on the rational, the 'right' answers, these days. But life, as we all know, is VERY stochastic (a fancy word for random). Maybe new mindsets about the randomness, bounded and given standards of deviation, will be a new paradigm, not just in economics, but in everything we do.

[0] https://news.ycombinator.com/item?id=17058487

[1] https://coinsandscrolls.blogspot.com/2017/09/thinking-mediev...

I parsed Taleb differently to you. I think part of his point is that most behavioural economics only looks at whether behaviour is rational or not at the individual level, but misses the fact that we're a herd species and behaviour has meaning in aggregate as well - there are second-order effects. His point about nudges etc. is that they are dangerous, as we focus on first-order effects but ignore second-order effects. This is what creates large risks: for example, defaulting everyone into one type of pension plan, which is itself defaulted into one style of index investing, where most of the indexes use the same tracking algorithm, creates second-order risks that we don't understand, where there were none previously.

I think it's mostly the weakness of the pre-behavioral economics. The status quo was economics that, almost on principle, ignored the way actual humans behave. Any amount of consideration of how actual humans behave, however limited or flawed, will look like an improvement. It's much like the way behaviorism in psychology, which seems lacking nowadays, actually looked good compared to the Freudian school of psychology it was displacing.

> The status quo was economics that, almost on principle, ignored the way actual humans behave

Why do people say this? This was never true. The Wealth of Nations is basically a compendium on human behavior, and that was over 200 years ago.

The Wealth of Nations was not the status quo, though. The neoclassical model of economics was much more committed to a model in which the individual maximizes some utility function. The neoclassical model certainly built on Adam Smith, but was by no means the same, and one of the differences was that neoclassical economics tried to find ways to ignore or abstract away anything to do with human psychology.

Much of behavioral economics is based on a very shaky foundation of psychology.

For instance, the priming experiments cannot be reproduced.

> This result confirms Kahneman’s prediction that priming research is a train wreck and readers of his book “Thinking Fast and Slow” should not consider the presented studies as scientific evidence that subtle cues in their environment can have strong effects on their behavior outside their awareness.


I was thinking about this with Edward Bernays recently. He is considered the 'father of Public Relations', and comes up in a lot of pseudo-sciency conversations, like Adam Curtis's documentary, "The Century of the Self." *

Bernays comes up a lot in counter-cultural circles, but it's always bugged me that I've never seen any validating evidence that his techniques were actually effective, beyond "he came up with the 'torches of freedom' idea and smoking went up among women."

His techniques sound interesting, and they feel like they'd be effective, but I had trouble finding any solid research that validated the idea that his techniques were effective at anything other than making himself famous.

(* Don't get me wrong – I love Curtis's films, as art. But it bugs me how they usually present a flood of information as fact, with little to no citation or corroboration. If they were just art, it wouldn't bug me, but a lot of people seem to swallow the films' conclusions wholesale.)


It’s not limited to marketing either, management theory is full of this too.

Almost all of Frederick Winslow Taylor’s work and reputation was built on unverified case studies and anecdotes. (A great book on this is ‘The management myth’.)

The conversation is rarely about his actual techniques, but his popularization of that particular approach to communication with the public.

So, to really over-simplify: "If you want people to do X, don't just tell them to do X or order them or try to convince them; use psychology to understand which Y's and Z's you can tell them about that will statistically lead many people to do X."

He was just the guy who said "Hey, this emerging scientific field can be really useful in manipulating people even though it is early days!"

While reading "Thinking, Fast and Slow" I was astonished by the priming effects, and the confidence with which Kahneman presented information about priming. I became obsessed with the priming concept and began doing further research, only to find countless articles discounting priming's legitimacy. I haven't been able to pick up the book since, because of the way in which the information was presented as sure fact, when in reality the research was early and inconclusive.

I think you're not quite right with the timeline here. IIRC, when "Thinking, Fast and Slow" first came out, the research was considered pretty settled and had had lots of replications. It's only a few years afterwards that the replication crisis really hit psychology in a big way, and especially priming.

So with the benefit of hindsight, yes, he presented faulty research - but he didn't know (and couldn't have known) that at the time.

(Of course, if the lesson is that the entire field should be considered skeptically, I don't think I'd disagree)

See also Kahneman's response to this: https://replicationindex.wordpress.com/2017/02/02/reconstruc...

> I still believe that actions can be primed, sometimes even by stimuli of which the person is unaware. There is adequate evidence for all the building blocks: semantic priming, significant processing of stimuli that are not consciously perceived, and ideo-motor activation. I see no reason to draw a sharp line between the priming of thoughts and the priming of actions. A case can therefore be made for priming on this indirect evidence. But I have changed my views about the size of behavioral priming effects – they cannot be as large and as robust as my chapter suggested.

(Discussed on HN: https://news.ycombinator.com/item?id=15228712)

I think I have some old comment here where I tried to find anyone else who found that book to be completely unconvincing and really bad at justifying the conclusions.

People acted like I was insane.

I know that psychology is particularly devastated by the replication crisis, but aren't other branches of research affected as well?

I think one of the problems "we" have is that confirmation bias is a hell of a drug. By "we" I mean individuals, researchers, companies, corporations, governments, everybody. When we see something that intuitively makes sense to us, especially something that confirms our biases, we tend to want to believe it, because we all believe what we think is true, and so things that support what we believe must therefore be true, according to what we believe. Some word salad there, but I think that's reasonably clear.

Like you mention, the replication crisis should really leave most psychology results, not only past but also present, in serious doubt -- let alone anything contingent on those past results. But again, the problem is that when new research comes out that confirms our biases, we don't expend the energy to challenge it.

And it seems that many in science are more concerned with themselves than their science - something that the replication crisis provides a great deal of evidence for. People aren't putting out trash science by accident. They need to publish, and trash science is what gets published. And the replication crisis hasn't changed this. There seems to have been, at best, token efforts to try to more fully ensure the truthfulness of what's being published. So we continue to believe what we want to believe, with science increasingly falling victim to the act of starting at a conclusion and working your way backwards.

Personally, I wonder if it's not simply that psychology is the first field to investigate the issue seriously.

Medicine is being hit almost as badly. In general, fields where a lot of studies have low statistical power (usually due to noisy data and/or small sample sizes) are the hardest hit. Statistical power is the chance that a significant result is found if the effect is indeed real (one minus the chance of a false negative). Counter-intuitively, low power increases the chance that a given significant result is invalid.

Combine that with non-reporting of negative results and you basically have a huge pile of bullshit.
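The arithmetic behind this can be sketched in a few lines (the priors and power levels below are my own illustrative assumptions, not figures from the thread):

```python
# Probability that a statistically significant result reflects a real
# effect, given the field's base rate of true hypotheses (prior), the
# typical study power, and the significance threshold (alpha).
def ppv(prior, power, alpha=0.05):
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# Well-powered field, half of tested hypotheses true:
print(round(ppv(prior=0.5, power=0.8), 2))  # 0.94
# Low power, few true effects (common in noisy fields):
print(round(ppv(prior=0.1, power=0.2), 2))  # 0.31
```

And that's before accounting for non-reporting of negative results, which skews the published record even further toward the false positives.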

I would extend that to "statistics-based knowledge" in general. If you don't understand the mechanism, it's not real knowledge.

For example: You have a drug that 100% helps in a certain context (of what the problem is and the person's genetics plus maybe even epigenetics, and maybe even things like what they eat and what environment they are exposed to).

However, clinical trial studies don't go very deep in separating different kinds of people. We just don't know enough, we don't know how to even measure most things, and when we do it's extremely costly. And we don't understand the mechanism - if we did, we wouldn't have to go through the trials. (They are still needed because, even when we have _a_ mechanism, we are not sure what else the drug does in the body, or about follow-up and higher-order effects - we would not know what to look for anyway.) Plus, without full understanding of the mechanism, it's hard to combine the new "knowledge" with other knowledge. The experiment you gained the data from is very specific, and results are hard to generalize.

So the result is the drug will be a complete failure, because we are unable to tell which people would benefit.

That's always the problem: When you don't have a very good understanding of the mechanism and all the consequences you have to be lucky that the population you study is more or less the correct one. You don't even know when you got the wrong one. If the drug - but same in any other field that uses statistics - fails for 95% of people you may still have a hit for a sub-population (it's not as easy as "it's the other 5%" of course).

There are people working specifically on looking closer at some "failed drugs". They have been able to find a few "miracle drugs" that way. They only help certain people (they use genetic testing) but when they do they do great. But they failed their initial clinical trials big time.

Over the years I have gotten much more skeptical of all statistics-based "knowledge". Reading those studies always leaves a strange taste in my mouth. Something doesn't feel right; it does not taste like knowledge. I do see the relevance, of course, given that it is often the only way to make practical progress.

>That's always the problem: when you don't have a very good understanding of the mechanism and all the consequences, you have to be lucky that the population you study is more or less the correct one. You don't even know when you got the wrong one. If the drug (and the same holds in any other field that uses statistics) fails for 95% of people, you may still have a hit for a sub-population (it's not as easy as "it's the other 5%", of course).

Good point, but this kind of subgroup analysis also runs into the multiple comparisons problem. If you test all kinds of subgroups (or just run a lot of tests of any kind), chances are good that you will run into false positives. In such scenarios, it becomes more probable than not that a given positive result is in fact false. Combine that with the base-rate fallacy (the great majority of drug trials yield null results, but this is not taken into account) and you are in really bad shape.
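The multiple-comparisons point is easy to simulate. This sketch (with a made-up subgroup count) tests 40 subgroups of a trial where the drug truly does nothing, so every "significant" subgroup is a false positive:

```python
import random

random.seed(0)
ALPHA = 0.05       # per-test significance threshold
N_SUBGROUPS = 40   # e.g. slicing one null trial by age, sex, genotype, ...
N_TRIALS = 1000    # simulated null trials

false_hits = 0
for _ in range(N_TRIALS):
    # On pure-noise data, each subgroup test is "significant" with prob. ALPHA.
    false_hits += sum(random.random() < ALPHA for _ in range(N_SUBGROUPS))

per_trial = false_hits / N_TRIALS
print(f"avg spurious 'significant' subgroups per null trial: {per_trial:.1f}")
# Expect about N_SUBGROUPS * ALPHA = 2 false positives per trial; the chance of
# at least one is 1 - (1 - ALPHA)**N_SUBGROUPS, roughly 0.87 here.
```

So with 40 uncorrected subgroup tests, finding a "responder subgroup" in a null trial is close to guaranteed.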

There was a replication study a while ago that tried to replicate IIRC 50 or so "landmark" cancer studies, and only came up with significant results in 6 cases.

Bayesian methods go a long way towards solving these issues, but there is no cure for low-power studies. They just can't tell you much, and will lead you heavily astray if you don't properly account for multiple comparisons, etc.

>Over the years I have gotten much more skeptical of all statistics-based "knowledge". Reading those studies always leaves a strange taste in my mouth. Something doesn't feel right; it does not taste like knowledge. I do see the relevance, of course, given that it is often the only way to make practical progress.

Clinical studies of new drugs have uncertainty, so any decision-making based on their results must take this uncertainty into account. You can't get away from statistics here. The bad taste in your mouth may come from the unintuitive and usually inappropriate use of P-values and null-hypothesis significance tests. The vast majority of researchers are mistaken about what P-values even mean: most think a P-value is "the chance of a false positive" or something similar, when it is actually the probability of observing data at least as extreme as what was seen, assuming the null hypothesis is true.

Bayesian methods help here, because they are the only valid way of combining past information with new information.

> but this kind of subgroup analysis also runs into the multiple comparisons problem. If you test all kinds of subgroups (or just do a lot of any kind of tests), chances are good that you will run into false positives.

As I said, the issue is statistics-based "knowledge". What you just said continues down that path, so of course it does not solve the problem; it actually makes it worse, because now we throw more randomness at randomness and get random matches, but still not one bit more understanding.

Almost all knowledge is "statistics based." Anytime you are not 100% sure about something, your knowledge about that thing is inherently probabilistic. There is absolutely no way around it.

You can't get away from statistics. You can only replace bad statistics with good. If you ignore these statistical aspects of an experiment such as multiple comparisons, then your reasoning is even worse!

You can't just "decide" to not use "statistics based knowledge" any more than you can decide that your experiment is not subject to uncertainty or error. You could say that, but that doesn't make it true.

Bayesian statistics are much more intuitive, however. Bayes' theorem is actually the generalization of the contrapositive (if A implies B, then not B implies not A) to situations where we are not certain of A and B.
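A tiny numerical sketch of that claim (the probabilities are arbitrary illustrative values): when "A implies B" holds with certainty, Bayes' rule gives the exact contrapositive; when the implication is only probable, it gives a softened version of it.

```python
# P(A | not B) via Bayes' rule, from a prior P(A) and the two likelihoods.
def p_a_given_not_b(p_a, p_b_given_a, p_b_given_not_a):
    p_not_b = p_a * (1 - p_b_given_a) + (1 - p_a) * (1 - p_b_given_not_a)
    return p_a * (1 - p_b_given_a) / p_not_b

# Certain implication, P(B|A) = 1: observing not-B rules A out completely,
# which is exactly "not B implies not A".
print(p_a_given_not_b(0.3, 1.0, 0.5))   # 0.0

# Uncertain implication, P(B|A) = 0.9: observing not-B merely makes A
# less likely than the prior of 0.3.
print(p_a_given_not_b(0.3, 0.9, 0.5))
```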

Why do you re-interpret what I wrote instead of going with what I wrote? Why do you lecture me about things that I didn't write?

>As I said, the issue is statistics based "knowledge". What you just said just continues down that path, so of course it does not solve the problem, actually makes it worse because now we throw more randomness at randomness and get random matches - but still not one bit more understanding.

You seem to be saying that nothing is gained with statistics knowledge, and attempting to use better statistics is just throwing "randomness upon randomness." If this isn't true, then you are being unclear. All of your criticisms about statistics are very vague, and do not cite any specific problems.

I gain understanding from "statistics-based knowledge." If you do not, then that is a problem you should solve by reading more about these issues.

Absolutely, and we should have a lot of skepticism to conclusions from those branches of research as well.

On the other hand, priming is a real effect.

When I was in college (2004), the school I went to had an interdisciplinary major called Philosophy, Politics, and Economics (PPE). It had a "thematic concentration" called "Choices and Behavior" that was very popular. Many, many students majored in PPE, so I don't mean to comment on literally all of the people who chose it; however, amongst my circle of friends, the ones who chose "Choices and Behavior" did so because in their minds there was something cool about the idea of using psychology to manipulate people. I don't think they had anything nefarious in mind, but they were definitely attracted to the magical language used to describe the practical applications of the things they would learn. It actually felt somewhat similar to the way my software developer friends talk about machine learning today.

Nitpick here.... There seems to me to be a significant difference between "loss" in terms of something already tangibly owned and "loss" in terms of missing out on an opportunity.

Of course marketing schemes for "act now or lose out" won't work. I don't currently experience the opportunity and therefore losing out exerts no power over me. However, losing a mug when I already own the mug would demand a higher price from me. I would agree with Dr. Thaler that the inertia thing is a minor point about terminology.

Call it loss aversion or call it inertia. Marketing schemes that include the word "loss" in their pitch are not using the same strategy that behavioral economists are talking about here.

If loss aversion exists but it's not inertia, then you should expect to see "win aversion" as much as "loss aversion". Do you?

I'm not saying it isn't inertia. I'm saying that Thaler is right in saying that is a semantics issue. Call it what you want, the actual behavior is real.

Bingo, this criticism of behavioural economics is a straw man.

There's a lot of pop science and good marketing around behavioral science, but at its core I think it helps complete the picture of what economic research tries to deliver.

Prior to being introduced to behavioral economics, my exposure to economics was very quantitative; while this model is necessary, it's incomplete. A lot of economics seems to assume that the actors are equally rational beings, but in the real world that's just not the case. Behavioral economics seems to bring actual human experience into economics.

Go back to the 1980s or so and a lot of economists may have recognized that their assumptions didn't really hold in some cases, but the orthodoxy was still that the underlying theory was fundamentally economically rational in nature [in the sense of assuming, for example, that people maximize expected value]. What behavioral economics has done is start to provide intellectual foundations for observed behavior that isn't explained by traditional economic models.

Richard Thaler's Misbehaving is a good read on how this developed. I had Thaler as a professor for a couple of classes in the early 80s, and some of the insights from early behavioral economics were among the more useful things I learned in my MBA.

> A lot of economics seems to assume that the actors are equally rational beings but in the real world that's just not the case. Behavioral economics seems to bring actual human experience into economics.

This is a huge misconception. Economics doesn't assume that actors are perfectly rational actors, any more than physics assumes that interactions always take place inside a frictionless vacuum. It's just one model that's used as a starting point to understand mechanical interactions.

It's even stronger than that: the model you derive by assuming a room full of perfectly rational agents may well be accurate, even though in reality the agents are not rational. Or at least, it may do an excellent job of describing the aggregate behaviour, even if it does terribly at describing individual's behaviour.

Physics does this all the time, too. You can, indeed must, be wrong about all the microscopic details... but despite that, you can often get the correct macroscopic model.

The fundamental problem with this is that the economy is made up of intelligent people who can actively find ways to exploit flaws in the model, and that - because the model's assumptions make it hard to become filthy rich - they have a direct financial incentive to do so. So, in practice, the interesting parts of the economic system all rely heavily on breaking those nice elegant assumptions.

This is something like Milton Friedman's 'pool player analogy'. It doesn't matter if a pool player isn't calculating all the angles and frictions when he pots a ball, because it's as if he was. So as long as your model has predictive power, unrealistic assumptions don't matter.

The unfortunate part is that most models in economics have lousy predictive power.

>Economics doesn't assume that actors are perfectly rational actors

At the undergrad level, it often does. I had a friend who had just become an assistant professor in economics. He was teaching an undergrad class, and wanted to pose a few scenarios to them. While preparing his lecture notes, he called me up and gave me the scenarios and asked how I would behave. These were not "hard" or "wild" scenarios. Mostly day to day stuff.

I gave my answers. On several of them, he told me I was giving irrational answers. This was a problem for him, because if his students answered likewise, they wouldn't support the points he was trying to make in the lecture.

He and I had a similar outlook on life. So I asked him: "Would you behave differently from me?"

Him: "No"

Me: "So what is the value of the economic model you're teaching if even you would not behave the way the theory indicates you would?"

As an aside, and this always comes up when we talk about rationality and economics. The disconnect is often that basic economics courses have a narrow view of people's motives and desires. It is often simplified to "optimizing for money" or "optimizing for time". Rarely things like "optimizing for mental stress". Pretty much everything someone does is for some perceived gain. That gain is often not what economists teach it is.

> At the undergrad level, it often does.

Even introductory economics classes don't do this. One anecdote about a friend of yours who's an inexperienced instructor notwithstanding, if you look at any of the most widely-used and reputed introductory textbooks for undergraduates, you'll see discussion of rationality and the ways in which those assumptions can be relaxed.

Just as people study Newtonian mechanics under idealized circumstances before they learn about friction and Van der Waals forces, people learn about the outcomes of rational behavior first, but it's ludicrous to judge an entire field by an outsider's perception of the topics covered during the first few weeks of an introductory course.

I was once asked in an economics lecture what I would do in a particular situation. So I told the professor how I would try to proceed, only to be told that I was wrong--I would instead do this other thing that hadn't occurred to me at all.

Economics is often right about that kind of thing. You think you will not snack as much if you put your M&Ms in a faraway place, but you end up doing the incentive-predicted thing and eating more M&M's each time you go to get some.

A similar observation on irrational choice vs. optimization can be borrowed from GPS navigation.

Most of the time GPS offers an optimized (shortest or fastest) route to a destination. Yet I may often prefer a familiar route, or an "easy" one (e.g. long straight runs), or a "scenic" one ('cause it's that time of year), or some other irrational choice.

Then I would just absorb any resulting inefficiency as a fact of life (chalk it up in the debit column, if at all).

So ultimately it's kind of optimizing for personal satisfaction (at the moment). Sticking to a rational discipline does bring satisfaction too.

>or some other irrational choice

None of those is an irrational choice.

Sure, it makes sense to the actor himself.

But to an external observer (say, someone going to the same destination), such a choice does not follow common reasoning (arriving by the shortest route).

The gains of such a choice are intangible (indeed, personal satisfaction is subjective). At the same time, the "loss" is very specific and measurable (extra time, fuel, etc.).

Thus, by a common measure, such a choice could be seen as irrational, especially when a rationally optimized choice is equally available.

Rational expectations models are a huge part of modern macro though.

Even when models in economics don't assume rational behaviour, they tend to lean pretty heavily on assumptions of optimising behaviour - which, for me, isn't much more convincing.

"Why Is Behavioral Economics So Popular?"

Probably because people got sick of a guy with a formula lecturing them about how humans ought to behave, as opposed to developing functional models of how humans do behave.


The Nash Equilibrium of this game is to offer the other person $0...unless you incorporate the idea that humans may care about more than pure monetary payoffs in their mental gymnastics.

Kinda like when that friend from undergrad studying business told you that paying anything more than your minimum tax bill was "irrational".

Behavioral economics is popular because it's a better model. It's clear that any model of human behavior and decision making will necessarily be very complicated. It doesn't make any sense to decide, right at the start, that the only factor to be considered is financial expected value. Any model based on such an assumption will be severely flawed, yet this is the view of mainstream economics, and that explains why behavioral economics is a better model.

Merlin Mann coined, or at least heavily uses the phrase "turns out journalism" which I really like. We like to have something that subverts our expectations. At some level we want to be told that the advice we don't like isn't actually useful.

This very website was built on a foundation of "turns out", one of Paul Graham's favorite rhetorical crutches.


Behavioral Economics has only gotten "so popular" in non-academic circles. This is mostly due to the quasi-science of economics where we try to mathematically model the world. These "cute Freakonomics" type studies, while publishable in some outlets, haven't even come close to displacing traditional microeconomic foundations in mainstream economics.

I'm an Economics PhD and former professor and most of the research isn't taken very seriously. Everyone acknowledges that people don't behave "rationally", but no one yet has been able to figure out how to build these behavioral assumptions into a working model that is actionable.

Aren't "loot boxes" and casinos and the free-to-play game industry a great example of behavioral economics in action?

That some businesses have figured out how to exploit people's irrationality does not mean that economists have a good model for how actual people behave, and how it differs from rational behaviour.

No one takes behavioral economics seriously? Well, maybe stodgy old-school economists who are more mathematician than social and behavioral scientist. But there are lots of serious scientists using ideas from behavioral economics to study the brain and getting published in outlets like Nature and Nature Neuroscience. psh!

What are the expectations for such actionable economic models?

To improve predictions, or to better manage resources in human societies?

Indirectly, it's a question about the optimization goal that the mainstream models are trying to achieve.

It's classic pop psychology (which we've always loved) mixed in with some data points so it's like crack for the 'modern' bourgeois. So it's like 19th/20th century political theory minus the ideology, plus the 'science' ... the ultimate 'educated' parlour room fodder!

Because it's interesting according to this definition: https://www.sfu.ca/~palys/interest.htm . Also the article mentions the mug example (selling a mug for more if you have one than you'd be willing to buy it for) as being a classic behavioral example of loss aversion. I actually thought it was an example of the endowment effect according to behavioral economists.

Because almost anything that bothers being at all empirical is better than the nonsense we get from classical economics.

Now we just need a revolution in economics comparable to the cognitive psychology movement that got that field beyond Skinnerian behaviorism.

I know this paper well (and the larger point being made in the paper; I have seen Dr. Gal present on it in seminars). I think the point he is trying to make is that what we really need for understanding of human behavior is good psychological theory of how humans operate. In many cases (loss aversion being one of them, per Dr. Gal), behavioral economics describes the data, but does not provide a deeper analysis of why the data are the way they are (that is, why the humans being studied acted as they did). Of course not true of all behavioral economics, but some of it.

I'd like to add that the data come from experiments, which are simplified versions of real-life problems. Oftentimes, "irrational" behavior depends on the way the simplification is made and has no chance of generalising outside the laboratory, because in real life the signal is more complex and informative. Yet a paper will be published stating "people behave irrationally" (in the lab, with certain peculiar instructions).

It's easy to understand and relate to. To even comprehend contemporary research in most scientific disciplines, you need a seriously strong understanding of math or chemistry--a level so high that the field cannot be "popular". Behavioral economics requires only remedial algebra, statistics, and literacy, and the topics it addresses are usually familiar from everyday life.

The subtitle is entertaining: "The recent vogue for this academic field is in part a triumph of marketing."

and in the article: "It reflects the widespread perception that behavioral economics combines the cleverness and fun of pop psychology with the rigor and relevance of economics."

The author is using behavioral economics to argue against behavioral economics.

Agreed - I'm confused by this op-ed. A professor of marketing is upset that marketing is not getting enough credit for the ideas that are also present in behavioral economics ('fame of behavioral econ is a triumph of marketing' har har).

The main purpose seems to be to crap on Thaler for dismissing his critique of loss aversion instead of embracing his hypothesis on the buy-side. This neither discredits behavioral economics, nor makes a clear case for how academe related to marketing would better humanity or its understanding.

Because when you can't win by building a better mouse-trap, you try to win by tricking the mice.

If your "trick" does not actually work, you're wasting time and resources.

Most of behavioral economics is not seriously tested, because the endpoint is not "tricking the mouse" but positioning yourself as a trickster-educator, a position that cannot be blamed.

One of the classic cons.

> If your "trick" does not actually work, you're wasting time and resources.

Many of the popularly-cited experiments both in behavioral economics and in psychology involve misleading or deceiving the experimental subjects, and many more simply conceal their objectives. The assumption that the suckers never caught on and the results should be interpreted accordingly is seldom questioned. In some cases, the assumption is verified by asking the subjects about their motivations post-mortem, ignoring the long-standing axiom of applied behavioral economics (business) that there are two reasons anyone does anything, the one they will tell you and the real one.

The null hypotheses in these experiments ought to be (1) the subjects know or can figure out what the experimenters are looking for, and (2) the subjects are there because they want to help the experimenters.

Consider the subject who is paid a small sum to participate and interacts with either a researcher who must publish or perish, or one who will put another batch of subjects through a similar rigmarole next year if they cannot get their dissertation accepted this year. What does motivational psychology say about the behavior of such a subject? Is such a search for truth better than a congressional committee?

Ideally you try different tricks in different areas as an experiment, and merge and adjust the best performers. "It's not random marketing BS, it's a genetic algorithm."

I tend to think this is perfectly correlated with the rise of Big Data. Just as banks scored your likelihood to repay a loan based on your past behavior, marketeers want to score your economic value based on the same and more (transaction history + whatever personal data history assumed to be relevant). I suppose this doesn't follow the real definition of behavioral economics but it sure seems related - like let's hire a data science expert to write economic models on consumer behavior(?) In short, this field is popular because it gives power to the internet Giants.

Isn't that regular economics? Big Data marketers don't care about understanding psychology, they just care about what generates conversions. "Big data" means you make decisions automatically from data without forming theories, since you have plenty of signal.

Micro-economics. I suppose the point is that if you aren't applying expertise in psychology then it's not "behavioral"; however, big data is essentially the push to model your behavior to a greater level of precision (nano-economics?). I don't see how the signal-to-noise ratio fits in at all.

Maybe because it has uncharted economic value, especially with access to large numbers of people via the internet; at that scale, taking advantage of small behaviors can translate into much value.

Great point. Javascript behavioral tracking and shadow profiles gave advertisers and various experts (psychologists, sociologists etc) both the incentive and the unprecedented ability to perform experiments on hundreds of thousands of clueless people. All that in real time, in their "natural" environment, with real stakes. Of course we can model this type of economics better than we did 10-20 years ago.

"In order to appeal to other economists, behavioral economists are too often concerned with describing how human behavior deviates from the assumptions of standard economic models, rather than with understanding why people behave the way they do."

That is...actually, a good point, and exactly describes something I'd noticed.

One of the problems with economics is the assumption of the "perfectly rational actor", which sometimes leads to economics' descriptive-vs.-prescriptive issue: someone will create a massively complicated scheme that maximizes some positive value under a certain set of assumptions and then assume that people actually behave like that.

Some of what I've read, including by Richard Thaler, who I otherwise rather like, buys into that scenario, saying "no, this is what they do; they behave irrationally". Sure, people aren't by any means perfectly rational, but it's not irrational to not perform a complicated maneuver that's only useful in a specific, odd, circumstance.

There's another interesting bit from the article:

"[In the class mug experiment showing "loss aversion,"] the participants may not have had a clearly defined idea of what the mug was worth to them. If that was the case, there was a range of prices for the mug ($4 to $6) that left the participants disinclined to either buy or sell it, and therefore mug owners and non-owners maintained the status quo out of inertia. Only a relatively high price ($7 and up) offered a meaningful incentive for an owner to bother parting with the mug; correspondingly, only a relatively low price ($3 or below) offered a meaningful incentive for a non-owner to bother acquiring the mug.

"In experiments of our own, we were able to tease apart these two alternatives, and we found that the evidence was more consistent with the “inertia” explanation. Dr. Thaler has dismissed our argument as a “minor point about terminology,” since the deviant behaviors attributed to loss aversion occur regardless of the cause. But a different account for why a behavior occurs is not a minor terminological difference; it is a major explanatory difference. Only if we understand why a behavior occurs can we create generalizable knowledge, the goal of science."

"The deviant behaviors attributed to loss aversion?" Not only is there nothing deviant about the behavior, it's not in any sense "loss aversion". In fact, it's perfectly rational, given limited rationality resources, not to engage the whole engine in an otherwise minor scenario.

The author looks at some rather trivial aspects of behavioural economics or as he puts it

>In this respect, behavioural economics can be thought of as endorsing the outsize benefits of psychological “tricks,”...

But it also covers bubbles and crashes, which have major financial effects in the billions/trillions, with millions of people having their jobs and housing affected. It would seem sensible to take that stuff seriously.

Insightful essay. The nudges he describes were elaborated in great detail in the book called Nudge by Richard Thaler.

In the book, Thaler describes nudges such as placing the fresh fruit in the school lunch line in a more easily reachable location than the junk food. The idea is that kids will be more inclined to choose a piece of fresh fruit if it's in easy reach but the chocolate bar requires bending down, etc.

While such nudges may sometimes be measurably effective, we must also realize that the idea of socially beneficial nudges evokes a sort of utopian paternalism.

The idea behind Nudge paints the picture that there is a light-weight, unobtrusive version of central planning (or central nudging) that can achieve some of the utopian outcomes that planners wish for, but which is less encroaching upon individual freedom.

At what point do situational nudges start to feel like social nudges? What if the chubby kid who wants the chocolate has to humiliate himself by reaching far overhead and fishing around blindly in an out-of-reach bin to find the chocolate while everyone else waits impatiently?

When does spending hours to opt out of helpful services become an inappropriate encumbrance?

Google just launched sentence completion in Gmail. Combine this with nudges: "fuck you" won't be corrected to "duck you"; instead, the typist will see an autocompleted "hey, I'm feeling really frustrated about what you said right now", ready to accept with the tap of a single button.

Nudges are meant as a mechanism of social control. Gal points out wisely that if we rush to judgment about the why of behaviors, then our behavioral economic remedies (nudges, etc.) might be terribly wrongheaded.

"Nudge" is on the Center for Homeland Defense and Security reading list:


Why has Economics been so popular? What has Economics' mathematically-validated storytelling gotten wrong and how has that impacted society?

Interestingly, if Scott Alexander is right, behavioral economics stopped being en vogue a few years ago. From Slate Star Codex's review of The Black Swan [1]:

> All of them continue to do great object-level work in their respective fields, but it seems like the “moment” for books about rationality came and passed around 2010. Maybe it’s because the relevant science has slowed down – who is doing Kahneman-level work anymore? Maybe it’s because people spent about eight years seeing if knowing about cognitive biases made them more successful at anything, noticed it didn’t, and stopped caring.

[1] http://slatestarcodex.com/2018/09/19/book-review-the-black-s...

We need less behavioral economics and more get off your lazy arse and vote economics. 27% voting rate for those aged 18-34.

Why wouldn't it? It's a more accurate model than what came before it, the rational actor model.

To support a claim that an economic actor is not rational you would first need to assert that you are more familiar than they are with their personal preferences and their understanding of the available options, which seems to me the height of arrogance. Which do you suppose is more likely: that people deliberately choose to act in ways that they know are not in their own best interest, as they define it, or that they are acting rationally but their idea of what is in their best interest differs from your own?

A key element of the rational actor model is that there is no way to know anyone else's preferences aside from observing the choices they make. Even asking them directly is not considered an authoritative source, since preferences can change at any time and people don't always know just what they would choose until they're actually confronted with the choice.

It helps that when you read Dan Ariely's stuff it confirms every kind of thought you had on the back of your head about how people 'work'.

I'm aware that doesn't mean it's correct per se, in a hard-science sense, but it does help popularize the field when every person who reads Predictably Irrational recommends it to other people.

Perhaps it’s facile to say that it is to economic understanding what the placebo is to medicine.

I would also ask: why are "useless" majors so popular? Ostentatious elite uselessness?
