Which itself is a part of a series of articles in JCP debating the issue: https://onlinelibrary.wiley.com/doi/abs/10.1002/jcpy.1054
The definitive statement made by this article's headline isn't really supported by the evidence presented in the papers. Rather, the state of affairs seems to be that "loss aversion" has been the victim of incessant overgeneralisation. It's a very simple hypothesis about human behaviour that plays nicely into a lot of interesting (and therefore publishable) narratives. This has led people to blindly accept the general hypothesis of loss aversion without enough critical investigation of its manifestation. The authors don't really refute "loss aversion" (i.e. they don't present an alternative theory to explain the papers that purport to demonstrate "loss aversion"), but rather they refute the pop-psychology belief that it's a general principle of human behaviour.
“You have a $10 credit, it expires in 2 days” would test loss aversion.
That consumers behave rationally in the face of price rises is an interesting finding. But it’s a far cry from testing loss aversion.
If it really is a general cognitive bias, it will show up as a difference from expected statistics.
Imagine a held asset that has an even chance of going up or down. You'd expect to see about half of people sell it and half hold it. A cognitive bias would alter that ratio. If 75% of people sold it, and only 25% held it (despite even odds), you could say that there appears to be a bias at work.
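To make that concrete: a lopsided split like 75/25 can be checked against even odds with an exact binomial test. A minimal sketch, assuming (hypothetically) 100 participants of whom 75 sold:

```python
from math import comb

# Hypothetical data: 100 participants, 75 sold despite even odds.
n, k = 100, 75

# Two-sided exact binomial test against p = 0.5: the probability of a
# split at least this lopsided (in either direction) by chance alone.
p_value = sum(
    comb(n, i) for i in range(n + 1) if abs(i - n / 2) >= abs(k - n / 2)
) / 2 ** n

print(p_value)  # far below 0.05: a 75/25 split would indeed indicate a bias
```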
A great example of cognitive bias at work in the real world is the Monty Hall 3 door riddle. Most people get this wrong even though the math is not hard.
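For what it's worth, the Monty Hall bias is easy to confirm by simulation; a quick Monte Carlo sketch:

```python
import random

# Monte Carlo check of the Monty Hall riddle: switching wins about
# two thirds of the time, staying only about one third.
def play(switch, rng):
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a door that is neither the player's pick nor the prize.
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the one remaining closed door.
        pick = [d for d in doors if d != pick and d != opened][0]
    return pick == prize

rng = random.Random(0)
trials = 100_000
wins = sum(play(True, rng) for _ in range(trials))
print(wins / trials)  # ≈ 0.667: switching doubles your odds versus staying
```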
But simply avoiding a predicted loss is not "loss aversion" as a cognitive bias. It's not even a bias at all; it's rational to avoid loss.
Is there a meaningful difference when these are used as psychological terms of art (as opposed to pop psych just-so explanations)?
> Loss aversion has been represented as a fundamental principle. Loss aversion is not understood as the idea that losses can or sometimes loom larger than gains, but that losses inherently, perhaps inescapably, outweigh gains. For example, Kahneman, Knetsch, and Thaler (1990, p. 1326) describe loss aversion as "the generalization that losses are weighted substantially more than objectively commensurate gains." In a similar fashion, other researchers do not qualify the idea of loss aversion; Tversky and Kahneman (1986, p. S255) state that "the response to losses is more extreme than the response to gains;" and Kahneman and Tversky (1984, p. 342) state "the value function is … considerably steeper for losses than for gains."
The authors are refuting "loss aversion" as Kahneman et al. describe it.
> (i.e. they don't present an alternative theory to explain the papers that purport to demonstrate "loss aversion")
Why do they need to? The paper isn't about trying to explain when losses or gains are most impactful; the paper is about whether or not there's a clear tendency.
The basic principle behind loss aversion is simple.
What's the primary motive behind an action: running away or running towards? Prevention or gain?
For instance, yesterday an article about American child care was on HN.
American parents are acting primarily to PREVENT injury, discomfort or death of their children. That's action motivated by loss aversion.
Japanese and Maya parents still want safety for their children, but their kids' independence is a primary motivator for their actions. In other words, gain.
I think the author is confused about something. I want more money and I don't want to lose the money I have. Both feelings aren't mutually exclusive. However, at the point of decision I could be swayed more by greed or by fear.
If a site, seller or investment is shady, fear wins. I'll protect myself. If not, greed or gain could win in that instance.
I could speed down towards a party one moment. And a near miss could make me reconsider and slow down. Both modes occurred on the same journey. No grammar by some clickbaity author would change that.
He'd found they worked much harder to retain the note once it was in their hands, than if he offered a bonus of $100 at the end of the day.
- Loss aversion
- Trust (i.e. the manager believes in you): When we hear the word "bonus" we often think "that's something that happens 10% of the days". However, when the manager is giving you the money at the beginning of the day they're saying "I think you can do this today. I might as well give it you already." The manager very clearly shows that they believe in you, and they probably know what they're doing.
- The prize is visible: We know from many examples that humans become more motivated when they can physically see their prize. One part of this trick is that you have the note in your pocket. Maybe you even take it out a few times during the day.
There are a few ways to test which factor is most important. For instance, you would expect the trust factor to fade over time, because you'll realize that the manager gives you the note regardless of their faith that you can make it (there's nothing special about "this day" or "this employee"). You could also replace the $100 note with a more neutral coupon that says "$100 bonus". This makes the prize less visible, but we should still value it at $100. Or maybe there's a checkbox on a sheet inside the office which says "Tick off if bonus not reached". If the effect goes away, then the visibility factor is stronger than the loss-aversion factor.
This is my main beef with the pop culture around "loss aversion" (and other psychological terms): there are so many interesting things to discuss around it, but we so badly want to combine everything into one simple buzzword.
Did you read the paper? It's not a paper that "brings new information to the table", it's a paper which presents recent experiments and tries to show that there is little scientific evidence of loss aversion.
> The basic principle behind loss aversion is simple.
Huh? I don't understand what you're saying. You say that "loss aversion" is simple, but then you give examples of losses not being universally more impactful than gains? What you're describing here is exactly the point of the authors: both modes are important.
> … swayed more by greed or by fear.
And you should be aware that "loss aversion" is very careful not to talk about the psychological process behind it. Loss aversion is not about greed, fear, or any feeling and/or instinct. Loss aversion is a measurable effect. None of the papers that claim that loss aversion is a general principle claim that "greed is stronger than love" or anything similar to that. In fact, they are very "chicken" and just shrug it away.
I have no skin in this one but I would like to call this out: these comments make an argument combative. It pushes people up a tree and makes it hard to focus on the facts. Imagine user vezycash actually was swayed by your argument; how easy would it be for them to say, hey, you’re right? Pretty hard after all those comments, because it ties in their pride with their viewpoints and makes changing their point of view humiliating rather than enlightening. The conversation is now a battle, and admitting fault is losing face.
I’m calling this out now but by no means is it specific to you; it happens all the time. My request to anyone here is: please leave all those phrases out. “You should...”, “did you even...” etc. The argument works just as well without them. It makes it much easier for someone to say, hey, I guess you’re right! And isn’t that what we all want, in the end? ;)
There is something to be said about "Did you read the paper?" though. We have here an article where an author has published a rather large article (59 pages) and done a substantial amount of research (quoting over 80 other published papers). I don't expect everyone to read all of that, but I wish people were more upfront about whether they're talking generally about the topic or discussing the actual story.
Like, I honestly wonder "Did you read the paper?" not because I expect everyone to read the paper, but because it means we can have a more constructive discussion. If you haven't read the paper and are confused about what the author means, then I can try to find quotations that better explain the author's opinion. Or maybe we can discuss the general topic (ignoring the story).
Take a good common example of loss aversion - admitting being wrong. Why do people find it difficult to admit that they are wrong?
What's at stake here? Reputation, respect, pride, even money.
100 scientists vs Einstein is a classic example of this. Pointless wars have been fought because someone wouldn't admit being wrong. The Iraq and Vietnam wars are good examples.
Your reaction to this issue is another.
Limiting loss aversion to just economic behavior betrays lack of understanding of the topic.
The loss aversion hypothesis is the hypothesis that, given the choice between either of the following two scenarios:
- Having an object x and then risking losing it.
- Being offered an object x but risking not getting it.
the aversion to the possible loss in the first scenario is stronger than the attraction to the possible gain in the second. Two further claims are attached to it:
- This is a universal motivator, which means it must explain "economic behavior" (which you mention) as well as anything else. The fact that -- as the article says -- people prefer *keeping* a stock which is just as likely to lose in value as to gain, is a *perfect* example to illustrate that it is *not* a universal motivator.
- That it is not rational. There are cases where losing something, like for instance money, is *truly* more damaging than gaining the equivalent amount of money. For example, if I lost $100,000 it would be much more devastating than if I gained $100,000 -- in this scenario, it's not a psychological *bias* but in fact a completely rational belief. This example is in the article. You can not use examples like this one to argue in favour of "loss aversion", because there would be no evidence of an irrational bias.
Ego. People primarily lie to themselves, in order to retain the coherent (constructed) reality/continuity of their life. (c.f. cognitive dissonance)
Is there a term for this kind of logical fallacy? It’s almost an ad hominem argument against an entire group.
If we're looking for a general logical fallacy, it might be something like, "Using the mere fact of theoretical possibility as a way to justify unlikely beliefs, or as a counter-argument to strong evidence." I'd love to know if there's a term for that. It comes up everywhere.
> And people are not particularly likely to sell a stock they believe has even odds of going up or down in price (in fact, in one study I performed, over 80 percent of participants said they would hold on to it).
He refuted himself right there, in the article.
Actual investor behavior is to sell winning investments, and hold on to losing investments until losses triple. And then sell at a significant loss.
Recent Tesla short sellers come to mind.
Yeah, it would be terrible if you read through 59 pages of well-cited, well-explained text and tried to understand what the author is saying. Might as well judge everything from one sentence in an online article written for popular audiences.
> He refuted himself right there, in the article.
That example is showing an example of Status Quo Bias that is completely orthogonal to losses/gains: People prefer inactivity to activity. If you construct an experiment where doing nothing constitutes the "loss" (e.g. keeping an item) and doing something is the "gain" (e.g. obtaining a new item) then you would expect people to prefer the first choice. For loss aversion to be a general principle you need to decouple it from the status quo bias.
"People do not report their favorite sports team losing a game will be more impactful than their favorite sports team winning a game." Same.
And there are other factors at play. Often the thrill is the dopamine release when you make these decisions — that’s the pain. If you’re intensely interested, losing $500 in Monopoly or scoring a run in baseball may be felt profoundly.
It all depends on context, not the means by which we measure an outcome.
Could we settle the argument by reading the definition of loss aversion to 10 people and asking each person whether the author's experiment measures loss aversion?
Is this me being all Dunning Kruger? Is it my positivist bias obscuring my vision? Have I misunderstood the nature of the scientific method or missed some major aspect of practical epistemology? I sincerely hope so as the alternative explanation regarding the nature and visibility of the emperor's couture is rather upsetting.
I do genuinely think I am probably - at least partially - incorrect on this. But I would like some help in shaking my sense of unease.
So, if you have $200, getting $100 is one thing, but losing $100 is way worse, since log(200)-log(100)=0.30 while log(300)-log(200)=0.18. A potential loss of $100 must be rewarded by a potential gain of $200 to "feel" worthwhile if you already have $200, since log(400)-log(200)=log(200)-log(100).
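A minimal sketch of that arithmetic, using base-10 log utility with the numbers from the comment above (not from the article):

```python
import math

# Log utility of wealth (base 10, as in the comment's arithmetic).
def u(wealth):
    return math.log10(wealth)

drop = u(200) - u(100)       # utility lost falling from $200 to $100 (0.30)
rise_100 = u(300) - u(200)   # utility gained from +$100 (0.18): smaller than the drop
rise_200 = u(400) - u(200)   # utility gained from +$200: exactly offsets the drop

print(drop, rise_100, rise_200)
```

Under this curve, an apparent 2:1 "loss aversion ratio" falls out of purely rational diminishing marginal utility, with no psychological bias needed.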
"What is the smallest gain that I need to balance an equal chance to lose $100? For many people the answer is about $200, twice as much as the loss. The "loss aversion ratio" has been estimated in several experiments and is usually in the range of 1.5 to 2.5. This is an average, of course; some people are much more loss averse than others. Professional risk takers in the financial markets are more tolerant of losses, probably because they do not respond emotionally to every fluctuation. When participants in an experiment were instructed to "think like a trader," they became less loss averse and their emotional reaction to losses (measured by a physiological index of emotional arousal) was sharply reduced. [...]"
Or more plausibly, because they're richer, and the utility they stand to lose is smaller.
Let’s take the example from the comment above. A car salesman gets $100 in the morning and has to give it back in the evening if he doesn’t sell enough cars. Compare that to just handing out $100 if he’s successful in the evening. From an expected utility view both of these arrangements are equivalent.
Loss aversion argues that they’re not, because the salesman’s reference point for loss/gain shifts after obtaining ownership of the $100.
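As a quick sketch (hypothetical numbers: a $100 bonus and a 50% chance of selling enough cars), the two arrangements are indeed equivalent in plain expected value; only the reference point differs:

```python
# Probability the salesman hits the target (hypothetical).
p = 0.5

# Framing A: $100 handed over in the morning, returned on failure.
ev_upfront = p * 100 + (1 - p) * 0

# Framing B: $100 paid out in the evening only on success.
ev_end_of_day = p * 100

print(ev_upfront, ev_end_of_day)  # identical expected values
```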
So here is an interesting thought experiment. Suppose you take a person with some appreciable intelligence (at least average) but no particular knowledge about a certain topic. In this instance, we'll let that topic be psychology.
Now we present this person with an unfortunate dilemma. For a particular hypothesis, they observe a significant amount of peer reviewed literature asserting empirical evidence in the affirmative. They don't have any real familiarity with any individual papers, but they can understand that there's an established consensus.
On the other hand, they are given an article like this one, which mounts a critical refutation of all the established literature. Furthermore, this lone paper is presented to our stalwart examiner amidst the zeitgeist of a reproducibility crisis. Thus we have a mountain of peer-reviewed but indigestible evidence on one side, and one readily digestible paper on the other which specifically rebuts the mountain of evidence.
The fundamental dilemma is this: how should this person examine the available evidence to maximize their chances of coming to the correct conclusion? Should they abstain from trying to discern the truth of the matter, and strike out any opinion they could have as unqualified? Should they read through the most comprehensive surveys of available evidence to come to a full understanding? Should they take the new, contrarian paper at face value?
There are a few dimensions here which (from my view) make the dilemma nearly intractable unless you 1) abstain from an opinion, or 2) become a subject matter expert. The cloud of the reproducibility crisis is a first dimension of uncertainty: not only does it muddy the waters for existing research, but we also have to be careful not to take contradictory research at face value simply because it's contrarian. For another, we have to figure out how to weigh the reliability of evidence against our own time. It's tempting to discount the evidence of existing research when an especially compelling critique is released, because we can more easily read it and follow its arguments.
It seems like this is enormously difficult all around. How do we rate the correctness of differing amounts of conflicting literature published under academic uncertainty?
> There are a few dimensions here which (from my view) make the dilemma nearly intractable unless you 1) abstain from an opinion
I think the majority of the time, you should abstain. I mean, you can certainly debate things, it's fun to do and improves your thinking. But you should be sure of very few "controversial" things. You should be very ready to change your mind about most things. There is value in just saying "I'm not sure" about most things.
How do I define controversial? Well yeah this becomes a circular problem very quickly. As others have said, practically speaking, you usually don't really need to use the latest social science research for practical purposes. Sometimes you do - but hopefully then you are an expert.
However, sometimes there are things you need to decide. E.g., what diet maximizes your health/fitness goals? A classic case where there is a huge lack of expert consensus, etc. In these situations, I usually try to defer to who sounds smartest in general to me, but keep a very open mind to the idea that I might be totally wrong.
Documentaries providing new narratives to cleanly refute the old ones - all the while promoting social awareness - seem to be very "in" these days.
You either have to follow consensus of the experts, or, to go contrary to consensus, you must understand the consensus well enough to be one of the experts.
Down any other road lies pop-sci nonsense. It's incredibly easy to be a wrong contrarian, when you don't actually understand what you are attacking.
In this case I'm happy to conclude that it's not clear if people do systematically make poor judgements on important issues due to an in-built "loss aversion" heuristic.
You don't hire a plumber to do a colonoscopy.
The reproducibility "crisis" is not a crisis; it is a fundamental limitation of the scientific method for things that depend on a great number of variables, many of which are unobservable and/or unknown.
Having read Larry Laudan recently however, I'm a big fan of his pragmatism which trumps the question a bit: just use whatever works.
In this case, we don't all need to be aware of loss aversion, and really I suspect barely anyone was using it in a practical sense. A pragmatist might say that it was only ever a theory, and this is why it thrived in largely theoretical exercises (or in the case of economics, in a context where the tangible consequences were extremely far removed from the application of the theory). But to me loss aversion still seems like an observation after the fact, though I did believe it at the time, which are the worst kind of observations ;)
In short, from where I stand there is no such thing as "correctness", only what has been successfully applied and in what context, or temporary applicability if you will. Any further interpretation is usually a case of extrapolating knowledge from a vastly incomplete picture. Being a psychology student, I look at its early history as a tragic example of why this is counterproductive. Some habits are hard to break though.
This is a special case however, in that we have to assume correctness because otherwise we'll all be much worse off, and the possible cost of reducing pollution are slim in comparison. But if anything I think this supports my point that correctness itself doesn't matter, only the material consequences do.
I assume you're not headed into religious debate territory but I don't imagine pragmatists focus much on metaphysical matters (I know I don't).
Two points on this approach: 1. My assessment lies on a gradient rather than a simple true/false. 2. I rarely read the source evidence - my assessment relies mostly on the assessment of others.
I don’t know the best diet for the general population. I know roughly what works for me.
I don’t know how microwaves work. I know how my microwave oven works.
However, this is just a couple of researchers' opinions, and tomorrow a response article may come out saying that loss aversion IS supported by evidence.
To me, this is the sign of a healthy science.
Also they claim the body of evidence doesn’t find support for loss aversion. But a meta analysis on the topic does in fact find support for loss aversion (albeit the magnitude is probably smaller than originally thought):
How naïve is that? We're not interested in what people said they would do. We want to know what they did!
His example of "Messages that frame an appeal in terms of a loss (eg, “you will lose out by not buying our product”) are no more persuasive than messages that frame an appeal in terms of a gain (eg, “you will gain by buying our product”)" is extremely weak IMO. In both cases the consumer never had the product to begin with. The author is trying to argue FOMO is the same thing as loss aversion, when on the contrary, FOMO is really greed.
- The fear of realizing a loss
- The unrealistic expectation of upside
Also, you'd be hard pressed to find a stock IRL that you could predict in advance as "50/50" and see what participants do, wouldn't you?
Besides, the point of that particular study isn't a stock, it's that they have a 50-50 chance of losing money they were just given. The stock is just a placeholder for anything.
A wonderful, concise summary of some of the human obstacles to the progress of ideas, theories, stories and models about people and the world, scientific in nature or not.
Doesn't the fact that we keep growing and progressing as a species, even though the getting of these gains exposes us to risks (not just in the getting there, but once we have arrived: the "curse/paradox of development"), suggest that loss aversion is not a majority principle of human behavior?
But... maybe...gain attraction / gain pursuit is a general principle...perhaps of all life, not just ours? Maybe that's one of the characteristics that tautologically has to define life. Life exists in an uncertain environment and couldn't continue to do so without taking risks / experimenting into that unknown.
For fans of winning teams (Warriors, Patriots, etc.) I'm not sure if this is true. They're expected to win, so watching them win can feel like nervous relief or ambivalence but watching them lose can feel like disappointment.
I think we see loss aversion in soccer too where teams will often play "not to lose" and "park the bus" rather than risk pushing forward and trying to win the game but being vulnerable to counter attacks.
In video games, however, most players seem to favor aggressive styles of play, and playing with loss aversion is likely an instant loss.
I'm sure as with most social behavior loss aversion varies and depends on a complex set of conditions. I'm sure in some contexts it's certainly at play though.
People are randomly given one of two problems:
Problem A: In addition to whatever you own, you have been given $1,000. You are now asked to choose one of these options: 50% chance to win $1,000 OR get $500 for sure.
Problem B: In addition to whatever you own, you have been given $2,000. You are now asked to choose one of these options: 50% chance to lose $1,000 OR lose $500 for sure.
According to Kahneman, many more people will take the gamble when it is framed as potential win (Problem A) rather than as a potential loss (Problem B).
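What makes the framing effect striking is that the two problems are identical in final-wealth terms; a quick check using the numbers from the problems above:

```python
# Final-wealth outcomes of Kahneman's two framings (endowment + option).

# Problem A: endowed $1,000; gamble = 50% chance to win $1,000, or sure $500.
a_gamble = sorted([1000 + 1000, 1000 + 0])
a_sure = 1000 + 500

# Problem B: endowed $2,000; gamble = 50% chance to lose $1,000, or sure loss of $500.
b_gamble = sorted([2000 - 1000, 2000 - 0])
b_sure = 2000 - 500

print(a_gamble, b_gamble, a_sure, b_sure)  # same outcomes, different framing
```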
So is Gal saying this experiment is not reproducible or that it doesn't demonstrate loss aversion?
It's easy to say that consumers and investors behave irrationally, it's harder to tease out hidden factors in a rational utility/loss function that may depend more on the expected value of one action.
Loss aversion is the “fact” that “buy now to save $10” is less motivating than “buy now to avoid a $10 surcharge”, and that “here’s $100; if the coin lands tails, you lose $50” is more distressing than “here’s $50; if the coin lands heads, you gain another $50”.
"A bird in hand is worth two in the bush" is a popular saying with its equivalent in almost every culture.
Diversification, which is studied, recommended, and practiced by almost every investor, CEO, child... is related to loss aversion.
There are many more real life examples of loss aversion.
You said nothing about the Ikea effect.
Why do people stay in abusive relationships with individuals & companies?
I've put in so much, can't back off now. Scammers know and use this to great effect. Once you've paid, you'll keep paying.
Why do investors rush to sell winning stocks but stick stubbornly to losing stocks or trades?
The network effect is powerful because of loss aversion. "All my contacts, friends, pictures are in..." so I can't switch.
LOSS AVERSION CHEAT SHEET.
Ask anyone for the reason behind an action. If the sentence begins with or is dominated by, "I don't want" or "I didn't want..." the action was motivated by loss aversion.
What's important is the primary motive behind an action.
In fact, strategy in military, business and soccer is divided into two - offensive and defensive.
Offensive strategies are primarily motivated by gain. Defensive - by turf protection, prevent loss of market, or prevent a goal.
Both strategies use similar, virtually the same tools. And like loss aversion the difference is simply motive.
If China learns that Iceland is planning an attack on them and decides to attack first, it's a defensive strategy.
Again, the key is motive. It doesn't matter what the action is, what matters is why it's done.
If I kill someone for the heck of it (gain), it's called murder. If instead, it's to protect myself (loss aversion) it's called self defense.
Limiting loss aversion to just financial behavior is a myopic view of the subject.
But also, you horribly simplified military and soccer strategies too. Fun fact: an army can choose an offensive strategy because defense would end in bigger losses. It may also go for a defensive strategy despite bigger losses, because of some other reason.
Lastly, no, not every kill to protect yourself is self defense. That is not how law and sentencing work.
>"The IKEA effect is a cognitive bias in which consumers place a disproportionately high value on products they partially created." //
Which is a weird turn of phrase, as almost all IKEA stuff is already fully created; you just fit it together. I guess they mean something you put effort into realising.
I'm not sure I agree. I think people prefer stuff they took part in producing (like kids helping with cooking their own tea), but I'm not sure we consider it higher value in an objective sense ... we often prefer things that we know to be of objectively lower value, as with sentimental attachment.
For example (fictional), I have an old Pentium CPU, it has zero (general) market value as a functioning object; it has some cultural value as an historic artefact; it may be highly valuable to some geek somewhere, eg to run equipment they might otherwise not be able to run; it has sentimental value to me as my first CPU. My preference is unrelated to the intrinsic value of the item.
Loss aversion, and risk aversion as well, are themselves economic pseudoscience, based on the psychology of compulsive habits.
This article doesn’t really depict the research paper though. The author isn’t saying loss aversion isn’t real, just that it’s been over popularized in a way that isn’t founded in evidence. It also doesn’t address the other economic pseudoscience on the field, so you could quite literally write the opposite article as well.
Ironically, it was likely the media representation of “loss aversion” that broke the term to begin with.
I read D. Kahneman's book Thinking Fast and Slow a number of years back and it did present some pretty clear looking graphs demonstrating loss aversion of 2:1 iirc. I've since lost my copy due to a friend's "borrowing". ;) Certain other elements of his book have come into question, including priming. I'm eagerly waiting to see how the cookie crumbles here.
Also, most people rarely make rationally beneficial decisions, thanks to a myriad of other fears, prejudices, laziness, negative cognitive distortions, distractions, and inadequate future planning. Worse, most people are sold on hype, feelings, gossip, peer testimonials, and appearance when they can’t be bothered to do due diligence.
OTOH, of gain avoidance/scarcity: you can put a “free” sign on a decent household good, set it out, and it won’t move... put a price tag of $100 on it and watch it get “stolen.”
tl;dr: The human condition is messy and imperfect... there’s no firm solution except thorough qualitative hypothesis testing via experiences.