
Loss aversion is not what we think it is - abeinstein
http://www.basilhalperin.com/blog/2015/12/loss-aversion-is-not-what-you-think-it-is/
======
razzaj
In his book Predictably Irrational, Dan Ariely described loss aversion by means
of a set of experiments that go something like this:

A- Take a person and promise her a substantial reward for completing a set
of tasks. Measure her stress level during task execution.

B- Take a person and give her the substantial reward up-front, but take back
part of the reward for each task she fails. Measure her stress level during
task execution.

Ariely's experiments showed stress was significantly higher among subjects in
scenario B. So much so that one of the subjects took the money and ran,
escaping through a window. Ariely attributed the difference in stress levels
to "aversion to loss".

There is a difference between the scenario depicted by these experiments and
the cases in the OP's article. In the article, the author compares DMU as
wealth changes from, say, 10K->2K vs. 10K->18K, and argues that the latter has
less impact than the former. Ariely's experiments, by contrast, effectively
compare 2K->10K vs. 10K->2K and show that even though the delta is the same
and the endpoints are the same, subjects still experienced different levels of
emotional distress, due to a visceral aversion to loss.
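
To make the distinction concrete, here is a quick sketch using the article's
log-utility model (the Python is my own illustration, not the article's):

    import math

    u = math.log  # the article's toy DMU model: utility = ln(wealth)

    # Same starting point, different directions (the article's comparison):
    print(u(18_000) - u(10_000))  # +0.588: gain of 8K from 10K
    print(u(2_000) - u(10_000))   # -1.609: loss of 8K from 10K looms larger

    # Same endpoints, opposite order (Ariely's comparison):
    print(u(10_000) - u(2_000))   # +1.609: 2K -> 10K
    print(u(2_000) - u(10_000))   # -1.609: 10K -> 2K, same magnitude

DMU alone predicts the reversed paths feel equal and opposite, so it cannot
explain the asymmetric distress Ariely observed.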

------
carbocation
In summary, what is often described as "loss aversion" is actually just an
expected property of diminishing marginal utility, and does not require a new
term to explain it.

Loss aversion, instead, is state dependence that makes you feel worse about
state X if you came from state X+1, and better about it if you came from state
X-1 (assuming X is, say, wealth).

It seems to me that "loss aversion" might be reconstituted as "utility
hysteresis" to avoid the ambiguity.

~~~
lbhnact
I like the phrase 'utility hysteresis'.

I would say that, at least academically, the difference between 'diminishing
marginal utility' and 'loss aversion' is appreciated. To quote course
material, 'When directly compared against each other, losses loom larger than
gains'.

The difference in perception on the gain and loss sides has been tested
empirically, and the curve is supposed to look like this:
[https://goo.gl/PS8Z7c](https://goo.gl/PS8Z7c)
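
For reference, a sketch of the standard parameterization of that curve
(Tversky & Kahneman's 1992 estimates; the exact numbers are illustrative):

    def value(x, alpha=0.88, lam=2.25):
        # Prospect-theory value function: concave for gains,
        # convex and steeper (by a factor of lam) for losses.
        if x >= 0:
            return x ** alpha
        return -lam * (-x) ** alpha

    print(value(100))   # ~57.5
    print(value(-100))  # ~-129.5: the loss looms ~2.25x larger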

------
Tloewald
His argument would be correct if the effect size were tiny, i.e. if people got
as much satisfaction from gaining $102 as pain from losing $100, but it's a
_much bigger_ effect than that. E.g. I will spend half an hour searching for a
lost $5 widget (e.g. a lens cap or an iPhone charging cable... ok, a $20
widget) when I could easily earn far more money doing almost anything else (or
keep my extremely expensive free time).

Also, recouping a loss is far more satisfying than an equivalent windfall, and
DMU completely fails to explain that.
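
(A sketch of that failure, assuming any DMU-style utility of wealth -- the
code is my own illustration:)

    import math

    u = math.log  # any concave utility of wealth works here
    w = 1_000

    # "Recouping a loss" and "fresh windfall" are the same transition,
    # w-100 -> w, so DMU assigns them identical utility gains:
    print(u(w) - u(w - 100))  # same number in both stories

If the recovery nonetheless feels better, something reference-dependent is
going on.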

------
jellicle
Ugh. For a person with $1,000,000 in assets, the difference between +/- $100
in marginal utility is almost zero.
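
(For concreteness, under log utility -- a standard DMU model; the computation
is mine:)

    import math

    w = 1_000_000
    gain = math.log(w + 100) - math.log(w)  # utility gained from +$100
    loss = math.log(w) - math.log(w - 100)  # utility lost from -$100
    print(loss / gain)                      # ~1.0001: essentially symmetric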

Yet all the psychological studies will show that said person HATES losing $100
much more than they like gaining $100. It's a major effect.

Heck, if you even give people the same amount of money but FRAME it as a loss
versus a gain, people change their behavior. The marginal utility is
identical!

Loss aversion is MUCH larger than any difference in marginal utility. It's a
real thing, that has huge effects on our politics, our economics, our media
and our entire lives.

TL;DR: Article writer has absolutely no idea what he is talking about.

~~~
ccleve
Agreed; the author is wrong. But his insight is that DMU is related to loss
aversion when the sums involved are large, and that really is interesting.
Hadn't occurred to me. He just overstates the case.

------
fizzbizz
I disagree.

If you come by and swipe a quarter off my desk, I'm going to be way more
pissed than if you came by my desk and left a quarter.

The delta of my being pissed is way bigger than the DMU of +/- $.25, or my
feeling if I'd parted ways with that $.25 in a variety of other manners.

I don't know if what I'm experiencing is or isn't "loss aversion". But it's
not "just DMU".

This is an important distinction. In my experience, people equate "loss
aversion" with the difference between an "unwilling departure of money" and a
"windfall" -- i.e. with being unwilling to be swindled or ripped off.

~~~
devinhelton
I don't think your anger is a result of loss aversion, but rather the fact
that someone committed predation upon you.

Civilization depends on an equilibrium where I don't steal your property, and
you do not steal mine. Civilization flourishes when everyone is in the
cooperate-cooperate quadrant of the prisoner's dilemma. When someone steals
from you, they have violated this very basic principle. If you accept this
defection without responding, then they will likely only defect more in the
future. Thus maintaining the equilibrium requires you to respond harshly to
even small defections.

~~~
betenoire
yeah... but all the examples in the article involved "magically appearing
money". How does that fit into our understanding of civilization?

~~~
ikurei
We use thought experiments about magically appearing money, but the aim would
be to later try to apply those ideas to more realistic cases of, for instance,
risk-taking.

------
austinjp
Interesting. A way of testing the author's assertion would be to ask: who
feels worse, a person who gains then loses $1000, or a person who loses then
gains $1000?

If there is a difference, then the order of events has an impact beyond the
initial/end states.
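
In the article's log-utility model the test has a clean prediction (the
sketch is mine):

    import math

    u = math.log  # DMU: feelings depend only on final wealth
    w0 = 10_000

    path_a = w0 + 1_000 - 1_000  # gain then lose
    path_b = w0 - 1_000 + 1_000  # lose then gain
    print(u(path_a) == u(path_b))  # True: DMU predicts no difference

So if the two people report feeling different, that difference is evidence
for reference dependence rather than DMU.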

~~~
lamby
I gain interest in the first case :)

~~~
ch4s3
Assuming no gravity, friction, or interest and your money is a perfect sphere
;)

------
mangeletti
I'd also add that loss aversion's supposed "reference dependence" is addressed
by a more fundamental concept, a cognitive bias: anchoring[1].

It's interesting how these fields (behavioral economics, and behavioral
studies in general) create compound ideas from combinations of various
fundamental ideas. Sometimes that adds value. Other times it simply
obfuscates the truth.

1.
[https://en.wikipedia.org/wiki/Anchoring](https://en.wikipedia.org/wiki/Anchoring)

~~~
mercer
It makes sense to me that particularly 'salient' things get their own
dedicated category, despite being a subset of some other category.

Consider racism. Most (or at least much) of the time it's really just a more
salient and specific version of the 'in-group bias'. And I'd argue that on an
academic level, researching and discussing it as the latter is vastly
preferable to using the much more loaded concept of racism. But on a societal
level it perhaps makes sense that we treat 'racism' as a category in itself.

Personally I think discussing whether x is 'just' a subset of y is less
valuable than carefully delineating the contexts in which we use particular
terms. For example, I wish that on a societal level we'd consider 'innate'
differences a bit of a taboo, but that on an academic level we could go wild
researching, say, IQ differences of particular populations without it being
coopted by politically motivated individuals.

I vaguely recall Steven Pinker (almost?) making that argument in The Blank
Slate, and even though I've never read Anathem, I understand one theme is the
idea of having academics locked up and doing research similar to orders of
monks isolating themselves from the world.

At the risk of this becoming (even more of) a ramble, I can't help but wonder
what effect our increasing interconnectedness and the inability to do
something in isolation has on all of this.

~~~
mangeletti
It's not as much a matter of determining whether "x is 'just' a subset of y",
as it is a matter of finding the most fundamental truth about something.

For instance, if you state that (paraphrasing) racism is wrapped up in 'in-
group bias', and it turns out that people are actually racist because of
something in our DNA (similar to being afraid of spiders) or something like
that (NB: completely contrived, to provide context related to your comment),
then you would have lost the truth in a higher abstraction layer, by hiding
the underlying principle.

Now, abstraction is a helpful tool, but it should be used to abstract things
to the level necessary to convey an exact message. Einstein might have said,
"Abstract things as high and low as necessary, while preserving the truth, but
no higher or lower."... but in the meantime, I made that quote up.

Take, for instance, my "Anchoring" example. There are more fundamental things
that are going on _beneath_ the "Anchoring" abstraction. Perhaps a specific
part of the human psyche, which is also responsible for recognizing patterns
(e.g., Reticular Activating System) is also responsible for the mental
constructs that lead to such a cognitive bias. But, once that science is
understood well enough to be trusted, a concept like "Anchoring" is a helpful
delineation.

In the aforementioned case of racism that I contrived, the abstraction layer
of "racism" and the perceived lower-level abstraction "in-group bias" hide
the truth, and this fallacious abstraction actually becomes, as Noam Chomsky
might suggest, a part of our mental grammar, which prevents us from ever
learning the truth, unless the abstraction is broken in our minds.

~~~
mercer
I agree with you within the 'academic' context, but that doesn't address my
main issue of whether this same approach can or should be used in a 'broader'
context.

Could it be that on societal level creating a separate category like 'racist'
is important and valuable, even if it's technically a 'fallacious'
abstraction? We don't always have the time to properly get to the bottom of
things, assuming all of us are even capable of doing so, so we're going to
grasp for salient categories anyways.

It vaguely reminds me of the discussions here about functional programming
versus object-oriented programming. Alongside the heated debate for and
against either approach (and the definitions of the 'true' version of each),
there's always someone who points out something like 'closures are just a poor
man's objects', and 'objects are just a poor man's closures'.

Even if, for the sake of argument, the differences between FP and OOP are not
as fundamental or clear-cut as they seem, I'd argue that for me and many
others these discussions are very useful. I'd never get out of my 'OOP box' to
explore FP if it wasn't contrasted and put in a whole separate category and
given a bunch of tantalizing pros that pull me to investigate.

Isn't 'truth' really just another abstraction, but one we cannot (yet) dive
into to find the underlying 'truth'? What I mean is, we stop at a certain
point not only because it's where we end up at for the time being, but also
because it's a useful abstraction.

And to be clear, I'm arguing this primarily in the context of the already very
murky and messy field of the social sciences / psychology, where definitions
are a lot less definitive than in physics or mathematics, and where they have
a much more immediate effect on society.

(honestly, I'm not sure I'm disagreeing with you, and I'm sorry if I'm perhaps
not making much sense. I'm not usually this openly... explorative in my
comments here. Your comment(s) just tickled my brain in a good way.)

~~~
mangeletti
Hey @mercer

Thanks for the reply. I just now saw this (haven't been nearly as active on
here for the past week).

I enjoyed reading your reply. I had just one comment back:

> Isn't 'truth' really just another abstraction, but one we cannot (yet) dive
> into to find the underlying 'truth'?

It's not, because 'truth' is a logical construct, an a priori kind of concept;
whatever 'truth' we find underneath what was once thought to be the truth
would just be a more fundamental truth (e.g., we discover that photons are
actually an imbalance in another dimension or something).

------
js8
I think the article is confusing. Loss aversion is a property of human
decision making, while diminishing marginal utility is a property of some
economic system.

The two may be the same, if you believe in (or talk about) subjective utility.
But I think subjective utility is a terrible concept to begin with, so better
not to go that route.

------
mrow84
It seems to me that you can reverse the sense of the author's conclusion by
changing the initial conditions:

The change in utility by going from 2k up to 10k (~ +1.61) is significantly
more than the change in utility by going from 18k down to 10k (~ -0.59).

In the simple model provided the change in utility caused by a change in
wealth, positive or negative, depends on your starting wealth. This is
different to the "pop definition" of loss aversion, which seems to be making a
claim that the ratio of the change in utility between gains and losses is
approximately independent of your starting point.
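
Concretely, with the article's u = ln(wealth) (my computation):

    import math

    u = math.log
    # The same $8k change, evaluated from different starting wealths:
    print(u(10_000) - u(2_000))   # +1.609: $8k gain starting from $2k
    print(u(18_000) - u(10_000))  # +0.588: $8k gain starting from $10k
    print(u(10_000) - u(18_000))  # -0.588: $8k loss starting from $18k

So under DMU a gain can easily "loom larger" than a loss, if the gainer starts
poorer.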

I should point out that I don't know what the correct formulation of loss
aversion is; it just seems to me that the argument presented is a bit weak.

------
mazsa
FYI: Rabin,2000: "Diminishing marginal utility of wealth cannot explain risk
aversion"
[https://scholar.google.com/scholar?cluster=12987999332583387...](https://scholar.google.com/scholar?cluster=12987999332583387312)
cf.
[https://scholar.google.com/scholar?cites=1298799933258338731...](https://scholar.google.com/scholar?cites=12987999332583387312)

------
contravariant
Bit of an overcomplicated proof that DMU implies that gains add less utility
than an equivalent loss removes. A simpler proof goes as follows:

Let U be a concave utility function, let w be a wealth level, and let e be
some amount of wealth. Concavity (applied at the midpoint of w-e and w+e)
implies:

U(w+e)/2 + U(w-e)/2 <= U(w)

hence

U(w+e) + U(w-e) <= 2*U(w)

rearranging the terms we find

(U(w+e) - U(w)) + (U(w-e) - U(w)) <= 0

and therefore

U(w+e) - U(w) <= U(w) - U(w-e).

So gains add less utility than losses remove.
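
A one-line numerical sanity check (any concave U works; here U = ln):

    import math

    U, w, e = math.log, 10_000.0, 8_000.0
    assert U(w + e) - U(w) <= U(w) - U(w - e)  # gain adds less than loss removes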

