

Game theorists offer a surprising insight into the evolution of fair play - molbioguy
http://findarticles.com/p/articles/mi_m1134/is_5_111/ai_86684497/?tag=mantle_skin;content

======
nhaehnle
The game is kind of weird:

 _Each player of the pair begins with a set amount of money, say $5. Each puts
any part or all of that $5 into a mutual pot, without knowing how much the
other player is investing. Then a dollar is added to the pot, and the sum is
split evenly between the two. So if both put in $5, they each wind up with
$5.50 ($5 + $5 + $1, divided by 2). But suppose the first player puts in $5
and the second holds back, putting in only $4? The first player gets $5 at the
end ($5 + $4 + $1, divided by 2), while the cheater gets $6 ($5 + $4 + $1,
divided by 2--plus that $1 that was held back)._
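
The arithmetic is easy to sanity-check in code. A minimal sketch in Python
(the function and names are mine, not from the article):

    def payoff(own, other, stake=5.0, bonus=1.0):
        # Money kept out of the pot, plus half of (both contributions + $1).
        pot = own + other + bonus
        return (stake - own) + pot / 2

    print(payoff(5, 5))  # 5.5 -- both contribute fully
    print(payoff(5, 4))  # 5.0 -- the full contributor, facing a cheater
    print(payoff(4, 5))  # 6.0 -- the cheater who held back $1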

It seems to me that there isn't actually anything to be gained from
cooperation: the total payout is always $11, no matter who contributes what.
If both players "cheat" completely (put $0 into the pool), they still end up
with $5.50 each. In that sense, the Nash equilibrium (both cheating) is also
socially optimal. Kind of atypical for something meant to demonstrate the
advantages of cooperation.
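
A quick brute-force check of this, reusing the payoff sketch from above:

    def payoff(own, other, stake=5.0, bonus=1.0):
        return (stake - own) + (own + other + bonus) / 2

    # Whatever the other player does, contributing less is always better:
    for other in [0, 2, 5]:
        best = max(range(6), key=lambda own: payoff(own, other))
        print(other, best)  # best own contribution is 0 in every case

    # And the combined payout is 2 * $5 + $1 = $11 no matter what:
    print(payoff(0, 0) * 2, payoff(5, 5) * 2)  # 11.0 11.0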

~~~
molbioguy
It is a weird game, but experimental situations are usually contrived. They
define cooperation as the absence of cheating. But the surprise is that people
jump at the chance to fine the cheater, even though they have to pay the same
amount as the fine:

 _You can fine the cheater by taking away some money, as long as you're
willing to give up the same amount yourself. In other words, you can punish a
cheater if you're willing to pay for the opportunity._
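
A sketch of that punishment step, assuming the 1:1 cost-to-fine ratio the
quote describes (the function is hypothetical; the article gives no exact
mechanics):

    def punish(punisher, cheater, fine):
        # The punisher gives up `fine` dollars to take `fine` from the cheater.
        return punisher - fine, cheater - fine

    # Continuing the $5-vs-$4 round above: honest player has $5, cheater $6.
    print(punish(5.0, 6.0, fine=2.0))  # (3.0, 4.0) -- punishing costs real money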

~~~
klenwell
Another famous experiment supporting the finding that people are
hypersensitive to cheating:

[http://en.wikipedia.org/wiki/Wason_selection_task#Policing_s...](http://en.wikipedia.org/wiki/Wason_selection_task#Policing_social_rules)

 _This experimental evidence supports the hypothesis that a Wason task proves
to be easier if the rule to be tested is one of social exchange (in order to
receive benefit X you need to fulfill condition Y) and the subject is asked to
police the rule, but is more difficult otherwise. Such a distinction, if
empirically borne out, would support the contention of evolutionary
psychologists that certain features of human psychology may be mechanisms that
have evolved, through natural selection, to solve specific problems of social
interaction, rather than expressions of general intelligence. In this case,
the module is described as a specialized cheater-detection module._

------
molbioguy
From the article by Robert Sapolsky (COPYRIGHT 2002 Natural History Magazine,
Inc.) -- seems relevant to the Jonathan's Card experiment:

 _Think about how weird this is. If people were willing to be spontaneously
cooperative even if it meant a cost to themselves, this would catapult us into
a system of stable cooperation in which everyone profits. Think peace,
harmony, Lennon's "Imagine" playing as the credits roll. But people aren't
willing to do this. Establish instead a setting in which people can incur
costs to themselves by punishing cheaters, in which the punishing doesn't
bring them any direct benefit or lead to any direct civic good--and they jump
at the chance. And then, indirectly, an atmosphere of stable cooperation just
happens to emerge from a rather negative emotion: desire for revenge. And this
finding is particularly interesting, given how many of our societal
unpleasantries--perpetrated by the jerk who cuts you off in traffic on the
crowded freeway, the geek who concocts the next fifteen-minutes-of-fame
computer virus--are one-shot, perfect-stranger interactions._

------
mbateman
What difference does it make that you're playing against different people?
People engage in and justify behavior based on _types_ of action.

One punishes a cheater that one will never encounter again partly on the
presumption that other people also do this to cheaters they encounter. Thus
one engages in a behavior that, if performed universally, will reduce the
likelihood that one will encounter a cheater.

This is almost exactly the same as iterated games where you play the same
person over and over and thus confront the other player's action as an
instance of a type of decision (a strategy). The fact that it isn't the same
person doesn't mean you won't think in terms of types.

Yeah, you can try to free ride and just hope that other people punish cheaters
for you and that you'll benefit without ever having to do it yourself (since
punishing incurs a cost). But if the choice is between _no one_ punishing
cheaters and _everyone_ punishing cheaters, then you choose the latter. If
you're thinking in terms of types, those are the two choices. Even if it
doesn't totally make sense in a particular context to do this, people
habitually think this way.

Human beings think in terms of types and systems of actions, and choose
actions at least partly based on what types and systems of actions they are
endorsing. These game scenarios rely on that in the same way that iterated
games do. It's tit-for-tat all over again, just one level more abstract.
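
For reference, a minimal sketch of the iterated-game strategy being alluded
to (the standard textbook tit-for-tat, not anything from the article):

    def tit_for_tat(opponent_history):
        # Cooperate on the first move, then mirror the opponent's last move.
        return "C" if not opponent_history else opponent_history[-1]

    print(tit_for_tat([]))               # C -- opens by cooperating
    print(tit_for_tat(["C", "D"]))       # D -- punishes the last defection
    print(tit_for_tat(["C", "D", "C"]))  # C -- forgives once cooperation resumes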

------
zeteo
I'm surprised that the article, while otherwise well-researched, doesn't even
mention the Zahavi Handicap Principle.

In Zahavi's view, altruism is a form of signalling: the altruist is doing so
well that they can afford to give up a good deal of material benefits. The
altruist then benefits from the high regard of the peers who witnessed the
act (e.g. potential partners of the opposite sex).

From this perspective, the crucial step in the experiments presented is not
the punishment, but the subsequent public exposure of the in-game behavior.

~~~
jamesbritt
This makes me wonder then why some cultures frown on public displays of
altruism. Basically, do good but please don't brag about it.

~~~
Swizec
To level the playing field, maybe? If you brag about how awesome you are,
you're putting pressure on everyone else to be as awesome. Some people don't
like that ...

~~~
jamesbritt
Good point. The less-than-outstanding would look like less-desirable mate
material. But they still want the benefits of altruism. So a morality develops
that says, "Do good things for others (that includes me), but keep it to
yourself, lest the rest of us look bad in comparison and fail to find suitable
mates."

------
michaeldhopkins
I would put $0 in the pot, and if my opponent put in more than me, I would
pay him until we were even. This beats the game because it sets a cooperative
standard while being immediately fair, and it also protects me. However, if my
opponent put in more and could punish me before I could even it out, I imagine
I would find it hard to "turn the other cheek." I also don't think that
putting in the full amount would create the culture I would want, because such
an action would be indistinguishable from naïveté, and other than asking for
my money back and getting it, I would have no power to do good once the game
ended.
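
Checked against the payoff rule from the game above, the equalizing move does
land both players on the fully cooperative outcome (a sketch; the mid-game
transfer is this commenter's own addition to the rules):

    def payoff(own, other, stake=5.0, bonus=1.0):
        return (stake - own) + (own + other + bonus) / 2

    mine, theirs = payoff(0, 5), payoff(5, 0)  # 8.0 and 3.0
    transfer = (mine - theirs) / 2             # 2.5, paid to even things out
    print(mine - transfer, theirs + transfer)  # 5.5 5.5 -- same as mutual $5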

------
beza1e1
The weird thing is, you could translate this insight into an ethically
questionable business idea:

A website lists wrongdoers in order to humiliate them. Each crime gets its
own list. Pay $5 for "the jerk who cuts you off in traffic on the crowded
freeway," or $1,000 for "the geek who concocts the next
fifteen-minutes-of-fame computer virus," or $100,000 for "the child molester."

I hope this would not work out, but I fear it would.

~~~
kiba
How about catching corrupt officials in the act for $500?

Though I am sure there is an unintended consequence somewhere in the idea.
Sometimes all we can do is watch the system in action and try to fix it... if
it lets us.

For example, the US government is in a slow-motion train wreck. It's very
hard to stop the train in time and fix the stuff that's broken.

------
MetallicCloud
> If enough of them do so--and especially if the cooperators can somehow
> quickly find one another--cooperation would soon become the better strategy.
> To use the jargon of evolutionary biologists who think about such things, it
> would drive noncooperation into extinction.

I don't think this works. Although cooperating works great for the majority,
that just means it allows a few to 'cheat' and get the biggest payoffs.
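
The numbers from the game at the top of the thread bear this out: against a
full contributor, a lone cheater does strictly better (same payoff sketch as
above):

    def payoff(own, other, stake=5.0, bonus=1.0):
        return (stake - own) + (own + other + bonus) / 2

    print(payoff(5, 5))  # 5.5 -- cooperator meeting a cooperator
    print(payoff(0, 5))  # 8.0 -- cheater exploiting a cooperator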

------
locopati
SuperCooperators is an interesting book exploring these ideas:

[http://www.amazon.com/Supercooperators-Mathematics-Evolution...](http://www.amazon.com/Supercooperators-Mathematics-Evolution-Altruism-Behaviour/dp/1847673376)

------
cschmidt
The Economist had a good article about this same topic two weeks ago. It seems
to be talking about unrelated studies.

<http://www.economist.com/node/21524698>

