Game Theory Calls Cooperation Into Question (quantamagazine.org)
38 points by droque on Feb 13, 2015 | 38 comments

Rather amusing title since it could be reworded as "Game theory still fails to explain biology." We know organisms cooperate. The fact that we consistently fail to find reasons to cooperate in game theory suggests that the simplifying assumptions that it makes are incorrect. To give only one example, what happens when agents can choose from among a set of games to play with opponents with known histories?

Exactly correct. Why play with cheaters?

The fundamental flaw in applying game theory to biology is to constrain the possible outcomes when unfairness is detected. In game theory, all you can do is change your strategy for the next round; in reality, we have multiple punishments to bring to bear, such as refusing to play, public shaming, fines, imprisonment, killing, etc.

Game theory is only an "ok" model of cooperation/competition in real life.

That's not a problem with game theory. That's a problem with using overly simplistic games. You can easily model all of those punishments with different payouts in a game. In fact, when I studied it in school most of the games we worked through included punishment mechanisms.
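A minimal sketch of that point, with illustrative numbers (not from the article): fold a punishment, here a fine levied on a detected defector, into the classic PD payoffs, and defection stops being profitable.

```python
# Illustrative classic PD payoffs plus a hypothetical fine levied on a
# detected defector. T, R, P, S are the usual temptation, reward,
# punishment, and sucker values.
T, R, P, S = 5, 3, 1, 0

def defection_pays(fine):
    # Against a cooperator: defecting yields T - fine, cooperating yields R.
    return T - fine > R

print(defection_pays(0))   # True: the plain dilemma, defection pays
print(defection_pays(3))   # False: a fine of 3 makes cooperation pay
```

The same trick extends to refusing to play, shaming, or any of the other punishments mentioned above: each is just a different adjustment to the payoffs.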

More like "one very simple model within game theory fails to explain everything in biology." There's nothing here that invalidates the whole of game theory. If the payoff matrix doesn't match reality, then the theory will make inaccurate predictions. Some models do predict cooperation (like the snowdrift game mentioned in the article). It just depends on what the incentives are.

Also worth mentioning that while the classic game theory approach is to calculate the optimum strategy, there's also evolutionary game theory, which looks at what strategies could actually be found by an evolutionary process. They tend to be the same, but not always.
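The evolutionary version can be sketched with replicator dynamics; payoffs here are illustrative, not from the article. In the plain PD the evolutionary outcome matches the classical one: cooperators die out.

```python
# Minimal replicator-dynamics sketch: track the cooperator share x in a
# large well-mixed population playing a 2x2 game.
def replicator_step(x, payoff, dt=0.01):
    """One Euler step of dx/dt = x(1-x)(f_C - f_D)."""
    fc = x * payoff[0][0] + (1 - x) * payoff[0][1]  # cooperator fitness
    fd = x * payoff[1][0] + (1 - x) * payoff[1][1]  # defector fitness
    return x + dt * x * (1 - x) * (fc - fd)

# Prisoner's-dilemma payoffs: cooperation dies out from any interior start.
pd = [[3, 0], [5, 1]]
x = 0.9
for _ in range(10000):
    x = replicator_step(x, pd)
print(round(x, 3))  # → 0.0: defectors take over
```

Swap in a payoff matrix where cooperation is a best response somewhere (e.g. a stag hunt) and the same dynamics can converge to full cooperation instead.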

"Choosing among a set of games" seems to me to be just a bigger game. "Known histories" can include any iterated game, which is a basic part of game theory. It's also studied a lot in the context of poker, where the challenge is that if you deviate from the game-theory optimal strategy to better exploit your opponent, you open yourself up to exploitation.

People do cooperate out of a self-survival instinct, though; cooperating now lets you get ahead of others together. However, there is always competition, and there are hierarchies within the team/group as well, which shape the choices and game theory within the cooperating group. The larger the team, or the less dependent its members, the more likely each player is to be unhappy or not doing what is best for themselves personally.

For the most part, people cooperate when the outcome of cooperating puts them in a better place. In the same way, everyone is unique, but they copy proven paths to achieve certain ends or reach stages from which they can eventually break off and get ahead personally. Likewise, groups can be unpleasant, but we all live better for there being other people.

All of life is similar: self-interested for survival, but each wanting a place and territory of its own, eventually.

For me comparative advantage (economics) was always enough to "justify" cooperation as a good general attitude in life.

Comparative advantage theory (like most of economics) is a misleading simplification.

Historically, the countries on the "low profit" end of international trade that followed the advice of comparative advantage theory lost (see Argentina), while the countries that, contrary to the theory, tried to move to more complex products/services and bigger profits won (see Germany, the USA), even if they had to abandon their comparative advantages and suffer international trade restrictions to get there.

It turns out industrialization not only makes you better at producing machines; it also makes you better at producing food and almost everything else. Producing food, meanwhile, just produces food.

Organisms don't cooperate all the time; cheating is also common in the world of biology.

I find it fascinating that twisting the parameters changes the outcome. Thinking about our social world, for example fraud in finance, perhaps there is something to be learned about the way the rules of society should be set up to eliminate cheating.

One thing the article fails to discuss is Evolutionarily Stable Strategies. Dawkins discusses the idea in The Selfish Gene and The Extended Phenotype, and it can show how specific behaviors become resistant to invasion and thus settle in an equilibrium. It provides a much better model than the simple games.
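The ESS idea can be sketched with the textbook Hawk-Dove game (values here are illustrative, not from Dawkins or the article): when the fight cost C exceeds the resource value V, no pure strategy is stable, but a population playing Hawk with probability V/C resists invasion.

```python
# Hawk-Dove sketch: resource V = 2, fight cost C = 4 (illustrative).
V, C = 2.0, 4.0
payoff = {("H", "H"): (V - C) / 2, ("H", "D"): V,
          ("D", "H"): 0.0,         ("D", "D"): V / 2}

def fitness(strategy, p_hawk):
    """Expected payoff of a pure strategy against a population that
    plays Hawk with probability p_hawk."""
    return (p_hawk * payoff[(strategy, "H")]
            + (1 - p_hawk) * payoff[(strategy, "D")])

# At p = V/C both strategies earn the same, so neither can invade:
p = V / C
print(fitness("H", p), fitness("D", p))  # both 0.5
```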

Perhaps "cooperation brings game theory into question" would be a better title?

You can't upvote this enough. If the predictions of your model don't square with reality, a reality that came into being through evolution on a geological timescale, it's time to look at the shortcomings of your model. Perhaps it can evolve into something with actual predictive value.

What is in question is not game theory (math really does aspire to "absolute truth") but whether this game-theoretical model applies to typical biological populations.

The problem with game theory is that you already have to know the outcome in order to know whether game theory matches up with the situation.

article title is terrible, agreed

A beautifully crafted argument.

Many people seem to have game theory backward.

Game theory is a modeling tool that assumes all relevant utility is baked into the payoff matrix.

Games (payoff matrices) that capture unique outcomes/behaviors are often given memorable names from human situations that approximate them. For example, the matrix referred to as the 'prisoner's dilemma' demonstrates a situation where dominant actions give a suboptimal outcome. In describing where this game might apply, economists found simultaneous interrogations to be a colorful and close-enough approximation to the idealized game.

It's not the case that game theorists started with the situation of interrogating prisoners and ended up with the game matrix.

Similarly, for anyone saying that 'that wouldn't happen in real life', you're actually saying that the payoffs don't accurately model the outcomes.
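The claim that everything lives in the payoffs can be made concrete; a minimal sketch using the textbook PD numbers (a standard illustrative convention, not from the article):

```python
# Standard illustrative PD payoffs for the row player:
#              opponent C   opponent D
matrix = {"C": {"C": 3, "D": 0},
          "D": {"C": 5, "D": 1}}

# "Defect strictly dominates cooperate" is purely a statement about
# these numbers, not about interrogation rooms: defecting is better
# no matter what the opponent does.
dominates = all(matrix["D"][opp] > matrix["C"][opp] for opp in ("C", "D"))
print(dominates)  # → True
```

Change the numbers and the conclusion changes with them, which is exactly the sense in which "that wouldn't happen in real life" is a complaint about the payoffs, not the theory.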

> Game theory is a modeling tool that assumes all relevant utility is baked into the payoff matrix.

Correct, and they mostly use the prisoner's dilemma, which is very simplistic and probably not what happens in nature (or in common situations) most of the time.

Pack animals like dogs naturally discourage fighting amongst themselves [1]. Strife weakens the pack.

Consider the effort it takes 1 person to build a house, or the amount of effort it would take to build an mp3 player alone from scratch.

Game theory would have you believe the optimal way to win at poker is to booby trap the card table and rob your competition.

[1] https://www.youtube.com/watch?v=hstLdzCg6l8

Correct, and there are games that model that behavior, for instance the iterated stag hunt [1]. This shows the benefits of cooperation for pack animals much better than the classic prisoner's dilemma. Of course you could construct a more detailed game that includes factors like defense and having a single leader, which would be even closer to reality. The math behind game theory is right; it's a question of whether we have chosen the right game to play.

[1] http://en.wikipedia.org/wiki/Stag_hunt
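A sketch of why the stag hunt differs from the PD, with illustrative payoffs: hunting stag together beats hunting hare, but hunting stag alone fails, so mutual cooperation is an equilibrium rather than being dominated.

```python
# Stag-hunt payoffs for the row player (illustrative numbers):
matrix = {"Stag": {"Stag": 4, "Hare": 0},
          "Hare": {"Stag": 3, "Hare": 3}}

def is_nash(a, b):
    """Symmetric check: no player gains by unilaterally deviating."""
    best_a = max(matrix, key=lambda s: matrix[s][b])
    best_b = max(matrix, key=lambda s: matrix[s][a])
    return matrix[a][b] == matrix[best_a][b] and matrix[b][a] == matrix[best_b][a]

# Two pure equilibria: all-cooperate and all-defect -- unlike the PD,
# where only mutual defection is stable.
print(is_nash("Stag", "Stag"), is_nash("Hare", "Hare"), is_nash("Stag", "Hare"))
```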

There's a theory that suggests altruism ultimately trumps selfishness within species, as aggressive warlike creatures would kill each other off ad infinitum, whereas cooperation ultimately strengthens individuals beyond the sum of their parts. [1]

[1] http://www.radiolab.org/story/103951-the-good-show/

A monkey will scream to warn its neighbors when a predator is nearby. But in doing so, it draws dangerous attention to itself. Scientists going back to Darwin have struggled to explain how this kind of altruistic behavior evolved.

Seriously, this is not that hard. If the prey is relatively mobile, for most predators the jig is up once the prey is aware of them. If a prey animal is alerting its friends that a predator is around, then that particular animal has seen the predator and is usually therefore relatively safe from it. For example, if an angry dog is barrelling towards your unsuspecting friend, shouting out may draw attention towards you, but you can shout and take countermeasures at the same time. It's not a zero-sum game, and you don't need hundreds of iterations for it to be beneficial to yourself. I mean, hell, watch a random Attenborough nature special and you're likely to hear him talk about the prey spotting the predator, leading to an abandonment of the hunt.

The strange maths continues with the "Bat's Dilemma" example. In the case where both bats do the same thing, share or not share, there are discordant outcomes: same population of bats, same amount of food available, yet somehow there is much more hunger if they don't share. This would only make sense if each bat only occasionally had a meal from a source much larger than it could eat by itself, then had a long period without finding food... in which case sharing the excess really isn't a dilemma, since it's excess. I really don't understand the maths in that example.

Game theory really does seem to be a hammer desperately searching for anything that looks like it might possibly be a nail. It is interesting that the end of the article says it has some suitability for microbe research, where things are much more stimulus/response and much less complex.
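A toy calculation (numbers hypothetical, not from the article) shows how the bat maths can come out that way: if a successful hunt yields more than one bat can eat, sharing leaves the total food unchanged but cuts each bat's chance of a hungry night.

```python
# Each bat independently finds a large meal on a given night with
# probability p; a night is "hungry" only if a bat gets no food at all.
p = 0.5
hungry_alone = 1 - p              # 0.5: hungry whenever its own hunt fails
hungry_sharing = (1 - p) ** 2     # 0.25: hungry only if BOTH hunts fail
print(hungry_alone, hungry_sharing)
```

Sharing is insurance against variance, not a redistribution of scarce calories, which is why the non-sharing population can be much hungrier on the same total food.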

> In a single instance of the prisoner’s dilemma, the best strategy is to defect — squeal on your partner and you’ll get less time.

No, that is not right, if it was there would be no dilemma, and this subject would not be discussed ad nauseam. The whole point is that the situation is symmetrical for both players, they should reach the same conclusion and act the same way... and using the strategy of cooperation their outcome is better than defecting.

Cooperating is strongly dominated by defecting in Prisoners Dilemma. This is an obvious and very basic game theory result.

Game Theory models strategic situations and doesn't offer insight outside what is modeled. If you think there should be communication supporting cooperation in the game, that game is NOT the Prisoners Dilemma and is in fact another game.

The Prisoners Dilemma is a model that is stacked very much against cooperation. Think about it, the prisoners are held in separate rooms and not allowed to communicate at all in the original story.

Thought experiments are not set in stone, and even if they were, modifications could be made.

A real game show [1] put two people in the position of the prisoner's dilemma, and they were free to communicate. [2]

[1] https://www.youtube.com/watch?v=S0qjK3TWZE8

[2] http://www.radiolab.org/story/golden-rule/

It gets presented as an obvious and basic game theory result, but it only makes sense if you don't believe in the basic tenets of game theory, among which is the claim that it is a theory of maximizing rational actors.

There is no moral choice-making for maximizing rational actors, and both actors in the PD have exactly the same information, including the fact that the other individual is a maximizing rational actor. As such, the off-diagonal elements of the payoff matrix are irrelevant to any rational decision-making, because any two rational actors with the same goal will always make the same choice in the same situation. To do anything else would be irrational.

So within the frame of the theory both players know with certainty that because the game is being played by maximizing rational actors that the other player will always do exactly what they do. This is true no matter what they do: the other player will always reach the same conclusion. Rationality dictates it, if rationality means anything at all.

It is only when you smuggle in the possibility of an irrational choice on the part of one of the players that the off-diagonal elements become relevant, because one player can for unaccountable reasons choose to do something irrational, which a maximizing rational actor would never do.

Game theory is not about people. It's about rational actors who want to maximize their payoff. For such entities there is no dilemma, since only the diagonal elements of the matrix matter, and cooperation is the obvious maximizing strategy.

Unfortunately, game theory under this constraint becomes very boring. There is probably a salvageable variant of it that remains interesting, but I'm honestly not sure what it's a theory of. "Semi-rational not-very-smart actors"? That would describe humans reasonably well, I guess. It certainly describes me. Or maybe the decision-maker being analyzed could be considered a rational actor and the rest of the players irrational, although that would be equivalent to playing against a random number generator.

Iterated Prisoner's Dilemma is less clearly stacked in favour of defecting, although I assume that the actors' memories would fall under "communication".


Yes, a repeated game like Iterated PD is a different game with a different solution. The Grim strategy for example.
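A minimal sketch of the Grim (trigger) strategy in an iterated PD, with the standard illustrative payoffs: cooperate until the opponent defects once, then defect forever.

```python
# Textbook PD payoffs: temptation, reward, punishment, sucker.
T, R, P, S = 5, 3, 1, 0

def play(strat_a, strat_b, rounds=100):
    """Return player A's total score; each strategy sees only the
    opponent's history of past moves."""
    hist_a, hist_b, score_a = [], [], 0
    pay = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += pay[(a, b)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a

grim = lambda opp_hist: "D" if "D" in opp_hist else "C"
always_d = lambda opp_hist: "D"

print(play(grim, grim))      # 300: mutual cooperation the whole game
print(play(grim, always_d))  # 99: one sucker payoff, then mutual defection
```

Against another Grim player, cooperation is sustained for the whole game; the threat of permanent retaliation is what changes the solution relative to the one-shot game.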

I always found it interesting that the Prisoner's Dilemma is so context-free. For a lot of criminals, doing a moderate amount of jail time is far preferable to being a snitch. No point in getting out early if you're facing vicious repercussions from your peers.

"domination" is a local phenomenon PD shows that local phenomena do not generalize to global solutions.


If I duplicated you and made you play a PD against your duplicate and then sent you off to different corners of the universe to enjoy the spoils, would you cooperate or defect?

Yes but the dilemma is that no matter what the other person does, you are better off by defecting. You get a good outcome if you both cooperate. If he defects, you get a horrible outcome if you cooperate. If he cooperates, you get a decent outcome by cooperating, but an even better outcome by defecting.

Since both players think this way, they both defect and get a bad outcome.

Real prisoners don't always defect, but the reason is that they're not actually playing the prisoner's dilemma. The payoff matrix has been altered, e.g. by severe penalties for ratting, which give the player a much worse outcome for defecting. Those penalties wouldn't be necessary if the dilemma weren't real. (Another alteration would be if the prisoners care enough about each other that each sees it as a bad outcome if the other suffers. This isn't contrary to game theory; it's just a different set of payoffs.)
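A sketch of that alteration with illustrative numbers: subtract a penalty for ratting from the standard PD payoffs and check which action dominates.

```python
# Standard illustrative PD payoffs, minus a hypothetical penalty applied
# to every defection (the "snitches get stitches" term).
def matrix(snitch_penalty):
    m = {"C": {"C": 3, "D": 0},
         "D": {"C": 5, "D": 1}}
    for opp in m["D"]:
        m["D"][opp] -= snitch_penalty
    return m

def dominant(m):
    """Return an action that is weakly best against every opponent move."""
    for a in m:
        if all(m[a][opp] >= max(m[b][opp] for b in m) for opp in ("C", "D")):
            return a
    return None

print(dominant(matrix(0)))  # → D: the plain dilemma
print(dominant(matrix(4)))  # → C: a big enough penalty removes the dilemma
```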

Depending on the risks, a somewhat random strategy can work out really well. E.g., if both defect they both get 10 years, if neither defects both get 9 years, and if one defects the other gets 15 years. Now if they each defect 10% of the time, then 18% of the time they're better off collectively, as 9+9 > 15, saving 3 years, and 1% of the time they're worse off by 2 years collectively.
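The arithmetic checks out under the assumption the comment implies, namely that a lone defector serves 0 years (the sentences are hypothetical):

```python
# Each prisoner independently defects with probability 0.1.
# Sentences (row player, column player); a lone defector serves 0 years.
p = 0.1
years = {("C", "C"): (9, 9), ("C", "D"): (15, 0),
         ("D", "C"): (0, 15), ("D", "D"): (10, 10)}
prob = {"C": 1 - p, "D": p}

expected_total = sum(prob[a] * prob[b] * sum(years[(a, b)])
                     for a in "CD" for b in "CD")
print(expected_total)  # ~17.48 collective years, vs 18 if neither defects
```

So the mixed strategy shaves about half a year off the expected collective sentence compared with always cooperating.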

The question I was left with is: what conditions do you introduce into an evolving population that make cooperation more beneficial than extortion? They talked about introducing random conditions or mutations, but I wonder if the ones that led to cooperation beating extortion had any sort of pattern.

The most interesting point, only glanced at in the article from my point of view, is that small changes in conditions lead to big changes in the optimal strategy. My chaos theory is a little flimsy, but I think a general framework that treats optimal strategies as strange attractors conceptually makes sense.

In fact I think it's more interesting to find the systems in which a given strategy is successful, as opposed to finding a successful strategy given the system. All sorts of interesting questions arise from this point of view, the most obvious being how one (or an animal population) changes the inputs that form the system so that a given strategy becomes successful.
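One way to sketch that inverse question is to sweep a game parameter and see where a strategy succeeds. In the snowdrift game mentioned in the article (benefit b, cost c < b), the standard mixed-equilibrium result puts the cooperator fraction at (2b - 2c)/(2b - c); the values below are illustrative.

```python
# Equilibrium cooperator share in the snowdrift game as a function of
# the cost/benefit parameters (standard textbook result).
def coop_share(b, c):
    return max(0.0, (2 * b - 2 * c) / (2 * b - c))

# Sweep the cost: cooperation thrives when cooperating is cheap and
# shrinks as it gets expensive.
for c in (0.2, 0.5, 0.8):
    print(round(coop_share(1.0, c), 2))  # 0.89, 0.67, 0.33
```

Read in reverse, the sweep answers "in which systems does cooperation succeed?" rather than "what succeeds in this system?".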

Evolution happens primarily in spurts with high selection pressure in which cooperation probably makes less sense than in stable populations with relatively low selection pressure.

Yes, this is a quip:

Carrot and stick: The two poles of community (of the community magnet).

>Researchers have proposed different possible mechanisms to explain cooperation. Kin selection suggests that helping family members ultimately helps the individual.

This is wrong and should significantly reduce any trust you may have had in the journalist who wrote the piece. Kin selection is about how helping family members helps the genes that make the individual. Behaviour will spread if it increases inclusive fitness. If you can save a sibling (coefficient of relatedness 0.5) with probability 1 and the chance of you dying is 0.4 you do it. If P(death) is 0.51 or higher you don't.
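The comment's rule is Hamilton's rule, help when r*b > c, and its numbers check out directly:

```python
# Hamilton's rule: help kin when relatedness * benefit > cost. Here the
# benefit is a sibling saved with probability 1, and the cost is the
# helper's own risk of death (numbers from the comment above).
def should_help(relatedness, benefit, cost):
    return relatedness * benefit > cost

print(should_help(0.5, 1.0, 0.40))  # True: save the sibling
print(should_help(0.5, 1.0, 0.51))  # False: the risk outweighs the gain
```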

>Group selection proposes that cooperative groups may be more likely to survive than uncooperative ones.

The conditions necessary for group selection are incredibly strong and very rarely occur in practice in biological settings. When they do you get hive organisms like naked mole rats or the Hymenoptera. There is stronger evidence for group selection in cultural evolution than in most of biology.

Further reading http://lesswrong.com/lw/kw/the_tragedy_of_group_selectionism

>“As mutations that increase the temptation to defect sweep through the group, the population reaches a tipping point,” Plotkin said. “The temptation to defect is overwhelming, and defection rules the day.”

>Plotkin said the outcome was unexpected. “It’s surprising because it’s within the same framework — game theory — that people have used to explain cooperation,” he said. “I thought that even if you allowed the game to evolve, cooperation would still prevail.”

>The takeaway is that small tweaks to the conditions can have a major effect on whether cooperation or extortion triumphs. “It’s quite neat to see that this leads to qualitatively different outcomes,” said Jeff Gore, a biophysicist at the Massachusetts Institute of Technology who wasn’t involved in the study. “Depending on the constraints, you can evolve qualitatively different kinds of games.”

Mathematicians develop model that gives us a deeper understanding of the shallowness of our understanding of cooperation.

Unfortunately my math isn't strong enough to understand the paper but you'll get a much better understanding of how game theory applies to biology from The Selfish Gene by Richard Dawkins than from this article.

Don't read anything by Stephen Jay Gould http://pleiotropy.fieldofscience.com/2009/02/krugman-on-step...

http://en.wikipedia.org/wiki/Stephen_Jay_Gould#The_Mismeasur... In 2011, a study conducted by six anthropologists reanalyzed Gould's claim that Samuel Morton unconsciously manipulated his skull measurements,[82] and concluded that Gould's analysis was poorly supported and incorrect. They praised Gould for his "staunch opposition to racism" but concluded, "we find that Morton's initial reputation as the objectivist of his era was well-deserved."[83] Ralph Holloway, one of the co-authors of the study, commented, "I just didn't trust Gould. ... I had the feeling that his ideological stance was supreme. When the 1996 version of 'The Mismeasure of Man' came and he never even bothered to mention Michael's study, I just felt he was a charlatan."[84] The group's paper was reviewed in the journal Nature, which recommended a degree of caution, stating "the critique leaves the majority of Gould's work unscathed," and notes that "because they couldn't measure all the skulls, they do not know whether the average cranial capacities that Morton reported represent his sample accurately."[85] The journal stated that Gould's opposition to racism may have biased his interpretation of Morton's data, but also noted that "Lewis and his colleagues have their own motivations. Several in the group have an association with the University of Pennsylvania, and have an interest in seeing the valuable but understudied skull collection freed from the stigma of bias."
