
> you can either consider the will of the dominant male and the acquiescence of the other primates to be a form of law, or you have to concede that a tribe working in such conditions is not really engaging in mutual cooperation.

I don't think "dominant male" is a valid description of the organization of all pre-civilized human tribes. I agree that the incentive for mutual cooperation is not the only incentive in play; and of course that's just as true now as it was 15,000 years ago.

> Law is how we scale cooperation from small bands led by a dominant male to countries of 300 million people.

I agree with this, and I should have mentioned it in my earlier post. As you note, law was the way mutual cooperation was scaled even when it just involved cities of a few thousand people 4,000 years ago. I was only trying to point out that the incentives for mutual cooperation are logically prior to the means used to facilitate it.

Also, of course, scaling mutual cooperation is not the only thing law is used for; it is also used to facilitate rent-seeking and other non-cooperative behaviors. That was true 4,000 years ago as well.

> You can always achieve even greater gains by cooperating only up to the point where it's most advantageous for you.

If the interaction is non-iterated, yes. In an iterated interaction you can't--or rather, you can, but the other person will just retaliate by ceasing to cooperate, and you both will be worse off than you would have been if you had continued to mutually cooperate. See the Prisoner's Dilemma.

(Btw, I'm not claiming that this doesn't happen. I'm just claiming that it is not actually a gain in the long run.)
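
To make this concrete, here's a minimal sketch in Python (the payoff numbers are the standard textbook ones, my assumption rather than anything from this thread) comparing an unconditional defector against a retaliating strategy over repeated rounds:

    # Iterated Prisoner's Dilemma sketch. 'C' = cooperate, 'D' = defect.
    # Payoffs (assumed): mutual cooperation 3, mutual defection 1,
    # successful defection 5, being exploited 0.
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def play(strategy_a, strategy_b, rounds=100):
        """Run an iterated game and return cumulative payoffs."""
        history_a, history_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(history_b)
            move_b = strategy_b(history_a)
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            score_a += pay_a
            score_b += pay_b
            history_a.append(move_a)
            history_b.append(move_b)
        return score_a, score_b

    always_defect = lambda opp: 'D'
    tit_for_tat = lambda opp: 'C' if not opp else opp[-1]  # retaliate in kind

    print(play(always_defect, tit_for_tat))  # (104, 99): one exploit, then mutual defection
    print(play(tit_for_tat, tit_for_tat))    # (300, 300): sustained cooperation

The defector gets a one-round windfall and then forfeits the cooperation surplus in every round after, which is the "not actually a gain in the long run" point.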




> If the interaction is non-iterated, yes. In an iterated interaction you can't--or rather, you can, but the other person will just retaliate by ceasing to cooperate...

You're assuming things about the nature of the equilibrium that I don't think you can assume.

There is, I think, an illuminating but relatively unexplored set of parallels between human social dynamics and algorithms. A lot of social theory is predicated on assumptions that can be likened to the assumption that a given optimization problem has a greedy solution. Convergence versus divergence, the existence of polynomial-time algorithms for particular problems, and so on all have, I think, a lot of potential to illuminate social theory.
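
A toy instance of the greedy assumption (my example, nothing specific to social theory): making change with coin denominations {1, 3, 4} is a standard case where taking the locally best move at each step does not yield the global optimum.

    # Greedy vs. exact coin change for denominations {1, 3, 4} (assumed toy).
    def greedy_change(amount, coins=(4, 3, 1)):
        """Always take the largest coin that fits (the locally optimal move)."""
        used = []
        for coin in coins:
            while amount >= coin:
                amount -= coin
                used.append(coin)
        return used

    def optimal_change(amount, coins=(4, 3, 1)):
        """Exact minimum number of coins, via dynamic programming."""
        best = {0: []}
        for a in range(1, amount + 1):
            options = [best[a - c] + [c] for c in coins if c <= a and (a - c) in best]
            if options:
                best[a] = min(options, key=len)
        return best.get(amount)

    print(greedy_change(6))   # [4, 1, 1] -- three coins
    print(optimal_change(6))  # [3, 3]    -- two coins; greedy loses

If the parallel holds, locally advantageous choices need not aggregate into a globally good outcome.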

-----


> You're assuming things about the nature of the equilibrium that I don't think you can assume.

I'm assuming that the "payoff matrix" for the two-person interaction has the same general form as the Prisoner's Dilemma matrix does, yes, which means that the Nash equilibrium, which is mutual defection, is also the outcome with the lowest aggregate payoff summed over both players. Is that what you're referring to? If so, I agree that this is an assumption, but I don't think it's a very extravagant one.
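
For what it's worth, that assumption is easy to check mechanically. A short sketch (again with assumed standard payoff numbers, since the thread never pins them down):

    # Verify that mutual defection is the unique Nash equilibrium of the
    # assumed PD matrix, and also the cell with the lowest total payoff.
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
    MOVES = ('C', 'D')

    def is_nash(row, col):
        """Neither player gains by unilaterally switching moves."""
        row_ok = all(PAYOFF[(row, col)][0] >= PAYOFF[(alt, col)][0] for alt in MOVES)
        col_ok = all(PAYOFF[(row, col)][1] >= PAYOFF[(row, alt)][1] for alt in MOVES)
        return row_ok and col_ok

    equilibria = [cell for cell in PAYOFF if is_nash(*cell)]
    worst = min(PAYOFF, key=lambda cell: sum(PAYOFF[cell]))
    print(equilibria)  # [('D', 'D')]: mutual defection is the only equilibrium
    print(worst)       # ('D', 'D'): it also minimizes the summed payoff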

-----


You're extrapolating from the payoff matrix for a two-person interaction to the equilibrium behavior of millions of people. You're also assuming rational actors, etc. You're making a mountain of assumptions here.

-----


The equilibrium behavior of millions of people is just the aggregate of the behavior of individuals in small-scale interactions. A better objection would be that not all small-scale interactions can be modeled as two-person games.

Yes, I'm assuming "rational" actors, in the sense that they respond to incentives in a way that can be modeled by game theory. But that's not actually a very extravagant assumption. In particular, it does not entail that "rational" actors have to be conscious of the incentives they are responding to. I think many people who respond "rationally" to Prisoner's Dilemma-type incentives are not actually conscious of them; that's what I meant by my comment about tribal instincts. For example, saying that people punish defectors for emotional reasons rather than coldly calculated rational ones misses the point, because the emotions evolved in response to the same sorts of game theoretic incentives.
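
The selection-without-awareness point can be illustrated with a toy evolutionary tournament (entirely my own construction; the payoffs, round count, and update rule are all assumptions): strategies reproduce in proportion to payoff, and a retaliating strategy spreads without any agent ever reasoning about the matrix.

    # Replicator-style dynamics on the iterated PD (assumed toy model).
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def iterated_score(strat_a, strat_b, rounds=50):
        """Payoff to strat_a over an iterated match against strat_b."""
        ha, hb, total = [], [], 0.0
        for _ in range(rounds):
            ma, mb = strat_a(hb), strat_b(ha)
            total += PAYOFF[(ma, mb)][0]
            ha.append(ma)
            hb.append(mb)
        return total

    STRATS = {
        'defector':   lambda opp: 'D',
        'retaliator': lambda opp: 'C' if not opp else opp[-1],  # tit-for-tat
    }

    # Start with defectors dominant; shares grow in proportion to fitness.
    shares = {'defector': 0.9, 'retaliator': 0.1}
    for _ in range(30):
        fitness = {n: sum(shares[m] * iterated_score(STRATS[n], STRATS[m])
                          for m in STRATS) for n in STRATS}
        mean = sum(shares[n] * fitness[n] for n in STRATS)
        shares = {n: shares[n] * fitness[n] / mean for n in STRATS}

    print(shares)  # retaliators dominate despite starting at 10%

No individual in this model is conscious of anything; the "rational" response to the incentives is simply what survives.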

If you really object to the "rationality" assumption, then you need to come up with a better one. Attempts to do that (I'm thinking, for example, of the work of Kahneman and Tversky) often end up showing that the incentives involved are more complicated than we thought, not that we respond "irrationally".

-----


> The equilibrium behavior of millions of people is just the aggregate of the behavior of individuals in small-scale interactions.

The dynamics of a complex system cannot in any sense be described by simply aggregating the individual small-scale interactions. This is a huge unjustified assumption.

-----


> The dynamics of a complex system cannot in any sense be described by simply aggregating the individual small-scale interactions.

In many cases it can, so this statement as it stands is much too strong. For example, a country's economy is a huge game of mutual cooperation whose dynamics can be perfectly well described by aggregating a huge number of two-person games (or perhaps "two-player" is better, since one player is often an organization, like a company or the government, rather than a single person), or in some cases games with larger numbers of players, but still small-scale ones.
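
Here is the sort of toy aggregation I have in mind (a sketch with assumed fixed strategies and random matching, not a model of any real economy): the population-level outcome is literally a sum over two-player games.

    import random

    # Aggregate welfare of N agents repeatedly matched in pairwise PD rounds.
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def aggregate_welfare(population, matchings=10_000, seed=0):
        """Total payoff produced by many random two-player interactions."""
        rng = random.Random(seed)
        total = 0
        for _ in range(matchings):
            a, b = rng.sample(population, 2)   # one small-scale interaction
            pay_a, pay_b = PAYOFF[(a, b)]
            total += pay_a + pay_b             # the aggregation step
        return total

    print(aggregate_welfare(['C'] * 100))              # 60000: all cooperate
    print(aggregate_welfare(['C'] * 50 + ['D'] * 50))  # ~45000: defection destroys surplus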

There may be cases where a system's dynamics can't be described this way; can you give a specific example?

-----



