At the present time, yes, there's a high correlation. But 15,000 years ago, there were no laws, yet humans still engaged in mutual cooperation. Mutual cooperation is prior to law; laws are just the way we currently try to facilitate mutual cooperation.
I cooperate with you, but only because I know that if you don't hold up your end of the bargain or act anti-socially, I can call the other monkeys to put you in your place.
This is one possible reason, but it's not the only one. If both of us can realize greater gains from mutual cooperation and trade than we can from each being isolated individuals, then we have an incentive to mutually cooperate even without any other enforcement mechanism. The reason the other enforcement mechanisms are there is that, as you say, we evolved from creatures that were not intelligent enough to consciously apprehend the incentives for mutual cooperation, so we evolved unconscious mechanisms for doing so. That's what all those tribal instincts are: calling the other monkeys to put an anti-social monkey in his place happens in the absence of laws too.
Depending on how you look at it, either "law" exists even in primate tribes, or what they do can't be considered "mutual cooperation." That is to say, you can either consider the will of the dominant male and the acquiescence of the other primates to be a form of law, or you have to concede that a tribe operating under such conditions is not really engaging in mutual cooperation.
In any case, law isn't "just the way we currently try" to achieve mutual cooperation. It's coincident with civilization. We have legal codes dating back 4,000 years. Law is how we scale cooperation from small bands led by a dominant male to countries of 300 million people.
> If both of us can realize greater gains from mutual cooperation and trade than we can from each being isolated individuals, then we have an incentive to mutually cooperate even without any other enforcement mechanism.
You can always achieve greater gains by cooperating only up to the point where it's most advantageous for you.
I don't think "dominant male" is a valid description of the organization of all pre-civilized human tribes. I agree that the incentive for mutual cooperation is not the only incentive in play; and of course that's just as true now as it was 15,000 years ago.
> Law is how we scale cooperation from small bands led by a dominant male to countries of 300 million people.
I agree with this, and I should have mentioned it in my earlier post. As you note, law was the way mutual cooperation was scaled even when it just involved cities of a few thousand people 4,000 years ago. I was only trying to point out that the incentives for mutual cooperation are logically prior to the means used to facilitate it.
Also, of course, scaling mutual cooperation is not the only thing law is used for; it is also used to facilitate rent-seeking and other non-cooperative behaviors. That was true 4,000 years ago as well.
> You can always achieve greater gains by cooperating only up to the point where it's most advantageous for you.
If the interaction is non-iterated, yes. In an iterated interaction you can't--or rather, you can, but the other person will just retaliate by ceasing to cooperate, and you both will be worse off than you would have been if you had continued to mutually cooperate. See the Prisoner's Dilemma.
(Btw, I'm not claiming that this doesn't happen. I'm just claiming that it is not actually a gain in the long run.)
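To make the retaliation point concrete, here's a toy sketch of an iterated Prisoner's Dilemma (the payoff numbers and strategy names are my own illustrative choices, using the standard textbook payoffs):

```python
# Standard Prisoner's Dilemma payoffs: mutual cooperation -> 3 each;
# mutual defection -> 1 each; a lone defector gets 5, the exploited
# cooperator gets 0. Each entry is (player A's payoff, player B's payoff).
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def play(strategy_a, strategy_b, rounds=100):
    """Run an iterated game; each strategy sees the opponent's history."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

# Two retaliators sustain mutual cooperation: 300 points each.
print(play(tit_for_tat, tit_for_tat))    # (300, 300)
# A defector wins round one, then cooperation collapses and both end up
# far below the 300 they could have had.
print(play(always_defect, tit_for_tat))  # (104, 99)
```

The defector "gains" only in the first round; over the whole interaction, both players do much worse than sustained mutual cooperation, which is exactly the point about iterated interactions above.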
You're assuming things about the nature of the equilibrium that I don't think you can assume.
There is, I think, an illuminating but relatively unexplored set of parallels between human social dynamics and algorithms. A lot of social theory is predicated on assumptions that can be likened to the assumption that a given optimization problem has a greedy solution. Convergence versus divergence, the existence of polynomial-time algorithms for particular problems, etc., all have, I think, a lot of potential to illuminate social theory.
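As a toy illustration of the "greedy solution" assumption failing (the denominations here are a standard textbook example, not anything from social theory): with coins {1, 3, 4}, always grabbing the biggest coin gives a worse answer than planning ahead.

```python
def greedy_change(amount, coins=(4, 3, 1)):
    """Always take the largest coin that fits -- the 'greedy' assumption."""
    used = []
    for coin in coins:
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used

def optimal_change(amount, coins=(4, 3, 1)):
    """Exhaustive dynamic programming: the true optimum."""
    best = {0: []}  # best[t] = shortest coin list summing to t
    for total in range(1, amount + 1):
        candidates = [best[total - c] + [c]
                      for c in coins if total >= c and (total - c) in best]
        if candidates:
            best[total] = min(candidates, key=len)
    return best[amount]

print(greedy_change(6))   # [4, 1, 1] -- three coins
print(optimal_change(6))  # [3, 3]    -- two coins; greedy was not optimal
```

Locally advantageous moves compounding into a globally suboptimal outcome is the same structural pattern as defection in the games discussed here.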
I'm assuming that the "payoff matrix" for the two-person interaction has the same general form as the Prisoner's Dilemma matrix does, yes, which means that the Nash equilibrium, which is mutual defection, is also the outcome with the lowest aggregate payoff summed over both players. Is that what you're referring to? If so, I agree that this is an assumption, but I don't think it's a very extravagant one.
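Both properties of that matrix can be checked mechanically. A minimal sketch, using the standard textbook payoffs (the specific numbers are my choice, but any payoffs of the Prisoner's Dilemma form give the same result):

```python
# Prisoner's Dilemma payoffs as (row player, column player).
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
MOVES = ("C", "D")

def is_nash(a, b):
    """Neither player can gain by unilaterally switching moves."""
    pa, pb = PAYOFF[(a, b)]
    best_a = all(PAYOFF[(alt, b)][0] <= pa for alt in MOVES)
    best_b = all(PAYOFF[(a, alt)][1] <= pb for alt in MOVES)
    return best_a and best_b

equilibria = [(a, b) for a in MOVES for b in MOVES if is_nash(a, b)]
print(equilibria)  # [('D', 'D')] -- mutual defection is the only equilibrium

# And mutual defection also has the lowest aggregate payoff of any outcome.
aggregate = {outcome: sum(payoffs) for outcome, payoffs in PAYOFF.items()}
print(min(aggregate, key=aggregate.get))  # ('D', 'D')
```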
Yes, I'm assuming "rational" actors, in the sense that they respond to incentives in a way that can be modeled by game theory. But that's not actually a very extravagant assumption. In particular, it does not entail that "rational" actors have to be conscious of the incentives they are responding to. I think many people who respond "rationally" to Prisoner's Dilemma-type incentives are not actually conscious of them; that's what I meant by my comment about tribal instincts. For example, saying that people punish defectors for emotional reasons rather than coldly calculated rational ones misses the point, because the emotions evolved in response to the same sorts of game theoretic incentives.
If you really object to the "rationality" assumption, then you need to come up with a better one. Attempts to do that (I'm thinking, for example, of the work of Kahneman and Tversky) often end up showing that the incentives involved are more complicated than we thought, not that we respond "irrationally".
The dynamics of a complex system cannot in any sense be described by simply aggregating the individual small-scale interactions. This is a huge unjustified assumption.
In many cases it can, so this statement as it stands is much too strong. For example, a country's economy is a huge game of mutual cooperation whose dynamics can be perfectly well described by aggregating a huge number of two-person games (or perhaps "two-player" would be better since one player is often an organization, like a company or the government, rather than a single person)--or in some cases perhaps games with larger numbers of players, but still small-scale.
There may be cases where a system's dynamics can't be described this way; can you give a specific example?