This seems to make the same assumption as analyses of the prisoner's dilemma: that it's an isolated decision. If you take the decision into any kind of realistic context, you'd never say something as cold as "I'm sure you'd do it anyway" (the analogue of defecting in the Prisoner's Dilemma), because any reasonable analysis will show it to be suboptimal.
I've often encountered thought experiments like this with a moral along the lines of "Sometimes it pays to be bad; it's upsetting, but it's the truth, and here's the maths that proves it". But they've always been equally interpretable as "look how bad a job naive game theory does at explaining real life".
Very few people would outright say "I'm going to defect here, you'll do the work anyway". They might say "that sounds like a great idea, but I promised my kids we'd go to Disneyland this weekend". And next week, "I promised my wife some alone time". Then, "my herd is coming down with something, I need to pick up some antibiotics". And so on, until you give up and dig it yourself.
And if you install a pump in a village and ask people to pay to use it without enforcing that, they might say "I'm not going to pay, water should be free".
It would be naive to try to model everything as a simple one-shot game. It would also be naive to think that simple one-shot games don't usefully model real life at all. Just look at https://en.wikipedia.org/wiki/Prisoner%27s_dilemma#Real-life... and see how many players defect.
However, I agree that I should probably have included some discussion of iterated versions.
How about this resolution: "Ah. Okay, so it's like that. Tell you what, why don't you mull it over and I'll come ask you about it again next year." You were getting by just fine without it before, you'll get by fine without it even after you know about it. No need to burn any crops, just make your case a little better next year, be a little less cooperative come harvest time, and he'll get the picture eventually.
Game theorists always seem to assume that there's no such thing as nuanced communication.
> Game theorists always seem to assume that there's no such thing as nuanced communication.
No, they don't. Non-iterated, binary-strategy-choice games are the simplest things in game theory, but not the whole of game theory. The same as the simplifying assumptions used at the beginning of an Econ 101 class aren't the whole of economic models.
Nuanced interactions in iterated games (and what kind of games nuanced interactions actually make any difference in) is, in fact, studied in quite some depth in game theory.
Agree. This applies even more to the "clearing the snow" game mentioned at the end. The work to clear the snow repeats every time it snows and each player can remember what happened the previous time.
Couldn't agree more; I came to say something similar -- there is no time dimension here. There is no recognition that cooperating now yields cooperation in disadvantageous situations in the future.
Specifically, public goods are goods where there is a temptation to free ride. As illustrated in the article, this doesn't always mean 0 public goods get made even by perfectly rational actors -- sometimes it's worth it for someone to do it anyway (Imagine putting up a street light in front of your shop if your local government didn't do it for you.)
The Wikipedia page has dozens of possible solutions, both economic and social -- everything from government provision to buying out possible free riders to subsidies to social sanctions.
It's actually a special case of the public good problem. Most public goods aren't worth more to one individual than it would cost that individual to produce. A strong military defense for my country is a classic example of a public good, but it probably costs way more than it is worth to me. Yet it's still a public good because it is worth more to the entire country than it costs to produce.
The public goods problem can be either the described Farmer's Dilemma or the Prisoner's Dilemma (it's usually considered the latter, since the cost is usually greater than the benefit to any one individual from having the good provided, so that defecting is a dominant strategy, unlike in the FD).
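The FD-vs-PD distinction can be checked mechanically. Here's a minimal sketch; the payoff numbers (benefit 3, total cost of 2 or 4) are illustrative assumptions on my part, not taken from the article:

```python
# Two-player public-good game: each player either digs or shirks.
# b = benefit each player gets if the ditch exists, c = total cost of digging.

def payoffs(b, c):
    """My payoff as a dict keyed by (my_move, their_move)."""
    return {
        ("dig", "dig"): b - c / 2,   # split the work
        ("dig", "shirk"): b - c,     # I do it all myself
        ("shirk", "dig"): b,         # free ride on the other's work
        ("shirk", "shirk"): 0,       # no ditch gets dug
    }

def dominant_strategy(p):
    """Return a move that is at least as good whatever the other does, or None."""
    for mine, other in (("dig", "shirk"), ("shirk", "dig")):
        if all(p[(mine, theirs)] >= p[(other, theirs)]
               for theirs in ("dig", "shirk")):
            return mine
    return None

# Farmer's Dilemma: the ditch is worth more than even the full solo cost.
print(dominant_strategy(payoffs(3, 2)))  # None: best reply depends on the other

# PD-like public good: the solo cost exceeds the individual benefit.
print(dominant_strategy(payoffs(3, 4)))  # shirk dominates
```

With b=3, c=2 neither move dominates (if the other shirks, digging still nets you 1), which is why the FD isn't a true PD; raise c above b and shirking becomes dominant.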
My personal favorite: Throw in a little extra to bring them over the top in their utility calculation.
"Tell you what. Go halfsies on this ditch thing with me and I'll bring over a half a pig to throw in your freezer in October."
You reduce your workload by half and only give up 1.1 utilons. You get 1.9 utilons for 1 unit of work (1.9x efficiency), versus getting 3 for 2, which is only 1.5x efficiency.
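The arithmetic above can be checked with a quick sketch; the numbers (ditch worth 3 utilons, full dig costs 2, half a pig worth 1.1) are the ones assumed in this example:

```python
# Side-payment arithmetic for the half-a-pig deal.
ditch_benefit = 3.0  # utilons the ditch is worth to me
solo_cost = 2.0      # utilons of work to dig it all myself
pig_value = 1.1      # utilons of pig offered to the neighbour

# Dig alone: 3 utilons gained for 2 utilons of work.
solo_efficiency = ditch_benefit / solo_cost        # 1.5x

# Halfsies plus the pig: half the work, minus the pig handed over.
deal_gain = ditch_benefit - pig_value              # 1.9 utilons
deal_cost = solo_cost / 2                          # 1.0 utilons of work
deal_efficiency = deal_gain / deal_cost            # 1.9x

print(solo_efficiency, deal_efficiency)  # 1.5 1.9
```

So the deal beats digging alone as long as the side payment stays below the half-share of work it buys off (here, anything under 1.5 utilons of pig still leaves you ahead of the 1.5x solo ratio).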
If you have plenty of good work lined up, that's the better way to go. You'd only dig it yourself if you had a lot of free time.
Plus, if you offer them something you produce yourself, you can recoup part of the cost as profit -- i.e., the foreign-aid approach.
There is an interesting metaphor here of how B2B integrations are similar to a farmer's dilemma. Two B2B companies talk about doing an integration. Both will benefit if it is done, but one generally does more of the work.
Hmm... your "Farmer's Dilemma" description makes me want to call this a Free-Rider Problem, but that doesn't quite seem to be equivalent.
Maybe it's that you're contemplating the incentive structures underlying the creation of a public good, rather than the traditional Tragedy-of-the-Commons problem of maintaining one?
In the specific construct of this example, another option would be for the farmer who digs the ditch to alter its placement such that it doesn't provide a benefit to his neighbor.
Rather than run the ditch between the properties, he could run it down the far side of his own, or the center, for that matter.
> Maybe I have a bad back, and digging is more costly for me than for you. This may or may not change the Nash equilibria, and it may or may not change the amount of sympathy we each get in the various continuations.
And this is why money is nice. Because it enables the outcome with greatest total utility in this case: that the healthy guy digs the ditch by himself and gets a promise of future reward.