Now I understand why globalisation is dangerous: if people start to favor short-term relationships (both personal and business) with many different partners, meaning fewer rounds (interactions) per match (lifetime), then society will evolve to become full of cheaters.
Interestingly enough, seemingly to compensate for this phenomenon, almost every new platform that requires people to interact with each other has a built-in trust and reward system with ratings (e.g. Uber).
And for physical businesses, we have things like Yelp. We are effectively trying to anticipate our opponent's behavior through a trusted intermediary.
There was a Black Mirror episode on giving every person on the globe their own rating. It seems that may be the actual destination that we're headed for.
> There was a Black Mirror episode on giving every person on the globe their own rating. It seems that may be the actual destination that we're headed for.
"Nosedive".
One of the scariest things I've ever seen on screen. I joke about how realistic black mirror is but that episode was too much. Beyond uncanny. It was incredibly hard to watch and I hated every second of it. Awful. Cannot recommend it enough.
For one thing, people can build trust with the intent to abuse it after a longer period of time. An abuser, for example, could plan to offer 20 "good rides" on Uber before striking. For another, many people have learned to skillfully simulate trust, which may pose a problem for you. For instance, you may believe you are in a loving relationship, but all of a sudden you find out that the other person isn't interested in you at all and just didn't want to be alone, etc.
The book "Phishing for Phools" has a good chapter on "reputation mining", which is essentially that, with the added wrinkle that you can buy a company that has a good reputation and cut corners until the customers wise up. I'm looking at you, InBev.
A clear economic incentive for cheaters, or a way to give weight to mistakes? Does Yelp use an algorithm to weight ratings, or does it use the mechanism only as a source of income?
> Pluralitas non est ponenda sine necessitate ("Plurality should not be posited without necessity")
The economic incentive to encourage cheaters is the reason why the "trusted intermediary" should always be verified.
But someone wishing to get an advantage wouldn't just rely on that metric (IMDB, Yelp, Amazon ratings). There will be a mismatch between some personal metric and your own value system, and the difference is value you can reap.
There was a tremendous amount of social technology that had to be developed to allow people to regularly interact with strangers without trying to kill each other.
EDIT: Which is to say, a global community of 7 billion is absurd but not all that much more absurd than a city-state of 50 thousand.
There's an intriguing but hard to prove theory that some popular religions (Christianity, Buddhism) arose as cities became more populous, and in response to that. Part of your "social technology".
On that topic Harari's Homo Deus is a pretty interesting read. He argues that Humanism is the de facto "new" (2-3 centuries old) religion. Soon to be replaced by the celebration of something even more global -- data.
Our Data, Who art in the Cloud, hallowed be Thy Index, Thy Algocracy come, Thy Classification be done, in IoT as it is in the Cloud, lead us not into Underfitting, but deliver us from Anecdotal Reasoning. For Thine is the algocracy and the market share and the celebrity: of the Network, and of the Server, and of the Holy Algorithm, DateTime.Now() and ever, and unto ages of ages. Amen.
Maybe. I have Big Gods: How Religion Transformed Cooperation and Conflict on my reading list, but that seems to be more a part of later developments than of city-states. Before that came written laws, and before that came having states. I guess I'd recommend reading The World Until Yesterday followed by The Origins of Political Order.
But we know that's not true. In the case of Christianity, it arose in rural areas and was opposed by urban governments. Also, it did not spread in conjunction with urbanization but instead spread suddenly.
Interpersonal trust can be replaced with trust in legal frameworks, rules, and contracts. A well-functioning rule-based society is important for globalization.
Countries where personal relations are important for business suffer from a smaller number of opportunities, because forming personal trust takes time.
Sure, contracts and rules have their place, and are necessary in the absence of trust. But they do not substitute perfectly. A no-trust world of optimized rules and contracts will be lacking in important ways compared with a world with trust (to put it mildly).
Interestingly, I reached the opposite but complementary conclusion: if we end up forming many long-term relationships through cooperation and maintain them, we are all better off.
What I meant was that more interactions per relationship across a smaller number of relationships is better than fewer interactions per relationship across a larger number of relationships.
Basically, with globalization, social media, etc., the number of relationships has gone up but the quality and depth of each relationship has gone down. This makes it harder for the rational copycat personality type, who relies on empirical evidence to build trust.
It's not clear that globalization will have this effect, though. For example, a study "In search of homo economicus: behavioral experiments in 15 small-scale societies" found that individuals from cultures with more "market interaction" tended to make higher offers in an ultimatum game [http://authors.library.caltech.edu/11498/1/HENaer01.pdf].
(these studies are about market institutions, not explicitly globalization, but one could imagine that globalization could involve more market institutions; my point is just that globalization involves all sorts of factors, some of which are probably pro-cooperation and some of which are probably anti-cooperation, and it's not immediately clear whether the final outcome will be more cooperation or less)
Of course, this all breaks down when you have a cartel :) I recall that in one of the major prisoner's dilemma tournaments, the winning entry was actually a coalition of entrants who used the first few moves to establish a code for membership in the cartel, and then one entry was the master while the others were slaves. The slaves always cooperated and the master always defected, to the master's victory.
Indeed, the major weakness of the linked piece is that it doesn't account for any kind of social structure or any class behaviour. Which, in my book, makes it fairly worthless as an analysis of actual social behaviour.
Not worthless. This has some clear assumptions on what holds: he looks at individuals playing a certain game, with no outside information. What the parent describes is two games:
1. The prisoner's dilemma between each "slave" and the "master"
2. The actual tournament
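The handshake trick is easy to sketch in Python. Everything below (the code word, the payoff numbers, the round count) is made up for illustration; the actual tournament entries were more elaborate:

```python
# Iterated prisoner's dilemma with a hypothetical master/slave coalition.
# Assumed payoffs per round: both cooperate = 2 each, both cheat = 0 each,
# lone cheater gets 3 while the cooperator loses 1.
PAYOFF = {('C', 'C'): (2, 2), ('D', 'D'): (0, 0),
          ('C', 'D'): (-1, 3), ('D', 'C'): (3, -1)}

CODE = ['C', 'D', 'D', 'C']  # made-up secret opening handshake

def coalition_move(my_history, their_history, master):
    n = len(my_history)
    if n < len(CODE):
        return CODE[n]                      # transmit the handshake
    if their_history[:len(CODE)] == CODE:   # partner recognized
        return 'D' if master else 'C'       # master exploits, slave feeds it
    return 'D'                              # outsider: just defect

def play(strategy_a, strategy_b, rounds=20):
    ha, hb, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(ha, hb), strategy_b(hb, ha)
        pa, pb = PAYOFF[(a, b)]
        ha.append(a); hb.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

master = lambda mine, theirs: coalition_move(mine, theirs, master=True)
slave = lambda mine, theirs: coalition_move(mine, theirs, master=False)
print(play(master, slave, rounds=20))  # the master racks up points off its slave
```

With these numbers the master earns 52 points from a single slave in 20 rounds while the slave goes negative, which is exactly why per-agent analyses miss coalition play: the slave is "irrational" individually but rational as part of the cartel.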
Running 10 rounds with changed payoffs gives some really interesting results.
Punishing cheating from both sides, no matter how severe the punishment, still leads to the cheaters taking over.
But if you increase the payoff for both sides cooperating, even if it's just by +1, you eliminate all the cheaters very quickly, and even the copycats are gone pretty quickly, leaving a population of only Always Cooperate.
Sure, that's for the 10-round game without any of the other players; it's still very interesting how even in that setting it seems more beneficial to reward good behavior than to punish bad behavior.
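For anyone who wants to poke at the matchup arithmetic outside the site, here's a rough sketch. The payoff numbers are my assumption based on the article's coin game (cooperating costs 1 and gives the other player 3, so mutual cooperation nets 2 each by default); `cc` is the mutual-cooperation payoff, so `cc=3` models the "+1 for both cooperating" tweak:

```python
# Tiny iterated-PD matchup calculator with an adjustable
# mutual-cooperation payoff `cc` (payoff numbers are assumptions).
def payoff(a, b, cc):
    table = {('C', 'C'): (cc, cc), ('D', 'D'): (0, 0),
             ('C', 'D'): (-1, 3), ('D', 'C'): (3, -1)}
    return table[(a, b)]

def move(strategy, opp_last):
    if strategy == 'cooperator':
        return 'C'
    if strategy == 'cheater':
        return 'D'
    return opp_last or 'C'        # copycat: tit-for-tat, starts nice

def match(s1, s2, rounds=10, cc=2):
    last1 = last2 = None
    t1 = t2 = 0
    for _ in range(rounds):
        a, b = move(s1, last2), move(s2, last1)
        p, q = payoff(a, b, cc)
        t1, t2 = t1 + p, t2 + q
        last1, last2 = a, b
    return t1, t2

for cc in (2, 3):
    print(f"mutual-cooperation payoff = {cc}")
    for s1, s2 in [('cheater', 'cooperator'), ('cheater', 'copycat'),
                   ('copycat', 'copycat')]:
        print(f"  {s1} vs {s2}: {match(s1, s2, cc=cc)}")
```

Against a copycat, a cheater only ever wins the first round (3 points total, regardless of `cc`), while a cooperative pairing earns the full `cc` every round, so the +1 widens the gap between cooperative pairings and exploitative ones rather than just deterring defection.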
The execution of this visualization was rather disappointing.
I didn’t like the overly cute text (the description of the Simpleton algorithm was almost incomprehensible), the low-contrast captions and colorblind-unfriendly color scheme, and the limited navigation (there was no way to go to the previous slide within a chapter, for example).
But more importantly: If you are going to design an entire interactive exercise like this, graphs are a much better way to explore the effects of varying different parameters. Trying to experiment (as instructed) with different parameters by watching animations in the various chapters and the "sandbox mode" included in this simulation was not only tedious, but prevented effective comparisons. If you just run each iterated tournament (from chapter 4 and onwards) by pressing the "Start" button, there is too much going on simultaneously at a high speed to follow along – I would recommend a sorted table or bar chart rather than many multi-digit numbers arranged in a circle – while stepping through is too slow to keep everything in your head.
I noticed that some of the other "explorable explanations" by the same creator include graphs; I think omitting them from this visualization was a mistake. http://explorableexplanations.com/
"So, it seems the math of game theory is telling us something: that Copycat's philosophy, "Do unto others as you would have them do unto you", may be not just a moral truth, but also a mathematical truth."
I also found it interesting how the success of cheaters requires a limited number of interactions (so their opponents don't catch on that they are cheaters). Perhaps that's why certain occupations such as used car salesmen have a reputation for being sleazy -- most people are not buying cars very often and so don't get the chance to get to know an individual seller over the course of many transactions. So while you might know that the corner-store merchant is screwing you and end up avoiding him, the used car salesman has a steady stream of suckers who don't know him.
Used car sales has its direct antecedents in horse trading and dealing.
There was a famous-for-its-time story called David Harum (1899), made into a film in 1915 and a radio serial in the 1950s. Its principal legacy today is the use of the term "horse trading" to mean "underhanded dealing".
I'd run across it via H.L. Mencken's essay, "Bayard v. Lionheart", itself best recalled for its concluding sentence:
> As democracy is perfected, the office represents, more and more closely, the inner soul of the people. We move toward a lofty ideal. On some great and glorious day the plain folks of the land will reach their heart's desire at last, and the White House will be adorned by a downright moron.
... but containing oh so much more, in the foil of a critique of the 1920 U.S. Presidential Election. Really, read it:
Both used cars and horses have several characteristics in common:
* They are anti-commodities. That is, individual items for sale are highly non-uniform, complex, and not readily assessed.
* They are expensive. For the ordinary consumer, they are not purchases likely to be made frequently. (A horse and a car each have fairly equivalent useful working lives: about 3-10 years, depending on use and care afforded.)
* Sellers of quality instances are much inclined to stay away from the general market. Someone who is specifically aware of the qualities of what you're selling makes a far better customer.
This has other cognates. Software and consulting services come to mind.
The treatment in the economic literature is somewhat disappointing. There's Akerlof's "The Market for Lemons", of course (it won him a Nobel prize), but it's a generally underserved area of theory.
More notable to me is the premise of the simulation that it is OK to kill off the poor to make room for the rich, because using the simulation to justify the moral/ethical/rational acceptability of cheating sometimes means accepting that premise.
This isn't the premise of the simulation at all. It explicitly says that the agent replacement phase is an abstraction and could just as easily be interpreted as agents changing their strategies.
"Note: you don't have to wait for people to literally die & reproduce for culture to evolve -- all that's needed is that "unsuccessful" behaviors go away, and "successful" behaviors are imitated."
The premise of the replacement is that the poorest adopt the strategy of the richest, and that this is the right thing to do even if the change is from always cooperating to always cheating.
In some ways the description of replacement rather than death is a more explicit example of the premise underlying the simulation: that cheating is morally/ethically/rationally justifiable. The underlying moral/ethical theory behind the simulation is not even utilitarian (never mind deontological); it is purely Randian, where the justification of behavior is only what is in it for me.
In that sense it completely misses the point of the Christmas truce, which from a deontological perspective reflected moral/ethical principles such as the Golden Rule and "peace on earth, goodwill toward men". Even from a utilitarian perspective, there was the idea that the greatest happiness for the greatest number included the enemy in that number.
The individual benefits were a side effect that was only possible because of the higher level principle. And the individual benefits were always going to trend toward a short life. The shell with someone's name on it was still going to be lobbed on 26/12/14.
The "reward" is an unspecified scalar value. You are casting the terms of the model into "poor" and "rich". It's a mathematical model, not an ethical one.
'Cooperation' and 'cheating' and 'mistake' are not mathematical terms. They are part of moral/ethical/rational frameworks. The realm in which the simulation is supposed to provide insight is moral/ethical/rational. Moral/ethical/rational conclusions are the point of the website.
If the point is just mathematical, then limit interactions to one and cheating is the best strategy. The simulations assume autonomous decision making on the part of the agents; that's what provides insight into human behavior, and why considering the moral/ethical/rational premises of the simulation is relevant when evaluating what the simulation shows.
If you are complaining that the mathematical toys of Game Theory are inadequate for modeling human moral/ethical/rational behavior then I agree with you whole-heartedly.
I'm just insisting that we keep the math toy and the interpretation of the math toy on different "logical levels".
If the presentation of the models were an academic paper filled with equations, I would not find the insistence unreasonable. In this case, the context for The Evolution of Trust is alongside the Parable of the Polygons, and parables are not mathematics or game theory.
At the mathematical level, if there is nothing moral/ethical/rational to draw from the simulation, then what's the point? If it's just math, then why is it only the least successful agents that switch to the most successful strategy and why are they able to switch to the most successful strategy before it is clear that that strategy is the most successful? Going further why don't moderately unsuccessful agents switch strategies? And since mistakes are part of the simulation, why don't agents mistakenly switch strategies? Why does the simulation maintain a constant number of agents rather than varying based on outcome?
The reason is that the goal of the presentation is to encourage people to be more open to the possibility of mistakes before retaliation. The presentation is trying to appeal to people using mathematics. My criticism is that the price of the mathematical model is too high: justifying cheating because it is best for the individual.
I think I see what you're saying. The author is trying to use the Game Theory mathematical model to encourage a moral or ethical response, but the model itself is flawed for that purpose because it just as readily portrays a moral "poison", if you will: the idea that selfishness can justify cheating. Is that it?
> Despite strict orders not to chillax with the enemy, British
> and German soldiers left their trenches, crossed No Man's Land,
> and gathered to bury their dead, exchange gifts, and play games.
Is this actually true? I'd love to read more about it.
Essentially, the simpleton is treating your previous move as either a reward or punishment.
If you cooperated, then the simpleton thinks the thing it did last time must have been good and does it again. If you cheated, the simpleton thinks the thing it did last time must have been bad and does the opposite.
It follows these rules based on its last move even if that last move was flipped from what it should have been due to the chance of mistake.
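In the game-theory literature this rule is known as "win-stay, lose-shift" (or Pavlov). A minimal sketch, with 'C'/'D' labels assumed for cooperate/cheat:

```python
def simpleton(my_last, opp_last):
    """Win-stay, lose-shift (a.k.a. Pavlov).

    Repeat our previous move if the opponent cooperated (treated as a
    reward); switch to the opposite move if they cheated (treated as a
    punishment).
    """
    if my_last is None:                       # first round: start nice
        return 'C'
    if opp_last == 'C':
        return my_last                        # rewarded: stay
    return 'D' if my_last == 'C' else 'C'     # punished: shift

print(simpleton(None, None))  # 'C' -- opening move
print(simpleton('C', 'D'))    # 'D' -- punished, so switch
print(simpleton('D', 'C'))    # 'D' -- rewarded, so repeat
```

Note that the rule keys off our own recorded last move, so if noise flipped that move, the "lesson" the Simpleton draws is based on the flipped move, just as described above.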
The creator of this project has a really amazing talk about his work, including interactive explainers, narrative games and chaos theory. Really interesting: https://www.youtube.com/watch?v=Zl9m0AQInBk
This is super impressive. I have been asking myself these questions for years, it is awesome to see such a nice visualization of the answers. The "30 minutes to play" almost made me click away but you got me hooked at the next screen.
One extension I'd like to see to this simulation is the exploration of group/tribe dynamics, as a way of exploring questions regarding in-group loyalty (anthropomorphized as "ethnocentrism" or something analogous) vs. egalitarianism. It seems like much of the success of these agents can depend on whether Agent A can determine what sort of actor Agent B is -- communication that can be conveyed beyond just the framework of actual transactions between A and B.
Are there any existing generic frameworks out there for testing out a wider range of agent/group-based game theory scenarios?
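I don't know of a standard off-the-shelf framework for the group case (the open-source Axelrod-Python library runs tournaments between individual strategies), but a tag-based extension is easy to sketch. Everything below (the tags, the "ethnocentric"/"egalitarian" policies, the payoff numbers, the round count) is made up for illustration:

```python
import itertools

# Toy sketch of tag-based strategies: each agent carries a visible
# group tag, and an "ethnocentric" agent cooperates only with its own
# group, while an "egalitarian" agent cooperates with everyone.
PAYOFF = {('C', 'C'): (2, 2), ('D', 'D'): (0, 0),
          ('C', 'D'): (-1, 3), ('D', 'C'): (3, -1)}

class Agent:
    def __init__(self, tag, policy):
        self.tag = tag          # visible group membership
        self.policy = policy    # 'ethnocentric' or 'egalitarian'
        self.score = 0

    def move(self, other):
        if self.policy == 'egalitarian':
            return 'C'                                 # cooperate with all
        return 'C' if other.tag == self.tag else 'D'   # in-group only

agents = [Agent('red', 'ethnocentric'), Agent('red', 'ethnocentric'),
          Agent('blue', 'egalitarian'), Agent('blue', 'egalitarian')]

for a, b in itertools.combinations(agents, 2):
    for _ in range(10):                 # 10 rounds per pairing
        pa, pb = PAYOFF[(a.move(b), b.move(a))]
        a.score += pa; b.score += pb

for a in agents:
    print(a.tag, a.policy, a.score)
```

Even this crude version shows the dynamic you're asking about: the in-group cooperators exploit the unconditional cooperators across the group boundary, which is why the interesting experiments add exactly the kind of signaling and recognition mechanisms you describe.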
Another important missing factor here is the modeling of empathy and altruism. It can often simply feel good to help someone else, even if you get nothing tangible in return. This is a biological fact, as far as I'm aware.
Incredibly crude approximation: some situations that appear as [empathetic helper +0, receiver +3] on the surface, might in reality be [empathetic helper +1, receiver +3].
There is a ton of complexity in how this works in the real world. The magnitude or presence of the non-tangible reward may depend heavily on certain aspects of one's value system. It may be heavily contextual -- only certain situations foster the non-tangible reward. Sometimes there is a potential tangible reward that may come much later, after some weeks, months, or years. And so on... I think this is barely scratching the surface.
Lastly, none of the people or dynamics in the system are static.
The linked interactive tool was very cool. However, I'm increasingly skeptical of these types of analyses as they generally are forced to ignore most of the complexity of the real world.
EDIT: I completely forgot about technology... things like the internet which completely change the nature of an interaction between people, or apps that mediate or flavor in-person interactions in ways that society has never seen before.
We like to think we are complex, free-will wielding creatures capable of doing complex and unpredictable things.
It's striking to see how simple we really are and, honestly, how easy it is to explain how we behave. I get it: this is a simulation, and it leaves out a lot of the color of human behavior. But it captures the essence of our value-oriented decision making, and it's really quite sad that a better world is actually quite easy to make.
I think software has the potential to create that world. I propose the creation of a formal group dedicated to exactly such a purpose.
More reading on this: There is a paper by Robert Axelrod and W.D. Hamilton called "The Evolution of Cooperation". There is also a book by Axelrod of the same name.
The little animated chums strike me as a bit too cute.
One has to take care not to anthropomorphize models, even models ostensibly created to capture aspects of human behavior, such as game-theoretic models.
Anthropomorphism might induce people either to think there's more under the hood than there actually is, and thus lead them to hold a wrong mental representation of the model, or to naively assume that its conclusions are robust to specification changes by making the conclusions a bit too familiar, even though even somewhat simple model extensions might significantly change the model's behavior (e.g. going from single to repeated games, which makes cooperative strategies viable).
Comparing tit-for-tat strategies with the Golden Rule, while mixing in references to the Christmas truce, somehow seems to go in the opposite direction of being cautious regarding model interpretation.
> even though even somewhat simple model extensions might significantly change their behavior (e.g. going from single to repeated games, which makes cooperative strategies viable).
He explicitly examines this in the application, did you not run through it? He actively encourages you to modify the rules and see how it changes the results.
I ran through it. I didn't suggest he didn't cover it; I just used it as a trivial example, because other readers presumably also ran through it. Another commenter suggested communication between agents as an extension; I could have used that instead if I knew what results from it.