”After the game Kasparov shocked many people on the MSN forum, which was kept open after multiple requests, by announcing he had been reading the World Team strategy board during the game.”
⇒ one could argue this was “the World, including Kasparov versus the World, excluding Kasparov”
You miss the parent's point: White was playing with the rest of the world's analysis plus Kasparov's analysis; Black was playing with the rest of the world's analysis only.
Literally reading all the thoughts of your opponent is a little more than "cheating a bit". It is putting your opponent on your side of the table.
If you read the whole wiki, it’s pretty clear that the use of chess engines was not prohibited, and a team of engine users was suggesting moves. The World team also tried unsuccessfully to simplify to a position where tablebases could point out perfect play. It should be pretty clear that the analysis offered by the World team leaned heavily on external help, in this case engines and tablebases.
With all due respect to Bacrot, Felecan, Krush, and Paehtz, it is highly unlikely that, without the aid of engines, they would've had a close game against Kasparov. Bacrot is easily the most accomplished of these players, and by 2001 (the earliest available data) he was still "only" a 2600 player. In 1999, Kasparov was still a 2800 player at the peak of his powers. None of those four players would've stood a chance against Deep Junior, an engine used by both sides.
Had Kasparov gone out and purchased the best computer hardware at the time, and had it run analysis 24/7, would that have been cheating?
The article alludes to the fact that in 1996 Karpov crushed another World Team. I would argue that the development of competent chess engines during the period between the two matches is a key reason why the World team fared much better the second time.
Chess is a game of perfect information. Here, what Garry did can be best characterized as a “shortcut”. If he had run an engine 24/7, it would’ve produced moves better than what the World team played.
I've mixed feelings. At first I lost some respect for him when he said he cheated, but then I realized that openly conceding it is itself worthy of some respect.
I mean honestly, when it is not against the rules it is just good meta-gaming. At least when playing on this level. And the game was still very interesting!
Is it “reading their thoughts” if they’re openly communicating them? Would you consider it cheating if someone wrote down their ideas on paper during a game and didn’t stop you from reading the paper?
A lot of sports have a rule about unsportsmanlike behaviour to cover the infinite variety of ways a person can violate the spirit of the game where the other rules don't specifically apply. I would definitely say reading your opponent's strategies falls under that.
The entry reminds me how Microsoft had one of the better proto-social media platforms of the 2000s and a solid messenger app, and then managed to run both into the ground. RIP old blogosphere
This sort of proves that there is no such thing as "Wisdom of the Crowds". There seems to be a bit of a cult myth [1][2] about groups being able to "on average" outperform even the best individuals, but that seems to be based on some rather flimsy studies whose methodology has not been published. It seems more likely that in reality the data is cherry-picked: where it correlates with the "Wisdom of the Crowds" myth it gets published; where it doesn't, it gets discarded.
The most rigorous experiment I've seen performed tested a group of reddit users on estimating lines in an image [3]. Some expert individuals were far more accurate than the group average.
It's important to keep in context where the term "Wisdom of the Crowd" came from and where it succeeded.
"The classic wisdom-of-the-crowds finding involves point estimation of a continuous quantity. This has contributed to the insight in cognitive science that a crowd's individual judgments can be modeled as a probability distribution of responses with the median centered near the true value of the quantity to be estimated." [1]
Notice that the term says nothing about adversarial games or games that have moves.
A better model for those types of games is the experts' problem / multiplicative weights algorithm -- where the solution consists of a weighted plurality which is updated every turn. [2]
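For anyone who hasn't seen it, here is a minimal sketch of the multiplicative-weights idea under simple assumptions (binary advice, 0/1 loss, a fixed learning rate); the experts and data below are made up for illustration and are not taken from [2] or from the match:

    # Minimal multiplicative-weights / "experts" sketch: keep a weight per expert,
    # answer with the weighted plurality each round, and multiplicatively shrink
    # the weights of experts whose advice turned out to be wrong.
    def multiplicative_weights(expert_advice, outcomes, eta=0.5):
        """expert_advice: one list of per-expert predictions per round.
        outcomes: the true answer for each round."""
        weights = [1.0] * len(expert_advice[0])
        chosen = []
        for advice, truth in zip(expert_advice, outcomes):
            # Weighted plurality: pick the option with the most total weight behind it.
            totals = {}
            for w, a in zip(weights, advice):
                totals[a] = totals.get(a, 0.0) + w
            chosen.append(max(totals, key=totals.get))
            # Penalise the experts that were wrong this round.
            weights = [w * (1.0 - eta) if a != truth else w
                       for w, a in zip(weights, advice)]
        return chosen, weights

    # Toy usage: 3 experts, 4 rounds of 0/1 questions.
    advice = [[1, 0, 1], [0, 0, 1], [1, 1, 1], [0, 1, 0]]
    truth = [1, 0, 1, 0]
    print(multiplicative_weights(advice, truth))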
This sort of thing doesn't prove anything about "Wisdom of the Crowds", because the wording is too broad. But it is evidence that a crowd can, at least in principle, outperform the best individual. The worst mistake the crowd made was the openness of its discussion, which allowed Kasparov to read it. That gave him an advantage, and by his own words it was a crucial advantage in winning.
Chess is a good candidate for a hive-mind, because it is a computational task that can be parallelized. You can recruit tons of minds and have them solve different parts of the common task. There were some issues with managing the mind-resources, worst in the endgame, when many people lost their faith and started to discuss not the next move, but what was wrong with the hive-mind.
> The most rigorous experiment I've seen performed tested a group of reddit users on estimating lines in an image [3]. Some expert individuals were far more accurate than the group average.
I'm not convinced by this. There are a few objections.
1. There was no discussion between the elements of the hive-mind, so the abilities of the hive-mind remained unused.
2. Suspicious statistics.
a. The article compares estimates of individuals with estimates of the hive-mind. The hive-mind has one set of answers, while the top individual answers are selected with knowledge of the true answer. There is no example of an individual outperforming the hive-mind on all four data points at once.
b. There is no attempt to estimate the probability of getting the top individual answers by chance. Maybe those selected individuals are no better than the average person, just lucky?
3. Tasks differ. Different tasks need different data processing, and different data processing needs different processors. For example, there are tasks that benefit from parallelizing and tasks that do not. Here we come back to (1): if you want the hive-mind to outperform an individual, you need to find a way to apply the hive-mind's superpowers to the task at hand.
Wisdom of the crowds works when you're able to pool a large number of independent, unbiased estimates to produce a distribution. If the crowd communicates, it is no longer independent. This produces a distribution weighted towards the answers of the loudest or most persuasive fraction, which is probably going to be less accurate.
In fact, communication between crowd members in many cases significantly reduces the quality of the answers. This has been clearly demonstrated in studies of group brainstorming, where the number and quality of ideas is significantly better when group members come up with ideas independently and pool them at the end than when they come up with ideas together.
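A toy simulation makes the independence point concrete (the jar, crowd size, and error spreads below are made up; this is not data from any cited study):

    # Illustrative only: the mean of many independent, unbiased guesses is usually
    # far closer to the truth than a typical individual guess; a shared bias
    # (e.g. picked up through group discussion) erodes that advantage.
    import random

    random.seed(0)
    TRUE_VALUE = 850        # e.g. beans in a jar
    CROWD_SIZE = 1000
    INDIVIDUAL_SD = 200     # spread of individual guesses

    def crowd_error(shared_bias_sd):
        """Average absolute error of the crowd mean over many trials."""
        errors = []
        for _ in range(500):
            bias = random.gauss(0, shared_bias_sd)
            guesses = [TRUE_VALUE + bias + random.gauss(0, INDIVIDUAL_SD)
                       for _ in range(CROWD_SIZE)]
            errors.append(abs(sum(guesses) / CROWD_SIZE - TRUE_VALUE))
        return sum(errors) / len(errors)

    print("independent crowd mean:", round(crowd_error(0), 1))     # off by a few beans
    print("correlated crowd mean :", round(crowd_error(150), 1))   # off by ~100+ beans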
Proves?! It isn't even an anecdote in favor of your thesis. The World Team stayed with the world champion until move 54 (something proved 13 years later).
To this point, consider the failed startup Trada. (I worked there for a bit)
They had a marketplace for Google ad campaigns. Instead of running the campaigns themselves or using artificial intelligence, a pool of optimizers would work on a campaign, with Trada acting as the middleman between optimizers and advertisers.
> A classic demonstration of group intelligence is the jelly-beans-in-the-jar experiment, in which invariably the group’s estimate is superior to the vast majority of the individual guesses. When finance professor Jack Treynor ran the experiment in his class with a jar that held 850 beans, the group estimate was 871. Only one of the fifty-six people in the class made a better guess.
When advertising on Google, competition leads to higher prices. For example, the keyword "dui lawyer" costs a fortune because the rate of return on each click is so high. Theoretically, a creative individual could come up with a set of untapped, low-price keywords that could lead to an effective ad campaign at a lower price.
In practice the company had major problems. Incompetent optimizers would chew through a whole budget, leading to advertiser churn and a lot of intervention. Eventually everything becomes micro-managed, which isn't a sustainable business model. (There's a reason agencies target big ad campaigns... it's the only way to justify the amount of resources you have to invest.)
So there's a single data point where the wisdom of the crowds failed.
I'm not sure I understand how incompetent optimizers lead to an example of the wisdom of the crowds failing. It sounds like the failing you describe is the result of individuals. Is there something I'm not getting?
The optimizers were the crowd in this business model. Some of them made good decisions, others made bad. But since the amount of money available was limited, those bad decisions brought down the performance of the campaign and ultimately the advertiser gave up.
An expert would've made better decisions.
I think the wisdom of the crowd can sometimes be effective, but there are a lot of ways it can go wrong. Some things I was thinking of:
1. the crowd wasn't big enough
2. the time frame was too short to be effective
3. the heuristic was too complex or poorly tuned to lead to a good outcome
4. creative endeavors with high rates of failure lack enough signal to properly optimize (a strategy might look terrible till it suddenly works)
5. misaligned incentives undermine the goal
To illustrate the last case, in high school I had a teacher tell us that he would scale the test by the highest score. For example a 95 would give 100 and +5 to every other score in the class. With incentives like that, if everyone in the class answered no questions, everyone would get 100. (but then all it would take would be one individual to screw the whole class)
Crowds are complex, difficult to understand and hard to predict.
On "Wisdom of the Crowds": Group average on its own is no good, but some creative aggregation can be better than either member on its own (and of course better than naive averaging of any sorts). As a practical example, see for instance Metaculus. Excerpt from one comment[1]:
"Since Dec. 1 2018 (sic), the MP has been on the "right" side of 50% every single time, and (in a more meaningful measure) has a mean Brier of 0.048. (Though those are both about to be spoiled by the NASA-LISA question, probably.)" [2]
You can compare metaculus vs. community predictions at [2]. The corresponding community score would be 0.119 during the same time.
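(For anyone unfamiliar: the Brier score is just the mean squared difference between the forecast probability and the 0/1 outcome; lower is better, and always answering 0.5 scores 0.25. A quick sketch with made-up forecasts, not actual Metaculus data:)

    def brier(forecasts, outcomes):
        # Mean squared error between forecast probabilities and 0/1 outcomes.
        return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

    forecasts = [0.9, 0.2, 0.7, 0.95]   # predicted probabilities of "yes"
    outcomes  = [1,   0,   1,   1]      # what actually happened
    print(round(brier(forecasts, outcomes), 3))   # 0.036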
I'm not sure this is proving anything except that by developing an algorithm that filters out some predictions you can improve upon those predictions. And that is probably quite obvious given there are some users (presumably) on the site who are just trolling and fairly easy to identify and filter out. Unfortunately, I can't see the data on the Brier score of individuals.
Of course it does not prove anything, but identifying "wisdom of the crowds" only and exactly with "group average" isn't a particularly charitable reading. While public mass media thrives on simplifications, I had hoped that we could iterate on the basic premise.
Yet the wisdom of the crowd has become a successful machine-learning paradigm in the form of ensemble learning, where a family of weak learners is combined to produce a stronger learner.
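One way to see why that works is the same independence argument as the jelly-bean case: if each weak learner is right a bit more than half the time and their errors aren't too correlated, a majority vote is right far more often. A toy illustration (the 60% accuracy and the learner counts are arbitrary):

    # Illustrative only: majority vote over independent weak learners,
    # each correct with probability p_correct.
    import random

    random.seed(1)

    def majority_vote_accuracy(n_learners, p_correct, trials=10000):
        hits = 0
        for _ in range(trials):
            correct_votes = sum(random.random() < p_correct for _ in range(n_learners))
            if correct_votes > n_learners / 2:
                hits += 1
        return hits / trials

    for n in (1, 11, 101):
        print(n, "weak learners ->", round(majority_vote_accuracy(n, 0.6), 3))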
Read the comment by "Someone". The rest of the world was playing on both sides of the table, since Kasparov was reading their analysis during the game.
I tend to think democracy is less about finding some optimal solution and more about finding a solution enough people will be happy with that there isn’t revolution.
Yeah, democracy may have a limited best-case scenario compared to other forms of decision making, but in many senses it has one of the best worst-case scenarios. More centralized power may be able to get more done, but historically we've seen it often results in decision making that is either just plain bad (when leadership is clueless) or that only benefits those in power, to the detriment of everyone else (when leadership is malevolent). Democracy is slower and less efficient, but it does ensure that most people are in support of the decisions being made.
Democracy isn't about getting the best policy each and every time; its primary value lies in the fact that it creates a framework that facilitates the peaceful transition of power and avoids the otherwise frequent wars of succession, purges, coups, and whatnot.
Democracy is not about the decision, but about decision making. During the process, consensus is born and grows; it then generates a mandate and enough political capital and willingness for the subsequent execution.
That being said, democracy often leads to wishes that nobody could execute. After all, it's about what the participants want, not what they really need.
I don't think democracy is about increasing intelligence so much as resilience and escaping from failure modes. It provides a way to get rid of bad leadership, and it makes certain that huge issues are hard to ignore. In a pure autocracy a bad autocrat is fatal and there is no way to do anything about it but via extralegal means.
I'm not a great chess player by any means, but I was good as a child. I've recently found Kingcrusher's YouTube videos on various games really entertaining, e.g. he covers good GM matches or, recently, AlphaZero. His take on Kasparov vs the World: https://www.youtube.com/watch?v=LJyJCdU6Fh0
I've recently got back into chess playing again after decades away. Recently I stumbled upon Agadmator's channel on Youtube [0] where he gives fast, insightful, no nonsense breakdowns of famous and not so famous chess games. Love his deadpan style, and also how he goes beyond the actual game to analyse 'what if' scenarios or explains why a player resigned in some games.
This came up in the "Twitch Plays Go" discussion here on HN this morning. It's a terrific concept. And am definitely stealing it ;)
Highest vote count amongst N players decides move choice. Player votes can even be weighted by "success" based on past performance.
But rather than chess, which is a fully deterministic, perfect-information game, I'd really like to apply it to stochastic games and see how adept crowds can be at finding equilibria.
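The weighted-vote idea above is easy to prototype. In the sketch below, the weighting scheme (a single "success" score per player, with unknown players defaulting to 1) is just one arbitrary choice, and all the names and moves are made up:

    # Crowd move selection: each player votes for a move, and votes are weighted
    # by that player's past-performance score.
    from collections import defaultdict

    def choose_move(votes, success):
        """votes: {player: move}; success: {player: past-performance weight}.
        Returns the move with the largest total weight behind it."""
        totals = defaultdict(float)
        for player, move in votes.items():
            totals[move] += success.get(player, 1.0)   # unknown players get weight 1
        return max(totals, key=totals.get)

    votes = {"anna": "e2e4", "bo": "d2d4", "chen": "e2e4", "dee": "g1f3"}
    success = {"anna": 0.9, "bo": 2.5, "chen": 0.7, "dee": 1.1}
    print(choose_move(votes, success))   # "d2d4": bo's track record outweighs the two e4 voters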
Not a chess expert by any means, but I think ca. 1999 was probably a kind of sweet spot for this kind of match, because the internet was sufficiently mature for enough people to participate, but most participants would not have been able to run a grandmaster-level chess engine on their home PCs. I could imagine the use of chess engines would make such a "human" crowd much more powerful these days?
At the time I was involved in the Computer Chess Team and wrote the post-game summary for slashdot.org. I still like to analyse the game with the latest versions of Stockfish when they come out, with whatever EC2 hardware I can get hold of.
The problem with the Wikipedia article is the subjectivity of how the "?" and "!" annotations on the moves are handed out - sometimes they are quite wrong.
The clearest example in the article is the fact that 37... e6 loses and thus it deserves a double question mark, because 37... e5 draws. Anybody who downloads Stockfish and the six piece table bases can see that very quickly in 2018, but I suppose Wikipedia requires an "authoritative" published source on that before it can go into the article. Similarly on the next move, 38. Rd1 wins, whereas Kasparov's 38. h6 only draws. So that also deserves at least a question mark. (perhaps Chessbase should do an article on it)
Other dubious annotations are 18... f5 when 18... Bd4 is a clear draw (also in the Kasparov and King book) and perhaps 26... f4 when 26... Bc5 was better (in "Reinventing Discovery" the author writes about this move choice quite a bit).
I'm not saying it applies here, but there also is a psychological aspect to chess annotations. Annotators may mark a move as good even if they think, or even know, that better moves against optimal play exist. It isn't fair to expect human chess players to play optimally.
I'm no chess player, but that's a cool hobby you have to revisit the game. Throw an analysis up on Medium and post it here, or share it with a chess magazine, and maybe that will be authoritative enough for Wikipedia.
I'd watch if someone created this for Go, only the format is 'the world vs the world'. Have a timeframe for voting on the next move, then make the next most upvoted move, then repeat.
Maybe that would feel too much like playing on your own, and not seem very fun. If you could give each "player" an identifiable characteristic it might bring out some weird psychological effects. Either something kind of abstract like red v blue, or something more anthropomorphic like boy v girl.
There is a Babylon 5 episode about the conflict between the Drazi of the "green" and "purple" factions (2x03, coincidentally? one of the more important episodes for the overall story arc). Last week I was reminded of this while thinking about Ingress and its green vs. blue "conflict" (which is wonderfully designed such that taking the in-game fights into meatspace gains you nothing compared to settling them in-game).