Stockfish Wins Chess.com Computer Championship (chess.com)
225 points by wglb on Nov 18, 2017 | 85 comments

Unfortunately, Stockfish is not doing too well in the more recognized TCEC unofficial world computer chess championship at http://tcec.chessdom.com/live.php, due to lackluster performance against the weaker engines. With just over 3 rounds to go, it looks like it's out of the final:

  N Engine           Rtng  Pts  Gm     SB Ho   Ko   St   Fi   Ch   Gi   Bo   An  

 1 Houdini 6.02     3184 17.0  25 187.25 ···· =01  ===  ==1= 111= 1=1= 1=1  =1==
 2 Komodo 1959.00   3232 17.0  25 183.25 =10  ···· ==0  ==1= =1=1 11=  =1=1 1=11
 3 Stockfish 051117 3228 15.0  24 173.25 ===  ==1  ···· =1=  ==== =11= =1=  1===
 4 Fire 6.2         3112 13.5  25 152.25 ==0= ==0= =0=  ···· ==1  ===1 =1=  1==1
 5 Chiron 251017    3013  9.5  25 114.00 000= =0=0 ==== ==0  ···· ===  ==== === 
 6 Ginkgo 2.01      3052  9.5  25 110.00 0=0= 00=  =00= ===0 ===  ···· ==== 1== 
 7 Booot 6.2        3091  9.0  24 104.75 0=0  =0=0 =0=  =0=  ==== ==== ···· === 
 8 Andscacs 0.921   3100  8.5  25 107.25 =0== 0=00 0=== 0==0 ===  0==  ===  ····
It had better win its current game against Booot to keep some very slim chances. Which it now seems to be doing...

Worth noting that Stockfish is actually the only engine that hasn't lost a game. The lackluster performance is just it not beating the weaker engines, something Komodo and Houdini have done (though they have also lost against stronger engines).

So can we conclude that Stockfish just plays more "drawishly" than the other top-tier engines? Or is this effect just random chance so far?

This is true: Stockfish expects its opponent to play as well as itself (outside of the 'contempt' setting; I'm not sure if that was used or tuned per game). So it won't play edgy lines that invite blunders if they are even a centipawn worse than what it thinks is the 'correct' line.

Funnily enough, I feel there's a lesson for startup people in this. I have seen many people (myself included) talk themselves out of big, bold bets just because they are very good at thinking up reasons something should never work.

Oh man, now I have to add a “contempt” parameter to everything I make!

Scoring conditions in chess tournaments make forcing a draw more favorable than a loss, which likely leads programmers to weigh draws heavily over losses. Contrast this with Go engines: Go has no draws, so chess engines face a much stronger incentive to steer games that would otherwise end in a loss into a draw.

It might have something to do with the "contempt" setting used by the developers for this tournament.

I'm not an expert, but it may be the equivalent of a strong human defensive player, e.g. Petrosian (https://en.wikipedia.org/wiki/Tigran_Petrosian).

Although Stockfish beat Booot, it then tied against Komodo, giving

  Komodo 17.5
  Houdini 17
  Stockfish 16.5
It's quite bad for Stockfish now because each of these three top engines has one remaining match it's pretty likely to win (against Ginkgo, Booot, and Fire, respectively), which if it happens wouldn't change the relative ranking of the top three at all (they would be at 18.5, 18, and 17.5). Also, Stockfish has a match against Houdini, while Houdini and Komodo have a match against each other.

Stockfish can still in principle reach the final this way if Houdini and Komodo have a draw, because that would give

   Komodo 19.5
   Stockfish 18.5
   Houdini 18.0
assuming all three engines also win their respective remaining matches against non-top-3 engines, and Stockfish defeats Houdini.

Very tough for Stockfish at the moment!

Booot achieved a draw against Houdini and Ginkgo achieved a draw against Komodo! Therefore Stockfish's potential paths to the final have increased; for example, if Stockfish can beat Houdini and draw against Fire, while Komodo beats Houdini, then Stockfish would be in the final against Komodo. Or if Houdini defeats Komodo and Stockfish wins against both Houdini and Fire, Stockfish would be in the final against Houdini.

Edit: But now I'm not sure of the original comment's claim that only the top two engines from this round will go to the final; in previous years there were more rounds before the final, and I'm not sure how many of the rounds were eliminated this year.

People in the TCEC chat definitely seem to agree with the interpretation that only the top two will go to the final.

It's kind of down to the wire now: currently Komodo and Houdini lead Stockfish by exactly one point, while Komodo is playing Houdini in the final game for both of them. Stockfish has exactly one game remaining (against Fire).

In order to reach the final, Stockfish needs to win its game against Fire, and also Komodo and Houdini need not to draw. Then Stockfish would be tied with whichever of Komodo and Houdini loses the current game, and I guess there would be a tiebreaker prior to the final.

If Komodo and Houdini draw now, they're both guaranteed to face each other in the final. People in the TCEC chat are pointing out that if they were human players, they would face a strong incentive to deliberately play for a draw or to accept one if one were offered. But presumably the software has no idea of this!

And they did get their draw, so Stockfish is out of the final regardless of whether it wins its current match against Fire (which seems likely).

Edit: Stockfish did just defeat Fire so the top three are

  Houdini 18.5
  Komodo 18.5
  Stockfish 18.0
So close! But the final will be between Houdini and Komodo.

I suppose it depends on the format. It seems like in a knockout-style tournament Stockfish would surely win, as it's the only engine with zero losses. It looks like it might benefit from a more aggressive play style, in order to get more wins against weaker opponents.

There's interesting metagame theory here... It should get more aggressive whenever it's behind its goal rank in the tournament.

You can often trade variance for performance in competing strategies. The amount you're trading depends on your opponent's skill and strategy.

If an engine has a way to learn the skill of its opponent, that might be another way to determine the ideal level of aggressiveness too. Also if you're behind in a game, cranking variance is often your best shot.
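As a toy illustration of that last point (all probabilities here are invented for the example, not measured from any engine), consider a player trailing by a full point with one game left, so that only a win changes the standings:

```python
# Two hypothetical playing styles: "safe" maximizes expected score,
# "risky" sacrifices expected score for a much higher win probability.
safe  = {"win": 0.05, "draw": 0.90, "loss": 0.05}
risky = {"win": 0.40, "draw": 0.15, "loss": 0.45}

def expected_score(p):
    # Tournament scoring: win = 1, draw = 0.5, loss = 0
    return p["win"] + 0.5 * p["draw"]

for name, p in (("safe", safe), ("risky", risky)):
    print(f"{name:5s} EV={expected_score(p):.3f}  P(catch up)={p['win']:.2f}")
```

Despite a lower expected score (0.475 vs 0.500), the risky style is eight times more likely to salvage the tournament, which is exactly the "crank variance when behind" intuition.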

Do any engines metagame yet? That could make things crazy.

The rules of the tournament forbid it: it's the same program in all rounds, and current standings aren't an input.

So no engine can get lucky with a favourable order of opponents, either.

Even just with the input from the current game, the engine could get more aggressive by estimating the skill of its opponent.

With only one round left, Stockfish is hanging by a thread:

    N Engine           Rtng  Pts  Gm     SB Ho   Ko   St   Fi   Ch   Gi   An   Bo

    1 Houdini 6.02     3184 18.0  27 219.00 ···· =01  ==== ==1= 111= 1=1= =1== 1=1=
    2 Komodo 1959.00   3232 18.0  27 214.75 =10  ···· ==0= ==1= =1=1 11== 1=11 =1=1
    3 Stockfish 051117 3228 17.0  27 215.75 ==== ==1= ···· =1=  ==== =11= 1=== =1=1
    4 Fire 6.2         3112 15.0  27 180.50 ==0= ==0= =0=  ···· ==1= ===1 1==1 =1=1
It needs to beat Fire, while Komodo vs Houdini needs to be decisive. Fingers crossed...

Congratulations to Stockfish! The community is amazing, and the patches keep on flowing. The sheer number of ideas is pretty incredible. If you are interested in contributing, head over to http://tests.stockfishchess.org/tests. You can submit a test, and it will be run by the virtual cluster of user donated machines.

It's been over four years since I put fishtest up, and in that time, there have been over 20,000 tests submitted. The really cool thing is that this distributed testing framework is only possible with an open source engine. So instead of being a disadvantage (everyone can read your ideas), it turns into an advantage!

This is super cool! Are there any posts about how the fishtests work? I love reading about how people solve interesting testing problems.

There are a ton scattered around :).

Here is the announcement of fishtest on the talkchess forum: http://talkchess.com/forum/viewtopic.php?t=47885&highlight=s...

Initial discussion of the introduction of SPRT into fishtest, which led to a dramatic increase in our ability to measure improvements in self-play, in a statistically sound manner: https://groups.google.com/forum/?fromgroups=#!searchin/fishc...

SPRT background here: https://en.wikipedia.org/wiki/Sequential_probability_ratio_t...

Basically, we use a two-phase test to maximize testing resources. First a short time control test (15s/game), using more lenient SPRT termination criteria, then, a long time control (60s/game) test using more stringent criteria. That combined with setting the SPRT bounds to allow us to measure 2-3 ELO improvements has allowed the progress of Stockfish to be almost only improvements. Previously when developing an engine, you'd make 10 changes, and if you were lucky, 2 or 3 would be good enough to make up for the other bad or neutral ones.
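For readers unfamiliar with SPRT, here is a minimal sketch of the test as engine testers typically apply it, using the BayesElo-style trinomial model; the drawelo constant and the elo bounds below are illustrative choices, not fishtest's exact configuration:

```python
from math import log

def bayeselo_probs(elo, drawelo=240.0):
    """Win/draw/loss probabilities for a given Elo edge (BayesElo model)."""
    p_win = 1.0 / (1.0 + 10 ** ((drawelo - elo) / 400.0))
    p_loss = 1.0 / (1.0 + 10 ** ((drawelo + elo) / 400.0))
    return p_win, 1.0 - p_win - p_loss, p_loss

def sprt(wins, draws, losses, elo0, elo1, alpha=0.05, beta=0.05):
    """Return 'H1' (accept gain >= elo1), 'H0' (accept <= elo0), or 'continue'."""
    p0, p1 = bayeselo_probs(elo0), bayeselo_probs(elo1)
    # Log-likelihood ratio of the observed trinomial counts
    llr = sum(n * log(q / p) for n, p, q in zip((wins, draws, losses), p0, p1))
    lower, upper = log(beta / (1 - alpha)), log((1 - beta) / alpha)
    if llr >= upper:
        return "H1"
    if llr <= lower:
        return "H0"
    return "continue"

print(sprt(wins=420, draws=1000, losses=380, elo0=0, elo1=5))
```

The two-phase scheme described above would, roughly, run a test like this twice: once at the short time control with looser bounds, then at the long time control with tighter ones.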

If you look at the graphs on http://www.sp-cc.de/, you can see that it just keeps getting better, one small improvement at a time.

Incredible that the best engine (Stockfish) is now an open-source engine and the best server/app (Lichess) is also open source. Well done, chess community!

Lichess is blazing fast and has so many fun variants. I enjoy playing Chess960 on there and I am always impressed with the "training mode" for Standard chess, because the puzzles it throws at you are historical blunders from people actually playing games online!

Lichess is all of the above, with flying colors: very well-designed user experience, pleasing aesthetics, responsive, coded well, FOSS.

You cannot say that Stockfish is the best engine.

Komodo has the highest Elo rating. Houdini leads the more important http://tcec.chessdom.com/live_mobile.php championship format.

Why is Lichess better than chess.com? I've used both and can't really say one is the "best".

On Lichess there is a patron system for those who can/want to pay, but every feature (deep computer analysis, lessons, etc.) is free and ad-free. Chess.com's free tier is limited in a number of ways and displays ads. (Full disclosure: I am not the primary developer but do work on the Lichess app.)

Maybe this is not the right forum, but my lichess app has a bug where, if I am watching a chess game on it, the pieces sometimes disappear or change position. It's been going on for a while but seems to have gotten worse recently. It seems to be triggered by going to the watch-games tab, then switching apps and coming back.

Hmmm... I was aware of such a bug, related to moves effectively being replayed twice and resulting in nonsense positions, but I thought it was fixed a couple of months ago. Are you running the latest version of the app? If so, can you try to derive a set of reproducible steps to cause the bug and submit an issue at github.com/veloce/Lichobile? I'd love to eradicate these sorts of bugs.

I really wish I could customize Lichess to have an aesthetic similar to Chess.com's chessboard and pieces. I prefer Lichess, but I've found that Chess.com's aesthetic sounds and looks like an old classic board without being super cheesy, whereas the iconography on Lichess feels very stock.

The Lichess mobile app, at least, has themes that can make it as ugly as Chess.com, if that's what you're after. :)

On the desktop you can, I used Greasemonkey to completely customize the appearance and use the pieces from Fritz when I used it a while back.

Don't know if it'll still work but it did a year ago and it's a starting point.


Lichess is great for casual players, but for more serious play chess.com is preferable because of the stronger competition.

For serious play I'd say it is chess.com, playchess.com (the ChessBase parent), ICC, and only then Lichess.

In some UI aspects Lichess does feel nicer and lighter.

That's just the nature of the open-source model; the money is sorely lacking for promoting chess professionally.

Maybe some day the Patreon model will attract top GMs to Lichess, but for now they might check out Lichess and then leave immediately, as was the case with Wesley So earlier this year.

By comparison chess.com has the money to attract strong titled players to its online tournaments and even sponsor strong over the board tournaments such as the recent Isle of Man super Swiss.

in what fantasy world is chess.com more "serious" than the ICC? the one where attracting a couple super GMs to play promotional tournaments is more "serious" than having thousands of titled players online? not to mention the superior autopairing system.

serious: ICC, for the competition. casual: lichess, for a dramatically better UI and value than any other site.

Edit: the personal analytics alone make lichess superior to chess.com and other long-time inferior sites like chesscube.

playchess and chess.com are relics, though chess.com survives on promos and articles and such.

Why do you think chess.com is a relic? It seems to be doing better than ever.

You might be right that ICC is more serious but chess.com has been making great strides in providing nifty web and online apps.

chess.com sponsors online and regular OTB tourneys like the recent Isle of Man super Swiss, not sure what ICC does.

I feel that ICC is the one that is a relic from 1990s.

In all fairness, I had not played on ICC in 10 years, ever since they tried to blackmail me into ordering a membership lest the tens of thousands of games I had stored with them over 10 years vanish. (I started on ICC when Sleator split/took the code from the original ICS in the mid-1990s.)

I just checked ICC online presence and played a few games as a guest, there was 1 GM and one IM online.

Even lichess has more titled players than that.

My bias is that chess.com offered me a full membership as a lowly FM while ICC didn't offer anything but blackmail.

Again, I could be wrong, but my impression is that ICC is still resting on its laurels from the late 1990s.

Another note is that all the other chess sites (lichess, chess.com, playchess) have a notably more international feel.

> Why is Lichess better than chess.com?

Mostly because Lichess is written in Scala[1], but the engine probably helps a bit too ;-)

Love the tagline, "lichess.org: the forever free, adless and open source chess server"

[1] https://github.com/ornicar/lila

“One major facet of the tournament format was that there were no opening books”

Code and data are equivalent (for example, instead of an opening book that says “if you play White, open with 1.e4,” one could have an evaluation function that says “after White's first move, having a pawn on e4 is worth a million points”), so how did they enforce that rule?
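To make the equivalence concrete, here is a toy sketch (the data structures are invented for illustration) of the same opening knowledge expressed once as data and once as code:

```python
# As data: an explicit, bannable opening-book entry.
OPENING_BOOK = {"startpos": "e2e4"}

def evaluate(position, move_number):
    """Toy evaluation: position maps squares to White's material there."""
    score = sum(position.values())  # ordinary material terms
    # As code: a huge early bonus for a white pawn on e4 makes the search
    # "play the book move" without any book ever being consulted.
    if move_number == 1 and position.get("e4") == 1:
        score += 1_000_000
    return score

after_e4 = {"e4": 1}  # position after 1.e4
after_d4 = {"d4": 1}  # position after 1.d4
print(evaluate(after_e4, 1) > evaluate(after_d4, 1))  # True
```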

Hardcoding an opening would be considered playing with an opening book.

Regarding the enforcement of this rule: As all chess engines that competed are pretty well known (and sometimes even open source), I'm pretty sure that they (rightfully) trust the programmers not to do anything against the rules.

I wonder, if someone were to come up with a neural-network-based, AlphaGo-style engine that was competitive with these other top engines, how they'd deal with these sorts of rules. An "opening book" of sorts would likely be baked into the neural nets, unable to be turned on or off.

AlphaGo, for example, has definitely played well-known human openings (and innovated on them) despite having no opening book.

All these top chess engines definitely play well-known human openings and innovate on them, they just happen to be the best sequences of moves in certain situations (as far as we know)

The rule is against "explicit" hardcoded opening books, to force the engines to calculate the best move each time, to encourage variation and (I think) to allow the developers to focus on building stronger engines instead of managing huge opening books.

The "encourage variation" part didn't really succeed in this format, as they all tend to converge to the same openings.

It's sort of funny to consider that our opening books are the product of several hundred years of incredibly slow, inefficient Monte Carlo tree search done by humans playing over wooden boards. Seems odd to deny that to engines that can knock it all out in a weekend anyway.

This isn't true, though. There are a lot of openings where the humans turn out to be right anyway, even though the computer thinks it's found a marginally better move. Humans have a better intuitive understanding of how certain openings create endgame possibilities, and if I remember correctly the combination of a Super GM and an engine is still markedly stronger than an engine alone.

Engines have contributed substantially to modern opening books but they haven't supplanted the existing knowledge. Humans turned out to be wrong about many sharp lines (which were refuted by computer) and the computer can find really interesting ideas in many positions (which would be nearly impossible for a human to find) but the old human-approved Best Openings are still standing tall after the engine revolution.

> I remember correctly the combination of a Super GM and an engine is still markedly stronger than an engine alone.

This may have been true when engines were still only marginally stronger than humans, but I haven't seen any evidence that it is currently true. A few years ago Nakamura + Rybka (a previous best program) lost to Stockfish.

At the time Rybka was not one of the strongest engines anymore, and "correspondence chess" (human + computers) is still played.

The strongest players are not GMs, as far as I know, and a very important part of those games is trying to force positions where the opponent's engine might make a slight mistake.

Here is an interview with the world champion https://en.chessbase.com/post/better-than-an-engine-leonardo...

Do you know of any evidence that these players+engines can beat engines alone, instead of each other?

I can certainly accept that it'll always be the case that computer recommendations don't necessarily chime with human abilities and play style, and so accepting engine recommendations could be detrimental to human results. Beyond that, I'd be intrigued to see the numbers on the types of positions where engine evaluations diverge wildly from results if they're left to play out a position thousands of times. I heard this about the French Defence recently, for example, but without any real evidence.

Anyway, I was just making a slightly facetious point that all our existing openings came about by a slow, semi-random process in which people tried moves, played out the games, and then looked at the results.

I really disagree with the "random" part, though. Human openings need to at least have some "theory" around them to know what to do if your opponent diverges from the main lines.

Usually the "real evidence" that human choices are sometimes better than the machine's is that, after running for a long time, the engines come to agree with the human choices.

Keep in mind, this was a “Rapid” event with quicker than normal time controls. It was meant to be an exciting event for fans and to test the engines at quicker speeds that humans can both follow and enjoy. It certainly produced exciting games (in my opinion)!

Without wishing to denigrate the achievements of all the people involved in this tournament, nor the entertainment aspect I'd like to ask:

What is the statistical power of a 90-game round robin? Would this be a publishable result with p < 0.05 (or the new 0.005) against the null hypothesis that Stockfish and Houdini (2nd place) were of equal skill?

I don't know if you can really put a p-value on this result without a more specific null hypothesis, but anyway it looks like this tournament result provides extremely weak evidence that Stockfish is better than Houdini. In the round robin component, Stockfish and Houdini played two games against each other, each winning one and losing one. In the "superfinal", they had 15 draws, Stockfish won 3 games, and Houdini won 2 games.
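A quick back-of-the-envelope check of just how weak: treating each decisive superfinal game as a coin flip under the null hypothesis of equal strength (draws carry no sign information), a two-sided sign test on the 3-2 decisive score gives:

```python
from math import comb

w, l = 3, 2            # decisive superfinal games: Stockfish wins, losses
n = w + l
# P(at least w wins out of n fair coin flips), doubled for a two-sided test
p_one_sided = sum(comb(n, k) for k in range(w, n + 1)) / 2 ** n
p_two_sided = min(1.0, 2 * p_one_sided)
print(p_two_sided)  # 1.0: no evidence at all of a skill difference
```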

> That said, there was some serious computer science behind the event, as each engine played from a powerful Amazon Web Services computer.

Love the subtle product placement :D

I wonder how a machine learning approach like AlphaGo zero would do.

We actually had an engine which was based on neural networks, called Giraffe. It used an alpha-beta search with a neural network evaluation. The Computer Chess Rating List put it at 2500 Elo, which is a strong human level. It was very slow, searching thousands of positions per second compared to the millions that even weak programs can do, but it's widely agreed that the NNs were worth about 300-400 elo - if Giraffe could search at Stockfish speeds, and given the rule of thumb that a doubling of speed is worth 70 elo, that's 2500 + 70 * log2(3000000/3000) ≈ 3200 elo. That puts it in the top 20.
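Working through that rule-of-thumb extrapolation (the +70 Elo per speed doubling is a rough community heuristic, not a measured constant):

```python
from math import log2

base_elo = 2500        # Giraffe's measured strength
nps_slow = 3_000       # thousands of positions per second
nps_fast = 3_000_000   # millions, as in conventional engines
projected = base_elo + 70 * log2(nps_fast / nps_slow)
print(round(projected))  # 3198
```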

Sadly the developer was pinched by Google to work in DeepMind. We suspect this was to help work on AlphaGo.

Giraffe is open source.

The author, Matthew Lai, was not part of the AlphaGo team.

Current chess algorithms for evaluating board positions are already very fast. A deep learning version needs to be trained for a long time and might still not be as fast. Maybe one day, when training can be done faster and cheaper, there will be more interest in deep learning chess engines.

PS: The estimated hardware cost to train AlphaGo Zero is around $25M (2000 TPU over 40 days, or 1700 GPU years).

Just curious, who's "we"?

I was referring to the computer chess community as "we". My apologies.

The machine learning approach was applied to Go because there aren't any good heuristics known for the game of Go.

In chess, a DAMN GOOD heuristic is that Queens are worth 9 points, Rooks are worth 5, Bishops / Knights are worth 3, and Pawns are worth 1.

A few additional heuristics regarding "two bishops bonus", "passed pawn bonus", "Castled King bonus" and the like work really well in Chess.
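Those material values drop straight into code; a minimal sketch (counting pieces per side in a plain dict, a deliberately simplified board representation):

```python
PIECE_VALUES = {"Q": 9, "R": 5, "B": 3, "N": 3, "P": 1}

def material(pieces):
    """pieces: piece-letter -> count for one side, e.g. {"Q": 1, "P": 8}."""
    return sum(PIECE_VALUES[p] * n for p, n in pieces.items())

def material_balance(white, black):
    """Positive means White is ahead on material."""
    return material(white) - material(black)

# White has a rook against a bishop ("up the exchange"): +2
print(material_balance({"R": 1}, {"B": 1}))  # 2
```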


There's no equivalent to that in Go, so the best researchers could do was throw machine learning at it and hope for the best. EDIT: I'm well aware of Joseki (aka Openings) and Tesuji (Go "Tactics"). But Tesuji are not very easy to program.

The alternative is maybe to apply Monte Carlo tree search to Chess instead of Go. All Chess games end (due to the 50-move draw rule). But Chess is incredibly tactical, and it's far more important to exhaustively see all possibilities 3 or 4 steps ahead than to try to figure out how the endgame's position might look.

So Alpha-Beta pruning is likely a way better algorithm for Chess. A Machine Learning algorithm would have to figure out how to beat the current state of hand-programmed heuristics, which is not only incredibly efficient (due to bitboards and other optimizations), but also incredibly powerful.
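For reference, the core of alpha-beta pruning fits in a few lines; this toy version searches a hand-built game tree rather than real chess positions:

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta cutoffs over a nested-list game tree;
    leaves are static evaluations from the maximizer's point of view."""
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:   # beta cutoff: opponent avoids this line
                break
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:       # alpha cutoff
            break
    return best

tree = [[3, 5], [2, [9, 1]], [0, 7]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # 3
```

The cutoffs never change the result relative to plain minimax; they just skip subtrees (like the `[9, 1]` branch here) that provably cannot affect the root's choice.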

AlphaGo Zero has beaten all previous Go engines without examining any external games. I don't see any reason to believe Chess is a special game where humans understand it better (for writing AIs) than computers can do "on their own". They are both turn-based deterministic games where the only asymmetry is which player gets the first turn.

> without examining any external games

That's only a big deal in Go circles, where Go computers have only been super-human for less than 2 years.

Chess AIs are well known to have been super-human for decades now. Chess AIs are able to build the Chess game database backwards (i.e., tablebases) AND forwards (opening databases).

There's very little human knowledge involved in a 6-move Chess Tablebase: https://en.wikipedia.org/wiki/Endgame_tablebase . Indeed, computer-generation of Tablebases have REFUTED human knowledge, and modern Tablebases are way stronger than the entirety of chess knowledge that existed before computer analysis.

Building Chess AIs "without human knowledge" has been going on for a long time now. The thing is, it's very difficult to beat the speed of an assembly-language-tuned count of bishop or rook moves.

IE: The Magic Multiply / Bitboard method:



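Since the link didn't make it into the thread: the magic-bitboard trick precomputes, per square, a table of sliding-piece attacks indexed by `((occupancy & mask) * magic) >> (64 - bits)`, where the magic constants are found offline by search. The slow reference computation that such a table replaces looks roughly like this sketch:

```python
def rook_attacks(sq, occ):
    """Reference rook-attack generator: sq is 0..63 (a1 = 0), occ is the
    occupancy bitboard. Magic bitboards precompute exactly this function
    into a lookup table, turning the loops into one multiply and shift."""
    attacks = 0
    r, f = divmod(sq, 8)
    for dr, df in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nf = r + dr, f + df
        while 0 <= nr < 8 and 0 <= nf < 8:
            s = nr * 8 + nf
            attacks |= 1 << s
            if occ & (1 << s):  # first blocker: its square is attacked, then stop
                break
            nr, nf = nr + dr, nf + df
    return attacks

# A rook on a1 of an empty board attacks the whole a-file and first rank:
print(bin(rook_attacks(0, 0)).count("1"))  # 14
```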
The Chess AI game is played very differently from the Go AI game. In Go AI, "superhuman" programming techniques are still in its infancy.

In Chess AIs, the game is mostly settling down upon speed, speed, SPEED. In many cases, a weaker heuristic that sees +1 or +2 moves into the future (due to being a little bit faster) is far better than a stronger heuristic that takes a bit longer to calculate.

As such, a huge amount of Chess AI effort is in optimizing routines at the assembly level. Or perhaps data-structures that allow Opening Databases and Endgame Tablebases to be more efficiently utilized.

In contrast, a lot of Go AI effort is in representing search trees... but not so much in speed or optimization right now.

Lots of the parameters of the evaluation function are (at least in part) automatically tuned; at https://chessprogramming.wikispaces.com/Automated+Tuning you can find some of the most successful methods.

I see no reason the two approaches couldn't be combined - use machine learning to identify new patterns and weights to help evaluate positions, but compile them into efficient operations to use them in existing engines.

There's no easy way to turn a set of 10-layer Neural Network weights into faster optimized assembly language.

You can optimize the Neural Network so that maybe it runs faster (using NN specific hardware like maybe TensorCores or whatever), but I don't think anyone has any way to "generalize" a Neural Network into faster code.

I think that on typical desktop hardware, there's little doubt that its cycles are much better spent on alpha beta search (with all its modern enhancements like late move reduction) and hand built evaluation function than on evaluating neural nets. If, however, your hardware were to include a few tensor processing units, then the answer is less clear (but I'd still bet on alpha beta).

AlphaGo Zero is really about the MCTS not neural nets.

More specifically it's about driving MCTS with neural nets rather than pure random sampling as prior strong (but quite far from pro level) go bots were. Of course before AlphaGo was released there were papers on a pure deep net version of a go bot that beat Pachi and Fuego (both MCTS), not sure if it beat CrazyStone of the time.

I think it's really hard to parallelize alpha-beta search; there might be a point where other approaches become more useful.

Parallelising A-B has been done, but even the state-of-the-art approaches, like Lazy SMP (where there is no synchronisation between threads, and effort is made to disperse threads only at the root), YBWC (where only the first node is synchronised, and the rest are parallelised if it doesn't produce a cut) and DTS (where the threads choose their own work points), produce sub-linear scaling.

I think research needs to be done on algorithms that are designed from the start to be parallel, but I'm not smart enough to do that.
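For what it's worth, the Lazy SMP idea mentioned above is simple enough to sketch; in this toy version (with a stand-in evaluation instead of a real search) the only shared state is the transposition table:

```python
import threading

tt = {}                  # shared transposition table: position -> (depth, score)
tt_lock = threading.Lock()

def static_eval(position):
    # Stand-in for a full alpha-beta search from this position.
    return sum(map(ord, position)) % 1000

def search(position, depth):
    with tt_lock:
        hit = tt.get(position)
    if hit is not None and hit[0] >= depth:
        return hit[1]    # a sibling thread already searched at least this deep
    score = static_eval(position)
    with tt_lock:
        tt[position] = (depth, score)
    return score

def lazy_smp(position, depth, n_helpers=3):
    # Helpers run the same search at staggered depths; no other coordination.
    helpers = [threading.Thread(target=search, args=(position, depth + i % 2))
               for i in range(1, n_helpers + 1)]
    for t in helpers:
        t.start()
    result = search(position, depth)   # only the main thread's answer is used
    for t in helpers:
        t.join()
    return result

print(lazy_smp("startpos", 4))
```

Duplicated work between threads is simply tolerated; table hits written by one thread still speed up the others, which is why the scaling is sub-linear but the implementation stays trivial.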

I think the correct way to go with this is possibly to use deep learning to find better patterns and weightings for position evaluation, but then incorporate that into one of the existing (fast) engines. The throughput really matters, but I don't think anyone would claim that (outside of tablebases) current evaluators are anywhere near perfected.

If you're into chess, I'd encourage you to check out the last of three annotated games in that article (Stockfish v Houdini). Stockfish mates on move 174. Game would have been declared a draw by human players by move 40.

When I read your comment, I thought the reason Stockfish won was because the computer found an innovative tactic that a human would not see.

Watching the game, it looks like if there was a human player instead of the Houdini computer, the human would be easily able to force the game into a draw instead of a loss.

Nonetheless, as an intermediate chess player, the games were fun to watch because of how different the play style computers can sometimes have versus humans.

That's because it WAS a draw if Black played correctly: by blocking all the pawns from advancing and only occasionally moving the rook back if attacked by the bishop (although it would probably have remained a draw even if Black had given up the exchange). There was no way for White to get in there.

According to the commentators Houdini had more impressive wins and the game that decided the tournament (the third game shown in this article) was supposed to be a draw but somehow got blundered away due to the contempt rule.

Basically it was very close and Houdini may actually be slightly better, especially at the abstract heuristic stuff. Although Houdini creates good positions for itself, Stockfish excels at brute force and somehow always manages to turn things around as it transitions into the endgame.

Is it only this code and nothing else?


Is there no DB or machine learning inside?

It's actually https://github.com/official-stockfish/Stockfish/tree/master

No, SF doesn't use machine learning, it's entirely heuristic.

Doesn't it use a database? From README:

> If the engine is searching a position that is not in the tablebases (e.g. a position with 7 pieces)

and it speaks a lot about tablebases. What are those?

Okay, I'll concede that one, my apologies. A tablebase is essentially a precomputed endgame solution for all positions with that many pieces. Likewise, it has an opening book, which is a database that contains moves for the first few turns of the game.

It's still a heuristic searcher when it's not using those though.

Where did the good old Fritz go?

Fritz the engine hasn't been very strong for a few years; Fritz the UI (which also lets you use other engines) is still the best thing out there for learning/play.

They seem to have focussed more on making the Fritz engine a good tool for human analysis rather than the strongest possible engine.

Not sure what the new stuff is like; the last one I bought was Fritz 12, some time ago.

Houdini is still my favorite engine though. It plays much more interestingly than Stockfish, which plays like a, well, a stockfish.

$1000 prize money? Can't they crowd source something more than that?

These people are all skilled programmers doing high speed analytics. They aren’t doing it for the money.

Good to know I’ve been using the best to destroy my opponents on chess.com

Isn't that cheating?

Yes, and I've already reported him to chess.com. :)

Only sucks for the people that played them on their way to a 3000 rating, I guess.


Personal attacks are not OK on Hacker News. Could you please take a look at the guidelines?

