Hacker News

Another set of games against an outdated Stockfish which appears to make moves that a recent Stockfish at any reasonable depth disagrees with. I've no doubt at all that AlphaZero has a much stronger evaluation algorithm than Stockfish, but I do wish they'd be a bit more transparent about its actual strength (although presumably they're selling access to it right now if you connect all the dots).

The paper used the latest AlphaZero and Stockfish available at the time of writing. It had been in peer review for almost a year.

Well, let me be more blunt: there is zero chance they're playing fair here.

FWIW: Accusations of bad faith with no backup are not generally good commentary.

As someone said, it was in peer review for a year. That means it could not have been compared against Stockfish 9 or 10; they had not been released yet. As someone above points out, they used in-development Stockfish versions as well (two weeks before SF9 was released), and from what I can tell, they used the newest version they could have.

If you have some good data that means you can make this statement, can you please cite it?

Otherwise, can you please not make claims of bad faith?

The strength of the language I used was motivated largely by the obviously bad-faith approach employed when they _first_ announced AlphaZero a year ago. They wanted to be able to say they'd created the strongest engine in the world, and they created an environment where it was hard to fail. I will admit the setup used in this paper seems more reasonable on that basis.

That said, I've checked out a few old versions of Stockfish (including the exact commit they used) and analysed the games in Table S6 in the paper. Stockfish still spots multiple blunders in its own play. Obviously these things aren't entirely deterministic, but it seems unlikely there was time trouble.

And again, just to be clear, there's little doubt in my mind that AlphaZero's evaluation of any given position is better than Stockfish's or anyone else's. I'm not even saying it can't reliably beat Stockfish. I just find it sad that the evidence of its overall strength continues to be wobbly.

Each program got 1 minute per move in the original match. Did it find the blunders within 1 minute?

If you really feel like their original setup handicapped stockfish, it seems like the best way to know would be to have that setup play your preferred setup and see what the difference is.

Are you a chess engine developer by any chance?

Yes, at a very amateurish level, and I contribute a lot of CPU/GPU time to LCZero. But I've also played all the published games through Stockfish at the published time controls, and the results make no sense.

Do you often make professionally slanderous statements in public forums?

To be specific, in a written comment on a public forum, it would be "libelous" and not "slanderous". But to be either, it would also have to be false, and at least in the US, known to be false to the author at the time it was written or written with a reckless disregard for the truth.

He says he's analyzed the published games, and found that Stockfish finds obvious errors in the way that Stockfish was said to be playing. Unless he's consciously lying about this (I'd guess unlikely?) it's not libel. And if it's true that a correctly configured Stockfish doesn't play the way it was said to play, this would be a strong indication that either intentionally or accidentally, it was not a fair playing field.

Anyway, if your point was that it might be more productive to be more polite, sure! But one might say the same about accusing someone of slander.

AlphaGo/AlphaZero's success has been a major driver of the recent AI hype; its actual strength is of no concern compared to the money to be made.

Why would they be selling access to it? No one actually cares about computer chess other than in the chess world. It's a hobby.

Computer chess is a huge part of modern professional chess; someone like Magnus Carlsen would probably pay for access to the best chess engine available.

The first games won are chess and Go; the last games won will be for the survival of the human species!
