The "fawn pawn" naming comes from fans of kingscrusher on youtube who analyzes Leela Chess Zero games. https://www.youtube.com/user/kingscrusher
His accent makes his "thorn pawn" sound like "fawn pawn", and thus the name was born.
Here is a link to shirts he sells with "fawn pawn" on them. https://teespring.com/fawn-pawn?tsmac=store&tsmic=kingscrush...
Nevertheless, the performance of AlphaZero was impressive; in particular, the positional knowledge it has acquired is second to none. In all existing chess engines, positional knowledge is under-represented through simple heuristics. Acquiring positional knowledge was a longtime dream of chess programmers for many generations. The dream was to create an engine that plays a more human-like style of chess. AlphaZero has realized this dream and even goes beyond it: extending human knowledge of chess.
I believe the most intriguing question right now is why AlphaZero stopped improving after 9 hours of training. Is it due to an inherent property of chess, or due to the limits of ANNs? If it's the latter, how can we break through and create a new generation of engines that can surpass even AlphaZero?
The opening book and Syzygy tablebases were enabled, so we're seeing Stockfish go at full power here. The one remaining problem is that Stockfish's scaling across many cores isn't very good, but there's not much that the admins of the test can do about that.
This test seems fair IMO.
"Stockfish was configured according to its 2016 TCEC world championship superfinal settings: 44 threads on 44 cores (two 2.2GHz Intel Xeon Broadwell CPUs with 22 cores), a hash size of 32GB, syzygy endgame tablebases, at 3 hour time controls with 15 additional seconds per move"
Since repetition is one way to get a draw, an engine with positive contempt (it assumes the opponent to be weaker than itself) will score repetitive moves lower and is more likely to pick something else.
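To make that concrete, here is a toy sketch (not Stockfish's actual code) of how a contempt value shifts the score of a draw. The function name and centipawn convention are illustrative assumptions:

```python
# Toy illustration of contempt: scores are in centipawns from the
# side to move's perspective.

DRAW_SCORE = 0

def draw_value(contempt_cp):
    """With positive contempt, a draw is scored as a slight loss,
    so repetitions look worse than playing on."""
    return DRAW_SCORE - contempt_cp

# With contempt = 20 centipawns, a repetition draw scores -20, so any
# continuation evaluated above -20 centipawns is preferred over repeating.
print(draw_value(20))
print(draw_value(0))  # zero contempt: draws scored at face value
```

So the higher the contempt, the more positions the engine considers "good enough to keep playing" rather than accepting the repetition.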
The previous statement was talking about how much faster and more efficient AlphaZero is, but the interpretation I pick up from that sentence is the opposite. Is this a "golf score" situation where lower is better?
IIRC, AlphaZero has two outputs from the neural network. You described the first output.
The 2nd output was absolutely critical to its growth in strength. In effect, this 2nd output value is the key difference between AlphaGo and AlphaZero. The 2nd output value guides the Monte Carlo tree search.
Naive MCTS looks at board positions randomly. AlphaZero's MCTS looks at board positions the neural network deems "interesting". In effect, the neural network both guides the search (output #2), and evaluates the position (output #1).
MCTS chooses a position based on its "interesting factor" as well as how many times that position has already been evaluated. Ex: if "Knight to c3" has been evaluated 1 million times, MCTS will try to look at other positions. But if the neural network says "Knight to c3 is really, really interesting", MCTS will still favor looking at that position over others.
Etc. etc. down the hierarchy of moves.
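The balancing act described above can be sketched as a PUCT-style selection rule. This is a hypothetical illustration, not AlphaZero's actual code; the names (`Node`, `prior`, `c_puct`) and the example numbers are assumptions:

```python
import math

class Node:
    """One candidate move in the search tree."""
    def __init__(self, prior):
        self.prior = prior        # "interesting factor" from the policy head (output #2)
        self.visits = 0           # how many times this move has been explored
        self.value_sum = 0.0      # accumulated evaluations from the value head (output #1)

    def mean_value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(children, c_puct=1.5):
    """Balance average value, network prior, and visit count.

    A heavily visited move gets a smaller exploration bonus, but a high
    prior ("really interesting" per the network) keeps its bonus large.
    """
    total_visits = sum(ch.visits for ch in children.values())
    def puct(move):
        ch = children[move]
        exploration = c_puct * ch.prior * math.sqrt(total_visits + 1) / (1 + ch.visits)
        return ch.mean_value() + exploration
    return max(children, key=puct)

# Example: "Nc3" has a high prior and a good average value, so it is
# still selected despite already having far more visits than "a3".
children = {"Nc3": Node(0.6), "a3": Node(0.05)}
children["Nc3"].visits, children["Nc3"].value_sum = 1000, 550.0
children["a3"].visits, children["a3"].value_sum = 10, 2.0
print(select_child(children))
```

The exploration term shrinks as a move's visit count grows, which is exactly the "look elsewhere after a million evaluations, unless the network insists" behavior described above.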
With deep search trees, as in chess or even more so in Go, a search guided by a pretrained evaluator function easily beats a naive tree search. Applied AI.
a probably somewhat wrong but evocative way to think about it is that what AlphaZero does is closer to “intuition”; it can look at a board and immediately tell very well how good it is. maybe you could argue this is more like how human grandmasters play.
Stockfish, on the other hand, has much simpler guesses for how to evaluate a board, but they can be computed very quickly.
Both of them employ a tree search: "if I do this move then they'll have three countermoves and I can respond in 16 ways...". (AlphaZero's is Monte Carlo because it samples rather than exploring exhaustively; Stockfish uses deterministic alpha-beta search.) Stockfish is able to explore many more possible outcomes, whereas AlphaZero explores far fewer but is able to focus only on those that really are promising.
tldr: Higher is better.
+155 -6 =839.
That's 839 draws to 155 wins and 6 losses.
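For scale, here is the back-of-the-envelope Elo difference implied by that score, using the standard logistic Elo model (a rough sketch; real rating calculations account for error bars and opening pairings):

```python
import math

# Score and implied Elo difference from +155 -6 =839.
wins, losses, draws = 155, 6, 839
games = wins + losses + draws
score = (wins + 0.5 * draws) / games         # fraction of points won
elo_diff = -400 * math.log10(1 / score - 1)  # logistic Elo model
print(round(score, 4), round(elo_diff, 1))   # roughly a +50 Elo edge
```

Despite the lopsided 155-to-6 win ratio, the mountain of draws keeps the implied rating gap fairly modest.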
It might be an interesting question what the appropriate bet would be for win vs draw in this case, and how this would change with greater training. Presumably the more you train both sides the more likely they are to draw?
Also interesting would be to quantify the effect a small hardware handicap has, and how this trades off with training. Is more training always better than more hardware? Vice versa?
Then I would bet on a draw.
I feel the comparison is a bit unfair...
Plus it has all of the knowledge from past human/computer chess research, experimentation and tuning that's been done in other chess engines since the 70s helping it.
Stockfish would still be an extremely strong engine even without the training. AlphaZero couldn't even move a single piece without having been trained extensively.