I'm personally a bit more interested in computer chess, but I find it fascinating how engines are used for preparation in human chess.
You might think that engines make opening preparation trivial: just play what the engine recommends. However, that misses a critical point. To simplify a bit, engines (whether neural network or classical engines) always play with the implicit assumption that they play against the best response of an equally strong opponent. But in human chess, the strategy is not just to make objectively good moves, but also to give your opponent a chance to make mistakes.
For example, Stockfish's search doesn't distinguish between a move where the opponent has an obvious answer, and a move where you need very deep calculation to find the correct response. Mathematically, both moves are equally good. Strong human players would, of course, always prefer the line that's more "difficult" to play for the opponent.
Therefore, in opening preparation, the goal is to find moves that are both:
* objectively "good enough" (that is, they don't immediately lose if the opponent happens to know or find a good answer)
* difficult/confusing enough for a world championship contender to give you at least the chance to gain an advantage
Finding such lines with engine help is, to my understanding, an important part of the work of seconds. Not only does that take a lot of time, but you also need to be very good yourself to make realistic judgements about what a top player might or might not find over the board.
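For illustration, the first criterion is the part that's easy to script. Here's a minimal sketch, assuming python-chess and a local Stockfish binary on PATH (the depth and the 0.5-pawn tolerance are made-up parameters, not anyone's actual workflow). The second criterion -- practical difficulty -- is exactly the part that still needs a strong human:

```python
# Hypothetical sketch: filter moves that stay "good enough" relative to
# the engine's top choice. Assumes python-chess and a Stockfish binary
# on PATH; depth and tolerance are illustrative.
import chess
import chess.engine

ENGINE_PATH = "stockfish"
GOOD_ENOUGH_CP = 50  # stay within ~0.5 pawns of the best move
LIMIT = chess.engine.Limit(depth=22)

def good_enough_candidates(fen: str, multipv: int = 5):
    """Return (move, score) pairs close to the engine's top choice."""
    board = chess.Board(fen)
    with chess.engine.SimpleEngine.popen_uci(ENGINE_PATH) as engine:
        infos = engine.analyse(board, LIMIT, multipv=multipv)
    # Scores are from the side to move's point of view, in centipawns.
    best = infos[0]["score"].relative.score(mate_score=100_000)
    return [
        (board.san(info["pv"][0]),
         info["score"].relative.score(mate_score=100_000))
        for info in infos
        if best - info["score"].relative.score(mate_score=100_000) <= GOOD_ENOUGH_CP
    ]

if __name__ == "__main__":
    # Starting position, purely as a placeholder.
    print(good_enough_candidates(chess.STARTING_FEN))
```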
> To simplify a bit, engines (whether neural network or classical engines) always play with the implicit assumption that they play against the best response of an equally strong opponent.
This is a bit of a simplification! Neural net engines generally use Monte Carlo Tree Search rather than a pure minimax/alpha-beta search. MCTS rewards moves that lead to wins across all explored variations, not just under best play from each player.
Your point is still true though: computers still aren't able to reliably find "this will be the most testing line in a practical game against a human opponent". They're too far ahead of us: the MCTS leads to better results against computers of similar strength, but doesn't accurately predict where humans will go wrong. Humans (with computer help) are still better at that.
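To make the backup difference concrete: under minimax, a single refutation kills a move outright; under MCTS-style averaging, a refutation the search hasn't focused on yet only dilutes the move's value. A toy sketch (nothing like real engine code, just the arithmetic):

```python
# Toy numbers: how the two backup rules score the same move for us,
# given the values of the opponent's explored replies (1.0 = we win).

def minimax_backup(reply_values):
    # The opponent picks their best reply, i.e. our worst outcome.
    return min(reply_values)

def mcts_backup(reply_values, visit_counts):
    # Visit-weighted average over all explored replies.
    total = sum(visit_counts)
    return sum(v * n for v, n in zip(reply_values, visit_counts)) / total

replies = [1.0, 1.0, 0.0, 1.0]   # one quiet refutation among wins
visits  = [10, 10, 40, 10]

print(minimax_backup(replies))       # 0.0 -- refuted under best play
print(mcts_backup(replies, visits))  # ~0.43 -- still looks tempting
```

(In a real search the refutation attracts more and more visits, so the average converges toward the minimax value; the distinction matters most in under-explored lines.)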
I considered clarifying that, but it's a bit subtle.
First of all, neural networks are typically trained from (millions of) self-play games. So you have the "equally strong opponent" assumption right there in the weights of the network.
Second, when actually playing, the variations that are explored with high probability in MCTS are (recursively) informed by the same neural networks that are applied at the root. This means that there's no extra reward for moves that lead to complications in lines that only a "stupid human" would play.
In effect, neither alpha-beta search nor Monte Carlo tree search has any concept of "tricking" a weaker opponent, admittedly for very different reasons, and more probabilistically in the MCTS case.
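For the curious, the selection rule in AlphaZero-style engines looks roughly like the following (a simplified PUCT sketch, not Leela's actual code). The point from above is visible in the formula: the prior P comes from the same network at every node, so lines the network considers implausible -- including the ones only a "stupid human" would play -- get little exploration anywhere in the tree:

```python
import math

def puct_select(q, prior, visits, c_puct=1.5):
    """Pick the child maximizing Q + c * P * sqrt(N_total) / (1 + n).

    q:      mean value of each child so far
    prior:  the network's policy probability for each child
    visits: visit count of each child
    """
    total = sum(visits)
    scores = [
        q[i] + c_puct * prior[i] * math.sqrt(total) / (1 + visits[i])
        for i in range(len(q))
    ]
    return scores.index(max(scores))
```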
That's what I meant by "the implicit assumption that they play against the best response of an equally strong opponent" -- yes, it's a simplification, but directionally true, I think.
If you win, yes. But perhaps you can find two moves that both draw against best play (overwhelmingly likely in most chess positions you will prepare, though even computers aren't good enough to tell you this for certain). The important thing then is to pick the one that wins against less good play, or perhaps the one with which you won't lose when you inevitably play less than perfectly yourself.
I've been wondering how much they automate that kind of search. Does the team of seconds now include some programmers taking Stockfish and altering it to search for lines where you always have multiple options but your opponent is on a tight path, for example? That kind of work may be easier to hide than the grandmaster seconds, who are drawn from a very small pool of possibilities.
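I have no inside knowledge of what the teams actually do, but for a first approximation you wouldn't even need to alter Stockfish: MultiPV already exposes enough. A hypothetical sketch of such a "tight path" metric (python-chess again; the depth and position are placeholders):

```python
import chess
import chess.engine

ENGINE_PATH = "stockfish"            # assumed to be on PATH
LIMIT = chess.engine.Limit(depth=20)  # illustrative depth

def reply_tightness(board, move, engine):
    """Centipawn gap between the opponent's two best replies to `move`.

    A large gap means the opponent is on a tight path; a small gap
    means they have multiple playable options.
    """
    board.push(move)
    try:
        infos = engine.analyse(board, LIMIT, multipv=2)
        if len(infos) < 2:
            return 10_000  # only one legal reply: maximally tight
        best = infos[0]["score"].relative.score(mate_score=100_000)
        second = infos[1]["score"].relative.score(mate_score=100_000)
        return best - second
    finally:
        board.pop()

with chess.engine.SimpleEngine.popen_uci(ENGINE_PATH) as engine:
    board = chess.Board()  # placeholder position
    moves = list(board.legal_moves)[:5]
    ranked = sorted(moves, reverse=True,
                    key=lambda m: reply_tightness(board, m, engine))
    print([board.san(m) for m in ranked])
```

You'd still want to combine that with a "good enough for us" filter, and it says nothing about whether a human will actually find the tight path -- which is presumably why the judgement calls stay with the grandmaster seconds.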
> the strategy is not just to make objectively good moves, but also to give your opponent a chance to make mistakes.
That's not true in classical chess: making a sub-par move in classical in order to throw off your opponent is a losing strategy.
That's why rapid and bullet formats are much more interesting: those moves are viable there. In those formats, aggressive play is considered advantageous. In classical, you will just bounce off a hard wall of defence and come out at a disadvantage.
The current games are being described as the most accurate games in the history of classical chess. Those guys are like machines in this time format.
I said "not just", not "not". It's a matter of degree. Picking a top three engine move instead of the top one, when it's only 0.01 worse, isn't a losing strategy. Even at this level, and especially if it takes a human opponent a lot of time to counter the move accurately. And from the analyses I've seen, that's precisely what seems to be happening in this world championship.
Hmmm... I am not sure the two concepts really square.
The thing is, they don't see the eval table, and when calculating candidate moves they don't evaluate positions in 0.1 increments. So they cannot be sure whether their weaker move will result in a 0.1 or a 1.0 advantage down the line.
Maybe you are referring to playing off-book continuations of known openings? Those kinds of moves fit your comment.
Though those moves are heavily prepared before the games, and any advantage there is the result of pre-game strategy rather than a tactical decision to throw off/confuse the opponent.
Anyway, I think we are splitting hairs at this stage.
We’re talking about the same thing I think: the context was pre-game preparation, where the teams of the contenders can (and according to every top GM commentary I’ve seen, extensively do) use engines to look for “off book” but good moves that pose some problems.
> They need to be such excellent chess players that they can still possibly teach something to someone like Carlsen, which makes finding one the chess equivalent of hiring a physics tutor for Albert Einstein.
Chessbase, mostly. This part is pretty much a solved problem: there just aren't that many games for any given player (thousands).
For a world championship match, players have so much time to prepare and put so much focus on it that preparing for your opponent's previous openings is not as important - he's not that likely to stick with the same repertoire for this match. There are one or two exceptions: one wonders if France's Maxime Vachier-Lagrave would have stuck with his rather narrow repertoire, especially with Black, if he'd managed to reach this match.
>For a world championship match, players have so much time to prepare and put so much focus on it that preparing for your opponent's previous openings is not as important - he's not that likely to stick with the same repertoire for this match
To emphasize this, Nepo got a draw with black yesterday playing the Petroff, an opening that, according to the commentators, he had never before played in a competitive match.
Asked again whether he was surprised by Nepomniachtchi’s Petrov, Carlsen re-states that it was one of the openings he was well-prepared for. “It was one of the main openings that I expected seeing that he played it in the Candidates and also, in the first black game, he went for a more classical approach rather than a sharp one,” he says. “So it was very much expected. Couldn’t know obviously which exact Petrov line he was going to go for, but the Petrov in itself was very much expected.”
Carlsen's memory was a bit better than the commentators': Nepo's win with the Petroff against Wang Hao in the Candidates was literally the game that sealed his qualification to this match.