You are correct, but you are playing the game. You see and process the possibilities based on your understanding of the chess ruleset.
With brute-force machine learning you are simply trying X possibilities until something sticks and gives a high percentage of win states. That's different from playing the game using knowledge of the ruleset, even though, most of the time, the end result is the same.
This is what killed AI research in the 80s: that moment when everyone collectively saw they were simply working on a more powerful culled brute force (a pruned tree, as you call it) when they all thought it was true AI.
True AI is hard. The required computational resources are immense even for something simple. Take a Bishop on a chess board. How would you teach an AI the rule that the Bishop moves only diagonally? It must first understand what it is looking at, then what "diagonally" means, then what "diagonally" means in this particular context. All with nodes of pattern matches and an input stream.
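To make the contrast concrete: hand-coding the Bishop's ruleset is trivial, because "diagonally" collapses to four direction vectors. This is a minimal sketch of that symbolic encoding (the function name and board representation are my own, for illustration); the hard problem described above is getting a learner to *discover* this rule from raw input, not to execute it.

```python
def bishop_moves(row, col, size=8):
    """Return all squares a Bishop on (row, col) can reach on an empty board."""
    moves = []
    # "Diagonally" hand-coded as four (dr, dc) direction vectors,
    # each stepped until the edge of the board.
    for dr, dc in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
        r, c = row + dr, col + dc
        while 0 <= r < size and 0 <= c < size:
            moves.append((r, c))
            r, c = r + dr, c + dc
    return moves
```

A corner Bishop reaches 7 squares, a central one 13; the whole ruleset fits in a dozen lines once a human has already decided what "diagonal" means.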
I feel these young guns are falling into the same trap of calling machine learning AI, without the benefit of the experience an older researcher would have, having been through this situation before.
Actually, teaching AlphaGo the rules was easy. And what you call brute force is in fact intuition-based search. It learns to guess by intuition (the policy net) which moves to try, and to give up on the bad ones (the value net). It's far from brute search, and that's why AlphaGo is so much better than the other Go software.
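The difference from brute force can be sketched in a few lines. This toy search (not AlphaGo's actual algorithm, which is Monte Carlo tree search; the `policy` and `value` callables here are hypothetical stand-ins for the networks) expands only the few moves the policy ranks highest and lets the value function judge the leaves, instead of enumerating every line:

```python
def guided_search(state, policy, value, depth, width=3):
    """Best estimated value reachable from `state`, exploring only
    the `width` moves the policy considers most promising.

    policy(state) -> list of (move, prior) pairs
    value(state)  -> scalar estimate of how good `state` is
    """
    if depth == 0:
        return value(state)  # "intuition" judges the leaf, no rollout to the end
    # Policy-net role: keep only the top-`width` candidates by prior,
    # so the branching factor stays tiny regardless of the real game's.
    candidates = sorted(policy(state), key=lambda mp: -mp[1])[:width]
    best = float("-inf")
    for move, _prior in candidates:
        child = state + (move,)  # toy state: the tuple of moves played so far
        best = max(best, guided_search(child, policy, value, depth - 1, width))
    return best
```

With a branching factor of b and k moves kept per node, the tree shrinks from b^depth to k^depth positions, which is the whole point: the networks replace exhaustive enumeration with learned selectivity.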