Hats off to you Mr. Feynman. Your output may have been finite, but its effect is limitless.
Intriguingly, Turing posed the question, "Could one make a machine to play chess, and to improve its play, game by game, profiting from its experience?" This reinforcement learning approach to chess did not enjoy much success--until AlphaZero. That story has been well told in many places, but perhaps best by David Silver in this recently released lecture from DeepMind: https://www.youtube.com/watch?v=ld28AU7DDB4. The first ~40 mins are a lucid explanation of the classical methods, and the rest covers RL/MCTS/AlphaZero.
Don't forget Samuel's computer checkers program from 1959. It was among the world's first successful self-learning programs.
It is a bit more nuanced. There are heuristics (applied during breadth- and depth-limited searches) that assign positional values, plus opening and endgame database lookups. Counting remaining material is the most basic form of chess evaluation. If that's all Stockfish did, most chess players would beat it.
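To make the "most basic form" concrete, here is a minimal sketch of material-only evaluation. The classic centipawn values (P=100, N=320, B=330, R=500, Q=900) are a common convention, but the piece-count representation here is a hypothetical simplification, just enough to show the idea:

```python
# Classic centipawn values -- one common convention, not the only one.
PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}

def material_eval(white_pieces, black_pieces):
    """Score in centipawns from White's point of view.

    Each side is a dict mapping piece letters to counts -- a toy
    representation; a real engine evaluates an actual board.
    """
    white = sum(PIECE_VALUES[p] * n for p, n in white_pieces.items())
    black = sum(PIECE_VALUES[p] * n for p, n in black_pieces.items())
    return white - black

# Starting position: material is equal, so the score is 0.
start = {"P": 8, "N": 2, "B": 2, "R": 2, "Q": 1}
print(material_eval(start, start))  # 0

# White is up a knight, down a pawn: 320 - 100 = +220.
print(material_eval({"P": 7, "N": 2, "B": 2, "R": 2, "Q": 1},
                    {"P": 8, "N": 1, "B": 2, "R": 2, "Q": 1}))  # 220
```

The point of the comment stands: this scores every legal move in the starting position identically, so on its own it gives no play at all.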
I built a very simple chess engine for my AI class. I started off with the basic "material values". Then added basic heuristics. Then added database lookups.
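A typical example of the "basic heuristics" step is a piece-square table, which rewards pieces for standing on good squares. The table below is an illustrative sketch with assumed (untuned) bonus values for White pawns, not the exact numbers any particular engine uses:

```python
# Illustrative piece-square table for White pawns, in centipawns.
# Row 0 is White's back rank; central, advanced pawns score higher.
PAWN_TABLE = [
    [ 0,  0,  0,  0,  0,  0,  0,  0],
    [ 5, 10, 10, -20, -20, 10, 10,  5],
    [ 5, -5, -10,  0,  0, -10, -5,  5],
    [ 0,  0,  0,  20,  20,  0,  0,  0],
    [ 5,  5, 10,  25,  25, 10,  5,  5],
    [10, 10, 20,  30,  30, 20, 10, 10],
    [50, 50, 50,  50,  50, 50, 50, 50],
    [ 0,  0,  0,   0,   0,  0,  0,  0],
]

def pawn_bonus(squares):
    """Sum the positional bonus for (rank, file) pawn coordinates."""
    return sum(PAWN_TABLE[r][f] for r, f in squares)

# A pawn pushed to e4 (rank index 3, file index 4) now scores better
# than one sitting on a2 -- unlike pure material counting, which
# sees both positions as identical.
print(pawn_bonus([(3, 4)]))  # 20
print(pawn_bonus([(1, 0)]))  # 5
```

Adding this bonus to the material score is what lets even a toy engine prefer developing moves over shuffling pieces on the back rank.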
Now with neural networks and machine learning, chess engines are even more sophisticated.