Board Game Engines Are About Trees, Not Evaluation Functions
Posted on Mon 31 July 2023 in Technology
Lately, I've been flirting with the idea of writing a game engine for Brazilian Draughts, which I plan to build around alpha-beta pruning. Since I might be spending a fair amount of time on tree searches, I want to write about something I've noticed for a while: whenever people talk about game engines for perfect information board games, they focus almost exclusively on the evaluation function, which makes it seem like the evaluation function is the most important part. AlphaZero is a good example of this: most articles focused on the neural network while completely ignoring how it worked with the Monte Carlo Tree Search, which is the important part. In this article, I want to explain why the evaluation function isn't as important as the tree search, and why you might sometimes even want to weaken your evaluation function.
For a perfect information board game, the optimal algorithm is a brute-force search of the entire game tree. This perfectly solves every single position, but it requires far too much time and memory to be practical for anything but the smallest games. That means we have to approximate the brute-force search, which we do by creating an evaluation function: a cheap estimate of a position's value that stands in for searching beneath it. That's why focusing too much on the evaluation function is misguided: it's just there to help approximate the exhaustive tree search, not actually tell you what your next move should be.
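To make this concrete, here's a minimal brute-force negamax solver for Tic-Tac-Toe (my own illustrative sketch, not from any real engine). Because it searches all the way to the end of the game, the only "evaluation" it needs is the terminal win/loss/draw check:

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board, as index triples into a 9-char string.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, player):
    """Exact game value for `player` to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if "." not in board:
        return 0  # full board, no winner: draw
    other = "O" if player == "X" else "X"
    best = -1
    for i, cell in enumerate(board):
        if cell == ".":
            child = board[:i] + player + board[i + 1:]
            best = max(best, -solve(child, other))
            if best == 1:
                break  # can't do better than a forced win
    return best

# Tic-Tac-Toe is a draw under perfect play from the empty board:
print(solve("." * 9, "X"))  # -> 0
```

For a game this small, exhaustive search is feasible and the "evaluation function" is three lines long; the whole point of the article is what happens when the tree is too big for this.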
When somebody creates a game engine without remembering that the tree search is the priority, they sometimes end up over-engineering the evaluation function to be as strong as possible, at the cost of making it very expensive to call. Since the evaluation function isn't there to tell you what the best move is, this focus on power over cost can actually make your engine weaker. You obviously don't want a bad evaluation function, but every bit of cost you add to it means less of the game tree you can afford to search. You can get around that by using a tree search that doesn't require the evaluation to be called as many times, like Monte Carlo Tree Search, but you still need to stay focused on the tree search instead of the evaluation function.
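Here is where that cost shows up in practice: in a depth-limited alpha-beta search, the evaluation function is called once per leaf, so its cost is multiplied by the number of leaves. The `CountdownGame` toy below (players alternately take 1 or 2 stones; taking the last stone wins) and the `Game`-style interface are my own naming for this sketch, not anything standard:

```python
import math

class CountdownGame:
    """Toy game: take 1 or 2 stones per turn; taking the last stone wins."""
    def is_terminal(self, n):
        return n == 0
    def evaluate(self, n):
        # Terminal positions are exact: the player to move has no stones left,
        # so they already lost. A non-terminal cutoff returns a neutral 0.
        return -1 if n == 0 else 0
    def legal_moves(self, n):
        return [m for m in (1, 2) if m <= n]
    def apply(self, n, m):
        return n - m

def alphabeta(game, state, depth, alpha=-math.inf, beta=math.inf):
    """Depth-limited negamax with alpha-beta pruning, from the mover's view."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)  # called once per leaf: its cost dominates
    value = -math.inf
    for move in game.legal_moves(state):
        value = max(value, -alphabeta(game, game.apply(state, move),
                                      depth - 1, -beta, -alpha))
        alpha = max(alpha, value)
        if alpha >= beta:
            break  # cutoff: the opponent already has a better option elsewhere
    return value

print(alphabeta(CountdownGame(), 3, depth=10))  # -> -1 (3 stones is a loss)
print(alphabeta(CountdownGame(), 4, depth=10))  # -> 1
```

Make `evaluate` ten times slower and, at a fixed time budget, you search roughly one or two fewer plies at typical branching factors; whether that trade pays off is exactly the question the paragraph above raises.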
Another way of looking at this is that the further into the game tree you can look, the cheaper you can make your evaluation function. In a brute-force search, you only have to check whether a finished game is won, lost, or drawn. The shallower your search is, the more the evaluation function has to do to compensate for the lack of depth. The catch is that positions are more complicated the earlier in the game you are. That's because every piece (or empty space on the board, if you're playing Go or Tic-Tac-Toe) is a degree of freedom, and the number of pieces (or empty spaces) is highest at the beginning of the game. That means your evaluation function has to become exponentially more powerful (and therefore exponentially more expensive) the shallower your search is, unless you can figure out how to solve the game mathematically. Going to the extreme and trying to replace the tree search entirely with an evaluation function gives you the same problem of requiring too much time and memory as the brute-force approach.
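The exponential trade between depth and evaluation cost can be put in rough numbers. A full-width search with branching factor b evaluates about b^d leaves at depth d, so making the evaluation k times more expensive costs you about log_b(k) plies at a fixed budget. A back-of-the-envelope sketch (the numbers here are illustrative, not measurements):

```python
import math

def affordable_depth(budget_evals, branching_factor):
    """Deepest full-width search whose ~b**d leaf count fits in the budget."""
    return int(math.log(budget_evals, branching_factor))

# Budget of a billion evaluations, branching factor ~30 (chess-like):
print(affordable_depth(1e9, 30))         # -> 6 plies with a cheap evaluation
print(affordable_depth(1e9 / 1000, 30))  # -> 4 plies if evaluation is 1000x slower
```

So a thousandfold more expensive evaluation only costs about two plies here, which is why expensive evaluations *can* win, but only if the tree search is built around them; alpha-beta's pruning and move ordering shift these numbers further in depth's favor.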
The temptation here (and the reason I'm writing this article for myself just as much as for everybody else) is that there are a lot of ways to make strong but expensive evaluation functions. If you have the resources, it wouldn't be difficult to feed millions of sample games to a big neural network and use that as the evaluation function, but that would only weaken your engine unless you pair it with a tree search designed for expensive evaluation functions. Those resources would be much better spent optimizing the tree search, for example by creating endgame tablebases and transposition tables, which reduce the number of lines that need to be evaluated.
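A transposition table is just a cache keyed by position, so lines that transpose into the same position are searched once instead of once per path. A minimal sketch of the idea, reusing the take-1-or-2 countdown toy (my own example; real tables use hashing, bounded memory, and depth-aware entries):

```python
def negamax(n, table, counter):
    """Exact negamax on the subtract-{1,2} countdown game.
    `table` (a dict or None) caches each position's value; `counter`
    is a one-element list tallying how many nodes were visited."""
    counter[0] += 1
    if table is not None and n in table:
        return table[n]  # transposition hit: reuse the earlier result
    if n == 0:
        value = -1  # player to move has no stones left: they lost
    else:
        value = max(-negamax(n - m, table, counter)
                    for m in (1, 2) if m <= n)
    if table is not None:
        table[n] = value
    return value

plain, cached = [0], [0]
negamax(20, None, plain)   # no table: node count grows Fibonacci-fast
negamax(20, {}, cached)    # with table: each position is computed once
print(plain[0], cached[0])
```

In this toy, the table collapses tens of thousands of visits into a few dozen without touching the evaluation at all, which is the kind of win the paragraph above is pointing at: the same positions get reached through many move orders, and caching them prunes whole subtrees for free.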