MCTS, or Monte Carlo Tree Search, is a method for making optimal decisions in problems with many possible outcomes. It is a heuristic search technique that combines the generality of stochastic simulation with the precision of tree search.

The algorithm involves four major steps, repeated over many iterations. A search tree is built, node by node, according to the outcomes of the simulated playouts.

(Each node of the search tree stores two important pieces of information: an estimated value based on simulation results and the number of times it has been visited.)

  1. Selection: Starting at the root node R, recursively select optimal child nodes until a leaf node L is reached. This means choosing, at each level, the child that maximizes some quantity, analogous to the multi-armed bandit problem in which a player must choose the slot machine (bandit) that maximizes their estimated reward on each turn. (A code sketch covering all four steps follows this list.)
  2. Expansion: If L is not a terminal node (i.e. it does not end the game), create one or more child nodes and select one of them, C. In practice, an unvisited child position is chosen at random and a new node recording its statistics is added to the tree.
  3. Simulation: Run a simulated playout from C until a result is achieved. This is a typical Monte Carlo simulation: either purely random or guided by simple weighting heuristics for a light playout, or by more computationally expensive heuristics and evaluation functions for a heavy playout. For games with a low branching factor, a light playout can already give good results.
  4. Back-propagation: When the playout reaches the end of the game, update the nodes along the current move sequence with the simulation result. Every position visited during the playout has its visit count incremented, and if the player for that position won the playout, its win count is incremented as well.
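The sketch below puts the four steps together in Python. It is a minimal illustration rather than a reference implementation: it assumes a hypothetical `game` object with methods `legal_moves(state)`, `next_state(state, move)`, `is_terminal(state)`, `player_to_move(state)` and `winner(state)` (returning the winning player or `None` for a draw), none of which appear in the original text.

```python
import math
import random

EXPLORATION = math.sqrt(2)  # common choice for the UCB1 exploration constant


class Node:
    """One position in the search tree with its visit/win statistics."""

    def __init__(self, state, player, parent=None, move=None):
        self.state = state        # game position stored at this node
        self.player = player      # player who made the move leading to this position
        self.parent = parent
        self.move = move          # move that led here from the parent
        self.children = []
        self.untried = None       # legal moves not yet expanded (filled lazily)
        self.visits = 0           # number of times this node has been visited
        self.wins = 0.0           # playout wins from this node's player's perspective

    def ucb1(self):
        # Quantity maximized during Selection: average reward + exploration bonus.
        return (self.wins / self.visits
                + EXPLORATION * math.sqrt(math.log(self.parent.visits) / self.visits))


def select(node):
    # 1. Selection: descend through fully expanded nodes, always taking the
    #    child with the highest UCB1 score, until a leaf is reached.
    while node.untried == [] and node.children:
        node = max(node.children, key=lambda c: c.ucb1())
    return node


def expand(node, game):
    # 2. Expansion: if the node is non-terminal, add one randomly chosen
    #    unvisited child position to the tree and return it.
    if game.is_terminal(node.state):
        return node
    if node.untried is None:
        node.untried = list(game.legal_moves(node.state))
    if node.untried:
        move = node.untried.pop(random.randrange(len(node.untried)))
        child = Node(state=game.next_state(node.state, move),
                     player=game.player_to_move(node.state),
                     parent=node, move=move)
        node.children.append(child)
        return child
    return node


def simulate(node, game):
    # 3. Simulation: a light playout, i.e. purely random moves until the game ends.
    state = node.state
    while not game.is_terminal(state):
        state = game.next_state(state, random.choice(game.legal_moves(state)))
    return game.winner(state)  # winning player, or None for a draw


def backpropagate(node, winner):
    # 4. Back-propagation: walk back to the root, incrementing visit counts and
    #    crediting a win to every node whose player won the playout.
    while node is not None:
        node.visits += 1
        if winner is not None and winner == node.player:
            node.wins += 1
        node = node.parent
```

Selection here uses the standard UCB1 formula (wins/visits plus an exploration term), which is the usual way of resolving the bandit trade-off mentioned in step 1.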

Now, the question arises: when do we stop?

The algorithm may be configured to stop after any desired length of time, or on some other condition. As more and more playouts are run, the tree of statistics grows in memory, and the move that is finally chosen converges towards the actual optimal play, though reaching that point may take a very long time, depending on the game.
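To make the stopping condition concrete, a driver loop such as the following (building on the sketch above, with the same assumed game interface) runs playouts until a wall-clock budget expires and then returns the most-visited move, which is one common way of choosing the final move:

```python
import time

def mcts_best_move(root_state, game, time_budget_s=1.0):
    # Anytime loop: keep running playouts until the time budget is exhausted,
    # then return the move leading to the most-visited child of the root.
    root = Node(root_state, player=None)  # the root's own statistics are never used
    deadline = time.monotonic() + time_budget_s
    while time.monotonic() < deadline:
        leaf = select(root)
        child = expand(leaf, game)
        winner = simulate(child, game)
        backpropagate(child, winner)
    best = max(root.children, key=lambda c: c.visits)
    return best.move
```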


MCTS is useful for large state spaces, although it needs many samples to get a good estimate. One of the advantages of MCTS is that planning time is independent of |S|, the number of states. The running time, however, is exponential in the horizon (how far into the future we want to look) and grows quickly with the branching factor.
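As a rough way to see this scaling (an informal estimate, not a figure from the original text): if each position has an effective branching factor $b$ and we plan $H$ moves ahead, a full lookahead would have to consider on the order of

$$O\!\left(b^{H}\right)$$

positions. This quantity does not depend on the total number of states $|S|$, but it grows exponentially with the horizon $H$ and rapidly with the branching factor $b$.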

Advantages of MCTS:

  • No strategic or tactical knowledge about the given game (or other problem domain) is required to make reasonable move decisions.
  • It is well suited to problems with a large branching factor, which causes issues for other depth- or breadth-based search algorithms.
  • The algorithm can be stopped at any time to return the current best estimate.
  • It is very simple to implement.

Disadvantages of MCTS:

Even for problems of medium complexity, there is a possibility that the algorithm fails to find the optimal solution in the given time. However, this performance issue can be improved with enhancements that depend on the domain in which the algorithm is used, or even without domain-specific expert knowledge.
