Simulated Games: performance considerations

The MCTS algorithm used in Simulated Games has been designed to be used online, to solve a problem such as choosing the next action(s) in the game. There is no general formula for how many iterations or how much computational time is required to reach a meaningful level of play. The method will work out-of-the-box under strict time constraints for simpler games, but for more complex ones, the number of required iterations can only be determined experimentally, by observing how smart the bot behaves.

Depending on the complexity of the game, we discuss four possible use cases of Simulated Games.

1-1 mapping

We call it a 1-1 mapping when an action in the SimulatedGame does the same thing as a Behavior in Grail. The SimulatedGame reflects the actual game's logic - no simplifications are required. The implementation of the '1-1 mapping' variant and the 'Abstraction' variant is based on the same classes and principles. The only difference is that in the latter case you have to think about how to design a simplified model of the game. That's it.

This approach is typically used:

  1. For the AI in finite combinatorial games such as Checkers, Chess or Bridge.

  2. For certain subgames of your regular video game, e.g., puzzles.

  3. For very simple video games - with a small number of possible actions.

If you are using SimulatedGamePlayingModel then provide a simple ISimulatedActionTranslator. If you are using SimulatedGame directly, then you can either implement logic in a class derived both from ISimulatedGameAction and Behavior or just use ISimulatedGameAction directly.

By not using Behaviors you lose access to most Grail functionalities, but in combinatorial games, using just the SimulatedGame module is enough.


Abstraction

The general idea here is to design a simplified model of a game (or of a certain aspect of your game). The most important goals to accomplish:

  1. Encapsulate a bigger portion of logic within an action (bigger than an atomic Behavior)

  2. Reduce the average number of possible actions in the game

  3. Reduce the length of the game, measured as the average number of actions performed from start to finish

Ideas for simplifying your game:

  1. When you have lots of possible options in the game: group them. Then group your groups. Let your higher-level actions choose the bigger groups first; the next actions then choose smaller groups, and so on.
    For example:

    • First group - decide whether to attack or retreat; actions = {attack, retreat}

    • Second group - choose the region to attack or to retreat to, depending on the action chosen before.

  2. Introduce scripted 'greedy actions' - perhaps not every choice in the game needs to be considered by the search. Sometimes it is relatively obvious what to do.
    In the example from the previous paragraph - once you select the attack action in a particular region, a script could choose the most suitable enemy to attack, be it the most threatening one or the easiest to eliminate.

  3. Introduce quick solvers/estimators - imagine a strategy game (real-time or turn-based) that consists of many repeating battles/encounters. Unless you are defining a SimulatedGame to solve the battle as a subgame, consider implementing a heuristic solver that estimates the outcome of a battle. This will immensely reduce the combinatorial complexity of the game.

  4. Try to eliminate as many variables as you can. In general, the only strictly required parameters of your game are:

    • Parameters that affect the available actions

    • Parameters that are required to determine if the game has ended

    • Parameters with significant impact on the desired behavior in the game

  5. Limit the possible options in any other way you can. For example: if your game involves moving units, consider providing a graph of only the sensible positions.

1-1 mapping or Abstraction paired with offline learning

If your game is still too complex to be simulated effectively in a timely fashion, there is the offline learning approach. The good news is that you can reuse the same code you would write for the other approaches.

The idea of the offline learning with SimulatedGames is as follows:

  • Prepare the game as you would normally.

  • Choose considerations (nominal or numerical parameters). They will become the context for evaluating decisions.

  • Run training sessions that serialize results.

  • Train a decision tree based on the serialized results.

  • Use the decision tree in the actual shipped game.

A training session is performed using bots only. The bots should receive much more time for making each decision - time that would be unacceptable in the actual game, because human players would have to sit and wait for the AI to respond. The goal is to run the MCTS algorithm on the SimulatedGame and save the results for future use. Because storing the whole tree would take too much memory (and probably too much time to use effectively), the knowledge is compressed when the MCTS tree is translated into a decision tree.

The decision tree is constructed for you based on the considerations given.

The best way to understand how offline learning works is to see the Offline Learning Sample.