Simulated Games - defining a game in 10 steps

Step 1: choose the level of accuracy for the model of your game

The first thing you should do is to think about the complexity of your game:

The time allowed for simulations is measured from the moment a SimulatedGame is instantiated to the moment when decision(s) in the actual game have to be made. You can use the maxTime limit and/or the maxIterations limit. With a maximum-iterations limit, you ensure that the strength of the AI agent is independent of CPU performance. The simulations can either be performed in one go or time-sliced across many frames (using SimulatedGameReasoner).
The number of simulations required to gather enough data to make a good decision depends on the nature of the game.
In many games, the borderline is around 10 000 simulations per actual decision in the game.

When fewer simulations should suffice:

  • if your game is short - not many actions are needed to reach a terminal state

  • if the number of distinct possible actions is small, on average

  • if there is usually a clear best action to perform in a given state

  • if the game has a converging nature - each action limits the possible options. Example: placing objects in a finite room without the option to remove them.

Depending on how complex your game is, you have various options to design it. Take a look at the dedicated document ("granularity") on that.
You do not have to know the complexity upfront. You can test it experimentally and choose the proper approach later. Most of the code will be the same no matter which approach you end up using. However, it is good to start with an educated guess.
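To turn the educated guess into numbers, one rough, library-free check (our illustration, not part of the Grail API) is to compare the average branching factor raised to the average game length against the per-decision simulation budget:

```cpp
#include <cassert>
#include <cmath>

// Rough number of distinct playouts: branching factor raised to game length.
double roughGameComplexity(double avgBranching, double avgLength)
{
    return std::pow(avgBranching, avgLength);
}

// Back-of-the-envelope check: does a per-decision simulation budget
// (e.g. the ~10 000 figure quoted above) cover the playout space?
bool budgetLooksSufficient(double avgBranching, double avgLength, double budget)
{
    return roughGameComplexity(avgBranching, avgLength) <= budget;
}
```

For example, a game of about 8 moves with 3 options each (3^8 = 6561 playouts) fits a 10 000-simulation budget, while 10 options over 10 moves does not. Treat this only as a first orientation; experiments are more reliable.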

Step 2: design your units (and their actions)

Units in the Simulated Game module are special objects that:

  1. Have internal state, which can be altered during simulation. All units together can be regarded as a formal representation of the state of the game.

  2. Perform actions

A unit may simply represent a complete player in the game, just like a human player or an AI bot. A unit can also be more 'local', such as a soldier in a strategy game. Units can be of any granularity and hierarchy. For example:

  1. General - with actions that roughly define strategy such as rush, regular attack, retreat, tech-up…​

  2. Squad former - with actions to form groups of soldiers

  3. Squad leader - with actions to move the whole group of soldiers to a particular point of interest

  4. Soldier - with actions such as shoot at particular enemy or secure a particular point of interest

If you can think of a hierarchy of actions starting from the most general towards progressively more detailed ones, then consider implementing a hierarchy of units. For example - a general may give strategic orders, then a commander may interpret them and assign roles to single soldiers, and finally a soldier may have only such actions available as conform to the order from above. For efficiency reasons, the best scenario is when an action of one unit limits the possible options (actions) for the next unit in the hierarchy.

There are two types of units in Simulated Game module. These are the base classes to derive from:

  • SimulatedGameThinkingUnit - the main type of unit, representing an intelligent entity that performs actions. Units of this type have goals they try to achieve. The simulation module will assume that they want to maximize their score (which is received at the end of each simulation).

  • SimulatedGameStochasticUnit - units that always perform random actions according to their probability distribution, which by default is uniform. You can override its methods to provide custom logic. Units of this type do not maximize any score in the game. You will typically implement SimulatedGameStochasticUnits in such a way that their actions represent various random outcomes. For example, a 6-sided die can be such a unit with actions roll 1, roll 2, up to roll 6.

Teams

Each SimulatedGameThinkingUnit must implement, among other methods, GetTeamIndex() [in C++] or the TeamIndex property [in C#], which returns a number. Units that share this number are on the same side. They receive the same final score when the game ends. This allows for cooperation and coordination of actions. The index for SimulatedGameStochasticUnit can be overridden, but it is completely ignored.

Team indices must be numbers from 0 (inclusive) to the number of teams (exclusive) given in the Simulated Game constructor.

Step 3: implement the unit classes

Once you have your unit design in mind, you can implement the classes. Inherit from SimulatedGameThinkingUnit (see the SimulatedGameThinkingUnit API).

Do not inherit from ISimulatedGameUnit. This is a base class for the two types of units used in Grail’s SimulatedGame module: SimulatedGameThinkingUnit and SimulatedGameStochasticUnit. This class is not hidden, because certain functions may return objects of this type. However, you should never derive from it directly.

The important functions of SimulatedGameThinkingUnit are:

  • C++

  • C#

virtual void Reset() = 0; (1)
virtual std::vector<std::unique_ptr<const ISimulatedGameAction>> GetAvailableActions() const = 0; (2)
virtual std::unique_ptr<const ISimulatedGameAction> GetRandomAvailableAction(RandomGenerator& rand_gen) const; (3)
virtual uint GetTeamIndex() const = 0; (4)
virtual void AfterAction(SimulatedGameRuntime&); (5)
public abstract void Reset(); (1)
public abstract List<ISimulatedGameAction> GetAvailableActions(); (2)
public abstract ISimulatedGameAction GetRandomAvailableAction(Random random); (3)
public abstract uint TeamIndex { get; } (4)
public virtual void AfterAction(in SimulatedGameRuntime gameRuntime) (5)
1 - Revert the state of the unit to the starting one. The starting state should reflect the current state in the actual game. After all, simulations try different playouts from this starting state.
2 - Return all the available actions in the current state of the simulation. Ownership of this collection is passed to the algorithm (so do not change it afterwards). Do not change the action objects once returned, either - they will be cached. This method should always return the same actions in the same state. It is recommended to create a new list of actions and return it, but to reuse the existing actions on this list, provided you never modify them. This is especially useful if a unit always has the same actions available or there is some set of actions shared among units. In such a case, it is recommended to initialize these actions once (e.g. in the constructor).
3 - This is an optional function provided for performance reasons. It is used in the simulation phase, where no caching of actions is required. The default implementation first calls (2) and then returns a uniform random action from the available ones. If you can provide a random available action faster, e.g. if function (2) is costly and you can avoid it, then consider overriding this function.
4 - Return an integer from 0 (inclusive) to the number of teams (exclusive). Units with the same index share the game results. This means that they perform actions for their common benefit.
5 - An optional function with a default empty implementation. You may implement logic here that needs to be executed after every action of this unit, regardless of which action was performed. For instance, this is a good place to check whether the simulation has reached a terminal state.
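Purely as an illustration of the shape a concrete unit can take, here is a self-contained sketch. The Stub* types below are simplified stand-ins, not the real Grail base classes (whose signatures are listed above); the unit is a coin caller with two actions:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// --- Simplified stand-ins for the Grail types (illustration only; the real
// --- signatures are listed above and use Grail's own types and pointers).
struct StubUnit;
struct StubAction
{
    virtual ~StubAction() = default;
    virtual StubUnit* Apply(StubUnit& currentUnit) const = 0;
};

struct StubUnit
{
    virtual ~StubUnit() = default;
    virtual void Reset() = 0;
    virtual std::vector<std::unique_ptr<const StubAction>> GetAvailableActions() const = 0;
    virtual unsigned GetTeamIndex() const = 0;
};

// An action: guess heads or tails.
struct GuessAction : StubAction
{
    bool heads;
    explicit GuessAction(bool h) : heads{h} {}
    StubUnit* Apply(StubUnit& currentUnit) const override
    {
        return &currentUnit; // sketch: a real game would return the next unit to act
    }
};

// A concrete "thinking" unit.
struct CallerUnit : StubUnit
{
    int guessesMade = 0;

    void Reset() override { guessesMade = 0; } // back to the actual game's state

    std::vector<std::unique_ptr<const StubAction>> GetAvailableActions() const override
    {
        // Same actions for the same state, as the contract above requires.
        std::vector<std::unique_ptr<const StubAction>> actions;
        actions.push_back(std::make_unique<GuessAction>(true));
        actions.push_back(std::make_unique<GuessAction>(false));
        return actions;
    }

    unsigned GetTeamIndex() const override { return 0; }
};
```

In real code you would derive from SimulatedGameThinkingUnit instead of StubUnit; the override structure stays the same.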

Step 4: implement actions

Actions are objects that encapsulate the logic of changing the game state. As a rule of thumb, if any change in the game is not fully deterministic (scripted or constant), then it should be performed using an action. In particular, any modifications that are a result of players' choices should be implemented as actions.

The action’s interface is really simple:

  • C++

  • C#

class ISimulatedGameAction
{
  public:
    virtual ~ISimulatedGameAction();

    virtual ISimulatedGameUnit* Apply(ISimulatedGameUnit& currentUnit, SimulatedGameRuntime& runtimeControl) const = 0; (1)
    virtual std::string ToString() const;
};
public interface ISimulatedGameAction
{
  ISimulatedGameUnit Apply(in ISimulatedGameUnit currentUnit, in SimulatedGameRuntime runtimeControl); (1)
}
1 - Implement the effects of the action here. This code will be executed both in the selection and simulation phases of the MCTS algorithm. You should modify variables/properties of the units here. The parameter currentUnit is the unit that executed this action. If you want to use this parameter, just cast it to the expected unit type.
Your actions should be immutable. Their effects should change the state of units, not the actions themselves.

A generic currentUnit parameter is particularly useful if you reuse action objects, so they can be used with various unit types. However, if your actions are strictly bound to certain unit types, you might consider the following pattern:

  • C++

  • C#

class UnitAwareAction : public ISimulatedGameAction
{
  MyUnit* unit; //non-owning; the game owns the units

public:
  UnitAwareAction(MyUnit* owner) :
    unit { owner }
  {

  }

  ISimulatedGameUnit* Apply(ISimulatedGameUnit& currentUnit, SimulatedGameRuntime& runtimeControl) const override
  {
    //we ignore currentUnit
    unit->DoSomething();
    ...
        return nextUnit;
  }
};
public class UnitAwareAction : ISimulatedGameAction
{
  MyUnit unit;
  public UnitAwareAction(MyUnit owner)
  {
    this.unit = owner;
  }

  public ISimulatedGameUnit Apply(in ISimulatedGameUnit currentUnit, in SimulatedGameRuntime runtimeControl)
  {
    //we ignore currentUnit
    unit.DoSomething();
    ...
        return nextUnit;
  }
}

The purpose of the second parameter, runtimeControl, is explained in Step 7.

Finally, return the unit that is next to make an action. In two-player games with a one-to-one mapping between units and players, it will probably be the opponent.

If the order in which units should make their actions is complex, then consider using the pattern from the example above and store a reference to some kind of order-management object, shared between units, in each action.
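A minimal, library-free sketch of such an order-management object (the OrderManager name and API below are hypothetical, not part of Grail): a round-robin queue that each action can ask for the unit that acts next:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical helper, not a Grail class: cycles through units in a fixed order.
template <typename Unit>
class OrderManager
{
    std::vector<Unit*> order;  // non-owning; the game owns the units
    std::size_t current = 0;   // index of the unit currently acting

public:
    explicit OrderManager(std::vector<Unit*> units) : order{std::move(units)} {}

    // Called from an action's Apply to obtain the unit that acts next.
    Unit* NextUnit()
    {
        current = (current + 1) % order.size();
        return order[current];
    }

    // Align with the actual game at the start of each iteration.
    void Reset() { current = 0; }
};
```

Each action would then end its Apply with `return orderManager->NextUnit();` instead of hard-coding the opponent.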

Step 5: decide on randomness

Randomness raises the complexity of a game. In some cases, it can be avoided for the sake of simulations.

For example, if you have weapons in the game with damage calculated randomly from an interval, e.g. [80-100], we recommend using the expected value in the SimulatedGame model. In this case, it would be 90.
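For a uniform integer roll on [80, 100] the expected value is simply (80 + 100) / 2 = 90, which a deterministic model can use directly. A trivial helper (our illustration, not a Grail function):

```cpp
#include <cassert>

// Expected damage of a uniform roll in [minDamage, maxDamage].
double expectedDamage(int minDamage, int maxDamage)
{
    return (minDamage + maxDamage) / 2.0;
}
```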

If you eliminated randomness from your model, you may skip this step.

If randomness is inevitable and cannot be reduced to an expected value, then define actions that implement each possible outcome, or at least a representative variety of outcomes to be modelled in the simulation.

For example, an action of rolling a six-sided die may look as follows:

  • C++

  • C#

class RollAction : public ISimulatedGameAction
{
public:
  int dieValue;

  RollAction(int value)
  {
    this->dieValue = value;
  }

  ISimulatedGameUnit* Apply(ISimulatedGameUnit& currentUnit, SimulatedGameRuntime& runtimeControl) const override
  {
    //do something assuming that roll = dieValue
    return nextUnit; //whichever unit acts after the roll
  }
};
public class RollAction : ISimulatedGameAction
{
  public int DieValue { get; private set; }

  public RollAction(int value)
  {
    this.DieValue = value;
  }

  public ISimulatedGameUnit Apply(in ISimulatedGameUnit currentUnit, in SimulatedGameRuntime runtimeControl)
  {
    //do something assuming that roll = DieValue
    return nextUnit; //whichever unit acts after the roll
  }
}

Such actions - representing the results of random outcomes - have to be performed by a special kind of unit called SimulatedGameStochasticUnit. Derive from this class instead of SimulatedGameThinkingUnit. This tells the algorithm to choose this unit's actions according to their probability distribution rather than by picking the most promising option to investigate.

The most relevant functions of SimulatedGameStochasticUnit are:

  • C++

  • C#

virtual std::vector<std::unique_ptr<const ISimulatedGameAction>> GetAvailableActions() const = 0; (1)
virtual size_t GetRandomActionIndex(std::vector<std::unique_ptr<const ISimulatedGameAction>>& actions, RandomGenerator& rand_gen) const; (2)
virtual std::unique_ptr<const ISimulatedGameAction> GetRandomAvailableAction(RandomGenerator& rand_gen) const; (3)
public abstract List<ISimulatedGameAction> GetAvailableActions(); (1)
public int GetRandomActionIndex(in IEnumerable<ISimulatedGameAction> actions, Random random) (2)
public virtual ISimulatedGameAction GetRandomAvailableAction(Random random) (3)
1 - This method will be called once and the available actions will be cached - the same as with SimulatedGameThinkingUnit.
2 - The parameter passed to this method is the cached list. It is included in case you need to look at the available actions to return the index of the chosen one. The default implementation returns a uniform random integer between zero and the length of actions.
3 - This method is used in the simulation phase, where no caching of actions is required. The default implementation first calls (1) and then returns a uniform random action - the same as with SimulatedGameThinkingUnit.
You must override (2) and (3) with your custom logic if the actions are not all equally probable.
You can also override (3) whenever you can provide a random action faster than the default implementation, which calls (1) first and then returns a random element. This is typically the case when you do not need to compute the available actions in advance to choose a random one, because you know beforehand how to map a number from 0 to availableActionsCount to a particular action.
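For the non-uniform case, an override of (2) typically samples an index from cumulative weights. A library-free sketch (the function below is our illustration, not a Grail method); it takes the uniform [0, 1) draw as a parameter, which a real override would obtain from the RandomGenerator / Random object passed in:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Pick an index according to `weights`, given a uniform draw u in [0, 1).
std::size_t weightedIndex(const std::vector<double>& weights, double u)
{
    double total = 0.0;
    for (double w : weights) total += w;

    // Walk the cumulative distribution until the scaled draw falls inside.
    double threshold = u * total;
    double cumulative = 0.0;
    for (std::size_t i = 0; i < weights.size(); ++i)
    {
        cumulative += weights[i];
        if (threshold < cumulative)
            return i;
    }
    return weights.size() - 1; // guard against floating-point edge cases
}
```

For example, a loaded coin with weights {0.75, 0.25} returns index 0 for three quarters of the draws.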

Step 6: decide on continuity of actions

The whole idea of simulations works best if actions have immediate effects. If the game environment were a database, an action could be regarded as a transaction in that database. An example of an action with an immediate effect:

  • C++

  • C#

class MoveAction: public ISimulatedGameAction
{
public:
  Vector3 position;

  MoveAction(Vector3 pos) : position {pos}
  {

  }

  ISimulatedGameUnit* Apply(ISimulatedGameUnit& currentUnit, SimulatedGameRuntime& runtimeControl) const override
  {
    static_cast<MyUnit&>(currentUnit).position = position;
    return nextUnit; //whichever unit acts next
  }
};
public class MoveAction : ISimulatedGameAction
{
  public Vector3 Position;
  public MoveAction(Vector3 pos)
  {
    this.Position = pos;
  }
  public ISimulatedGameUnit Apply(in ISimulatedGameUnit currentUnit, in SimulatedGameRuntime runtimeControl)
  {
    (currentUnit as MyUnit).Position = Position;
    return nextUnit; //whichever unit acts next
  }
}

However, it is possible to model actions that are continuous. We give you freedom in how this mechanism is implemented, to ensure the highest performance for your particular game. An example of how to implement continuous actions is given below.

First, change the logic of a unit so it no longer moves immediately:

  • C++

  • C#

struct MyUnit: public SimulatedGameThinkingUnit
{
  Vector3 position;
  Vector3 desiredPosition;
  double speed;
  ...

  void Tick(double deltaTime)
  {
    //movement logic here according to deltaTime
  }
};

class MoveAction: public ISimulatedGameAction
{
public:
  Vector3 position;

  MoveAction(Vector3 pos) : position {pos} { }

  ISimulatedGameUnit* Apply(ISimulatedGameUnit& currentUnit, SimulatedGameRuntime& runtimeControl) const override
  {
    static_cast<MyUnit&>(currentUnit).desiredPosition = position;  /*we only set the target without affecting the current position*/
    return &currentUnit; /*or whichever unit acts next*/
  }
};
public class MyUnit : SimulatedGameThinkingUnit
{
  public Vector3 Position;
  public Vector3 DesiredPosition;
  double speed;
  ...

  public void Tick(double deltaTime)
  {
    //movement logic here according to deltaTime
  }
}

public class MoveAction : ISimulatedGameAction
{
  public Vector3 Position;
  public MoveAction(Vector3 pos)
  {
    this.Position = pos;
  }
  public ISimulatedGameUnit Apply(in ISimulatedGameUnit currentUnit, in SimulatedGameRuntime runtimeControl)
  {
    (currentUnit as MyUnit).DesiredPosition = Position; /*we only set the target without affecting the current position*/
    return currentUnit; /*or whichever unit acts next*/
  }
}

Organize the order in which the units will periodically be choosing actions.

Then, define a TimeManager unit acting after all other units in a periodic sequence. The manager will have only one action - to update the time.

  • C++

  • C#

class AdvanceTimeAction: public ISimulatedGameAction
{
  TimeManager* manager; //non-owning; the game owns the manager unit

public:
  AdvanceTimeAction(TimeManager* timeManager) :
    manager { timeManager }
  {

  }

  ISimulatedGameUnit* Apply(ISimulatedGameUnit& currentUnit, SimulatedGameRuntime& runtimeControl) const override
  {
    manager->Tick();
    return manager->Units[0]; /*the sequence of action choices starts again*/
  }
};

class TimeManager: public SimulatedGameThinkingUnit
{
  std::vector<std::unique_ptr<const ISimulatedGameAction>> actions;   /*the TimeManager actions*/

public:
  std::vector<MyUnit*> Units;   /*all tickable units; non-owning*/

  uint GetTeamIndex() const override { return 0; } /*this does not matter*/

  std::vector<std::unique_ptr<const ISimulatedGameAction>> GetAvailableActions() const override
  {
    /// We can do this, because we never change this list nor its contents
    return actions;
  }

  void Tick()
  {
    double deltaTime = GetDeltaTime(); /*can be fixed or dynamically computed*/
    for(auto* unit : Units)
      unit->Tick(deltaTime);
  }
};
public class AdvanceTimeAction : ISimulatedGameAction
{
  TimeManager manager;
  public AdvanceTimeAction(TimeManager manager)
  {
    this.manager = manager;
  }

  public ISimulatedGameUnit Apply(in ISimulatedGameUnit currentUnit, in SimulatedGameRuntime runtimeControl)
  {
    manager.Tick();
    return manager.Units.First(); /*the sequence of action choices starts again*/
  }
}

public class TimeManager : SimulatedGameThinkingUnit
{
  readonly List<ISimulatedGameAction> actions;   /*the TimeManager actions*/
  public List<MyUnit> Units { get; private set; }  /*all tickable units*/

  public override uint TeamIndex => 0;  /*this does not matter*/

  /// We can do this, because we never change this list nor its contents
  public override List<ISimulatedGameAction> GetAvailableActions() => actions;


  public void Tick()
  {
    double deltaTime = GetDeltaTime(); /*can be fixed or dynamically computed*/
    foreach(var unit in Units)
      unit.Tick(deltaTime);
  }

  ...
}
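Independent of Grail, the movement logic inside such a Tick might look as follows. This is a sketch with a 1-D position standing in for Vector3; the unit advances towards desiredPosition at a fixed speed without overshooting:

```cpp
#include <algorithm>
#include <cassert>

// 1-D stand-in for the position logic; a real unit would use Vector3.
struct TickableUnit
{
    double position = 0.0;
    double desiredPosition = 0.0;
    double speed = 1.0; // distance units per second

    void Tick(double deltaTime)
    {
        double maxStep = speed * deltaTime;
        double delta = desiredPosition - position;
        // Move toward the target, but never past it.
        position += std::clamp(delta, -maxStep, maxStep);
    }
};
```

With speed 2 and deltaTime 1, a unit 10 units from its target covers 2 units per tick and stops exactly at the target.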

Step 7: implement stop condition

Depending on the granularity of your game, you might be able to simulate it to its natural end or only with a fixed-time look-ahead. At some point, the game has to end. Otherwise, it will fall into an infinite loop.

In order to terminate the game, you should set the requestForTermination (C++) / RequestForTermination (C#) flag of the SimulatedGameRuntime object to true. This object is passed to you in two places - in an action's logic and in a function that is called after any action. If a particular action ends the game, use the first option, i.e. set the flag in the action's Apply. If you just check for a terminal condition after every action, use the latter option, i.e. the unit's AfterAction implementation.

  • C++

  • C#

class DummyTerminalAction: public ISimulatedGameAction
{
public:
  ISimulatedGameUnit* Apply(ISimulatedGameUnit& currentUnit, SimulatedGameRuntime& runtimeControl) const override
  {
    runtimeControl.SetTerminationRequest();
    runtimeControl.SetScore(currentUnit.GetTeamIndex(), 1.0f);
    runtimeControl.SetScore(1 - currentUnit.GetTeamIndex(), 0.0f);
    return nullptr;  // you are allowed to return null IF AND ONLY IF you request termination
  }
};

//another option:
class DummyTerminalUnit: public SimulatedGameThinkingUnit
{
  ...
public:
  void AfterAction(SimulatedGameRuntime& runtimeControl) override
  {
    runtimeControl.SetTerminationRequest();
    runtimeControl.SetScore(GetTeamIndex(), 1.0f);
    runtimeControl.SetScore(1 - GetTeamIndex(), 0.0f);
  }
};
public class DummyTerminalAction : ISimulatedGameAction
{
  public ISimulatedGameUnit Apply(in ISimulatedGameUnit currentUnit, in SimulatedGameRuntime runtimeControl)
  {
    runtimeControl.TerminationRequest = true;
    runtimeControl.Scores[currentUnit.TeamIndex] = 1.0f;
    runtimeControl.Scores[1 - currentUnit.TeamIndex] = 0.0f;
    return null;  // you are allowed to return null IF AND ONLY IF you request termination
  }
}

//another option:
public class DummyTerminalUnit : SimulatedGameThinkingUnit
{
  ...

  public override void AfterAction(in SimulatedGameRuntime gameRuntime)
  {
    gameRuntime.TerminationRequest = true;
    gameRuntime.Scores[TeamIndex] = 1.0f;
    gameRuntime.Scores[1 - TeamIndex] = 0.0f;
  }
}
After requesting termination, you should also assign a score to each team, reflecting how well the team fared in the game. For example, it can be the number of kills, the amount of points accumulated, or just two numerical values representing a win and a loss, respectively. In the example above, we assume that 1.0 represents a win and 0.0 represents a loss. The scores assigned to teams are what gives the agents feedback on how to act; they are a crucial element of Simulated Games.
The idea of assigning scores to teams is referred to as an evaluation function. If the model of your game allows simulating to the end, then you have a perfect evaluation function. Otherwise, if you need to cut the simulation off at some point, you provide a heuristic evaluation function.
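How a heuristic evaluation maps in-game quantities to scores is up to you. One common sketch (our illustration, not a Grail call) normalizes each team's raw value, e.g. remaining HP, into [0, maxScore], keeping the result consistent with the maxScore parameter described in Step 9:

```cpp
#include <cassert>
#include <vector>

// Map each team's raw value (e.g. remaining HP) to a score in [0, maxScore].
std::vector<double> heuristicScores(const std::vector<double>& rawValues,
                                    double maxRawValue, double maxScore)
{
    std::vector<double> scores;
    scores.reserve(rawValues.size());
    for (double v : rawValues)
        scores.push_back(maxScore * v / maxRawValue);
    return scores;
}
```

For instance, with maxScore 1.0 and a 100-HP cap, a team at full health scores 1.0 and a team at 25 HP scores 0.25.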

Step 8: optionally include action-selection heuristics

An action-selection heuristic is used in the selection phase of the MCTS algorithm. Let’s look at the interface for action heuristics:

  • C++

  • C#

class SimulatedGameHeuristic
{
public:
  virtual bool isHeuristicSituation(const ISimulatedGameUnit& unit) = 0; (1)
  virtual std::unique_ptr<ISimulatedGameAction> getAction() = 0; (2)

  virtual ~SimulatedGameHeuristic() = default;
};
public interface SimulatedGameHeuristic
{
  bool IsHeuristicSituation(in ISimulatedGameUnit unit); (1)
  ISimulatedGameAction GetAction(); (2)

}
1 - This function checks whether the heuristic should be applied. A heuristic can be defined for certain situations in the game. If you return false, then the regular MCTS selection will be applied, unless another heuristic returns true from this function.
2 - Here, just return the action to be played in the situation identified in (1).

An action-selection heuristic is bound to a unit and will choose actions for that unit. If you want the algorithm to consider it, just add it like this:

  • C++

  • C#

myUnit.heuristicReasoners.push_back(std::make_unique<MyHeuristic>(params));
myUnit.HeuristicReasoners.Add(new MyHeuristic(params));

Hand-crafted heuristics

In this case, just prepare your custom object derived from SimulatedGameHeuristic and add it to a unit as in the example above.
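As an illustration of the shape such a hand-crafted heuristic can take (simplified stand-in types, not the Grail signatures above): a rule that fires only when a unit's health is below a threshold and then always retreats:

```cpp
#include <cassert>

// Stand-in state, for illustration only.
struct SoldierState
{
    double health = 100.0;
};

enum class ActionKind { Fight, Retreat };

class LowHealthRetreatHeuristic
{
    double threshold;

public:
    explicit LowHealthRetreatHeuristic(double t) : threshold{t} {}

    // Mirrors isHeuristicSituation: does the rule apply in this state?
    bool IsHeuristicSituation(const SoldierState& unit) const
    {
        return unit.health < threshold;
    }

    // Mirrors getAction: the action to force when the rule applies.
    ActionKind GetAction() const { return ActionKind::Retreat; }
};
```

When the situation check returns false (healthy unit), the regular MCTS selection takes over, exactly as described in callout (1) above.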

Offline-learning

This is an advanced topic. Please see the dedicated document on offline learning for more information.

Step 9: configure the game object and simulation parameters

Create the game object

Create the game object that will act as an interface of your SimulatedGame.

  • C++

  • C#

SimulatedGame game(teamCount,             (1)
           maxScore,              (2)
           explorationBoost,      (3)
           freezeVisitsTreshold,  (4)
           RandomGenerator::result_type seed = std::random_device{}()) //random generator
game = new SimulatedGame(teamCount,             (1)
             maxScore,              (2)
             explorationBoost,      (3)
             freezeVisitsTreshold)  (4)
1 - This is the number of sides (players) competing in the game. Each side has its own victory conditions. The unit’s method that returns the team index must return a number from 0 (inclusive) to teamCount (exclusive). Therefore, this parameter defines the possible indices to be returned. The default value is 2.
2 - Remember the stop condition and setting scores for each team? If not, see Step 7. Try to provide the best approximation of the maximum score that can be set in a terminal state. For example, if your score is the number of HP left on some unit, then provide the maximum possible value of HP. The default value is 1.0.
3 - This is a modifier to the exploration ratio parameter. It is advised to tinker with it if you are not satisfied with the evaluation of actions provided by the algorithm. The default value is 1.0.
4 - When a certain state has been visited at least this number of times, the best action so far will be fixed as the final chosen action in this state. The default value is a very large number, so there is effectively no threshold.
The maxScore (2) parameter does not need to be exact. However, the closer it is to the actual maximum, the closer to optimal the algorithm's behavior will be. It is used to properly balance exploitation (of the most promising actions) and exploration (of unknown actions).

Add units

The game object has add methods. They return a reference to the added unit for convenience, as this enables doing something with the unit in the same line:

  • C++

  • C#

SimulatedGameThinkingUnit& AddUnit(std::shared_ptr<SimulatedGameThinkingUnit> unit);
void AddUnits(std::vector<std::shared_ptr<SimulatedGameThinkingUnit>> units);
SimulatedGameStochasticUnit& AddUnit(std::shared_ptr<SimulatedGameStochasticUnit> unit);
public SimulatedGameThinkingUnit AddUnit(SimulatedGameThinkingUnit unit)
public SimulatedGameStochasticUnit AddUnit(SimulatedGameStochasticUnit unit)

You only need to add the starting unit and all units that have a nonempty Reset() implementation. The game stores them in order to reset them at the start of each new iteration. Adding other units has no effect. It won’t cause any errors - just unnecessary calls to the reset function.

The starting unit is very important. If it is not set, the first iteration will raise an error. By default, the first unit added will be the starting unit, until you explicitly set one.

  • C++

  • C#

void SetStartingUnit(SimulatedGameThinkingUnit& unit);
game.StartingUnit = myUnit;

Run simulations

Once you create the game object, you are able to perform iterations that gather statistics. The statistics are gathered incrementally. For example, the effect of running 1000 iterations will be the same as running 500 iterations twice (in two batches). Experiment with the time required for a certain number of simulations.

  • C++

  • C#

game.Run(milisecondsTotal,   (1)
     maxIterationCount,  (2)
     SimulatedGameObserverForGUI* observer = nullptr) (3)
game.Run(milisecondsTotal,   (1)
     maxIterationCount,  (2)
     SimulatedGameObserverForGUI observer = null) (3)
1 - The number of milliseconds allowed for making simulations. When this time runs out, the method will return regardless of the other parameter value.
2 - The number of iterations to perform. After this number is reached, the method will return regardless of the other parameter value. Both parameters are conditions that define how long the algorithm should run. It stops whenever one of the conditions is met.
3 - Grail uses the third parameter to gather debug data from iterations, for debugging purposes.
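The batching equivalence mentioned above holds because the gathered statistics are accumulative. As a library-free illustration (the Stats type below is hypothetical, not part of Grail): node statistics in MCTS-style algorithms reduce to a running score sum plus a visit count, and batches simply add up:

```cpp
#include <cassert>

// Hypothetical node statistics: a running sum and a visit count.
struct Stats
{
    double totalScore = 0.0;
    int visits = 0;

    void Add(double score) { totalScore += score; ++visits; }
    double Mean() const { return totalScore / visits; }
};
```

Accumulating the same samples in one batch or in two batches yields identical totals, which is why 1000 iterations at once equal two runs of 500.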

If the state of the actual game has changed, the gathered statistics may no longer be relevant or valid, and more iterations need to be run. In this case you can either create a new game object (and run iterations) or reuse the existing one and call:

  • C++

  • C#

game.ClearStatistics();
game.ClearStatistics();

If you reuse an existing game object, you may sometimes need to remove units (for example: if they died in the actual game):

  • C++

  • C#

void RemoveUnit(const SimulatedGameThinkingUnit* unit);
void RemoveUnit(const SimulatedGameStochasticUnit* unit);
public void RemoveUnit(SimulatedGameThinkingUnit unit)
public void RemoveUnit(SimulatedGameStochasticUnit unit)
Dead units can also be modelled as having just one action available - "do nothing".
When the actual game state changes, make sure that the Reset methods of the units synchronize them with the current state of the actual game.
While you always have to create the game object, you don’t need to run iterations manually if you use SimulatedGameReasoner. The reasoner is a layer above that only requires you to provide the game object. Running iterations, getting results and clearing data is automated.

Step 10: Use the results

The final thing to do is to take advantage of all the effort so far. This page explains how to get and interpret results from running a SimulatedGame.