The lecture emphasized that Artificial Neural Networks are one example of nature-inspired metaheuristic algorithms. Other interesting examples of this kind of approach are:

  • Genetic and evolutionary algorithms (genetics, evolution).
  • Ant algorithms (simulations mimicking the behavior of self-organizing ant colonies).
  • Other swarm algorithms (the behavior of the artificial life program Boids, modelling swarms of bees or flocks of birds).
  • Cellular automata (cellular biological processes).
  • Artificial immune systems (systems reacting to the presence of pathogens).
  • Morphogenesis (patterned after DNA-driven development).
  • Quantum computations (inspired by quantum physics).
  • Simulated annealing (inspired by the annealing process used in metallurgy).

A comparison of the human brain and artificial neural networks was given, as well as a brief introduction to how ANNs work under the hood. The concept of Deep Neural Networks (DNNs) was also introduced, along with several examples of ANN and DNN applications to classification modelling and data mining. Neural Networks are not generally considered the most obvious choice for Artificial Intelligence in games. There are, however, inspiring examples of successful NN applications in games, for instance:

TD-Gammon – a backgammon program developed by Tesauro (1992) that exploited a NN with TD(λ) (Temporal Difference) learning. It was a convincing example of reinforcement learning through self-play applied to games.

Colin McRae Rally 2 – Codemasters (2000), where a NN was trained to teach NPCs (non-player characters) to drive a race car.

Black & White 2 – Richard Evans (2005), where a combination of Neural Networks and Decision Trees was used to steer computer avatars.

MarI/O – Super Mario World; Seth Bling (2015), where a composite approach combining Neural Networks with Genetic Algorithms was applied.

AlphaGo – the game of Go; Google DeepMind (2016), where Neural Networks were coupled with Monte Carlo Tree Search.

The second part of the lecture was dedicated to Monte Carlo Tree Search (MCTS), a relatively recently proposed (2006) probabilistic game tree search algorithm. MCTS has achieved considerable success in domains with vast search spaces where deterministic approaches get stuck. The most spectacular example was the application of MCTS to the game of Go by Google DeepMind's AlphaGo program, in which DNNs and MCTS were combined.
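The four canonical MCTS phases (selection, expansion, simulation, backpropagation) can be sketched on a toy game. The following is an illustrative UCT-style implementation, not the lecture's code; the subtraction game (players alternately take 1 or 2 stones, whoever takes the last stone wins), the node structure, and all function names are assumptions made for this example.

```python
import math
import random

class Node:
    """One state in the search tree of the toy subtraction game."""
    def __init__(self, pile, player, parent=None, move=None):
        self.pile, self.player = pile, player   # stones left, player to move (0/1)
        self.parent, self.move = parent, move   # link to parent and the move that led here
        self.children = []
        self.untried = [m for m in (1, 2) if m <= pile]  # moves not yet expanded
        self.wins = 0.0
        self.visits = 0

    def uct_select(self, c=1.4):
        # UCT: exploit average win rate, explore rarely visited children
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                                  + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(pile, player, iterations=2000):
    root = Node(pile, player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend using UCT while the node is fully expanded
        while not node.untried and node.children:
            node = node.uct_select()
        # 2. Expansion: add one random untried child
        if node.untried:
            m = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.pile - m, 1 - node.player, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout to the end of the game
        pile_, player_ = node.pile, node.player
        while pile_ > 0:
            m = random.choice([x for x in (1, 2) if x <= pile_])
            pile_ -= m
            player_ = 1 - player_
        winner = 1 - player_  # the player who took the last stone wins
        # 4. Backpropagation: credit each node from the mover's perspective
        while node:
            node.visits += 1
            if node.parent is not None and winner == node.parent.player:
                node.wins += 1
            node = node.parent
    # Recommend the most visited move
    return max(root.children, key=lambda ch: ch.visits).move
```

On this tiny game the tree is effectively exhausted after a few thousand iterations, so MCTS converges to the game-theoretic optimum (leave the opponent a multiple of 3 stones).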

During the lecture, a general introduction to tree search methods (min-max, alpha-beta pruning, MTD(f), and NegaScout) was given, based on the example of chess. Games were classified according to characteristics such as:

  • deterministic vs. non-deterministic
  • perfect vs. imperfect information
  • simultaneous vs. sequential
  • finite vs. infinite (with respect to number of players, possible game states, possible game actions in given state, number of steps required to finish the game)
  • strongly winnable vs. weakly winnable
  • complexity of state space
  • branching factor

An introduction to General Game Playing (GGP) and the Game Description Language (GDL) was given, along with a detailed description of different approaches to MCTS implementation.

It was concluded that the application of MCTS, in all its variants, augmented by Neural Networks currently represents one of the most promising approaches to AI implementation in games.