Object-model transfer in the general video game domain
Abstract
Reinforcement learning agents often benefit from learning models that predict their
environment. However, learned models may not generalize well to novel situations. This
thesis investigates a transfer learning approach to this challenge in the general video game
domain. The approach helps agents learn models of new games by transferring
knowledge from previously learned games. Transfer is facilitated by decomposing
games into the objects they contain. The underlying assumption is that features of objects
from different games are easier to relate than features of whole environments. Experiments
show that predictions made with this method are more accurate than predictions made
without transferred knowledge, and that this improvement yields more efficient exploration
in a maze-like game. The
conclusion is that model learning can be enhanced by transferring object models from previously
learned environments.