Evolving multimodal behavior through modular multiobjective neuroevolution
Intelligent organisms do not simply perform one task, but exhibit multiple distinct modes of behavior. For instance, humans can swim, climb, write, solve problems, and play sports. To be fully autonomous and robust, it would be advantageous for artificial agents, both in physical and virtual worlds, to exhibit a similar diversity of behaviors. This dissertation develops methods for discovering such behavior automatically using multiobjective neuroevolution. First, sensors are designed to allow multiple different interpretations of objects in the environment (such as predator or prey). Second, evolving networks are given ways of representing multiple policies explicitly via modular architectures. Third, the set of objectives is dynamically adjusted in order to lead the population towards the most promising areas of the search space. These methods are evaluated in five domains that provide examples of three different types of task divisions. Isolated tasks are separate from each other, but a single agent must solve each of them. Interleaved tasks are distinct, but switch back and forth within a single evaluation. Blended tasks do not have clear barriers, because an agent may have to perform multiple behaviors at the same time, or learn when to switch between opposing behaviors. The most challenging of the domains is Ms. Pac-Man, a popular classic arcade game with blended tasks. Methods for developing multimodal behavior are shown to achieve scores superior to other Ms. Pac-Man results previously published in the literature. These results demonstrate that complex multimodal behavior can be evolved automatically, resulting in robust and intelligent agents.
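One way to picture the modular-architecture idea described above is a network whose output layer contains several candidate policy modules, each with an extra "preference" output; at every timestep, the module whose preference is highest controls the agent, so distinct behavioral modes can be represented explicitly and switched between. The sketch below is a hypothetical, minimal illustration of that arbitration scheme, not the dissertation's actual implementation; the class name `ModularNetwork`, the random linear modules, and all parameter choices are illustrative assumptions.

```python
import math
import random

class ModularNetwork:
    """Toy network with several policy modules.

    Each module is a random linear layer producing the agent's action
    outputs plus one extra 'preference' output. At every activation,
    the module whose preference neuron fires highest supplies the
    actions, so behavior can switch between modes over time.
    (Hypothetical sketch -- weights and structure are illustrative.)
    """

    def __init__(self, num_inputs, num_outputs, num_modules, seed=0):
        rng = random.Random(seed)
        # One weight matrix per module with (num_outputs + 1) rows;
        # the extra row drives that module's preference neuron.
        self.modules = [
            [[rng.uniform(-1.0, 1.0) for _ in range(num_inputs)]
             for _ in range(num_outputs + 1)]
            for _ in range(num_modules)
        ]

    def activate(self, inputs):
        """Return the action outputs of the most-preferred module."""
        best_pref, best_actions = -math.inf, None
        for weights in self.modules:
            outs = [math.tanh(sum(w * x for w, x in zip(row, inputs)))
                    for row in weights]
            actions, pref = outs[:-1], outs[-1]
            if pref > best_pref:
                best_pref, best_actions = pref, actions
        return best_actions

net = ModularNetwork(num_inputs=3, num_outputs=2, num_modules=2)
actions = net.activate([1.0, 0.5, -0.3])
```

In an evolutionary setting, the module weights (and, in the dissertation's experiments, the network topology itself) would be the evolved parameters, and selection on multiple objectives would shape when each module takes control.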