    • Automated domain analysis and transfer learning in general game playing

      Kuhlmann, Gregory John (2010-08)
      Creating programs that can play games such as chess, checkers, and backgammon at a high level has long been a challenge and benchmark for AI. Computer game playing is arguably one of AI's biggest success stories. ...
    • Autonomous inter-task transfer in reinforcement learning domains 

      Taylor, Matthew Edmund (2008-08)
      Reinforcement learning (RL) methods have become popular in recent years because of their ability to solve complex tasks with minimal feedback. While these methods have had experimental successes and have been shown to ...
    • Autonomous qualitative learning of distinctions and actions in a developing agent 

      Mugan, Jonathan William (2010-08)
      How can an agent bootstrap up from a pixel-level representation to autonomously learn high-level states and actions using only domain general knowledge? This thesis attacks a piece of this problem and assumes that an agent ...
    • Autonomous trading in modern electricity markets 

      Urieli, Daniel (2015-12)
      The smart grid is an electricity grid augmented with digital technologies that automate the management of electricity delivery. The smart grid is envisioned to be a main enabler of sustainable, clean, efficient, reliable, ...
    • Cooperation and communication in multiagent deep reinforcement learning 

      Hausknecht, Matthew John (2016-12)
      Reinforcement learning is the area of machine learning concerned with learning which actions to execute in an unknown environment in order to maximize cumulative reward. As agents begin to perform tasks of genuine interest ...
    • Learning from human-generated reward 

      Knox, William Bradley (2012-12)
      Robots and other computational agents are increasingly becoming part of our daily lives. They will need to be able to learn to perform new tasks, adapt to novel situations, and understand what is wanted by their human ...
    • Multilayered skill learning and movement coordination for autonomous robotic agents 

      MacAlpine, Patrick Madeira (2017-09-06)
      With advances in technology expanding the capabilities of robots, while at the same time making robots cheaper to manufacture, robots are rapidly becoming more prevalent in both industrial and domestic settings. An increase ...
    • Parameterized modular inverse reinforcement learning 

      Zhang, Shun (2015-08)
      Reinforcement learning and inverse reinforcement learning can be used to model and understand human behaviors. However, due to the curse of dimensionality, their use as a model for human behavior has been limited. Inspired ...
    • Segbot: a multipurpose robotic platform for multi-floor navigation

      Unwala, Ali Ishaq (2014-12)
      The goal of this work is to describe a robotics platform called the Building Wide Intelligence Segbot (segbot). The segbot is a two-wheeled robot that can robustly navigate our building, perform obstacle avoidance, and ...
    • Structured exploration for reinforcement learning 

      Jong, Nicholas K. (2010-12)
      Reinforcement Learning (RL) offers a promising approach towards achieving the dream of autonomous agents that can behave intelligently in the real world. Instead of requiring humans to determine the correct behaviors or ...
    • Texplore: temporal difference reinforcement learning for robots and time-constrained domains

      Hester, Todd (2012-12)
      Robots have the potential to solve many problems in society, because of their ability to work in dangerous places doing necessary jobs that no one wants or is able to do. One barrier to their widespread deployment is that ...
    • The development of bias in perceptual and financial decision-making 

      Chen, Mei-Yen (2014-08)
      Decisions are prone to bias. This can be seen in daily choices. For instance, when the markets are plunging, investors tend to sell stocks instead of purchasing them at lower prices because people in general are more ...
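
Several of the abstracts above describe reinforcement learning as learning which actions to execute so as to maximize cumulative reward, typically with temporal-difference methods such as Q-learning. As a purely illustrative aside, not drawn from any of the listed theses, a minimal tabular Q-learning sketch on a hypothetical five-state chain might look like this in Python:

    # Minimal sketch of tabular Q-learning (temporal-difference control).
    # The tiny chain MDP below is hypothetical and illustrative only; it is
    # not taken from any of the theses listed above.
    import random

    N_STATES = 5            # states 0..4; state 4 is terminal
    ACTIONS = [+1, -1]      # move right or left along the chain
    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def step(state, action):
        """Return (next_state, reward, done) for the toy chain."""
        nxt = min(max(state + action, 0), N_STATES - 1)
        done = nxt == N_STATES - 1
        return nxt, (1.0 if done else 0.0), done

    for episode in range(500):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if random.random() < EPSILON:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2, r, done = step(s, a)
            # temporal-difference update toward r + gamma * max_a' Q(s', a')
            target = r if done else r + GAMMA * max(Q[(s2, act)] for act in ACTIONS)
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])
            s = s2

    # Greedy policy learned for each state (should move right toward the goal).
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})

The update nudges Q(s, a) toward the temporal-difference target r + gamma * max_a' Q(s', a'), which is the cumulative-reward objective the abstracts refer to.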