Autonomous qualitative learning of distinctions and actions in a developing agent
dc.contributor.advisor | Kuipers, Benjamin | en |
dc.contributor.advisor | Stone, Peter, 1971- | en |
dc.contributor.committeeMember | Ballard, Dana | en |
dc.contributor.committeeMember | Cohen, Leslie | en |
dc.contributor.committeeMember | Mooney, Raymond | en |
dc.creator | Mugan, Jonathan William | en |
dc.date.accessioned | 2010-11-23T22:16:15Z | en |
dc.date.available | 2010-11-23T22:16:15Z | en |
dc.date.issued | 2010-08 | en |
dc.date.submitted | August 2010 | en |
dc.date.updated | 2010-11-23T22:16:21Z | en |
dc.description | text | en |
dc.description.abstract | How can an agent bootstrap from a pixel-level representation to autonomously learn high-level states and actions using only domain-general knowledge? This thesis addresses one piece of that problem: it assumes the agent has a set of continuous variables describing the environment and a set of continuous motor primitives, and it presents a method by which the agent can learn useful states and effective higher-level actions through autonomous experience with the environment. Methods exist for learning models of the environment, and methods exist for planning; however, for autonomous learning they have been applied almost exclusively in discrete environments. This thesis attacks the problem of learning high-level states and actions in continuous environments by using a qualitative representation to bridge the gap between continuous and discrete variable representations. In this approach, the agent begins with a broad discretization: initially it can tell only whether the value of each variable is increasing, decreasing, or remaining steady. The agent then simultaneously learns a qualitative representation (a discretization) and a set of predictive models of the environment, converts those models into plans that form actions, and uses the learned actions to explore the environment further. The method is evaluated on a simulated robot with realistic physics. The robot sits at a table holding one or two blocks, along with distractor objects that are out of reach. The agent explores the environment autonomously, without being given a task; after learning, it is given various tasks to determine whether it has learned the states and actions needed to complete them. The results show that the agent was able to learn to perform the tasks autonomously. | en |
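The qualitative abstraction the abstract describes — reducing each continuous variable to whether it is increasing, decreasing, or steady, and discretizing magnitudes against learned landmark values — can be sketched briefly. The snippet below is a minimal illustration under stated assumptions, not the thesis's actual implementation: the function names, the `epsilon` noise threshold, and the example landmark value are all introduced here for exposition.

```python
# Minimal sketch of a qualitative abstraction over continuous variables,
# as described in the abstract. Illustrative only: epsilon, tol, and the
# landmark values are assumptions made for this example.

def qualitative_direction(prev_value, curr_value, epsilon=1e-3):
    """Map a variable's change to '+' (increasing), '-' (decreasing),
    or '0' (steady, i.e., the change is within the noise threshold)."""
    delta = curr_value - prev_value
    if delta > epsilon:
        return '+'
    if delta < -epsilon:
        return '-'
    return '0'

def qualitative_magnitude(value, landmarks, tol=1e-6):
    """Discretize a continuous value against a sorted list of landmark
    values. Landmarks split the real line into alternating open
    intervals and point values: return 2*i for the interval below
    landmark i, 2*i + 1 for landmark i itself, and 2*len(landmarks)
    for the interval above all landmarks."""
    for i, lm in enumerate(sorted(landmarks)):
        if abs(value - lm) <= tol:
            return 2 * i + 1
        if value < lm:
            return 2 * i
    return 2 * len(landmarks)

# Example: a block's x-position with one hypothetical landmark at the
# table edge (x = 0.5).
prev_x, curr_x = 0.40, 0.46
print(qualitative_direction(prev_x, curr_x))   # '+' : x is increasing
print(qualitative_magnitude(curr_x, [0.5]))    # 0   : still below the landmark
```

In the approach the abstract outlines, the landmark list starts out coarse and new landmarks are learned from experience, so the discretization and the predictive models built on top of it are refined together.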
dc.description.department | Computer Science | en |
dc.format.mimetype | application/pdf | en |
dc.identifier.uri | http://hdl.handle.net/2152/ETD-UT-2010-08-1726 | en |
dc.language.iso | eng | en |
dc.subject | Artificial intelligence | en |
dc.subject | Robotics | en |
dc.subject | Machine learning | en |
dc.subject | Reinforcement learning | en |
dc.subject | Discretization | en |
dc.subject | Qualitative learning | en |
dc.title | Autonomous qualitative learning of distinctions and actions in a developing agent | en |
dc.type.genre | thesis | en |
thesis.degree.department | Computer Sciences | en |
thesis.degree.discipline | Computer Sciences | en |
thesis.degree.grantor | University of Texas at Austin | en |
thesis.degree.level | Doctoral | en |
thesis.degree.name | Doctor of Philosophy | en |