Autonomous qualitative learning of distinctions and actions in a developing agent

dc.contributor.advisor: Kuipers, Benjamin
dc.contributor.advisor: Stone, Peter, 1971-
dc.contributor.committeeMember: Ballard, Dana
dc.contributor.committeeMember: Cohen, Leslie
dc.contributor.committeeMember: Mooney, Raymond
dc.creator: Mugan, Jonathan William
dc.date.accessioned: 2010-11-23T22:16:15Z
dc.date.available: 2010-11-23T22:16:15Z
dc.date.available: 2010-11-23T22:16:21Z
dc.date.issued: 2010-08
dc.date.submitted: August 2010
dc.date.updated: 2010-11-23T22:16:21Z
dc.description: text
dc.description.abstract: How can an agent bootstrap up from a pixel-level representation to autonomously learn high-level states and actions using only domain-general knowledge? This thesis attacks a piece of that problem: assuming the agent has a set of continuous variables describing the environment and a set of continuous motor primitives, it proposes a solution to the problem of how an agent can learn useful states and effective higher-level actions through autonomous experience with the environment. Methods exist for learning models of the environment, and methods exist for planning; however, for autonomous learning, these methods have been used almost exclusively in discrete environments. This thesis proposes to learn high-level states and actions in continuous environments by using a qualitative representation to bridge the gap between continuous and discrete variable representations. In this approach, the agent begins with a broad discretization under which it can only tell whether the value of each variable is increasing, decreasing, or remaining steady. The agent then simultaneously learns a qualitative representation (discretization) and a set of predictive models of the environment, converts those models into plans to form actions, and uses the learned actions to explore the environment. The method is evaluated using a simulated robot with realistic physics. The robot sits at a table that holds one or two blocks, as well as distractor objects that are out of reach. The agent autonomously explores the environment without being given a task. After learning, the agent is given various tasks to determine whether it has learned the necessary states and actions to complete them. The results show that the agent was able to use this method to autonomously learn to perform the tasks. (A minimal illustrative sketch of the initial qualitative abstraction appears after this record.)
dc.description.department: Computer Science
dc.format.mimetype: application/pdf
dc.identifier.uri: http://hdl.handle.net/2152/ETD-UT-2010-08-1726
dc.language.iso: eng
dc.subject: Artificial intelligence
dc.subject: Robotics
dc.subject: Machine learning
dc.subject: Reinforcement learning
dc.subject: Discretization
dc.subject: Qualitative learning
dc.title: Autonomous qualitative learning of distinctions and actions in a developing agent
dc.type.genre: thesis
thesis.degree.department: Computer Sciences
thesis.degree.discipline: Computer Sciences
thesis.degree.grantor: University of Texas at Austin
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy
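
The abstract describes an agent that starts from a coarse qualitative discretization, able only to tell whether each continuous variable is increasing, decreasing, or remaining steady. As a purely illustrative aid (this is not code from the dissertation; the function name, the epsilon noise threshold, and the "+"/"0"/"-" symbols are assumptions made here for the sketch), that initial abstraction might look like the following in Python:

    def qualitative_direction(previous, current, epsilon=1e-3):
        # Map the change in a continuous variable to a qualitative value.
        # epsilon is an assumed noise threshold below which the variable
        # is treated as steady; the dissertation's learned landmark values
        # would refine this broad discretization over time.
        delta = current - previous
        if delta > epsilon:
            return "+"   # increasing
        if delta < -epsilon:
            return "-"   # decreasing
        return "0"       # steady

    # Example: a variable rising from 0.10 to 0.15 maps to "+".
    print(qualitative_direction(0.10, 0.15))  # prints "+"

Predictive models and plans are then built over these qualitative values rather than the raw continuous state, which is what lets discrete model-learning and planning methods apply in a continuous environment.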

Access full-text files

Original bundle
Name: MUGAN-DISSERTATION.pdf
Size: 1.94 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 2.12 KB
Format: Plain Text