Hierarchical policy design for sample-efficient learning of robot table tennis through self-play

dc.contributor.advisor: Miikkulainen, Risto
dc.contributor.committeeMember: Levine, Sergey
dc.contributor.committeeMember: Sentis, Luis
dc.contributor.committeeMember: Niekum, Scott
dc.contributor.committeeMember: Mok, Aloysius
dc.creator: Mahjourian, Reza
dc.date.accessioned: 2019-02-06T17:11:03Z
dc.date.available: 2019-02-06T17:11:03Z
dc.date.created: 2018-12
dc.date.issued: 2019-02-01
dc.date.submitted: December 2018
dc.date.updated: 2019-02-06T17:11:04Z
dc.description.abstract: Training robots with physical bodies requires developing new methods and action representations that allow the learning agents to explore the space of policies efficiently. This work studies sample-efficient learning of complex policies in the context of robot table tennis. It incorporates learning into a hierarchical control framework using a model-free strategy layer (which requires complex reasoning about opponents that is difficult to do in a model-based way), model-based prediction of external objects (which are difficult to control directly with analytic control methods, but are governed by learnable and relatively simple laws of physics), and analytic controllers for the robot itself. Human demonstrations are used to train dynamics models, which, together with the analytic controller, allow any physically capable robot to play table tennis without training episodes. Using only about 7,000 demonstrated trajectories, a striking policy can hit ball targets with about 20 cm error. Self-play is used to train cooperative and adversarial strategies on top of model-based striking skills trained from human demonstrations. After only about 24,000 strikes in self-play, the agent learns how best to exploit the human dynamics models for longer cooperative games. Further experiments demonstrate that more flexible variants of the policy can discover new strikes not demonstrated by humans and achieve higher performance at the expense of lower sample efficiency. Experiments are carried out in a virtual reality environment using sensory observations that are obtainable in the real world. The high sample efficiency demonstrated in the evaluations shows that the proposed method is suitable for learning directly on physical robots without transferring models or policies from simulation. (An illustrative sketch of the three-layer hierarchy follows this record.)
dc.description.department: Computer Sciences
dc.format.mimetype: application/pdf
dc.identifier: doi:10.15781/T24J0BJ23
dc.identifier.uri: http://hdl.handle.net/2152/72812
dc.language.iso: en
dc.subject: Robotics
dc.subject: Table tennis
dc.subject: Self-play
dc.subject: Reinforcement learning
dc.subject: Hierarchical policy
dc.title: Hierarchical policy design for sample-efficient learning of robot table tennis through self-play
dc.type: Thesis
dc.type.material: text
thesis.degree.department: Computer Sciences
thesis.degree.discipline: Computer Science
thesis.degree.grantor: The University of Texas at Austin
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy
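
The abstract describes a hierarchical control framework with three layers: a model-free strategy layer, model-based prediction of the ball, and an analytic controller for the robot. To make that decomposition concrete, the following is a minimal sketch, assuming hypothetical class names, signatures, and numeric values, with a simple ballistic model standing in for the learned dynamics models; it is not the dissertation's actual implementation.

```python
# A minimal sketch of the three-layer hierarchy described in the abstract.
# All names, signatures, and numbers below are hypothetical illustrations.

import numpy as np


class StrategyLayer:
    """Model-free top layer: chooses a target for the strike (e.g., a desired
    landing position) from game-state observations. In the dissertation this
    layer is trained with self-play; here it is a fixed stub."""

    def select_target(self, observation: np.ndarray) -> np.ndarray:
        # Placeholder: always aim at a fixed point on the opponent's side
        # (x, y, z in meters; values are illustrative).
        return np.array([2.0, 0.5, 0.76])


class BallDynamicsModel:
    """Model-based middle layer: predicts the ball's future trajectory.
    The dissertation trains such models from demonstrated trajectories;
    a simple ballistic model with gravity stands in here."""

    GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2

    def predict(self, pos, vel, dt=0.01, steps=50):
        """Roll the ball state forward and return the predicted positions."""
        trajectory = []
        for _ in range(steps):
            vel = vel + self.GRAVITY * dt
            pos = pos + vel * dt
            trajectory.append(pos.copy())
        return trajectory


class AnalyticController:
    """Analytic bottom layer: computes a paddle setpoint that intercepts the
    predicted ball and sends it toward the strategy layer's target."""

    def paddle_setpoint(self, strike_point, target):
        # Illustrative heuristic: place the paddle at the strike point with
        # its normal pointing from the strike point toward the target.
        normal = target - strike_point
        return strike_point, normal / np.linalg.norm(normal)


if __name__ == "__main__":
    strategy = StrategyLayer()
    dynamics = BallDynamicsModel()
    controller = AnalyticController()

    ball_pos = np.array([-1.0, 0.2, 1.1])  # current ball position (m)
    ball_vel = np.array([4.0, 0.0, 1.0])   # current ball velocity (m/s)

    target = strategy.select_target(observation=np.zeros(6))
    trajectory = dynamics.predict(ball_pos, ball_vel)
    strike_point = trajectory[25]  # illustrative choice of interception time
    position, normal = controller.paddle_setpoint(strike_point, target)
    print("paddle position:", position, "paddle normal:", normal)
```

In this layering, only the strategy layer needs reinforcement learning; the dynamics model can be fit from demonstration data, and the bottom layer stays analytic, which is what makes the overall approach sample-efficient.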

Access full-text files

Original bundle
Name: MAHJOURIAN-DISSERTATION-2018.pdf
Size: 7.69 MB
Format: Adobe Portable Document Format

License bundle
Name: LICENSE.txt
Size: 1.84 KB
Format: Plain Text