Using natural language to aid task specification in sequential decision making problems

dc.contributor.advisor: Mooney, Raymond J. (Raymond Joseph)
dc.contributor.advisor: Niekum, Scott David
dc.contributor.committeeMember: Stone, Peter
dc.contributor.committeeMember: Artzi, Yoav
dc.creator: Goyal, Prasoon
dc.creator.orcid: 0000-0003-3121-1241
dc.date.accessioned: 2022-11-21T22:37:44Z
dc.date.available: 2022-11-21T22:37:44Z
dc.date.created: 2022-08
dc.date.issued: 2022-09-12
dc.date.submitted: August 2022
dc.date.updated: 2022-11-21T22:37:45Z
dc.description.abstract: Building intelligent agents that can help humans accomplish everyday tasks, such as a personal robot at home or a robot in a work environment, is a long-standing goal of artificial intelligence. One of the requirements for such general-purpose agents is the ability to teach them new tasks or skills relatively easily. Common approaches to teaching agents new skills include reinforcement learning (RL) and imitation learning (IL). However, specifying the task to the learning agent, i.e., designing effective reward functions for reinforcement learning and providing demonstrations for imitation learning, is often cumbersome and time-consuming. Further, designing reward functions and providing a set of demonstrations that sufficiently disambiguates the desired task may not be particularly accessible to end users without a technical background. In this dissertation, we explore using natural language as an auxiliary signal to aid task specification, reducing the burden on the end user. To make reward design easier, we propose a novel framework that generates language-based rewards, in addition to the extrinsic rewards from the environment, for faster policy training with RL. We show that with our framework, very simple extrinsic rewards along with a natural language description of the task are sufficient to teach new tasks to the learning agent. To ameliorate the problem of providing demonstrations, we propose a new setting that enables an agent to learn a new task without demonstrations in an IL setting, given a demonstration of a related task and a natural language description of the difference between the desired task and the demonstrated task. The techniques we develop for this setting would enable teaching multiple related tasks to learning agents by providing a small set of demonstrations and several natural language descriptions, thereby reducing the burden of providing demonstrations for each task.
The primary contributions of this dissertation include novel problem settings, benchmarks, and algorithms that allow natural language to be used as an auxiliary modality for task specification in RL and IL. We believe this dissertation will serve as a foundation for future research along these lines, making progress toward intelligent agents that end users can conveniently teach new tasks.
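The language-based reward idea summarized in the abstract can be illustrated with a minimal sketch: a sparse extrinsic reward is augmented with an auxiliary reward from a model that scores how well the agent's recent behavior matches the natural language task description. All names below are hypothetical, and the keyword-overlap scorer is a trivial stand-in for the learned language-grounding models the dissertation actually develops.

```python
# Toy sketch of language-based reward shaping (assumed setup, not the
# dissertation's actual method): extrinsic reward + weighted auxiliary
# reward from a language-relatedness score.

def language_reward(description: str, recent_actions: list[str]) -> float:
    """Score in [0, 1]: fraction of recent actions mentioned in the task
    description. A trivial stand-in for a learned language-grounding model."""
    words = set(description.lower().split())
    if not recent_actions:
        return 0.0
    matched = sum(1 for a in recent_actions if a.lower() in words)
    return matched / len(recent_actions)

def shaped_reward(extrinsic: float, description: str,
                  recent_actions: list[str], weight: float = 0.1) -> float:
    """Combine the sparse extrinsic reward with the language-based
    auxiliary reward, so the agent gets denser feedback during training."""
    return extrinsic + weight * language_reward(description, recent_actions)

# Example: the agent was instructed in natural language.
desc = "pick the red key then open the door"
r1 = shaped_reward(0.0, desc, ["pick", "open"])  # auxiliary signal only
r2 = shaped_reward(1.0, desc, ["jump", "jump"])  # extrinsic goal reached
```

In a real RL loop, `shaped_reward` would replace the raw environment reward when updating the policy; the auxiliary term provides a denser learning signal, which is why simple extrinsic rewards suffice once language is available.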
dc.description.department: Computer Sciences
dc.format.mimetype: application/pdf
dc.identifier.uri: https://hdl.handle.net/2152/116769
dc.identifier.uri: http://dx.doi.org/10.26153/tsw/43664
dc.language.iso: en
dc.subject: Reinforcement learning
dc.subject: Natural language
dc.subject: Imitation learning
dc.subject: Language grounding
dc.title: Using natural language to aid task specification in sequential decision making problems
dc.type: Thesis
dc.type.material: text
thesis.degree.department: Computer Sciences
thesis.degree.discipline: Computer Science
thesis.degree.grantor: The University of Texas at Austin
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy

Access full-text files

Original bundle (1 file):
- GOYAL-DISSERTATION-2022.pdf (5.29 MB, Adobe Portable Document Format)

License bundle (2 files):
- PROQUEST_LICENSE.txt (4.45 KB, Plain Text)
- LICENSE.txt (1.84 KB, Plain Text)