Reducing sampling error in batch temporal difference learning
dc.contributor.advisor | Stone, Peter, 1971- | |
dc.creator | Pavse, Brahma Suneil | |
dc.date.accessioned | 2021-09-20T16:59:06Z | |
dc.date.available | 2021-09-20T16:59:06Z | |
dc.date.created | 2020-05 | |
dc.date.issued | 2020-05-05 | |
dc.date.submitted | May 2020 | |
dc.date.updated | 2021-09-20T16:59:06Z | |
dc.description.abstract | Temporal difference (TD) learning is one of the main foundations of modern reinforcement learning. This thesis studies the use of TD(0), a canonical TD algorithm, to estimate the value function of a given evaluation policy from a batch of data. In this batch setting, we show that TD(0) may converge to an inaccurate value function because the update following an action is weighted according to the number of times that action occurred in the batch -- not the true probability of the action under the evaluation policy. To address this limitation, we introduce policy sampling error corrected-TD(0) (PSEC-TD(0)). PSEC-TD(0) first estimates the empirical distribution of actions in each state in the batch and then uses importance sampling to correct for the mismatch between the empirical weighting and the correct weighting for updates following each action. We refine the concept of a certainty-equivalence estimate and argue that PSEC-TD(0) converges to a more desirable fixed point than TD(0) for a fixed batch of data. Finally, we conduct a thorough empirical evaluation of PSEC-TD(0) on three batch value function learning tasks in a variety of settings and show that PSEC-TD(0) produces value function estimates with lower mean squared error than the standard TD(0) algorithm. (An illustrative sketch of this update appears after the record below.) | |
dc.description.department | Computer Science | |
dc.format.mimetype | application/pdf | |
dc.identifier.uri | https://hdl.handle.net/2152/87909 | |
dc.identifier.uri | http://dx.doi.org/10.26153/tsw/14853 | |
dc.language.iso | en | |
dc.subject | Reinforcement learning | |
dc.subject | Machine learning | |
dc.subject | Artificial intelligence | |
dc.subject | Temporal difference learning | |
dc.subject | Importance sampling | |
dc.subject | Off-policy learning | |
dc.subject | Value function learning | |
dc.subject | Batch machine learning | |
dc.title | Reducing sampling error in batch temporal difference learning | |
dc.type | Thesis | |
dc.type.material | text | |
thesis.degree.department | Computer Sciences | |
thesis.degree.discipline | Computer Science | |
thesis.degree.grantor | The University of Texas at Austin | |
thesis.degree.level | Masters | |
thesis.degree.name | Master of Science in Computer Sciences |
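Based on the abstract's description, the following is a minimal tabular sketch of the PSEC-TD(0) update in Python. It is an illustration under stated assumptions, not the thesis's actual implementation: the batch format, the function name psec_td0, and the hyperparameter values are hypothetical, and the evaluation policy is assumed to be available as a probability function pi_e(a, s).

```python
from collections import defaultdict

def psec_td0(batch, pi_e, gamma=0.99, alpha=0.05, sweeps=200):
    """Tabular PSEC-TD(0) sketch (hypothetical interface).

    batch : list of (s, a, r, s_next, done) transitions
    pi_e  : pi_e(a, s) -> probability of action a in state s
            under the evaluation policy
    """
    # Estimate the empirical (maximum-likelihood) behavior policy
    # from the batch: pi_hat(a|s) = count(s, a) / count(s).
    sa_counts = defaultdict(int)
    s_counts = defaultdict(int)
    for s, a, _, _, _ in batch:
        sa_counts[(s, a)] += 1
        s_counts[s] += 1

    V = defaultdict(float)
    for _ in range(sweeps):
        for s, a, r, s_next, done in batch:
            # PSEC correction: reweight the update from the action's
            # empirical frequency in the batch to its true probability
            # under the evaluation policy. Counts are >= 1 for any
            # (s, a) that appears in the batch, so pi_hat > 0.
            pi_hat = sa_counts[(s, a)] / s_counts[s]
            rho = pi_e(a, s) / pi_hat
            # Standard TD(0) target, scaled by the PSEC weight.
            target = r + (0.0 if done else gamma * V[s_next])
            V[s] += alpha * rho * (target - V[s])
    return V
```

The only difference from batch TD(0) is the multiplicative factor rho: with rho fixed at 1 the loop reduces to ordinary TD(0), while the importance-sampling ratio corrects the per-action update weighting that the abstract identifies as the source of sampling error.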