Natural-language video description with deep recurrent neural networks
dc.contributor.advisor | Mooney, Raymond J. (Raymond Joseph) | |
dc.contributor.committeeMember | Grauman, Kristen | |
dc.contributor.committeeMember | Stone, Peter | |
dc.contributor.committeeMember | Saenko, Kate | |
dc.contributor.committeeMember | Darrell, Trevor | |
dc.creator | Venugopalan, Subhashini | |
dc.creator.orcid | 0000-0003-3729-8456 | |
dc.date.accessioned | 2017-12-13T15:43:14Z | |
dc.date.available | 2017-12-13T15:43:14Z | |
dc.date.created | 2017-08 | |
dc.date.issued | 2017-08 | |
dc.date.submitted | August 2017 | |
dc.date.updated | 2017-12-13T15:43:14Z | |
dc.description.abstract | For most people, watching a brief video and describing what happened (in words) is an easy task. For machines, extracting meaning from video pixels and generating a sentence description is a very complex problem. The goal of this thesis is to develop models that can automatically generate natural language descriptions for events in videos. It presents several approaches to automatic video description by building on recent advances in “deep” machine learning. The techniques presented in this thesis view the task of video description as akin to machine translation, treating the video domain as a source “language” and using deep neural network architectures to “translate” videos to text. Specifically, I develop video captioning techniques using a unified deep neural network with both convolutional and recurrent structure, modeling the temporal elements in videos and language with deep recurrent neural networks. In my initial approach, I adapt a model that can learn from paired images and captions to transfer knowledge from this auxiliary task to generate descriptions for short video clips. Next, I present an end-to-end deep network that can jointly model a sequence of video frames and a sequence of words. To further improve grammaticality and descriptive quality, I also propose methods to integrate linguistic knowledge from plain text corpora. Additionally, I show that such linguistic knowledge can help describe novel objects unseen in paired image/video-caption data. Finally, moving beyond short video clips, I present methods to process longer multi-activity videos, specifically to jointly segment and describe coherent event sequences in movies. | |
dc.description.department | Computer Science | |
dc.format.mimetype | application/pdf | |
dc.identifier | doi:10.15781/T2QR4P68H | |
dc.identifier.uri | http://hdl.handle.net/2152/62987 | |
dc.language.iso | en | |
dc.subject | Video | |
dc.subject | Captioning | |
dc.subject | Description | |
dc.subject | LSTM | |
dc.subject | RNN | |
dc.subject | Recurrent | |
dc.subject | Neural networks | |
dc.subject | Image captioning | |
dc.subject | Video captioning | |
dc.subject | Language and vision | |
dc.title | Natural-language video description with deep recurrent neural networks | |
dc.type | Thesis | |
dc.type.material | text | |
thesis.degree.department | Computer Sciences | |
thesis.degree.discipline | Computer Science | |
thesis.degree.grantor | The University of Texas at Austin | |
thesis.degree.level | Doctoral | |
thesis.degree.name | Doctor of Philosophy |
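
The abstract above outlines an encoder-decoder design: a convolutional network extracts per-frame features, a recurrent network (LSTM) reads the frame sequence, and a second recurrent pass generates the word sequence. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the thesis's actual implementation; the feature dimension, vocabulary size, and all class/variable names are assumptions chosen for clarity.

```python
# Minimal sketch (not the dissertation's code): encode per-frame CNN features
# with one LSTM, then decode a caption word-by-word with a second LSTM.
import torch
import torch.nn as nn

class VideoCaptioner(nn.Module):
    def __init__(self, feat_dim=2048, hidden_dim=512, vocab_size=10000, embed_dim=512):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)   # reads frame features
        self.embed = nn.Embedding(vocab_size, embed_dim)                  # word embeddings
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)   # generates the sentence
        self.out = nn.Linear(hidden_dim, vocab_size)                      # scores over vocabulary

    def forward(self, frame_feats, captions):
        # frame_feats: (batch, num_frames, feat_dim) per-frame CNN features
        # captions:    (batch, caption_len) word indices (teacher forcing)
        _, (h, c) = self.encoder(frame_feats)        # summarize the video into a hidden state
        words = self.embed(captions)                 # (batch, caption_len, embed_dim)
        dec_out, _ = self.decoder(words, (h, c))     # decode conditioned on the video summary
        return self.out(dec_out)                     # (batch, caption_len, vocab_size) logits

# Usage with random tensors standing in for real frame features and captions.
model = VideoCaptioner()
feats = torch.randn(2, 30, 2048)            # 2 clips, 30 frames each
caps = torch.randint(0, 10000, (2, 12))     # 2 captions, 12 tokens each
logits = model(feats, caps)                 # shape: (2, 12, 10000)
```

At inference time such a model would instead feed its own previously generated word back into the decoder at each step (e.g., greedy or beam search), which is the standard sequence-to-sequence decoding setup the abstract alludes to.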
Access full-text files

Original bundle (1 of 1)
- Name: VENUGOPALAN-DISSERTATION-2017.pdf
- Size: 16.7 MB
- Format: Adobe Portable Document Format