Understanding & predicting the skills needed to answer a visual question
dc.contributor.advisor | Gurari, Danna
dc.contributor.committeeMember | Karadkar, Unmil P
dc.creator | Zeng, Xiaoyu, M.S. in Information Studies
dc.date.accessioned | 2019-09-16T20:23:35Z
dc.date.available | 2019-09-16T20:23:35Z
dc.date.created | 2019-05
dc.date.issued | 2019-06-20
dc.date.submitted | May 2019
dc.date.updated | 2019-09-16T20:23:35Z
dc.description.abstract | We proposed a method to automatically identify the cognitive skills needed to perform a visual question answering (VQA) task. Focusing on a subset of the VizWiz 1.0 and VQA 2.0 datasets, we collected labels for five skill categories, extracted multimodal features from images and their corresponding questions, and trained a recurrent neural network with LSTM encoders to perform binary multi-label classification for the two main cognitive skills needed to answer a visual question: text recognition and color recognition. Our results demonstrated the potential of using a skill predictor to improve current visual question answering frameworks. This project made two main contributions. First, we provided an in-depth analysis of our data and showed that VQA 2.0, a popular traditional benchmark dataset, cannot sufficiently model the information needs of visually impaired users in the context of visual question answering for accessibility. Second, we developed a skill prediction algorithm that can potentially help label and route tasks for automated or human-in-the-loop vision-assistive systems.
dc.description.department | Information
dc.format.mimetype | application/pdf
dc.identifier.uri | https://hdl.handle.net/2152/75855
dc.identifier.uri | http://dx.doi.org/10.26153/tsw/2957
dc.language.iso | en
dc.subject | Visual question answering
dc.subject | Multimodal machine learning
dc.subject | Accessibility
dc.title | Understanding & predicting the skills needed to answer a visual question
dc.title.alternative | Understanding and predicting the skills needed to answer a visual question
dc.type | Thesis
dc.type.material | text
thesis.degree.department | Information
thesis.degree.discipline | Information Studies
thesis.degree.grantor | The University of Texas at Austin
thesis.degree.level | Masters
thesis.degree.name | Master of Science in Information Studies
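The abstract describes binary multi-label classification over fused image and question features, with one independent decision per skill. A minimal sketch of that multi-label head in plain Python, with hypothetical feature sizes and randomly initialized logistic units standing in for the thesis's trained LSTM encoders (none of these names or dimensions come from the thesis itself):

```python
import math
import random

random.seed(0)
IMG_DIM, TXT_DIM = 4, 3  # hypothetical image/question feature sizes
SKILLS = ["text recognition", "color recognition"]

# Multi-label means one independent logistic unit per skill (sigmoid,
# not softmax): a question may require both skills, or neither.
weights = [[random.gauss(0.0, 0.1) for _ in range(IMG_DIM + TXT_DIM)]
           for _ in SKILLS]
biases = [0.0 for _ in SKILLS]

def predict_skills(img_feat, txt_feat, threshold=0.5):
    """Fuse features by concatenation, then score each skill independently."""
    x = list(img_feat) + list(txt_feat)
    result = {}
    for skill, ws, b in zip(SKILLS, weights, biases):
        logit = sum(w * v for w, v in zip(ws, x)) + b
        prob = 1.0 / (1.0 + math.exp(-logit))
        result[skill] = (prob, prob >= threshold)
    return result
```

In the thesis's setting the feature vectors would come from image and question encoders; routing then follows from the thresholded predictions, e.g. sending questions flagged as needing text recognition to an OCR-capable pipeline.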
File: ZENG-MASTERSREPORT-2019.pdf (5.51 MB, Adobe Portable Document Format)