Understanding & predicting the skills needed to answer a visual question

Date

2019-06-20

Authors

Zeng, Xiaoyu, M.S. in Information Studies

Abstract

We proposed a method to automatically identify the cognitive skills needed to perform a visual question answering (VQA) task. Focusing on a subset of the VizWiz 1.0 and VQA 2.0 data, we collected labels for five skill categories, extracted multimodal features from images and their corresponding questions, and trained a recurrent neural network with LSTM encoders to perform binary multi-label classification for the two main cognitive skills needed to answer a visual question: text recognition and color recognition. Our results demonstrated the potential of using a skill predictor to improve current visual question answering frameworks. This project made two main contributions. First, we provided an in-depth analysis of our data and highlighted that VQA 2.0, a popular traditional benchmark dataset, cannot sufficiently model the information needs of visually impaired users in the context of visual question answering for accessibility. Second, we developed a skill prediction algorithm that can potentially help to label and route tasks for automated or human-in-the-loop systems of vision-assistive technologies.
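The architecture described above (an LSTM question encoder fused with image features, feeding a binary multi-label skill classifier) can be illustrated with a minimal sketch. This is not the thesis's actual code; the layer sizes, the concatenation-based fusion, the feature dimensions, and the class/variable names below are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the thesis's implementation): encode the
# question with an LSTM, fuse it with precomputed image features, and output
# one independent logit per skill (e.g., text recognition, color recognition).
import torch
import torch.nn as nn

class SkillPredictor(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=512,
                 image_feat_dim=2048, num_skills=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim + image_feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_skills),  # one logit per skill
        )

    def forward(self, question_tokens, image_features):
        # question_tokens: (batch, seq_len) integer token ids
        # image_features: (batch, image_feat_dim) precomputed CNN features
        embedded = self.embedding(question_tokens)
        _, (hidden, _) = self.lstm(embedded)       # final hidden state
        fused = torch.cat([hidden[-1], image_features], dim=1)
        return self.classifier(fused)              # raw logits, one per skill

# Binary multi-label classification uses a sigmoid per skill rather than a
# softmax over skills, so each skill is predicted independently.
model = SkillPredictor(vocab_size=10000)
criterion = nn.BCEWithLogitsLoss()
tokens = torch.randint(1, 10000, (4, 12))          # toy batch of questions
img_feats = torch.randn(4, 2048)                   # toy image features
labels = torch.tensor([[1., 0.], [0., 1.], [1., 1.], [0., 0.]])
loss = criterion(model(tokens, img_feats), labels)
```

Using `BCEWithLogitsLoss` (rather than cross-entropy over mutually exclusive classes) reflects the multi-label setting: a single visual question may require both text recognition and color recognition, or neither.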
