Comparing human and machine attention in visuomotor tasks

Date

2021-07-24

Authors

Guo, Sihang

Abstract

The emergence of deep learning has transformed how researchers approach complex machine perception problems and has produced models with (super)human-level performance on a variety of perception and motor tasks. Although deep learning was originally inspired by the human visual system, its methods have only recently been turned back toward understanding human perception: in both vision and language tasks, regions of the cerebral cortex have been shown to share learned representations with the layers of deep models. In this thesis, we take a step further into visuomotor decision-making tasks, exploring the possibility of using deep reinforcement learning (RL) algorithms to model human perceptual representations. By comparing the learned representations of humans and RL models in terms of attention, we investigate how learning and different hyperparameters affect the resulting attention similarity. We find a positive correlation between RL performance and attention similarity and draw observations about human visuomotor behavior from the comparison.
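The abstract does not spell out how attention similarity is measured. As a rough illustration of the kind of comparison it describes, the sketch below correlates a human gaze heatmap with an RL saliency map; the map names, the 84x84 resolution, and the use of Pearson correlation are assumptions for illustration, not the thesis's actual method.

```python
import numpy as np

def normalize(attn: np.ndarray) -> np.ndarray:
    """Normalize a non-negative attention map so it sums to 1."""
    attn = np.clip(attn, 0.0, None)
    total = attn.sum()
    return attn / total if total > 0 else np.full_like(attn, 1.0 / attn.size)

def attention_similarity(human_map: np.ndarray, model_map: np.ndarray) -> float:
    """Pearson correlation between two flattened, normalized attention maps."""
    h = normalize(human_map).ravel()
    m = normalize(model_map).ravel()
    return float(np.corrcoef(h, m)[0, 1])

# Placeholder maps of matching spatial resolution (illustrative only).
rng = np.random.default_rng(0)
human_gaze_map = rng.random((84, 84))   # e.g., a fixation-density heatmap
rl_saliency_map = rng.random((84, 84))  # e.g., gradient-based saliency of the policy
print(attention_similarity(human_gaze_map, rl_saliency_map))
```

In this framing, the abstract's reported correlation between RL performance and attention similarity would correspond to computing such a similarity score per agent and relating it to that agent's task performance.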
