Cross-domain adaptation and geometric data synthesis for near-eye to remote gaze tracking
Abstract
Learning-based gaze estimation with deep neural networks has shown remarkable progress in directly mapping images of human subjects to their gaze directions. In Human-Robot Interaction, it is desirable to perform such estimates in relatively unconstrained settings with diverse subject appearances, for which data collection and ground-truth annotation remain costly. This dependence on data continues to limit the generalization of gaze models to unseen situations. This work contributes a new data adaptation approach that incorporates the comparatively inexpensive annotated gaze data from head-mounted eye trackers into the training of a remote gaze estimator, employing classic 3D computer vision as a technique for data synthesis. Experimentally, we use TEyeD as the source domain in conjunction with UnityEyes as the intermediate domain, and evaluate their generalization and adaptation performance on MPIIGaze. Results demonstrate that a head-mounted gaze dataset can be adapted across domains to improve the generalization of gaze models trained on synthetic gaze data, yielding a method that can complement other state-of-the-art gaze adaptation approaches.
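The abstract does not spell out the geometric synthesis step, so the following is only an illustrative sketch of the kind of classic 3D geometry such a pipeline relies on: re-expressing a gaze direction annotated in a near-eye (head-mounted) camera frame in a remote camera's frame via a rotation, then converting it to the pitch-yaw angles commonly used as labels in remote gaze datasets such as MPIIGaze. The function names and the extrinsic rotation R are hypothetical placeholders, not the authors' actual method.

import numpy as np

def gaze_to_pitch_yaw(g):
    # Convert a 3D gaze direction to (pitch, yaw) in radians, using the
    # common convention of a camera looking along -z.
    g = g / np.linalg.norm(g)
    pitch = np.arcsin(-g[1])        # vertical angle
    yaw = np.arctan2(-g[0], -g[2])  # horizontal angle
    return pitch, yaw

def transfer_gaze(g_near, R_near_to_remote):
    # Rotate a gaze direction from the near-eye camera frame into the
    # remote camera frame; pure directions are translation-invariant,
    # so only the rotational part of the extrinsics is needed.
    g_remote = R_near_to_remote @ g_near
    return g_remote / np.linalg.norm(g_remote)

# Example with an assumed 15-degree yaw offset between the two cameras.
g_near = np.array([0.1, -0.2, -0.97])   # annotated gaze in near-eye frame
theta = np.deg2rad(15.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
print(gaze_to_pitch_yaw(transfer_gaze(g_near, R)))

In practice, such a transform would let per-frame gaze annotations from a head-mounted tracker be relabeled for a synthesized remote viewpoint, provided the relative camera pose is known or estimated.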