Robust 3-D Gaze Estimation via Data Optimization and Saliency Aggregation for Mobile Eye-Tracking Systems
Published in | IEEE Transactions on Instrumentation and Measurement, Vol. 70, pp. 1-10 |
---|---|
Main Authors | , , |
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2021 |
Subjects | |
Summary | In order to predict 3-D gaze points precisely, calibration is required for each subject before first use of a mobile gaze-tracking system. However, traditional calibration methods typically require the user to stare at predefined targets in the scene, which is troublesome and time-consuming. In this study, we propose a novel method that removes explicit user calibration and achieves robust 3-D gaze estimation over a room-scale area. The proposed framework treats salient regions in the scene as candidate 3-D locations of gaze points. To improve the efficiency of predicting 3-D gaze from visual saliency, a bag-of-words algorithm is adopted to eliminate redundant scene images based on their similarity. After this elimination, saliency maps are generated from the remaining scene images, and the geometric relationship between the scene and eye cameras is obtained by aggregating 3-D salient targets with the eye's visual directions. Finally, we calculate the 3-D point of regard (PoR) by utilizing the 3-D structure of the scene. The experimental results indicate that our method enhances the reliability of saliency maps and achieves promising performance in 3-D gaze estimation across different subjects. |
ISSN | 0018-9456, 1557-9662 |
DOI | 10.1109/TIM.2021.3065437 |
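As an illustration of the redundant-frame elimination step described in the summary, the sketch below builds bag-of-visual-words histograms from ORB descriptors and drops scene frames whose histograms are too similar to ones already kept. This is only a sketch under stated assumptions, not the authors' implementation; the vocabulary size, similarity threshold, and function names are all illustrative choices.

```python
# Illustrative sketch (not the authors' code): drop near-duplicate scene frames by
# comparing bag-of-visual-words histograms built from ORB descriptors.
import cv2
import numpy as np

def bow_histograms(frames, vocab_size=64):
    """Return one L1-normalized visual-word histogram per frame."""
    orb = cv2.ORB_create()
    descs = []
    for img in frames:
        _, d = orb.detectAndCompute(img, None)
        descs.append(d.astype(np.float32) if d is not None else np.empty((0, 32), np.float32))
    all_desc = np.vstack([d for d in descs if len(d)])
    # Build the visual vocabulary with k-means (Euclidean clustering of binary
    # ORB descriptors is a simplification used here for brevity).
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, centers = cv2.kmeans(all_desc, vocab_size, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    hists = []
    for d in descs:
        h = np.zeros(vocab_size, np.float32)
        if len(d):
            # Assign each descriptor to its nearest visual word and count occurrences.
            words = ((d[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(axis=1)
            np.add.at(h, words, 1.0)
            h /= h.sum()
        hists.append(h)
    return hists

def drop_redundant(frames, sim_thresh=0.9):
    """Keep a frame only if its histogram differs enough (cosine similarity
    below sim_thresh) from every frame already kept."""
    kept, kept_hists = [], []
    for img, h in zip(frames, bow_histograms(frames)):
        cos = lambda a, b: float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
        if all(cos(h, k) < sim_thresh for k in kept_hists):
            kept.append(img)
            kept_hists.append(h)
    return kept
```

The final step in the summary, computing the 3-D PoR from the scene's 3-D structure, can be pictured as intersecting the gaze ray with the reconstructed scene. The following minimal sketch assumes the eye-to-scene extrinsics (R_es, t_es) and a scene point cloud are already available; these names and the tolerance max_perp are hypothetical.

```python
# Illustrative sketch (not the authors' code): intersect the gaze ray with the
# scene's 3-D structure (here a point cloud) to obtain the 3-D point of regard.
import numpy as np

def por_from_gaze_ray(origin_eye, dir_eye, R_es, t_es, scene_points, max_perp=0.02):
    """R_es, t_es map eye-camera coordinates into scene coordinates (assumed
    known, e.g. from the saliency-based aggregation step)."""
    o = R_es @ origin_eye + t_es                           # ray origin in the scene frame
    d = R_es @ dir_eye
    d = d / np.linalg.norm(d)                              # unit gaze direction
    v = scene_points - o                                   # vectors to every scene point
    along = v @ d                                          # signed distance along the ray
    perp = np.linalg.norm(v - np.outer(along, d), axis=1)  # distance from each point to the ray
    hits = np.where((along > 0) & (perp < max_perp))[0]    # points lying close to the ray
    return None if hits.size == 0 else scene_points[hits[np.argmin(along[hits])]]
```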