Natural Grasp Intention Recognition Based on Gaze in Human-Robot Interaction
Published in: IEEE Journal of Biomedical and Health Informatics, Vol. 27, No. 4, pp. 2059-2070
Main Authors:
Format: Journal Article
Language: English
Published: IEEE (Institute of Electrical and Electronics Engineers), United States, 01.04.2023
Subjects:
Summary: Objective: While neuroscience research has established a link between vision and intention, studies on gaze data features for intention recognition are absent. The majority of existing gaze-based intention recognition approaches rely on deliberate long-term fixation and suffer from insufficient accuracy. To address the lack of features and the insufficient accuracy of previous studies, the primary objective of this study is to suppress noise in human gaze data and extract useful features for recognizing grasp intention. Methods: We conduct gaze movement evaluation experiments to investigate the characteristics of gaze motion. Based on the findings, the target-attracted gaze movement model (TAGMM) is proposed as a quantitative description of gaze movement. A Kalman filter (KF) is used to reduce the noise in the gaze data based on TAGMM. We conduct gaze-based natural grasp intention recognition evaluation experiments to collect the subjects' gaze data. Four types of features describing gaze point dispersion ($f_{var}$), gaze point movement ($f_{gm}$), head movement ($f_{hm}$), and distance from the gaze points to objects ($f_{d_j}$) are then proposed to recognize the subject's grasp intentions. With the proposed features, we perform intention recognition experiments employing various classifiers and compare the results with different methods. Results: The statistical analysis reveals that the proposed features differ significantly across intentions, offering the possibility of employing these features to recognize grasp intentions. We demonstrate the intention recognition performance utilizing the TAGMM and the proposed features in within-subject and cross-subject experiments. The results indicate that the proposed method can recognize the intention with accuracy improvements of 44.26% (within-subject) and 30.67% (cross-subject) over the fixation-based method. The proposed method also consumes less time (34.87 ms) to recognize the intention than the fixation-based method (about 1 s). Conclusion: This work introduces a novel TAGMM for modeling gaze movement and a variety of practical features for recognizing grasp intentions. Experiments confirm the effectiveness of our approach. Significance: The proposed TAGMM is capable of modeling gaze movements and can be utilized to process gaze data, and the proposed features can reveal the user's intentions. These results contribute to the development of gaze-based human-robot interaction.
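The abstract outlines a two-stage pipeline: Kalman-filtered smoothing of raw gaze points, then extraction of the four feature families ($f_{var}$, $f_{gm}$, $f_{hm}$, $f_{d_j}$) for classification. The sketch below illustrates that pipeline under stated assumptions only: the constant-velocity state model, the noise parameters, and the function names `kalman_smooth_gaze` and `grasp_features` are all hypothetical and do not reproduce the paper's TAGMM formulation or its exact feature definitions.

```python
# Minimal sketch of a gaze-smoothing + feature-extraction pipeline.
# ASSUMPTIONS: constant-velocity KF state model and simple feature
# analogues; the paper's TAGMM-based filter is not reproduced here.
import numpy as np

def kalman_smooth_gaze(points, q=1e-3, r=1e-2, dt=1.0 / 60):
    """Smooth a (T, 2) array of gaze points with a constant-velocity KF."""
    F = np.eye(4)                       # state transition for [x, y, vx, vy]
    F[0, 2] = F[1, 3] = dt
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0             # we observe position only
    Q = q * np.eye(4)                   # process noise covariance (assumed)
    R = r * np.eye(2)                   # measurement noise covariance (assumed)
    x = np.array([points[0, 0], points[0, 1], 0.0, 0.0])
    P = np.eye(4)
    out = np.empty_like(points, dtype=float)
    for t, z in enumerate(points):
        x = F @ x                       # predict
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        x = x + K @ (z - H @ x)         # update with the new gaze sample
        P = (np.eye(4) - K @ H) @ P
        out[t] = x[:2]
    return out

def grasp_features(gaze, head, objects):
    """Illustrative analogues of the abstract's four feature families."""
    f_var = gaze.var(axis=0).sum()                       # gaze-point dispersion
    f_gm = np.linalg.norm(np.diff(gaze, axis=0), axis=1).sum()  # gaze movement
    f_hm = np.linalg.norm(np.diff(head, axis=0), axis=1).sum()  # head movement
    f_d = [np.linalg.norm(gaze - obj, axis=1).mean()     # mean distance from
           for obj in objects]                           # gaze to each object j
    return np.array([f_var, f_gm, f_hm, *f_d])
```

A feature vector from `grasp_features` could then be passed to any off-the-shelf classifier, consistent with the abstract's evaluation of multiple classifiers.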
ISSN: 2168-2194 (print), 2168-2208 (electronic)
DOI: 10.1109/JBHI.2023.3238406