Effects of Self-Learning and Exploration for XR-based Interactions

Bibliographic Details
Published in: Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Main Authors: Ghasemi, Yalda; Chattopadhyay, Debaleena; Jeong, Heejin; Kim, Hyungil; Huang, Jida
Format: Journal Article
Language: English
Published: 09.09.2024
Summary: This research explores learning trends over time for multimodal gaze-based interactions in tasks involving the movement of augmented objects within extended reality (XR) environments. The study employs three interaction techniques, including two multimodal gaze-based approaches, and compares them with a unimodal hand-based interaction. The underlying hypothesis posits that gaze-based interactions outperform other modalities, promising improved performance, lower learnability rates, and enhanced efficiency. These assertions serve as the foundation for investigating the dynamics of self-learning and exploration within XR-based environments. To this end, the study addresses questions related to the temporal evolution of learnability, post-learning efficiency, and users' subjective preferences regarding these interaction modalities. The results show that gaze-based interactions enhance performance, exhibit a lower learnability rate, and demonstrate higher efficiency compared to the unimodal hand-based interaction. These findings contribute to the design and refinement of more effective, user-friendly, and adaptive XR user interfaces.
ISSN: 1071-1813; 2169-5067
DOI: 10.1177/10711813241265075