Comparison of Eye-Based and Controller-Based Selection in Virtual Reality
Published in: International Journal of Human-Computer Interaction, Vol. 37, No. 5, pp. 484-495
Format: Journal Article
Language: English
Published: Norwood: Taylor & Francis, 16.03.2021 (Lawrence Erlbaum Associates, Inc.)
Summary: Eye tracking, or eye pointing, in head-mounted displays enables new input modalities for point-select tasks. This paper explores Fitts' law modeling of eye-based selection in a virtual reality environment, with controller-based input providing a baseline for two types of eye-based interaction (dwell and physical trigger) in both three-dimensional and two-dimensional environments. Overall, controller-based interaction offered the highest throughput and the best accuracy, and was preferred by most participants; eye-trigger interaction performed roughly between the other two. However, the performance differences between the three interaction modes became smaller for three-dimensional targets, where the eye-based interactions were slightly better in terms of accuracy. In general, eye-based interaction still has a long way to go before becoming a mainstream interaction modality in virtual reality, given the absence of more stable and precise eye-tracking devices with better calibration methods. Still, in some specific virtual reality environments, eye-based interaction has irreplaceable potential.
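The throughput figure compared across the three interaction modes follows the standard Fitts' law formulation (effective index of difficulty divided by movement time). A minimal sketch of that computation; the function name and example values are illustrative, not taken from the paper:

```python
import math

def effective_throughput(distance, effective_width, movement_time):
    """Fitts' throughput in bits/s, using the Shannon formulation.

    distance:        amplitude of the movement to the target
    effective_width: target width adjusted for the observed endpoint spread
    movement_time:   mean selection time in seconds
    """
    # Effective index of difficulty (bits), Shannon formulation
    id_e = math.log2(distance / effective_width + 1)
    return id_e / movement_time

# Hypothetical trial: 0.4 m reach, 0.05 m effective width, 0.9 s mean time
tp = effective_throughput(0.4, 0.05, 0.9)
print(round(tp, 2))  # ≈ 3.52 bits/s
```

Higher throughput indicates a more efficient pointing technique, which is why it serves as the headline comparison metric between controller, dwell, and eye-trigger selection.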
ISSN: 1044-7318, 1532-7590
DOI: 10.1080/10447318.2020.1826190