Artificial reality multi-modal input switching model


Bibliographic Details
Main Authors: Baker, Christopher Alan; Aschenbach, Nathan; Martinez, Roger Ibars; Spurlock, Jennifer Lynn; Rojas, Chris; Pinchon, Etienne; Hofmeester, Gerrit Hendrik
Format: Patent
Language: English
Published: 05.04.2022

Summary: Embodiments described herein disclose methods and systems directed to input mode selection in artificial reality. In some implementations, various input modes enable a user to perform precise interactions with a target object without occluding it. Some input modes can include rays that extend along a line intersecting an origin point, a control point, and an interaction point. An interaction model can specify when the system switches between input modes, such as modes based solely on gaze, on long or short ray input, or on direct interaction between the user's hand(s) and objects. These transitions can be performed by evaluating rules that take into account context factors such as whether the user's hands are in view of the user, what posture the hands are in, whether a target object is selected, and whether a target object is within a threshold distance from the user.
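The interaction model described above amounts to a rule evaluation over context factors. The sketch below illustrates one way such a mode switcher could be structured; the mode names, posture strings, and distance thresholds are all hypothetical illustrations, not details taken from the patent itself.

```python
from dataclasses import dataclass
from enum import Enum, auto

class InputMode(Enum):
    GAZE = auto()       # gaze-only input
    SHORT_RAY = auto()  # short ray from the hand
    LONG_RAY = auto()   # long ray for distant targets
    DIRECT = auto()     # direct hand-object interaction

# Hypothetical thresholds (meters); the patent only specifies that
# a threshold distance is evaluated, not its value.
DIRECT_REACH_THRESHOLD = 0.6
SHORT_RAY_THRESHOLD = 2.0

@dataclass
class Context:
    hands_in_view: bool
    hand_posture: str       # e.g. "point", "pinch", "rest" (assumed labels)
    target_selected: bool
    target_distance: float  # distance from user to target, in meters

def select_input_mode(ctx: Context) -> InputMode:
    # Hands not visible to the user: fall back to gaze-only input.
    if not ctx.hands_in_view:
        return InputMode.GAZE
    # Selected target within reach: interact directly with the hand.
    if ctx.target_selected and ctx.target_distance <= DIRECT_REACH_THRESHOLD:
        return InputMode.DIRECT
    # Pointing posture: cast a ray whose kind depends on target distance.
    if ctx.hand_posture == "point":
        if ctx.target_distance < SHORT_RAY_THRESHOLD:
            return InputMode.SHORT_RAY
        return InputMode.LONG_RAY
    # No rule matched: default to gaze.
    return InputMode.GAZE

# Example: hands visible, pointing at a distant, unselected object.
print(select_input_mode(Context(True, "point", False, 3.0)))  # InputMode.LONG_RAY
```

Centralizing the rules in one function of a context snapshot makes the transitions easy to test and to extend with further factors.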
Bibliography: Application Number: US202117170825