Radar-Based Recognition of Static Hand Gestures in American Sign Language
| Main Authors | |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 20.02.2024 |
| Subjects | |
Summary: In the fast-paced field of human-computer interaction (HCI) and virtual reality (VR), automatic gesture recognition has become increasingly essential. This is particularly true for the recognition of hand signs, which provide an intuitive way to navigate and control VR and HCI applications. Given increasing privacy requirements, radar sensors emerge as a compelling alternative to cameras: they operate effectively in low-light conditions without capturing identifiable human details, thanks to their lower resolution and distinct wavelength compared to visible light.

While previous works predominantly deploy radar sensors for dynamic hand gesture recognition based on Doppler information, our approach prioritizes classification using an imaging radar that operates on spatial information, i.e., image-like data. However, generating the large training datasets required for neural networks (NNs) is a time-consuming and challenging process that often falls short of covering all potential scenarios. Acknowledging these challenges, this study explores the efficacy of synthetic data generated by an advanced radar ray-tracing simulator. The simulator employs an intuitive material model that can be adjusted to introduce data diversity.

Although the NN is trained exclusively on synthetic data, it demonstrates promising performance when tested on real measurement data. This underscores the practicality of our methodology in overcoming data-scarcity challenges and advancing automatic gesture recognition in VR and HCI applications.
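The sim-to-real setup the summary describes (train only on simulated radar images, then evaluate on real measurements) can be caricatured in a few lines. Everything below is a hypothetical toy: the blob "radar images", the added noise standing in for the sim-to-real gap, and the logistic-regression classifier are placeholders, not the paper's ray-tracing simulator or neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_radar_image(cls, size=16):
    """Toy stand-in for a simulated radar intensity map.
    Class 0: bright blob top-left; class 1: bright blob bottom-right."""
    img = rng.normal(0.0, 0.1, (size, size))
    c = (size // 4) if cls == 0 else (3 * size // 4)
    yy, xx = np.mgrid[0:size, 0:size]
    img += np.exp(-((yy - c) ** 2 + (xx - c) ** 2) / 8.0)
    return img.ravel()

# Synthetic training set (stands in for ray-traced simulator output).
X_train = np.array([synth_radar_image(i % 2) for i in range(400)])
y_train = np.array([i % 2 for i in range(400)])

# "Real" test set: same gestures plus extra noise to mimic the sim-to-real gap.
X_test = np.array([synth_radar_image(i % 2) + rng.normal(0, 0.3, 16 * 16)
                   for i in range(100)])
y_test = np.array([i % 2 for i in range(100)])

# Minimal logistic-regression classifier trained purely on synthetic data.
w = np.zeros(X_train.shape[1])
b = 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))  # sigmoid predictions
    g = p - y_train                               # gradient of log-loss
    w -= 0.01 * X_train.T @ g / len(y_train)
    b -= 0.01 * g.mean()

# Evaluate on the noisier "real" measurements.
acc = (((1.0 / (1.0 + np.exp(-(X_test @ w + b)))) > 0.5) == y_test).mean()
print(f"accuracy on noisy 'real' data: {acc:.2f}")
```

If the synthetic domain captures the discriminative spatial structure well enough, even this trivial model transfers; the paper's contribution is making the simulator (via its adjustable material model) diverse enough that the same holds for a real NN and real radar data.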
DOI: 10.48550/arxiv.2402.12800