AirDraw: Leveraging Smart Watch Motion Sensors for Mobile Human Computer Interactions
Published in | arXiv.org |
---|---|
Main Authors | , , |
Format | Paper |
Language | English |
Published | Ithaca: Cornell University Library, arXiv.org, 07.05.2017 |
Subjects | |
Summary | Wearable computing is one of the fastest growing technologies today, and smart watches are poised to capture at least half of the wearable device market in the near future. Smart watch screen size, however, is a limiting factor for growth, as it restricts practical text input. On the other hand, wearable devices have features, such as consistent user interaction and hands-free, heads-up operation, that pave the way for gesture-based text entry methods. This paper proposes a new text input method for smart watches that uses motion sensor data and machine learning to detect letters written in the air by the user. The method is less computationally intensive and less expensive than computer vision approaches, and it is unaffected by the lighting conditions that limit computer vision solutions. The AirDraw system prototype developed to test this approach is presented, along with experimental results showing accuracy close to 71%. |
ISSN | 2331-8422 |
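The abstract describes classifying air-written letters from smart watch motion sensor data with machine learning. As a rough illustration only, the sketch below windows 3-axis accelerometer traces, extracts simple per-axis statistics, and trains an SVM classifier; the feature set, the classifier choice, and the synthetic data generator are all assumptions for demonstration, not the authors' actual AirDraw pipeline.

```python
# Hypothetical sketch of motion-sensor letter classification, in the spirit
# of the AirDraw abstract. Features, classifier, and data are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_synthetic_strokes(n_per_letter=40, n_samples=100):
    """Generate fake 3-axis accelerometer traces for five letter classes.

    Stands in for real smart watch recordings, which are not available here.
    """
    X, y = [], []
    for label in range(5):
        for _ in range(n_per_letter):
            t = np.linspace(0, 1, n_samples)
            # Give each class a distinct dominant frequency per axis,
            # plus noise, so the classes are separable but not trivial.
            trace = np.stack(
                [np.sin(2 * np.pi * (label + 1 + axis) * t) for axis in range(3)],
                axis=1,
            ) + 0.3 * rng.standard_normal((n_samples, 3))
            X.append(trace)
            y.append(label)
    return np.array(X), np.array(y)

def extract_features(windows):
    """Per-axis mean, std, min, max over each window -> 12-D feature vector."""
    feats = [windows.mean(axis=1), windows.std(axis=1),
             windows.min(axis=1), windows.max(axis=1)]
    return np.concatenate(feats, axis=1)

X, y = make_synthetic_strokes()
X_feat = extract_features(X)
X_tr, X_te, y_tr, y_te = train_test_split(
    X_feat, y, test_size=0.25, random_state=0)

# A small feature-based classifier keeps inference cheap on a watch-class
# device, consistent with the abstract's low-cost framing.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```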