Mouth gesture and voice command based robot command interface
Published in | 2009 IEEE International Conference on Robotics and Automation, pp. 333 - 338
---|---
Main Authors | , , ,
Format | Conference Proceeding
Language | English
Published | IEEE, 01.05.2009
Subjects |
ISBN | 1424427886; 9781424427888
ISSN | 1050-4729
DOI | 10.1109/ROBOT.2009.5152858
Summary | In this paper we present a voice command and mouth gesture based robot command interface capable of controlling three degrees of freedom. The gesture set was designed to avoid head rotation and translation, relying solely on mouth movements. Mouth segmentation is performed using the normalized a* component, as in J. Gomez et al. (October 2008). Gesture detection is carried out by a Gaussian mixture model (GMM) based classifier. A state machine then stabilizes the system response by restricting the number of possible movements depending on the initial state. Voice commands are modeled using a hidden Markov model (HMM) isolated word recognition scheme. The interface was designed taking into account the specific pose restrictions found in the DaVinci assisted surgery command console.
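The summary describes a two-stage visual pipeline: lip segmentation from the normalized a* (CIELAB) channel, followed by a GMM-based gesture classifier. The sketch below is a minimal illustration of those two steps only, using OpenCV for the colour-space conversion and scikit-learn's GaussianMixture; the threshold value, the shape features, and the per-gesture class structure are assumptions made for illustration and are not taken from the paper.

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture


def mouth_mask(bgr_roi, threshold=0.6):
    """Coarse lip mask from the normalized a* (CIELAB) channel.

    The a* channel responds strongly to reddish lip colour; normalizing it
    to [0, 1] and thresholding gives a rough lip region. The 0.6 threshold
    is an illustrative value, not a figure from the paper.
    """
    lab = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2LAB)
    a = lab[:, :, 1].astype(np.float32)
    a_norm = (a - a.min()) / (a.max() - a.min() + 1e-6)
    return (a_norm > threshold).astype(np.uint8)


def mouth_features(mask):
    """Hypothetical shape features of the lip blob (area fraction, aspect ratio,
    relative height); the paper does not specify its feature set."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return np.zeros(3, dtype=np.float32)
    width = xs.max() - xs.min() + 1
    height = ys.max() - ys.min() + 1
    area = float(mask.sum()) / mask.size
    return np.array([area, width / height, height / mask.shape[0]], dtype=np.float32)


class GMMGestureClassifier:
    """One Gaussian mixture per gesture class; predict by maximum log-likelihood."""

    def __init__(self, n_components=2):
        self.n_components = n_components
        self.models = {}

    def fit(self, features_by_class):
        # features_by_class: {gesture_label: array of shape (n_samples, n_features)}
        for label, X in features_by_class.items():
            self.models[label] = GaussianMixture(
                n_components=self.n_components, covariance_type="full", random_state=0
            ).fit(np.asarray(X))

    def predict(self, feature_vector):
        x = np.asarray(feature_vector, dtype=np.float64).reshape(1, -1)
        return max(self.models, key=lambda label: self.models[label].score(x))
```

In a pipeline like the one summarized above, one mixture would be trained per gesture in the designed set, with the state machine restricting admissible transitions and the HMM word recognizer handling the voice commands on top of this visual front end.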