Interactive display using depth and RGB sensors for face and gesture control
| Published in | 2011 IEEE Western New York Image Processing Workshop, pp. 1-4 |
|---|---|
| Main Authors | |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.11.2011 |
| ISBN | 1467304204; 9781467304207 |
| DOI | 10.1109/WNYIPW.2011.6122883 |
Summary: This paper introduces an interactive display system guided by a human observer's gesture, facial pose, and facial expression. The Kinect depth sensor is used to detect and track an observer's skeletal joints, while the RGB camera is used for detailed facial analysis. The display consists of active regions that the observer can manipulate with body gestures and secluded regions that are activated through head pose and facial expression. The observer receives real-time feedback, allowing for intuitive navigation of the interface. A storefront interactive display was created and feedback was collected from over one hundred subjects. Promising results demonstrate the potential of the proposed approach for human-computer interaction applications.