Interactive display using depth and RGB sensors for face and gesture control

Bibliographic Details
Published in: 2011 IEEE Western New York Image Processing Workshop, pp. 1-4
Main Authors: Bellmore, C., Ptucha, R., Savakis, A.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.11.2011
ISBN: 1467304204, 9781467304207
DOI: 10.1109/WNYIPW.2011.6122883

Summary: This paper introduces an interactive display system guided by a human observer's gesture, facial pose, and facial expression. The Kinect depth sensor is used to detect and track an observer's skeletal joints, while the RGB camera is used for detailed facial analysis. The display consists of active regions that the observer can manipulate with body gestures and secluded regions that are activated through head pose and facial expression. The observer receives real-time feedback, allowing intuitive navigation of the interface. A storefront interactive display was created and feedback was collected from over one hundred subjects. Promising results demonstrate the potential of the proposed approach for human-computer interaction applications.
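To make the "active regions" idea concrete, here is a minimal sketch of how a tracked hand joint (e.g., from Kinect skeletal tracking) could drive region activation on a display. The region names, normalized coordinates, and dwell-frame rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of gesture-driven region activation.
# A region "activates" when the tracked hand joint dwells inside it
# for a number of consecutive frames (an assumed selection rule).
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    x0: float
    y0: float
    x1: float
    y1: float  # bounds in normalized screen coordinates [0, 1]

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def select_region(regions, hand_track, dwell_frames=3):
    """Return the region the hand stays in for `dwell_frames`
    consecutive frames, or None. `hand_track` is a per-frame list
    of (x, y) hand-joint positions."""
    current, count = None, 0
    for x, y in hand_track:
        hit = next((r for r in regions if r.contains(x, y)), None)
        if hit is not None and hit is current:
            count += 1
        else:
            current, count = hit, (1 if hit is not None else 0)
        if current is not None and count >= dwell_frames:
            return current
    return None

# Example: two assumed regions and a hand hovering on the right side.
regions = [Region("menu", 0.0, 0.0, 0.3, 1.0),
           Region("gallery", 0.7, 0.0, 1.0, 1.0)]
track = [(0.80, 0.50), (0.82, 0.52), (0.81, 0.50)]
chosen = select_region(regions, track)
```

The dwell requirement is one simple way to suppress spurious activations as the hand sweeps across the display; the paper's real-time feedback would correspond to highlighting `current` while `count` accumulates.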