Towards Automated Understanding of Student-Tutor Interactions Using Visual Deictic Gestures

Bibliographic Details
Published in: 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 480-487
Main Authors: Sathayanarayana, Suchitra; Satzoda, Ravi Kumar; Carini, Amber; Lee, Monique; Salamanca, Linda; Reilly, Judy; Forster, Deborah; Bartlett, Marian; Littlewort, Gwen
Format: Conference Proceeding; Journal Article
Language: English
Published: IEEE, 01.06.2014
Summary: In this paper, we present techniques for automated understanding of tutor-student behavior through detecting visual deictic gestures, in the context of one-to-one mathematics tutoring. To the best of the authors' knowledge, this is the first work in the area of intelligent tutoring systems that focuses on spatial localization of deictic gestural activity, i.e. where the deictic gesture is pointing on the workspace. A new dataset called SDMATH is first introduced. The motivation for detecting deictic gestures and their spatial properties is established, followed by techniques for automatic localization of deictic gestures in a workspace. The techniques employ computer vision and machine learning steps such as GBVS saliency, binary morphology, and HOG-SVM classification. It is shown that the method localizes the deictic tip with an accuracy of over 85% for a cutoff distance of 12 pixels. Furthermore, a detailed discussion, using examples from the proposed dataset, is presented on the high-level inferences about student-tutor interactions that can be derived by integrating spatial and temporal localization of deictic gestural activity using the proposed techniques.
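
The abstract names the main processing steps (GBVS saliency, binary morphology, HOG-SVM classification) without further detail. The following is a minimal sketch of how such a pipeline could be wired up in Python with OpenCV, scikit-image, and scikit-learn; GBVS has no standard OpenCV binding, so spectral-residual saliency stands in for it here, and all thresholds, kernel sizes, and patch sizes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, not the authors' code: a saliency -> morphology ->
# HOG-SVM pipeline in the spirit of the abstract. OpenCV provides no GBVS,
# so spectral-residual saliency is used as a stand-in; the threshold,
# kernel, and patch size below are illustrative guesses.
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def candidate_tips(frame_bgr, sal_thresh=0.5):
    """Salient blob centroids that may contain a fingertip or pen tip."""
    sal = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = sal.computeSaliency(frame_bgr)        # float map in [0, 1]
    binary = (sal_map > sal_thresh).astype(np.uint8) * 255
    # Binary morphology: opening removes speckle, closing fills small gaps.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    n, _labels, _stats, centroids = cv2.connectedComponentsWithStats(binary)
    return [tuple(map(int, c)) for c in centroids[1:]]  # skip background

def hog_patch(gray, cx, cy, size=32):
    """HOG descriptor of a fixed-size patch centred on a candidate point."""
    half = size // 2
    patch = gray[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
    patch = cv2.resize(patch, (size, size))
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# Training would use labelled tip / non-tip patches from the dataset, e.g.
#   clf = LinearSVC().fit(X_train, y_train)
# and at test time the candidate whose patch scores highest under
# clf.decision_function would be reported as the deictic tip.
```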
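The reported 85% figure is an accuracy at a fixed cutoff distance. A hedged reading of that metric, assuming a prediction counts as correct when it lies within 12 pixels of the annotated tip (the helper name and array layout are hypothetical):

```python
# Hypothetical evaluation helper: a prediction is counted correct when it
# falls within `cutoff` pixels of the ground-truth tip location.
import numpy as np

def tip_accuracy(pred_xy, gt_xy, cutoff=12.0):
    """Fraction of frames with predicted tip within `cutoff` px of truth."""
    d = np.linalg.norm(np.asarray(pred_xy, float) - np.asarray(gt_xy, float),
                       axis=1)
    return float(np.mean(d <= cutoff))
```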
ISSN: 2160-7508, 2160-7516
DOI: 10.1109/CVPRW.2014.77