Speech, Gaze and Head Motion in a Face-to-Face Collaborative Task
In the present work we observe two subjects interacting in a collaborative task in a shared environment. One goal of the experiment is to measure the change in gaze behavior when one interactant wears dark glasses, so that his/her gaze is not visible to the other. The results show that if one subject wears dark glasses while telling the other subject the position of a certain object, the other subject needs significantly more time to locate and move this object. Hence, the visible eye gaze of one subject looking at a certain object speeds up the localization of the cube by the other subject. The second goal of the currently ongoing work is to collect data on the multimodal behavior of one of the subjects by means of audio recording, eye gaze and head motion tracking, in order to build a model that can be used to control a robot in a comparable scenario in future experiments.
Published in | Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces. Theoretical and Practical Issues, pp. 256-264 |
---|---|
Main Authors | , |
Format | Book Chapter |
Language | English |
Published | Berlin, Heidelberg: Springer Berlin Heidelberg, 2011 |
Series | Lecture Notes in Computer Science |
ISBN | 9783642181832, 364218183X |
ISSN | 0302-9743, 1611-3349 |
DOI | 10.1007/978-3-642-18184-9_21 |