Visual attention in spoken human-robot interaction

Psycholinguistic studies of situated language processing have revealed that gaze in the visual environment is tightly coupled with both spoken language comprehension and production. It has also been established that interlocutors monitor the gaze of their partners, a phenomenon called "joint attention", as a further means of facilitating mutual understanding. We hypothesise that human-robot interaction will benefit when the robot's language-related gaze behaviour is similar to that of people, potentially providing the user with valuable non-verbal information concerning the robot's intended message or the robot's successful understanding. We report findings from two eye-tracking experiments demonstrating (1) that human gaze is modulated by both the robot's speech and gaze, and (2) that human comprehension of robot speech is improved when the robot's real-time gaze behaviour is similar to that of humans.

Bibliographic Details
Published in: 2009 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 77–84
Main Authors: Staudte, Maria; Crocker, Matthew W.
Format: Conference Proceeding
Language: English
Published: New York, NY, USA: ACM; IEEE, 09.03.2009
Series: ACM Conferences
ISBN: 1605584045; 9781605584041
ISSN: 2167-2121
DOI: 10.1145/1514095.1514111
