Visual attention in spoken human-robot interaction
Published in | 2009 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 77 - 84 |
---|---|
Main Authors | Staudte, Maria; Crocker, Matthew W. |
Format | Conference Proceeding |
Language | English |
Published | New York, NY, USA: ACM; IEEE, 09.03.2009 |
Series | ACM Conferences |
Subjects | visual attention; gaze; user study; experimental methods |
ISBN | 1605584045; 9781605584041 |
ISSN | 2167-2121 |
DOI | 10.1145/1514095.1514111 |
Abstract | Psycholinguistic studies of situated language processing have revealed that gaze in the visual environment is tightly coupled with both spoken language comprehension and production. It has also been established that interlocutors monitor the gaze of their partners, a phenomenon called "joint attention", as a further means for facilitating mutual understanding. We hypothesise that human-robot interaction will benefit when the robot's language-related gaze behaviour is similar to that of people, potentially providing the user with valuable non-verbal information concerning the robot's intended message or the robot's successful understanding. We report findings from two eye-tracking experiments demonstrating (1) that human gaze is modulated by both the robot speech and gaze, and (2) that human comprehension of robot speech is improved when the robot's real-time gaze behaviour is similar to that of humans. |
Author | Staudte, Maria; Crocker, Matthew W. |
Author affiliations | Staudte, Maria (Saarland University, Saarbrücken, Germany); Crocker, Matthew W. (Saarland University, Saarbrücken, Germany) |
Copyright | 2009 ACM |
Discipline | Engineering |
EISBN | 1605584045 9781605584041 |
EndPage | 84 |
ExternalDocumentID | 6256097 |
Genre | orig-research |
IsPeerReviewed | false |
IsScholarly | false |
Keywords | visual attention; gaze; user study; experimental methods |
License | Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions@acm.org |
MeetingName | HRI09: International Conference on Human Robot Interaction |
PageCount | 8 |
PublicationDate | 2009-03-09 |
PublicationPlace | New York, NY, USA |
PublicationSeriesTitle | ACM Conferences |
PublicationTitle | 2009 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI) |
PublicationTitleAbbrev | HRI |
PublicationYear | 2009 |
Publisher | ACM; IEEE |
StartPage | 77 |
SubjectTerms | Applied computing -- Law, social and behavioral sciences -- Psychology; Computer systems organization -- Embedded and cyber-physical systems -- Robotics -- External interfaces for robotics; Human-centered computing -- Human computer interaction (HCI) -- HCI design and evaluation methods; Human-centered computing -- Human computer interaction (HCI) -- Interaction paradigms; Human-centered computing -- Human computer interaction (HCI) -- Interaction paradigms -- Natural language interfaces; experimental methods; gaze; Humans; Monitoring; Robots; Speech; Time factors; user study; Videos; visual attention; Visualization |
Title | Visual attention in spoken human-robot interaction |
URI | https://ieeexplore.ieee.org/document/6256097 |