Multiparty conversation via multirobot system: incorporation of nonverbal user responses for continued conversation

Bibliographic Details
Published in: Advanced Robotics, Vol. 38, No. 7, pp. 482–491
Main Authors: Sakai, Kazuki; Hsieh, TingHao; Yoshikawa, Yuichiro; Ishiguro, Hiroshi
Format: Journal Article
Language: English
Published: Taylor & Francis, 02.04.2024
ISSN: 0169-1864, 1568-5535
DOI: 10.1080/01691864.2024.2326969

Summary: Recent years have seen the advent of conversational humanoid robots. Deploying multiple robots is a promising approach to keeping a human-robot conversation going when a robot encounters speech-recognition errors. This strategy works for users' verbal responses; however, people often respond to an interlocutor nonverbally, for example with nods or smiles. In this study, we proposed a conversational strategy for twin robots in which the second robot recognizes the user's nonverbal responses and interrupts the conversation by mentioning them. We also developed an interrogative dialogue system built on simple nonverbal-recognition modules. To verify the effectiveness of this strategy, we conducted a subject experiment in which participants conversed with the two robots and rated their impressions on a questionnaire, comparing two conditions: with and without the second robot's interruption. The results indicate that the interruption mitigated the failure of language-only responses to handle ambiguous user reactions, as measured by satisfaction and comprehension. Such a dialogue system is therefore advantageous because it facilitates robust conversation without relying solely on speech recognition.