Multi-party, multi-role comprehensive listening behavior


Bibliographic Details
Published in: Autonomous Agents and Multi-Agent Systems, Vol. 27, No. 2, pp. 218-234
Main Authors: Wang, Zhiyang; Lee, Jina; Marsella, Stacy
Format: Journal Article
Language: English
Published: Boston: Springer US, 01.09.2013

Summary: Realizing effective listening behavior in virtual humans has become a key area of research, especially as work has sought to realize more complex social scenarios involving multiple participants and bystanders. A human listener’s nonverbal behavior is conditioned by a variety of factors, from the current speaker’s behavior to the listener’s role, desire to participate in the conversation, and unfolding comprehension of the speaker. Similarly, we seek to create virtual humans able to provide feedback based on their participatory goals and their unfolding understanding of, and reaction to, the relevance of what the speaker is saying as the speaker speaks. Based on a survey of the existing psychological literature, as well as recent technological advances in the recognition and partial understanding of natural language, we describe a model of how to integrate these factors into a virtual human that behaves consistently with these goals. We then discuss how the model is implemented within a virtual human architecture and present an evaluation of the behaviors used in the model.
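The abstract describes the model only at a high level. As a purely illustrative sketch (not the authors' implementation), one way to condition a listener's nonverbal feedback on conversational role, unfolding comprehension, and participatory goals is a simple rule-based mapping like the one below; all names (Role, ListenerState, select_feedback), behavior labels, and thresholds are hypothetical.

```python
# Hypothetical sketch: mapping a listener's role, comprehension, and
# participatory goals to nonverbal feedback. Not the model from the paper.
from dataclasses import dataclass
from enum import Enum, auto


class Role(Enum):
    ADDRESSEE = auto()         # directly addressed by the speaker
    SIDE_PARTICIPANT = auto()  # ratified participant, not currently addressed
    BYSTANDER = auto()         # present but outside the conversation


@dataclass
class ListenerState:
    role: Role
    comprehension: float  # 0.0 (lost) .. 1.0 (fully following)
    relevance: float      # 0.0 (irrelevant to own goals) .. 1.0 (highly relevant)
    wants_turn: bool      # desire to participate / take the floor


def select_feedback(state: ListenerState) -> list[str]:
    """Return a small set of nonverbal behaviors for the current state."""
    # Participants attend to the speaker; bystanders avert gaze.
    behaviors = ["gaze_at_speaker"] if state.role != Role.BYSTANDER else ["gaze_averted"]

    if state.comprehension < 0.3:
        behaviors.append("frown")        # signal non-understanding
    elif state.relevance > 0.7:
        behaviors.append("head_nod")     # backchannel uptake / agreement

    if state.wants_turn and state.role == Role.ADDRESSEE:
        behaviors.append("lean_forward") # cue an attempt to take the turn

    return behaviors


if __name__ == "__main__":
    listener = ListenerState(Role.SIDE_PARTICIPANT, comprehension=0.8,
                             relevance=0.9, wants_turn=False)
    print(select_feedback(listener))  # e.g. ['gaze_at_speaker', 'head_nod']
```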
ISSN: 1387-2532 (print); 1573-7454 (electronic)
DOI: 10.1007/s10458-012-9215-8