Towards multimodal expression of information reliability in HRI
Published in | 2022 10th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), pp. 1 - 5 |
---|---|
Main Authors | , |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 18.10.2022 |
Subjects | |
Summary | In this paper, we discuss preliminary studies on expressive presentation in the context of human-robot interaction. The focus is on the robot conveying its attitude towards the information it presents to a human partner through non-verbal means, in this case facial expressions. The goal of the research is to better understand how the natural social behaviour and emotional stance of the speaker can manifest themselves in practical information-providing settings. We present a small prototype Furhat robot application in which the robot interacts with human partners, provides information that it judges to be reliable or unreliable, and conveys its attitude through facial expressions. The assumption is that users are more likely to consider the information the robot presents reliable and trustworthy, and consequently to accept and adopt it as part of their own knowledge, when it is accompanied by positive, supporting facial expressions (e.g. a smile), whereas if the robot accompanies its presentation with a frowning or disgusted facial expression, the user is likely to associate the content with negative connotations and to judge the information as less reliable and trustworthy. |
DOI | 10.1109/ACIIW57231.2022.10085997 |
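The summary describes a prototype in which the robot pairs information it judges reliable with a supporting expression (e.g. a smile) and information it judges unreliable with a frowning or disgusted expression. Below is a minimal, hypothetical sketch of how such behaviour could be scripted with the Furhat Kotlin Skill SDK; the `Fact` class, the example sentences, and the specific choice of `Gestures.Smile` and `Gestures.BrowFrown` are illustrative assumptions, not the implementation used in the paper.

```kotlin
import furhatos.flow.kotlin.*
import furhatos.gestures.Gestures
import furhatos.skills.Skill

// Hypothetical pairing of a statement with the robot's judgement of its reliability.
data class Fact(val text: String, val judgedReliable: Boolean)

val facts = listOf(
    Fact("The museum opens at nine in the morning.", judgedReliable = true),
    Fact("I read somewhere that the cafe is closed on Mondays.", judgedReliable = false)
)

// Present each fact, accompanying it with a supporting or sceptical facial expression.
val PresentFacts: State = state {
    onEntry {
        for (fact in facts) {
            if (fact.judgedReliable) {
                furhat.gesture(Gestures.Smile)     // positive, supporting expression
            } else {
                furhat.gesture(Gestures.BrowFrown) // sceptical, negative expression
            }
            furhat.say(fact.text)
        }
        terminate()
    }
}

class ReliabilityDemoSkill : Skill() {
    override fun start() {
        Flow().run(PresentFacts)
    }
}

fun main(args: Array<String>) {
    Skill.main(args)
}
```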