How people talk when teaching a robot

Bibliographic Details
Published in: 2009 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 23-30
Main Authors: Kim, Elizabeth S.; Leyzberg, Dan; Tsui, Katherine M.; Scassellati, Brian
Format: Conference Proceeding
Language: English
Published: New York, NY, USA: ACM; IEEE, 09.03.2009
Series: ACM Conferences
ISBN: 1605584045; 9781605584041
ISSN: 2167-2121
DOI: 10.1145/1514095.1514102

Summary: We examine affective vocalizations provided by human teachers to robotic learners. In unscripted one-on-one interactions, participants provided vocal input to a robotic dinosaur as the robot selected toy buildings to knock down. We find that (1) people vary their vocal input depending on the learner's performance history, (2) people do not wait until a robotic learner completes an action before they provide input, and (3) people naively and spontaneously use intensely affective prosody. Our findings suggest modifications may be needed to traditional machine learning models to better fit observed human tendencies. Our observations of human behavior contradict the popular assumptions made by machine learning algorithms (in particular, reinforcement learning) that the reward function is stationary and path-independent for social learning interactions. We also propose an interaction taxonomy that describes three phases of a human teacher's vocalizations: direction, spoken before an action is taken; guidance, spoken as the learner communicates an intended action; and feedback, spoken in response to a completed action.
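To make the summary's central claim concrete, the sketch below contrasts the stationary, path-independent reward assumed by standard reinforcement learning with a history-dependent reward of the kind the paper observes in human teachers. This is a hypothetical illustration, not the authors' code; the function names and the specific scaling of feedback after repeated failures are assumptions chosen for clarity.

```python
def stationary_reward(action):
    # Standard RL assumption: reward depends only on the current action,
    # never on what came before.
    return 1.0 if action == "correct" else -1.0

def history_dependent_reward(action, history):
    # Hypothetical model of the observed human tendency: feedback for the
    # same action intensifies after recent failures. The 0.5 scaling and
    # the 3-step window are illustrative assumptions, not measured values.
    base = 1.0 if action == "correct" else -1.0
    recent_failures = sum(1 for a in history[-3:] if a != "correct")
    return base * (1.0 + 0.5 * recent_failures)

# The same action earns different reward depending on the learner's past:
r_fresh = history_dependent_reward("correct", [])
r_after_failures = history_dependent_reward("correct", ["wrong", "wrong", "wrong"])
```

Here `r_fresh` and `r_after_failures` differ even though the action is identical, which is exactly the path-dependence that violates the stationarity assumption the summary describes.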