Co-GAIL: Learning Diverse Strategies for Human-Robot Collaboration


Bibliographic Details
Published in: arXiv.org
Main Authors: Wang, Chen; Pérez-D'Arpino, Claudia; Xu, Danfei; Li, Fei-Fei; Liu, C. Karen; Savarese, Silvio
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 20.09.2023
More Information
Summary: We present a method for learning a human-robot collaboration policy from human-human collaboration demonstrations. An effective robot assistant must learn to handle the diverse human behaviors shown in the demonstrations and be robust when humans adjust their strategies during online task execution. Our method co-optimizes a human policy and a robot policy in an interactive learning process: the human policy learns to generate diverse and plausible collaborative behaviors from demonstrations, while the robot policy learns to assist by estimating the unobserved latent strategy of its human collaborator. Across a 2D strategy game, a human-robot handover task, and a multi-step collaborative manipulation task, our method outperforms the alternatives in both simulated evaluations and when executing the tasks with a real human operator in the loop. Supplementary materials and videos are available at https://sites.google.com/view/co-gail-web/home
ISSN: 2331-8422
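The core idea in the summary, a robot that infers its partner's unobserved latent strategy from observed behavior and then assists accordingly, can be illustrated with a minimal toy sketch. Everything below (the scalar action space, the sign-based latent estimator, the function names) is a hypothetical illustration, not the paper's actual Co-GAIL implementation:

```python
import random

# Toy sketch (our assumption, not the paper's method): a "human" policy
# conditioned on a binary latent strategy z produces noisy scalar actions,
# and a "robot" policy infers z from the observed actions and commits to
# the matching strategy to assist.

def human_policy(z, noise=0.3):
    """Human action drifts toward a target determined by latent strategy z."""
    target = 1.0 if z == 1 else -1.0
    return target + random.uniform(-noise, noise)

def estimate_latent(human_actions):
    """Robot's estimate of the unobserved strategy: sign of the mean action."""
    mean = sum(human_actions) / len(human_actions)
    return 1 if mean > 0 else 0

def robot_policy(z_hat):
    """Robot assists by acting consistently with the inferred strategy."""
    return 1.0 if z_hat == 1 else -1.0

def rollout(z, steps=5, noise=0.3):
    """One collaborative episode: observe the human, infer z, then act."""
    human_actions = [human_policy(z, noise) for _ in range(steps)]
    z_hat = estimate_latent(human_actions)
    return z_hat, robot_policy(z_hat)

random.seed(0)
correct = sum(rollout(z)[0] == z for z in [0, 1] * 50)
print(correct, "of 100 episodes: latent strategy inferred correctly")
```

Because the noise here is bounded well below the gap between the two strategy targets, the sign-based estimator recovers z in every episode; the learned estimator in the paper plays an analogous role under far weaker assumptions.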