Enabling Multi-Robot Collaboration from Single-Human Guidance

Bibliographic Details
Published in: arXiv.org
Main Authors: Ji, Zhengran; Zhang, Lingyu; Sajda, Paul; Chen, Boyuan
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 30.09.2024

Summary: Learning collaborative behaviors is essential for multi-agent systems. Traditionally, multi-agent reinforcement learning solves this implicitly through a joint reward and centralized observations, assuming collaborative behavior will emerge. Other studies propose to learn from demonstrations of a group of collaborative experts. Instead, we propose an efficient and explicit way of learning collaborative behaviors in multi-agent systems by leveraging expertise from only a single human. Our insight is that humans can naturally take on various roles in a team. We show that agents can effectively learn to collaborate by allowing a human operator to dynamically switch between controlling agents for short periods and by incorporating a human-like theory-of-mind model of teammates. Our experiments show that our method improves the success rate of a challenging collaborative hide-and-seek task by up to 58% with only 40 minutes of human guidance. We further demonstrate that our findings transfer to the real world by conducting multi-robot experiments.
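
The summary only sketches the approach at a high level. Purely as an illustrative Python sketch (not the authors' implementation), the loop below shows one way a single human could dynamically switch which agent they control while the remaining agents act from a learned policy conditioned on a theory-of-mind prediction of teammates; the names env, policy, tom_model, and get_human_input are hypothetical placeholders.

# Hypothetical sketch: single-human guidance of a multi-agent team.
# All interfaces below are assumed for illustration only.
def run_guided_episode(env, policy, tom_model, get_human_input, max_steps=500):
    obs = env.reset()
    controlled_agent = 0  # the agent currently driven by the human operator
    for _ in range(max_steps):
        # The operator may hand control over to a different agent at any step.
        human_cmd = get_human_input()
        if human_cmd.switch_to is not None:
            controlled_agent = human_cmd.switch_to

        actions = {}
        for agent_id, agent_obs in obs.items():
            if agent_id == controlled_agent:
                # Human-provided action for the currently controlled agent.
                actions[agent_id] = human_cmd.action
            else:
                # Autonomous agents predict teammates' intent with the ToM
                # model and condition their policy on that prediction.
                teammate_intent = tom_model.predict(agent_obs)
                actions[agent_id] = policy.act(agent_obs, teammate_intent)

        obs, rewards, done, info = env.step(actions)
        if done:
            break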
ISSN: 2331-8422