Policy Distillation and Value Matching in Multiagent Reinforcement Learning

Bibliographic Details
Published in: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 8193 - 8200
Main Authors: Wadhwania, Samir; Kim, Dong-Ki; Omidshafiei, Shayegan; How, Jonathan P.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.11.2019

Summary: Multiagent reinforcement learning (MARL) algorithms have been demonstrated on complex tasks that require the coordination of a team of multiple agents to complete. Existing works have focused on sharing information between agents via centralized critics to stabilize learning or through communication to improve performance, but do not generally consider how information can be shared between agents to address the curse of dimensionality in MARL. We posit that a multiagent problem can be decomposed into a multi-task problem where each agent explores a subset of the state space instead of exploring the entire state space. This paper introduces a multiagent actor-critic algorithm for combining knowledge from homogeneous agents through distillation and value-matching that outperforms policy distillation alone and allows further learning in discrete and continuous action spaces.
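
The core step the summary describes is merging the homogeneous agents' policies and value functions into a single model that can keep learning. The following is a minimal sketch of such a distillation-plus-value-matching update, not the authors' implementation: it assumes PyTorch, discrete action logits, and illustrative placeholder names (student_policy, student_value, agents).

# Minimal sketch (assumed setup, not the paper's code) of one distillation +
# value-matching update. Each homogeneous agent i is assumed to provide a
# trained policy that outputs action logits, a state-value estimate, and a
# batch of states drawn from its own replay buffer.
import torch
import torch.nn.functional as F

def distill_and_match(student_policy, student_value, agents, optimizer):
    """One combined update: KL policy distillation plus MSE value matching.

    agents: iterable of (teacher_policy, teacher_value, state_batch) tuples,
            one per homogeneous agent.
    """
    total_loss = 0.0
    for teacher_policy, teacher_value, states in agents:
        with torch.no_grad():
            teacher_logits = teacher_policy(states)   # target action distribution
            teacher_values = teacher_value(states)    # target state values
        student_logits = student_policy(states)
        student_values = student_value(states)

        # Policy distillation: match the merged policy to each agent's policy
        # on that agent's own states, so each agent only needs to have explored
        # its subset of the state space.
        kl = F.kl_div(
            F.log_softmax(student_logits, dim=-1),
            F.softmax(teacher_logits, dim=-1),
            reduction="batchmean",
        )
        # Value matching: regress the shared value function onto each agent's
        # value estimates so training can resume from the merged model.
        vm = F.mse_loss(student_values, teacher_values)
        total_loss = total_loss + kl + vm

    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()
    return total_loss.item()

For continuous action spaces the KL term would instead be taken between the agents' and the merged policy's action distributions (e.g., Gaussians), but the structure of the update is the same.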
ISSN: 2153-0866
DOI: 10.1109/IROS40897.2019.8967849