Graph Convolutional Value Decomposition in Multi-Agent Reinforcement Learning
Format | Journal Article
Language | English
Published | 09.10.2020
Summary: We propose a novel framework for value function factorization in multi-agent deep reinforcement learning (MARL) using graph neural networks (GNNs). In particular, we consider the team of agents as the set of nodes of a complete directed graph, whose edge weights are governed by an attention mechanism. Building upon this underlying graph, we introduce a mixing GNN module, which is responsible for (i) factorizing the team state-action value function into individual per-agent observation-action value functions, and (ii) explicit credit assignment to each agent in terms of fractions of the global team reward. Our approach, which we call GraphMIX, follows the centralized-training, decentralized-execution paradigm, enabling the agents to make their decisions independently once training is completed. We show the superiority of GraphMIX over the state of the art on several scenarios in the StarCraft II multi-agent challenge (SMAC) benchmark. We further demonstrate how GraphMIX can be used in conjunction with a recent hierarchical MARL architecture to both improve the agents' performance and enable fine-tuning them on mismatched test scenarios with higher numbers of agents and/or actions.
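To make the architecture described in the summary more concrete, the snippet below shows one way an attention-weighted graph could mix per-agent utilities into a joint value while emitting per-agent credit fractions. It is a minimal sketch, not the authors' implementation: the module name, layer sizes, the scaled dot-product attention, and the softplus-based credit normalization are all illustrative assumptions.

```python
# Hypothetical attention-weighted graph mixer in the spirit of GraphMIX.
# NOT the paper's implementation; dimensions and aggregation are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGraphMixer(nn.Module):
    def __init__(self, obs_dim: int, embed_dim: int = 32):
        super().__init__()
        # Node embeddings computed from each agent's observation.
        self.node_enc = nn.Linear(obs_dim, embed_dim)
        # Query/key projections that define the edge weights of the complete
        # directed graph via scaled dot-product attention.
        self.q_proj = nn.Linear(embed_dim, embed_dim, bias=False)
        self.k_proj = nn.Linear(embed_dim, embed_dim, bias=False)
        # Head producing a non-negative mixing weight per agent.
        self.weight_head = nn.Linear(embed_dim, 1)
        self.embed_dim = embed_dim

    def forward(self, agent_qs, obs):
        # agent_qs: (batch, n_agents) per-agent observation-action values
        # obs:      (batch, n_agents, obs_dim) per-agent observations
        h = torch.relu(self.node_enc(obs))                        # (B, N, E)
        logits = self.q_proj(h) @ self.k_proj(h).transpose(1, 2)  # (B, N, N)
        attn = F.softmax(logits / self.embed_dim ** 0.5, dim=-1)  # edge weights
        msgs = attn @ h                                           # one round of message passing
        # Non-negative weights, normalized into per-agent credit fractions.
        w = F.softplus(self.weight_head(msgs)).squeeze(-1)        # (B, N)
        credit = w / w.sum(dim=-1, keepdim=True)
        q_tot = (credit * agent_qs).sum(dim=-1, keepdim=True)     # (B, 1)
        return q_tot, credit
```

In such a setup, `q_tot` would be regressed toward the team's temporal-difference target during centralized training, with `credit` standing in for the per-agent fractions of the global reward; at execution time each agent acts greedily on its own utility, consistent with the centralized-training, decentralized-execution paradigm described above.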
DOI: 10.48550/arxiv.2010.04740