Multi-Agent Actor-Critic with Hierarchical Graph Attention Network

Most previous studies on multi-agent reinforcement learning focus on deriving decentralized, cooperative policies that maximize a common reward, and rarely consider the transferability of trained policies to new tasks. This prevents such policies from being applied to more complex multi-agent tasks. To resolve these limitations, we propose a model that conducts both representation learning for multiple agents using a hierarchical graph attention network and policy learning using a multi-agent actor-critic. The hierarchical graph attention network is specially designed to model the hierarchical relationships among multiple agents that either cooperate or compete with each other, in order to derive more advanced strategic policies. Two attention networks, the inter-agent and inter-group attention layers, are used to effectively model individual-level and group-level interactions, respectively. The two attention networks are shown to facilitate the transfer of learned policies to new tasks with different agent compositions and to allow one to interpret the learned strategies. Empirically, we demonstrate that the proposed model outperforms existing methods in several mixed cooperative and competitive tasks.
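The two-level attention scheme described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the query construction (mean embedding as the group query), the embedding dimension, and the group assignments are all assumptions made for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, keys, values):
    # Scaled dot-product attention: one query against a set of keys/values.
    scores = keys @ query / np.sqrt(query.shape[-1])   # (n,)
    weights = softmax(scores)
    return weights @ values, weights

# Toy setup: 4 agents in 2 groups with 8-dim state embeddings (all assumed).
rng = np.random.default_rng(0)
d = 8
states = rng.normal(size=(4, d))        # per-agent embeddings
groups = [[0, 1], [2, 3]]               # group membership, assumed known

# Level 1: inter-agent attention -- summarize each group from its members.
# Using the mean embedding as the query is a choice made for this sketch.
group_embs = []
for members in groups:
    h = states[members]
    g, _ = attention(h.mean(axis=0), h, h)
    group_embs.append(g)
group_embs = np.stack(group_embs)       # (num_groups, d)

# Level 2: inter-group attention -- each agent attends over group summaries
# to form a context vector that a centralized critic could condition on.
agent_context = np.stack(
    [attention(states[i], group_embs, group_embs)[0] for i in range(len(states))]
)
print(agent_context.shape)
```

Stacking the two attention levels is what decouples the model from any fixed agent count: adding or removing agents only changes the size of the sets being attended over, not the parameter shapes, which is what makes transfer to new agent compositions possible.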
Publisher
AAAI
Issue Date
2020-02-08
Language
English
Citation

AAAI-20 (Thirty-Fourth AAAI Conference on Artificial Intelligence), pp.7236 - 7243

ISSN
2374-3468
DOI
10.1609/aaai.v34i05.6214
URI
http://hdl.handle.net/10203/275587
Appears in Collection
IE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
