Disentangling Sources of Risk for Distributional Multi-Agent Reinforcement Learning

In cooperative multi-agent reinforcement learning, the outcomes of agent-wise policies are highly stochastic due to two sources of risk: (a) random actions taken by teammates and (b) random transitions and rewards. Although the two sources have very distinct characteristics, existing frameworks are insufficient for controlling the risk sensitivity of agent-wise policies in a disentangled manner. To this end, we propose Disentangled RIsk-sensitive Multi-Agent reinforcement learning (DRIMA), which accesses the two risk sources separately. For example, our framework allows an agent to be optimistic with respect to teammates (who can prosocially adapt) but remain risk-neutral with respect to the environment (which does not adapt). Our experiments demonstrate that DRIMA significantly outperforms prior state-of-the-art methods across various scenarios in the StarCraft Multi-Agent Challenge environment. Notably, DRIMA shows robust performance where prior methods learn only a highly suboptimal policy, regardless of reward shaping, exploration scheduling, and noisy (random or adversarial) agents.
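To make the disentanglement concrete, the sketch below evaluates a quantile-based return distribution under two separate risk levels: one applied to environment stochasticity and one to sampled teammate behavior. It is a minimal illustration only; the array shapes, the CVaR-style tail operators, and the two-stage collapse are assumptions made for exposition, not DRIMA's actual architecture or training procedure.

```python
# Minimal sketch (not the authors' implementation): applying separate risk
# operators to the two sources of randomness in a distributional value estimate.
import numpy as np

def lower_cvar(quantiles, alpha):
    """Lower-tail CVaR over quantile estimates: mean of the lowest
    ceil(alpha * N) quantiles. alpha = 1.0 recovers the risk-neutral mean;
    smaller alpha is more pessimistic toward that source of randomness."""
    q = np.sort(quantiles, axis=-1)
    k = max(1, int(np.ceil(alpha * q.shape[-1])))
    return q[..., :k].mean(axis=-1)

def upper_cvar(quantiles, alpha):
    """Upper-tail counterpart: mean of the highest ceil(alpha * N) quantiles,
    i.e. an optimistic operator for that source of randomness."""
    q = np.sort(quantiles, axis=-1)
    k = max(1, int(np.ceil(alpha * q.shape[-1])))
    return q[..., -k:].mean(axis=-1)

# Toy return distribution: for each of 4 sampled teammate joint actions,
# 32 quantiles over environment transition/reward randomness.
rng = np.random.default_rng(0)
returns = rng.normal(loc=1.0, scale=0.5, size=(4, 32))

# Risk-neutral toward the environment (alpha_env = 1.0), optimistic toward
# teammates (upper-tail alpha_team = 0.25): first collapse the environment
# axis, then apply the teammate-side operator over the remaining axis.
env_values = lower_cvar(returns, alpha=1.0)   # shape (4,), one value per teammate sample
value = upper_cvar(env_values, alpha=0.25)    # scalar utility used for action selection
print(float(value))
```

Setting alpha to 1.0 on either axis recovers the risk-neutral mean for that source, so a single knob per source controls how optimistic or pessimistic the agent is toward teammates versus the environment.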
Publisher
International Conference on Machine Learning
Issue Date
2022-07-18
Language
English
Citation
The 39th International Conference on Machine Learning, ICML 2022
ISSN
2640-3498
URI
http://hdl.handle.net/10203/301178
Appears in Collection
EE-Conference Papers; AI-Conference Papers