Factored value functions for cooperative multi-agent reinforcement learning

In cooperative multi-agent reinforcement learning, the outcomes of agent-wise policies are highly stochastic due to two sources of risk: (a) random actions taken by teammates and (b) random transitions and rewards. Although the two sources have very distinct characteristics, existing frameworks are insufficient to control the risk-sensitivity of agent-wise policies in a disentangled manner. To this end, we propose Disentangled RIsk-sensitive Multi-Agent reinforcement learning (DRIMA), which controls the risk-sensitivity toward each source separately. For example, our framework allows an agent to be optimistic with respect to teammates (who can prosocially adapt) while remaining risk-neutral with respect to the environment (which does not adapt). Our experiments demonstrate that DRIMA significantly outperforms prior state-of-the-art methods across various scenarios in the StarCraft Multi-Agent Challenge environment. Notably, DRIMA shows robust performance where prior methods learn only a highly suboptimal policy, regardless of reward shaping, exploration scheduling, and the presence of noisy (random or adversarial) agents.
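
To make the disentangled-risk idea above concrete, the following is a minimal Python sketch, not the architecture proposed in the thesis (which this record does not detail): two separate CVaR-style distortions are applied to a table of hypothetical value samples, one risk level collapsing environment randomness and another collapsing teammate randomness. The function name, the distortion form, and the sample shapes are all illustrative assumptions.

import numpy as np

def distorted_mean(samples, risk_level):
    """Risk-distorted average of value samples.

    risk_level < 1.0 : pessimistic, averages only the lowest fraction (CVaR-like)
    risk_level = 1.0 : risk-neutral, plain mean
    risk_level > 1.0 : optimistic, averages only the highest fraction
    """
    q = np.sort(np.asarray(samples, dtype=float))
    n = len(q)
    if risk_level <= 1.0:
        k = max(1, int(np.ceil(risk_level * n)))                     # keep lowest k samples
        return q[:k].mean()
    k = max(1, int(np.ceil((2.0 - min(risk_level, 2.0)) * n)))       # keep highest k samples
    return q[-k:].mean()

# Hypothetical value estimates for one agent, indexed by
# (sampled teammate joint action, environment outcome quantile).
rng = np.random.default_rng(0)
value_samples = rng.normal(size=(8, 16))

env_risk = 1.0    # risk-neutral toward transitions and rewards
team_risk = 1.8   # optimistic toward teammates

# Collapse environment randomness first, then teammate randomness,
# each with its own risk level -- the "disentangled" part.
per_teammate = np.array([distorted_mean(row, env_risk) for row in value_samples])
utility = distorted_mean(per_teammate, team_risk)
print(utility)

A single shared risk level could not express, for instance, optimism toward teammates combined with neutrality toward the environment, which is exactly the separation the abstract argues for.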
Advisors
신진우 (Jinwoo Shin)
Description
Korea Advanced Institute of Science and Technology: School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2024
Identifier
325007
Language
eng
Description

Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2024.2, [vi, 65 p.]

Keywords

Machine learning; Deep learning; Reinforcement learning; Multi-agent reinforcement learning

URI
http://hdl.handle.net/10203/322170
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1100076&flag=dissertation
Appears in Collection
EE-Theses_Ph.D. (Doctoral Theses)
Files in This Item
There are no files associated with this item.
