Toward risk-based optimistic exploration for cooperative multi-agent reinforcement learning

The multi-agent setting is intricate and unpredictable, since the behaviors of multiple agents influence one another. To address this environmental uncertainty, distributional reinforcement learning algorithms, which capture uncertainty through a distributional output, have been integrated with multi-agent reinforcement learning methods, achieving state-of-the-art performance. However, distributional multi-agent reinforcement learning algorithms still rely on traditional ε-greedy exploration, which does not take the cooperative strategy into account. In this paper, we present a risk-based exploration method that leads to collaboratively optimistic behavior by shifting the sampling region of the return distribution. Initially, we take expectations over the upper quantiles of the state-action values, which correspond to optimistic actions, and gradually shift the sampling region toward the full distribution for exploitation. By ensuring that each agent is exposed to the same level of risk, we can induce the agents to take cooperatively optimistic actions. Our method shows remarkable performance in multi-agent settings requiring cooperative exploration, by virtue of the risk property of quantile regression.
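The abstract's core idea, selecting actions by averaging only the upper quantiles of a quantile-regression value distribution and annealing the sampling region toward the full distribution, can be sketched as follows. This is a minimal illustration, not the thesis's exact formulation: the function names, the linear annealing schedule, and the shape of the quantile array are all assumptions made for the example.

```python
import numpy as np

def risk_sensitive_action(quantiles, risk_level):
    """Pick an action by averaging only the upper quantiles of its return distribution.

    quantiles: array of shape (n_actions, n_quantiles), sorted ascending per action.
    risk_level: in [0, 1]. 0 uses only the topmost quantile (most optimistic);
    1 averages the full distribution (risk-neutral exploitation).
    """
    n_actions, n_quantiles = quantiles.shape
    # Lower bound of the sampling region: starts at the top of the
    # distribution and slides down to 0 as risk_level grows.
    start = int((1.0 - risk_level) * (n_quantiles - 1))
    optimistic_values = quantiles[:, start:].mean(axis=1)
    return int(np.argmax(optimistic_values))

def anneal_risk(step, total_steps):
    """Linearly shift the sampling region toward the full distribution (hypothetical schedule)."""
    return min(1.0, step / total_steps)
```

With a shared annealing schedule, every agent evaluates its quantiles at the same risk level each step, which is what exposes all agents to the same degree of optimism.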
Advisors
Yun, Seyoung
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): Kim Jaechul Graduate School of AI, 2023.2, [iv, 28 p.]

Keywords

Distributional reinforcement learning; Exploration; Multi-agent learning; Risk; Uncertainty

URI
http://hdl.handle.net/10203/308197
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1032332&flag=dissertation
Appears in Collection
AI-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
