DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Sung, Youngchul | - |
dc.contributor.advisor | 성영철 | - |
dc.contributor.author | Kim, Woojun | - |
dc.date.accessioned | 2019-09-04T02:42:17Z | - |
dc.date.available | 2019-09-04T02:42:17Z | - |
dc.date.issued | 2018 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=733996&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/266814 | - |
dc.description | Master's thesis - KAIST : School of Electrical Engineering, 2018.2, [iii, 24 p.] | - |
dc.description.abstract | We consider the problem of cooperative multi-agent reinforcement learning in partially observable environments, where coordination between agents is essential. In this thesis, we introduce compressed feature vectors for communication between agents and show how to design a decentralized network that uses them. We also introduce and apply a group dropout layer to train an ensemble of sub-networks efficiently, and evaluate the proposed network on pursuit, a standard task in multi-agent systems. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Multi-Agent Reinforcement Learning; Compressed feature vector; Group dropout | - |
dc.subject | 다중 에이전트 강화학습; 축약된 특징 벡터; 그룹 드롭아웃 | - |
dc.title | (The) architecture of decentralized multi-agent reinforcement learning with communication | - |
dc.title.alternative | 통신을 활용한 분산 다중 에이전트 강화학습 구조 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | KAIST : School of Electrical Engineering | - |
dc.contributor.alternativeauthor | 김우준 | - |
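The abstract describes a group dropout layer that drops whole groups of units so that each training pass updates one random sub-network of a shared ensemble. The record gives no implementation details, so the following is a minimal NumPy sketch under stated assumptions: the function name, the per-group (rather than per-unit) masking, the group count, and the inverted-dropout rescaling are all illustrative choices, not taken from the thesis.

```python
import numpy as np

def group_dropout(x, num_groups=4, drop_prob=0.5, rng=None, training=True):
    """Zero out entire groups of features instead of individual units.

    Illustrative sketch only: `x` has shape (batch, features), and
    `features` must divide evenly into `num_groups`. Each call keeps
    a random subset of groups, so parameter updates flow through one
    random sub-network of the shared ensemble.
    """
    if not training or drop_prob == 0.0:
        return x
    rng = np.random.default_rng() if rng is None else rng
    batch, feat = x.shape
    assert feat % num_groups == 0, "features must split evenly into groups"
    # One keep/drop decision per group, shared across the batch.
    keep = rng.random(num_groups) >= drop_prob
    if not keep.any():
        # Keep at least one group active so the forward pass is nontrivial.
        keep[rng.integers(num_groups)] = True
    mask = np.repeat(keep, feat // num_groups).astype(x.dtype)
    # Inverted-dropout scaling keeps the expected activation magnitude
    # unchanged between training and evaluation.
    return x * mask / keep.mean()
```

At evaluation time (`training=False`) the layer is the identity, so all sub-networks are implicitly averaged through the shared weights, which is the usual dropout-as-ensemble interpretation.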
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.