DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Kim, Kee-Eung | - |
dc.contributor.advisor | 김기응 | - |
dc.contributor.author | Seo, Seokin | - |
dc.date.accessioned | 2021-05-11T19:34:09Z | - |
dc.date.available | 2021-05-11T19:34:09Z | - |
dc.date.issued | 2019 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=875463&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/283087 | - |
dc.description | Thesis (Master's) - KAIST : School of Computing, 2019.8, [iii, 16 p.] | - |
dc.description.abstract | In recent years, deep reinforcement learning, which leverages deep neural networks to train reinforcement learning agents, has achieved successful results in control tasks, and the need to understand and explain the learning processes and outcomes of deep reinforcement learning has also been raised. In this thesis, we extend the notion of minimal sufficient explanation, which explains the behavior of an agent trained by tabular reinforcement learning, to continuous state spaces, in order to explain the behavior of deep reinforcement learning agents. Furthermore, we propose a novel explanation generation algorithm for deep reinforcement learning policies based on the extended notion, and show how efficiently our method explains the behavior of reinforcement learning agents by comparing it against naive baseline algorithms. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Reinforcement learning; interpretable machine learning; explainability for machine learning | - |
dc.subject | 강화학습; 기계학습의 이해가능성; 설명가능한 기계학습 | - |
dc.title | (A) study on generating explanations for reinforcement learning policies in control tasks | - |
dc.title.alternative | 제어 문제를 위한 강화학습 정책의 설명 생성에 관한 연구 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | 한국과학기술원 : 전산학부 | - |
dc.contributor.alternativeauthor | 서석인 | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.