DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Kim, Kee-Eung | - |
dc.contributor.advisor | 김기응 | - |
dc.contributor.author | Ham, Donghoon | - |
dc.date.accessioned | 2022-04-27T19:31:51Z | - |
dc.date.available | 2022-04-27T19:31:51Z | - |
dc.date.issued | 2021 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=948469&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/296096 | - |
dc.description | Master's thesis - Korea Advanced Institute of Science and Technology (KAIST) : School of Computing, 2021.2, [iii, 14 p.] | - |
dc.description.abstract | In this paper, we propose a method for combining a generalized neural-network-based policy with an existing search-based planner to solve probabilistic planning problems with large state and action spaces. A policy built on a graph neural network is trained to imitate the search-based planner on small planning problems; the learned policy then guides the planner's search direction on large planning problems that the original planner cannot solve. Compared with the original planner and with policy learning based on reinforcement learning, the proposed framework is shown to improve the planner's performance. Our work can also serve as a baseline in the field of deep-learning-based automated planning. | - |
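The abstract describes using a learned neural policy to guide a search-based planner's action selection. A common way to realize this is a PUCT-style selection rule in Monte-Carlo Tree Search, where the policy's action probabilities act as exploration priors. The sketch below is illustrative only, under that assumption; the class and method names (`Node`, `select_action`, `c_puct`) are hypothetical and not taken from the thesis.

```python
import math

class Node:
    """One MCTS node whose action selection is biased by a learned policy prior."""

    def __init__(self, priors):
        # priors: dict mapping action -> probability from the neural policy
        self.priors = priors
        self.visit_count = {a: 0 for a in priors}
        self.value_sum = {a: 0.0 for a in priors}

    def select_action(self, c_puct=1.0):
        # Total visits to this node (+1 so the exploration term is nonzero
        # before any child has been visited).
        total = sum(self.visit_count.values()) + 1

        def score(a):
            n = self.visit_count[a]
            # Mean action value from simulations so far (0 if unvisited).
            q = self.value_sum[a] / n if n > 0 else 0.0
            # Exploration bonus weighted by the policy prior: actions the
            # neural policy favors are tried earlier, steering the search.
            u = c_puct * self.priors[a] * math.sqrt(total) / (1 + n)
            return q + u

        return max(self.priors, key=score)

    def update(self, action, value):
        # Back up a simulation result for the chosen action.
        self.visit_count[action] += 1
        self.value_sum[action] += value
```

With uniform values, selection initially follows the prior; as visit counts and backed-up values accumulate, the empirical action values take over, which is how the learned policy guides rather than replaces the planner's search.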
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Probabilistic Planning; Monte-Carlo Tree Search; Reinforcement Learning | - |
dc.subject | 확률적 계획; 몬테칼로 트리 탐색; 강화 학습 | - |
dc.title | Improving probabilistic planner with generalized neural policy | - |
dc.title.alternative | 일반화된 신경망 정책 기반 확률적 계획법 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | 한국과학기술원 : 전산학부 | - |
dc.contributor.alternativeauthor | 함동훈 | - |