DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Shin, Hayong | - |
dc.contributor.advisor | 신하용 | - |
dc.contributor.author | Kim, Donghyun | - |
dc.date.accessioned | 2019-08-22T02:42:52Z | - |
dc.date.available | 2019-08-22T02:42:52Z | - |
dc.date.issued | 2018 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=734301&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/264741 | - |
dc.description | Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST) : Department of Industrial and Systems Engineering, 2018.2, [v, 70 p.] | - |
dc.description.abstract | This thesis addresses optimal decision problems arising in various dynamic, nonlinear, and stochastic operations management settings. In each case, sequential decision making is modeled as a Markov Decision Process, and we discuss how to solve it. The study covers four problems: 1) optimizing a multistage university admission decision process; 2) an efficient approximate solution for stochastic Lanchester models; 3) the optimal fire allocation strategy in network-centric warfare; and 4) learning job dispatching rules via supervised learning. Although each problem has its own characteristics and application domain, the common thread is that we formulate the decision problem, propose a new model for it, and show that the resulting decision-making solution outperforms existing methods. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Markov Decision Process; Dynamic Programming; Reinforcement Learning; Stochastic Modeling; Supervised Learning; Simulation Optimization | - |
dc.subject | 마르코브 의사결정 모형; 동적 계획법; 강화 학습; 확률적 모델링; 지도 학습; 시뮬레이션 최적화 | - |
dc.title | Optimal decision making under uncertainty in various operations management problem | - |
dc.title.alternative | 다양한 운영 관리 문제에 대한 불확실성 하에서의 최적 의사 결정 | - |
dc.type | Thesis (Ph.D.) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | Korea Advanced Institute of Science and Technology (KAIST) : Department of Industrial and Systems Engineering | - |
dc.contributor.alternativeauthor | 김동현 | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.