DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Yun, Se-Young | - |
dc.contributor.advisor | 윤세영 | - |
dc.contributor.advisor | Yun, Chulhee | - |
dc.contributor.advisor | 윤철희 | - |
dc.contributor.author | Lee, Junghyun | - |
dc.date.accessioned | 2023-06-22T19:31:18Z | - |
dc.date.available | 2023-06-22T19:31:18Z | - |
dc.date.issued | 2023 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1032335&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/308198 | - |
dc.description | Master's thesis - KAIST (Korea Advanced Institute of Science and Technology) : Kim Jaechul Graduate School of AI, 2023.2, [v, 114 p.] | - |
dc.description.abstract | We consider the problem of model estimation in episodic Block MDPs. In these MDPs, the decision maker has access to rich observations or contexts generated from a small number of latent states. We are interested in estimating the latent state decoding function (the mapping from the observations to latent states) based on data generated under a fixed behavior policy. We derive an information-theoretical lower bound on the error rate for estimating this function and present an algorithm approaching this fundamental limit. In turn, our algorithm also provides estimates of all the components of the MDP. We apply our results to the problem of learning near-optimal policies in the reward-free setting. Based on our efficient model estimation algorithm, we show that one can infer a policy converging (as the number of collected samples grows large) to the optimal policy at the best possible asymptotic rate. Our analysis provides necessary and sufficient conditions under which exploiting the block structure yields improvements in the sample complexity for identifying near-optimal policies. When these conditions are met, the sample complexities in the offline reward-free setting are improved by a multiplicative factor $n$, where $n$ is the number of contexts. | - |
dc.language | eng | - |
dc.publisher | Korea Advanced Institute of Science and Technology (KAIST) | - |
dc.subject | Block Markov decision process; Clustering; Information theory; Change-of-measure; Markov chain; Mixing time; Concentration inequality; Asymptotic analyses; Offline RL; Reward-free RL | - |
dc.subject | 블록 마르코프 결정 과정; 군집화; 정보이론; 측도 변동; 마르코프 체인; 혼합 시간; 집중 부등식; 점근적 분석; 오프라인 강화학습; 보상없는 강화학습 | - |
dc.title | Near-optimal clustering in block Markov decision processes | - |
dc.title.alternative | 블락 마르코프 결정 과정에서의 거의 최적의 군집화 알고리즘 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | KAIST : Kim Jaechul Graduate School of AI | - |
dc.contributor.alternativeauthor | 이정현 | - |
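The abstract describes estimating the latent state decoding function of a Block MDP (the map from contexts to latent states) from data collected under a fixed behavior policy. The sketch below is a minimal, illustrative NumPy toy, not the thesis's algorithm: it simulates a Block MDP in which each of `S` latent states emits contexts from its own block, then recovers the decoding function by clustering contexts whose empirical next-context distributions are close in L1 distance. All names, sizes, and the clustering threshold are assumptions chosen for the toy.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy Block MDP: S latent states, each emitting `per_block` distinct contexts ---
S, per_block = 3, 3
n = S * per_block                       # total number of contexts
block = np.repeat(np.arange(S), per_block)  # true decoding function f: context -> latent state

# Latent transition matrix (rows distinct so blocks are statistically separable)
P = np.full((S, S), 0.1)
np.fill_diagonal(P, 0.8)

# --- Generate a trajectory under a fixed behavior policy (here: trivial, uncontrolled) ---
T = 20_000
counts = np.zeros((n, n))
s = 0
c = s * per_block + rng.integers(per_block)  # emit a context uniformly from the block
for _ in range(T):
    s_next = rng.choice(S, p=P[s])
    c_next = s_next * per_block + rng.integers(per_block)
    counts[c, c_next] += 1
    s, c = s_next, c_next

# --- Estimate the decoding function by clustering empirical transition profiles ---
# Contexts in the same block share the same latent transition row, so their
# empirical next-context distributions converge to the same vector.
profiles = counts / counts.sum(axis=1, keepdims=True)
labels = -np.ones(n, dtype=int)
reps = []                               # one representative profile per cluster
for i in range(n):
    for k, r in enumerate(reps):
        if np.abs(profiles[i] - r).sum() < 0.5:  # L1 threshold (assumption)
            labels[i] = k
            break
    else:
        labels[i] = len(reps)
        reps.append(profiles[i])

print("recovered labels:", labels)
print("true blocks:     ", block)
```

With enough samples the recovered labels match the true blocks up to a permutation of cluster names; the L1 gap between profiles of different blocks here is about 1.4, while the within-block sampling noise is an order of magnitude smaller, which is the kind of separation condition under which clustering-based model estimation succeeds.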