DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Kim, Daeyoung | - |
dc.contributor.advisor | 김대영 | - |
dc.contributor.author | Hong, Inpyo | - |
dc.date.accessioned | 2021-05-12T19:44:07Z | - |
dc.date.available | 2021-05-12T19:44:07Z | - |
dc.date.issued | 2020 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=924407&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/284368 | - |
dc.description | Doctoral dissertation - Korea Advanced Institute of Science and Technology (KAIST) : School of Computing, 2020.8, [v, 73 p.] | - |
dc.description.abstract | In this thesis, we address two important estimation frameworks, sequential Monte Carlo and deep neural networks, for the problems of recursive Bayesian estimation and natural image denoising, and propose ensemble strategies that improve their estimation efficiency. For sequential Monte Carlo, previous resampling schemes have two drawbacks: first, they discard so many samples that ensemble diversity is reduced; second, they lose so much weight information that the approximation error becomes significant. As a result, sequential Monte Carlo methods suffer from low sampling efficiency. We therefore introduce a novel resampling scheme designed to overcome both drawbacks. The proposed scheme uses a deterministic approach to decrease the number of removed samples and retains weights so that the loss of weight information is minimized. In doing so, it increases sample diversity and approximation performance, and thus improves the sampling efficiency of the methods; we demonstrate these results through two recursive Bayesian estimation examples. For deep neural networks, we propose a novel ensemble strategy that exploits multiple deep neural networks for efficient deep learning of image denoising. To learn the high diversity of natural image patches and noise distributions, we divide the denoising task into several local subtasks according to the complexity of image patches and conquer each subtask with a network trained on its local space. At test time, we combine the local subtasks by applying the set of networks to each noisy patch as a weighted mixture. Using locally learned networks based on patch complexity effectively decreases the diversity of image patches seen by each single network, and their adaptively weighted mixture over the input combines the local subtasks efficiently. Extensive experimental results demonstrate that our strategy outperforms previous methods with far fewer training samples and trainable parameters, and thus improves learning efficiency. | - |
dc.language | eng | - |
dc.publisher | Korea Advanced Institute of Science and Technology (KAIST) | - |
dc.subject | Sequential Monte Carlo; Resampling; Recursive Bayesian Estimation; Autoencoders; Deep Neural Networks; Local Experts; Image Denoising; Patch Complexity; Ensemble Selection; Efficiency | - |
dc.subject | 순차 몬테 칼로법 | - |
dc.subject | 표본 재추출 | - |
dc.subject | 재귀 베이지안 추정 | - |
dc.subject | 오토인코더 | - |
dc.subject | 심층 신경망 | - |
dc.subject | 국소 심층 학습 | - |
dc.subject | 영상 잡음 제거 | - |
dc.subject | 영상 패치 복잡도 | - |
dc.subject | 앙상블 선택 | - |
dc.subject | 효율성 | - |
dc.title | Ensemble strategies for efficient deep learning of image denoising and improving sequential Monte Carlo methods | - |
dc.title.alternative | 영상 잡음 제거의 효율적 심층 학습과 순차 몬테 칼로법 향상을 위한 앙상블 기법 | - |
dc.type | Thesis(Ph.D) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | Korea Advanced Institute of Science and Technology (KAIST) : School of Computing | - |
dc.contributor.alternativeauthor | 홍인표 | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
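The abstract describes a resampling scheme that is deterministic and preserves sample diversity. The thesis's exact algorithm is not given in this record; as an illustration of that general idea, the sketch below implements standard residual resampling, in which each particle is first copied a deterministic number of times and only the fractional remainder is drawn stochastically, so fewer distinct particles are removed than under plain multinomial resampling. The function name and signature are illustrative, not from the thesis.

```python
import numpy as np

def residual_resample(weights, rng=None):
    """Residual resampling: an illustrative deterministic-first scheme
    (not the thesis's algorithm). Each particle i is copied
    floor(N * w_i) times; only the leftover slots are filled by
    sampling from the normalized fractional residuals."""
    rng = np.random.default_rng(rng)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    n = len(w)
    copies = np.floor(n * w).astype(int)      # deterministic part
    indices = np.repeat(np.arange(n), copies)
    r = n - copies.sum()                      # remaining slots to fill
    if r > 0:
        residual = n * w - copies             # fractional remainders
        residual = residual / residual.sum()
        extra = rng.choice(n, size=r, p=residual)
        indices = np.concatenate([indices, extra])
    return indices
```

With uniform weights the stochastic part vanishes entirely and every particle survives exactly once, which is the diversity-preserving behavior the abstract argues for.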
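For the denoising side, the abstract combines locally trained networks as an adaptively weighted mixture keyed to patch complexity. A minimal sketch of that combination rule follows; the complexity measure (local variance here), the softmax weighting, and all names (`patch_complexity`, `mixture_denoise`, `centers`, `temperature`) are illustrative assumptions, not the thesis's actual design, and the "experts" are stand-in callables rather than trained networks.

```python
import numpy as np

def patch_complexity(patch):
    # Local variance as a simple stand-in for the thesis's
    # (unspecified here) patch-complexity measure.
    return float(np.var(patch))

def mixture_denoise(noisy_patch, experts, centers, temperature=1.0):
    """Apply all local experts to a noisy patch and blend their
    outputs with softmax weights: experts whose complexity center
    is closest to the input patch's complexity dominate."""
    c = patch_complexity(noisy_patch)
    d = -np.array([(c - m) ** 2 for m in centers]) / temperature
    w = np.exp(d - d.max())          # stable softmax over experts
    w = w / w.sum()
    outputs = np.stack([f(noisy_patch) for f in experts])
    return np.tensordot(w, outputs, axes=1)  # weighted mixture
```

Because every expert sees only patches near its own complexity level during training, each network faces a narrower patch distribution, while the weighted mixture at test time covers the full range of inputs.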