Ensemble strategies for efficient deep learning of image denoising and improving sequential Monte Carlo methods

In this thesis, we address two important estimation frameworks, sequential Monte Carlo and deep neural networks, for the problems of recursive Bayesian estimation and natural image denoising, and propose ensemble strategies that improve their estimation efficiency.

For sequential Monte Carlo, previous resampling schemes have two drawbacks. First, they remove so many samples that ensemble diversity is reduced. Second, they discard so much weight information that the approximation error becomes significant. As a result, sequential Monte Carlo methods suffer from low sampling efficiency. We therefore introduce a novel resampling scheme designed to overcome both drawbacks: it takes a deterministic approach to reduce the number of removed samples and keeps weights so that the loss of weight information is minimized. In doing so, the scheme increases sample diversity and approximation performance, and thus improves the sampling efficiency of these methods. We demonstrate these results on two recursive Bayesian estimation examples.

For deep neural networks, we propose a novel ensemble strategy that exploits multiple networks for efficient deep learning of image denoising. To learn the high diversity of natural image patches and noise distributions, we divide the denoising task into several local subtasks according to the complexity of image patches and conquer each subtask with a network trained on its local space. At test time, we combine the local subtasks by applying the set of networks to each noisy patch as a weighted mixture. Using locally learned networks based on patch complexity effectively reduces the diversity of image patches each network must handle, and the adaptively weighted mixture over the input combines the local subtasks efficiently. Extensive experiments demonstrate that our strategy outperforms previous methods with far fewer training samples and trainable parameters, and thus improves learning efficiency.
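The two strategies can be illustrated at a high level with the following Python sketches. Both are minimal illustrations under our own simplifying assumptions rather than the exact algorithms of the thesis; every function name, the patch-complexity measure, and the mixture-weighting rule below are hypothetical.

The first sketch shows a resampling step in the spirit of the deterministic, weight-preserving idea above: the highest-weight particles are kept deterministically together with their weights, and only the low-weight remainder is resampled stochastically.

```python
import numpy as np

def partial_deterministic_resample(particles, weights, keep_frac=0.5, rng=None):
    """Illustrative resampling sketch (not the thesis's exact scheme).

    Deterministically retains the highest-weight particles, preserving their
    weights, and multinomially resamples only the low-weight remainder, so
    fewer samples are discarded and less weight information is lost than in
    full multinomial resampling.
    """
    rng = np.random.default_rng() if rng is None else rng
    particles = np.asarray(particles)
    weights = np.asarray(weights, dtype=float)
    n = len(weights)
    n_keep = min(int(keep_frac * n), n - 1)   # leave at least one slot to resample

    order = np.argsort(weights)[::-1]         # particle indices, heaviest first
    kept = order[:n_keep]                     # deterministically retained particles
    rest = order[n_keep:]                     # low-weight pool to resample from

    # Fill the remaining slots by sampling from the low-weight pool, proportional to weight.
    rest_w = weights[rest] / weights[rest].sum()
    drawn = rng.choice(rest, size=n - n_keep, replace=True, p=rest_w)

    new_particles = particles[np.concatenate([kept, drawn])]

    # Kept particles keep their original weights; resampled ones share the
    # leftover weight mass uniformly, and everything is renormalized.
    new_weights = np.empty(n)
    new_weights[:n_keep] = weights[kept]
    new_weights[n_keep:] = weights[rest].sum() / (n - n_keep)
    new_weights /= new_weights.sum()
    return new_particles, new_weights
```

The second sketch shows the test-time combination of locally trained denoisers as an adaptively weighted mixture. Each expert is assumed to have been trained on patches from one complexity region; the gradient-based complexity proxy and the softmax weighting over complexity distances are illustrative choices, not the thesis's formulation.

```python
import numpy as np

def patch_complexity(patch):
    """Hypothetical complexity proxy: spread of the local image gradients."""
    gy, gx = np.gradient(patch.astype(np.float64))
    return float(np.sqrt(gx.var() + gy.var()))

def denoise_patch(noisy_patch, experts, expert_centers, temperature=1.0):
    """Apply a set of locally trained denoisers to one noisy patch as a
    weighted mixture.

    `experts` is a list of callables (networks trained on different
    patch-complexity regions); `expert_centers` gives the representative
    complexity of each expert's training region.
    """
    c = patch_complexity(noisy_patch)
    dists = np.array([abs(c - m) for m in expert_centers])

    # Softmax over negative complexity distance: experts trained on patches
    # of similar complexity get larger mixture weights.
    logits = -dists / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()

    outputs = np.stack([f(noisy_patch) for f in experts])  # shape (K, H, W)
    return np.tensordot(weights, outputs, axes=1)          # weighted average, shape (H, W)
```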
Advisors
Kim, Daeyoung (김대영)
Description
KAIST : School of Computing
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2020
Identifier
325007
Language
eng
Description

Doctoral thesis (Ph.D.) - KAIST : School of Computing, 2020.8, [v, 73 p.]

Keywords

Sequential Monte Carlo; Resampling; Recursive Bayesian Estimation; Autoencoders; Deep Neural Networks; Local Experts; Image Denoising; Patch Complexity; Ensemble Selection; Efficiency

URI
http://hdl.handle.net/10203/284368
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=924407&flag=dissertation
Appears in Collection
CS-Theses_Ph.D. (Doctoral Theses)
Files in This Item
There are no files associated with this item.
