Robust training with ensemble consensus

Deep neural networks optimized with gradient-based methods exhibit two distinct behaviors when trained on poorly annotated datasets: generalization in the early stage and memorization in the later stage. We analyze these two behaviors by measuring the similarity of the patterns learned by the networks in an ensemble. From this analysis, we find that during generalization, some correctly annotated examples incur small training losses on all networks in the ensemble, while wrongly annotated examples do not. Based on this finding, we propose a robust training method, termed learning with ensemble consensus (LEC), in which an ensemble of networks is trained on the examples that incur small training losses on all networks in the ensemble. The proposed method effectively removes noisy examples from training batches, yielding robustness even on highly corrupted datasets.
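The consensus idea in the abstract — keep only examples whose training loss is small on every network in the ensemble — can be sketched as a batch filter. This is a minimal illustration, not the thesis's exact procedure: the function name `consensus_filter`, the per-network `keep_ratio` threshold, and the toy loss values are all assumptions introduced here for clarity.

```python
import numpy as np

def consensus_filter(losses, keep_ratio):
    """Return indices of examples with small loss on ALL networks.

    losses: (num_networks, batch_size) array of per-example training losses.
    keep_ratio: fraction of the batch each network treats as 'small-loss'.
    """
    num_networks, batch_size = losses.shape
    k = int(keep_ratio * batch_size)
    # For each network, mark its k smallest-loss examples.
    small = np.zeros_like(losses, dtype=bool)
    for i in range(num_networks):
        idx = np.argsort(losses[i])[:k]
        small[i, idx] = True
    # Consensus: keep only examples marked small-loss by every network.
    return np.flatnonzero(small.all(axis=0))

# Toy batch: 6 examples, 3 networks. Examples 4 and 5 mimic
# wrongly annotated data, incurring large loss on every network.
losses = np.array([
    [0.1, 0.2, 0.3, 0.4, 2.0, 3.0],
    [0.2, 0.1, 0.4, 0.3, 2.5, 2.8],
    [0.3, 0.2, 0.1, 0.5, 1.9, 3.1],
])
kept = consensus_filter(losses, keep_ratio=0.5)  # → array([0, 1])
```

Only the examples that land in every network's small-loss set survive; the gradient update for each network would then be computed on `kept` alone.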
Advisors
Chung, Sae-Young (정세영)
Description
Korea Advanced Institute of Science and Technology (KAIST), School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2019
Identifier
325007
Language
eng
Description

Master's thesis - KAIST, School of Electrical Engineering, 2019.8, [iii, 19 p.]

Keywords

Deep neural network; label corruption; ensemble; representational similarity; gradient-based optimization

URI
http://hdl.handle.net/10203/283050
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=875344&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
