Continual learning of deep neural networks under strict constraints

In this dissertation, we discuss continual learning methods that can be used practically, not only in common situations but also under several strict constraints. Specifically, (1) source task information should not be stored in external memory, (2) the network size should remain constant, and (3) past and future task data should not be used for hyperparameter selection. First, we introduce a residual continual learning method that satisfies the first and second conditions. It avoids unnecessary weight changes through residual reparameterization and a decay loss, but past data must still be used for hyperparameter selection. To remove this dependence, we construct a quadratic penalty method that satisfies the second and third conditions; it approximates the original loss function with the extended Kronecker-factored approximate curvature. Finally, by reconstructing data from a trained network through statistics matching and by regularizing the outputs and weights jointly, we propose a standalone continual learning method that satisfies all three conditions.
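For illustration only, the sketch below shows the general shape of a quadratic-penalty regularizer for continual learning: the new task's weights are kept close to the source-task weights, weighted by an estimate of the source-task loss curvature. This is a minimal diagonal (squared-gradient) simplification written in PyTorch, not the dissertation's extended Kronecker-factored approximate curvature; the class name QuadraticPenalty, the data_loader argument, and the strength parameter are assumed names for this example.

```python
# Minimal sketch of a quadratic-penalty continual-learning regularizer.
# NOTE: uses a diagonal squared-gradient curvature estimate for illustration;
# the dissertation's method builds the penalty from an extended
# Kronecker-factored approximate curvature (block-wise, not diagonal).
import torch
import torch.nn.functional as F


class QuadraticPenalty:
    def __init__(self, model, data_loader, device="cpu"):
        # Anchor weights: the parameters after training on the source task.
        self.anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
        # Diagonal curvature estimate accumulated from squared gradients.
        self.curvature = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        model.eval()
        for x, y in data_loader:
            x, y = x.to(device), y.to(device)
            model.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    self.curvature[n] += p.grad.detach() ** 2
        for n in self.curvature:
            self.curvature[n] /= max(len(data_loader), 1)

    def penalty(self, model, strength=1.0):
        # Quadratic penalty on the deviation from the source-task weights,
        # weighted by the estimated curvature of the source-task loss.
        reg = 0.0
        for n, p in model.named_parameters():
            reg = reg + (self.curvature[n] * (p - self.anchor[n]) ** 2).sum()
        return strength * reg
```

When training on the next task, the total objective would then be the new-task loss plus this penalty, e.g. `loss = task_loss + qp.penalty(model, strength=100.0)`, so no source-task data needs to be kept once the curvature estimate is stored.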
Advisors
Kim, Junmo
Description
Korea Advanced Institute of Science and Technology : School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2021
Identifier
325007
Language
eng
Description

Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology : School of Electrical Engineering, 2021.8, [v, 52 p.]

Keywords

Deep learning; Continual learning; Machine learning; Curvature approximation

URI
http://hdl.handle.net/10203/295628
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=962475&flag=dissertation
Appears in Collection
EE-Theses_Ph.D. (Doctoral dissertations)
Files in This Item
There are no files associated with this item.
