Forget-free subnetworks for life-long learning

Inspired by the Regularized Lottery Ticket Hypothesis (RLTH), which states that competitive smooth (non-binary) subnetworks exist within a dense network in continual learning tasks, we investigate two proposed architecture-based continual learning methods that sequentially learn and select adaptive binary subnetworks (WSN) and non-binary soft subnetworks (SoftNet) for each task. WSN jointly learns the model weights and the task-adaptive binary masks of the subnetworks associated with each task, while attempting to select a small set of weights to be activated (a winning ticket) by reusing weights of prior subnetworks. SoftNet jointly learns the regularized model weights and task-adaptive non-binary masks.

The strengths of WSN and SoftNet were demonstrated in three continual learning scenarios. In the first, Task Incremental Learning (TIL), WSN and SoftNet are inherently immune to catastrophic forgetting because each selected subnetwork does not infringe upon other subnetworks; moreover, we observed that compressing the subnetworks in TIL minimizes the model capacity. Surprisingly, at inference time, a SoftNet generated by injecting small noise into the background weights of an acquired WSN (while holding its foreground weights fixed) provides strong forward transfer to future tasks in TIL. In the second scenario, Few-shot Class Incremental Learning (FSCIL), SoftNet is more effective than WSN at regularizing parameters to tackle the overfitting caused by having only a few examples. In the third scenario, Video Incremental Learning (VIL), WSN showed that the reused subnetwork weights depend on the video contexts and that its video generation quality approaches the upper bound obtained by Multi-Task Learning (MTL).
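To make the subnetwork-selection idea concrete, the following is a minimal PyTorch-style sketch (not the thesis code) of a masked linear layer that keeps the top fraction of learnable scores as a WSN-style binary mask and, for the SoftNet variant, replaces the zeroed background with small random values instead of hard zeros. The class name MaskedLinear, the sparsity and noise_scale parameters, and the omission of a straight-through estimator for training the scores are simplifying assumptions made here for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedLinear(nn.Module):
    """Linear layer whose effective weight is (weight * task mask); illustrative only."""

    def __init__(self, in_features, out_features, sparsity=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.scores = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.weight)
        nn.init.kaiming_uniform_(self.scores)
        self.sparsity = sparsity  # fraction of weights kept for the current task

    def binary_mask(self):
        # WSN-style hard mask: keep the top `sparsity` fraction of scores.
        k = max(1, int(self.scores.numel() * self.sparsity))
        threshold = self.scores.flatten().kthvalue(self.scores.numel() - k + 1).values
        return (self.scores >= threshold).float()

    def soft_mask(self, noise_scale=0.1):
        # SoftNet-style mask: foreground weights stay at 1, background weights
        # get small non-zero values rather than hard zeros.
        hard = self.binary_mask()
        background = noise_scale * torch.rand_like(hard)
        return torch.where(hard.bool(), hard, background)

    def forward(self, x, soft=False):
        mask = self.soft_mask() if soft else self.binary_mask()
        return F.linear(x, self.weight * mask)


layer = MaskedLinear(32, 16, sparsity=0.3)
out = layer(torch.randn(4, 32), soft=True)   # soft mask at inference
print(out.shape)                             # torch.Size([4, 16])

In an actual continual learning loop, one such mask would be stored per task and the weights already claimed by earlier tasks' masks would be frozen, which is how per-task subnetworks avoid interfering with one another.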
Advisors
유창동 (Chang D. Yoo)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Doctoral dissertation - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2023.8, [viii, 83 p.]

Keywords

Continual Learning (CL); Task Incremental Learning (TIL); Few-shot Class Incremental Learning (FSCIL); Regularized Lottery Ticket Hypothesis (RLTH); Winning SubNetworks (WSN); Soft-SubNetwork (SoftNet); Multi-Task Learning (MTL)

URI
http://hdl.handle.net/10203/320938
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1047228&flag=dissertation
Appears in Collection
EE-Theses_Ph.D.(박사논문)
Files in This Item
There are no files associated with this item.
