Forget-free subnetworks for life-long learning

DC Field | Value | Language
dc.contributor.advisor | 유창동 | -
dc.contributor.author | 강해용 | -
dc.contributor.author | Kang, Haeyong | -
dc.date.accessioned | 2024-07-26T19:30:50Z | -
dc.date.available | 2024-07-26T19:30:50Z | -
dc.date.issued | 2023 | -
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1047228&flag=dissertation | en_US
dc.identifier.uri | http://hdl.handle.net/10203/320938 | -
dc.description | Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST), School of Electrical Engineering, 2023.8, [viii, 83 p.] | -
dc.description.abstract | Inspired by the Regularized Lottery Ticket Hypothesis (RLTH), which states that competitive smooth (non-binary) subnetworks exist within a dense network in continual learning tasks, we investigate two proposed architecture-based continual learning methods that sequentially learn and select adaptive binary subnetworks (WSN) and non-binary Soft-Subnetworks (SoftNet) for each task. WSN jointly learns the model weights and the task-adaptive binary masks of the subnetworks associated with each task, while attempting to select a small set of weights to be activated (a winning ticket) by reusing weights of the prior subnetworks. | -
dc.description.abstract | SoftNet jointly learns the regularized model weights and task-adaptive non-binary masks. The strengths of WSN and SoftNet were demonstrated in the following three continual learning scenarios. In the first scenario, Task Incremental Learning (TIL), WSN and SoftNet are inherently immune to catastrophic forgetting because each selected subnetwork does not infringe upon the other subnetworks. Moreover, we observed that subnetwork compression in TIL minimizes the model capacity. Surprisingly, at the inference step, SoftNet, generated by injecting small noise into the background of the acquired WSN (while holding the foreground of WSN), provides excellent forward transfer power for future tasks in TIL. In the second scenario, Few-shot Class Incremental Learning (FSCIL), SoftNet shows its effectiveness over WSN in regularizing parameters to tackle the overfitting caused by a few examples. In the third scenario, Video Incremental Learning (VIL), WSN showed that the reused weights of the subnetworks depend on the video contexts and that its video generation ability is close to the upper bound deduced by Multi-Task Learning (MTL). (A minimal code sketch of the masking idea appears below the metadata fields.) | -
dc.language | eng | -
dc.publisher | 한국과학기술원 (Korea Advanced Institute of Science and Technology, KAIST) | -
dc.subject | 연속 학습 (Life-long, or Continual Learning, CL); 태스크 증분 학습 (Task Incremental Learning, TIL); 소수 증분 학습 (Few-shot Class Incremental Learning, FSCIL); 정규화된 복권 가설 (Regularized Lottery Ticket Hypothesis, RLTH); 하위-네트워크 (Winning SubNetworks, WSN); 정규화된 하위-네트워크 (Soft-SubNetwork, SoftNet); 멀티 태스크 학습 (Multi-Task Learning, MTL) | -
dc.subject | Continual Learning (CL); Task Incremental Learning (TIL); Few-shot Class Incremental Learning (FSCIL); Regularized Lottery Ticket Hypothesis (RLTH); Winning SubNetworks (WSN); Soft-SubNetwork (SoftNet); Multi-Task Learning (MTL) | -
dc.title | Forget-free subnetworks for life-long learning | -
dc.title.alternative | 평생학습을 위한 망각 회피 하위-네트워크 | -
dc.type | Thesis (Ph.D.) | -
dc.identifier.CNRN | 325007 | -
dc.description.department | 한국과학기술원, 전기및전자공학부 (KAIST, School of Electrical Engineering) | -
dc.contributor.alternativeauthor | Yoo, Chang D. | -
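The abstract describes two masking mechanisms: WSN selects a task-adaptive binary mask (a winning ticket) over a dense network, and SoftNet replaces the hard-zero background of that mask with small non-binary values. The following minimal PyTorch sketch is a rough illustration of that idea only, not the thesis implementation; the function names, the magnitude-based importance scores, and the noise scale are all assumptions for illustration.

```python
import torch

def winning_mask(scores: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """WSN-style sketch: binary mask keeping the top keep_ratio of weights by score."""
    k = max(1, int(scores.numel() * keep_ratio))
    # The k-th largest score is the (numel - k + 1)-th smallest.
    threshold = scores.flatten().kthvalue(scores.numel() - k + 1).values
    return (scores >= threshold).float()  # 1 = foreground (winning ticket), 0 = background

def soft_mask(binary_mask: torch.Tensor, noise_scale: float = 0.05) -> torch.Tensor:
    """SoftNet-style sketch: keep the foreground at 1 and replace the hard-zero
    background with small random values instead of pruning it away."""
    noise = noise_scale * torch.rand_like(binary_mask)
    return torch.where(binary_mask.bool(), binary_mask, noise)

# Usage sketch: one dense layer, one task.
weight = torch.randn(128, 64)                  # dense weight matrix
scores = weight.abs()                          # hypothetical importance scores
m_hard = winning_mask(scores, keep_ratio=0.3)  # task-adaptive binary subnetwork (WSN)
m_soft = soft_mask(m_hard)                     # non-binary soft subnetwork (SoftNet)
effective_weight = weight * m_soft             # weights actually used for this task
```

Under this sketch, foreground weights selected for earlier tasks can be reused but not overwritten when later tasks are learned, which is consistent with the abstract's claim that the selected subnetworks do not infringe upon one another; the noisy background is what gives SoftNet the regularization and forward-transfer behavior described above.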
Appears in Collection: EE-Theses_Ph.D.(박사논문)
Files in This Item: There are no files associated with this item.
