Selective forgetting of classes and tasks for deep neural networks

Pruning units such as filters and neurons has been used to address the memory and computational inefficiency of deep neural networks. Existing work mainly focuses on a single task with a static number of classes, but in practice, classes and tasks become obsolete. Using information about obsolescence, I propose a "Selective Forgetting" method that prunes units unnecessary for the classes or tasks of interest. Furthermore, I suggest a novel neural architecture that is advantageous for selective forgetting because it disentangles the contributions of units across classes and tasks. I validate my approach in experiments on class forgetting in a single-task model and task forgetting in continual learning models. Experimental results demonstrate that 1) my pruning method outperforms baselines in terms of the mean and minimum performance of the classes/tasks to be preserved, and 2) my architecture disentangles units and benefits more from selective forgetting than an entangled network.
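
A minimal sketch of the class-selective pruning idea described above, assuming a simple activation-based scoring rule: score each hidden unit by its mean absolute activation per class, then prune units whose contribution to every preserved class is small. The function names, the scoring rule, and the threshold are illustrative assumptions, not the method actually proposed in the thesis (which this record page does not detail).

    # Hypothetical sketch of class-selective pruning; all names and the
    # scoring rule are assumptions for illustration, not the thesis's method.
    import numpy as np

    def unit_scores(activations, labels, n_classes):
        """Per-class contribution scores: an (n_classes, n_units) matrix of
        the mean absolute activation of each unit on each class's examples."""
        scores = np.zeros((n_classes, activations.shape[1]))
        for c in range(n_classes):
            scores[c] = np.abs(activations[labels == c]).mean(axis=0)
        return scores

    def prune_mask(scores, keep_classes, threshold):
        """Boolean keep-mask over units: a unit survives only if its strongest
        contribution among the preserved classes exceeds the threshold, so
        units that served only obsolete classes are forgotten."""
        strongest = scores[keep_classes].max(axis=0)
        return strongest > threshold

    # Toy usage: 1000 samples, 64 hidden units, 10 classes; preserve classes 0-2.
    rng = np.random.default_rng(0)
    acts = np.abs(rng.standard_normal((1000, 64)))
    labels = rng.integers(0, 10, size=1000)
    mask = prune_mask(unit_scores(acts, labels, 10), keep_classes=[0, 1, 2], threshold=0.5)
    pruned = acts[:, mask]  # activations with forgotten units removed

With random toy data the mask keeps most units; in a trained network, units that contribute only to obsolete classes would score low on the preserved classes and be pruned.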
Advisors
Oh, Alice (오혜연)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2019
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology (KAIST), School of Computing, 2019.8, [iv, 23 p.]

Keywords

Deep neural network pruning; efficient inference; class and task obsolescence; continual learning

URI
http://hdl.handle.net/10203/283161
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=876079&flag=dissertation
Appears in Collection
CS-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
