Deep neural network pruning for self-supervised transfer learning

DC Field: Value
dc.contributor.advisor: Yoo, Changdong
dc.contributor.advisor: 유창동
dc.contributor.author: Madjid, Sultan Rizky Hikmawan
dc.date.accessioned: 2023-06-26T19:34:25Z
dc.date.available: 2023-06-26T19:34:25Z
dc.date.issued: 2023
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1032939&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/309976
dc.description: Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2023.2, [vi, 40 p.]
dc.description.abstract: Recent advances in self-supervised learning show promising results on various downstream tasks: self-supervised pretraining can produce visual representations on par with, or better than, supervised ones, but at the cost of extensive pretraining and high model complexity. One way to mitigate these costs is to sparsify the network before training via Pruning-at-Initialization (PaI), which aims to downsize neural networks without a significant loss of accuracy compared to dense, overparameterized models. However, our understanding of PaI methods is limited to supervised learning, where models learn from massive amounts of carefully labeled data. In this work, we first investigate how sparse networks obtained from the criteria of different PaI methods perform under self-supervised pretraining, and how comparable they are to the supervised learning setup. Furthermore, we find that sparse networks trained with self-supervised frameworks perform better, both quantitatively and qualitatively, than their supervised counterparts on various downstream tasks, especially transfer learning.
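The Pruning-at-Initialization idea mentioned in the abstract can be sketched as follows. This is an illustrative SNIP-style connection-saliency criterion (|gradient × weight|) applied to a toy one-layer linear model; it is not code from the thesis, and the layer sizes, data, and sparsity level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-layer linear model y = x @ W with squared-error loss.
x = rng.normal(size=(32, 16))             # a mini-batch of inputs
y = rng.normal(size=(32, 4))              # targets
W = rng.normal(scale=0.1, size=(16, 4))   # weights at initialization

# One backward pass at initialization to obtain dL/dW.
pred = x @ W
grad = x.T @ (pred - y) / len(x)

# SNIP-style saliency |g * w|: an estimate of how much the loss changes
# when a connection is removed. Keep only the top-k most salient weights.
sparsity = 0.9                            # prune 90% of the weights
saliency = np.abs(grad * W)
k = int((1 - sparsity) * W.size)          # number of weights to keep
threshold = np.sort(saliency.ravel())[-k]
mask = (saliency >= threshold).astype(W.dtype)

# Training then proceeds on the sparse network with this mask held fixed.
W_sparse = W * mask
print(f"kept {int(mask.sum())} / {W.size} weights")
```

The key property, as the abstract notes, is that the mask is computed before any training, so the pretraining itself (supervised or self-supervised) runs entirely on the sparse network.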
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (KAIST)
dc.subject: Self-supervised learning; Network pruning; Pruning-at-initialization; Unsupervised representation learning; Transfer learning; Computer vision
dc.title: Deep neural network pruning for self-supervised transfer learning
dc.title.alternative: 자기 지도 전이 학습을 위한 심층 신경망 가지치기
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering
dc.contributor.alternativeauthor: 마드지드 술탄 리즈키 히크마완
Appears in Collection
EE-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
