Meta Learning for the representation change in task-specific update

DC Field: Value
dc.contributor.advisor: Yun, Se-Young
dc.contributor.advisor: 윤세영
dc.contributor.author: Yoo, Hyungjun
dc.date.accessioned: 2022-04-27T19:32:32Z
dc.date.available: 2022-04-27T19:32:32Z
dc.date.issued: 2021
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=948321&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/296218
dc.description: Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): Graduate School of Knowledge Service Engineering, 2021.2, [v, 38 p.]
dc.description.abstract: Model-Agnostic Meta-Learning (MAML) is one of the most representative gradient-based meta-learning algorithms. MAML learns new tasks from a few data samples using inner-loop updates from a meta-initialization point, and learns the meta-initialization parameters with outer-loop updates. It has recently been hypothesized that representation reuse, in which the already-efficient representations change little during adaptation, is the dominant factor in the performance of the MAML-meta-initialized model, in contrast to representation change, which alters the representations significantly. In this study, we investigate the necessity of representation change for the ultimate goal of few-shot learning, which is solving domain-agnostic tasks. To this end, we propose a novel meta-learning algorithm, called BOIL (Body Only update in Inner Loop), which updates only the body (feature extractor) of the model and freezes the head (classifier) during inner-loop updates. BOIL leverages representation change rather than representation reuse. At the start of a new task, a frozen head cannot do better than a random-guessing classifier, so the feature vectors (representations) have to move quickly toward their corresponding frozen head vectors. We visualize this property using cosine similarity, CKA, and empirical results obtained without the head. Although its inner-loop updates hinge purely on representation change, BOIL empirically shows significant performance improvement over MAML, particularly on cross-domain tasks. These results imply that representation change is a critical component of gradient-based meta-learning. (A minimal illustrative sketch of the body-only inner update appears after the metadata listing below.)
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (KAIST)
dc.subject: Meta-Learning; Few-shot Learning; Deep Neural Network
dc.subject: 메타학습; 퓨샷학습; 심층 인공신경망 (Korean: Meta-Learning; Few-shot Learning; Deep Neural Network)
dc.title: Meta Learning for the representation change in task-specific update
dc.title.alternative: 과제별 학습을 통한 특징 변화를 위한 메타 학습 (Meta-learning for representation change through task-specific learning)
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology (KAIST): Graduate School of Knowledge Service Engineering
dc.contributor.alternativeauthor: 유형준
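
The body-only inner update described in the abstract above can be illustrated with a short PyTorch sketch (assuming PyTorch >= 2.0 for torch.func.functional_call). This is a reconstruction based only on the abstract's description, not code from the thesis; the names boil_inner_update, body, and head, as well as the inner learning rate and step count, are placeholder assumptions.

import torch
import torch.nn.functional as F

def boil_inner_update(body, head, support_x, support_y, inner_lr=0.5, inner_steps=1):
    """Adapt only the body on a task's support set; the head stays frozen (BOIL-style).

    `body` is the feature extractor and `head` the linear classifier, both
    torch.nn.Module instances holding the meta-initialized parameters.
    (Hypothetical sketch, not the thesis implementation.)
    """
    # Copy the meta-initialized body parameters so they are not modified in place.
    fast_weights = {name: p.clone() for name, p in body.named_parameters()}

    for _ in range(inner_steps):
        # Forward pass through the body using the current task-specific weights.
        features = torch.func.functional_call(body, fast_weights, (support_x,))
        logits = head(features)                 # frozen head: reused, never updated here
        loss = F.cross_entropy(logits, support_y)

        # Gradients are taken w.r.t. the body parameters only; create_graph=True keeps
        # the graph so an outer loop could later differentiate through this update.
        grads = torch.autograd.grad(loss, list(fast_weights.values()), create_graph=True)
        fast_weights = {name: w - inner_lr * g
                        for (name, w), g in zip(fast_weights.items(), grads)}

    return fast_weights  # task-adapted body parameters; head parameters unchanged

In the outer loop, the query-set loss computed with these adapted body weights and the untouched head would be backpropagated to all meta-parameters, head included; only the inner loop restricts updates to the body.
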
Appears in Collection: KSE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
