Meta-GMVAE: Mixture of Gaussian VAEs for unsupervised meta-learning
비지도 메타학습을 위한 가우시안 혼합 모델 기반의 생성 모델 (A generative model based on Gaussian mixture models for unsupervised meta-learning)

DC Field : Value
dc.contributor.advisor: Hwang, Sung Ju
dc.contributor.advisor: 황성주
dc.contributor.author: Lee, Dong Bok
dc.date.accessioned: 2022-04-15T07:56:34Z
dc.date.available: 2022-04-15T07:56:34Z
dc.date.issued: 2021
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=963742&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/294850
dc.description: Master's thesis - Korea Advanced Institute of Science and Technology (KAIST) : Graduate School of AI, 2021.8, [ii, 17 p.]
dc.description.abstract: Unsupervised learning aims to learn meaningful representations from unlabeled data that capture its intrinsic structure and can be transferred to downstream tasks. Meta-learning, whose objective is to learn to generalize across tasks so that the learned model can rapidly adapt to a novel task, shares the spirit of unsupervised learning in that both seek a learning procedure that is more effective and efficient than learning from scratch. The fundamental difference between the two is that most meta-learning approaches are supervised, assuming full access to the labels. However, acquiring a labeled dataset for meta-training is not only costly, as it requires human labeling effort, but also limits its applications to pre-defined task distributions. In this paper, we propose a principled unsupervised meta-learning model, namely Meta-GMVAE, based on the Variational Autoencoder (VAE) and set-level variational inference. Moreover, we introduce a Gaussian mixture model (GMM) prior, assuming that each modality represents a class concept in a randomly sampled episode, which we optimize with Expectation-Maximization (EM). The learned model can then be used for downstream few-shot classification tasks, where we obtain task-specific parameters by performing semi-supervised EM on the latent representations of the support and query sets, and predict labels of the query set by computing aggregated posteriors. We validate our model on the Omniglot and Mini-ImageNet datasets by evaluating its performance on downstream few-shot classification tasks. The results show that our model obtains impressive performance gains over existing unsupervised meta-learning baselines, even outperforming supervised MAML in a certain setting. (An illustrative sketch of this semi-supervised EM inference step follows this metadata listing.)
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (KAIST)
dc.subject: Unsupervised Learning; Meta-learning; Generative Models; Few-shot Classification; EM algorithm
dc.subject: 비지도 학습 (unsupervised learning); 메타 학습 (meta-learning); 생성 모델 (generative models); 퓨샷 분류 (few-shot classification); 기댓값 최대화 (expectation-maximization)
dc.title: Meta-GMVAE: Mixture of Gaussian VAEs for unsupervised meta-learning
dc.title.alternative: 비지도 메타학습을 위한 가우시안 혼합 모델 기반의 생성 모델 (A generative model based on Gaussian mixture models for unsupervised meta-learning)
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology (KAIST) : Graduate School of AI
dc.contributor.alternativeauthor: 이동복
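
The abstract describes few-shot inference as semi-supervised EM over latent representations, followed by label prediction from aggregated posteriors. The minimal sketch below is not the thesis code; it only illustrates that inference step under simplifying assumptions: a pretrained encoder is assumed to have already produced latent vectors `z_support` and `z_query`, the mixture uses isotropic components with a fixed shared variance and uniform mixing weights, and the function name `fewshot_em_predict` and all of its inputs are hypothetical.

```python
# Minimal sketch (not the authors' implementation): semi-supervised EM over
# latent embeddings for an N-way few-shot episode. Assumes a pretrained
# encoder has already mapped support/query images to latent vectors.
import numpy as np

def fewshot_em_predict(z_support, y_support, z_query, n_way, n_iters=10, var=1.0):
    """Fit an n_way-component Gaussian mixture with EM, clamping the support
    responsibilities to their labels (semi-supervised E-step), then label each
    query point by its posterior over components (aggregated posterior)."""
    # One-hot responsibilities for the labeled support set (kept fixed);
    # y_support is an integer label array in {0, ..., n_way - 1}.
    r_support = np.eye(n_way)[y_support]                                  # (N_s, K)
    # Initialize component means from per-class support averages.
    mu = np.stack([z_support[y_support == k].mean(0) for k in range(n_way)])

    for _ in range(n_iters):
        # E-step on the unlabeled query set: isotropic Gaussian log-likelihoods.
        sq_dist = ((z_query[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # (N_q, K)
        log_p = -0.5 * sq_dist / var
        log_p -= log_p.max(axis=1, keepdims=True)                         # numerical stability
        r_query = np.exp(log_p)
        r_query /= r_query.sum(axis=1, keepdims=True)                     # posteriors (N_q, K)

        # M-step: update means from clamped support and soft query assignments.
        r_all = np.concatenate([r_support, r_query], axis=0)              # (N_s + N_q, K)
        z_all = np.concatenate([z_support, z_query], axis=0)
        mu = (r_all.T @ z_all) / r_all.sum(axis=0)[:, None]

    return r_query.argmax(axis=1)                                          # predicted query labels
```

In the actual model, the latent representations and responsibilities come from Meta-GMVAE's set-level variational inference; this sketch only shows how EM with label-clamped support responsibilities can produce query predictions from posterior responsibilities.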
Appears in Collection
AI-Theses_Master (석사논문, Master's theses)
Files in This Item
There are no files associated with this item.
