Meta-GMVAE: Mixture of Gaussian VAE for Unsupervised Meta-Learning

DC Field | Value | Language
dc.contributor.author | Lee, Dong Bok | ko
dc.contributor.author | Min, Dongchan | ko
dc.contributor.author | Lee, Seanie | ko
dc.contributor.author | Hwang, Sung Ju | ko
dc.date.accessioned | 2021-12-15T06:49:23Z | -
dc.date.available | 2021-12-15T06:49:23Z | -
dc.date.created | 2021-12-01 | -
dc.date.issued | 2021-05-03 | -
dc.identifier.citation | Ninth International Conference on Learning Representations, ICLR 2021 | -
dc.identifier.uri | http://hdl.handle.net/10203/290692 | -
dc.description.abstract | Unsupervised learning aims to learn meaningful representations from unlabeled data that capture its intrinsic structure and can be transferred to downstream tasks. Meta-learning, whose objective is to learn to generalize across tasks so that the learned model can rapidly adapt to a novel task, shares the spirit of unsupervised learning in that both seek a more effective and efficient learning procedure than learning from scratch. The fundamental difference between the two is that most meta-learning approaches are supervised, assuming full access to the labels. However, acquiring a labeled dataset for meta-training is not only costly, as it requires human labeling effort, but also limits its applications to pre-defined task distributions. In this paper, we propose a principled unsupervised meta-learning model, namely Meta-GMVAE, based on the Variational Autoencoder (VAE) and set-level variational inference. Moreover, we introduce a mixture-of-Gaussians (GMM) prior, assuming that each modality represents a class concept in a randomly sampled episode, which we optimize with Expectation-Maximization (EM). The learned model can then be used for downstream few-shot classification tasks, where we obtain task-specific parameters by performing semi-supervised EM on the latent representations of the support and query sets, and predict labels of the query set by computing aggregated posteriors. We validate our model on the Omniglot and Mini-ImageNet datasets by evaluating its performance on downstream few-shot classification tasks. The results show that our model obtains impressive performance gains over existing unsupervised meta-learning baselines, even outperforming supervised MAML in a certain setting. | -
dc.language | English | -
dc.publisher | The International Conference on Learning Representations | -
dc.title | Meta-GMVAE: Mixture of Gaussian VAE for Unsupervised Meta-Learning | -
dc.type | Conference | -
dc.type.rims | CONF | -
dc.citation.publicationname | Ninth International Conference on Learning Representations, ICLR 2021 | -
dc.identifier.conferencecountry | AU | -
dc.identifier.conferencelocation | Virtual | -
dc.contributor.localauthor | Hwang, Sung Ju | -
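
The abstract above describes obtaining task-specific parameters at test time by running semi-supervised EM over the VAE latent representations of a few-shot episode and labeling the query set from the resulting aggregated posteriors. The following is a minimal, hypothetical NumPy sketch of that inference step, not the authors' implementation: it assumes the latent codes have already been produced by an encoder, uses fixed identity covariances, and the function name semi_supervised_em and all array shapes are invented for illustration.

import numpy as np

def semi_supervised_em(z_support, y_support, z_query, n_way, n_iter=10):
    """Hypothetical sketch: fit per-class Gaussian means to support/query
    latents with semi-supervised EM and return soft class posteriors for
    the query set (identity covariances assumed for simplicity)."""
    # Initialize each class mean from the labeled support latents.
    mu = np.stack([z_support[y_support == k].mean(axis=0) for k in range(n_way)])
    onehot = np.eye(n_way)[y_support]  # fixed responsibilities for labeled points
    for _ in range(n_iter):
        # E-step: soft class responsibilities for the unlabeled query latents.
        logits = -0.5 * ((z_query[:, None, :] - mu[None]) ** 2).sum(axis=-1)
        logits -= logits.max(axis=1, keepdims=True)
        resp = np.exp(logits)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate class means from labeled and soft-labeled points.
        r_all = np.concatenate([onehot, resp], axis=0)
        z_all = np.concatenate([z_support, z_query], axis=0)
        mu = (r_all.T @ z_all) / r_all.sum(axis=0)[:, None]
    return resp  # class posteriors for the query set

# Example usage with random latents for a 5-way, 1-shot episode:
# z_s, y_s = np.random.randn(5, 64), np.arange(5)
# z_q = np.random.randn(15, 64)
# y_pred = semi_supervised_em(z_s, y_s, z_q, n_way=5).argmax(axis=1)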
Appears in Collection: RIMS Conference Papers
Files in This Item: There are no files associated with this item.
