Diffusion models for few-shot image generation

dc.contributor.advisor: Yoon, Kuk-Jin
dc.contributor.advisor: 윤국진
dc.contributor.author: Lee, Jeongmin
dc.date.accessioned: 2023-06-22T19:31:43Z
dc.date.available: 2023-06-22T19:31:43Z
dc.date.issued: 2023
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1032344&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/308275
dc.description: Thesis (Master's) - KAIST (한국과학기술원), Interdisciplinary Program in Robotics, 2023.2, [vi, 50 p.]
dc.description.abstract: Recently, diffusion models have demonstrated excellent performance on a range of image generation tasks, outperforming Generative Adversarial Networks (GANs). However, diffusion models require large-scale datasets for training; when training samples are insufficient, the model cannot generate diverse samples and simply replicates the training data. In this work, I propose a novel diffusion model adaptation method that leverages a pre-trained model to adapt to few-shot datasets of 10 or fewer training samples. To achieve this, I fine-tune diffusion models with the diffusion loss and apply a modified cross-domain distance consistency loss to prevent overfitting. In addition, I propose source model-guided sampling, which preserves the overall structure of the sample generated by the source model. To demonstrate the performance and adaptation capability in few-shot settings, I conducted quantitative and qualitative experiments across various data domains. Furthermore, I extend the proposed framework to various image translation tasks by using the pre-trained source model jointly in the sampling process. The experimental results show that the proposed method is effective at adapting diffusion models to few-shot datasets and can be applied to various image translation tasks.
dc.language: eng
dc.publisher: 한국과학기술원 (KAIST)
dc.subject: Computer vision; Deep learning; Generative models; Few-shot learning; Diffusion models
dc.subject: 컴퓨터비전; 딥러닝; 생성모델; 퓨샷러닝; 확산 모델 (Korean keywords for the terms above)
dc.title: Diffusion models for few-shot image generation
dc.title.alternative: 퓨샷 영상 생성을 위한 확산 모델 기법 (Diffusion model techniques for few-shot image generation)
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: 한국과학기술원 (KAIST), Interdisciplinary Program in Robotics
dc.contributor.alternativeauthor: 이정민 (Lee, Jeongmin)
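The "modified cross-domain distance consistency loss" named in the abstract is not spelled out in this record. As a rough illustration only, the NumPy sketch below shows the general form such a loss commonly takes: the pairwise similarity structure of a feature batch from the adapted model is pushed (via a KL term) toward that of the frozen source model, so relative distances among samples are preserved during few-shot fine-tuning. All function names, shapes, and the cosine/softmax choices here are my own assumptions, not the thesis's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pairwise_cosine(feats):
    # feats: (N, D) feature batch -> (N, N-1) cosine similarities,
    # each row holding a sample's similarity to every other sample.
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T
    n = sim.shape[0]
    off_diag = ~np.eye(n, dtype=bool)  # drop self-similarity
    return sim[off_diag].reshape(n, n - 1)

def distance_consistency_loss(src_feats, tgt_feats):
    # KL( softmax(source similarities) || softmax(target similarities) ),
    # averaged over the batch: encourages the adapted (target) model to
    # keep the relative sample-to-sample structure of the source model.
    p = softmax(pairwise_cosine(src_feats))
    q = softmax(pairwise_cosine(tgt_feats))
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=1)))
```

When the two feature batches are identical the loss is zero, and it grows as the adapted model's pairwise structure drifts from the source's; in practice this term would be added to the ordinary diffusion (denoising) loss during fine-tuning.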
Appears in Collection: RE-Theses_Master (석사논문, Master's theses)

Files in This Item: There are no files associated with this item.
