Deep Neural Network Compression for Image Inpainting

DC Field: Value (Language)
dc.contributor.author: Kim, Soyeong (ko)
dc.contributor.author: Kim, Do-Yeon (ko)
dc.contributor.author: Moon, Jaekyun (ko)
dc.date.accessioned: 2022-11-29T08:01:45Z
dc.date.available: 2022-11-29T08:01:45Z
dc.date.created: 2022-11-28
dc.date.issued: 2022-10-27
dc.identifier.citation: International Workshop on Computational Aspects of Deep Learning, CADL2022
dc.identifier.uri: http://hdl.handle.net/10203/301261
dc.description.abstract: Image inpainting techniques have recently been developed leveraging deep neural networks and have seen many real-world applications. However, image inpainting networks, which are typically based on generative adversarial networks (GANs), suffer from high parameter complexity and long inference times. While there have been some efforts to compress image-to-image translation GANs, compressing image inpainting networks has rarely been explored. In this paper, we aim to build a small, efficient GAN-based inpainting model by compressing the generator of the inpainting model without sacrificing the quality of reconstructed images. We propose novel channel pruning and knowledge distillation techniques specialized for image inpainting models that use mask information. Experimental results demonstrate that our compressed inpainting model, at only one-tenth the full model's size, achieves similar performance to the full model.
dc.language: English
dc.publisher: ECCV
dc.title: Deep Neural Network Compression for Image Inpainting
dc.type: Conference
dc.type.rims: CONF
dc.citation.publicationname: International Workshop on Computational Aspects of Deep Learning, CADL2022
dc.identifier.conferencecountry: IS
dc.identifier.conferencelocation: Virtual
dc.contributor.localauthor: Moon, Jaekyun
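The abstract describes compressing the inpainting generator via channel pruning. As a rough illustration of the general idea (not the paper's mask-aware criterion, which is not detailed in this record), a minimal sketch of magnitude-based channel pruning on a convolutional weight tensor might look like this; the function name and the L1-norm importance score are assumptions for illustration only:

```python
import numpy as np

def prune_channels(weight, keep_ratio):
    """Keep the fraction of output channels with the largest L1 norm.

    weight: conv kernel of shape (out_ch, in_ch, kh, kw)
    Returns the pruned kernel and the indices of surviving channels.
    """
    out_ch = weight.shape[0]
    n_keep = max(1, int(round(out_ch * keep_ratio)))
    # Importance score per output channel: L1 norm of its filter weights.
    scores = np.abs(weight).reshape(out_ch, -1).sum(axis=1)
    # Select the n_keep highest-scoring channels, in original order.
    keep = np.sort(np.argsort(scores)[-n_keep:])
    return weight[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))      # toy conv layer: 8 output channels
pruned, kept = prune_channels(w, keep_ratio=0.25)
print(pruned.shape)  # (2, 3, 3, 3): one-quarter of the channels survive
```

In a full pipeline this pruning would be applied layer by layer across the generator, and the abstract indicates the pruned model is then trained with a knowledge distillation loss against the uncompressed teacher.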
Appears in Collection
EE-Conference Papers(학술회의논문)
Files in This Item
There are no files associated with this item.
