DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Soyeong | ko |
dc.contributor.author | Kim, Do-Yeon | ko |
dc.contributor.author | Moon, Jaekyun | ko |
dc.date.accessioned | 2022-11-29T08:01:45Z | - |
dc.date.available | 2022-11-29T08:01:45Z | - |
dc.date.created | 2022-11-28 | - |
dc.date.issued | 2022-10-27 | - |
dc.identifier.citation | International Workshop on Computational Aspects of Deep Learning, CADL2022 | - |
dc.identifier.uri | http://hdl.handle.net/10203/301261 | - |
dc.description.abstract | Image inpainting techniques leveraging deep neural networks have recently been developed and have seen many real-world applications. However, image inpainting networks, which are typically based on generative adversarial networks (GANs), suffer from high parameter complexity and long inference times. While there have been some efforts to compress image-to-image translation GANs, compressing image inpainting networks has rarely been explored. In this paper, we aim to create a small and efficient GAN-based inpainting model by compressing the generator of the inpainting model without sacrificing the quality of the reconstructed images. We propose novel channel pruning and knowledge distillation techniques specialized for image inpainting models with mask information. Experimental results demonstrate that our compressed inpainting model, at only one-tenth of the full model's size, achieves performance similar to that of the full model. | - |
dc.language | English | - |
dc.publisher | ECCV | - |
dc.title | Deep Neural Network Compression for Image Inpainting | - |
dc.type | Conference | - |
dc.type.rims | CONF | - |
dc.citation.publicationname | International Workshop on Computational Aspects of Deep Learning, CADL2022 | - |
dc.identifier.conferencecountry | IS | - |
dc.identifier.conferencelocation | Virtual | - |
dc.contributor.localauthor | Moon, Jaekyun | - |
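The abstract describes knowledge distillation specialized for inpainting models that use mask information. As an illustration only, the sketch below shows one generic way such a mask-aware distillation loss could look: an L1 distance between student and teacher outputs, with the masked (hole) region up-weighted. The function name, the `alpha` weighting, and the L1 form are all assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def masked_distillation_loss(student_out, teacher_out, mask, alpha=2.0):
    """Mask-weighted L1 distillation loss (illustrative sketch).

    Hole pixels (mask == 1) are up-weighted by `alpha`, reflecting the
    idea that the inpainted region is where the compressed student must
    match the full teacher most closely. This is a hypothetical
    construction, not the loss proposed in the paper.
    """
    weight = 1.0 + (alpha - 1.0) * mask        # 1 outside holes, alpha inside
    diff = np.abs(student_out - teacher_out)   # per-pixel L1 distance
    return float((weight * diff).sum() / weight.sum())
```

In this sketch, when student and teacher agree everywhere the loss is zero, and disagreements inside the masked hole are penalized more heavily than those in the visible region.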