DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Soo Ye | ko |
dc.contributor.author | Aberman, Kfir | ko |
dc.contributor.author | Kanazawa, Nori | ko |
dc.contributor.author | Garg, Rahul | ko |
dc.contributor.author | Wadhwa, Neal | ko |
dc.contributor.author | Chang, Huiwen | ko |
dc.contributor.author | Karnad, Nikhil | ko |
dc.contributor.author | Kim, Munchurl | ko |
dc.contributor.author | Liba, Orly | ko |
dc.date.accessioned | 2022-12-03T05:01:07Z | - |
dc.date.available | 2022-12-03T05:01:07Z | - |
dc.date.created | 2022-12-02 | - |
dc.date.issued | 2022-06-20 | - |
dc.identifier.citation | 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2022, pp.476 - 486 | - |
dc.identifier.issn | 2160-7508 | - |
dc.identifier.uri | http://hdl.handle.net/10203/301523 | - |
dc.description.abstract | Although deep learning has enabled a huge leap forward in image inpainting, current methods are often unable to synthesize realistic high-frequency details. In this paper, we propose applying super-resolution to coarsely reconstructed outputs, refining them at high resolution, and then downscaling the output to the original resolution. By introducing high-resolution images to the refinement network, our framework is able to reconstruct finer details that are usually smoothed out due to spectral bias: the tendency of neural networks to reconstruct low frequencies better than high frequencies. To assist in training the refinement network on large upscaled holes, we propose a progressive learning technique in which the size of the missing regions increases as training progresses. Our zoom-in, refine, and zoom-out strategy, combined with high-resolution supervision and progressive learning, constitutes a framework-agnostic approach for enhancing high-frequency details that can be applied to any CNN-based inpainting method. We provide qualitative and quantitative evaluations along with an ablation analysis to show the effectiveness of our approach. This seemingly simple yet powerful approach outperforms existing inpainting methods. | - |
dc.language | English | - |
dc.publisher | The IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR) | - |
dc.title | Zoom-to-Inpaint: Image Inpainting with High Frequency Details | - |
dc.type | Conference | - |
dc.identifier.wosid | 000861612700054 | - |
dc.identifier.scopusid | 2-s2.0-85136220669 | - |
dc.type.rims | CONF | - |
dc.citation.beginningpage | 476 | - |
dc.citation.endingpage | 486 | - |
dc.citation.publicationname | 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2022 | - |
dc.identifier.conferencecountry | US | - |
dc.identifier.conferencelocation | New Orleans, USA | - |
dc.identifier.doi | 10.1109/CVPRW56347.2022.00063 | - |
dc.contributor.localauthor | Kim, Munchurl | - |
dc.contributor.nonIdAuthor | Aberman, Kfir | - |
dc.contributor.nonIdAuthor | Kanazawa, Nori | - |
dc.contributor.nonIdAuthor | Garg, Rahul | - |
dc.contributor.nonIdAuthor | Wadhwa, Neal | - |
dc.contributor.nonIdAuthor | Chang, Huiwen | - |
dc.contributor.nonIdAuthor | Karnad, Nikhil | - |
dc.contributor.nonIdAuthor | Liba, Orly | - |
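The abstract's zoom-in, refine, and zoom-out strategy can be sketched in a few lines. This is a minimal illustration only, not the paper's implementation: `refine` stands in for the paper's CNN refinement network, and simple nearest-neighbor upscaling and box-filter downscaling stand in for the learned operations.

```python
import numpy as np

def upscale(img, factor):
    # "Zoom in": nearest-neighbor upscaling as a stand-in for super-resolution.
    return np.kron(img, np.ones((factor, factor)))

def downscale(img, factor):
    # "Zoom out": box-filter downscaling back to the original resolution.
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def zoom_refine_zoom_out(coarse, refine, factor=2):
    # 1) Zoom in: super-resolve the coarsely reconstructed inpainting output.
    hi = upscale(coarse, factor)
    # 2) Refine at high resolution, where high-frequency detail is recovered.
    hi = refine(hi)
    # 3) Zoom out: downscale the refined result to the original resolution.
    return downscale(hi, factor)

# Toy usage with an identity "refinement network" (hypothetical placeholder).
coarse = np.ones((4, 4))
out = zoom_refine_zoom_out(coarse, refine=lambda x: x, factor=2)
print(out.shape)  # (4, 4)
```

Because the pipeline returns to the original resolution, it can wrap any coarse inpainting backbone, which is what makes the approach framework-agnostic.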
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.