Image-to-image translation via group-wise deep whitening-and-coloring transformation

Cited 31 times in Web of Science; cited 0 times in Scopus
  • Hits: 89
  • Downloads: 0
DC Field: Value [Language]
dc.contributor.author: Cho, Wonwoong [ko]
dc.contributor.author: Choi, Sungha [ko]
dc.contributor.author: Park, David Keetae [ko]
dc.contributor.author: Shin, Inkyu [ko]
dc.contributor.author: Choo, Jaegul [ko]
dc.date.accessioned: 2021-01-12T02:50:18Z
dc.date.available: 2021-01-12T02:50:18Z
dc.date.created: 2020-12-03
dc.date.issued: 2019-06
dc.identifier.citation: 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019, pp.10631 - 10639
dc.identifier.uri: http://hdl.handle.net/10203/279875
dc.description.abstract: Recently, unsupervised exemplar-based image-to-image translation, conditioned on a given exemplar without paired data, has achieved substantial advancements. To transfer information from an exemplar to an input image, existing methods often use a normalization technique, e.g., adaptive instance normalization, that controls the channel-wise statistics of an input activation map at a particular layer, such as the mean and the variance. Meanwhile, style transfer, a task similar to image translation by nature, has demonstrated superior performance by using higher-order statistics, such as the covariance among channels, to represent a style. In detail, it works via whitening (given a zero-mean input feature, transforming its covariance matrix into the identity) followed by coloring (changing the covariance matrix of the whitened feature to that of the style feature). However, applying this approach to image translation is computationally intensive and error-prone due to its expensive time complexity and non-trivial backpropagation. In response, this paper proposes an end-to-end approach tailored for image translation that efficiently approximates this transformation with our novel regularization methods. We further extend our approach to a group-wise form for memory and time efficiency as well as image quality. Extensive qualitative and quantitative experiments demonstrate that our proposed method is fast, both in training and inference, and highly effective in reflecting the style of an exemplar.
dc.language: English
dc.publisher: IEEE Computer Society
dc.title: Image-to-image translation via group-wise deep whitening-and-coloring transformation
dc.type: Conference
dc.identifier.wosid: 000542649304026
dc.identifier.scopusid: 2-s2.0-85078785679
dc.type.rims: CONF
dc.citation.beginningpage: 10631
dc.citation.endingpage: 10639
dc.citation.publicationname: 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019
dc.identifier.conferencecountry: US
dc.identifier.conferencelocation: Long Beach
dc.identifier.doi: 10.1109/CVPR.2019.01089
dc.contributor.localauthor: Choo, Jaegul
dc.contributor.nonIdAuthor: Cho, Wonwoong
dc.contributor.nonIdAuthor: Choi, Sungha
dc.contributor.nonIdAuthor: Park, David Keetae
dc.contributor.nonIdAuthor: Shin, Inkyu
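The abstract describes the classic whitening-and-coloring transform (WCT) that the paper approximates: whitening maps the content feature's covariance to the identity, and coloring imposes the style feature's covariance. A minimal NumPy sketch of that baseline transform is below; the function name, `eps` regularizer, and (channels, positions) layout are illustrative assumptions, not the paper's group-wise method.

```python
import numpy as np

def whitening_coloring(content, style, eps=1e-5):
    """Baseline whitening-and-coloring transform (WCT).

    content, style: (C, N) arrays of C-channel features flattened over
    N spatial positions. eps regularizes the eigendecompositions.
    """
    # Center both feature sets.
    c = content - content.mean(axis=1, keepdims=True)
    s = style - style.mean(axis=1, keepdims=True)

    # Whitening: eigendecompose the content covariance and map it to identity.
    cov_c = c @ c.T / (c.shape[1] - 1) + eps * np.eye(c.shape[0])
    wc, vc = np.linalg.eigh(cov_c)
    whitened = vc @ np.diag(wc ** -0.5) @ vc.T @ c

    # Coloring: impose the style covariance on the whitened features.
    cov_s = s @ s.T / (s.shape[1] - 1) + eps * np.eye(s.shape[0])
    ws, vs = np.linalg.eigh(cov_s)
    colored = vs @ np.diag(ws ** 0.5) @ vs.T @ whitened

    # Re-add the style mean so first- and second-order statistics match.
    return colored + style.mean(axis=1, keepdims=True)
```

The eigendecompositions here are the expensive, backpropagation-unfriendly step the abstract refers to; the paper's contribution is an end-to-end approximation of this transform rather than this closed-form computation.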
Appears in Collection
RIMS Conference Papers
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS
