Generating a Fusion Image: One's Identity and Another's Shape

Cited 28 times in Web of Science; cited 0 times in Scopus.
Generating a novel image by manipulating two input images is an interesting research problem in the study of generative adversarial networks (GANs). We propose a new GAN-based network that generates a fusion image with the identity of input image x and the shape of input image y. Our network can simultaneously train on more than two image datasets in an unsupervised manner. We define an identity loss L_I to capture the identity of image x and a shape loss L_S to capture the shape of y. In addition, we propose a novel training method called Min-Patch training, which focuses the generator on the crucial parts of an image rather than on its entirety. We show qualitative results on the VGG Youtube Pose dataset, the Eye dataset (MPIIGaze and UnityEyes), and the Photo-Sketch-Cartoon dataset.
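The abstract describes two ideas: a combined objective (adversarial term plus identity loss L_I and shape loss L_S) and Min-Patch training, where the generator is penalized only on the most fake-looking patches instead of the whole image. The sketch below illustrates both under assumptions of my own: the patch-selection rule (keeping the k lowest-scoring discriminator patches), the helper names, and the loss weights are hypothetical and not taken from the paper.

```python
import numpy as np

def min_patch_loss(patch_scores, k=4):
    """Hypothetical sketch of Min-Patch training: given a patchwise
    discriminator's realness scores for a generated image, keep only the
    k lowest-scoring (most fake-looking) patches and average them, so the
    generator focuses on the crucial parts rather than the entire image."""
    flat = np.sort(np.asarray(patch_scores, dtype=float).ravel())
    return float(np.mean(flat[:k]))  # mean over the k worst patches

def total_generator_loss(adv, identity, shape, lam_i=1.0, lam_s=1.0):
    """Combined objective suggested by the abstract: adversarial term plus
    identity loss L_I (keep x's identity) and shape loss L_S (take y's
    shape). The weights lam_i and lam_s are illustrative placeholders."""
    return adv + lam_i * identity + lam_s * shape
```

For example, with a 4x4 grid of patch scores, `min_patch_loss(scores, k=2)` averages only the two least-realistic patches, concentrating the gradient signal there.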
Publisher
IEEE Computer Society, the Computer Vision Foundation (CVF)
Issue Date
2018-06-19
Language
English
Citation

CVPR 2018: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1635-1643

DOI
10.1109/CVPR.2018.00176
URI
http://hdl.handle.net/10203/247463
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
