DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Saehun | ko |
dc.contributor.author | Do, Jeonghyeok | ko |
dc.contributor.author | Kim, Munchurl | ko |
dc.date.accessioned | 2021-02-16T01:50:06Z | - |
dc.date.available | 2021-02-16T01:50:06Z | - |
dc.date.created | 2021-02-08 | - |
dc.date.issued | 2021-01 | - |
dc.identifier.citation | IEEE ACCESS, v.9, pp.7930 - 7942 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | http://hdl.handle.net/10203/280742 | - |
dc.description.abstract | Numerous style transfer methods have been developed using unsupervised learning and have achieved impressive results. However, optimal style transfer cannot be conducted in a global fashion for certain style domains, mainly when a single target-style domain contains semantic objects that each have their own distinct and unique styles, e.g., the objects in the anime-style domain. Previous methods fail in such cases because unsupervised learning cannot provide the semantic mappings between the multi-style objects according to their unique styles. Thus, in this paper, we propose a pseudo-supervised learning framework for semantic multi-style transfer (SMST), which consists of (i) a pseudo ground truth (pGT) generation phase and (ii) an SMST learning phase. In the pGT generation phase, multiple semantic objects of the photo images are separately transferred to the target-domain object styles in an object-oriented fashion. The transferred objects are then composed back into an image, which serves as the pGT. In the SMST learning phase, an SMST network (SMSTnet) is trained in a supervised manner on pairs of photo images and their respective pGTs. In this way, our framework can provide the semantic mappings of multi-style objects. Moreover, to embrace the multiple styles of various objects in a single generator, we design the SMSTnet with channel attentions in conjunction with a discriminator dedicated to our pseudo-supervised learning. Our method has been applied to and intensively tested on anime-style transfer learning. The experimental results demonstrate the effectiveness of our method and show its superiority over state-of-the-art methods. | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Pseudo-Supervised Learning for Semantic Multi-Style Transfer | - |
dc.type | Article | - |
dc.identifier.wosid | 000608610500001 | - |
dc.identifier.scopusid | 2-s2.0-85099245969 | - |
dc.type.rims | ART | - |
dc.citation.volume | 9 | - |
dc.citation.beginningpage | 7930 | - |
dc.citation.endingpage | 7942 | - |
dc.citation.publicationname | IEEE ACCESS | - |
dc.identifier.doi | 10.1109/ACCESS.2021.3049637 | - |
dc.contributor.localauthor | Kim, Munchurl | - |
dc.contributor.nonIdAuthor | Kim, Saehun | - |
dc.contributor.nonIdAuthor | Do, Jeonghyeok | - |
dc.description.isOpenAccess | Y | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Style transfer | - |
dc.subject.keywordAuthor | image-to-image translation | - |
dc.subject.keywordAuthor | generative adversarial networks | - |
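The abstract states that SMSTnet uses channel attentions in its generator to embrace multiple object styles, but this record does not specify the attention design. As a rough illustration only, a squeeze-and-excitation style channel attention (all function names, weight shapes, and the reduction ratio below are assumptions, not the paper's actual architecture) could be sketched like this:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """SE-style channel attention sketch (illustrative, not the paper's design).

    feat: (C, H, W) feature map
    w1:   (C // r, C) bottleneck weight (r = reduction ratio)
    w2:   (C, C // r) expansion weight
    Returns the feature map rescaled per channel.
    """
    # Squeeze: global average pool over spatial dims -> (C,)
    s = feat.mean(axis=(1, 2))
    # Excite: bottleneck MLP, ReLU then sigmoid gating -> per-channel weights in (0, 1)
    z = np.maximum(w1 @ s, 0.0)
    a = sigmoid(w2 @ z)
    # Scale each channel by its attention weight
    return feat * a[:, None, None]

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))   # C=8, H=W=4
w1 = rng.standard_normal((2, 8))        # reduction ratio r=4
w2 = rng.standard_normal((8, 2))
out = channel_attention(feat, w1, w2)
```

The gating vector lets the generator amplify or suppress whole feature channels, which is one plausible way a single network could modulate its response per object style.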