HairFIT: Pose-Invariant Hairstyle Transfer via Flow-based Hair Alignment and Semantic-Region-Aware Inpainting

DC Field: Value (Language)
dc.contributor.author: Chung, Chaeyeon (ko)
dc.contributor.author: Kim, Taewoo (ko)
dc.contributor.author: Nam, Heylin (ko)
dc.contributor.author: Choi, Seunghwan (ko)
dc.contributor.author: Gu, Gyojung (ko)
dc.contributor.author: Park, Sunghyun (ko)
dc.contributor.author: Choo, Jaegul (ko)
dc.date.accessioned: 2021-12-14T06:49:48Z
dc.date.available: 2021-12-14T06:49:48Z
dc.date.created: 2021-12-03
dc.date.issued: 2021-11-24
dc.identifier.citation: The 32nd British Machine Vision Conference, BMVC 2021
dc.identifier.uri: http://hdl.handle.net/10203/290596
dc.description.abstract: Hairstyle transfer is the task of modifying a source hairstyle to a target one. Although recent hairstyle transfer models can reflect the delicate features of hairstyles, they still have two major limitations. First, existing methods fail to transfer hairstyles when the source and target images have different poses (e.g., viewing direction or face size), which is prevalent in the real world. Second, previous models generate unrealistic images when a non-trivial portion of the source image is occluded by its original hair: when changing long hair to short hair, the shoulders or background occluded by the long hair must be inpainted. To address these issues, we propose a novel framework for pose-invariant hairstyle transfer, HairFIT. Our model consists of two stages: 1) flow-based hair alignment and 2) hair synthesis. In the hair alignment stage, we leverage a keypoint-based optical flow estimator to align the target hairstyle with the source pose. Then, in the hair synthesis stage, we generate the final hairstyle-transferred image based on a Semantic-region-aware Inpainting Mask (SIM) estimator. Our SIM estimator divides the occluded regions in the source image into distinct semantic regions so that their different features are reflected during inpainting. To demonstrate the effectiveness of our model, we conduct quantitative and qualitative evaluations on the multi-view datasets K-hairstyle and VoxCeleb. The results indicate that HairFIT achieves state-of-the-art performance by successfully transferring hairstyles between images of different poses, which has not been achieved before.
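The two-stage pipeline described in the abstract can be sketched conceptually. This is not the paper's implementation: `align_hair` stands in for the learned keypoint-based optical-flow alignment (here replaced by simple nearest-neighbor flow warping), and `inpaint_by_region` stands in for the SIM estimator's generative inpainting (here replaced by a per-region mean fill). All function names and the per-region fill strategy are illustrative assumptions.

```python
import numpy as np

def align_hair(target_hair, flow):
    """Warp the target hairstyle toward the source pose with a dense flow field.
    Stand-in for HairFIT's keypoint-based optical-flow alignment; here the flow
    is applied with nearest-neighbor sampling. flow[..., 0] is the vertical and
    flow[..., 1] the horizontal displacement per pixel."""
    h, w = target_hair.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return target_hair[src_y, src_x]

def inpaint_by_region(image, occluded_mask, semantic_map):
    """Fill occluded pixels separately per semantic region (background,
    shoulders, ...), mimicking the role of the SIM estimator: each region is
    completed from its own visible statistics instead of one global fill.
    Here "statistics" is just the mean color of the region's visible pixels."""
    out = image.astype(float).copy()
    for label in np.unique(semantic_map):
        region = semantic_map == label
        visible = region & ~occluded_mask
        hidden = region & occluded_mask
        if visible.any() and hidden.any():
            out[hidden] = out[visible].mean(axis=0)  # per-region mean fill
    return out
```

In this toy version, pixels revealed by removing long hair over the background are filled from visible background pixels, while pixels over the shoulders are filled from visible shoulder pixels, illustrating why splitting the occluded area by semantic region avoids a single implausible fill.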
dc.language: English
dc.publisher: British Machine Vision Association
dc.title: HairFIT: Pose-Invariant Hairstyle Transfer via Flow-based Hair Alignment and Semantic-Region-Aware Inpainting
dc.type: Conference
dc.type.rims: CONF
dc.citation.publicationname: The 32nd British Machine Vision Conference, BMVC 2021
dc.identifier.conferencecountry: UK
dc.identifier.conferencelocation: Online
dc.contributor.localauthor: Choo, Jaegul
dc.contributor.nonIdAuthor: Chung, Chaeyeon
dc.contributor.nonIdAuthor: Kim, Taewoo
dc.contributor.nonIdAuthor: Nam, Heylin
dc.contributor.nonIdAuthor: Choi, Seunghwan
dc.contributor.nonIdAuthor: Gu, Gyojung
dc.contributor.nonIdAuthor: Park, Sunghyun
Appears in Collection: RIMS Conference Papers
Files in This Item: There are no files associated with this item.
