Spacetime Expression Cloning for Blendshapes

Cited 41 times in Web of Science; cited 0 times in Scopus
  • Hit : 674
  • Download : 36
DC Field: Value (Language)
dc.contributor.author: Seol, Yeong-Ho (ko)
dc.contributor.author: Lewis, J. P. (ko)
dc.contributor.author: Seo, Jae-Woo (ko)
dc.contributor.author: Choi, Byung-Kuk (ko)
dc.contributor.author: Anjyo, Ken (ko)
dc.contributor.author: Noh, Jun-Yong (ko)
dc.date.accessioned: 2013-03-12T10:23:34Z
dc.date.available: 2013-03-12T10:23:34Z
dc.date.created: 2012-06-28
dc.date.issued: 2012-04
dc.identifier.citation: ACM TRANSACTIONS ON GRAPHICS, v.31, no.2
dc.identifier.issn: 0730-0301
dc.identifier.uri: http://hdl.handle.net/10203/102030
dc.description.abstract: The goal of a practical facial animation retargeting system is to reproduce the character of a source animation on a target face while providing room for additional creative control by the animator. This article presents a novel spacetime facial animation retargeting method for blendshape face models. Our approach starts from the basic principle that the source and target movements should be similar. By interpreting movement as the derivative of position with time, and adding suitable boundary conditions, we formulate the retargeting problem as a Poisson equation. Specified (e.g., neutral) expressions at the beginning and end of the animation as well as any user-specified constraints in the middle of the animation serve as boundary conditions. In addition, a model-specific prior is constructed to represent the plausible expression space of the target face during retargeting. A Bayesian formulation is then employed to produce target animation that is consistent with the source movements while satisfying the prior constraints. Since the preservation of temporal derivatives is the primary goal of the optimization, the retargeted motion preserves the rhythm and character of the source movement and is free of temporal jitter. More importantly, our approach provides spacetime editing for the popular blendshape representation of facial models, exhibiting smooth and controlled propagation of user edits across surrounding frames.
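The abstract's derivative-matching idea can be illustrated with a minimal 1D sketch: matching the target's frame-to-frame differences to the source's in a least-squares sense, with the first and last expressions pinned, yields a discrete Poisson equation. The helper name `retarget_curve` is hypothetical and operates on a single blendshape weight curve; it is a sketch of the formulation described in the abstract, not the authors' implementation, and it omits the paper's Bayesian prior and mid-animation constraints.

```python
import numpy as np

def retarget_curve(source, start, end):
    # Hypothetical sketch: find a target curve whose frame-to-frame
    # movement matches the source's, with the first and last frames
    # pinned to the given boundary expressions. Least-squares matching
    # of temporal derivatives leads to a discrete 1D Poisson equation.
    n = len(source)
    # Second differences of the source form the Poisson right-hand side.
    rhs = source[:-2] - 2.0 * source[1:-1] + source[2:]
    # Tridiagonal 1D Laplacian acting on the interior unknowns t[1..n-2].
    A = (np.diag(-2.0 * np.ones(n - 2))
         + np.diag(np.ones(n - 3), 1)
         + np.diag(np.ones(n - 3), -1))
    b = rhs.copy()
    b[0] -= start    # fold the known boundary values into the system
    b[-1] -= end
    interior = np.linalg.solve(A, b)
    return np.concatenate(([start], interior, [end]))
```

Because only derivatives are constrained, the solution reproduces the source's rhythm while smoothly absorbing the boundary offsets across the whole clip, which is why edits propagate to surrounding frames rather than causing jumps.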
dc.language: English
dc.publisher: ASSOC COMPUTING MACHINERY
dc.subject: MOTION
dc.subject: ANIMATION
dc.subject: FACES
dc.title: Spacetime Expression Cloning for Blendshapes
dc.type: Article
dc.identifier.wosid: 000303437400003
dc.identifier.scopusid: 2-s2.0-84860475653
dc.type.rims: ART
dc.citation.volume: 31
dc.citation.issue: 2
dc.citation.publicationname: ACM TRANSACTIONS ON GRAPHICS
dc.identifier.doi: 10.1145/2159516.2159519
dc.embargo.liftdate: 9999-12-31
dc.embargo.terms: 9999-12-31
dc.contributor.localauthor: Noh, Jun-Yong
dc.contributor.nonIdAuthor: Lewis, J. P.
dc.contributor.nonIdAuthor: Anjyo, Ken
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Algorithms
dc.subject.keywordAuthor: Experimentation
dc.subject.keywordAuthor: Face
dc.subject.keywordAuthor: retargeting
dc.subject.keywordAuthor: movement
dc.subject.keywordAuthor: spacetime
dc.subject.keywordAuthor: editing
dc.subject.keywordPlus: MOTION
dc.subject.keywordPlus: ANIMATION
dc.subject.keywordPlus: FACES
Appears in Collection
GCT-Journal Papers (Journal Papers)
Files in This Item