Diverse Motion Stylization for Multiple Style Domains via Spatial-Temporal Graph-Based Generative Model

DC Field | Value | Language
dc.contributor.author | Park, Soomin | ko
dc.contributor.author | Jang, Deok-Kyeong | ko
dc.contributor.author | Lee, Sung-Hee | ko
dc.date.accessioned | 2021-10-18T08:10:55Z | -
dc.date.available | 2021-10-18T08:10:55Z | -
dc.date.created | 2021-10-18 | -
dc.date.issued | 2021-09 | -
dc.identifier.citation | PROCEEDINGS OF THE ACM ON COMPUTER GRAPHICS AND INTERACTIVE TECHNIQUES, v.4, no.3 | -
dc.identifier.issn | 2577-6193 | -
dc.identifier.uri | http://hdl.handle.net/10203/288242 | -
dc.description.abstract | This paper presents a novel deep learning-based framework for translating a motion into various styles within multiple domains. Our framework is a single set of generative adversarial networks that learns stylistic features from a collection of unpaired motion clips with style labels to support mapping between multiple style domains. We construct a spatio-temporal graph to model a motion sequence and employ spatial-temporal graph convolution networks (ST-GCN) to extract stylistic properties along the spatial and temporal dimensions. Through spatial-temporal modeling, our framework shows improved style translation results between significantly different actions and on long motion sequences containing multiple actions. In addition, we are the first to develop a mapping network for motion stylization that maps random noise to a style, which allows for generating diverse stylization results without using reference motions. Through various experiments, we demonstrate the ability of our method to generate improved results in terms of visual quality, stylistic diversity, and content preservation. | -
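The abstract describes extracting stylistic properties with spatial-temporal graph convolutions over a skeleton sequence. The following minimal NumPy sketch illustrates the general idea of one such layer (a spatial aggregation over joint neighbors followed by a temporal aggregation over frames); the shapes, adjacency, and averaging scheme here are illustrative assumptions, not the authors' ST-GCN implementation.

```python
import numpy as np

# Illustrative sketch (not the paper's code) of one spatial-temporal
# graph convolution step on a skeleton motion sequence.
# x: motion features of shape (T, V, C) -- frames, joints, channels
# a: joint adjacency matrix of shape (V, V), row-normalized
# w: channel-mixing weights of shape (C, C_out)

def st_graph_conv(x, a, w, temporal_kernel=3):
    # Spatial step: each joint aggregates features from its graph neighbors.
    spatial = np.einsum("uv,tvc->tuc", a, x) @ w          # (T, V, C_out)
    # Temporal step: average over a sliding window of frames (same padding).
    pad = temporal_kernel // 2
    padded = np.pad(spatial, ((pad, pad), (0, 0), (0, 0)), mode="edge")
    out = np.stack(
        [padded[t:t + temporal_kernel].mean(axis=0) for t in range(x.shape[0])]
    )
    return out                                            # (T, V, C_out)

# Toy example: 4 frames, 3 joints in a chain, 2 channels.
T, V, C = 4, 3, 2
x = np.ones((T, V, C))
a = np.array([[0.5, 0.5, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 0.5, 0.5]])   # row-normalized chain adjacency
w = np.eye(C)
y = st_graph_conv(x, a, w)
print(y.shape)  # (4, 3, 2)
```

In a full model, stacks of such layers (with learned weights and nonlinearities) would feed the generator and style encoder; here the single step only shows how information mixes across joints and frames.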
dc.language | English | -
dc.publisher | ASSOC COMPUTING MACHINERY | -
dc.title | Diverse Motion Stylization for Multiple Style Domains via Spatial-Temporal Graph-Based Generative Model | -
dc.type | Article | -
dc.identifier.scopusid | 2-s2.0-85116466320 | -
dc.type.rims | ART | -
dc.citation.volume | 4 | -
dc.citation.issue | 3 | -
dc.citation.publicationname | PROCEEDINGS OF THE ACM ON COMPUTER GRAPHICS AND INTERACTIVE TECHNIQUES | -
dc.identifier.doi | 10.1145/3480145 | -
dc.embargo.liftdate | 9999-12-31 | -
dc.embargo.terms | 9999-12-31 | -
dc.contributor.localauthor | Lee, Sung-Hee | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | motion synthesis | -
dc.subject.keywordAuthor | generative model | -
dc.subject.keywordAuthor | graph convolutional networks | -
dc.subject.keywordAuthor | character animation | -
dc.subject.keywordAuthor | deep learning | -
Appears in Collection
GCT-Journal Papers (Journal Papers)