Diverse Motion Stylization for Multiple Style Domains via Spatial-Temporal Graph-Based Generative Model

This paper presents a novel deep learning-based framework for translating a motion into various styles within multiple domains. Our framework is a single set of generative adversarial networks that learns stylistic features from a collection of unpaired motion clips with style labels, supporting mapping between multiple style domains. We model a motion sequence as a spatio-temporal graph and employ spatial-temporal graph convolutional networks (ST-GCN) to extract stylistic properties along both the spatial and temporal dimensions. Through this spatio-temporal modeling, our framework produces improved style translation results between significantly different actions and on long motion sequences containing multiple actions. In addition, we are the first to introduce a mapping network for motion stylization, which maps random noise to a style code and thus allows diverse stylization results to be generated without using reference motions. Through various experiments, we demonstrate that our method generates improved results in terms of visual quality, stylistic diversity, and content preservation.
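As an illustration of the noise-to-style pathway described in the abstract, the following is a minimal PyTorch sketch (not the authors' code) of a mapping network that converts random noise into a style code for a chosen style domain, which is what enables diverse stylization without reference motions. The layer sizes, latent dimension, and number of style domains are assumptions made for this sketch only; the ST-GCN generator and style encoder are omitted.

import torch
import torch.nn as nn

NUM_DOMAINS = 4      # assumed number of style labels (illustrative)
LATENT_DIM = 16      # assumed noise dimension (illustrative)
STYLE_DIM = 64       # assumed style-code dimension (illustrative)

class MappingNetwork(nn.Module):
    """Maps a random noise vector to a per-domain style code."""
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        # one output head per style domain
        self.heads = nn.ModuleList(
            [nn.Linear(256, STYLE_DIM) for _ in range(NUM_DOMAINS)]
        )

    def forward(self, z, domain):
        h = self.shared(z)
        # compute all domain heads, then pick each sample's target-domain code
        out = torch.stack([head(h) for head in self.heads], dim=1)  # (B, D, S)
        idx = domain.view(-1, 1, 1).expand(-1, 1, STYLE_DIM)        # (B, 1, S)
        return out.gather(1, idx).squeeze(1)                        # (B, S)

if __name__ == "__main__":
    # sample diverse style codes without any reference motion
    mapper = MappingNetwork()
    z = torch.randn(8, LATENT_DIM)                 # random noise
    domain = torch.randint(0, NUM_DOMAINS, (8,))   # target style labels
    style = mapper(z, domain)
    print(style.shape)  # torch.Size([8, 64])

In such a design, sampling different noise vectors for the same target domain yields distinct style codes, which a generator can then use to produce varied stylizations of one input motion.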
Publisher
Association for Computing Machinery (ACM)
Issue Date
2021-09
Language
English
Article Type
Article
Citation
Proceedings of the ACM on Computer Graphics and Interactive Techniques, v.4, no.3
ISSN
2577-6193
DOI
10.1145/3480145
URI
http://hdl.handle.net/10203/288242
Appears in Collection
GCT-Journal Papers (Journal Papers)