Spatio-temporal graph-based generative adversarial networks for character motion style transfer

This paper presents a novel deep learning-based framework for the automatic generation of stylistic variations on character motion. Our framework is a single set of generative adversarial networks that learns stylistic features from a collection of unpaired motion clips with style labels while supporting multiple cross-domain mappings. We construct a spatio-temporal graph to model a motion sequence and employ graph convolutional networks (GCNs) to extract stylistic properties along the spatial and temporal dimensions. Through this spatio-temporal modeling, our framework can perform robust style transfer on long, heterogeneous motion sequences and between very different, independent actions. For the motion style translation task, we further use a network that maps random noise to a style code, which allows diverse stylization results to be generated without a reference motion. Through various experiments, we demonstrate the ability of our method to generate improved results in terms of visual quality, stylistic diversity, and content preservation.
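
The abstract outlines two architectural ideas: spatio-temporal graph convolutions that aggregate motion features over the skeleton graph and over time, and a mapping network that turns random noise into a style code so that diverse stylizations can be produced without a reference motion. The PyTorch sketch below only illustrates these two ideas as described in the abstract; it is not the thesis implementation, and the joint count, channel widths, adjacency matrix, and module names (SpatioTemporalGCNBlock, StyleMappingNetwork) are illustrative assumptions.

# Minimal sketch (not the thesis code) of the two components named in the
# abstract: a spatio-temporal graph convolution block and a noise-to-style
# mapping network. Shapes and hyperparameters are assumptions.
import torch
import torch.nn as nn


class SpatioTemporalGCNBlock(nn.Module):
    """Graph convolution over joints followed by temporal convolution over frames."""

    def __init__(self, in_channels, out_channels, adjacency, temporal_kernel=9):
        super().__init__()
        # Normalized skeleton adjacency (num_joints x num_joints), fixed here.
        self.register_buffer("A", adjacency)
        self.spatial = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        pad = (temporal_kernel - 1) // 2
        self.temporal = nn.Conv2d(
            out_channels, out_channels,
            kernel_size=(temporal_kernel, 1), padding=(pad, 0),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (batch, channels, frames, joints)
        x = self.spatial(x)
        # Aggregate features over neighboring joints via the adjacency matrix.
        x = torch.einsum("nctv,vw->nctw", x, self.A)
        return self.relu(self.temporal(x))


class StyleMappingNetwork(nn.Module):
    """Maps a random noise vector to a style code, one head per style domain."""

    def __init__(self, noise_dim=16, style_dim=64, num_styles=4):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(noise_dim, 128), nn.ReLU(), nn.Linear(128, style_dim))
             for _ in range(num_styles)]
        )

    def forward(self, z, style_label):
        # Select the head for the target style label.
        return self.heads[style_label](z)


if __name__ == "__main__":
    num_joints, frames = 21, 120
    A = torch.eye(num_joints)  # placeholder adjacency; a real skeleton graph would be used
    block = SpatioTemporalGCNBlock(3, 32, A)
    motion = torch.randn(2, 3, frames, num_joints)  # (batch, xyz, frames, joints)
    features = block(motion)                        # -> (2, 32, 120, 21)
    style = StyleMappingNetwork()(torch.randn(2, 16), style_label=1)  # -> (2, 64)
    print(features.shape, style.shape)

In a full model, stacks of such blocks would presumably form the encoders of the generator and discriminator, with the style code injected into the generator; those details go beyond what the abstract specifies.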
Advisors
Lee, Sung-Hee (이성희)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2021
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology : Graduate School of Culture Technology, 2021.2, [iv, 23 p.]

Keywords

Generative adversarial networks; Graph convolution networks; Motion style transfer; Character animation; Deep learning-based motion synthesis

URI
http://hdl.handle.net/10203/295120
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=948619&flag=dissertation
Appears in Collection
GCT-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
