GENERATING VIDEOS WITH DYNAMICS-AWARE IMPLICIT GENERATIVE ADVERSARIAL NETWORKS

DC Field | Value | Language
dc.contributor.author | Yu, Sihyun | ko
dc.contributor.author | Tack, Jihoon | ko
dc.contributor.author | Mo, Sangwoo | ko
dc.contributor.author | Kim, Hyunsu | ko
dc.contributor.author | Kim, Junho | ko
dc.contributor.author | Ha, Jung-Woo | ko
dc.contributor.author | Shin, Jinwoo | ko
dc.date.accessioned | 2023-09-14T12:00:40Z | -
dc.date.available | 2023-09-14T12:00:40Z | -
dc.date.created | 2023-09-14 | -
dc.date.issued | 2022-04 | -
dc.identifier.citation | 10th International Conference on Learning Representations, ICLR 2022 | -
dc.identifier.uri | http://hdl.handle.net/10203/312652 | -
dc.description.abstract | In the deep learning era, high-quality long video generation still remains challenging due to the spatio-temporal complexity and continuity of videos. Existing prior works have attempted to model the video distribution by representing videos as 3D grids of RGB values, which impedes the scale of generated videos and neglects continuous dynamics. In this paper, we find that the recently emerging paradigm of implicit neural representations (INRs), which encodes a continuous signal into a parameterized neural network, effectively mitigates this issue. Utilizing INRs of video, we propose the dynamics-aware implicit generative adversarial network (DIGAN), a novel generative adversarial network for video generation. Specifically, we introduce (a) an INR-based video generator that improves the motion dynamics by manipulating the space and time coordinates differently and (b) a motion discriminator that efficiently identifies unnatural motions without observing the entire long frame sequences. We demonstrate the superiority of DIGAN on various datasets, along with multiple intriguing properties, e.g., long video synthesis, video extrapolation, and non-autoregressive video generation. For example, DIGAN improves the previous state-of-the-art FVD score on UCF-101 by 30.7% and can be trained on 128-frame videos of 128×128 resolution, 80 frames longer than the 48 frames of the previous state-of-the-art method. | -
dc.language | English | -
dc.publisher | International Conference on Learning Representations, ICLR | -
dc.title | GENERATING VIDEOS WITH DYNAMICS-AWARE IMPLICIT GENERATIVE ADVERSARIAL NETWORKS | -
dc.type | Conference | -
dc.identifier.scopusid | 2-s2.0-85147341877 | -
dc.type.rims | CONF | -
dc.citation.publicationname | 10th International Conference on Learning Representations, ICLR 2022 | -
dc.identifier.conferencecountry | US | -
dc.identifier.conferencelocation | Virtual | -
dc.contributor.localauthor | Shin, Jinwoo | -
dc.contributor.nonIdAuthor | Kim, Hyunsu | -
dc.contributor.nonIdAuthor | Kim, Junho | -
dc.contributor.nonIdAuthor | Ha, Jung-Woo | -
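
The abstract above describes two architectural ideas: an INR-based generator that treats a video as a continuous function of space-time coordinates while handling time differently from space, and a motion discriminator that judges dynamics from pairs of frames rather than full clips. The following is a minimal PyTorch sketch of those two ideas only; the module names, layer sizes, and frequency scales are illustrative assumptions and are not taken from the official DIGAN implementation.

# Hypothetical sketch of the two ideas summarized in the abstract; all names and
# hyper-parameters are illustrative, not from the authors' released code.
import torch
import torch.nn as nn


class INRVideoGenerator(nn.Module):
    """Maps continuous (x, y, t) coordinates to RGB values.

    Space and time coordinates get Fourier features with different frequency
    scales, one simple way to treat the two kinds of coordinates differently.
    """

    def __init__(self, hidden=256, space_freqs=64.0, time_freqs=4.0):
        super().__init__()
        # Lower frequencies for time bias the output toward smoother motion
        # than the (higher-frequency) spatial content.
        self.register_buffer("w_xy", torch.randn(2, hidden // 2) * space_freqs)
        self.register_buffer("w_t", torch.randn(1, hidden // 2) * time_freqs)
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, xy, t):
        # xy: (N, 2) spatial coords in [0, 1]; t: (N, 1) time coords in [0, 1]
        feat_xy = torch.cat([torch.sin(xy @ self.w_xy), torch.cos(xy @ self.w_xy)], dim=-1)
        feat_t = torch.cat([torch.sin(t @ self.w_t), torch.cos(t @ self.w_t)], dim=-1)
        return self.mlp(torch.cat([feat_xy, feat_t], dim=-1))


class MotionDiscriminator(nn.Module):
    """Scores the motion between two frames given their time gap.

    Because the generator is continuous in time, the discriminator never needs
    an entire long clip; a frame pair plus the interval between the frames is
    enough to penalize unnatural dynamics.
    """

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * 2 + 1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, frame_a, frame_b, dt):
        # frame_a, frame_b: (B, 3, H, W); dt: (B,) normalized time difference
        dt_map = dt.view(-1, 1, 1, 1).expand(-1, 1, *frame_a.shape[-2:])
        return self.net(torch.cat([frame_a, frame_b, dt_map], dim=1))

In this sketch, the reduced frequency scale on the time embedding is a simple inductive bias toward smooth motion, and the frame-pair discriminator receives the normalized time gap as an extra channel so it can penalize implausible amounts of change per unit time.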
Appears in Collection
AI-Conference Papers(학술대회논문)
Files in This Item
There are no files associated with this item.
