Lip to Speech Synthesis with Visual Context Attentional GAN

In this paper, we propose a novel lip-to-speech generative adversarial network, Visual Context Attentional GAN (VCA-GAN), which can jointly model local and global lip movements during speech synthesis. Specifically, the proposed VCA-GAN synthesizes speech from local lip visual features by finding a mapping function from visemes to phonemes, while global visual context is embedded into the intermediate speech representation to refine the coarse speech representation in detail. To achieve this, a visual context attention module is proposed that encodes global representations from the local visual features and provides the generator with the global visual context corresponding to the given coarse speech representation. In addition to the explicit modelling of local and global visual representations, a synchronization technique is introduced through contrastive learning that guides the generator to synthesize speech in sync with the given input lip movements. Extensive experiments demonstrate that the proposed VCA-GAN outperforms existing state-of-the-art methods and can effectively synthesize speech in the multi-speaker setting, which has barely been addressed in previous works.
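As a rough illustration of the two mechanisms described in the abstract, the PyTorch sketch below shows (i) a cross-attention block in which the coarse intermediate speech representation queries global visual context vectors, and (ii) an InfoNCE-style contrastive synchronization loss over time-aligned speech/lip embedding pairs. All module names, dimensions, and hyperparameters are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualContextAttention(nn.Module):
    # Hypothetical cross-attention block: the coarse intermediate speech
    # representation (queries) attends over global visual context vectors
    # (keys/values) and is refined through a residual connection.
    def __init__(self, speech_dim=256, visual_dim=512, attn_dim=128):
        super().__init__()
        self.to_q = nn.Linear(speech_dim, attn_dim)
        self.to_k = nn.Linear(visual_dim, attn_dim)
        self.to_v = nn.Linear(visual_dim, speech_dim)
        self.scale = attn_dim ** -0.5

    def forward(self, coarse_speech, visual_context):
        # coarse_speech:  (B, T_mel, speech_dim) intermediate speech representation
        # visual_context: (B, T_vid, visual_dim) global features from lip frames
        q = self.to_q(coarse_speech)
        k = self.to_k(visual_context)
        v = self.to_v(visual_context)
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        return coarse_speech + attn @ v  # residual refinement of the coarse speech

def sync_contrastive_loss(speech_emb, visual_emb, temperature=0.1):
    # Hypothetical InfoNCE-style synchronization loss: time-aligned
    # (speech, lip) embedding pairs in the batch are positives,
    # all mismatched pairs act as negatives.
    speech_emb = F.normalize(speech_emb, dim=-1)   # (B, D)
    visual_emb = F.normalize(visual_emb, dim=-1)   # (B, D)
    logits = speech_emb @ visual_emb.t() / temperature
    targets = torch.arange(speech_emb.size(0), device=speech_emb.device)
    # symmetric cross-entropy over speech-to-lip and lip-to-speech matching
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

In practice, such a loss would be computed between embeddings of synthesized speech segments and the corresponding lip-frame windows, giving the generator a gradient that encourages audio-visual synchrony.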
Publisher
Neural Information Processing Systems
Issue Date
2021-12-06
Language
English
Citation
Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021)
URI
http://hdl.handle.net/10203/289063
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.