You Said That?: Synthesising Talking Faces from Audio

We describe a method for generating a video of a talking face. The method takes still images of the target face and an audio speech segment as inputs, and generates a video of the target face lip-synched with the audio. The method runs in real time and is applicable to faces and audio not seen at training time. To achieve this we develop an encoder-decoder convolutional neural network (CNN) model that uses a joint embedding of the face and audio to generate synthesised talking-face video frames. The model is trained on unlabelled videos using cross-modal self-supervision. We also propose methods to re-dub videos by visually blending the generated face into the source video frame using a multi-stream CNN model.
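
The abstract describes an encoder-decoder CNN that fuses an audio embedding with an identity embedding of a still face image to synthesise video frames. Purely as an illustration of that joint-embedding idea, here is a minimal PyTorch sketch of such a forward pass. All layer shapes, the 12x35 MFCC audio input, and the class name TalkingFaceSketch are illustrative assumptions, not the authors' published architecture, which also includes the cross-modal self-supervised training and the re-dubbing blending stage not shown here.

```python
import torch
import torch.nn as nn

class TalkingFaceSketch(nn.Module):
    """Minimal sketch of a joint audio-identity encoder-decoder.

    Layer sizes and inputs are illustrative assumptions; the published
    model differs in depth, skip connections, and training losses.
    """
    def __init__(self, embed_dim=256):
        super().__init__()
        # Audio encoder: maps a short MFCC segment (1 x 12 x 35) to an embedding.
        self.audio_enc = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, embed_dim),
        )
        # Identity encoder: maps a still face image (3 x 112 x 112) to an embedding.
        self.face_enc = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, embed_dim),
        )
        # Decoder: upsamples the concatenated joint embedding back to a frame.
        self.decoder = nn.Sequential(
            nn.Linear(2 * embed_dim, 128 * 7 * 7), nn.ReLU(),
            nn.Unflatten(1, (128, 7, 7)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, audio, face):
        # Joint embedding: concatenate the audio and identity codes,
        # then decode one synthesised frame per audio window.
        z = torch.cat([self.audio_enc(audio), self.face_enc(face)], dim=1)
        return self.decoder(z)

# Usage: a batch of MFCC windows plus the still identity image yields
# one synthesised frame per window; a video is the frames in sequence.
model = TalkingFaceSketch()
mfcc = torch.randn(4, 1, 12, 35)    # assumed MFCC windows of the speech audio
still = torch.rand(4, 3, 112, 112)  # still image of the target face
frames = model(mfcc, still)         # -> (4, 3, 112, 112) synthesised frames
```

Generating each frame from a sliding audio window plus a fixed identity image is one simple way to realise "still images + audio in, lip-synched video out"; the actual system's choices of window length, resolution, and losses are given in the article.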
Publisher
SPRINGER
Issue Date
2019-12
Language
English
Article Type
Article
Citation

INTERNATIONAL JOURNAL OF COMPUTER VISION, v.127, no.11-12, pp. 1767-1779

ISSN
0920-5691
DOI
10.1007/s11263-019-01150-y
URI
http://hdl.handle.net/10203/289580
Appears in Collection
EE-Journal Papers (Journal Papers)