Learning to lip read words by watching videos

Cited 35 times in Web of Science; cited 0 times in Scopus
  • Hits: 167
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Chung, Joon Son | ko
dc.contributor.author | Zisserman, Andrew | ko
dc.date.accessioned | 2021-11-27T06:40:34Z | -
dc.date.available | 2021-11-27T06:40:34Z | -
dc.date.created | 2021-11-26 | -
dc.date.issued | 2018-08 | -
dc.identifier.citation | COMPUTER VISION AND IMAGE UNDERSTANDING, v.173, pp.76 - 85 | -
dc.identifier.issn | 1077-3142 | -
dc.identifier.uri | http://hdl.handle.net/10203/289581 | -
dc.description.abstract | Our aim is to recognise the words being spoken by a talking face, given only the video but not the audio. Existing works in this area have focussed on trying to recognise a small number of utterances in controlled environments (e.g. digits and alphabets), partially due to the shortage of suitable datasets. We make three novel contributions: first, we develop a pipeline for fully automated data collection from TV broadcasts. With this we have generated a dataset with over a million word instances, spoken by over a thousand different people; second, we develop a two-stream convolutional neural network that learns a joint embedding between the sound and the mouth motions from unlabelled data. We apply this network to the tasks of audio-to-video synchronisation and active speaker detection; third, we train convolutional and recurrent networks that are able to effectively learn and recognize hundreds of words from this large-scale dataset. In lip reading and in speaker detection, we demonstrate results that exceed the current state-of-the-art on public benchmark datasets. | -
dc.language | English | -
dc.publisher | ACADEMIC PRESS INC ELSEVIER SCIENCE | -
dc.title | Learning to lip read words by watching videos | -
dc.type | Article | -
dc.identifier.wosid | 000454184600009 | -
dc.identifier.scopusid | 2-s2.0-85044661381 | -
dc.type.rims | ART | -
dc.citation.volume | 173 | -
dc.citation.beginningpage | 76 | -
dc.citation.endingpage | 85 | -
dc.citation.publicationname | COMPUTER VISION AND IMAGE UNDERSTANDING | -
dc.identifier.doi | 10.1016/j.cviu.2018.02.001 | -
dc.contributor.localauthor | Chung, Joon Son | -
dc.contributor.nonIdAuthor | Zisserman, Andrew | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Lip reading | -
dc.subject.keywordAuthor | Lip synchronisation | -
dc.subject.keywordAuthor | Active speaker detection | -
dc.subject.keywordAuthor | Large vocabulary | -
dc.subject.keywordAuthor | Dataset | -
dc.subject.keywordPlus | SPEECH | -
dc.subject.keywordPlus | EXTRACTION | -
dc.subject.keywordPlus | FEATURES | -
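The second contribution described in the abstract, a two-stream network that learns a joint embedding between sound and mouth motion from unlabelled video, can be sketched in miniature. The sketch below is an illustrative assumption, not the paper's implementation: the linear projections `W_audio` and `W_video` stand in for the two CNN towers, the feature dimensions are invented, and a standard contrastive loss is used as a plausible objective for synchronised versus shifted audio/video pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dims: the 'audio' and 'mouth' streams project into a
# shared 64-d embedding space (the real model uses two CNN towers, not
# linear maps; these matrices are placeholders for illustration).
W_audio = rng.normal(size=(40, 64))   # audio features  -> embedding
W_video = rng.normal(size=(25, 64))   # mouth features  -> embedding

def embed(x, W):
    """L2-normalised projection into the shared embedding space."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def contrastive_loss(ea, ev, same, margin=1.0):
    """Pull synchronised audio/video pairs together; push shifted pairs apart."""
    d = np.linalg.norm(ea - ev, axis=-1)
    return float(np.where(same, d**2, np.maximum(margin - d, 0.0)**2).mean())

# A batch of 8 clips: the first 4 are genuine (synchronised) pairs,
# the last 4 are temporally shifted negatives.
audio = rng.normal(size=(8, 40))
video = rng.normal(size=(8, 25))
same = np.array([True] * 4 + [False] * 4)

loss = contrastive_loss(embed(audio, W_audio), embed(video, W_video), same)
print(loss)
```

In the setting the abstract describes, negative pairs come for free by shifting the audio track relative to the video, which is what makes the training self-supervised and lets the same embedding serve both audio-to-video synchronisation and active speaker detection.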
Appears in Collection
EE-Journal Papers (저널논문, journal papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS
