Multi-modality Associative Bridging through Memory: Speech Sound Recollected from Face Video

Cited 24 times in Web of Science; cited 0 times in Scopus
DC Field: Value (Language)
dc.contributor.author: Kim, Minsu (ko)
dc.contributor.author: Hong, Joanna (ko)
dc.contributor.author: Park, Se Jin (ko)
dc.contributor.author: Ro, Yong Man (ko)
dc.date.accessioned: 2021-10-15T00:30:30Z
dc.date.available: 2021-10-15T00:30:30Z
dc.date.created: 2021-07-23
dc.date.issued: 2021-10-15
dc.identifier.citation: 18th IEEE/CVF International Conference on Computer Vision (ICCV), pp.296 - 306
dc.identifier.uri: http://hdl.handle.net/10203/288199
dc.description.abstract: In this paper, we introduce a novel audio-visual multi-modal bridging framework that can utilize both audio and visual information, even with uni-modal inputs. We exploit a memory network that stores source (i.e., visual) and target (i.e., audio) modal representations, where source modal representation is what we are given, and target modal representations are what we want to obtain from the memory network. We then construct an associative bridge between source and target memories that considers the interrelationship between the two memories. By learning the interrelationship through the associative bridge, the proposed bridging framework is able to obtain the target modal representations inside the memory network, even with the source modal input only, and it provides rich information for its downstream tasks. We apply the proposed framework to two tasks: lip reading and speech reconstruction from silent video. Through the proposed associative bridge and modality-specific memories, each task knowledge is enriched with the recalled audio context, achieving state-of-the-art performance. We also verify that the associative bridge properly relates the source and target memories.
dc.language: English
dc.publisher: Computer Vision Foundation, IEEE Computer Society
dc.title: Multi-modality Associative Bridging through Memory: Speech Sound Recollected from Face Video
dc.type: Conference
dc.identifier.wosid: 000797698900030
dc.type.rims: CONF
dc.citation.beginningpage: 296
dc.citation.endingpage: 306
dc.citation.publicationname: 18th IEEE/CVF International Conference on Computer Vision (ICCV)
dc.identifier.conferencecountry: CN
dc.identifier.conferencelocation: Virtual
dc.identifier.doi: 10.1109/ICCV48922.2021.00036
dc.contributor.localauthor: Ro, Yong Man
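The recall mechanism described in the abstract (addressing a source-modal memory with a visual feature, then reusing the resulting attention weights across the associative bridge to read the target-modal memory) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the cosine-similarity addressing, slot count, and feature dimensions are assumptions made for the example.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def recall_target(source_feat, src_mem, tgt_mem):
    """Address the source (visual) memory with a query feature, then reuse
    the same attention weights -- the associative bridge -- to read the
    target (audio) memory, yielding an audio-like representation from a
    visual-only input."""
    # cosine-similarity addressing over the source memory slots (assumed metric)
    scores = src_mem @ source_feat / (
        np.linalg.norm(src_mem, axis=1) * np.linalg.norm(source_feat) + 1e-8
    )
    attn = softmax(scores)       # shape: (num_slots,)
    return attn @ tgt_mem        # recalled target-modal representation

# illustrative sizes: 8 memory slots, 16-d visual features, 12-d audio features
rng = np.random.default_rng(0)
src_mem = rng.standard_normal((8, 16))   # source (visual) memory
tgt_mem = rng.standard_normal((8, 12))   # target (audio) memory
visual_feat = rng.standard_normal(16)    # uni-modal (visual) input

audio_repr = recall_target(visual_feat, src_mem, tgt_mem)
print(audio_repr.shape)  # (12,)
```

In training, the paper learns the interrelationship between the two memories so that the address computed from the visual input retrieves a meaningful audio representation; the sketch above only shows the read path at inference, where no audio input is available.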
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
