Modality Shifting Attention Network for Multi-Modal Video Question Answering

DC Field: Value [Language]
dc.contributor.author: Kim, Junyeong [ko]
dc.contributor.author: Ma, Minuk [ko]
dc.contributor.author: Pham, Trung [ko]
dc.contributor.author: Kim, Kyungsu [ko]
dc.contributor.author: Yoo, Chang D. [ko]
dc.date.accessioned: 2022-08-24T06:00:19Z
dc.date.available: 2022-08-24T06:00:19Z
dc.date.created: 2022-06-25
dc.date.issued: 2020-06
dc.identifier.citation: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/10203/298059
dc.description.abstract: This paper considers a network referred to as the Modality Shifting Attention Network (MSAN) for the Multimodal Video Question Answering (MVQA) task. MSAN decomposes the task into two sub-tasks: (1) localization of the temporal moment relevant to the question, and (2) accurate prediction of the answer based on the localized moment. The modality required for temporal localization may differ from that required for answer prediction, and this ability to shift modality is essential for performing the task. To this end, MSAN is based on (1) a moment proposal network (MPN) that attempts to locate the most appropriate temporal moment from each of the modalities, and (2) a heterogeneous reasoning network (HRN) that predicts the answer using an attention mechanism over both modalities. MSAN places importance weights on the two modalities for each sub-task using a component referred to as Modality Importance Modulation (MIM). Experimental results show that MSAN outperforms the previous state of the art, achieving 71.13% test accuracy on the TVQA benchmark dataset. Extensive ablation studies and qualitative analysis are conducted to validate the various components of the network.
dc.language: English
dc.publisher: IEEE
dc.title: Modality Shifting Attention Network for Multi-Modal Video Question Answering
dc.type: Conference
dc.identifier.scopusid: 2-s2.0-85094325087
dc.type.rims: CONF
dc.citation.publicationname: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
dc.identifier.conferencecountry: US
dc.identifier.conferencelocation: Virtual
dc.identifier.doi: 10.1109/cvpr42600.2020.01012
dc.contributor.localauthor: Yoo, Chang D.
dc.contributor.nonIdAuthor: Kim, Kyungsu
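The Modality Importance Modulation (MIM) described in the abstract weights the contribution of the two modalities (video and subtitles) for each sub-task. The paper's actual implementation is not reproduced here; the snippet below is only a hypothetical minimal sketch of the underlying idea, with `alpha` standing in for a learned per-sub-task importance weight and toy score vectors standing in for the attention scores produced by the reasoning network.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def modality_importance_modulation(video_scores, subtitle_scores, alpha):
    """Hypothetical sketch of MIM-style fusion: scale each modality's
    attention scores by an importance weight, then fuse and normalize.
    `alpha` in [0, 1] plays the role of the learned modality weight."""
    v = alpha * np.asarray(video_scores, dtype=float)
    s = (1.0 - alpha) * np.asarray(subtitle_scores, dtype=float)
    # Fuse the modulated scores and normalize over candidate answers.
    return softmax(v + s)

# Toy example: with alpha = 0.8 the video modality dominates the decision.
probs = modality_importance_modulation([2.0, 0.5, 0.1], [0.2, 1.5, 0.3], alpha=0.8)
```

With a high `alpha`, the fused distribution follows the video scores; with a low `alpha`, the subtitle scores dominate, which is the "modality shifting" behavior the network learns per sub-task.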
Appears in Collection
EE-Conference Papers (conference papers)
Files in This Item
There are no files associated with this item.
