Cross-modal alignment and translation for missing modality action recognition

Multimodal data provide complementary information about the same context, leading to performance improvements in video action recognition. In practice, however, not all modalities are available at test time. To address this, we propose the Cross-Modal Alignment and Translation (CMAT) framework for action recognition that is robust to missing modalities. Specifically, our framework first aligns the representations of multiple modalities from the same video sample through contrastive learning, effectively alleviating the bias with respect to the type of missing modality. CMAT then learns to translate the representations of one modality into those of another, allowing the representations of missing modalities to be generated from the remaining modalities at test time. Consequently, CMAT fully utilizes multimodal information obtained through abundant cross-modal interactions. The proposed CMAT achieves state-of-the-art performance in both complete- and missing-modality settings on the NTU RGB+D, NTU RGB+D 120, and Northwestern-UCLA datasets. Extensive ablation studies further demonstrate the effectiveness of our design.
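The abstract's two stages — contrastive alignment of per-modality representations, then translation from one modality's feature space into another's — can be sketched as a small toy example. This is not the thesis implementation: a symmetric InfoNCE loss stands in for the alignment objective, and a least-squares linear map stands in for the learned translation network; all variable names, dimensions, and the choice of RGB/skeleton modalities are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(za, zb, tau=0.1):
    # InfoNCE-style alignment: embeddings of the same video under two
    # modalities are positives (diagonal); other batch samples are negatives.
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / tau                        # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))             # positives on the diagonal

B, D = 4, 8                                         # batch size, feature dim
z_rgb = rng.normal(size=(B, D))                     # RGB-stream features

# Stand-in "skeleton" features: a hidden linear relation plus noise.
z_skel = z_rgb @ rng.normal(size=(D, D)) + 0.01 * rng.normal(size=(B, D))

# Translation stage: fit a linear map RGB -> skeleton by least squares,
# as a stand-in for CMAT's learned translation module.
W, *_ = np.linalg.lstsq(z_rgb, z_skel, rcond=None)

# At test time the skeleton modality is missing: generate its
# representation from the remaining RGB modality instead.
z_skel_hat = z_rgb @ W
print(np.allclose(z_skel_hat, z_skel, atol=1e-6))   # → True
```

Aligned embeddings score a low InfoNCE loss (high diagonal similarity), and the fitted map recovers the missing modality's features from the available one, which is the intuition behind using translation to fill in absent streams.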
Advisors
Kim, Changick (김창익)
Description
Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Thesis (Master's) — Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2023.2, [iii, 24 p.]

Keywords

Action recognition; Multi-modal learning; Missing modality; Contrastive learning; Feature translation

URI
http://hdl.handle.net/10203/309877
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1032888&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
