MARIO: Modality-Aware Attention and Modality-Preserving Decoders for Multimedia Recommendation

Abstract
We address the multimedia recommendation problem, which utilizes items' multimodal features, such as visual and textual modalities, in addition to interaction information. While a number of multimedia recommender systems have been developed for this problem, we point out that none of these methods individually captures the influence of each modality at the interaction level. More importantly, we experimentally observe that the learning procedures of existing works fail to preserve the intrinsic modality-specific properties of items. To address the above limitations, we propose an accurate multimedia recommendation framework, named MARIO, based on modality-aware attention and modality-preserving decoders. MARIO predicts users' preferences by considering the individual influence of each modality on each interaction, while obtaining item embeddings that preserve the intrinsic modality-specific properties. Experiments on four real-life datasets demonstrate that MARIO consistently and significantly outperforms seven competitors in terms of recommendation accuracy, yielding up to 14.61% higher accuracy than the best competitor.
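The abstract describes modality-aware attention that weights each modality's influence per user-item interaction. The following is a minimal sketch of that general idea, not the paper's actual formulation: the function name, the softmax-over-affinities scoring, and the choice of modalities are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - np.max(x))
    return e / e.sum()

def modality_aware_score(user_emb, item_modality_embs):
    """Hedged sketch of interaction-level modality attention.

    For one user-item pair, compute the user's affinity to each
    modality view of the item, turn those affinities into attention
    weights, and return the attention-weighted preference score.
    (Illustrative only; MARIO's exact architecture differs.)
    """
    # affinity of this user to each modality view of this item
    affinities = np.array([user_emb @ m for m in item_modality_embs])
    # per-interaction modality weights: the same item can have
    # different modality weights for different users/interactions
    weights = softmax(affinities)
    score = float(weights @ affinities)
    return score, weights

# toy example: 8-dim embeddings, three views (e.g. visual, textual, CF)
rng = np.random.default_rng(0)
user = rng.normal(size=8)
modalities = [rng.normal(size=8) for _ in range(3)]
score, weights = modality_aware_score(user, modalities)
```

Because the weights are computed from the user embedding, the relative importance of the visual versus textual modality varies across interactions, which is the interaction-level behavior the abstract contrasts with prior item-level approaches.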
Publisher
Association for Computing Machinery
Issue Date
2022-10-18
Language
English
Citation

31st ACM International Conference on Information and Knowledge Management, CIKM 2022, pp.993 - 1002

DOI
10.1145/3511808.3557387
URI
http://hdl.handle.net/10203/301507
Appears in Collection
AI-Conference Papers (conference papers)