Less Can Be More: Sound Source Localization With a Classification Model

Cited 6 times in Web of Science; cited 0 times in Scopus
In this paper, we tackle sound localization as a natural outcome of audio-visual video classification. Unlike existing sound localization approaches, we use no explicit sub-modules or dedicated training mechanisms; instead, we apply simple cross-modal attention on top of representations learned with a classification loss. Our key contribution is to show that a simple audio-visual classification model can localize sound sources accurately and perform on par with state-of-the-art methods, demonstrating that indeed "less is more". Furthermore, we propose potential applications that can be built on our model. First, we introduce informative moment selection, which improves localization learning in existing approaches compared to using only the mid-frame. Second, we introduce a pseudo bounding-box generation procedure that can significantly boost the performance of existing methods in semi-supervised settings, or be used for large-scale automatic annotation of any video dataset with minimal effort.
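The cross-modal attention described in the abstract can be sketched as a similarity map between an audio clip embedding and a spatial grid of visual features. The sketch below is a minimal illustration under assumed shapes and cosine-similarity scoring; the paper's exact configuration (feature dimensions, normalization, backbone) may differ.

```python
import numpy as np

def cross_modal_attention(audio_emb, visual_feat):
    """Cosine-similarity attention between an audio embedding and a
    spatial grid of visual features, yielding a localization heatmap.

    audio_emb:   (D,)      audio clip embedding (assumed shape)
    visual_feat: (D, H, W) visual feature map (assumed shape)
    """
    D, H, W = visual_feat.shape
    v = visual_feat.reshape(D, H * W)                  # flatten spatial grid
    # L2-normalize both modalities so the dot product is a cosine score
    a = audio_emb / (np.linalg.norm(audio_emb) + 1e-8)
    v = v / (np.linalg.norm(v, axis=0, keepdims=True) + 1e-8)
    sim = a @ v                                        # (H*W,) similarities
    return sim.reshape(H, W)                           # localization heatmap

# Toy usage with random features standing in for learned representations.
rng = np.random.default_rng(0)
heat = cross_modal_attention(rng.normal(size=128),
                             rng.normal(size=(128, 7, 7)))
print(heat.shape)
```

In this formulation the heatmap's peak marks the spatial cell whose visual features align most with the sound, which is how a plain classification model can localize without any dedicated localization module.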
Publisher
IEEE COMPUTER SOC
Issue Date
2022-01
Language
English
Citation

22nd IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp.577 - 586

ISSN
2472-6737
DOI
10.1109/WACV51458.2022.00065
URI
http://hdl.handle.net/10203/298277
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
