Robust Multispectral Pedestrian Detection via Uncertainty-Aware Cross-Modal Learning

Abstract
With the development of deep neural networks, multispectral pedestrian detection has received great attention for its ability to exploit the complementary properties of multiple modalities (e.g., color-visible and thermal). Previous works usually rely on network prediction scores when combining complementary modal information. However, deep neural networks are widely known to be overconfident, which limits the performance of such score-based fusion. In this paper, we propose a novel uncertainty-aware cross-modal learning method to alleviate this problem in multispectral pedestrian detection. First, we estimate object region uncertainty, which represents the reliability of the object region features in each modality, and combine the per-modality region features according to this uncertainty. Second, we guide the classifier of the detection framework with soft target labels that reflect the level of object region uncertainty across modalities. To verify the effectiveness of the proposed method, we conduct extensive experiments with various detection frameworks on two public datasets (i.e., the KAIST Multispectral Pedestrian Dataset and CVC-14). © 2021, Springer Nature Switzerland AG.
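The abstract names two mechanisms: uncertainty-weighted fusion of per-modality region features, and soft target labels derived from that uncertainty. The PyTorch sketch below illustrates how such a scheme could look; the linear uncertainty heads, the softmax weighting, and the sigmoid-based smoothing factor are illustrative assumptions, not the paper's actual design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class UncertaintyAwareFusion(nn.Module):
    """Sketch: fuse RGB and thermal region features with weights that
    decrease as each modality's predicted uncertainty increases."""

    def __init__(self, feat_dim: int):
        super().__init__()
        # One scalar uncertainty head per modality (hypothetical design).
        self.unc_rgb = nn.Linear(feat_dim, 1)
        self.unc_thermal = nn.Linear(feat_dim, 1)

    def forward(self, f_rgb, f_thermal):
        # Predicted log-variances per region proposal, shape (N, 1).
        s_rgb = self.unc_rgb(f_rgb)
        s_th = self.unc_thermal(f_thermal)
        # Lower uncertainty -> larger fusion weight (softmax over -s).
        w = torch.softmax(torch.cat([-s_rgb, -s_th], dim=1), dim=1)
        fused = w[:, :1] * f_rgb + w[:, 1:] * f_thermal
        return fused, s_rgb, s_th

def soft_targets(hard_labels, s_rgb, s_th, num_classes=2):
    """Soften one-hot labels in proportion to cross-modal uncertainty,
    a stand-in for the paper's uncertainty-aware label guidance."""
    # Map the mean log-variance to a smoothing factor in (0, 0.5).
    eps = 0.5 * torch.sigmoid((s_rgb + s_th) / 2).squeeze(1)   # (N,)
    one_hot = F.one_hot(hard_labels, num_classes).float()      # (N, C)
    return one_hot * (1 - eps.unsqueeze(1)) + eps.unsqueeze(1) / num_classes

In training, the soft targets would replace the usual one-hot labels in the classification loss (e.g., -(soft * logits.log_softmax(1)).sum(1).mean()), so regions whose features are unreliable in both modalities contribute weaker, less confident supervision.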
Publisher
Springer Science and Business Media Deutschland GmbH
Issue Date
2021-06-22
Language
English
Citation
27th International Conference on MultiMedia Modeling, MMM 2021, pp. 391-402
ISSN
0302-9743
DOI
10.1007/978-3-030-67832-6_32
URI
http://hdl.handle.net/10203/288792
Appears in Collection
EE-Conference Papers
Files in This Item
There are no files associated with this item.
