DC Field | Value | Language |
---|---|---|
dc.contributor.author | Park, Sungjune | ko |
dc.contributor.author | Kim, Jung Uk | ko |
dc.contributor.author | Kim, Yeon Gyun | ko |
dc.contributor.author | Moon, Sang Keun | ko |
dc.contributor.author | Ro, Yong Man | ko |
dc.date.accessioned | 2021-11-04T06:44:55Z | - |
dc.date.available | 2021-11-04T06:44:55Z | - |
dc.date.created | 2021-10-26 | - |
dc.date.issued | 2021-06-22 | - |
dc.identifier.citation | 27th International Conference on MultiMedia Modeling, MMM 2021, pp.391 - 402 | - |
dc.identifier.issn | 0302-9743 | - |
dc.identifier.uri | http://hdl.handle.net/10203/288792 | - |
dc.description.abstract | With the development of deep neural networks, multispectral pedestrian detection has received great attention by exploiting the complementary properties of multiple modalities (e.g., color-visible and thermal modalities). Previous works usually rely on network prediction scores when combining complementary modal information. However, it is widely known that deep neural networks often suffer from overconfidence, which limits performance. In this paper, we propose a novel uncertainty-aware cross-modal learning method to alleviate this problem in multispectral pedestrian detection. First, we extract object region uncertainty, which represents the reliability of the object region features in each modality. Then, we combine the object region features of the modalities according to their object region uncertainty. Second, we guide the classifier of the detection framework with soft target labels so that it is aware of the level of object region uncertainty in the modalities. To verify the effectiveness of the proposed methods, we conduct extensive experiments with various detection frameworks on two public datasets (i.e., the KAIST Multispectral Pedestrian Dataset and CVC-14). © 2021, Springer Nature Switzerland AG. | - |
dc.language | English | - |
dc.publisher | Springer Science and Business Media Deutschland GmbH | - |
dc.title | Robust Multispectral Pedestrian Detection via Uncertainty-Aware Cross-Modal Learning | - |
dc.type | Conference | - |
dc.identifier.scopusid | 2-s2.0-85101746172 | - |
dc.type.rims | CONF | - |
dc.citation.beginningpage | 391 | - |
dc.citation.endingpage | 402 | - |
dc.citation.publicationname | 27th International Conference on MultiMedia Modeling, MMM 2021 | - |
dc.identifier.conferencecountry | CS | - |
dc.identifier.conferencelocation | Prague | - |
dc.identifier.doi | 10.1007/978-3-030-67832-6_32 | - |
dc.contributor.localauthor | Ro, Yong Man | - |
dc.contributor.nonIdAuthor | Kim, Yeon Gyun | - |
dc.contributor.nonIdAuthor | Moon, Sang Keun | - |
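The abstract describes two ideas: fusing per-modality region features weighted by their uncertainty, and softening classifier targets as uncertainty grows. The following is a minimal sketch of those ideas only, not the authors' implementation; the weighting scheme (softmax over negative uncertainty), the `floor` value, and all function names are illustrative assumptions.

```python
import math

def uncertainty_weighted_fusion(features, uncertainties):
    """Fuse per-modality region features, down-weighting unreliable modalities.

    features: one feature vector (list of floats) per modality,
              e.g. [visible_feat, thermal_feat].
    uncertainties: one scalar per modality; higher = less reliable region.
    Weights are a softmax over negative uncertainty (an assumed scheme),
    so more reliable modalities contribute more to the fused feature.
    """
    exps = [math.exp(-u) for u in uncertainties]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(features[0])
    return [sum(w * f[i] for w, f in zip(weights, features)) for i in range(dim)]

def soft_target(hard_label, uncertainty, floor=0.5):
    """Soften a binary pedestrian label toward `floor` as uncertainty grows.

    uncertainty in [0, 1]: 0 keeps the hard label, 1 collapses it to `floor`,
    so the classifier is trained to be less confident on uncertain regions.
    """
    return (1.0 - uncertainty) * hard_label + uncertainty * floor
```

With equal uncertainties the fusion reduces to a plain average of the modal features, and `soft_target(1.0, 0.0)` returns the original hard label unchanged.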