Joint self-supervised learning and adversarial adaptation for monocular depth estimation from thermal image

DC Field / Value / Language
dc.contributor.author: Shin, Ukcheol (ko)
dc.contributor.author: Park, Kwanyong (ko)
dc.contributor.author: Lee, Kyunghyun (ko)
dc.contributor.author: Lee, Byeong-Uk (ko)
dc.contributor.author: Kweon, In So (ko)
dc.date.accessioned: 2023-06-21T06:01:17Z
dc.date.available: 2023-06-21T06:01:17Z
dc.date.created: 2023-06-21
dc.date.issued: 2023-07
dc.identifier.citation: MACHINE VISION AND APPLICATIONS, v.34, no.4
dc.identifier.issn: 0932-8092
dc.identifier.uri: http://hdl.handle.net/10203/307403
dc.description.abstract: Depth estimation from thermal images is one potential solution for achieving reliability and robustness under diverse weather, lighting, and environmental conditions. In addition, a self-supervised training method further boosts scalability to scenarios in which ground-truth labels are usually impossible to collect, such as GPS-denied and LiDAR-denied conditions. However, self-supervision from thermal images alone is usually insufficient to train networks because of thermal image properties such as low contrast and lack of texture. Introducing additional self-supervision sources (e.g., RGB images) also imposes further hardware and software constraints, such as complicated multi-sensor calibration and synchronized data acquisition. Therefore, this manuscript proposes a novel training framework that combines self-supervised learning and adversarial feature adaptation to leverage additional modality information without such constraints. The framework aims to train a network that estimates a monocular depth map from a thermal image in a self-supervised manner. In the training stage, the framework utilizes two self-supervision signals: image reconstruction of unpaired RGB-thermal images and adversarial feature adaptation between unpaired RGB-thermal features. With the proposed method, the trained network achieves state-of-the-art quantitative results and edge-preserving depth estimation compared to previous methods. Our source code is available at www.github.com/ukcheolshin/SelfDepth4Thermal
dc.language: English
dc.publisher: SPRINGER
dc.title: Joint self-supervised learning and adversarial adaptation for monocular depth estimation from thermal image
dc.type: Article
dc.identifier.wosid: 001000235500001
dc.identifier.scopusid: 2-s2.0-85160934788
dc.type.rims: ART
dc.citation.volume: 34
dc.citation.issue: 4
dc.citation.publicationname: MACHINE VISION AND APPLICATIONS
dc.identifier.doi: 10.1007/s00138-023-01404-3
dc.contributor.localauthor: Kweon, In So
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Depth estimation
dc.subject.keywordAuthor: Self-supervised learning
dc.subject.keywordAuthor: Adversarial domain adaptation
dc.subject.keywordAuthor: Thermal image
dc.subject.keywordAuthor: Thermal vision
dc.subject.keywordPlus: VISION
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
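The abstract above describes two training signals used jointly: self-supervised image reconstruction on thermal frames and adversarial adaptation that aligns thermal encoder features with RGB encoder features. The sketch below is a minimal, illustrative PyTorch training step written from that description only; it is not the authors' implementation (see the linked repository for that), and every module, shape, and weight here (TinyEncoder, FeatureDiscriminator, lambda_adv, the precomputed warped_thermal input) is an assumption made for illustration.

```python
# Illustrative sketch (NOT the authors' code) of a joint objective:
# photometric reconstruction on thermal images + adversarial feature adaptation
# between unpaired RGB and thermal features. All names/shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyEncoder(nn.Module):
    """Placeholder convolutional encoder standing in for the depth-network backbone."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class FeatureDiscriminator(nn.Module):
    """Predicts whether a feature map came from the RGB or the thermal encoder."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, f):
        return self.net(f)


def training_step(thermal, rgb, warped_thermal,
                  enc_t, enc_rgb, disc, opt_g, opt_d, lambda_adv=0.01):
    """One illustrative joint update.

    thermal        : current thermal frame, (B, 1, H, W)
    warped_thermal : adjacent thermal frame warped into the current view
                     (in the real framework this comes from the depth/pose
                     networks via view synthesis; passed in here for brevity)
    rgb            : unpaired RGB frame, (B, 3, H, W)
    opt_g / opt_d  : optimizers over the thermal branch / the discriminator
    """
    # --- Discriminator update: RGB features are "real", thermal features are "fake".
    with torch.no_grad():
        feat_rgb, feat_t = enc_rgb(rgb), enc_t(thermal)
    opt_d.zero_grad()
    pred_rgb, pred_t = disc(feat_rgb), disc(feat_t)
    d_loss = (F.binary_cross_entropy_with_logits(pred_rgb, torch.ones_like(pred_rgb))
              + F.binary_cross_entropy_with_logits(pred_t, torch.zeros_like(pred_t)))
    d_loss.backward()
    opt_d.step()

    # --- Thermal-branch update.
    opt_g.zero_grad()
    # Self-supervision 1: photometric reconstruction between the warped and the
    # observed thermal image (plain L1 here; richer photometric terms are typical).
    recon_loss = F.l1_loss(warped_thermal, thermal)
    # Self-supervision 2: adversarial feature adaptation, pushing thermal features
    # to be indistinguishable from RGB features for the discriminator.
    pred_fake = disc(enc_t(thermal))
    adv_loss = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
    (recon_loss + lambda_adv * adv_loss).backward()
    opt_g.step()
    return recon_loss.item(), adv_loss.item(), d_loss.item()
```

In the actual framework the warped thermal image is produced by the depth and pose networks so that the reconstruction loss back-propagates into them; it is treated as an input above only to keep the sketch short.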
