Self-Supervised Depth and Ego-Motion Estimation for Monocular Thermal Video Using Multi-Spectral Consistency Loss

A thermal camera can robustly capture thermal radiation images under harsh lighting conditions such as night scenes, tunnels, and disaster scenarios. Despite this advantage, however, neither depth nor ego-motion estimation for thermal cameras has been actively explored so far. In this paper, we propose a self-supervised learning method for depth and ego-motion estimation from thermal images. The proposed method exploits a multi-spectral consistency loss that consists of temperature and photometric consistency terms. The temperature consistency loss provides a fundamental self-supervisory signal by reconstructing clipped and colorized thermal images. Additionally, we design a differentiable forward warping module that transforms the estimated depth map and relative pose from the thermal camera's coordinate system to the visible camera's. Based on the proposed module, the photometric consistency loss provides complementary self-supervision to the networks. Networks trained with the proposed method robustly estimate depth and pose from monocular thermal video under low-light and even zero-light conditions. To the best of our knowledge, this is the first work to simultaneously estimate both depth and ego-motion from monocular thermal video in a self-supervised manner.
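The abstract's multi-spectral consistency loss can be sketched as a weighted sum of a temperature consistency term (computed on clipped, normalized thermal frames) and a photometric consistency term (computed on visible frames). The sketch below is illustrative only: the clipping range, loss weights, and function names are assumptions, not values from the paper, and the actual method reconstructs frames via differentiable warping inside a network rather than taking pre-aligned reconstructions as inputs.

```python
import numpy as np

# Hypothetical temperature clipping range (degrees C); the paper's actual
# range is not stated in this abstract.
T_MIN, T_MAX = 10.0, 40.0

def clip_and_normalize(thermal):
    """Clip raw temperature values to a fixed range and rescale to [0, 1],
    approximating the 'clipped' thermal image described in the abstract."""
    clipped = np.clip(thermal, T_MIN, T_MAX)
    return (clipped - T_MIN) / (T_MAX - T_MIN)

def l1_reconstruction_loss(target, reconstructed):
    """Mean absolute error between a frame and its reconstruction."""
    return float(np.mean(np.abs(target - reconstructed)))

def multi_spectral_loss(thermal_t, thermal_recon, rgb_t, rgb_recon,
                        w_temp=1.0, w_photo=0.5):
    """Weighted sum of temperature and photometric consistency terms.
    Weights w_temp and w_photo are placeholder values."""
    l_temp = l1_reconstruction_loss(clip_and_normalize(thermal_t),
                                    clip_and_normalize(thermal_recon))
    l_photo = l1_reconstruction_loss(rgb_t, rgb_recon)
    return w_temp * l_temp + w_photo * l_photo

# Usage: a perfect reconstruction yields zero loss.
thermal = np.random.uniform(T_MIN, T_MAX, (4, 4))
rgb = np.random.uniform(0.0, 1.0, (4, 4, 3))
print(multi_spectral_loss(thermal, thermal, rgb, rgb))  # 0.0
```

In the paper itself the reconstructed frames come from warping adjacent video frames with the predicted depth and relative pose, so the gradient of this loss drives both the depth and ego-motion networks.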
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Issue Date
2022-04
Language
English
Article Type
Article
Citation

IEEE ROBOTICS AND AUTOMATION LETTERS, v.7, no.2, pp.1103 - 1110

ISSN
2377-3766
DOI
10.1109/LRA.2021.3137895
URI
http://hdl.handle.net/10203/291847
Appears in Collection
EE-Journal Papers (Journal Papers)
