DVL-SLAM: Sparse Depth Enhanced Direct Visual-LiDAR SLAM

Cited 42 times in Web of Science · Cited 18 times in Scopus
This paper presents a framework for direct visual-LiDAR SLAM that combines the sparse depth measurements of light detection and ranging (LiDAR) with a monocular camera. Exploiting depth measurements across the two sensor modalities has been reported in the literature, but mostly through keyframe-based approaches or dense depth maps; when the sparsity becomes severe, these existing methods reveal their limitations. The key finding of this paper is that the direct method is more robust under sparse depth with a narrow field of view. Direct exploitation of the sparse depth is achieved by jointly optimizing each measurement over multiple keyframes. To ensure real-time performance, the number of keyframes in the sliding window is kept constant through rigorous marginalization. Loop closure with cross-validation preserves robustness even in large-scale mapping. We evaluated the proposed method extensively using our own portable camera-LiDAR sensor system as well as the KITTI dataset. For the evaluation, performance under varying LiDAR sparsity was simulated by subsampling the laser beams from 64 to 16 and 8. The experiments show that the presented approach significantly outperforms existing methods in terms of accuracy and robustness under sparse depth measurements.
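
To make the two core ideas of the abstract concrete, below is a minimal Python sketch, not the authors' implementation: (1) the direct photometric residual, where a sparse LiDAR point with known depth in a reference keyframe is projected into another keyframe and the intensity difference is the error, and (2) the beam subsampling used in the evaluation to simulate sparser LiDARs (64 → 16 → 8 beams). All names (`photometric_residuals`, `subsample_beams`), the simplified pinhole model, and the 0.1 m depth cutoff are illustrative assumptions.

```python
import numpy as np

def project(K, p_cam):
    """Pinhole projection of a 3-D point in camera coordinates to pixels."""
    u = K @ p_cam
    return u[:2] / u[2]

def in_bounds(u, img, margin=1):
    """Check that a pixel location leaves room for bilinear interpolation."""
    h, w = img.shape
    return margin <= u[0] < w - margin and margin <= u[1] < h - margin

def bilinear(img, u):
    """Bilinearly interpolated intensity at sub-pixel location u = (x, y)."""
    x0, y0 = int(u[0]), int(u[1])
    ax, ay = u[0] - x0, u[1] - y0
    return ((1 - ax) * (1 - ay) * img[y0, x0] + ax * (1 - ay) * img[y0, x0 + 1]
            + (1 - ax) * ay * img[y0 + 1, x0] + ax * ay * img[y0 + 1, x0 + 1])

def photometric_residuals(I_ref, I_tgt, K, T_tgt_ref, points_ref):
    """Photometric error of sparse LiDAR points between two keyframes.

    I_ref, I_tgt : grayscale images (H x W float arrays)
    K            : 3x3 camera intrinsic matrix
    T_tgt_ref    : 4x4 rigid transform from reference to target frame
    points_ref   : N x 3 LiDAR points in the reference camera frame
    """
    residuals = []
    R, t = T_tgt_ref[:3, :3], T_tgt_ref[:3, 3]
    for p in points_ref:
        if p[2] <= 0.1:                      # behind/too close to camera
            continue
        p_tgt = R @ p + t                    # point in the target frame
        if p_tgt[2] <= 0.1:
            continue
        u_ref, u_tgt = project(K, p), project(K, p_tgt)
        if not in_bounds(u_ref, I_ref) or not in_bounds(u_tgt, I_tgt):
            continue
        # Residual: intensity difference at the two projections.
        residuals.append(bilinear(I_tgt, u_tgt) - bilinear(I_ref, u_ref))
    return np.array(residuals)

def subsample_beams(scan, ring_ids, keep=16, total=64):
    """Simulate a sparser LiDAR by keeping every (total // keep)-th ring,
    e.g. reducing a 64-beam scan to 16 or 8 beams as in the evaluation."""
    step = total // keep
    return scan[(ring_ids % step) == 0]
```

In the paper's formulation, residuals of this kind from every depth measurement and keyframe pair in the sliding window would be stacked into one joint nonlinear least-squares problem over the keyframe poses; the sketch omits the robust weighting, optimization, and marginalization steps.
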
Publisher
SPRINGER
Issue Date
2020-01
Language
English
Article Type
Article
Citation

AUTONOMOUS ROBOTS, v.44, no.2, pp. 115-130

ISSN
0929-5593
DOI
10.1007/s10514-019-09881-0
URI
http://hdl.handle.net/10203/273395
Appears in Collection
CE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.