Scanline Resolution-invariant Depth Completion using a Single Image and Sparse LiDAR Point Cloud

Cited 8 times in Web of Science; cited 0 times in Scopus
DC Field | Value | Language
dc.contributor.author | Ryu, Kwonyoung | ko
dc.contributor.author | Lee, Kang-il | ko
dc.contributor.author | Cho, Jegyeong | ko
dc.contributor.author | Yoon, Kuk-Jin | ko
dc.date.accessioned | 2021-08-10T06:50:03Z | -
dc.date.available | 2021-08-10T06:50:03Z | -
dc.date.created | 2021-06-24 | -
dc.date.issued | 2021-10 | -
dc.identifier.citation | IEEE ROBOTICS AND AUTOMATION LETTERS, v.6, no.4, pp.6961 - 6968 | -
dc.identifier.issn | 2377-3766 | -
dc.identifier.uri | http://hdl.handle.net/10203/287112 | -
dc.description.abstract | Most existing deep learning-based depth completion methods are only suitable for high-resolution (e.g., 64-scanline) LiDAR measurements and usually fail to predict a reliable dense depth map from low-resolution (4-, 8-, or 16-scanline) LiDAR. However, reducing the number of LiDAR channels is of great interest in many respects (cost, device weight, power consumption). In this letter, we propose a new depth completion framework for various LiDAR scanline resolutions that performs as well as methods built for 64-scanline LiDAR inputs. To this end, we define a consistency loss between the predictions obtained from LiDAR measurements of different scanline resolutions (i.e., 4-, 8-, 16-, and 32-scanline measurements). We also design a fusion module to integrate features from the different modalities. Experiments show that the proposed method outperforms current state-of-the-art depth completion methods for low-scanline-resolution LiDAR inputs and performs comparably to those methods for 64-scanline LiDAR inputs on the KITTI benchmark dataset. | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | Scanline Resolution-invariant Depth Completion using a Single Image and Sparse LiDAR Point Cloud | -
dc.type | Article | -
dc.identifier.wosid | 000678343900037 | -
dc.identifier.scopusid | 2-s2.0-85110810049 | -
dc.type.rims | ART | -
dc.citation.volume | 6 | -
dc.citation.issue | 4 | -
dc.citation.beginningpage | 6961 | -
dc.citation.endingpage | 6968 | -
dc.citation.publicationname | IEEE ROBOTICS AND AUTOMATION LETTERS | -
dc.identifier.doi | 10.1109/LRA.2021.3096499 | -
dc.contributor.localauthor | Yoon, Kuk-Jin | -
dc.contributor.nonIdAuthor | Ryu, Kwonyoung | -
dc.contributor.nonIdAuthor | Lee, Kang-il | -
dc.contributor.nonIdAuthor | Cho, Jegyeong | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Depth completion | -
dc.subject.keywordAuthor | depth estimation | -
dc.subject.keywordAuthor | sensor fusion | -
dc.subject.keywordAuthor | deep learning for visual perception | -
dc.subject.keywordAuthor | LiDAR | -
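
The dc.description.abstract field above hinges on a consistency loss between depth predictions obtained from the same scene at different LiDAR scanline resolutions. Below is a minimal, hedged PyTorch sketch of that idea only; the model interface model(image, lidar), the row-wise subsampling stand-in, and the name scanline_consistency_loss are illustrative assumptions, not the authors' implementation from the letter.

    import torch
    import torch.nn.functional as F

    def scanline_consistency_loss(model, image, lidar_64, factors=(16, 8, 4, 2)):
        # Consistency between the depth predicted from the full 64-scanline scan
        # and depths predicted from subsampled versions of the same scan
        # (64/16 = 4, 64/8 = 8, 64/4 = 16, 64/2 = 32 scanlines).
        with torch.no_grad():
            ref_depth = model(image, lidar_64)  # reference prediction, not back-propagated through
        loss = torch.zeros((), device=image.device)
        for f in factors:
            mask = torch.zeros_like(lidar_64)
            # Crude stand-in for scanline subsampling: keep every f-th image row of
            # the projected LiDAR depth map. A real pipeline would subsample by
            # laser ring (vertical channel) index before projection.
            mask[..., ::f, :] = 1.0
            pred = model(image, lidar_64 * mask)
            loss = loss + F.l1_loss(pred, ref_depth)
        return loss / len(factors)

Whether the reference prediction is detached, as here, or all resolution branches are optimized jointly is a design choice the letter itself specifies; the sketch only illustrates the cross-resolution supervision pattern described in the abstract.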
Appears in Collection
ME-Journal Papers(저널논문)
Files in This Item
There are no files associated with this item.