DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ryu, Kwonyoung | ko |
dc.contributor.author | Lee, Kang-il | ko |
dc.contributor.author | Cho, Jegyeong | ko |
dc.contributor.author | Yoon, Kuk-Jin | ko |
dc.date.accessioned | 2021-08-10T06:50:03Z | - |
dc.date.available | 2021-08-10T06:50:03Z | - |
dc.date.created | 2021-06-24 | - |
dc.date.issued | 2021-10 | - |
dc.identifier.citation | IEEE ROBOTICS AND AUTOMATION LETTERS, v.6, no.4, pp.6961 - 6968 | - |
dc.identifier.issn | 2377-3766 | - |
dc.identifier.uri | http://hdl.handle.net/10203/287112 | - |
dc.description.abstract | Most existing deep learning-based depth completion methods are designed for high-resolution (e.g., 64-scanline) LiDAR measurements and usually fail to predict a reliable dense depth map from low-resolution (4-, 8-, or 16-scanline) LiDAR. However, reducing the number of LiDAR channels is of great interest in many respects (cost, device weight, and power consumption). In this letter, we propose a new depth completion framework that handles various LiDAR scanline resolutions and performs as well as methods built for 64-scanline LiDAR inputs. To this end, we define a consistency loss between the predictions from LiDAR measurements of different scanline resolutions (i.e., 4-, 8-, 16-, and 32-scanline LiDAR measurements). We also design a fusion module to integrate features from different modalities. Experiments show that our proposed method outperforms the current state-of-the-art depth completion methods for low scanline-resolution LiDAR inputs and performs comparably to those methods for 64-scanline LiDAR inputs on the KITTI benchmark dataset. | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Scanline Resolution-invariant Depth Completion using a Single Image and Sparse LiDAR Point Cloud | - |
dc.type | Article | - |
dc.identifier.wosid | 000678343900037 | - |
dc.identifier.scopusid | 2-s2.0-85110810049 | - |
dc.type.rims | ART | - |
dc.citation.volume | 6 | - |
dc.citation.issue | 4 | - |
dc.citation.beginningpage | 6961 | - |
dc.citation.endingpage | 6968 | - |
dc.citation.publicationname | IEEE ROBOTICS AND AUTOMATION LETTERS | - |
dc.identifier.doi | 10.1109/LRA.2021.3096499 | - |
dc.contributor.localauthor | Yoon, Kuk-Jin | - |
dc.contributor.nonIdAuthor | Ryu, Kwonyoung | - |
dc.contributor.nonIdAuthor | Lee, Kang-il | - |
dc.contributor.nonIdAuthor | Cho, Jegyeong | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Depth completion | - |
dc.subject.keywordAuthor | depth estimation | - |
dc.subject.keywordAuthor | sensor fusion | - |
dc.subject.keywordAuthor | deep learning for visual perception | - |
dc.subject.keywordAuthor | LiDAR | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.