Robust visual localization using points and lines within a visual-LiDAR feature map

Visual localization is the problem of estimating a 6-DoF camera pose from a single image. It is a key component of autonomous robots and AR/MR systems that must interact with their environment based on the current position. GPS is widely used to obtain a global position, e.g., for car navigation in outdoor environments, but its robustness degrades in urban areas surrounded by high-rise buildings, and it is unavailable indoors. To overcome this limitation, visual localization has been attracting attention from academia and industry. Recent studies, however, show limited accuracy in low-textured environments, where only a few feature points can be extracted, or in simply structured places, where the feature points are poorly distributed over the image. This thesis proposes to utilize line features, which provide additional constraints for optimizing a camera pose against a 3-dimensional map. To achieve this, a line descriptor robust to large viewpoint changes must be developed. Hand-crafted line descriptors such as LBD work well only under narrow baselines, and recent CNN-based line descriptors have innate architectural limitations in handling line segments of variable length as inputs. This thesis therefore proposes a robust line descriptor inspired by NLP, which handles textual sentences of various lengths. The proposed network further enhances the line descriptor by sharing each line's geometric attributes with neighboring line segments via a GNN. To build accurate 3D point-line feature maps with their descriptors, we utilize a LiDAR sensor. LiDAR measures depth accurately, but the laser returns are sparse when projected onto the image, so the depth of point and line features on the image is sometimes ambiguous. The second chapter of the thesis therefore presents point-line feature mapping with depth completion.
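The abstract likens a line segment to a variable-length sentence: the segment is tokenized into a fixed number of local "word" tokens that a sequence encoder can consume regardless of the segment's pixel length. The following is a minimal illustration of that tokenization idea only, not the thesis's actual network; the sampling scheme, patch size, and mean-pooling aggregation are stand-in assumptions.

```python
import numpy as np

def line_descriptor(image, p1, p2, num_tokens=8, patch=5):
    """Tokenize a line segment into a fixed-length token sequence and pool it.

    A segment of any pixel length yields `num_tokens` patch vectors,
    like tokenizing a sentence of arbitrary word count. Mean pooling
    here stands in for a learned sequence encoder.
    """
    half = patch // 2
    # Pad so patches at the segment endpoints stay in-bounds.
    padded = np.pad(np.asarray(image, float), half, mode="edge")
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    tokens = []
    for t in np.linspace(0.0, 1.0, num_tokens):
        x, y = np.rint(p1 + t * (p2 - p1)).astype(int)
        win = padded[y:y + patch, x:x + patch]  # patch centered at (x, y)
        tokens.append(win.ravel())
    tokens = np.stack(tokens)       # (num_tokens, patch * patch)
    desc = tokens.mean(axis=0)      # pooling stand-in for the encoder
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc
```

The point is only that the descriptor's dimensionality is independent of the segment's length, which is what a fixed-input CNN architecture cannot offer directly.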
As a reference study, we then introduce a two-stage depth completion method for incorporating LiDAR into the algorithm. The proposed method separates depth completion into two components: a depth prediction module that uses only the image, and a non-deep-learning depth regression module that fuses the LiDAR measurements with the predicted depth image. Compared with recent deep-learning-based methods, this design is more robust to changes in the sparsity and bias of the LiDAR input. The third chapter of the thesis presents an outlier detection method for line correspondences that uses the known vertical direction from an IMU, which ensures the robustness of visual localization. Finally, we validate Line-Loc in indoor and outdoor environments. Moreover, we present PL-Loc, with two strategies for using feature points and lines together, and show that the complementary roles of points and lines enhance localization performance.
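The vertical-direction outlier check can be sketched as follows. This is an illustrative reconstruction, not the thesis's exact formulation: it assumes a pinhole camera, a world-to-camera rotation `R_imu` whose roll and pitch come from the IMU (the world up-vector is unaffected by the unknown yaw), and it tests whether a vertical map line, rotated into the camera frame, lies in the back-projection plane of its matched 2D line.

```python
import numpy as np

def backprojection_normal(l1, l2, fx, fy, cx, cy):
    """Normal of the plane through the camera center and a detected 2D line,
    given endpoints l1, l2 in pixels and pinhole intrinsics."""
    def ray(p):
        return np.array([(p[0] - cx) / fx, (p[1] - cy) / fy, 1.0])
    n = np.cross(ray(l1), ray(l2))
    return n / np.linalg.norm(n)

def is_vertical_match_inlier(n_cam, R_imu, angle_thresh_deg=3.0):
    """Check a 2D-3D match for a vertical map line.

    R_imu: world-to-camera rotation with roll/pitch fixed by the IMU;
    a rotation about the world up-axis (yaw) leaves the up-vector unchanged,
    so up_cam is known even though yaw is not. A correct match requires the
    line direction in the camera frame to be orthogonal to the plane normal.
    """
    up_cam = R_imu @ np.array([0.0, 0.0, 1.0])  # world up in camera frame
    residual = abs(np.dot(n_cam, up_cam))       # 0 for a perfect inlier
    return residual < np.sin(np.radians(angle_thresh_deg))
```

With the camera level (optical axis horizontal, world up mapping to the camera's negative y-axis), a vertical image line passes this test while a horizontal one is rejected, which is the geometric constraint the known gravity direction supplies.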
Advisors
Ryu, Jee-Hwan
Description
Korea Advanced Institute of Science and Technology: Interdisciplinary Program of Robotics
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2022
Identifier
325007
Language
eng
Description

Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology: Interdisciplinary Program of Robotics, 2022.2, [v, 55 p.]

URI
http://hdl.handle.net/10203/307955
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=996447&flag=dissertation
Appears in Collection
RE-Theses_Ph.D. (Doctoral theses)
Files in This Item
There are no files associated with this item.
