Efficient sensor fusion method for enhancing three-dimensional object detection and pose estimation

The performance of object detection from single images has improved significantly with recent progress in artificial intelligence. However, existing technologies focus on detecting objects in 2D images, which makes them difficult to use in real-world robot applications that must not only detect objects but also manipulate or avoid them. For such robot applications, 3D object detection is more suitable than detection in 2D images. Research on 3D detection has been conducted in the academic community, but its performance does not yet reach that of 2D object detection, because the 3D points produced by a 3D scanner are too sparse to capture finely structured and small objects such as bicycles, people, and road signs. In this dissertation, we propose a camera-LiDAR sensor fusion method for enhancing 3D object detection and pose estimation for robotic applications, in two parts.

The first part of this dissertation is a 3D object proposal method that reduces the search region for an object. By proposing regions assumed to contain an object, rather than searching the entire area, we increase time efficiency and improve detection accuracy. We propose a 3D object proposal method that applies an object proposal technique designed for 2D images to 3D data. By exploiting discontinuities in 3D, the proposed method achieves higher recall with fewer proposals than its 2D counterpart.

The second part is a depth completion method that produces dense depth maps from the sparse 3D point measurements of LiDAR data. A major bottleneck in 3D object detection and pose estimation comes from the sparsity of the LiDAR sensor itself. The proposed method propagates the initial sparse depth points into the corresponding image under a geometric consistency constraint, assuming that the displacement from a measured 3D point to its neighbor is perpendicular to the local surface normal. In this step, we additionally propose an accurate surface normal estimation to handle over-smoothing artifacts at depth boundaries. We demonstrate that the estimated dense depth maps benefit robotic applications in real-world environments. However, the computational complexity of this depth completion remains a problem. Finally, we propose a selective depth propagation method to resolve the computational complexity: using our object proposals, we generate selective regions for depth completion and propagate the sparse 3D depth only into those regions. As a result, our unified method reduces the computational time by a factor of 10 compared to depth completion over the whole image.
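As a concrete illustration of the geometric consistency constraint mentioned in the abstract, the following minimal Python sketch predicts the depth at an unobserved pixel from a single neighboring LiDAR measurement and its estimated surface normal. The function name, intrinsic matrix, and pixel values are hypothetical and are not taken from the dissertation; the sketch only shows how local planarity (the displacement from a measured 3D point to its neighbor being perpendicular to that point's normal) turns one measured depth and a normal into a depth prediction along the neighbor's viewing ray.

```python
import numpy as np

# Minimal sketch (not the dissertation's implementation): propagate a known
# sparse depth d_p at pixel p to a neighboring pixel q, assuming the 3D
# displacement from p to q is perpendicular to the surface normal n_p.

def propagate_depth(p_uv, d_p, n_p, q_uv, K_inv):
    """Predict the depth at pixel q from the measured depth at pixel p.

    p_uv, q_uv : (u, v) pixel coordinates
    d_p        : known depth at p (e.g., from LiDAR)
    n_p        : estimated unit surface normal at p
    K_inv      : inverse camera intrinsic matrix (3x3)
    """
    # Back-project p to a 3D point: P = d_p * K^{-1} [u_p, v_p, 1]^T
    P = d_p * (K_inv @ np.array([p_uv[0], p_uv[1], 1.0]))
    # Viewing ray through q: Q = d_q * K^{-1} [u_q, v_q, 1]^T
    r_q = K_inv @ np.array([q_uv[0], q_uv[1], 1.0])
    # Planarity constraint n_p . (Q - P) = 0  =>  d_q = (n_p . P) / (n_p . r_q)
    denom = float(n_p @ r_q)
    if abs(denom) < 1e-6:  # ray nearly parallel to the local plane
        return None
    return float(n_p @ P) / denom


if __name__ == "__main__":
    # Hypothetical intrinsics (fx, fy, cx, cy are illustrative values only).
    K = np.array([[700.0,   0.0, 320.0],
                  [  0.0, 700.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    K_inv = np.linalg.inv(K)
    # Fronto-parallel surface (normal toward the camera): the propagated
    # depth should equal the seed depth.
    d_q = propagate_depth((320, 240), 10.0, np.array([0.0, 0.0, -1.0]),
                          (330, 250), K_inv)
    print(d_q)  # 10.0
```

In a full pipeline along the lines the abstract describes, such per-seed predictions would be aggregated over many measured points, and the propagation would be restricted to the proposed object regions to keep the computation selective.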
Advisors
Kweon, In So (권인소)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2019
Identifier
325007
Language
eng
Description

Doctoral dissertation (Ph.D.) - KAIST: School of Electrical Engineering, 2019.2, [vi, 79 p.]

Keywords

3D object detection; object pose estimation; object proposal; depth upsampling; selective depth propagation

URI
http://hdl.handle.net/10203/265131
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=842208&flag=dissertation
Appears in Collection
EE-Theses_Ph.D. (Doctoral Theses)
Files in This Item
There are no files associated with this item.
