High-quality visual sensing via sensor fusion for intelligent robotic systems

Robot technologies are expected to significantly improve human safety and convenience by substituting for human workers. Many robot systems, such as autonomous vehicles and humanoid robots, have been implemented and presented to the public through the press and high-profile events such as the DARPA challenges. These robots are expected to carry out diverse tasks in the human world, and sensing and recognizing the surrounding environment is one of their fundamental abilities; accordingly, diverse sensors have been developed for "robot sense". Among them, image and depth sensors are widely used because they provide some of the most useful information about the surroundings, and a large amount of research in computer and robot vision builds on these sensors. However, when such algorithms are deployed to recognize surrounding environments, they often perform worse outdoors than their reported results suggest. In this dissertation, to prevent this performance degradation, we propose robust visual sensing methods that use sensor fusion to obtain high-quality visual information for robotic sensing, together with actual implementation cases. First, we present a new method that automatically adjusts camera exposure to capture high-quality images by exploiting the relationship between gradient information and camera exposure. Since most robot vision algorithms rely heavily on low-level image features, we use gradient information to determine a proper exposure level, so that the camera captures important image features robustly across illumination conditions. Additionally, we introduce a new control algorithm for multi-camera systems that achieves both brightness consistency between adjacent cameras and a proper exposure level for each camera.
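The gradient-based exposure adjustment described above can be sketched as follows. This is a minimal illustration, not the dissertation's exact formulation: the log-compressed gradient metric, the noise threshold `delta`, the gamma candidates used to simulate exposure changes, and the proportional update gain `kp` are all illustrative assumptions.

```python
import numpy as np

def gradient_score(img, delta=0.06, lam=1000.0):
    """Simplified gradient-information metric: sum of log-compressed
    gradient magnitudes above a noise threshold delta (img in [0, 1])."""
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    mag = np.hypot(gx, gy)
    norm = np.log(lam * (1.0 - delta) + 1.0)
    m = np.where(mag >= delta, np.log(lam * (mag - delta) + 1.0) / norm, 0.0)
    return float(m.sum())

def best_gamma(img, gammas=(0.5, 0.7, 1.0, 1.4, 2.0)):
    """Simulate exposure changes with gamma curves (gamma < 1 brightens)
    and pick the curve that maximizes gradient information."""
    return max(gammas, key=lambda g: gradient_score(np.clip(img, 0, 1) ** g))

def update_exposure(current_exposure, gamma, kp=0.5):
    """Nudge camera exposure toward the best gamma: gamma < 1 means the
    image should be brighter, so increase the exposure time."""
    return current_exposure * (1.0 + kp * (1.0 - gamma))
```

For an underexposed scene, `best_gamma` selects a brightening curve (gamma < 1) because lifting the dark pixels strengthens edge gradients, and `update_exposure` then raises the exposure accordingly; in a real system this loop would run per frame against the camera driver.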
We implement our system with off-the-shelf machine vision cameras and demonstrate the effectiveness of our algorithms in several practical applications, such as pedestrian detection, visual odometry, surround-view imaging, panoramic imaging, and stereo matching. Second, we present a high-quality depth generation method that iteratively propagates unstructured sparse depth points by fusing sharp edge boundaries from the depth data and the corresponding image. Our depth processing method explicitly handles noisy or unreliable depth observations: it refines the depth map using an image- and depth-guidance scheme and filters out unreliable depth points using a confidence map. The confidence map is converted into a binary mask by a proposed self-learning framework that automatically generates a labeled training dataset. We evaluate our depth generation method quantitatively and qualitatively on several synthetic and real-world datasets. Finally, we present the intelligent robotic systems whose development I participated in, which manage and fuse information obtained from various detection algorithms. One is the KAIST autonomous driving system, named EURECAR, and the other is the KAIST humanoid system, named DRC-HUBO+. These two systems integrate various vision-based detection algorithms through a modular network architecture together with the proposed high-quality visual sensing system. The EURECAR system was evaluated on a challenging real track with a set of traffic signals at the Hyundai Autonomous Vehicle Competition (AVC) 2012 and performed well, and DRC-HUBO+ demonstrated its performance at the DARPA Robotics Challenge (DRC) Finals 2015, where the robot successfully carried out all tasks and we won first place with a full score.
Advisors
Kweon, In So (권인소)
Description
KAIST: Interdisciplinary Program for Future Vehicle
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2017
Identifier
325007
Language
eng
Description

Doctoral thesis (Ph.D.) - KAIST: Interdisciplinary Program for Future Vehicle, 2017.2, [vi, 99 p.]

Keywords

robot vision; image acquisition; depth processing; real-time computer vision; self-learning; automated robotic system; intelligent robotic system

URI
http://hdl.handle.net/10203/241796
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=675696&flag=dissertation
Appears in Collection
PD-Theses_Ph.D.(박사논문)
Files in This Item
There are no files associated with this item.