DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 금동석 | - |
dc.contributor.author | Kim, Youngseok | - |
dc.contributor.author | 김영석 | - |
dc.date.accessioned | 2024-07-26T19:31:00Z | - |
dc.date.available | 2024-07-26T19:31:00Z | - |
dc.date.issued | 2023 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1047408&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/320981 | - |
dc.description | Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST) : Cho Chun Shik Graduate School of Mobility, 2023.8, [viii, 143 p.] | - |
dc.description.abstract | 3D object detection, which identifies the types and locations of objects around a vehicle, is crucial for enhancing driving safety and reducing accident rates. To be effective in a vehicle environment, 3D object detection must measure the 3D position of objects precisely, operate robustly in the presence of abnormal sensor input, and run fast enough for high-speed driving scenarios in which surrounding objects move rapidly. In particular, this technology must operate efficiently with low-cost sensors and limited computing resources. This dissertation investigates multi-sensor fusion methods that employ camera and radar sensors to achieve 3D object detection that is robust and accurate while remaining efficient. First, we introduce a robust fusion method that adaptively fuses information from each sensor by estimating camera and radar weights, enabling the model to operate effectively even when one of the two sensor inputs is partially abnormal. Second, we present a sensor fusion method that accounts for the complementary spatial and contextual characteristics of cameras and radars, leveraging the strengths of each sensor to boost the overall performance of the system. Finally, we propose a method for transforming the information from each sensor into a unified coordinate system; this resolves the mismatch between camera and radar coordinate systems and ensures that sensor-fusion-based 3D object detection can satisfy real-time requirements while achieving high performance. | - |
dc.language | eng | - |
dc.publisher | Korea Advanced Institute of Science and Technology (KAIST) | - |
dc.subject | 자율주행; 딥 러닝; 컴퓨터 비전; 다중 센서 융합; 3차원 객체 검출 | - |
dc.subject | Autonomous driving; Deep learning; Computer vision; Multi-sensor fusion; 3D object detection | - |
dc.title | 3D object detection via multi-sensor fusion for autonomous driving | - |
dc.title.alternative | 자율주행을 위한 다중 센서 융합 기반 3차원 객체 검출 기술 | - |
dc.type | Thesis (Ph.D.) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | Korea Advanced Institute of Science and Technology (KAIST) : Cho Chun Shik Graduate School of Mobility | - |
dc.contributor.alternativeauthor | Kum, Dongsuk | - |
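
The abstract's first contribution — adaptively weighting camera and radar features so that a partially abnormal sensor is down-weighted — can be sketched as follows. This is a minimal illustration only: the function `adaptive_fusion`, its signature, and the softmax gating over two per-sensor confidence scores are assumptions for exposition, not the dissertation's actual architecture.

```python
import numpy as np

def adaptive_fusion(cam_feat, radar_feat, w_cam, w_radar):
    """Fuse per-location camera and radar features (shape (N, C)) using
    softmax-normalized per-sensor confidence scores (shape (N,)), so that
    a sensor with a low confidence score contributes little to the output."""
    # Stack the two confidence scores and normalize them with a softmax,
    # giving weights alpha[0] (camera) and alpha[1] (radar) that sum to 1.
    scores = np.stack([w_cam, w_radar], axis=0)
    scores = scores - scores.max(axis=0, keepdims=True)  # numerical stability
    exp = np.exp(scores)
    alpha = exp / exp.sum(axis=0, keepdims=True)
    # Broadcast the (N,) weights over the feature channels and blend.
    return alpha[0][..., None] * cam_feat + alpha[1][..., None] * radar_feat
```

In a learned model the confidence scores would be predicted by the network from each sensor's input; here they are plain arrays, which makes the degradation behavior easy to check: driving one score strongly negative makes the fused output collapse onto the other sensor's features.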