3D object detection via multi-sensor fusion for autonomous driving

DC Field: Value (Language)
dc.contributor.advisor: 금동석 (Kum, Dongsuk)
dc.contributor.author: Kim, Youngseok
dc.contributor.author: 김영석 (Kim, Youngseok)
dc.date.accessioned: 2024-07-26T19:31:00Z
dc.date.available: 2024-07-26T19:31:00Z
dc.date.issued: 2023
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1047408&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/320981
dc.description: Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST): Cho Chun Shik Graduate School of Mobility, 2023.8, [viii, 143 p.]
dc.description.abstract: 3D object detection, which identifies the types and locations of objects around a vehicle, is crucial for enhancing driving safety and reducing accident rates. To be effective in a vehicle environment, a 3D object detector must measure the 3D positions of objects precisely, operate robustly in the presence of abnormal sensor input, and run fast enough for high-speed driving scenarios in which surrounding objects move rapidly. In particular, it must operate efficiently with low-cost sensors and limited computing resources. In this dissertation, we study multi-sensor fusion methods that employ camera and radar sensors to achieve 3D object detection that is robust and accurate while remaining efficient. First, we introduce a robust fusion method that adaptively fuses the information from each sensor by estimating camera and radar confidence weights, enabling the model to operate effectively even when one of the two sensor inputs is partially abnormal. Second, we present a sensor fusion method that exploits the complementary spatial and contextual characteristics of cameras and radars, leveraging the strengths of each sensor to boost the overall performance of the system. Finally, we propose a method for transforming the information from each sensor into a unified coordinate system. This addresses the problem that cameras and radar sensors use different coordinate systems, and ensures that sensor fusion-based 3D object detection can satisfy real-time requirements while achieving high performance.
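The first contribution described in the abstract, adaptively weighting camera and radar information so that a partially abnormal sensor is down-weighted, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the dissertation's actual architecture: the `adaptive_fusion` function, the feature-map shapes, and the idea that per-cell confidence logits come from small sub-networks (not shown) are all assumptions made for the example.

```python
import numpy as np

def adaptive_fusion(cam_feat, radar_feat, cam_logit, radar_logit):
    """Fuse per-cell camera and radar features with confidence weights.

    Assumed (hypothetical) shapes: cam_feat and radar_feat are (H, W, C)
    feature maps; cam_logit and radar_logit are (H, W) scalar confidence
    logits, imagined to come from small sub-networks per sensor.
    """
    # Softmax over the two sensors at every spatial cell, so the pair of
    # weights sums to 1: a degraded sensor (low logit) contributes
    # proportionally less to the fused feature.
    logits = np.stack([cam_logit, radar_logit], axis=0)   # (2, H, W)
    weights = np.exp(logits - logits.max(axis=0))         # numerically stable
    weights /= weights.sum(axis=0)                        # softmax over sensors
    fused = (weights[0][..., None] * cam_feat
             + weights[1][..., None] * radar_feat)        # (H, W, C)
    return fused, weights
```

With this scheme, a sensor whose confidence collapses (e.g. a blacked-out camera or a blocked radar) is smoothly suppressed rather than corrupting the fused feature map.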
dc.language: eng
dc.publisher: 한국과학기술원 (Korea Advanced Institute of Science and Technology)
dc.subject: 자율주행; 딥 러닝; 컴퓨터 비전; 다중 센서 융합; 3차원 객체 검출
dc.subject: Autonomous driving; Deep learning; Computer vision; Multi-sensor fusion; 3D object detection
dc.title: 3D object detection via multi-sensor fusion for autonomous driving
dc.title.alternative: 자율주행을 위한 다중 센서 융합 기반 3차원 객체 검출 기술
dc.type: Thesis (Ph.D.)
dc.identifier.CNRN: 325007
dc.description.department: 한국과학기술원 (KAIST): 조천식모빌리티대학원 (Cho Chun Shik Graduate School of Mobility)
dc.contributor.alternativeauthor: Kum, Dongsuk
Appears in Collection:
GT-Theses_Ph.D. (박사논문, Ph.D. theses)
Files in This Item:
There are no files associated with this item.
