DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Chang, Dong Eui | - |
dc.contributor.advisor | 장동의 | - |
dc.contributor.author | Wang, Tianqi | - |
dc.date.accessioned | 2021-05-13T19:39:54Z | - |
dc.date.available | 2021-05-13T19:39:54Z | - |
dc.date.issued | 2020 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=925250&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/285086 | - |
dc.description | Master's thesis - KAIST (한국과학기술원) : School of Electrical Engineering, 2020.8, [iii, 26 p.] | - |
dc.description.abstract | Deep learning based approaches have shown impressive performance in robotics applications in recent years. Nevertheless, the demand for large amounts of training data and the lack of explainability and safety guarantees for trained policies still limit their use in safety-critical applications such as autonomous cars and surgical robots. In this thesis, we focus on developing navigation methods for racing drones, which must fly at high speed through complex tracks formed by gates. We use a modularized approach that divides the whole system into perception, planning, and control modules. The focus of this thesis is the perception module, which takes the camera image and drone states as input and generates high-level navigation commands such as the navigation direction and speed. For the planning and control modules, we leverage well-studied state-of-the-art methods for path planning and drone control. This modularization combines the advantages of learning-based methods for a robust perception module with those of model-based methods for precise planning and control. We propose an expert policy that automatically maneuvers the drone through the track based on the accurate positions of the drone and the gates, which are available in the simulator. While the expert policy generates navigation commands only from these accurate positions, informative auxiliary data such as camera images and drone states can be collected at the same time along with the expert policy's decisions. We then use imitation learning to train a customized neural network that maps the camera images and drone states to the high-level commands, including the navigation direction and speed. Moreover, extensive domain randomization during data collection makes the trained policy robust enough to transfer directly to environments unseen during training. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Drone racing; deep learning; imitation learning; computer vision | - |
dc.subject | 레이싱 드론; 심층 학습; 모방 학습; 컴퓨터 비전 | - |
dc.title | Robust navigation for racing drones based on imitation learning | - |
dc.title.alternative | 모방 학습에 기반한 강건한 경주용 드론 내비게이션 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | 한국과학기술원 : 전기및전자공학부 (School of Electrical Engineering) | - |
dc.contributor.alternativeauthor | 왕천기 | - |
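The abstract describes a data-collection and imitation-learning loop: an expert policy with privileged access to accurate drone and gate positions issues navigation commands, auxiliary observations are logged alongside those commands, and a network is trained to reproduce the expert's decisions from the observations alone. The following is a minimal sketch of that behavior-cloning idea under toy assumptions: the `expert_command` function, the 2-D "relative gate position" feature, and the single linear model are stand-ins invented for illustration (the thesis uses camera images, drone states, and a customized neural network).

```python
import random

def expert_command(rel_gate):
    # Toy "privileged" expert: steers proportionally toward the gate
    # center using the accurate relative gate position (x, y).
    x, y = rel_gate
    return 0.5 * x + 0.2 * y

def collect_dataset(n=200, seed=0):
    # Log (observation, expert command) pairs, mimicking the rollout
    # phase where auxiliary data is recorded with the expert's decisions.
    rng = random.Random(seed)
    return [((rng.uniform(-1, 1), rng.uniform(-1, 1)),) for _ in range(n)]

def label_dataset(observations):
    return [(obs, expert_command(obs)) for (obs,) in observations]

def train_linear(data, lr=0.1, epochs=200):
    # Behavior cloning reduced to its simplest form: fit a linear model
    # to the expert's commands by stochastic gradient descent on
    # squared error.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x, y), target in data:
            pred = w[0] * x + w[1] * y + b
            err = pred - target
            w[0] -= lr * err * x
            w[1] -= lr * err * y
            b -= lr * err
    return w, b

data = label_dataset(collect_dataset())
w, b = train_linear(data)
# On this noiseless toy expert, the cloned weights recover the
# expert's gains (0.5, 0.2) almost exactly.
print(round(w[0], 2), round(w[1], 2))
```

In the thesis the same structure is scaled up: the observations are camera images plus drone states, the model is a neural network, and domain randomization during the collection phase is what lets the cloned policy transfer to unseen environments.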