On mapless navigation through deep reinforcement learning

We present a deep reinforcement learning based local planner for a mobile robot that navigates toward goal locations using only a sparse 20-dimensional laser scan and the relative goal position as inputs, producing linear and angular velocity commands as outputs. We train multiple models end-to-end, without expert demonstrations or handcrafted features, using both on-policy and off-policy methods with prioritized experience replay. Traditional local motion planning methods rely on an obstacle cost map that assumes a relatively static environment, whereas our method continues to operate even under significant environmental changes. Through a stacked recurrent intermediate model architecture, our policies scale more efficiently with environment complexity and handle dynamic environments significantly better than prior work. We demonstrate that the learned policies also generalize to novel environments not encountered during training, at no additional training cost.
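The abstract describes a policy that maps a 20-dimensional laser scan plus the relative goal position to linear and angular velocity commands through a stacked recurrent network. The thesis itself is not attached to this record, so the following is only a minimal sketch of that input/output interface, assuming illustrative layer sizes, velocity limits, and an Elman-style recurrent cell; the actual architecture and training (on-policy/off-policy methods with prioritized experience replay) are not reproduced here.

```python
import numpy as np

class StackedRecurrentPolicy:
    """Minimal sketch of a stacked recurrent policy: 20-d laser scan plus a
    2-d relative goal (distance, heading) in, (linear, angular) velocity out.
    Layer sizes, velocity limits, and the Elman cell are assumptions."""

    def __init__(self, obs_dim=22, hidden=64, layers=2,
                 v_max=0.5, w_max=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.v_max, self.w_max = v_max, w_max
        self.W_in, self.W_h, self.b = [], [], []
        in_dim = obs_dim
        for _ in range(layers):
            self.W_in.append(rng.normal(0.0, 0.1, (hidden, in_dim)))
            self.W_h.append(rng.normal(0.0, 0.1, (hidden, hidden)))
            self.b.append(np.zeros(hidden))
            in_dim = hidden
        self.W_out = rng.normal(0.0, 0.1, (2, hidden))
        # Recurrent state persists across calls, letting the policy react
        # to temporal cues such as moving obstacles.
        self.h = [np.zeros(hidden) for _ in range(layers)]

    def act(self, scan, goal):
        x = np.concatenate([scan, goal])  # 22-d observation vector
        for i in range(len(self.h)):      # pass through stacked recurrent layers
            self.h[i] = np.tanh(self.W_in[i] @ x
                                + self.W_h[i] @ self.h[i] + self.b[i])
            x = self.h[i]
        v, w = np.tanh(self.W_out @ x)    # squash raw outputs to [-1, 1]
        return np.array([self.v_max * (v + 1.0) / 2.0,  # forward-only linear vel
                         self.w_max * w])               # signed angular vel
```

A step of the control loop would then look like `action = policy.act(scan, goal)`, with `scan` the latest laser reading and `goal` the goal pose expressed in the robot frame; in training, the weights would be optimized by the reinforcement learning algorithm rather than fixed at random as in this sketch.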
Advisors
Jo, Sung Ho (조성호)
Description
Korea Advanced Institute of Science and Technology (KAIST), School of Computing
Publisher
KAIST (Korea Advanced Institute of Science and Technology)
Issue Date
2018
Identifier
325007
Language
eng
Description

Master's thesis - KAIST, School of Computing, 2018.8, [iv, 24 p.]

Keywords

Deep learning in robotics and automation; autonomous vehicle navigation; motion and path planning; reinforcement learning

URI
http://hdl.handle.net/10203/267093
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=828616&flag=dissertation
Appears in Collection
CS-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
