Reinforcement learning-based powered descent and landing for planetary exploration

This study introduces a reinforcement learning-based powered descent and landing technique for autonomous planetary landing. The main problem is decomposed into landing site selection and landing guidance subproblems to reduce complexity and ensure real-time capability. The landing site selection subproblem applies image-processing techniques to altitude sensor data to determine a safe landing location. The factors relevant to site selection are encoded as individual maps, and an optimization problem that maximizes their weighted sum is formulated and validated through a case study. A reinforcement learning technique is then applied to find an efficient and safe landing trajectory to the selected site. An episode reward, rather than a step reward, is adopted to ensure learning autonomy, and a gradual reward scheme based on curriculum learning mitigates the sparse-reward problem. The proposed technique is compared with a solution obtained by offline optimization to demonstrate its effectiveness for stable autonomous landing, and the model fidelity is enhanced to account for uncertainties in real-world landing situations. The reinforcement learning-based landing framework proposed in this study can provide a real-time autonomous planetary landing capability.
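
The two subproblems summarized above can be illustrated with short sketches. The first is a minimal sketch of the weighted-sum landing-site selection idea, assuming hypothetical factor maps (flatness, safety margin, fuel economy) and weights; the actual factors, normalization, and formulation used in the thesis may differ.

import numpy as np

def select_landing_site(factor_maps, weights):
    """Return the (row, col) cell maximizing the weighted sum of normalized factor maps."""
    score = np.zeros_like(next(iter(factor_maps.values())), dtype=float)
    for name, fmap in factor_maps.items():
        # Normalize each factor to [0, 1] so the weights alone control the trade-off.
        span = fmap.max() - fmap.min()
        normalized = (fmap - fmap.min()) / span if span > 0 else np.zeros_like(fmap, dtype=float)
        score += weights.get(name, 0.0) * normalized
    row, col = np.unravel_index(np.argmax(score), score.shape)
    return int(row), int(col)

# Illustrative usage with random maps standing in for sensor-derived factors.
rng = np.random.default_rng(0)
maps = {
    "flatness": rng.random((64, 64)),       # higher = flatter terrain
    "safety_margin": rng.random((64, 64)),  # higher = farther from hazards
    "fuel_economy": rng.random((64, 64)),   # higher = cheaper to reach
}
print(select_landing_site(maps, {"flatness": 0.5, "safety_margin": 0.3, "fuel_economy": 0.2}))

The second is a sketch of an episode-level reward combined with a curriculum, in which success tolerances tighten as training progresses so that early episodes are not hopelessly sparse. The stage structure, tolerances, and reward values are illustrative assumptions, not the thesis' actual reward design.

def terminal_reward(final_position_error_m, final_speed_mps, stage):
    """Evaluate the reward only at episode termination (episode reward, not step reward)."""
    # Curriculum: success tolerances shrink with the training stage (illustrative values).
    position_tol = [50.0, 20.0, 5.0][min(stage, 2)]   # meters
    speed_tol = [5.0, 2.0, 1.0][min(stage, 2)]        # meters per second
    if final_position_error_m <= position_tol and final_speed_mps <= speed_tol:
        return 10.0                                    # successful landing bonus
    # Graded penalty for misses keeps the terminal signal informative even when sparse.
    return -(final_position_error_m / position_tol + final_speed_mps / speed_tol)
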
Advisors
안재명
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2024
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology, Department of Aerospace Engineering, 2024.2, [v, 70 p.]

Keywords

Planetary landing; Autonomy; Landing site selection; Landing guidance; Reinforcement learning

URI
http://hdl.handle.net/10203/321852
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1097658&flag=dissertation
Appears in Collection
AE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
