Data-Driven Hazard Avoidance Landing of Parafoil: A Deep Reinforcement Learning Approach

DC Field | Value | Language
dc.contributor.author | Park, Junwoo | ko
dc.contributor.author | Bang, Hyochoong | ko
dc.date.accessioned | 2024-09-06T06:00:23Z | -
dc.date.available | 2024-09-06T06:00:23Z | -
dc.date.created | 2023-11-20 | -
dc.date.issued | 2024-01 | -
dc.identifier.citation | JOURNAL OF AEROSPACE INFORMATION SYSTEMS, v.21, no.1, pp. 58-74 | -
dc.identifier.issn | 1940-3151 | -
dc.identifier.uri | http://hdl.handle.net/10203/322791 | -
dc.description.abstract | This paper examines two realizations of autonomous landing hazard avoidance technology for parafoils, a reinforcement-learning-based approach and a rule-based approach, and advocates the former. Comparative advantages and behavioral analogies between the two approaches are also presented. In the data-driven approach, a decision process observing only a series of nadir-pointing images is designed, without explicit augmentation of vehicle dynamics, to preserve the homogeneity of the observation data. An agent then learns the hazard avoidance steering law in an end-to-end fashion. In contrast, the rule-based approach relies on an explicit guidance-control hierarchy, vehicle dynamic states, and metric details of ground obstacles. The soft actor-critic method is applied to learn a policy that maps the down-looking images to parafoil brake commands, whereas the rule-based approach employs a vector field guidance law that treats each hazard as a repulsive source. The paper then presents empirical equivalences and distinctions in the design of the two approaches. Numerical experiments in multiple test cases validate the reinforcement learning method and compare the approaches in terms of their resultant trajectories. Notable behaviors of the resulting data-driven policy are highlighted. | -
dc.language | English | -
dc.publisher | AMER INST AERONAUTICS ASTRONAUTICS | -
dc.title | Data-Driven Hazard Avoidance Landing of Parafoil: A Deep Reinforcement Learning Approach | -
dc.type | Article | -
dc.identifier.wosid | 001092250300001 | -
dc.identifier.scopusid | 2-s2.0-85183166124 | -
dc.type.rims | ART | -
dc.citation.volume | 21 | -
dc.citation.issue | 1 | -
dc.citation.beginningpage | 58 | -
dc.citation.endingpage | 74 | -
dc.citation.publicationname | JOURNAL OF AEROSPACE INFORMATION SYSTEMS | -
dc.identifier.doi | 10.2514/1.I011281 | -
dc.contributor.localauthor | Bang, Hyochoong | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Guidance, Navigation, and Control Systems | -
dc.subject.keywordAuthor | Reinforcement Learning | -
dc.subject.keywordAuthor | Aircraft Parachute | -
dc.subject.keywordAuthor | Autonomous Landing Hazard Avoidance Technology | -
dc.subject.keywordAuthor | Automatic Landing System | -
dc.subject.keywordAuthor | Data Driven Control System | -
dc.subject.keywordAuthor | Vision Based Navigation | -
dc.subject.keywordAuthor | Obstacle-Free Zone | -
dc.subject.keywordAuthor | Aerodynamic Decelerator Systems | -
dc.subject.keywordAuthor | Machine Learning Control | -
dc.subject.keywordPlus | STANDOFF TRACKING | -
dc.subject.keywordPlus | GUIDANCE | -
dc.subject.keywordPlus | STRATEGY | -
dc.subject.keywordPlus | SYSTEM | -
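The abstract describes the rule-based baseline as a vector field guidance law that treats each ground hazard as a repulsive source, while the data-driven approach learns the steering law end-to-end with soft actor-critic from nadir-pointing images. The snippet below is a minimal, hypothetical sketch of the general repulsive vector-field idea only; the function name, gains, influence radius, and inverse-square field shaping are illustrative assumptions, not the paper's formulation, which further couples the guidance law to parafoil brake commands through a guidance-control hierarchy.

```python
import numpy as np

def vector_field_heading(pos, target, hazards,
                         k_att=1.0, k_rep=25.0, influence=30.0):
    """Desired planar heading from an attractive/repulsive vector field.

    pos, target : (2,) arrays, vehicle and landing-target positions [m]
    hazards     : iterable of (2,) arrays, hazard centers [m]
    k_att, k_rep, influence : illustrative gains and influence radius [m]
    """
    pos = np.asarray(pos, dtype=float)

    # Attractive component: unit vector pointing at the landing target.
    to_target = np.asarray(target, dtype=float) - pos
    field = k_att * to_target / (np.linalg.norm(to_target) + 1e-9)

    # Repulsive components: each hazard inside the influence radius pushes
    # the vehicle away with an inverse-square falloff (a common choice;
    # the paper's exact field shaping is not reproduced here).
    for h in hazards:
        away = pos - np.asarray(h, dtype=float)
        d = np.linalg.norm(away) + 1e-9
        if d < influence:
            field += k_rep * away / d**3  # unit direction (away/d) scaled by 1/d**2

    # Command the heading of the combined field.
    return float(np.arctan2(field[1], field[0]))


if __name__ == "__main__":
    # A hazard just off the straight-line path deflects the commanded
    # heading away from it; hazards beyond the influence radius have no effect.
    pos = np.array([0.0, 0.0])
    target = np.array([100.0, 0.0])
    hazards = [np.array([20.0, 3.0])]
    print(np.degrees(vector_field_heading(pos, target, hazards)))
```

Running the example prints a heading deflected slightly away from the hazard. In the paper's learned counterpart, no such hazard geometry is supplied: the soft actor-critic agent must infer avoidance behavior from the down-looking images alone and output brake commands directly.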
Appears in Collection
AE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
