Learning-based Robust Flight Control under Center-of-Pressure Uncertainty

This paper proposes a robust flight control algorithm that attenuates center-of-pressure (CP) uncertainty using a deep reinforcement learning (DRL) algorithm, proximal policy optimization (PPO). To this end, a three-loop autopilot, which has been widely adopted as a flight control structure for various flight vehicles, is first designed with a nonlinear control methodology under the assumption that the unknown CP uncertainty is measurable. A reinforcement learning (RL) agent based on PPO is then introduced to learn to estimate variations of the CP uncertainty. The nonlinear three-loop autopilot and the RL agent are integrated by replacing the CP uncertainty term with the prediction provided by the RL agent. Finally, the RL agent is trained to satisfy a prescribed control performance even in the presence of CP uncertainty. Illustrative examples are provided to verify the feasibility of the proposed approach. The main contribution of this paper lies in formulating the robust flight control problem against CP uncertainty within the DRL framework; in addition, appropriate combinations of reward functions and observations are provided to achieve the prescribed control goal.
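The integration described in the abstract, replacing the unmeasurable CP uncertainty term in the nonlinear three-loop autopilot with a learned estimate, can be sketched as follows. This is a minimal illustrative sketch only: the control law, gains, observation layout, and the placeholder agent are assumptions for exposition, not the authors' actual model or trained PPO policy.

```python
def autopilot_command(accel_err, rate, attitude, cp_offset_est,
                      k_a=0.8, k_r=2.0, k_att=1.5):
    """Three-loop-style command combining acceleration-error, rate, and
    attitude feedback, with a feed-forward term that cancels the
    estimated CP-induced contribution. The gains k_a, k_r, k_att are
    illustrative, not taken from the paper."""
    return k_a * accel_err - k_r * rate - k_att * attitude - cp_offset_est


class ConstantAgent:
    """Stand-in for the trained PPO policy: maps an observation to a
    CP-uncertainty estimate. Here it simply returns a fixed guess; in
    the paper's framework this prediction comes from the RL agent."""
    def __init__(self, guess):
        self.guess = guess

    def predict(self, observation):
        return self.guess


# Hypothetical observation at one control step.
agent = ConstantAgent(guess=0.15)
obs = {"accel_err": 0.2, "rate": 0.05, "attitude": 0.1}

# The agent's prediction replaces the (unknown) CP uncertainty term.
u = autopilot_command(obs["accel_err"], obs["rate"], obs["attitude"],
                      cp_offset_est=agent.predict(obs))
```

During training, the PPO agent would observe flight states like those in `obs` and be rewarded for keeping the closed-loop response within the prescribed performance bound despite the CP variation.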
Publisher
ICROS (Institute of Control, Robotics and Systems)
Issue Date
2021-12-16
Language
English
Citation

The 9th International Conference on Robot Intelligence Technology and Applications, RiTA2021

URI
http://hdl.handle.net/10203/291160
Appears in Collection
AE-Conference Papers
Files in This Item
There are no files associated with this item.
