Autopilot Design for Unmanned Combat Aerial Vehicles (UCAVs) via Learning-based Approach

This paper deals with an autopilot design methodology for UCAVs (Unmanned Combat Aerial Vehicles) using a learning-based approach. In this study, acceleration control is considered for the UCAV autopilot design, and reinforcement learning, specifically the deep deterministic policy gradient (DDPG) method, is used to design the autopilot. To this end, we formulate the autopilot design problem within the reinforcement learning framework. First, we define an actor, an environment, an observation, and a reward. We then design an appropriate observation and reward function for the autopilot design problem. To validate the proposed method, numerical simulations are performed using TensorFlow. The potential importance of the proposed approach is that it can learn the autopilot algorithm from scratch without any model information of the UCAV system. Therefore, the proposed method can significantly reduce the time and effort required to identify model information in the autopilot design process. This ability could also be useful for improving system safety against unexpected failures or damage to UCAVs during operation.
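The abstract's key design step is choosing the observation and reward for the acceleration-tracking task. As a minimal sketch of what such a design might look like (the state variables, function names, and weights below are assumptions for illustration, not the paper's actual choices), one common pattern is to observe the tracking error and its trend plus a fast rotational state, and to reward the negative quadratic cost of error and control effort:

```python
import numpy as np

# Hypothetical observation/reward design for acceleration tracking.
# The chosen signals and weights are illustrative assumptions only.

def make_observation(a_cmd, a_meas, pitch_rate, prev_error):
    """Build the agent's observation vector.

    a_cmd      : commanded normal acceleration [g]
    a_meas     : measured normal acceleration [g]
    pitch_rate : body pitch rate [rad/s]
    prev_error : tracking error at the previous step
                 (lets the agent infer the error trend)
    """
    error = a_cmd - a_meas
    return np.array([error, error - prev_error, pitch_rate], dtype=np.float32)

def reward(a_cmd, a_meas, fin_deflection, w_err=1.0, w_ctrl=0.01):
    """Negative quadratic cost: penalize tracking error and control effort."""
    error = a_cmd - a_meas
    return -(w_err * error**2 + w_ctrl * fin_deflection**2)
```

With this shape of reward, perfect tracking with zero control effort yields the maximum reward of zero, and larger errors are penalized quadratically, which is one standard way to encourage both accuracy and smooth actuation in continuous-control RL.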
IEEE Computational Intelligence Society
Issue Date
2019

2019 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 476-481

Appears in Collection
AE-Conference Papers (Conference Papers)