This paper proposes a robust flight control algorithm that attenuates center-of-pressure (CP) uncertainty using a deep reinforcement learning (DRL) algorithm, proximal policy optimization (PPO). To this end, the three-loop autopilot, a structure widely adopted for flight control of various flight vehicles, is first designed using a nonlinear control methodology under the assumption that the unknown CP uncertainty is measurable. A reinforcement learning (RL) agent based on the PPO algorithm is then introduced to learn to estimate variations of the CP uncertainty. We integrate the nonlinear three-loop autopilot and the RL agent by replacing the CP uncertainty term in the control law with the prediction provided by the RL agent. Finally, the RL agent is trained to satisfy a prescribed control performance even in the presence of the CP uncertainty. Illustrative examples are provided to verify the feasibility of the proposed approach. The main contribution of this paper lies in formulating the robust flight control problem against CP uncertainty in the DRL framework; in addition, appropriate combinations of reward functions and observations are provided to achieve the prescribed control goal.
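To make the integration concrete, the following is a minimal sketch of the idea, not the paper's model: a toy pitch-plane plant whose autopilot cancels the CP term with the agent's prediction, alongside the standard PPO clipped surrogate objective that would drive the agent's updates. The dynamics, gains, observation choice, and reward are all illustrative assumptions.

```python
import numpy as np

def ppo_clipped_objective(ratio, advantage, eps=0.2):
    """Standard PPO clipped surrogate objective (to be maximized).

    ratio: pi_new(a|s) / pi_old(a|s); advantage: estimated advantage.
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped)

class CpEstimationEnv:
    """Toy pitch-plane plant with an unknown CP offset (hypothetical).

    Observation: [tracking error, pitch rate]; action: the agent's
    predicted CP offset. Reward: negative squared tracking error, so
    the agent is pushed toward the CP estimate that lets the autopilot
    meet the tracking command.
    """

    def __init__(self, true_cp_offset=0.05, dt=0.01):
        self.true_cp_offset = true_cp_offset  # unknown to the agent
        self.dt = dt
        self.reset()

    def reset(self):
        self.error, self.rate = 1.0, 0.0
        return np.array([self.error, self.rate])

    def step(self, cp_estimate):
        # The autopilot cancels the CP term with the agent's estimate;
        # only the residual mismatch disturbs the closed-loop dynamics.
        mismatch = self.true_cp_offset - float(cp_estimate)
        accel = -5.0 * self.error - 2.0 * self.rate + 50.0 * mismatch
        self.rate += accel * self.dt   # semi-implicit Euler step
        self.error += self.rate * self.dt
        reward = -self.error ** 2
        return np.array([self.error, self.rate]), reward
```

With a perfect CP estimate the closed loop regulates the tracking error to zero, while an uncompensated offset leaves a steady-state bias; a PPO learner maximizing the cumulative reward therefore converges toward the true offset, which is the mechanism the paper exploits.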