Exploiting the dynamic capabilities of a legged robot requires a robust control method that enables traversal of complex environments. This paper presents a control framework that learns gait planning via Reinforcement Learning (RL), combined with Model Predictive Control (MPC) and a Central Pattern Generator (CPG). A neural network trained by RL operates as a high-level controller that decides where and when to place the feet on the ground. Given the footstep constraints and the model dynamics, MPC computes optimal ground reaction forces to track a reference trajectory. The combined framework performs dynamic tasks by learning gait planning while retaining the practical and robust properties of MPC. The practicality of the proposed method is verified through push-recovery and robust locomotion tasks both in simulation and on real hardware.