Legged robots often operate in challenging environments, such as rugged, steep, or slippery terrain. Because visual information can be degraded under such conditions, for example by motion blur, proprioceptive sensors are important for estimating the pose of a legged robot robustly. Accordingly, several studies have estimated pose by learning from inertial information. This paper proposes a novel algorithm that learns the pose from inertial and joint data for more robust and accurate state estimation. The performance of the proposed approach is verified through experiments.
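The abstract does not detail the algorithm itself. Purely as an illustration of the proprioceptive signals involved (not the paper's learned method), the sketch below shows a classical baseline: leg odometry from joint encoders via a forward-kinematics Jacobian, fused with an IMU-derived velocity by a complementary filter. The two-link leg model, link lengths, and filter gain are all assumptions for this example.

```python
import numpy as np

def leg_odometry_velocity(joint_angles, joint_vels, link_lengths=(0.3, 0.3)):
    """Estimate planar body velocity from one stance leg.

    Assumes a hypothetical 2-link planar leg whose foot is pinned to the
    ground: the body velocity is the negative of the foot velocity
    obtained from the leg's forward-kinematics Jacobian.
    """
    l1, l2 = link_lengths
    q1, q2 = joint_angles
    # Jacobian of the planar foot position w.r.t. the two joint angles
    J = np.array([
        [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
        [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
    ])
    foot_vel = J @ np.asarray(joint_vels, dtype=float)
    return -foot_vel  # body moves opposite to the pinned foot

def complementary_fuse(v_leg, v_imu, alpha=0.98):
    """Blend leg-odometry velocity with IMU-integrated velocity.

    A simple complementary filter: the IMU estimate dominates at high
    frequency (weight alpha), leg odometry corrects the low-frequency drift.
    """
    return alpha * np.asarray(v_imu, dtype=float) + (1.0 - alpha) * np.asarray(v_leg, dtype=float)
```

A learning-based estimator of the kind the abstract describes would replace this hand-designed model with a network that consumes the same inertial and joint measurements; the baseline above only shows what information those sensors carry.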