We propose a novel deep-learning approach for uprighting and stabilizing 360-degree videos. Camera shake introduced during filming exacerbates viewer dizziness when the content is watched through virtual reality devices. Previous research in this area has been hindered by the absence of video data labeled with camera rotation values; we address this limitation through image augmentation. Our method achieves detailed stabilization in two steps. In the first step, we approximately align the horizon in each frame. In the second step, we leverage optical flow to estimate the rotation matrix between consecutive frames, enabling more precise adjustment. Finally, applying the inverse of the estimated rotation to each frame yields a stabilized video. Extensive experiments demonstrate the effectiveness of the proposed method.
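The final stabilization step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the per-frame rotation matrices (frame t-1 to frame t) have already been estimated from optical flow, and shows how the accumulated inverse rotation that maps each frame back to the reference frame would be computed. The function names and the Rodrigues-formula helper are illustrative assumptions.

```python
import numpy as np

def rotation_matrix(axis, angle):
    # Rodrigues' formula: rotation by `angle` (radians) about a unit `axis`.
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def stabilizing_rotations(frame_to_frame):
    """Given rotations R_t (frame t-1 -> frame t), return the inverse
    rotations that map each frame back to the reference frame 0."""
    acc = np.eye(3)
    inverses = []
    for R in frame_to_frame:
        acc = R @ acc           # total rotation from frame 0 to frame t
        inverses.append(acc.T)  # inverse of a rotation matrix is its transpose
    return inverses
```

Each returned matrix would then be used to resample the corresponding equirectangular frame, undoing the accumulated camera rotation.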