Deep learning techniques now achieve remarkable performance in computer vision, even surpassing humans on complex tasks such as ImageNet classification. However, deep learning models have been shown to be vulnerable to small, carefully crafted perturbations known as adversarial attacks. This poses a problem for the safety and security of artificial intelligence and has recently attracted considerable research attention. Adversarial attacks can easily fool models for image classification, semantic segmentation, and object detection. We focus on adversarial attacks in the semantic segmentation task, since little work has addressed this setting. We show that such attacks can be defended against with a denoising autoencoder, which removes the adversarial perturbation and restores the original image. We build a deep denoising autoencoder model for removing adversarial perturbations and restoring clean images, experiment with various noise distributions, and verify the effectiveness of the denoising autoencoder against adversarial attacks in the semantic segmentation task.
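The abstract does not specify the architecture, so the following is only a minimal sketch of the general idea: a convolutional denoising autoencoder in PyTorch, trained to map perturbed images back to their clean originals. All layer sizes are illustrative assumptions, and the Gaussian noise here is merely a stand-in for an actual adversarial perturbation.

```python
# Minimal denoising autoencoder sketch (PyTorch). Channel widths,
# depth, and the noise model are illustrative assumptions, not the
# paper's exact configuration.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the (possibly perturbed) image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to the clean image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One training step: the model receives a perturbed image and is
# penalized for deviating from the clean original.
model = DenoisingAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

clean = torch.rand(8, 3, 64, 64)  # stand-in batch of clean images
perturbed = (clean + 0.05 * torch.randn_like(clean)).clamp(0.0, 1.0)

recon = model(perturbed)
loss = criterion(recon, clean)    # reconstruction loss vs. clean images
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At inference time, the autoencoder would be applied to an input image before it is passed to the segmentation network, so that adversarial perturbations are removed before prediction.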