Deep neural networks have demonstrated their usefulness in a wide range of applications, such as vision, speech, and natural language processing. A deep neural network provides high capacity for modeling the input-to-output mapping of real data. However, this high capacity can lead to overfitting, which manifests as a large gap between the training error and the generalization error. To alleviate this problem, this dissertation proposes a novel regularization technique, adversarial dropout. Specifically, adversarial dropout identifies a subnetwork that performs worse, or produces an output far from that of the original network, even though most of the neurons of the original network are retained in the subnetwork. We developed a regularization term that reduces the discrepancy between the output of this intentionally perturbed network and either the correct target or the output of the original network. Additionally, we theoretically proved that this regularization is an upper bound on the gap between the training and inference phases of random dropout. Our experiments demonstrated the applicability of adversarial dropout to two types of deep neural networks: feed-forward neural networks and recurrent neural networks.
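To make the idea concrete, the following PyTorch sketch shows one way the adversarial mask search could be approximated with a single gradient step, following the description above. The split of the model into a dropout layer and a `model_tail`, the function name `adversarial_dropout_loss`, and the one-shot greedy flip of a `delta`-fraction of units are illustrative assumptions, not the dissertation's exact algorithm.

```python
import torch
import torch.nn.functional as F

def adversarial_dropout_loss(model_tail, h, target, delta=0.05):
    """Sketch of adversarial dropout regularization (illustrative, not exact).

    model_tail : the layers after the dropout layer (hypothetical model split)
    h          : hidden activations entering the dropout layer, shape (batch, d)
    target     : detached clean predictions (probabilities) or one-hot labels
    delta      : fraction of units allowed to be adversarially dropped
    """
    h_d = h.detach()  # search the mask on a detached copy of the activations
    mask = torch.ones_like(h_d, requires_grad=True)  # start from the no-drop mask

    # Divergence between the masked network's output and the target.
    out = model_tail(h_d * mask)
    div = F.kl_div(F.log_softmax(out, dim=1), target, reduction="batchmean")
    grad, = torch.autograd.grad(div, mask)

    # To first order, dropping unit i (mask 1 -> 0) changes the divergence by
    # roughly -grad_i, so dropping the units with the most negative gradient
    # increases the divergence the most; flip only a delta-fraction of entries.
    k = max(1, int(delta * h_d.size(1)))
    with torch.no_grad():
        idx = grad.topk(k, dim=1, largest=False).indices
        adv_mask = torch.ones_like(h_d)
        adv_mask.scatter_(1, idx, 0.0)

    # Regularizer: the divergence of the adversarially perturbed subnetwork.
    out_adv = model_tail(h * adv_mask)  # keep the graph so the loss trains the model
    return F.kl_div(F.log_softmax(out_adv, dim=1), target, reduction="batchmean")
```

In training, this term would be added to the ordinary supervised loss with a weighting coefficient, mirroring how the proposed regularization is combined with the task objective; using the clean network's own (detached) predictions as `target` corresponds to the variant that matches the perturbed output to the output of the original network.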