Anomaly detection is the task of identifying whether a given sample is drawn from outside the training distribution. Advances in deep neural networks (DNNs) have led to impressive results, and in recent years many works have exploited DNNs for anomaly detection. Among these, generative/reconstruction model-based approaches have been widely used because they do not require labels for training. The anomaly detection performance of these methods, however, varies considerably with changes in intra-class variance and in the complexity of the input samples. To improve anomaly detection performance, relatively new self-supervision-based approaches have been proposed and have shown outstanding results. It has been reported, however, that the performance of these approaches depends on the data set. To build an anomaly detection framework that is data set-independent and performs stably across various experimental environments, we propose a dual discriminator with a correlative autoencoder. In the proposed framework, the discriminator implicitly estimates the conditional probability density function, and the autoencoder has an improved ability to control the reconstruction error. We provide a theoretical foundation for our method and verify it through various experiments. We also confirm the practical benefits of our interpretation of the conditional expectation and of the proposed framework by comparing our results with those of other state-of-the-art methods. The proposed method outperforms previous probabilistic/reconstruction/generative model-based approaches, and it also outperforms previous self-supervision-based approaches on the rotation-invariant data set.
We expect the proposed method to be widely applicable to various anomaly detection applications because it is data set-independent and performs stably across various experimental environments.