Out-of-distribution (OOD) detection, i.e., identifying whether a given test sample is drawn from outside the training distribution, is essential for deploying a deep classifier in real-world applications. Existing state-of-the-art OOD detection methods tackle this problem by utilizing the internal features of the classification network. However, we found that such detection methods inherently struggle to detect hard OOD images, i.e., those drawn from near the training distribution: a naive softmax-based baseline even outperforms them. Motivated by this, we propose a simple yet effective training scheme that further calibrates the softmax probability of a classifier to achieve high OOD detection performance under both hard and easy scenarios. In particular, we suggest optimizing a consistency regularization and a self-supervised loss during training. Our experiments demonstrate the superiority of our simple method under various OOD detection scenarios.
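The softmax-based baseline referenced above can be illustrated with the maximum softmax probability (MSP) score: a test sample is flagged as OOD when the classifier's top softmax probability falls below a threshold. The sketch below is a minimal, hypothetical illustration of this scoring rule (the function names and example logits are not from the paper), not the paper's proposed training scheme:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # Maximum softmax probability: higher scores suggest
    # the sample is more likely in-distribution.
    return softmax(logits).max(axis=-1)

# Hypothetical logits: one confident prediction, one uncertain one.
logits = np.array([
    [5.0, 0.1, 0.2],   # peaked softmax -> treated as in-distribution
    [1.0, 0.9, 1.1],   # flat softmax  -> candidate OOD sample
])
scores = msp_score(logits)
is_ood = scores < 0.5  # threshold is a tunable assumption
print(scores, is_ood)
```

A "hard" OOD sample in the abstract's sense is one whose logits resemble the confident first row despite coming from outside the training distribution, which is why additional calibration of the softmax output is needed.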