In recent years, deep generative models have been largely dismissed for fully-supervised classification settings, as they are often substantially outperformed by deep discriminative classifiers (e.g., softmax classifiers). In contrast to this common belief, this thesis shows that it is possible to formulate a simple generative classifier that is useful for detecting abnormal samples (i.e., novelty detection) and for handling training samples with (incorrect) noisy labels, with little sacrifice of the original discriminative performance on in-distribution and/or cleanly labeled data. We believe that our approach has the potential to be applied to many other related machine learning tasks, e.g., active learning, ensemble learning, and few-shot learning.
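The abstract does not specify the classifier's construction, but a common instantiation of a simple generative classifier for novelty detection is to fit class-conditional Gaussians with a tied covariance over feature vectors and to score test samples by their Mahalanobis distance to the nearest class mean. The sketch below illustrates that idea under this assumption; all function names and the synthetic data are hypothetical, not taken from the thesis.

```python
import numpy as np

def fit_gaussian_classifier(features, labels):
    """Fit per-class means and a shared (tied) covariance to feature vectors."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    # Pool the class-centered samples to estimate one shared covariance.
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(features)
    # Small ridge term keeps the inverse numerically stable.
    precision = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return means, precision

def confidence_score(x, means, precision):
    """Negative Mahalanobis distance to the closest class mean.

    Higher scores indicate in-distribution samples; low scores flag novelty.
    """
    dists = [(x - mu) @ precision @ (x - mu) for mu in means.values()]
    return -min(dists)

def predict(x, means, precision):
    """Classify x by the nearest class mean under the Mahalanobis metric."""
    classes = list(means)
    dists = [(x - means[c]) @ precision @ (x - means[c]) for c in classes]
    return classes[int(np.argmin(dists))]

# Illustrative usage on synthetic 2-D features with two well-separated classes.
rng = np.random.default_rng(0)
features = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
                      rng.normal(5.0, 1.0, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)
means, precision = fit_gaussian_classifier(features, labels)

# A point near class 0 scores much higher than a far-away outlier.
in_dist_score = confidence_score(np.array([0.0, 0.0]), means, precision)
outlier_score = confidence_score(np.array([20.0, 20.0]), means, precision)
```

In this picture, classification and novelty detection share one model: the same class-conditional densities give both the predicted label (nearest mean) and the confidence score used to reject abnormal inputs.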