In deep neural network (DNN) research, there are several directions for improving performance, such as designing network architectures and developing optimization techniques. Among these, we focus on multitask learning. While DNNs are generally trained to minimize a single loss between the output and the label, we propose an additional loss term that improves performance on the main task. Using an auxiliary loss that trains the DNN to reconstruct its input images, the proposed model outperforms DNNs trained with the standard single loss on a face recognition task. Furthermore, we propose a novel knowledge distillation technique that introduces another loss term minimizing the distance between the features of the main DNN and those of another (teacher) DNN. On several image classification datasets, we show that the proposed multitask learning models outperform the standard single-loss model.
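A minimal PyTorch-style sketch may make the two auxiliary loss terms concrete. The architectures, the teacher network, and the weights `lambda_rec` and `lambda_kd` below are hypothetical stand-ins for illustration, not the actual models or hyperparameters used in this work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical multitask network: a shared encoder, a main classification
# head, and an auxiliary decoder that reconstructs the input image.
class MultitaskNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(                  # shared feature extractor
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.classifier = nn.Sequential(               # main-task head
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes))
        self.decoder = nn.Sequential(                  # auxiliary reconstruction head
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        feat = self.encoder(x)
        return self.classifier(feat), self.decoder(feat), feat

student, teacher = MultitaskNet(), MultitaskNet()      # teacher stands in for a pretrained model
x = torch.rand(8, 3, 32, 32)                           # dummy image batch
y = torch.randint(0, 10, (8,))                         # dummy labels

logits, recon, feat = student(x)
with torch.no_grad():                                  # teacher is frozen
    _, _, teacher_feat = teacher(x)

lambda_rec, lambda_kd = 0.1, 0.1                       # hypothetical loss weights
loss = (F.cross_entropy(logits, y)                     # main classification loss
        + lambda_rec * F.mse_loss(recon, x)            # auxiliary: reconstruct the input
        + lambda_kd * F.mse_loss(feat, teacher_feat))  # auxiliary: match teacher features
loss.backward()
```

The main loss and the auxiliary terms share the encoder, so gradients from the reconstruction and feature-matching objectives shape the same features used for classification.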
We extend this line of research from supervised learning to unsupervised learning. In the generative adversarial network (GAN) framework, which is widely used to model data distributions, we propose a novel task that creates an intermediate domain between two existing domains by adding one more auxiliary loss term to the standard GAN loss.
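As a rough illustration of such a combined objective, the sketch below adds an auxiliary term to the standard non-saturating generator loss that pushes generated samples toward the midpoint between two domains. The toy networks, the uniform-target formulation, and the weight `lambda_aux` are assumptions made for illustration, not the exact formulation proposed here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy networks; in practice D and C would be trained in alternation with G.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))   # generator
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))    # real/fake critic
C = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))    # domain-A/domain-B classifier

z = torch.randn(32, 16)                        # latent noise
fake = G(z)

# Standard non-saturating generator loss: fool the real/fake discriminator.
adv_loss = F.binary_cross_entropy_with_logits(D(fake), torch.ones(32, 1))

# Hypothetical auxiliary loss: generated samples should sit between the two
# domains, i.e. the domain classifier should predict the uniform distribution.
target = torch.full((32, 2), 0.5)
aux_loss = F.cross_entropy(C(fake), target)    # soft-label cross entropy

lambda_aux = 1.0                               # hypothetical weight
g_loss = adv_loss + lambda_aux * aux_loss
g_loss.backward()
```

Under this reading, the adversarial term keeps samples realistic while the auxiliary term anchors them between the two source domains, yielding the intermediate domain.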