In this paper, we propose a novel unsupervised learning method for facial animation retargeting. The goal of facial animation retargeting is to transfer the animation of a source character to a target character while preserving the semantic meaning of the animation. These techniques are widely used throughout the entertainment industry because of their convenience. Although this problem has been studied extensively in recent years, traditional methods require manually paired blendshape data, corresponding vertex pairs, or reconstruction of a facial mesh. We therefore propose a neural-network-based method that retargets facial animation from one blendshape model to another without any manual pairing process. By formulating retargeting as an unsupervised image-to-image translation problem, our method translates rendered images of the source model into images of the target model. In addition, the proposed method introduces a blendshape prediction network that extracts blendshape weights from the translated images, enabling retargeting of blendshape animation.
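The pipeline described above (render the source blendshape model, translate the rendered image into the target character's domain, then predict the target blendshape weights from the translated image) can be sketched as the composition of three stages. The sketch below is purely illustrative: the function names, stand-in computations, and shapes are assumptions, not the paper's actual networks or API.

```python
# Illustrative sketch of the retargeting pipeline, with trivial stand-ins
# replacing the renderer and the two neural networks. All names and
# computations here are hypothetical assumptions, not the paper's method.

def render(source_weights):
    # Stand-in for rendering the source blendshape model to an image;
    # here the "image" is just a flat vector derived from the weights.
    return [2.0 * w for w in source_weights]

def translate(image):
    # Stand-in for the unsupervised image-to-image translation network
    # that maps a source-character image to a target-character image.
    return [0.5 * p + 0.1 for p in image]

def predict_blendshape_weights(image):
    # Stand-in for the blendshape prediction network that recovers
    # target blendshape weights (clamped to [0, 1]) from the image.
    return [min(max(p, 0.0), 1.0) for p in image]

def retarget(source_weights):
    # Full pipeline: render -> translate -> predict target weights.
    return predict_blendshape_weights(translate(render(source_weights)))

target_weights = retarget([0.0, 0.3, 1.0])
```

In a real system, `translate` would be a trained image-to-image translation network and `predict_blendshape_weights` a learned regressor, but the composition structure is the same.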