Speaker identification is fundamentally important for applications such as home devices, surveillance, and authorization, and the main difficulty is achieving robust identification accuracy. In this paper, we present a multimodal method for speaker identification based on deep neural networks that uses both face recognition and voice identification, and show that the proposed multimodal model yields more robust identification performance. For face recognition, we use a convolutional neural network, specifically the VGG Face descriptor network. For voice identification, we use an i-vector representation based on a Gaussian mixture model. After feature extraction, the feature vectors from the face and voice modalities are concatenated and used to train a multimodal deep neural network that produces 1024-dimensional multimodal embeddings. We validate the performance of our model on a new dataset consisting of 281 TED videos. The multimodal DNN model achieves more reliable identification performance than single-modality methods such as face recognition or speaker recognition alone.
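The fusion step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the VGG Face fc7 descriptor is 4096-dimensional, but the i-vector dimension (400 here) and the hidden-layer width are assumptions, and the randomly initialized weights stand in for the trained multimodal DNN.

```python
import numpy as np

# Assumed dimensions: 4096-d VGG Face fc7 descriptor; the 400-d
# i-vector and 2048-d hidden layer are illustrative choices only.
FACE_DIM, VOICE_DIM, HIDDEN_DIM, EMBED_DIM = 4096, 400, 2048, 1024

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Toy weights standing in for the trained multimodal network.
W1 = rng.standard_normal((FACE_DIM + VOICE_DIM, HIDDEN_DIM)) * 0.01
W2 = rng.standard_normal((HIDDEN_DIM, EMBED_DIM)) * 0.01

def multimodal_embedding(face_vec, voice_vec):
    """Concatenate face and voice features, project to a 1024-d embedding."""
    fused = np.concatenate([face_vec, voice_vec])  # shape (4496,)
    return relu(fused @ W1) @ W2                   # shape (1024,)

face = rng.standard_normal(FACE_DIM)    # stand-in VGG Face descriptor
voice = rng.standard_normal(VOICE_DIM)  # stand-in i-vector
emb = multimodal_embedding(face, voice)
print(emb.shape)  # (1024,)
```

At identification time, such embeddings would be compared (e.g. by a classifier or nearest-neighbor lookup) to decide the speaker's identity; the sketch only shows how the two modalities are fused into a single 1024-dimensional vector.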