Multimodal speaker identification using deep neural networks

Speaker identification is fundamentally important for various applications such as home devices, surveillance, and authorization. The main difficulty in speaker recognition is achieving robust identification accuracy. In this paper, we present a multimodal method for speaker identification based on deep neural networks that uses both face recognition and voice identification. The proposed multimodal model shows more robust speaker identification performance. For face recognition, we use a convolutional neural network, specifically the VGG Face descriptor network. For voice identification, we use i-vectors extracted with a Gaussian mixture model. After feature extraction, the feature vectors from the face and voice modalities are concatenated and used to train a multimodal deep neural network that produces 1024-dimensional multimodal embeddings. We validate the performance of our model on a new dataset consisting of 281 TED videos. The multimodal DNN model achieves more reliable identification performance than single-modality methods such as face recognition or speaker recognition alone.
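The fusion step described in the abstract — concatenating the face and voice feature vectors and feeding them to a deep network that outputs a 1024-dimensional multimodal embedding — can be sketched as follows. This is a minimal illustration, not the thesis's actual architecture: the feature dimensions (4096 for the VGG Face descriptor, 400 for the i-vector) and the single hidden layer are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the two modality features (dimensions are illustrative
# assumptions): a VGG-Face CNN descriptor and a GMM-based i-vector.
face_feat = rng.standard_normal(4096)   # face embedding from the CNN
voice_feat = rng.standard_normal(400)   # i-vector from the voice model

# Early fusion: concatenate the two feature vectors into one joint vector.
fused = np.concatenate([face_feat, voice_feat])   # 4496-D joint feature

def relu(x):
    """Elementwise ReLU nonlinearity."""
    return np.maximum(0.0, x)

# One (randomly initialized) fully connected layer projecting the fused
# vector to a 1024-D multimodal embedding, as a stand-in for the trained DNN.
W = rng.standard_normal((1024, fused.size)) * 0.01
b = np.zeros(1024)
embedding = relu(W @ fused + b)

print(fused.shape, embedding.shape)   # (4496,) (1024,)
```

In the trained system, `W` and `b` would be learned jointly with any further layers so that embeddings of the same speaker cluster together across modalities; here they are random and serve only to show the shapes flowing through the fusion network.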
Advisors
Kim, Dae-Shik (김대식)
Description
Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering
Publisher
KAIST (Korea Advanced Institute of Science and Technology)
Issue Date
2017
Identifier
325007
Language
eng
Description

Master's thesis - KAIST: School of Electrical Engineering, 2017.2, [iii, 29 p.]

Keywords

Speaker identification; deep learning; multimodal model; i-vector; convolutional neural network

URI
http://hdl.handle.net/10203/243321
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=675429&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
