ACNN-VC: Utilizing Adaptive Convolution Neural Network for One-Shot Voice Conversion

DC Field | Value | Language
dc.contributor.author | Um, Jisub | ko
dc.contributor.author | Choi, Yeunju | ko
dc.contributor.author | Kim, Hoi-Rin | ko
dc.date.accessioned | 2022-11-24T11:01:48Z | -
dc.date.available | 2022-11-24T11:01:48Z | -
dc.date.created | 2022-11-21 | -
dc.date.issued | 2022-09-21 | -
dc.identifier.citation | 23rd Annual Conference of the International Speech Communication Association, INTERSPEECH 2022, pp.2998 - 3002 | -
dc.identifier.issn | 2308-457X | -
dc.identifier.uri | http://hdl.handle.net/10203/300901 | -
dc.description.abstract | Voice conversion (VC) converts the speaker characteristics of a source speaker to those of a target speaker without modifying the linguistic content. To overcome the limitations of existing VC systems for target speakers unseen during training, many researchers have recently studied one-shot voice conversion. Although many papers have shown that voice conversion can be performed with only a single utterance of an unseen target speaker, the converted speech still sounds far from the target speaker's voice. To enhance the similarity of the generated speech, we apply an adaptive convolution neural network (ACNN) to the voice conversion system in two ways. First, we utilize ACNNs with a normalization method to adapt speaker-related information in the denormalization process. Second, we build an architecture with ACNNs that have various receptive fields to generate a voice closer to the target speaker's while considering temporal patterns. We evaluated both methods with objective and subjective metrics. Results show that the converted speech outperforms previous methods in speaker similarity while maintaining the naturalness score. | -
dc.language | English | -
dc.publisher | ISCA | -
dc.title | ACNN-VC: Utilizing Adaptive Convolution Neural Network for One-Shot Voice Conversion | -
dc.type | Conference | -
dc.identifier.wosid | 000900724503034 | -
dc.identifier.scopusid | 2-s2.0-85140066692 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 2998 | -
dc.citation.endingpage | 3002 | -
dc.citation.publicationname | 23rd Annual Conference of the International Speech Communication Association, INTERSPEECH 2022 | -
dc.identifier.conferencecountry | KO | -
dc.identifier.conferencelocation | Incheon | -
dc.identifier.doi | 10.21437/Interspeech.2022-10473 | -
dc.contributor.localauthor | Kim, Hoi-Rin | -
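
The abstract above centers on adaptive convolution: injecting target-speaker information either through the denormalization step or through stacked ACNN layers with varied receptive fields. As a rough illustration of the core mechanism, here is a minimal sketch, assuming PyTorch, of a depthwise 1-D convolution whose kernel is predicted from a speaker embedding; the predictor network, shapes, and names (AdaptiveConv1d, kernel_net, spk_dim) are hypothetical and not the authors' published implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveConv1d(nn.Module):
    # Depthwise 1-D convolution whose kernel is predicted per utterance
    # from a speaker embedding (hypothetical sketch of an ACNN layer).
    def __init__(self, channels: int, kernel_size: int, spk_dim: int):
        super().__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        # Hypothetical kernel predictor: speaker embedding -> one kernel per channel.
        self.kernel_net = nn.Linear(spk_dim, channels * kernel_size)

    def forward(self, x: torch.Tensor, spk_emb: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); spk_emb: (batch, spk_dim)
        batch, channels, time = x.shape
        kernels = self.kernel_net(spk_emb)          # (batch, channels * kernel_size)
        kernels = kernels.view(batch * channels, 1, self.kernel_size)
        # Fold the batch into the channel axis so a grouped convolution
        # applies a different predicted kernel to every (utterance, channel) pair.
        x = x.reshape(1, batch * channels, time)
        out = F.conv1d(x, kernels, padding=self.kernel_size // 2,
                       groups=batch * channels)
        return out.view(batch, channels, time)

# Usage sketch: speaker-adapt normalized content features.
layer = AdaptiveConv1d(channels=80, kernel_size=5, spk_dim=256)
mel = torch.randn(2, 80, 120)   # normalized content features (batch, channels, frames)
spk = torch.randn(2, 256)       # embedding from one target-speaker reference utterance
out = layer(mel, spk)           # speaker-adapted features, same shape as the input

Stacking several such layers with different kernel_size values would correspond loosely to the varied-receptive-field design the abstract mentions; which normalization method the authors pair with the ACNN is not specified in this record.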
Appears in Collection
EE-Conference Papers (학술회의논문)
Files in This Item
There are no files associated with this item.
