Multi-speaker and multi-domain emotional voice conversion using factorized hierarchical variational autoencoder

Due to the complexity of emotional features, there has been limited success in emotional voice conversion. One major challenge is that conversion among more than two kinds of emotions is often accompanied by distortion of the voice signal. The factorized hierarchical variational autoencoder (FHVAE) [1] was previously shown, through a mechanism called sequence-level regularization, to generate disentangled representations of both sequence-level features (such as speaker identity) and segment-level features. This study exploits the FHVAE pipeline to produce disentangled representations of emotion, which greatly facilitates emotional voice conversion. We propose three versions of the algorithm to improve the quality of the disentangled representations and of the audio synthesis. We conducted three mean opinion score (MOS) surveys to assess the performance of our models in terms of 1) preservation of the speaker's voice, 2) emotion conversion, and 3) audio naturalness.
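The abstract only sketches the approach at a high level. As a concrete illustration of the FHVAE latent structure it builds on, the following minimal PyTorch sketch shows a segment-level code z1, a sequence-level code z2, and the sequence-level regularization that pulls z2 toward a per-sequence prior mean mu2, in the spirit of [1]. All class and variable names, layer sizes, and the simplified training loss are assumptions made here for illustration; they are not taken from the paper.

import torch
import torch.nn as nn

class SimpleFHVAE(nn.Module):
    def __init__(self, feat_dim=80, z1_dim=32, z2_dim=32, hidden=256, n_seqs=100):
        super().__init__()
        # Sequence-level encoder q(z2 | x): meant to capture utterance-level
        # factors such as speaker identity (and, in this setting, emotion).
        self.enc_z2 = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 2 * z2_dim))
        # Segment-level encoder q(z1 | x, z2): meant to capture residual,
        # short-time content within each segment.
        self.enc_z1 = nn.Sequential(
            nn.Linear(feat_dim + z2_dim, hidden), nn.Tanh(), nn.Linear(hidden, 2 * z1_dim))
        # Decoder p(x | z1, z2).
        self.dec = nn.Sequential(
            nn.Linear(z1_dim + z2_dim, hidden), nn.Tanh(), nn.Linear(hidden, feat_dim))
        # One trainable prior mean mu2 per training sequence; regularizing z2
        # toward it is the "sequence-level regularization" mentioned above.
        self.mu2_table = nn.Embedding(n_seqs, z2_dim)

    @staticmethod
    def sample(stats):
        # Reparameterization trick: stats holds concatenated (mean, log-variance).
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar), mu

    def forward(self, x, seq_id):
        z2, mu_z2 = self.sample(self.enc_z2(x))
        z1, _ = self.sample(self.enc_z1(torch.cat([x, z2], dim=-1)))
        x_hat = self.dec(torch.cat([z1, z2], dim=-1))
        recon = ((x - x_hat) ** 2).mean()                          # reconstruction error
        seq_reg = ((mu_z2 - self.mu2_table(seq_id)) ** 2).mean()   # pull z2 toward its mu2
        return recon + seq_reg

Example usage (again, purely illustrative):

model = SimpleFHVAE()
x = torch.randn(16, 80)                 # a batch of 16 segment-level acoustic feature vectors
seq_id = torch.randint(0, 100, (16,))   # which training sequence each segment comes from
loss = model(x, seq_id)
loss.backward()

With such a factorization, converting a sequence-level attribute such as emotion would, in principle, amount to replacing or shifting the sequence-level code while keeping the segment-level code fixed; the paper's three proposed algorithm variants and its synthesis pipeline are not detailed in the abstract above.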
Publisher
IEEE
Issue Date
2020-05
Language
English
Citation

2020 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020), pp. 7769-7773

ISSN
1520-6149
DOI
10.1109/ICASSP40776.2020.9054534
URI
http://hdl.handle.net/10203/288370
Appears in Collection
BiS-Conference Papers (Conference Papers)