DC Field | Value | Language |
---|---|---|
dc.contributor.author | Choi, Heejin | ko |
dc.contributor.author | Park, Sangjun | ko |
dc.contributor.author | Park, Jinuk | ko |
dc.contributor.author | Hahn, Minsoo | ko |
dc.date.accessioned | 2019-12-13T12:27:34Z | - |
dc.date.available | 2019-12-13T12:27:34Z | - |
dc.date.created | 2019-11-28 | - |
dc.date.issued | 2019-01-12 | - |
dc.identifier.citation | 2019 IEEE International Conference on Consumer Electronics, ICCE 2019 | - |
dc.identifier.uri | http://hdl.handle.net/10203/269539 | - |
dc.description.abstract | This paper studies methods for emotional speech synthesis using a neural vocoder. WaveNet is used as the neural vocoder, generating waveforms from mel spectrograms. We propose two networks, i.e., a deep convolutional neural network (CNN)-based text-to-speech (TTS) system and an emotional converter, where the deep CNN architecture is designed to utilize long-term context information. The first network estimates neutral mel spectrograms from linguistic features, and the second network converts neutral mel spectrograms into emotional mel spectrograms. Experimental results on the TTS system and the emotional TTS system show that the proposed systems are a promising approach. | - |
dc.language | English | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | Emotional Speech Synthesis for Multi-Speaker Emotional Dataset Using WaveNet Vocoder | - |
dc.type | Conference | - |
dc.identifier.wosid | 000462912600029 | - |
dc.identifier.scopusid | 2-s2.0-85063812326 | - |
dc.type.rims | CONF | - |
dc.citation.publicationname | 2019 IEEE International Conference on Consumer Electronics, ICCE 2019 | - |
dc.identifier.conferencecountry | US | - |
dc.identifier.conferencelocation | Tuscany Suites & Casino, Las Vegas, NV | - |
dc.identifier.doi | 10.1109/ICCE.2019.8661919 | - |
dc.contributor.localauthor | Hahn, Minsoo | - |
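For illustration only, below is a minimal sketch of the second network described in the abstract: a deep CNN that converts neutral mel spectrograms into emotional mel spectrograms, after which a WaveNet vocoder would synthesize the waveform. This is not the authors' implementation; the PyTorch framing, layer counts, channel sizes, and use of dilated convolutions for long-term context are all assumptions.

```python
# Hypothetical sketch of an emotional converter: a deep 1-D CNN with dilated
# convolutions (assumed here to capture long-term context), mapping neutral
# mel spectrograms to emotional mel spectrograms. Not the paper's actual code.
import torch
import torch.nn as nn


class EmotionalConverter(nn.Module):
    def __init__(self, n_mels: int = 80, channels: int = 256, n_layers: int = 6):
        super().__init__()
        layers = []
        in_ch = n_mels
        for i in range(n_layers):
            dilation = 2 ** i  # exponentially growing receptive field
            layers += [
                nn.Conv1d(in_ch, channels, kernel_size=3,
                          padding=dilation, dilation=dilation),
                nn.ReLU(),
            ]
            in_ch = channels
        # Project back to the mel-spectrogram dimension.
        layers.append(nn.Conv1d(channels, n_mels, kernel_size=1))
        self.net = nn.Sequential(*layers)

    def forward(self, neutral_mel: torch.Tensor) -> torch.Tensor:
        # neutral_mel: (batch, n_mels, frames) -> emotional mel of the same shape
        return self.net(neutral_mel)


if __name__ == "__main__":
    converter = EmotionalConverter()
    neutral = torch.randn(1, 80, 200)   # dummy neutral mel spectrogram
    emotional = converter(neutral)      # predicted emotional mel spectrogram
    print(emotional.shape)              # torch.Size([1, 80, 200])
    # A WaveNet vocoder would then generate the waveform from `emotional`.
```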