Abstract
Generating expressive, natural-sounding speech from text using a text-to-speech (TTS) system is a highly challenging problem. However, for tasks such as audiobook narration, expressive synthesis is essential if such systems are to see widespread use. Generating expressive speech from text can be divided into two parts: predicting expressive information from the text, and synthesizing the speech with a particular expression. Traditionally, these components have been studied separately. This paper proposes an integrated approach, in which the training data and the representation of expressive synthesis are shared across the two components. This scheme has several advantages, including: robust handling of automatically generated expressive labels; support for a continuous representation of expressions; and joint training of the expression predictor and the speech synthesizer. Synthesis experiments indicated that the proposed approach produced far more expressive speech than both a neutral TTS system and one in which the expression was randomly selected. The experimental results also showed the advantage of a continuous expressive synthesis space over a discrete one.
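As a rough illustration of the shared continuous expression space described above, the sketch below shows how an expression predictor might map text-derived features to continuous cluster-interpolation weights, which a cluster-adaptive-training (CAT) style synthesizer then uses to interpolate cluster means. This is a toy construction, not the authors' implementation: all names and dimensions (`TEXT_DIM`, `predict_expression`, the softmax constraint, etc.) are invented for illustration, and the paper's actual system combines HMM-based synthesis with a trained predictor rather than the random parameters used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (invented for illustration).
TEXT_DIM = 20     # text-derived features for one utterance
EXPR_DIM = 4      # continuous expression space = CAT cluster weights
ACOUSTIC_DIM = 8  # synthesis parameters (e.g. spectral means)

# Cluster means of the synthesizer: in CAT-style synthesis the model
# mean is a weighted combination of per-cluster means.
cluster_means = rng.normal(size=(EXPR_DIM, ACOUSTIC_DIM))

# Expression predictor: a tiny linear model from text features to
# cluster-interpolation weights (the shared continuous representation).
W = rng.normal(scale=0.1, size=(EXPR_DIM, TEXT_DIM))
b = np.zeros(EXPR_DIM)

def predict_expression(text_features):
    """Map text features to a point in the continuous expression space."""
    logits = W @ text_features + b
    # A softmax keeps the cluster weights positive and summing to one;
    # this constraint is an assumption of the sketch, not of the paper.
    e = np.exp(logits - logits.max())
    return e / e.sum()

def synthesize(expression_weights):
    """CAT-style synthesis: interpolate the cluster means with the
    continuous expression weights."""
    return expression_weights @ cluster_means

# Joint use of the two components on a made-up utterance: the predicted
# expression is a continuous vector, not a discrete label.
text_features = rng.normal(size=TEXT_DIM)
expr = predict_expression(text_features)
acoustics = synthesize(expr)
print("expression weights:", np.round(expr, 3))
print("synthesis parameters:", np.round(acoustics, 3))
```

Because both components operate on the same expression vector, a training signal on the synthesized acoustics can in principle be propagated back through the predictor, which is the joint-training advantage the abstract refers to.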
Original language | English
---|---
Article number | 6683056
Pages (from-to) | 323-335
Number of pages | 13
Journal | IEEE Journal on Selected Topics in Signal Processing
Volume | 8
Issue number | 2
DOIs |
Publication status | Published - 2014 Apr
Externally published | Yes
Keywords
- Expressive speech synthesis
- audiobook
- cluster adaptive training
- hidden Markov model
- neural network
ASJC Scopus subject areas
- Signal Processing
- Electrical and Electronic Engineering