Integrated expression prediction and speech synthesis from text

Langzhou Chen, Mark J.F. Gales, Norbert Braunschweiler, Masami Akamine, Kate Knill

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)

Abstract

Generating expressive, natural-sounding speech from text using a text-to-speech (TTS) system is a highly challenging problem. However, for tasks such as audiobooks, it is essential if their use is to become widespread. Generating expressive speech from text can be divided into two parts: predicting expressive information from the text, and synthesizing the speech with a particular expression. Traditionally, these components have been studied separately. This paper proposes an integrated approach in which the training data and the representation of expressive synthesis are shared across the two components. This scheme has several advantages, including robust handling of automatically generated expressive labels, support for a continuous representation of expressions, and joint training of the expression predictor and the speech synthesizer. Synthesis experiments indicated that the proposed approach produced far more expressive speech than both a neutral TTS system and one in which the expression was randomly selected. The experimental results also show the advantage of a continuous expressive synthesis space over a discrete one.
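The core idea of the continuous expression space can be pictured as a small network that maps text-derived features of an utterance to a low-dimensional expression vector, such as the cluster-weight vector used to interpolate expression clusters in a cluster-adaptive-training (CAT) based synthesizer; the same vectors then serve as regression targets during joint training. The sketch below illustrates this mapping in PyTorch. All dimensions, layer sizes, and the MSE loss are illustrative assumptions, not the paper's exact configuration.

    # Hypothetical sketch of an expression predictor over a continuous
    # expression space. Dimensions and targets are illustrative; in a real
    # CAT-based system the target weight vectors would come from the
    # adaptively trained synthesizer, so both components share one
    # representation.
    import torch
    import torch.nn as nn

    TEXT_DIM = 128   # dimensionality of text-derived utterance features (assumed)
    EXPR_DIM = 4     # size of the continuous expression (cluster-weight) space (assumed)

    class ExpressionPredictor(nn.Module):
        """Maps text features to a point in a continuous expression space."""
        def __init__(self, text_dim=TEXT_DIM, expr_dim=EXPR_DIM, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(text_dim, hidden),
                nn.Tanh(),
                nn.Linear(hidden, expr_dim),
            )

        def forward(self, text_feats):
            return self.net(text_feats)

    predictor = ExpressionPredictor()
    optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)

    # Stand-in batch: utterance features and the synthesizer-side expression
    # vectors they should map to (random here, purely for illustration).
    text_feats = torch.randn(32, TEXT_DIM)
    target_weights = torch.randn(32, EXPR_DIM)

    optimizer.zero_grad()
    loss = nn.functional.mse_loss(predictor(text_feats), target_weights)
    loss.backward()
    optimizer.step()

Because the predicted vector lives in the same continuous space the synthesizer is trained in, any point it outputs yields a valid expression, which is what distinguishes this setup from selecting among a discrete set of expression labels.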

Original language: English
Article number: 6683056
Pages (from-to): 323-335
Number of pages: 13
Journal: IEEE Journal on Selected Topics in Signal Processing
Volume: 8
Issue number: 2
Publication status: Published - Apr 2014
Externally published: Yes

Keywords

  • Expressive speech synthesis
  • audiobook
  • cluster adaptive training
  • hidden Markov model
  • neural network

ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering
