Prosodic variation enhancement using unsupervised context labeling for HMM-based expressive speech synthesis

Yu Maeno, Takashi Nose, Takao Kobayashi, Tomoki Koriyama, Yusuke Ijima, Hideharu Nakajima, Hideyuki Mizuno, Osamu Yoshioka

Research output: Contribution to journal › Article › peer-review



This paper proposes an unsupervised labeling technique using phrase-level prosodic contexts for HMM-based expressive speech synthesis, which enables users to manually enhance the prosodic variation of synthetic speech without degrading its naturalness. In the proposed technique, HMMs are first trained using conventional labels that include only linguistic information, and prosodic features are generated from these HMMs. The average difference between the original and generated prosodic features is then calculated for each accent phrase and classified into three classes, e.g., low, neutral, and high in the case of fundamental frequency. The resulting prosodic context label has a practical meaning, such as the relative pitch height of a phrase, so users can modify the prosodic characteristics of synthetic speech in an intuitive way by manually changing the proposed labels. In the experiments, we evaluate the proposed technique under both ideal and practical conditions using sales-talk and fairy-tale speech recorded in a realistic domain. Under the practical condition, we evaluate whether users achieve their intended prosodic modification by changing the proposed context label of a certain accent phrase in a given sentence.
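The phrase-level labeling step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the use of log-F0 as the prosodic feature, and the classification threshold `delta` are all assumptions made for the example.

```python
# Illustrative sketch of three-class prosodic context labeling:
# compare a phrase's original prosodic feature (e.g. log-F0) against the
# contour generated by a baseline HMM trained on linguistic labels only,
# then classify the average difference as "low", "neutral", or "high".
# The threshold `delta` is a hypothetical tuning parameter.

def label_phrase(original_f0, generated_f0, delta=0.05):
    """Assign a prosodic context class to one accent phrase from the
    average difference between original and generated log-F0 values."""
    diffs = [o - g for o, g in zip(original_f0, generated_f0)]
    avg = sum(diffs) / len(diffs)
    if avg > delta:
        return "high"     # original pitch sits above the model's prediction
    if avg < -delta:
        return "low"      # original pitch sits below the model's prediction
    return "neutral"      # difference is too small to be meaningful

# Example: a phrase whose original log-F0 lies above the generated contour
print(label_phrase([5.2, 5.3, 5.1], [5.0, 5.1, 5.0]))  # prints "high"
```

A user wanting a higher-pitched rendering of a phrase would then simply flip its label (e.g. from "neutral" to "high") before synthesis, which is the intuitive control the paper targets.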

Original language: English
Pages (from-to): 144-154
Number of pages: 11
Journal: Speech Communication
Publication status: Published - 2014


Keywords:
  • Audiobook
  • HMM-based expressive speech synthesis
  • Prosodic context
  • Prosody control
  • Unsupervised labeling


