Combining multiple high quality corpora for improving HMM-TTS

Vincent Wan, Javier Latorre, K. K. Chin, Langzhou Chen, Mark J.F. Gales, Heiga Zen, Kate Knill, Masami Akamine

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

20 Citations (Scopus)

Abstract

The most reliable way to build synthetic voices for end products is to start with high quality recordings from professional voice talents. This paper describes the application of average voice models (AVMs) and a novel application of cluster adaptive training (CAT) to combine a small number of these high quality corpora, making the best use of them and improving overall voice quality in hidden Markov model based text-to-speech (HMM-TTS) systems. It is shown that integrated training by both the CAT and AVM approaches yields better sounding voices than speaker dependent modelling. It is also shown that CAT has an advantage over AVMs when adapting to a new speaker: given a limited amount of adaptation data, CAT maintains much higher voice quality, even when adapted to tiny amounts of speech.
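To make the CAT idea concrete: in cluster adaptive training, each HMM state's output mean is an interpolation of a small set of cluster mean vectors, so adapting to a new speaker only requires estimating a short weight vector rather than full model parameters. The sketch below is illustrative only (the variable names, dimensions, and the least-squares weight estimation are assumptions for demonstration; the paper's actual system uses maximum-likelihood estimation over HMM states):

```python
import numpy as np

rng = np.random.default_rng(0)
P, D, N = 3, 5, 200          # clusters, feature dimension, adaptation frames

# Cluster mean vectors for one hypothetical HMM state, columns of M (D x P).
M = rng.normal(size=(D, P))

# Simulate a new speaker whose state mean interpolates the clusters.
lam_true = np.array([0.6, 0.3, 0.1])
frames = M @ lam_true + 0.01 * rng.normal(size=(N, D))   # noisy observations

# Speaker adaptation: estimate the weight vector lambda from the
# adaptation data alone (simple least squares on the frame average here).
lam_hat, *_ = np.linalg.lstsq(M, frames.mean(axis=0), rcond=None)

# The adapted state output mean is the weighted sum of cluster means.
adapted_mean = M @ lam_hat
```

Because only `P` weights are estimated per speaker, this kind of adaptation can remain robust with very small amounts of adaptation data, which is consistent with the abstract's claim about CAT.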

Original language: English
Title of host publication: 13th Annual Conference of the International Speech Communication Association 2012, INTERSPEECH 2012
Pages: 1134-1137
Number of pages: 4
Publication status: Published - 2012
Event: 13th Annual Conference of the International Speech Communication Association 2012, INTERSPEECH 2012 - Portland, OR, United States
Duration: 2012 Sept 9 - 2012 Sept 13

Publication series

Name: 13th Annual Conference of the International Speech Communication Association 2012, INTERSPEECH 2012
Volume: 2

Conference

Conference: 13th Annual Conference of the International Speech Communication Association 2012, INTERSPEECH 2012
Country/Territory: United States
City: Portland, OR
Period: 12/9/9 - 12/9/13

Keywords

  • Average voice models
  • Cluster adaptive training
  • Speaker adaptation
  • Speech synthesis

