Speaker-independent HMM-based voice conversion using adaptive quantization of the fundamental frequency

Takashi Nose, Takao Kobayashi

Research output: Contribution to journal › Article › peer-review



This paper describes a speaker-independent HMM-based voice conversion technique that incorporates context-dependent prosodic symbols obtained using adaptive quantization of the fundamental frequency (F0). In the HMM-based conversion of our previous study, the input utterance of a source speaker is decoded into phonetic and prosodic symbol sequences, and the converted speech is generated from the decoded information using the pre-trained target speaker's phonetically and prosodically context-dependent HMM. In that work, we generated the F0 symbol by quantizing the average log F0 value of each phone using the global mean and variance calculated from the training data. In the current study, these statistical parameters are obtained from each utterance itself, and this adaptive method improves the F0 conversion performance over the conventional one. We also introduce a speaker-independent model for decoding the input speech and model adaptation for training the target speaker's model in order to reduce the required amount of training data, under the condition that a phonetic transcription is available for the input speech. Objective and subjective experimental results for Japanese speech demonstrate that the adaptive quantization method gives better F0 conversion performance than the conventional one. Moreover, our technique with only ten sentences of the target speaker's adaptation data outperforms the conventional GMM-based method trained on 200 sentences of parallel data.
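The core idea of the adaptive quantization step can be sketched as follows: each phone's average log F0 is normalized by the mean and standard deviation of the current utterance (rather than global training-data statistics) and mapped to a small set of discrete prosodic symbols. This is a minimal illustrative sketch; the number of quantization levels and the thresholding scheme here are assumptions for demonstration, not the paper's exact settings.

```python
import math

def quantize_f0_symbols(phone_avg_log_f0, num_levels=4):
    """Map each phone's average log-F0 to a discrete prosodic symbol,
    using the mean and standard deviation computed from the utterance
    itself (the 'adaptive' part). num_levels and the +/-2-sigma
    threshold range are illustrative choices."""
    n = len(phone_avg_log_f0)
    mean = sum(phone_avg_log_f0) / n
    var = sum((x - mean) ** 2 for x in phone_avg_log_f0) / n
    std = math.sqrt(var)
    symbols = []
    for x in phone_avg_log_f0:
        # z-score within this utterance; a flat contour maps to the
        # middle of the symbol range.
        z = 0.0 if std == 0.0 else (x - mean) / std
        # Uniformly quantize z over roughly +/-2 standard deviations.
        level = int((z + 2.0) / 4.0 * num_levels)
        symbols.append(min(max(level, 0), num_levels - 1))
    return symbols
```

For example, an utterance whose phones have average log-F0 values of `[5.0, 5.2, 4.8, 5.4]` yields one symbol per phone, ordered by relative pitch within that utterance, regardless of the speaker's global F0 range.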

Original language: English
Pages (from-to): 973-985
Number of pages: 13
Journal: Speech Communication
Issue number: 7
Publication status: Published - 2011 Sept


Keywords:
  • Fundamental frequency quantization
  • Hidden Markov model (HMM)
  • HMM-based speech synthesis
  • Prosody conversion
  • Speaker-independent model
  • Voice conversion


