HMM-based expressive singing voice synthesis with singing style control and robust pitch modeling

Takashi Nose, Misa Kanemoto, Tomoki Koriyama, Takao Kobayashi

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)

Abstract

This paper proposes a singing style control technique based on multiple regression hidden semi-Markov models (MRHSMMs) for changing the singing styles, and their intensities, that appear in synthetic singing voices. In the proposed technique, singing styles and their intensities are represented by low-dimensional vectors called style vectors and are modeled under the assumption that the mean parameters of the acoustic models are given as multiple regressions of the style vectors. In the synthesis process, the intensity of a singing style can be weakened or emphasized by setting a desired style vector. In addition, the idea of pitch adaptive training is extended to the MRHSMM to improve the modeling accuracy of pitch associated with musical notes. A novel vibrato modeling technique is also presented to extract vibrato parameters from singing voices that sometimes have unclear vibrato expressions. Subjective evaluations show that singing styles and their intensities can be controlled intuitively while maintaining naturalness comparable to that of conventional HSMM-based singing voice synthesis.
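
A minimal numerical sketch of the multiple-regression idea follows (this is not the authors' implementation; the regression matrix, helper name, and values are hypothetical). In an MRHSMM, each state-output mean is expressed as a regression of an augmented style vector [1, v]^T, so scaling the style vector weakens or emphasizes the corresponding style in the generated acoustic parameters.

    import numpy as np

    def state_mean(H_i: np.ndarray, style_vector: np.ndarray) -> np.ndarray:
        """Mean of one state computed as a multiple regression of the style vector."""
        xi = np.concatenate(([1.0], style_vector))  # augmented style vector [1, v]^T
        return H_i @ xi

    # One acoustic dimension, one style axis (e.g. a single singing-style intensity).
    H_i = np.array([[5.0, 1.2]])                # hypothetical regression matrix (1 x 2)
    print(state_mean(H_i, np.array([0.0])))     # neutral style        -> [5.0]
    print(state_mean(H_i, np.array([1.0])))     # emphasized style     -> [6.2]
    print(state_mean(H_i, np.array([-0.5])))    # weakened style       -> [4.4]

Setting the style vector to zero reproduces the average-style mean, while positive or negative weights shift the generated parameters toward a stronger or weaker rendition of that style.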

Original language: English
Pages (from-to): 308-322
Number of pages: 15
Journal: Computer Speech and Language
Volume: 34
Issue number: 1
DOIs
Publication status: Published - 2015 Nov 1

Keywords

  • HMM-based singing voice synthesis
  • Multiple-regression HSMM
  • Pitch adaptive training
  • Singing style control
  • Vibrato modeling
