Speech recognition under multiple noise environment based on multi-mixture HMM and weight optimization by the aspect model

Seong Jun Hahm, Yuichi Ohkawa, Masashi Ito, Motoyuki Suzuki, Akinori Ito, Shozo Makino

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we propose an acoustic model that is robust to multiple noise environments, together with a method for adapting the model to a specific environment to further improve its performance. The proposed model, called the "multi-mixture model," is a mixture of HMMs, each of which is trained on speech recorded under a different noise condition. Speech recognition experiments showed that the proposed model performs better than the conventional multi-condition model. The adaptation method is based on the aspect model, a "mixture-of-mixture" model. To realize adaptation with an extremely small amount of adaptation data (i.e., a few seconds), we train a small number of mixture models, which can be interpreted as models of "clusters" of noise environments. These cluster models are then mixed using weights determined from the adaptation data. The experimental results showed that adaptation based on the aspect model improved word accuracy in a heavy noise environment and caused no performance deterioration under any noise condition, whereas the conventional methods either did not improve performance or improved it under some noise conditions while degrading it under others.
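
The abstract describes the adaptation scheme only at a high level. Purely as an illustration of the weight-optimization idea, and not the authors' implementation, the following Python sketch estimates mixture weights over a small set of noise-cluster models from a few seconds of adaptation data using EM, then scores frames under the weighted mixture. The function names (estimate_cluster_weights, mixed_loglik) and the simplification that each cluster model has already been reduced to per-frame log-likelihoods are assumptions made here; the paper itself works with HMM-based cluster models and the aspect model, whose exact update equations are not reproduced in the abstract.

    # Minimal sketch, assuming K pre-trained noise-cluster models whose per-frame
    # log-likelihoods on the adaptation data are available as a (T, K) array.
    # EM on the mixture weights is one common way to fit them; it is used here
    # only as a stand-in for the aspect-model weight optimization.
    import numpy as np

    def estimate_cluster_weights(frame_loglik, n_iter=10):
        # frame_loglik: (T, K) log-likelihoods of T adaptation frames under K cluster models
        T, K = frame_loglik.shape
        w = np.full(K, 1.0 / K)                      # start from uniform weights
        for _ in range(n_iter):
            # E-step: posterior probability of each cluster given each frame
            log_post = frame_loglik + np.log(w)
            log_post -= log_post.max(axis=1, keepdims=True)
            post = np.exp(log_post)
            post /= post.sum(axis=1, keepdims=True)
            # M-step: new weights are the average cluster occupancy over the frames
            w = post.mean(axis=0)
        return w

    def mixed_loglik(frame_loglik, w):
        # per-frame log-likelihood under the weighted mixture of cluster models
        return np.logaddexp.reduce(frame_loglik + np.log(w), axis=1)

Under this reading, recognition in an unseen noise environment combines the cluster models' scores with the adapted weights, while a uniform-weight mixture falls back to a noise-independent model.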

Original language: English
Pages (from-to): 2407-2416
Number of pages: 10
Journal: IEICE Transactions on Information and Systems
Volume: E93-D
Issue number: 9
DOIs
Publication status: Published - September 2010

Keywords

  • Aspect model
  • Multi-mixture HMM
  • Noise-independent acoustic model
  • Speech recognition in noisy environment
