Round-robin duel discriminative language models

Takanobu Oba, Takaaki Hori, Atsushi Nakamura, Akinori Ito

Research output: Contribution to journal › Article › peer-review

23 Citations (Scopus)

Abstract

Discriminative training has received a lot of attention from both the machine learning and speech recognition communities. The idea behind the discriminative approach is to construct a model that distinguishes correct samples from incorrect samples, whereas the conventional generative approach estimates the distribution of correct samples. We propose a novel discriminative training method and apply it to a language model for reranking speech recognition hypotheses. Our proposed method uses a round-robin duel discrimination (R2D2) criterion in which all pairs of sentence hypotheses, including pairs of incorrect sentences, are distinguished from each other, taking their error rates into account. Since the objective function is convex, the global optimum can be found through a standard parameter estimation method such as the quasi-Newton method. Furthermore, the proposed method is an extension of the global conditional log-linear model, whose objective function corresponds to that of conditional random fields. Our experimental results show that R2D2 outperforms conventional methods in many situations, including different languages, different feature constructions and different task difficulties.
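To make the pairwise idea concrete, the following is a minimal sketch of an error-rate-weighted, convex pairwise reranking objective in the spirit of the abstract, optimized with a quasi-Newton method. It is not the authors' implementation: the logistic loss form, the error-rate-difference weighting, and the toy features are assumptions used only for illustration.

```python
# Hedged sketch of an R2D2-style pairwise reranking loss (assumed form, not the
# paper's exact criterion). Every ordered pair of hypotheses with a lower-error
# "winner" contributes a convex logistic term weighted by the error-rate gap,
# so the summed objective is convex and a quasi-Newton optimizer (L-BFGS here)
# can reach the global optimum.
import numpy as np
from scipy.optimize import minimize

def pairwise_rerank_loss(w, feats, errs):
    """Convex pairwise loss over all hypothesis pairs of one utterance.

    feats : (n_hyp, n_feat) feature vectors of the n-best hypotheses
    errs  : (n_hyp,) error rates (e.g. WER) of those hypotheses
    """
    scores = feats @ w
    loss = 0.0
    for i in range(len(errs)):
        for j in range(len(errs)):
            if errs[i] < errs[j]:                 # hypothesis i should outscore j
                margin = scores[i] - scores[j]
                weight = errs[j] - errs[i]        # error-rate weighting (assumed)
                loss += weight * np.logaddexp(0.0, -margin)
    return loss

# Toy n-best list: 3 hypotheses, 4 features, with error rates 0.0, 0.2, 0.5.
feats = np.array([[1.0, 0.2, 0.0, 3.0],
                  [0.8, 0.9, 1.0, 2.5],
                  [0.1, 1.5, 2.0, 2.0]])
errs = np.array([0.0, 0.2, 0.5])

res = minimize(lambda w: pairwise_rerank_loss(w, feats, errs),
               x0=np.zeros(feats.shape[1]), method="L-BFGS-B")
print("learned reranking weights:", res.x)
```

Note that pairs of two incorrect hypotheses also contribute whenever their error rates differ, which is the distinguishing property the abstract attributes to the R2D2 criterion.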

Original language: English
Article number: 6064876
Pages (from-to): 1244-1255
Number of pages: 12
Journal: IEEE Transactions on Audio, Speech and Language Processing
Volume: 20
Issue number: 4
DOIs
Publication status: Published - 2012

Keywords

  • Discriminative language model
  • error correction
  • round-robin duel discrimination (R2D2)

ASJC Scopus subject areas

  • Acoustics and Ultrasonics
  • Electrical and Electronic Engineering
