Leveraging diverse lexical resources for textual entailment recognition

Yotaro Watanabe, Junta Mizuno, Eric Nichols, Katsuma Narisawa, Keita Nabeshima, Naoaki Okazaki, Kentaro Inui

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)


Since the problem of textual entailment recognition requires capturing semantic relations between diverse linguistic expressions, linguistic and world knowledge play an important role. In this article, we explore the effectiveness of different types of currently available resources, including synonyms, antonyms, hypernym-hyponym relations, and lexical entailment relations, for the task of textual entailment recognition. To do so, we develop an entailment relation recognition system that uses diverse linguistic analyses and resources to align the linguistic units in a pair of texts and identifies entailment relations based on these alignments. We use the Japanese subset of the NTCIR-9 RITE-1 dataset for evaluation and error analysis, conducting ablation testing and evaluation on hand-crafted gold-standard alignment data to assess the contribution of individual resources. Error analysis shows that existing knowledge sources are effective for RTE, but that their coverage is limited, especially for domain-specific and other low-frequency expressions. To increase alignment coverage on such expressions, we propose a method of alignment inference that uses syntactic and semantic dependency information to identify likely alignments without relying on external resources. Adding alignment inference to a system that uses all available knowledge sources improves both precision and recall of entailment relation recognition.
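The core pipeline described above — align the units of a hypothesis to a text via lexical resources, then judge entailment from the alignments — can be illustrated with a deliberately simplified sketch. The toy resource, the word-level alignment, and the coverage threshold below are hypothetical stand-ins for illustration only; the actual system aligns richer linguistic units and combines many resources.

```python
# Simplified sketch of resource-based alignment for entailment
# recognition. The lexical resource, the word-level units, and the
# coverage criterion are illustrative assumptions, not the authors'
# actual system.

# Toy lexical resource: maps a text word to words it can entail
# (standing in for synonym, hypernym-hyponym, and lexical
# entailment relations drawn from real resources).
LEXICAL_RESOURCE = {
    "purchased": {"bought", "acquired"},
    "automobile": {"car", "vehicle"},
    "an": {"a"},
}

def align(text_words, hyp_words, resource):
    """Align each hypothesis word to a text word via identity or
    the lexical resource; return the (text, hypothesis) pairs."""
    alignments = []
    for h in hyp_words:
        for t in text_words:
            if t == h or h in resource.get(t, set()):
                alignments.append((t, h))
                break
    return alignments

def entails(text_words, hyp_words, resource, threshold=1.0):
    """Judge entailment by the fraction of hypothesis words that
    could be aligned to the text (a crude coverage criterion)."""
    aligned = align(text_words, hyp_words, resource)
    return len(aligned) / len(hyp_words) >= threshold

text = "john purchased an automobile".split()
hyp_pos = "john bought a car".split()   # fully alignable
hyp_neg = "john sold a car".split()     # "sold" cannot be aligned

print(entails(text, hyp_pos, LEXICAL_RESOURCE))  # True
print(entails(text, hyp_neg, LEXICAL_RESOURCE))  # False
```

Unaligned hypothesis words (like "sold" above) are exactly the coverage gaps that the proposed alignment inference targets by falling back on syntactic and semantic dependency structure instead of external resources.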

Original language: English
Article number: 18
Journal: ACM Transactions on Asian Language Information Processing
Issue number: 4
Publication status: Published - 2012 Dec


  • Alignment
  • Lexical resources
  • Textual entailment


