TY - GEN
T1 - Combining axiom injection and knowledge base completion for efficient natural language inference
AU - Yoshikawa, Masashi
AU - Mineshima, Koji
AU - Noji, Hiroshi
AU - Bekki, Daisuke
N1 - Funding Information:
We thank the three anonymous reviewers for their insightful comments. We are also grateful to Bevan Johns for proofreading examples in the LexSICK dataset and to Hitoshi Manabe for his public codebase, from which we learned much about KBC techniques. This work was supported by JSPS KAKENHI Grant Number JP18J12945 and by JST CREST Grant Number JPMJCR1301.
Publisher Copyright:
© 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2019
Y1 - 2019
AB - In logic-based approaches to reasoning tasks such as Recognizing Textual Entailment (RTE), it is important for a system to have a large amount of knowledge data. However, there is a tradeoff between adding more knowledge data for improved RTE performance and maintaining an efficient RTE system, as such a large database is problematic in terms of memory usage and computational complexity. In this work, we show that the processing time of a state-of-the-art logic-based RTE system can be significantly reduced by replacing its search-based axiom injection (abduction) mechanism with one based on Knowledge Base Completion (KBC). We integrate this mechanism into a Coq plugin that provides a proof automation tactic for natural language inference. Additionally, we show empirically that adding new knowledge data improves RTE performance without harming processing speed in this framework.
UR - http://www.scopus.com/inward/record.url?scp=85085274066&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85085274066&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85085274066
T3 - 33rd AAAI Conference on Artificial Intelligence, AAAI 2019, 31st Innovative Applications of Artificial Intelligence Conference, IAAI 2019 and the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019
SP - 7410
EP - 7417
BT - 33rd AAAI Conference on Artificial Intelligence, AAAI 2019, 31st Innovative Applications of Artificial Intelligence Conference, IAAI 2019 and the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019
PB - AAAI Press
T2 - 33rd AAAI Conference on Artificial Intelligence, AAAI 2019, 31st Annual Conference on Innovative Applications of Artificial Intelligence, IAAI 2019 and the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019
Y2 - 27 January 2019 through 1 February 2019
ER -