TY - GEN
T1 - Neural sentence generation from formal semantics
AU - Manome, Kana
AU - Yoshikawa, Masashi
AU - Yanaka, Hitomi
AU - Martínez-Gómez, Pascual
AU - Mineshima, Koji
AU - Bekki, Daisuke
N1 - Funding Information:
Acknowledgements: We would like to thank the reviewers for their helpful comments and suggestions. We are also grateful to Ribeka Tanaka, Yurina Ito, and Yukiko Yana for helpful discussion and Fadoua Ghourabi for reading an earlier draft of the paper. This work was partially supported by JST AIP-PRISM Grant Number JPMJCR18Y1, Japan.
Publisher Copyright:
© 2018 Association for Computational Linguistics
PY - 2018
Y1 - 2018
N2 - Sequence-to-sequence models have shown strong performance in a wide range of NLP tasks, yet their applications to sentence generation from logical representations are underdeveloped. In this paper, we present a sequence-to-sequence model for generating sentences from logical meaning representations based on event semantics. We use a semantic parsing system based on Combinatory Categorial Grammar (CCG) to obtain data annotated with logical formulas. We augment our sequence-to-sequence model with masking for predicates to constrain output sentences. We also propose a novel evaluation method for generation using Recognizing Textual Entailment (RTE). Combining parsing and generation, we test whether or not the output sentence entails the original text and vice versa. Experiments showed that our model outperformed a baseline with respect to both BLEU scores and accuracies in RTE.
AB - Sequence-to-sequence models have shown strong performance in a wide range of NLP tasks, yet their applications to sentence generation from logical representations are underdeveloped. In this paper, we present a sequence-to-sequence model for generating sentences from logical meaning representations based on event semantics. We use a semantic parsing system based on Combinatory Categorial Grammar (CCG) to obtain data annotated with logical formulas. We augment our sequence-to-sequence model with masking for predicates to constrain output sentences. We also propose a novel evaluation method for generation using Recognizing Textual Entailment (RTE). Combining parsing and generation, we test whether or not the output sentence entails the original text and vice versa. Experiments showed that our model outperformed a baseline with respect to both BLEU scores and accuracies in RTE.
UR - http://www.scopus.com/inward/record.url?scp=85087181587&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85087181587&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85087181587
T3 - INLG 2018 - 11th International Natural Language Generation Conference, Proceedings of the Conference
SP - 408
EP - 414
BT - INLG 2018 - 11th International Natural Language Generation Conference, Proceedings of the Conference
PB - Association for Computational Linguistics (ACL)
T2 - 11th International Natural Language Generation Conference, INLG 2018
Y2 - 5 November 2018 through 8 November 2018
ER -