TY - GEN
T1 - Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension
T2 - 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021
AU - Inoue, Naoya
AU - Trivedi, Harsh
AU - Sinha, Steven
AU - Balasubramanian, Niranjan
AU - Inui, Kentaro
N1 - Funding Information:
This work was supported in part by the National Science Foundation under grant No. IIS-1815358 and JST CREST Grant Number JPMJCR20D2, Japan. We thank the anonymous reviewers for their insightful feedback.
Publisher Copyright:
© 2021 Association for Computational Linguistics
PY - 2021
Y1 - 2021
N2 - How can we generate concise explanations for multi-hop Reading Comprehension (RC)? The current strategies of identifying supporting sentences can be seen as an extractive, question-focused summarization of the input text. However, these extractive explanations are not necessarily concise, i.e., not minimally sufficient for answering a question. Instead, we advocate for an abstractive approach, where we propose to generate a question-focused, abstractive summary of input paragraphs and then feed it to an RC system. Given a limited amount of human-annotated abstractive explanations, we train the abstractive explainer in a semi-supervised manner, where we start from the supervised model and then train it further through trial and error, maximizing a conciseness-promoting reward function. Our experiments demonstrate that, with limited supervision (only 2k instances), the proposed abstractive explainer can generate more compact explanations than an extractive explainer while maintaining sufficiency.
AB - How can we generate concise explanations for multi-hop Reading Comprehension (RC)? The current strategies of identifying supporting sentences can be seen as an extractive, question-focused summarization of the input text. However, these extractive explanations are not necessarily concise, i.e., not minimally sufficient for answering a question. Instead, we advocate for an abstractive approach, where we propose to generate a question-focused, abstractive summary of input paragraphs and then feed it to an RC system. Given a limited amount of human-annotated abstractive explanations, we train the abstractive explainer in a semi-supervised manner, where we start from the supervised model and then train it further through trial and error, maximizing a conciseness-promoting reward function. Our experiments demonstrate that, with limited supervision (only 2k instances), the proposed abstractive explainer can generate more compact explanations than an extractive explainer while maintaining sufficiency.
UR - http://www.scopus.com/inward/record.url?scp=85127456964&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85127456964&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85127456964
T3 - EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings
SP - 6064
EP - 6080
BT - EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings
PB - Association for Computational Linguistics (ACL)
Y2 - 7 November 2021 through 11 November 2021
ER -