TY - JOUR
T1 - Instance-based neural dependency parsing
AU - Ouchi, Hiroki
AU - Suzuki, Jun
AU - Kobayashi, Sosuke
AU - Yokoi, Sho
AU - Kuribayashi, Tatsuki
AU - Yoshikawa, Masashi
AU - Inui, Kentaro
N1 - Funding Information:
The authors are grateful to the anonymous reviewers and the Action Editor who provided many insightful comments that improved the paper. Special thanks also go to the members of Tohoku NLP Laboratory for the interesting comments and energetic discussions. The work of H. Ouchi was supported by JSPS KAKENHI grant number 19K20351. The work of J. Suzuki was supported by JST Moonshot R&D grant number JPMJMS2011 (fundamental research) and JSPS KAKENHI grant number 19H04162. The work of S. Yokoi was supported by JST ACT-X grant number JPMJAX200S, Japan. The work of T. Kuribayashi was supported by JSPS KAKENHI grant number 20J22697. The work of M. Yoshikawa was supported by JSPS KAKENHI grant number 20K23314. The work of K. Inui was supported by JST CREST grant number JPMJCR20D2, Japan.
Publisher Copyright:
© 2021 Association for Computational Linguistics.
PY - 2021/12/17
Y1 - 2021/12/17
N2 - Interpretable rationales for model predictions are crucial in practical applications. We develop neural models that possess an interpretable inference process for dependency parsing. Our models adopt instance-based inference, where dependency edges are extracted and labeled by comparing them to edges in a training set. The training edges are explicitly used for the predictions; thus, it is easy to grasp the contribution of each edge to the predictions. Our experiments show that our instance-based models achieve competitive accuracy with standard neural models and offer reasonably plausible instance-based explanations.
AB - Interpretable rationales for model predictions are crucial in practical applications. We develop neural models that possess an interpretable inference process for dependency parsing. Our models adopt instance-based inference, where dependency edges are extracted and labeled by comparing them to edges in a training set. The training edges are explicitly used for the predictions; thus, it is easy to grasp the contribution of each edge to the predictions. Our experiments show that our instance-based models achieve competitive accuracy with standard neural models and offer reasonably plausible instance-based explanations.
UR - http://www.scopus.com/inward/record.url?scp=85121933362&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85121933362&partnerID=8YFLogxK
U2 - 10.1162/tacl_a_00439
DO - 10.1162/tacl_a_00439
M3 - Article
AN - SCOPUS:85121933362
SN - 2307-387X
VL - 9
SP - 1493
EP - 1507
JO - Transactions of the Association for Computational Linguistics
JF - Transactions of the Association for Computational Linguistics
ER -