TY - GEN
T1 - Discriminative learning of first-order weighted abduction from partial discourse explanations
AU - Yamamoto, Kazeto
AU - Inoue, Naoya
AU - Watanabe, Yotaro
AU - Okazaki, Naoaki
AU - Inui, Kentaro
PY - 2013
Y1 - 2013
N2 - Abduction is inference to the best explanation. It has long been studied in a wide range of contexts and is widely used for modeling artificial intelligence systems, such as diagnostic and plan recognition systems. Recent advances in automatic world-knowledge acquisition and inference techniques make it feasible to apply abduction with large knowledge bases to real-life problems. However, less attention has been paid to how to automatically learn score functions, which rank candidate explanations in order of plausibility. In this paper, we propose a novel approach for learning the score function of first-order logic-based weighted abduction [1] in a supervised manner. Because manual annotation of abductive explanations (i.e., the set of literals that explains the observations) is time-consuming in many cases, we propose a framework that learns the score function from partially annotated abductive explanations (i.e., a subset of those literals). More specifically, we assume that abduction is applied to a specific task in which a subset of the best explanation is associated with output labels, and the rest is treated as hidden variables. We then formulate the learning problem as discriminative structured learning with hidden variables. Our experiments show that our framework successfully reduces the loss at each iteration on a plan recognition dataset.
UR - http://www.scopus.com/inward/record.url?scp=84875538813&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84875538813&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-37247-6_44
DO - 10.1007/978-3-642-37247-6_44
M3 - Conference contribution
AN - SCOPUS:84875538813
SN - 9783642372469
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 545
EP - 558
BT - Computational Linguistics and Intelligent Text Processing - 14th International Conference, CICLing 2013, Proceedings
T2 - 14th Annual Conference on Intelligent Text Processing and Computational Linguistics, CICLing 2013
Y2 - 24 March 2013 through 30 March 2013
ER -