TY - GEN
T1 - Model-Agnostic Explanations for Decisions Using Minimal Patterns
AU - Asano, Kohei
AU - Chun, Jinhee
AU - Koike, Atsushi
AU - Tokuyama, Takeshi
N1 - Funding Information:
I would like to thank Quentin Labernia Louis Marie and Nguyen Van Quang for their comments on the manuscript. This work was partially supported by JSPS KAKENHI Grant Numbers 15H02665 and 17K00002.
Publisher Copyright:
© 2019, Springer Nature Switzerland AG.
PY - 2019
Y1 - 2019
AB - Recently, numerous high-performance machine learning models have been proposed. Unfortunately, such models often produce black-box decisions derived through opaque reasoning and logic. It is therefore important to develop a tool that automatically gives the reasons underlying a black-box model’s decision. Ideally, the tool should be model-agnostic: applicable to any machine-learning model without knowledge of the model’s details. A well-known previous method, LIME, is based on a linear approximation of the decision. Although LIME identifies features that are important for the decision, its output can still be difficult for users to understand because it might not contain the features actually required for the decision. We propose a novel model-agnostic explanation method named MP-LIME. The explanation consists of feature sets, each of which can reconstruct the decision correctly, so users can easily understand each feature set. By comparing our method to LIME, we demonstrate that it often improves precision drastically. We also provide practical examples in which our method provides reasons for the decisions.
KW - Explanation
KW - Interpretability
KW - Machine learning
UR - http://www.scopus.com/inward/record.url?scp=85072862815&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85072862815&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-30487-4_19
DO - 10.1007/978-3-030-30487-4_19
M3 - Conference contribution
AN - SCOPUS:85072862815
SN - 9783030304867
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 241
EP - 252
BT - Artificial Neural Networks and Machine Learning – ICANN 2019
A2 - Tetko, Igor V.
A2 - Karpov, Pavel
A2 - Theis, Fabian
A2 - Kůrková, Věra
PB - Springer Verlag
T2 - 28th International Conference on Artificial Neural Networks, ICANN 2019
Y2 - 17 September 2019 through 19 September 2019
ER -