Recently, numerous high-performance machine learning models have been proposed. Unfortunately, such models are often black boxes: their decisions are derived through opaque reasoning and logic. It is therefore important to develop tools that automatically explain the reasons underlying a black-box model's decision. Ideally, such a tool should be model-agnostic, i.e., applicable to any machine-learning model without knowledge of its internals. A well-known prior method, LIME, approximates the model locally with a linear model. Although LIME identifies features that are important for the decision, its output can still be difficult for users to understand because it may omit features that are required to reproduce the decision. We propose a novel model-agnostic explanation method named MP-LIME. Its explanation consists of feature sets, each of which is sufficient to reconstruct the decision correctly, so users can easily interpret each feature set. By comparing our method with LIME, we demonstrate that it often improves precision drastically. We also present practical examples in which our method provides the reasons for decisions.
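To make the LIME baseline concrete, the following is a minimal sketch of the idea of a local linear explanation: perturb the instance, query the black-box model, and fit a proximity-weighted linear model whose coefficients rank feature importance. The function and parameter names (`lime_style_explanation`, `kernel_width`, etc.) are illustrative assumptions, not the actual LIME or MP-LIME API.

```python
import numpy as np

def lime_style_explanation(predict_fn, x, n_samples=500, kernel_width=1.0, seed=0):
    """Illustrative sketch of a LIME-style local linear explanation.

    Perturbs the instance x, queries the black-box predict_fn, and fits a
    weighted linear model; the coefficients rank feature importance.
    (Names here are illustrative, not the real LIME library API.)
    """
    rng = np.random.default_rng(seed)
    # Sample perturbations in a neighborhood of x.
    X = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = predict_fn(X)
    # Proximity weights: perturbations closer to x count more.
    d = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # Weighted least squares with an intercept column.
    A = np.hstack([X, np.ones((n_samples, 1))])
    Aw = A * w[:, None]
    coef, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)
    return coef[:-1]  # per-feature weights (intercept dropped)

# Usage: a toy "black box" that mostly depends on feature 0.
black_box = lambda X: 3.0 * X[:, 0] + 0.1 * X[:, 1]
weights = lime_style_explanation(black_box, np.array([1.0, 1.0]))
```

On this toy model the fitted weights recover the dominant role of feature 0; MP-LIME differs from such a single linear explanation in that it returns multiple feature sets, each sufficient to reconstruct the decision.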