2025, Volume 29, Issue 6, Pages 1552–1564
A local explanation method (LE) in explainable artificial intelligence (XAI) is essentially a two-step procedure: first, construct a natively explainable model that approximates the black-box model in need of explanation; then, extract an explanation from the approximating model. Since an expert user knows that the extracted explanation is merely intended to be analogous to the target/ideal explanation, such a user must rely on analogical arguments to transfer certain properties observed in the former to the latter. In this paper, assuming an expert user whose knowledge satisfies certain conjectures, we reconstruct the “reason, therefore conclusion” structure of these analogical arguments and study conditions ensuring the truth of the reason, conditions ensuring that the conclusion follows necessarily from the reason, and the counter-arguments the user has to consider. It is argued that the presented findings shed light on the internal reasoning of an expert user at the end of a User–LE dialogue. Broadly speaking, the paper suggests a promising direction for extending existing explanation methods, which are system-centered (focusing on generating explanations), toward user-centered XAI, which must also attend to the user’s reception of them.
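For concreteness, below is a minimal Python sketch of the generic two-step procedure described in the abstract, assuming a LIME-style local linear surrogate as one common instantiation; this is illustrative only and not the paper’s specific method. All names (`black_box`, `explain_locally`, the Gaussian perturbation scheme) are hypothetical.

```python
# Illustrative sketch of the two-step LE procedure:
# Step 1: fit a natively explainable (linear) surrogate approximating
#         the black box near the instance to be explained.
# Step 2: extract the explanation from the surrogate.
# Assumes a LIME-style local linear surrogate; names are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(black_box, x, n_samples=500, sigma=0.5, seed=None):
    rng = np.random.default_rng(seed)
    # Step 1a: sample perturbations around the instance x.
    X_local = x + rng.normal(scale=sigma, size=(n_samples, x.shape[0]))
    y_local = black_box(X_local)  # query the black box on the samples
    # Step 1b: weight samples by proximity to x and fit the surrogate.
    weights = np.exp(-np.sum((X_local - x) ** 2, axis=1) / (2 * sigma**2))
    surrogate = Ridge(alpha=1.0).fit(X_local, y_local, sample_weight=weights)
    # Step 2: the surrogate's coefficients serve as the local explanation.
    return dict(enumerate(surrogate.coef_))

def f(X):
    # Stand-in black box: f(x) = x0**2 + 3*x1.
    return X[:, 0] ** 2 + 3 * X[:, 1]

print(explain_locally(f, np.array([1.0, -2.0]), seed=0))
```

Note that the surrogate’s coefficients only approximate the ideal explanation (here, the gradient (2, 3) of the toy black box at x); it is precisely this gap between the extracted and the target explanation that the analogical arguments discussed above must bridge.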