Host: The Japanese Society for Artificial Intelligence
Name : The 37th Annual Conference of the Japanese Society for Artificial Intelligence
Number : 37
Location : [in Japanese]
Date : June 06, 2023 - June 09, 2023
Recently, it has become common to collect and utilize big data in industry, and neural networks are increasingly applied to product quality prediction and anomaly detection using such data. In operating these models, it is important to identify which inputs contribute strongly to the output and to consider the interpretability of the model. Models are generally evaluated by their generalization performance, but interpretability is not necessarily maximized when generalization performance is maximized. One method for extracting attribution is the saliency map, which interprets the relationship between the inputs and outputs of a neural network through the values of partial derivatives. In this paper, we used saliency maps to visually grasp attribution, defined smoothness and sparsity as evaluation measures, and verified the relationship between generalization performance and interpretability by visualizing how the attribution and these evaluation indices change.
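A saliency map, as described above, is just the partial derivative of the network output with respect to each input. The sketch below illustrates this for a tiny untrained one-hidden-layer network in NumPy; the network, its weights, and the particular smoothness/sparsity formulas are illustrative assumptions, not the paper's actual model or definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer network; weights are random and illustrative.
W1 = rng.normal(size=(4, 3))   # input dim 3 -> hidden dim 4
b1 = rng.normal(size=4)
W2 = rng.normal(size=4)        # hidden -> scalar output
b2 = 0.0

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

def saliency(x):
    """Saliency map: |dy/dx|, the gradient of the scalar output w.r.t. the input."""
    h = np.tanh(W1 @ x + b1)
    # Chain rule: dy/dh = W2, dh/dpre = 1 - tanh^2, dpre/dx = W1
    grad = (W2 * (1.0 - h**2)) @ W1
    return np.abs(grad)

x = np.array([0.5, -1.0, 2.0])
s = saliency(x)

# Cross-check the analytic gradient against central finite differences.
eps = 1e-6
fd = np.array([
    (forward(x + eps * np.eye(3)[i]) - forward(x - eps * np.eye(3)[i])) / (2 * eps)
    for i in range(3)
])
assert np.allclose(s, np.abs(fd), atol=1e-5)

# Example measures (simple placeholder choices, not necessarily the paper's
# definitions): smoothness as the mean absolute difference between neighboring
# attribution values, sparsity as the L1/L2 ratio (smaller = sparser).
smoothness = np.mean(np.abs(np.diff(s)))
sparsity = np.sum(np.abs(s)) / np.linalg.norm(s)
```

In practice the gradient would be obtained by automatic differentiation on the trained model rather than hand-derived as here; the point is that attribution, smoothness, and sparsity can all be computed directly from this gradient vector.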