Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
37th (2023)
Session ID : 4E3-GS-2-01
Evaluating the Interpretability of Time-Series Regression Models Based on Neural Networks
*Yo NAKAMURA, Keisuke KIRITOSHI, Tomonori IZUMITANI
Abstract

Recently, it has become common to collect and utilize big data in industry, and neural networks are increasingly applied to product quality prediction and anomaly detection using such data. In operating these models, it is important to identify inputs with high attribution and to consider the interpretability of the model. Models are generally evaluated by their generalization performance, but interpretability is not necessarily maximized when generalization performance is maximized. One method for extracting attribution is the saliency map, which interprets the relationship between the inputs and outputs of a neural network in terms of partial derivative values. In this paper, we use saliency maps to visually grasp attribution, define smoothness and sparsity as interpretability measures, and verify the relationship between generalization performance and interpretability by visualizing how the attribution and these evaluation indices change.
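The saliency map described above is the vector of partial derivatives of the model output with respect to each input time step. The following is a minimal sketch of that idea, not the authors' implementation: it uses a tiny hand-rolled network in place of their trained model, approximates the partial derivatives by central finite differences, and uses total variation and an L1 norm as stand-in definitions of smoothness and sparsity (the paper's exact definitions are not given in the abstract).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in regression model: one hidden tanh layer mapping an
# 8-step input window to a scalar prediction (hypothetical weights).
W1 = rng.normal(size=(8, 16))
b1 = rng.normal(size=16)
W2 = rng.normal(size=16)

def model(x):
    return np.tanh(x @ W1 + b1) @ W2

def saliency(x, eps=1e-5):
    """Saliency map: partial derivatives dy/dx_i, here approximated
    by central finite differences."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (model(x + e) - model(x - e)) / (2 * eps)
    return g

def smoothness(g):
    # Total variation across adjacent time steps; smaller = smoother map.
    return np.abs(np.diff(g)).sum()

def sparsity(g):
    # L1 norm as a simple sparsity surrogate; smaller = sparser map.
    return np.abs(g).sum()

x = rng.normal(size=8)
g = saliency(x)
print(g, smoothness(g), sparsity(g))
```

In practice the partial derivatives would come from automatic differentiation (one backward pass per output) rather than finite differences; the finite-difference form is used here only to keep the sketch dependency-free and easy to check against the analytic gradient.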

© 2023 The Japanese Society for Artificial Intelligence