Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
34th (2020)
Session ID : 3E1-GS-2-04

Local model explanation with linguistic approach
*Takumi YANAGAWA, Fumihiko TERUI
Abstract

Machine Learning (ML) technology has been applied in many domains, and the importance of ML model interpretability has been increasing. One method for explaining ML models is Local Interpretable Model-agnostic Explanations (LIME). LIME makes an ML model explainable by fitting a human-interpretable surrogate model in the local vicinity of the input data. For ML models on natural language tasks, however, it is difficult to define the surrogate model and the local vicinity in a way that explains complicated models capturing linguistic effects. In this paper, we introduce a new method that focuses on functional words and on the effects of word permutation. We conducted experiments with this method on a sentiment analysis model, and the results show that the effects of functional words are properly explained. Furthermore, we show that this method can be combined with functional word estimation.
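The LIME procedure the abstract refers to (perturb the input locally, query the black box, fit an interpretable surrogate) can be illustrated for text as follows. This is a minimal sketch, not the authors' method: the toy word-count scorer stands in for an arbitrary black-box model, and the per-word difference-of-means surrogate fit is a simplification of the weighted linear regression used in the original LIME.

```python
import random

# Toy sentiment scorer standing in for a black-box ML model (assumption:
# LIME only needs query access to the model's output, so any scorer works).
POSITIVE = {"good", "great", "love"}
NEGATIVE = {"bad", "boring", "not"}

def black_box_score(words):
    # Sentiment score in [-1, 1] from simple word counts.
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return max(-1.0, min(1.0, score / 3.0))

def lime_text_explanation(words, n_samples=500, seed=0):
    """LIME-style local explanation for one input sentence:
    perturb the input by randomly masking words, query the black box
    on each perturbation, then assign each word a linear weight."""
    rng = random.Random(seed)
    d = len(words)
    samples = []
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in range(d)]
        kept = [w for w, m in zip(words, mask) if m]
        samples.append((mask, black_box_score(kept)))
    # Simplified surrogate fit: weight_i = mean(score | word i kept)
    # - mean(score | word i masked). Original LIME fits a locally
    # weighted ridge regression over the binary masks instead.
    weights = []
    for i in range(d):
        kept_y = [y for m, y in samples if m[i]]
        dropped_y = [y for m, y in samples if not m[i]]
        weights.append(sum(kept_y) / len(kept_y)
                       - sum(dropped_y) / len(dropped_y))
    return dict(zip(words, weights))

explanation = lime_text_explanation("this movie is not good".split())
```

In the resulting explanation, "good" receives a positive weight and "not" a negative one, while neutral words stay near zero; capturing how "not" flips the contribution of "good" is exactly the kind of linguistic effect the abstract argues plain LIME handles poorly.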

© 2020 The Japanese Society for Artificial Intelligence