Journal of Information Processing
Online ISSN: 1882-6652
ISSN-L: 1882-6652
Providing Interpretability of Document Classification by Deep Neural Network with Self-attention
Atsuki Tamekuri, Kosuke Nakamura, Yoshihaya Takahashi, Saneyasu Yamaguchi

2022, Volume 30, Pages 397-410

Abstract

Deep learning has been widely used in natural language processing (NLP) tasks such as document classification. For example, self-attention has yielded significant improvements in NLP. However, it has been pointed out that although deep learning classifies documents accurately, it is difficult for users to interpret the basis of its decisions. In this paper, we focus on the task of classifying open-data news documents by theme with a deep neural network with self-attention. We then propose methods for providing interpretability for these classifications. First, we classify news documents with an LSTM with a self-attention mechanism and show that the network classifies documents with high accuracy. Second, we propose five methods for providing the basis of the decision by focusing on various values, e.g., the attention weights, the gradients of the network's output with respect to its input, and the classification results of documents containing a single word. Finally, we evaluate the performance of these methods in four ways and show that they can present interpretability suitably. In particular, the methods based on single-word documents can provide interpretability by extracting the words that strongly influence the classification results.
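To illustrate the kind of model and attention-based interpretability the abstract describes, the following is a minimal sketch in PyTorch. It is not the authors' implementation: the architecture details, hyperparameters (embedding and hidden sizes, number of classes), and the helper names (LSTMSelfAttentionClassifier, top_attention_words, id_to_word) are assumptions made only for illustration. The idea is that the per-token attention weights can be read off and the most heavily weighted words presented as the basis of the classification decision.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LSTMSelfAttentionClassifier(nn.Module):
    """Bidirectional LSTM document classifier with a simple additive
    self-attention layer. Hypothetical sketch; all sizes are placeholders."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_classes=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.attn_score = nn.Linear(2 * hidden_dim, 1)   # one scalar score per token
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        embedded = self.embedding(token_ids)               # (batch, seq_len, embed_dim)
        hidden, _ = self.lstm(embedded)                    # (batch, seq_len, 2*hidden_dim)
        scores = self.attn_score(hidden).squeeze(-1)       # (batch, seq_len)
        attn = F.softmax(scores, dim=-1)                   # attention weight per token
        context = torch.bmm(attn.unsqueeze(1), hidden).squeeze(1)  # attention-weighted sum
        logits = self.classifier(context)                  # (batch, num_classes)
        return logits, attn


def top_attention_words(model, token_ids, id_to_word, k=5):
    """Return the k tokens with the largest attention weights for one document,
    as a list of (word, weight) pairs. id_to_word is an assumed vocabulary map."""
    model.eval()
    with torch.no_grad():
        _, attn = model(token_ids)
    weights, positions = attn[0].topk(k)
    return [(id_to_word[int(token_ids[0, p])], float(w))
            for p, w in zip(positions, weights)]
```

This only sketches the attention-based variant; the gradient-based and single-word-document methods from the abstract would instead inspect input gradients of the logits or re-classify a document reduced to one word at a time.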

© 2022 by the Information Processing Society of Japan