Host: The Japanese Society for Artificial Intelligence
Name: The 32nd Annual Conference of the Japanese Society for Artificial Intelligence, 2018
Number: 32
Location: [in Japanese]
Date: June 05, 2018 - June 08, 2018
In recent years, complex machine learning models such as deep neural networks have come to play a central role in many real-world applications, owing to their high predictive performance. Interpreting such models is therefore important, since practitioners need clues for improving these complex models, whose behavior is not directly visible to humans. In this paper, we focus on the inner workings of convolutional neural networks, visualize them with a method called layer-wise relevance propagation (LRP), and report several findings from the visualization.
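As a rough illustration of how layer-wise relevance propagation operates, the sketch below applies the commonly used epsilon rule to a single fully connected layer; this is a minimal NumPy example, not the paper's implementation, and the paper does not state which LRP rule or framework was used. The function name and parameters are placeholders chosen for this example.

```python
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    """Redistribute the relevance R_out of a dense layer's outputs back to its
    inputs using the LRP epsilon rule (one common LRP variant).

    a      : layer input activations, shape (n_in,)
    W, b   : layer weights (n_out, n_in) and biases (n_out,)
    R_out  : relevance assigned to the layer outputs, shape (n_out,)
    """
    z = W @ a + b                        # forward pre-activations, shape (n_out,)
    s = R_out / (z + eps * np.sign(z))   # stabilize division by small activations
    c = W.T @ s                          # propagate stabilized relevance through weights
    return a * c                         # input relevance, shape (n_in,)
```

In practice, relevance is initialized at the network output (for example, the score of the predicted class) and propagated layer by layer back to the input, where the resulting per-pixel relevance can be rendered as a heatmap; convolutional layers follow the same redistribution principle, with the sums taken over their local receptive fields.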