2021 Volume 39 Issue 3 Pages 110-116
In the field of image recognition, deep learning has achieved high recognition performance. However, it is difficult for humans to interpret the basis of a network's decisions. Visual explanation methods address this by producing attention maps that visualize the regions the network focused on during inference. By visualizing the attention map, we can understand the basis of the AI's decision. The attention map used for visual explanation can also be fed back into the inference process through an attention mechanism, which improves recognition performance. Furthermore, the attention map can be manually adjusted and the network retrained, thereby introducing human knowledge into the network. This paper presents a brief survey of visual explanation in deep learning and our research on visualizing attention maps via an attention mechanism.
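The idea of feeding an attention map back into inference can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the paper's implementation): it assumes a residual-style weighting f' = f · (1 + M), where a feature map f is re-weighted by an attention map M with values in [0, 1], so that ignored regions (M ≈ 0) keep their original features.

```python
import numpy as np

def apply_attention(feature_map, attention_map):
    """Re-weight a feature map with an attention map.

    Hypothetical minimal sketch of an attention mechanism:
    uses the residual form f' = f * (1 + M), so regions with
    attention ~0 retain their original feature values.

    feature_map:   array of shape (C, H, W)
    attention_map: array of shape (H, W), values in [0, 1]
    """
    # Broadcast the (H, W) attention map across all C channels
    return feature_map * (1.0 + attention_map[np.newaxis, :, :])

# Toy example: 2-channel 2x2 feature map of ones,
# attention focused on the top-left location only
features = np.ones((2, 2, 2))
attention = np.array([[1.0, 0.0],
                      [0.0, 0.0]])
refined = apply_attention(features, attention)
# Attended location is doubled; all other locations are unchanged
```

Manually editing `attention` before this step (e.g., zeroing out a spurious region) is one way human knowledge could be injected into inference, as described above.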