Host: The Japanese Society for Artificial Intelligence
Name: The 37th Annual Conference of the Japanese Society for Artificial Intelligence
Number: 37
Location: [in Japanese]
Date: June 06, 2023 - June 09, 2023
Deep neural networks (DNNs) are widely used in various fields and are increasingly being applied to real-world problems, including human decision-making tasks. In such settings, however, issues such as the fairness of output results, ethical validity, and the opacity of the models have arisen. To mitigate these problems, eXplainable AI (XAI), which explains the reasoning basis of DNN decisions, is actively studied. On the other hand, DNN-based models have been shown to have vulnerabilities called Adversarial Examples (AEs), which cause erroneous decisions when special perturbations that are imperceptible to humans are added to the input data. Such vulnerabilities have also been confirmed in image interpreters such as Grad-CAM, and investigating them is essential for using these interpreters safely. In this study, we propose an adversarial attack method that uses evolutionary computation to generate AEs that induce incorrect interpretations under black-box conditions, where the internal structure of the attacked model is unavailable. Experimental results showed that the proposed method successfully generated AEs that mislead the interpretation results without changing the classification results of the image recognition model.
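The attack described above can be sketched as a simple query-based loop. The following is a minimal, hypothetical Python sketch of such a black-box evolutionary attack, not the paper's actual method: `predict_label` and `saliency_map` are toy stand-ins for the attacked classifier and its Grad-CAM explainer, and the (1+1)-style evolution strategy and the fitness function (saliency drift subject to an unchanged label) are assumptions for illustration; the paper's evolutionary operators, objective, and perturbation budget may differ.

```python
# Hypothetical sketch of a black-box evolutionary attack on an image
# interpreter. `predict_label` and `saliency_map` are toy stand-ins for
# the real classifier and its Grad-CAM explainer; as in the black-box
# setting of the paper, only their outputs are queried.
import numpy as np

rng = np.random.default_rng(0)

# --- toy stand-ins for the attacked black-box model (assumptions) -----
W = rng.normal(size=(10, 32 * 32))  # fake 10-class linear "classifier"

def predict_label(x: np.ndarray) -> int:
    """Return the class index the (toy) classifier assigns to x."""
    return int(np.argmax(W @ x.ravel()))

def saliency_map(x: np.ndarray) -> np.ndarray:
    """Toy saliency proxy: per-pixel contribution to the top class."""
    c = predict_label(x)
    return (W[c].reshape(32, 32) * x) ** 2

# --- (1+1)-ES attack loop (assumed fitness and operators) -------------
def attack(x0, eps=0.03, sigma=0.01, iters=500):
    """Evolve a bounded perturbation that shifts the saliency map while
    the predicted label stays fixed."""
    y0, s0 = predict_label(x0), saliency_map(x0)
    delta = np.zeros_like(x0)
    best = 0.0
    for _ in range(iters):
        # mutate the current perturbation and keep it within the budget
        cand = np.clip(delta + sigma * rng.normal(size=x0.shape), -eps, eps)
        x_adv = np.clip(x0 + cand, 0.0, 1.0)
        if predict_label(x_adv) != y0:   # hard constraint: label unchanged
            continue
        # fitness: how far the interpretation has drifted from the original
        fit = np.linalg.norm(saliency_map(x_adv) - s0)
        if fit > best:                   # greedy (1+1) selection
            best, delta = fit, cand
    return np.clip(x0 + delta, 0.0, 1.0), best

x = rng.random((32, 32))
x_adv, score = attack(x)
print(predict_label(x) == predict_label(x_adv), score)  # True, drift > 0
```

Treating label preservation as a hard constraint rather than a penalty term mirrors the stated goal of the attack: the interpretation result is misled while the classification result of the image recognition model is left unchanged.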