Host : The Japanese Society for Artificial Intelligence
Name : The 38th Annual Conference of the Japanese Society for Artificial Intelligence
Number : 38
Location : [in Japanese]
Date : May 28, 2024 - May 31, 2024
Deep neural networks (DNNs) are used in a wide range of fields and are increasingly being applied to real-world problems. In recent years, there have been growing efforts to use DNNs to replace human decision-making tasks. In such settings, however, issues such as the fairness of output results, ethical validity, and the opaqueness of the model have arisen. To mitigate these problems, eXplainable AI (XAI), which explains the reasoning behind DNN decisions, is being actively studied. On the other hand, DNN-based models have been shown to be vulnerable to Adversarial Examples (AEs): inputs modified with special perturbations, imperceptible to humans, that cause erroneous decisions. Such vulnerabilities have also been confirmed in image interpreters, the explanation methods used in image classification, so investigating them is essential for AI reliability. This study proposes an adversarial attack method that combines evolutionary computation with the Discrete Wavelet Transform under black-box conditions, where the internal structure of the target model is unknown. Experimental results show that the proposed method improves search efficiency over the conventional method.
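The abstract does not specify the algorithmic details, so the following is only a minimal sketch of the general idea it describes: a query-only black-box attack that searches for a perturbation in the wavelet domain with an evolutionary strategy. The choice of a single-level Haar transform, the (1+1)-ES mutation scheme, the decision to mutate only the low-frequency (LL) subband, and the linear stand-in for the attacked model are all illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_dwt2(x):
    """Single-level 2-D Haar DWT: split x into LL, LH, HL, HH subbands."""
    a = (x[:, 0::2] + x[:, 1::2]) / 2.0  # column-pair averages
    d = (x[:, 0::2] - x[:, 1::2]) / 2.0  # column-pair differences
    ll = (a[0::2, :] + a[1::2, :]) / 2.0
    lh = (a[0::2, :] - a[1::2, :]) / 2.0
    hl = (d[0::2, :] + d[1::2, :]) / 2.0
    hh = (d[0::2, :] - d[1::2, :]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    a = np.empty((2 * h, w))
    d = np.empty((2 * h, w))
    a[0::2, :], a[1::2, :] = ll + lh, ll - lh
    d[0::2, :], d[1::2, :] = hl + hh, hl - hh
    x = np.empty((2 * h, 2 * w))
    x[:, 0::2], x[:, 1::2] = a + d, a - d
    return x

def black_box_score(image, w):
    """Hypothetical stand-in for the attacked model: the attacker can
    observe only a scalar confidence score, nothing internal."""
    return float(image.ravel() @ w)

def dwt_es_attack(image, w, sigma=0.05, iters=200):
    """(1+1) evolution strategy in the wavelet domain: mutate only the
    LL subband and keep a mutation when it lowers the model's score."""
    ll, lh, hl, hh = haar_dwt2(image)
    best = ll.copy()
    best_score = black_box_score(haar_idwt2(best, lh, hl, hh), w)
    for _ in range(iters):
        cand = best + sigma * rng.standard_normal(best.shape)
        score = black_box_score(haar_idwt2(cand, lh, hl, hh), w)
        if score < best_score:  # elitist selection: keep only improvements
            best, best_score = cand, score
    return haar_idwt2(best, lh, hl, hh), best_score
```

Restricting mutations to the low-frequency subband is one common motivation for wavelet-domain attacks: the search space shrinks from the full pixel grid to a quarter of the coefficients, which is one plausible source of the search-efficiency gain the abstract reports.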