Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
37th (2023)
Session ID : 2L1-GS-11-03

A Study on Black-Box Adversarial Attack for Image Interpreters
*Yudai HIROSE, Ayane TAJIMA, Satoshi ONO
Abstract

Deep neural networks (DNNs) are widely used in various fields and are increasingly being applied to real-world problems, including human decision-making tasks. In such settings, however, issues have arisen concerning the fairness of output results, ethical validity, and the opaqueness of the model. To mitigate these problems, eXplainable AI (XAI), which explains the reasoning behind a DNN's decisions, is being actively studied. On the other hand, DNN-based models have been shown to have vulnerabilities called Adversarial Examples (AEs), in which special perturbations imperceptible to humans are added to the input data to cause erroneous decisions. Such vulnerabilities have also been confirmed in image interpreters such as Grad-CAM, and investigating them is essential for using these interpreters safely. In this study, we propose an adversarial attack method that uses evolutionary computation to generate AEs that produce incorrect interpretations under black-box conditions, where the internal structure of the attacked model is unavailable. Experimental results show that the proposed method successfully generated AEs that mislead the interpretation results without changing the classification results of the image recognition model.
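To make the attack setting concrete, the sketch below shows a minimal (1+1)-ES style black-box search for a bounded perturbation that shifts the interpreter's saliency map while preserving the predicted class. This is an illustrative simplification, not the authors' published method; the functions query_class and query_saliency are hypothetical query-only interfaces to the attacked model (e.g., its predicted label and a Grad-CAM-style heat map), and the fitness definition and hyperparameters are assumptions.

```python
# Hedged sketch of a black-box adversarial attack on an image interpreter.
# query_class() and query_saliency() are placeholder query interfaces to the
# attacked model; they are NOT part of the paper's implementation.
import numpy as np


def fitness(x_adv, orig_label, orig_map, query_class, query_saliency):
    """Higher is better: the saliency map diverges from the original
    while the classification result remains unchanged."""
    if query_class(x_adv) != orig_label:      # hard constraint: keep the label
        return -np.inf
    adv_map = query_saliency(x_adv)
    return np.abs(adv_map - orig_map).mean()  # interpretation divergence


def attack(x_orig, query_class, query_saliency,
           eps=8 / 255, sigma=2 / 255, iters=2000, seed=0):
    """(1+1) evolution strategy over an L-infinity-bounded perturbation
    (a simplification of the evolutionary computation in the abstract)."""
    rng = np.random.default_rng(seed)
    orig_label = query_class(x_orig)
    orig_map = query_saliency(x_orig)
    delta = np.zeros_like(x_orig)
    best = fitness(np.clip(x_orig + delta, 0.0, 1.0),
                   orig_label, orig_map, query_class, query_saliency)
    for _ in range(iters):
        # mutate the current perturbation and clip it to the attack budget
        cand = np.clip(delta + rng.normal(0.0, sigma, x_orig.shape), -eps, eps)
        x_adv = np.clip(x_orig + cand, 0.0, 1.0)
        f = fitness(x_adv, orig_label, orig_map, query_class, query_saliency)
        if f >= best:                         # accept improving offspring
            delta, best = cand, f
    return np.clip(x_orig + delta, 0.0, 1.0)
```

In this formulation, misclassifying candidates are simply rejected, so any accepted perturbation changes only the interpretation, consistent with the behavior reported in the abstract.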

© 2023 The Japanese Society for Artificial Intelligence