Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
35th (2021)
Session ID : 3I4-GS-7a-03

Countermeasure of Adversarial Example Attack: Smoothing Filter-Based Denoising Technique
*Chiaki OTAHARA, Masayuki YOSHINO, Ken NAGANUMA, Yumiko TOGASHI, Sasa SHINYA, Non KAWANA, Kyohei YAMAMOTO
Abstract

With the development of AI technology, the use of AI has been progressing in non-critical fields where failures do not cause loss of life or environmental pollution, and AI is expected to be introduced into critical fields such as critical infrastructure systems and automobiles in the future. In academia, many security attacks have been reported, such as the Adversarial Example Attack, in which malicious input is given to a model to cause misjudgment. In light of these circumstances, AI security measures are recommended in AI guidelines formulated in Japan and overseas. Against this background, we are investigating countermeasures against the Adversarial Example Attack. In this paper, we present a denoising technique that processes the input data without affecting the training model: a smoothing filter that removes noise by smoothing the brightness of the image. The proposed denoising method achieves a correct decision with about 85% accuracy on Adversarial Examples.
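The paper does not specify the exact filter used, but the idea of input-side denoising can be illustrated with a minimal sketch: a k x k mean (box) smoothing filter, implemented here in plain NumPy as an assumed example, averages each pixel with its neighbours and thereby attenuates the small high-frequency perturbations that adversarial examples typically add.

```python
import numpy as np

def smooth(image: np.ndarray, k: int = 3) -> np.ndarray:
    """Apply a k x k mean (box) smoothing filter to a 2-D grayscale image.

    This is an illustrative stand-in for the paper's smoothing filter:
    averaging each pixel with its neighbours flattens small, localized
    perturbations before the image is fed to the classifier.
    """
    pad = k // 2
    # Pad edges by replication so the output keeps the input shape.
    padded = np.pad(image.astype(float), pad, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

# An isolated "adversarial" spike is strongly attenuated by the filter.
img = np.zeros((5, 5))
img[2, 2] = 9.0
denoised = smooth(img)  # the spike of 9.0 is spread into a 3x3 patch of 1.0
```

In a deployed pipeline, such a filter would sit in front of the unmodified trained model, so the defense requires no retraining.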

© 2021 The Japanese Society for Artificial Intelligence