Host: The Japanese Society for Artificial Intelligence
Name: The 36th Annual Conference of the Japanese Society for Artificial Intelligence
Number: 36
Location: Kyoto, Japan
Date: June 14, 2022 - June 17, 2022
Adversarial Examples are malicious inputs created by adding a small perturbation to original input data so that a classifier outputs a wrong result. For object detectors, there exists a method for creating Adversarial Patches that can be printed and attached to an object in the real world so that detectors overlook that object. In this work, we propose a defense method that detects whether an Adversarial Patch attack is being carried out against an input image. Our method infers the positions of Adversarial Patches from bounding boxes whose confidence scores the attack has lowered below the detection threshold, and paints over those regions. We show through evaluation on the INRIA Person Dataset that our method performs substantially better than random classification.
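
As an illustration of the paint-over step described in the abstract, the following is a minimal sketch, assuming the detector exposes per-box confidence scores in pixel coordinates. The function and parameter names (mask_suppressed_boxes, threshold, fill_value) are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def mask_suppressed_boxes(image, boxes, scores, threshold=0.5, fill_value=0):
        """Paint over regions of `image` covered by detector boxes whose
        confidence scores fall below `threshold`, i.e. the boxes an
        adversarial patch attack is presumed to have suppressed.

        image  : H x W x C uint8 array
        boxes  : N x 4 array of (x1, y1, x2, y2) pixel coordinates
        scores : N confidence scores aligned with `boxes`
        """
        masked = image.copy()
        for (x1, y1, x2, y2), score in zip(boxes.astype(int), scores):
            if score < threshold:
                # Overwrite the suspect region with a constant fill value.
                masked[y1:y2, x1:x2] = fill_value
        return masked

    # Hypothetical usage: a low-scoring box marks a candidate patch region;
    # re-running the detector on the masked image would then indicate an
    # attack if the suppressed detection reappears.
    image = np.zeros((480, 640, 3), dtype=np.uint8)
    boxes = np.array([[100, 120, 220, 400]])
    scores = np.array([0.31])  # below threshold: candidate patch region
    masked = mask_suppressed_boxes(image, boxes, scores, threshold=0.5)

One plausible reading of the method, consistent with the abstract, is that occluding the inferred patch regions and comparing the detector's output before and after serves as the attack indicator; the sketch above covers only the masking step under that assumption.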