Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
36th (2022)
Session ID : 2D6-GS-2-03

Defense against Adversarial Examples on Object Detection using Score Values Less than a Threshold
*Yoshihiro KOSEKI

Abstract

Adversarial Examples are malicious inputs created by adding a small perturbation to original input data so that a classifier outputs a wrong result. For object detectors, there exist methods to create Adversarial Example Patches, which can be printed and attached to an object in the real world so that the detector fails to detect that object. In this work, we propose a defense method that detects whether an Adversarial Example Patch attack is being performed against an input image. Our method infers the position of the adversarial patch and paints over it, using the bounding boxes whose score values have been lowered below the detection threshold by the attack. Through evaluation on the INRIA Person Dataset, we show that our method performs substantially better than random classification.
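The core of the described defense is to keep the detections that a standard pipeline would normally discard (boxes whose scores have been pushed below the operational threshold) and to treat them as evidence of where the patch is, painting over those regions before detection is run again. Below is a minimal sketch of that idea, assuming a torchvision Faster R-CNN detector as a stand-in; the paper does not specify the detector, the threshold values, or the fill color used here, so all of those are illustrative assumptions.

```python
# Minimal sketch of the low-score-box painting defense, assuming a
# torchvision Faster R-CNN detector; thresholds and fill value are
# illustrative assumptions, not the paper's reported settings.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

OPER_THRESH = 0.5   # assumed operational detection threshold
FLOOR = 0.1         # assumed lower bound for "suspicious" low-score boxes

# Lower box_score_thresh so the detector also returns low-score boxes
# that would normally be filtered out.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT", box_score_thresh=FLOOR)
model.eval()


def paint_low_score_boxes(image: torch.Tensor) -> torch.Tensor:
    """Paint over regions covered by boxes whose scores fall below the
    operational threshold, i.e. the regions the attack has suppressed.

    image: float tensor of shape (3, H, W) with values in [0, 1].
    """
    with torch.no_grad():
        out = model([image])[0]  # dict with 'boxes', 'scores', 'labels'

    defended = image.clone()
    for box, score in zip(out["boxes"], out["scores"]):
        if FLOOR <= score < OPER_THRESH:
            x1, y1, x2, y2 = box.int().tolist()
            # Paint the suspected patch region a uniform grey.
            defended[:, y1:y2, x1:x2] = 0.5
    return defended
```

The painted image can then be passed through the detector again; if objects that were previously missed now reappear, that is an indication that an adversarial patch attack was taking place.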

© 2022 The Japanese Society for Artificial Intelligence