ITE Transactions on Media Technology and Applications
Online ISSN : 2186-7364
ISSN-L : 2186-7364
Special Section on Fast-track Review
[Paper] Defense Method Against Adversarial Example Attacks using Thermal Noise of a CMOS Image Sensor
Yuki Rogi, Kota Yoshida, Ayaka Banno, Takeshi Fujino, Shunsuke Okura

2026, Volume 14, Issue 1, pp. 93-101

Abstract

With the development of IoT technology, edge AI is expected to be widely deployed, and its security and recoverability from attacks are important for its further development. One such attack on edge AI is the adversarial example (AE) attack, which causes misrecognition by adding an artificial perturbation to the input. As a countermeasure, a defense method has been proposed that removes the adversarial perturbation by adding disturbance noise to the input and then applying a denoising autoencoder (DAE). In this paper, we first show that the effectiveness of this defense is low when the disturbance noise is generated from a predictable pseudorandom source. Next, we propose a defense method based on the unpredictable pixel reset noise of a CMOS image sensor, together with a pre-processing step that enhances the randomness of the disturbance noise. Simulation results confirm that the proposed method improves defense performance against AE attacks by approximately 30%.
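The defense pipeline described in the abstract (inject disturbance noise, then denoise) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `add_disturbance_noise`, `defend`, and the `dae` placeholder are hypothetical names, and NumPy's pseudorandom generator merely stands in for the unpredictable sensor noise the paper proposes.

```python
import numpy as np

def add_disturbance_noise(image, sigma=0.05, rng=None):
    """Add zero-mean Gaussian disturbance noise to an image in [0, 1].

    In the paper's setting, the noise should come from an unpredictable
    physical source (CMOS sensor reset/thermal noise); the pseudorandom
    generator here is only an illustrative stand-in.
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

def defend(image, dae, sigma=0.05, rng=None):
    """Hypothetical defense pipeline: noise injection followed by a
    denoising autoencoder (DAE) that removes both the disturbance noise
    and the adversarial perturbation. `dae` is a placeholder for a
    trained denoising model (e.g., a callable mapping images to images).
    """
    return dae(add_disturbance_noise(image, sigma, rng))
```

The point of the randomness requirement is that an attacker who can predict the injected noise can craft a perturbation that survives both the noise and the DAE; drawing the noise from a physical sensor source removes that predictability.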

© 2026 The Institute of Image Information and Television Engineers