Journal of Signal Processing
Online ISSN : 1880-1013
Print ISSN : 1342-6230
ISSN-L : 1342-6230
A Feasibility Study of Data Poisoning against On-device Learning Edge AI by Physical Attack against Sensors
Takahito Ino, Kota Yoshida, Hiroki Matsutani, Takeshi Fujino

2024 Volume 28 Issue 4 Pages 107-110

Abstract

In this paper, we examine security for edge AI devices that detect anomalous behavior caused by machine failures. Typical edge AI devices perform only inference, but their inference accuracy may degrade when the sensing data change with the environment in which the devices are deployed. One countermeasure against this problem is on-device learning, in which the AI updates its model to match its environment. However, on-device learning AIs face a variety of security threats: for example, correct inference cannot be performed if the deployed AI model is manipulated by an attacker. Although threats such as invasive and non-invasive attacks via physical access have been discussed in the past, relatively few reports have investigated such attacks experimentally. In this work, we demonstrated data poisoning that exploits a physical attack on a sensor attached to an edge AI device. We evaluated a system for detecting abnormal vibrations that uses an autoencoder whose input is sensing data from an accelerometer attached to a cooling fan. Because accelerometer readings can be tampered with by an acoustic wave injection attack, we exploited this kind of attack to poison the training data. The anomaly detector trained on the tampered data could not correctly detect abnormal fan vibrations.
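
The detection pipeline described in the abstract follows the usual autoencoder scheme: the model is trained to reconstruct normal accelerometer windows, and a window is flagged as anomalous when its reconstruction error exceeds a threshold derived from normal data. The Python sketch below illustrates this scheme only; the window length, layer sizes, training loop, and percentile-based threshold are illustrative assumptions and not the authors' implementation.

# Minimal sketch (assumptions, not the paper's code): an autoencoder-based
# anomaly detector for accelerometer windows. A window is anomalous when its
# reconstruction error exceeds a threshold learned from presumed-normal data.
import numpy as np
import torch
import torch.nn as nn

WINDOW = 64  # accelerometer samples per input window (assumed)

class AutoEncoder(nn.Module):
    def __init__(self, window=WINDOW):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(window, 16), nn.ReLU(), nn.Linear(16, 4))
        self.decoder = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, window))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, windows, epochs=50, lr=1e-3):
    # Fit the autoencoder to reconstruct the training windows.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    x = torch.tensor(windows, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), x)
        loss.backward()
        opt.step()
    return model

def anomaly_scores(model, windows):
    # Per-window mean squared reconstruction error; large values = anomalies.
    x = torch.tensor(windows, dtype=torch.float32)
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1).numpy()

# Usage sketch: train on normal vibration windows and set a threshold.
# If acoustic wave injection corrupts the windows used for on-device
# (re)training, the threshold is learned from poisoned data and abnormal
# fan vibrations may no longer exceed it.
normal = np.random.randn(200, WINDOW).astype(np.float32)   # placeholder data
model = train(AutoEncoder(), normal)
threshold = np.percentile(anomaly_scores(model, normal), 99)

In this framing, the poisoning attack does not touch the model parameters directly; it only shifts the sensor data that on-device learning consumes, which is why physical access to the sensor suffices.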

© 2024 Research Institute of Signal Processing, Japan