In this paper, we examine the security of edge AI devices that detect anomalous behavior caused by machine failures. Typical edge AI devices perform only inference, but inference accuracy may degrade when the sensing data change with the environment in which the devices are deployed. One countermeasure is on-device learning, in which the AI updates its learning model to match its environment. However, on-device learning AIs face a variety of security threats: for example, correct inference cannot be performed if the deployed AI model is manipulated by an attacker. Although threats such as invasive and non-invasive attacks via physical access have been discussed in the past, relatively few reports have investigated such attacks experimentally. In this work, we demonstrate data poisoning that exploits physical attacks on the sensors of an edge AI device. We evaluated a system for detecting abnormal vibrations that uses an autoencoder whose input is sensing data from an accelerometer attached to a cooling fan. Because accelerometer readings can be tampered with by an acoustic wave injection attack, we exploited this kind of attack to poison the training data. As a result, the anomaly detector that learned from the tampered data could not correctly detect abnormal fan vibrations.
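To make the described setup concrete, the following is a minimal sketch, not the authors' implementation, of an autoencoder-based vibration anomaly detector and of how poisoned training data can blind it. PyTorch, the synthetic accelerometer traces, the window length, the sampling rate, the network size, the 37 Hz fault frequency, and the 99th-percentile threshold rule are all assumptions introduced for illustration and do not come from the paper.

```python
# Sketch: autoencoder anomaly detection on accelerometer windows, with and
# without poisoned (acoustically tampered) training data.  All signals are
# synthetic placeholders; parameters are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

WINDOW = 64  # samples per accelerometer window (assumed)

def make_windows(signal: np.ndarray) -> torch.Tensor:
    """Slice a 1-D acceleration trace into fixed-length windows."""
    n = len(signal) // WINDOW
    return torch.tensor(signal[: n * WINDOW].reshape(n, WINDOW), dtype=torch.float32)

class AE(nn.Module):
    """Small fully connected autoencoder over one vibration window."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(WINDOW, 16), nn.ReLU(), nn.Linear(16, 4))
        self.dec = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, WINDOW))

    def forward(self, x):
        return self.dec(self.enc(x))

def train(model, x, epochs=200):
    """Fit the autoencoder to reconstruct the training windows."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), x)
        loss.backward()
        opt.step()

def recon_error(model, x):
    """Per-window mean squared reconstruction error (anomaly score)."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

rng = np.random.default_rng(0)
t = np.arange(20_000) / 1_000.0  # 1 kHz sampling rate (assumed)

fault_sig = 0.4 * np.sin(2 * np.pi * 37.0 * t)   # vibration signature of the failing fan (assumed)
inject    = 0.5 * np.sin(2 * np.pi * 37.0 * t)   # spurious component from acoustic wave injection (assumed)

normal   = 0.05 * rng.standard_normal(len(t))    # healthy fan vibration
faulty   = normal + fault_sig                    # abnormal vibration to be detected
poisoned = normal + inject                       # on-device training data tampered by the attack

for name, train_sig in [("clean training", normal), ("poisoned training", poisoned)]:
    model = AE()
    x_train = make_windows(train_sig)
    train(model, x_train)
    # Threshold fixed at the 99th percentile of training reconstruction error.
    thr = recon_error(model, x_train).quantile(0.99).item()
    err = recon_error(model, make_windows(faulty)).mean().item()
    print(f"{name}: threshold={thr:.4f}  fault error={err:.4f}  detected={err > thr}")
```

In this sketch, the injected sinusoid resembles the fault signature, so the autoencoder trained on poisoned windows reconstructs the faulty vibrations well and their reconstruction error tends to stay below the learned threshold, mirroring the paper's observation that the detector trained on tampered data fails to flag abnormal fan vibrations.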