IEICE ESS Fundamentals Review
Online ISSN : 1882-0875
ISSN-L : 1882-0875
Proposed by HWS (Hardware Security)
Hardware Security on Edge AI Devices
Kota Yoshida, Takeshi Fujino
2021 Volume 15 Issue 2 Pages 88-100

Abstract

Machine learning technologies such as deep neural networks (DNNs) (hereafter referred to as “AI”) demonstrate remarkable performance in tasks such as image recognition, and the deployment of AI in society is expected to accelerate further. In safety- or security-critical applications such as autonomous driving vehicles and surveillance cameras, it is important to implement countermeasures against attacks on AI, such as those that induce malfunctions or leak private information. In addition, trained AI models are important intellectual property and must be protected. Meanwhile, in applications such as surveillance cameras and autonomous driving vehicles, which require privacy protection and low-latency processing, the inference process must be executed on embedded devices (hereinafter referred to as “edge AI”). In such cases, security must be ensured under the assumption that an attacker can physically access the edge AI device, and the research field of hardware security for edge AI has been active since around 2017. In this paper, we review threats to AI security and hardware security for edge AI, and introduce recent research topics and countermeasures.

© 2021 The Institute of Electronics, Information and Communication Engineers