IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences
Online ISSN : 1745-1337
Print ISSN : 0916-8508

A formally published version of this article is available. Please refer to the published version, and cite the published version when citing this work.

Mitigation of Membership Inference Attack by Knowledge Distillation on Federated Learning
Rei UEDA, Tsunato NAKAI, Kota YOSHIDA, Takeshi FUJINO
Journal / Free access / Advance online publication

Article ID: 2024CIP0004

Abstract

Federated learning (FL) is a distributed deep learning technique involving multiple clients and a server. In FL, each client trains a model on its own training data and sends only the model to the server, which aggregates the received client models into a server model. Because no client shares its training data with the other clients or the server, FL is regarded as a privacy-preserving distributed deep learning technique. Nevertheless, several attacks have been reported against FL that extract information about a specific client's training data from the aggregated model on the server. These include membership inference attacks (MIAs), which determine whether or not specific data was used to train a target model. MIAs succeed mainly because the model overfits its training data, and mitigation techniques based on knowledge distillation have therefore been proposed. However, because these techniques assume abundant training data and computational power, they are difficult to apply directly to clients in FL. In this paper, we propose a knowledge-distillation-based defense against MIAs that is designed for application in FL. In contrast to the conventional defenses, the proposed method is effective against various MIAs without requiring additional training data.
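As background to the abstract, the two building blocks it refers to (server-side aggregation of client models and knowledge distillation) can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' proposed method: the FedAvg-style averaging, the function names, and the hyperparameters (`temperature`, `alpha`) are assumptions chosen for illustration.

```python
# Illustrative sketch only: FedAvg-style parameter averaging on the server, and
# a generic knowledge-distillation loss of the kind MIA mitigations build on.
# Names and hyperparameters are assumptions, not the paper's implementation.

import copy
import torch
import torch.nn.functional as F


def federated_average(client_state_dicts):
    """Server step: element-wise average of client model parameters
    (assumes floating-point parameter tensors)."""
    avg_state = copy.deepcopy(client_state_dicts[0])
    for key in avg_state:
        for state in client_state_dicts[1:]:
            avg_state[key] = avg_state[key] + state[key]
        avg_state[key] = avg_state[key] / len(client_state_dicts)
    return avg_state


def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Generic KD loss: soft-label KL term plus hard-label cross-entropy.
    Training on softened teacher outputs reduces the model's overconfidence
    on its own training data, which is what MIAs typically exploit."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In this reading, each client would compute something like `distillation_loss` locally and the server would combine the resulting models with `federated_average`; how the teacher is obtained without extra training data is the contribution described in the paper itself.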

© 2024 The Institute of Electronics, Information and Communication Engineers