IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences
Online ISSN : 1745-1337
Print ISSN : 0916-8508


Mitigation of Membership Inference Attack by Knowledge Distillation on Federated Learning
Rei UEDA, Tsunato NAKAI, Kota YOSHIDA, Takeshi FUJINO
Advance online publication

Article ID: 2024CIP0004

Abstract

Federated learning (FL) is a distributed deep learning technique involving multiple clients and a server. In FL, each client trains a model on its own training data and sends only the model to the server; the server then aggregates the received client models to build a server model. Because no client shares its training data with the other clients or the server, FL is regarded as a privacy-preserving form of distributed deep learning. However, several attacks have been reported against FL that extract information about a specific client's training data from the aggregated model on the server. These include membership inference attacks (MIAs), which determine whether or not specific data was used to train a target model. MIAs succeed mainly because the model overfits its training data, and mitigation techniques based on knowledge distillation have therefore been proposed. Because these techniques assume abundant training data and computational resources, they are difficult to apply directly to clients in FL. In this paper, we propose a knowledge-distillation-based defense against MIAs that is designed for application in FL. Unlike conventional defenses, the proposed method is effective against various MIAs without requiring additional training data.
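
To make the setting concrete, the following is a minimal, illustrative sketch (in PyTorch, not the authors' implementation) of the two ingredients discussed above: FedAvg-style server aggregation of client models, and a generic knowledge-distillation loss in which a student is trained against temperature-softened teacher outputs, the mechanism that conventional KD-based MIA mitigations rely on. The function names and hyperparameters (fedavg, distillation_loss, T, alpha) are assumptions for illustration only.

```python
# Minimal sketch (not the authors' method): FedAvg-style aggregation and a
# generic knowledge-distillation loss. Names and hyperparameters are assumed.
import torch
import torch.nn.functional as F


def fedavg(client_state_dicts, client_weights):
    """Server-side aggregation: weighted average of client model parameters."""
    total = sum(client_weights)
    averaged = {}
    for key in client_state_dicts[0]:
        averaged[key] = sum(
            w * sd[key].float() for sd, w in zip(client_state_dicts, client_weights)
        ) / total
    return averaged


def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Knowledge-distillation loss: KL term on temperature-softened teacher
    outputs plus a hard-label cross-entropy term, mixed by alpha."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

Training against softened teacher outputs rather than raw labels is commonly cited as the reason KD-based defenses reduce MIA success: it narrows the confidence gap between member and non-member samples that such attacks typically exploit.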

© 2024 The Institute of Electronics, Information and Communication Engineers