International Journal of Networking and Computing
Online ISSN : 2185-2847
Print ISSN : 2185-2839
ISSN-L : 2185-2839
Special issue on the Eleventh International Symposium on Networking and Computing
A Multi-Head Federated Continual Learning Approach for Improved Flexibility and Robustness in Edge Environments
Chunlu Chen, Kevin I-Kai Wang, Peng Li, Kouichi Sakurai
JOURNAL OPEN ACCESS

2024 Volume 14 Issue 2 Pages 123-144

Abstract

In the rapidly evolving field of machine learning, traditional approaches often face limitations such as high computational costs and catastrophic forgetting, particularly when models are retrained on new datasets. These issues are especially pronounced in environments that must adapt swiftly to changing data landscapes. Continual learning has emerged as a pivotal solution, enabling models to assimilate new information while preserving knowledge acquired in previous learning phases. Despite these benefits, the retention of prior knowledge inherent in continual learning introduces a potential risk of information leakage. To address these challenges, we propose a Federated Continual Learning (FCL) framework built on a multi-head neural network model. This approach combines the privacy-preserving capabilities of Federated Learning (FL) with the adaptability of continual learning, ensuring both data privacy and continuous learning in edge computing environments. Moreover, the framework strengthens adversarial training: the constant influx of diverse and complex training data improves the model's understanding and adaptability, reinforcing its defenses against adversarial threats. Our system features an architecture with a dedicated fully-connected layer for each task, ensuring that features unique to each task are accurately captured and preserved over the model's lifetime. Data is processed through these task-specific layers, and the final label is determined by the highest prediction value across all heads. This method exploits the model's full range of knowledge, significantly boosting prediction accuracy. We conducted thorough evaluations of our FCL framework on two benchmark datasets, MNIST and CIFAR-10, and the results clearly validate the effectiveness of our approach.
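The multi-head inference rule described in the abstract, one fully-connected head per task with the final label taken from the highest prediction value across all heads, can be sketched as follows. This is a minimal NumPy illustration under our own assumptions, not the authors' implementation; the feature dimension, class counts, head count, and random weights are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

FEATURE_DIM = 16      # size of the shared representation (illustrative)
CLASSES_PER_TASK = 2  # classes handled by each task-specific head (illustrative)
NUM_TASKS = 3         # heads accumulated over the model's lifetime (illustrative)

# One weight matrix and bias vector per task-specific fully-connected head.
heads = [
    (rng.standard_normal((FEATURE_DIM, CLASSES_PER_TASK)),
     rng.standard_normal(CLASSES_PER_TASK))
    for _ in range(NUM_TASKS)
]

def predict(features: np.ndarray) -> tuple[int, int]:
    """Run every task head and return (task_id, class_id) of the top score."""
    scores = [features @ W + b for W, b in heads]  # one score vector per head
    all_scores = np.concatenate(scores)            # pool scores across heads
    flat = int(np.argmax(all_scores))              # global highest prediction
    return flat // CLASSES_PER_TASK, flat % CLASSES_PER_TASK

task_id, class_id = predict(rng.standard_normal(FEATURE_DIM))
print(task_id, class_id)
```

Because every head is evaluated at inference time, the model does not need to be told which task an input belongs to; the pooled argmax selects both the task and the class, which is what lets the framework exploit knowledge from all previously learned tasks.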

© 2024 International Journal of Networking and Computing