Edge computing methods, particularly federated learning (FL), have gained significant attention because they allow participants to contribute to model training without exposing their training data to other users. Conventional FL assumes a single task with a shared data distribution, and it remains unclear whether comparable learning can be achieved when clients work on different tasks. We explore a learning architecture that supports multitask learning through partial model sharing. To demonstrate its effectiveness, we propose the common bases hypothesis, which suggests that common representations can be shared efficiently among tasks. By using singular value decomposition to fix a subspace of the trained weights during learning, we find that FL with partial model sharing across multiple tasks is feasible because the participating tasks learn similar representations.
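To make the subspace-fixing idea concrete, the following is a minimal NumPy sketch of how a trained weight matrix might be decomposed with singular value decomposition, with a top-k basis treated as a fixed, shareable subspace and the remaining coefficients kept task-specific. The matrix shapes, the rank k, and all variable names are illustrative assumptions, not the exact procedure used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained weight matrix from one client/task (shape is illustrative).
W = rng.standard_normal((64, 32))

# SVD of the trained weights: W = U @ diag(S) @ Vt.
U, S, Vt = np.linalg.svd(W, full_matrices=False)

# Keep the top-k singular directions as a candidate "common basis"
# that could be fixed and shared across tasks.
k = 8
U_shared = U[:, :k]               # fixed, shared subspace
coeff = np.diag(S[:k]) @ Vt[:k]   # task-specific coefficients (kept trainable)

# Part of the weights explained by the shared subspace, and the residual
# that would remain local to the task.
W_shared_part = U_shared @ coeff
residual = W - W_shared_part

print("fraction of norm captured by shared subspace:",
      np.linalg.norm(W_shared_part) / np.linalg.norm(W))
```

In a sketch like this, subsequent local training would freeze `U_shared` and update only the task-specific coefficients (and, depending on the design, the residual), so that only the coefficients tied to the common basis need to be exchanged among clients.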