Host: The Japanese Society for Artificial Intelligence
Name: The 39th Annual Conference of the Japanese Society for Artificial Intelligence
Number: 39
Location: Osaka, Japan
Date: May 27, 2025 - May 30, 2025
One-shot Neural Architecture Search (NAS) reduces computational cost by pre-training a supernet and sharing its weights across subnetworks during the search. However, adapting it to multi-task learning remains challenging. In this study, we investigated the impact of supernet pre-training on multi-task search performance. Specifically, we examined the effectiveness of using a supernet trained on one task as a warm start for architecture search on a different task, and we analyzed the effect of further updating the supernet weights by training on the current search task after the warm start. Our experiments were designed to determine whether a pre-trained supernet improves search efficiency and whether additional adaptation during the search enhances architecture performance. The results demonstrated that transferring a supernet across tasks improved search performance, and that further training on the current task significantly enhanced search effectiveness.
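The pipeline the abstract describes can be sketched in a few lines of PyTorch. The sketch below is illustrative only: it assumes a tiny search space of three candidate operations, single-path supernet training, and random-sample search, and the synthetic data stands in for the actual source and target tasks. None of the names (MixedOp, Supernet, train_supernet, search, synthetic_loader) come from the paper.

```python
# Hypothetical sketch of warm-starting one-shot NAS across tasks; not the authors' code.
import copy
import random

import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed toy search space: three candidate ops per layer.
CANDIDATE_OPS = {
    "conv3": lambda c: nn.Conv2d(c, c, 3, padding=1),
    "conv5": lambda c: nn.Conv2d(c, c, 5, padding=2),
    "skip": lambda c: nn.Identity(),
}

class MixedOp(nn.Module):
    """One supernet layer that holds every candidate op; a subnetwork picks one."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleDict({name: fn(channels) for name, fn in CANDIDATE_OPS.items()})

    def forward(self, x, choice):
        return self.ops[choice](x)

class Supernet(nn.Module):
    """Weight-sharing supernet: a subnetwork is one op choice per layer."""
    def __init__(self, channels=16, depth=4, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.layers = nn.ModuleList(MixedOp(channels) for _ in range(depth))
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x, arch):
        x = self.stem(x)
        for layer, choice in zip(self.layers, arch):
            x = torch.relu(layer(x, choice))
        return self.head(x.mean(dim=(2, 3)))

def random_arch(depth):
    return [random.choice(list(CANDIDATE_OPS)) for _ in range(depth)]

def train_supernet(net, loader, epochs=1, lr=0.05):
    """Single-path one-shot training: each step updates one sampled subnetwork."""
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    net.train()
    for _ in range(epochs):
        for x, y in loader:
            loss = F.cross_entropy(net(x, random_arch(len(net.layers))), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

def search(net, loader, n_candidates=20):
    """Rank randomly sampled subnetworks by accuracy under the shared weights."""
    best_arch, best_acc = None, -1.0
    net.eval()
    with torch.no_grad():
        for _ in range(n_candidates):
            arch = random_arch(len(net.layers))
            correct = total = 0
            for x, y in loader:
                correct += (net(x, arch).argmax(1) == y).sum().item()
                total += y.numel()
            if correct / total > best_acc:
                best_arch, best_acc = arch, correct / total
    return best_arch, best_acc

def synthetic_loader(n_batches=4, batch=16):
    """Stand-in for a real task's data loader."""
    return [(torch.randn(batch, 3, 8, 8), torch.randint(0, 10, (batch,)))
            for _ in range(n_batches)]

task_a, task_b = synthetic_loader(), synthetic_loader()

supernet = Supernet()
train_supernet(supernet, task_a, epochs=2)   # pre-train the supernet on task A
warm = copy.deepcopy(supernet)               # warm start: transfer weights to task B
train_supernet(warm, task_b, epochs=1)       # optional adaptation on the search task
print(search(warm, task_b))                  # architecture search on task B
```

Under these assumptions, the two conditions the abstract examines map directly onto the last lines: initializing the task-B search from a fresh Supernet() instead of the deepcopy corresponds to the cold-start baseline, and skipping the second train_supernet call isolates the effect of cross-task transfer without adaptation.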