Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
39th (2025)
Session ID : 3S1-GS-2-01

Investigating the Impact of Supernet Pretraining in Multi-Task One-Shot NAS
*Yotaro YAMAGUCHI, Yuki TANIGAKI
Abstract

One-Shot Neural Architecture Search (NAS) reduces computational cost by pre-training a supernet and sharing its weights across subnetworks during the search. However, adapting this approach to multi-task learning remains challenging. In this study, we investigated the impact of supernet pre-training on multi-task search performance. Specifically, we examined the effectiveness of using a supernet trained on one task as a warm start for architecture search on a different task. Furthermore, we analyzed the effect of updating the supernet weights by continuing its training on the current search task after the warm start. Our experiments aimed to determine whether a pre-trained supernet improves search efficiency and whether additional adaptation during the search enhances architecture performance. The results demonstrated that transferring a supernet across tasks improved search performance, and that further training on the current task significantly enhanced search effectiveness.
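To make the workflow described above concrete, the following is a minimal sketch, assuming a toy weight-sharing supernet: pre-train the supernet on one task, warm-start the search on a second task by reusing those weights, optionally continue training the supernet on the current task, and then search by sampling subnetworks under the shared weights. All class and function names (ToySupernet, MixedOp, train_supernet, random_search) are hypothetical illustrations, not the authors' implementation.

# Sketch of cross-task warm starting in one-shot NAS (hypothetical code).
import copy
import random

import torch
import torch.nn as nn


class MixedOp(nn.Module):
    """One supernet layer; its candidate operations share the supernet's weights."""

    def __init__(self, dim):
        super().__init__()
        # Two candidate operations per layer; a subnetwork picks one index.
        self.ops = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()),
            nn.Identity(),
        ])

    def forward(self, x, choice):
        return self.ops[choice](x)


class ToySupernet(nn.Module):
    def __init__(self, in_dim, dim, num_layers, num_classes):
        super().__init__()
        self.stem = nn.Linear(in_dim, dim)
        self.layers = nn.ModuleList([MixedOp(dim) for _ in range(num_layers)])
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x, arch):
        x = self.stem(x)
        for layer, choice in zip(self.layers, arch):
            x = layer(x, choice)
        return self.head(x)


def train_supernet(net, loader, epochs=1, lr=1e-3):
    """Single-path supernet training: sample a random subnetwork per step."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    net.train()
    for _ in range(epochs):
        for x, y in loader:
            arch = [random.randrange(2) for _ in net.layers]
            opt.zero_grad()
            loss_fn(net(x, arch), y).backward()
            opt.step()


def random_search(net, loader, n_samples=20):
    """One-shot search: rank sampled subnetworks using the shared weights."""
    best_arch, best_acc = None, -1.0
    net.eval()
    with torch.no_grad():
        for _ in range(n_samples):
            arch = [random.randrange(2) for _ in net.layers]
            correct = total = 0
            for x, y in loader:
                correct += (net(x, arch).argmax(1) == y).sum().item()
                total += y.numel()
            if correct / total > best_acc:
                best_arch, best_acc = arch, correct / total
    return best_arch


def toy_loader(n=256, in_dim=16, num_classes=4):
    # Synthetic data standing in for a task's training/validation set.
    x, y = torch.randn(n, in_dim), torch.randint(0, num_classes, (n,))
    return [(x[i:i + 32], y[i:i + 32]) for i in range(0, n, 32)]


task_a, task_b = toy_loader(), toy_loader()

# 1) Pre-train the supernet on task A.
supernet = ToySupernet(in_dim=16, dim=32, num_layers=4, num_classes=4)
train_supernet(supernet, task_a, epochs=2)

# 2) Warm start: reuse the task-A weights for the search on task B.
warm = copy.deepcopy(supernet)

# 3) Adaptation: continue supernet training on the current (search) task.
train_supernet(warm, task_b, epochs=1)

# 4) One-shot search on task B with the adapted shared weights.
print("best architecture choices:", random_search(warm, task_b))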

© 2025 The Japanese Society for Artificial Intelligence