Host: The Japanese Society for Artificial Intelligence
Name: The 37th Annual Conference of the Japanese Society for Artificial Intelligence
Number: 37
Location: [in Japanese]
Date: June 6, 2023 - June 9, 2023
Instruction tuning has recently attracted significant attention as a method for training generalizable language models (e.g., ChatGPT). Although a wide variety of prompts have been manually created for instruction tuning, it remains unclear what kinds of prompts are optimal for obtaining cross-task generalization ability. This study presents instruction optimization, which optimizes training prompts via bilevel optimization, and uses it to clarify what kinds of prompts are optimal for instruction tuning. Experimental results demonstrate that instruction optimization enhances the diversity of prompts and improves generalization performance in the zero-shot setting, whereas reusing the same examples rather than a variety of exemplars is more effective in the few-shot setting.
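The abstract names bilevel optimization but does not spell out the procedure, so the following is only a minimal, hypothetical sketch of how prompt selection could be optimized against a validation objective: an inner gradient step trains the model under the current prompt mixture, and an outer step updates the mixture weights by the resulting validation loss. The toy linear model, random data, and the soft prompt-mixture parameterization are all assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of bilevel prompt optimization (not the paper's code).
import torch

torch.manual_seed(0)
model = torch.nn.Linear(8, 2)                       # stand-in for a language model
prompt_feats = torch.randn(4, 8)                    # 4 candidate instruction embeddings (assumed)
train_x, train_y = torch.randn(16, 8), torch.randint(0, 2, (16,))
val_x, val_y = torch.randn(16, 8), torch.randint(0, 2, (16,))

prompt_logits = torch.zeros(4, requires_grad=True)  # outer variable: prompt selection weights
outer_opt = torch.optim.Adam([prompt_logits], lr=0.1)
inner_lr = 0.05
loss_fn = torch.nn.CrossEntropyLoss()

for step in range(50):
    # Inner step: update model parameters under the current prompt mixture,
    # keeping the graph so gradients flow back to prompt_logits.
    weights = torch.softmax(prompt_logits, dim=0)
    prompt = weights @ prompt_feats                  # soft mixture of candidate prompts
    inner_loss = loss_fn(model(train_x + prompt), train_y)
    grads = torch.autograd.grad(inner_loss, list(model.parameters()), create_graph=True)
    fast_params = [p - inner_lr * g for p, g in zip(model.parameters(), grads)]

    # Outer step: evaluate the one-step-updated model on validation data and
    # update the prompt mixture with the resulting hypergradient.
    w, b = fast_params
    val_loss = loss_fn(torch.nn.functional.linear(val_x + prompt, w, b), val_y)
    outer_opt.zero_grad()
    val_loss.backward()
    outer_opt.step()

    # Commit the inner update to the model (detached from the graph).
    with torch.no_grad():
        for p, fp in zip(model.parameters(), fast_params):
            p.copy_(fp.detach())
```

In this sketch the outer objective is validation loss after a single unrolled inner step; a full treatment would unroll more steps or use implicit-gradient approximations, which this toy example omits.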