Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
38th (2024)
Session ID : 4Xin2-43
A Comparative Analysis of Instruction Prompt Formats on Code Generation Task
*Waka ITO, Miyu SATO, Shiho TAKANO, Kimio KURAMITSU
Abstract

In the development of Large Language Models for Code (Code LLMs), instruction tuning has been found effective in enhancing performance. Instruction tuning is a method that improves generalization by additional training on instruction data. However, opinions vary on what form of instruction is optimal, and the question remains open. The purpose of this study is to investigate how different instruction formats affect code generation performance, in order to strengthen the effects of instruction tuning for Code LLMs. In particular, we focused on the output formats used for code extraction and conducted experiments, visualizing the results. The results revealed performance differences in code generation across models caused by different output formats, and showed that the Markdown format was the most versatile. Moreover, specifying an output format yielded a higher accuracy rate than leaving it unspecified.
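The abstract refers to output formats used for code extraction. As an illustrative sketch only (not the authors' implementation), a model response in the Markdown format studied here can be parsed by matching the fenced code block:

```python
import re

# Matches a Markdown code fence with an optional language tag and
# captures the code body. re.DOTALL lets "." span multiple lines.
FENCE_RE = re.compile(r"```(?:\w+)?\n(.*?)```", re.DOTALL)

def extract_code(response: str) -> str:
    """Return the first fenced code block, or the whole response as a fallback."""
    match = FENCE_RE.search(response)
    return match.group(1).strip() if match else response.strip()

# Hypothetical model output wrapped in a Markdown fence.
response = (
    "Here is the solution:\n"
    "```python\n"
    "def add(a, b):\n"
    "    return a + b\n"
    "```\n"
)
print(extract_code(response))
```

When no output format is specified in the prompt, the generated code may be interleaved with free-form prose, which makes extraction like the above unreliable; this is one plausible reason why specifying a format improves the measured accuracy rate.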

© 2024 The Japanese Society for Artificial Intelligence