Host: The Japanese Society for Artificial Intelligence
Name: The 101st SIG-SLUD
Number: 101
Location: [in Japanese]
Date: September 09, 2024 - September 10, 2024
Pages: 96-101
To enhance the capabilities of LLMs in downstream tasks, appropriate prompts are essential, yet what constitutes appropriateness remains insufficiently debated. This paper categorizes natural language prompts into three subtypes: (1) DP (Definition-based Prompt), (2) IP (Instance-based Prompt), and (3) RDP (Recursive Definition-based Prompt), and validates seven methods of knowledge injection built from these three prompt types (D, I, RD, D+I, RD+I, D+RD, D+RD+I, where D, I, and RD denote DP, IP, and RDP, respectively). Across a total of 350 experimental iterations, the results indicate: (1) I and RD show similar outcomes and demonstrate higher accuracy and stability than D. (2) Methods employing multiple prompt types consistently exhibit higher accuracy and stability than single prompt type methods. (3) D+I achieves the highest accuracy, while D+RD+I significantly surpasses D+I in stability, showing superior overall performance. (4) Adding RDP enhances stability, and a synergistic effect between DP and IP is observed. (5) IP yields better results than DP, whether used alone or in combination with RDP. Based on these findings, this paper examines methods of knowledge injection and the role of domain experts.
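A minimal sketch of how the three prompt types and their seven combinations might be composed is shown below. The knowledge snippets, function names, and the example labeling task are hypothetical illustrations and are not taken from the paper; only the prompt-type taxonomy (DP, IP, RDP) and the seven injection methods follow the abstract.

```python
# Hypothetical knowledge material; the paper's actual domain knowledge is not shown here.
DEFINITION = "A 'claim' is a statement the speaker presents as true."  # DP material
EXAMPLES = [  # IP material
    ("I think it will rain tomorrow.", "claim"),
    ("Please close the door.", "not a claim"),
]
RECURSIVE_DEFINITION = (  # RDP material: the definition's own terms are defined in turn
    "A 'claim' is a statement presented as true; a statement is a sentence that "
    "expresses a proposition; a proposition is something that can be true or false."
)

def dp() -> str:
    """Definition-based Prompt: inject the plain definition."""
    return f"Definition: {DEFINITION}"

def ip() -> str:
    """Instance-based Prompt: inject labeled examples."""
    return "\n".join(f"Example: {text!r} -> {label}" for text, label in EXAMPLES)

def rdp() -> str:
    """Recursive Definition-based Prompt: inject the definition expanded recursively."""
    return f"Definition (expanded): {RECURSIVE_DEFINITION}"

BUILDERS = {"D": dp, "I": ip, "RD": rdp}

def build_prompt(method: str, task: str) -> str:
    """Compose one of the seven injection methods, e.g. 'D+RD+I', into a single prompt."""
    parts = [BUILDERS[key]() for key in method.split("+")]
    return "\n\n".join(parts + [f"Task: {task}"])

# The seven knowledge-injection methods evaluated in the paper.
METHODS = ["D", "I", "RD", "D+I", "RD+I", "D+RD", "D+RD+I"]

if __name__ == "__main__":
    for m in METHODS:
        print(f"--- {m} ---")
        print(build_prompt(m, "Label the sentence as 'claim' or 'not a claim'."))
        print()
```

In this reading, each method simply concatenates the selected prompt types before the task instruction; whether the paper orders or weights the components differently is not specified in the abstract.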