Host: The Japanese Society for Artificial Intelligence
Name : The 38th Annual Conference of the Japanese Society for Artificial Intelligence
Number : 38
Location : [in Japanese]
Date : May 28, 2024 - May 31, 2024
The objective of this research is to understand the Ability to Understand the Logical Structure (AULS) of Large Language Models (LLMs). In this paper, we first introduce a method inspired by In-Context Learning (ICL), named "Inductive Bias Learning (IBL): Data2Code Model." We then apply IBL to several models that have not previously been addressed in this line of research, including GPT-4-Turbo, GPT-3.5-Turbo, and Gemini Pro, and compare and analyze the accuracy and characteristics of the predictive models they generate. The results demonstrate that all of these models are capable of IBL, with GPT-4-Turbo in particular achieving a notable improvement in accuracy over the conventional GPT-4. Furthermore, the performance of the generated predictive models was found to vary between GPT-N and Gemini Pro.
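To make the idea concrete, the following is a minimal sketch of what an IBL-style "Data2Code" query might look like. The exact prompt format used in the paper is not shown here; the function name `build_ibl_prompt`, the CSV layout, and the wording of the instruction are illustrative assumptions. The key contrast with ordinary ICL is that the model is asked to emit a standalone predictive function in code, rather than a per-example prediction.

```python
# Hedged sketch of an Inductive Bias Learning (IBL, "Data2Code") prompt.
# All names and the prompt wording below are assumptions for illustration,
# not the paper's actual implementation.

def build_ibl_prompt(rows, feature_names, target_name):
    """Format (features, label) rows into a prompt asking for prediction code."""
    header = ",".join(feature_names + [target_name])
    lines = [",".join(str(v) for v in row) for row in rows]
    table = "\n".join([header] + lines)
    return (
        "Below is a dataset in CSV form.\n"
        f"{table}\n"
        "Infer the underlying structure and write a Python function\n"
        f"`predict({', '.join(feature_names)})` that returns `{target_name}`.\n"
        "Return only the code."
    )

prompt = build_ibl_prompt(
    rows=[(5.1, 3.5, 0), (6.7, 3.0, 1)],
    feature_names=["sepal_length", "sepal_width"],
    target_name="species",
)
print(prompt)
```

Such a prompt would then be sent to each model under comparison (e.g., GPT-4-Turbo or Gemini Pro), and the returned code evaluated as a predictive model on held-out data.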