Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
38th (2024)
Session ID : 1B3-GS-2-03

The Ability of Large Language Models to Understand Logical Structure and Generate Predictive Models
*Toma TANAKA, Naofumi EMOTO, Tsukasa YUMIBAYASHI

Abstract

The objective of this research is to understand the Ability to Understand the Logical Structure (AULS) of Large Language Models (LLMs). In this paper, we first introduce a method inspired by In-Context Learning (ICL), named "Inductive Bias Learning (IBL): Data2Code Model." We then apply IBL to several models not previously addressed in prior work, including GPT-4-Turbo, GPT-3.5-Turbo, and Gemini Pro, and compare and analyze the accuracy and characteristics of the predictive models they generate. The results show that all of these models possess the capability for IBL, with GPT-4-Turbo in particular achieving a notable improvement in accuracy over the conventional GPT-4. Furthermore, the performance of the predictive models generated by the GPT-N models and by Gemini Pro was found to differ.
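The abstract describes IBL ("Data2Code") as prompting an LLM, ICL-style, with raw labeled data and asking it to emit the code of a predictive model. The following is a minimal sketch of that idea only; the helper name `build_ibl_prompt`, the prompt wording, and the CSV-like serialization are assumptions for illustration and are not taken from the paper.

```python
# Hypothetical sketch of an IBL (Data2Code) prompt builder: serialize labeled
# examples into a single prompt that asks an LLM to return Python code for a
# predict() function, rather than a direct prediction as in plain ICL.

def build_ibl_prompt(rows, feature_names, target_name):
    """Serialize labeled examples into a prompt requesting predictive-model code."""
    header = ", ".join(feature_names + [target_name])
    lines = [", ".join(str(v) for v in row) for row in rows]
    data_block = "\n".join([header] + lines)
    return (
        "Below is a labeled dataset.\n"
        f"{data_block}\n"
        "Infer the underlying rule and return only Python code defining "
        f"predict({', '.join(feature_names)}) that outputs {target_name}."
    )

# The returned string would be sent to a model such as GPT-4-Turbo or
# Gemini Pro, and the code in its reply executed as the predictive model.
prompt = build_ibl_prompt(
    rows=[(1, 2, 3), (2, 3, 5), (4, 1, 5)],
    feature_names=["x1", "x2"],
    target_name="y",
)
print(prompt)
```

The key contrast with ordinary ICL is that the model's output is a reusable program, so its accuracy can be measured on held-out data without further LLM calls.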

© 2024 The Japanese Society for Artificial Intelligence