Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
38th Annual Conference (2024)
Session ID : 3Q5-IS-2b-02

Evaluating Japanese Language Proficiency in Large Language Models through Definition Modeling Techniques
*Ran LI, Edison MARRESE-TAYLOR, Yutaka MATSUO
Abstract

With the rapid advancement of Large Language Models (LLMs), developing methods and datasets for evaluating their language proficiency has become a critical issue. Among these, the task of definition modeling has recently been proposed to assess the proficiency of language models in specific domains, such as finance. By asking a model to generate dictionary-like definitions of a given term under controlled conditions, definition modeling evaluates the model's capability for lexical understanding. So far, most such efforts have focused on English; Japanese, with its complex writing system and ambiguous grammatical rules, remains less explored. In this paper, we propose using the definition modeling task to evaluate the proficiency of LLMs in Japanese. We collect Japanese dictionary data and use the resulting corpus to explore the effects of different prompting techniques across various settings.
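To illustrate the task setup described above, the following minimal sketch shows how zero-shot and few-shot prompts for Japanese definition modeling might be constructed. The prompt wording, example terms, and definitions are illustrative assumptions, not the authors' actual prompts or corpus.

```python
# Minimal sketch (assumed, not the authors' setup): building zero-shot and
# few-shot prompts that ask an LLM for a dictionary-style Japanese definition.

FEW_SHOT_EXAMPLES = [
    # (term, reference-style definition) -- illustrative entries only
    ("犬", "人に飼われ、番や猟などに使われる哺乳動物。"),
    ("走る", "足をすばやく動かして速く進む。"),
]

def build_prompt(term: str, few_shot: bool = False) -> str:
    """Return a prompt asking for a one-sentence dictionary definition of `term`."""
    lines = ["次の単語の辞書的な定義を一文で書いてください。"]
    if few_shot:
        for ex_term, ex_def in FEW_SHOT_EXAMPLES:
            lines.append(f"単語: {ex_term}\n定義: {ex_def}")
    lines.append(f"単語: {term}\n定義:")
    return "\n\n".join(lines)

if __name__ == "__main__":
    # The resulting string would be sent to an LLM; the generated definition
    # could then be compared against the reference dictionary entry.
    print(build_prompt("山", few_shot=True))
```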

© 2024 The Japanese Society for Artificial Intelligence