Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
38th (2024)
Session ID : 4Xin2-66

Hallucination Detection in Japanese LLMs under Zero-Resource Black-Box Fixed-Low-Temperature Constraint Through Data-Augmented Sampling
*Ryoma NAKAI, Ryusei ISHIKAWA, Shunsuke HASHIMOTO, Hiroyuki INOUE
Abstract

Inaccurate responses, termed hallucinations, pose challenges in various Large Language Model (LLM) applications. Although a sampling-based method called SelfCheckGPT has been devised to detect hallucinations by using the model's input-output interface without external knowledge, the method requires an increase in the temperature parameter, which cannot be controlled in some LLM services, including ChatGPT. In LLM services designed for accurate responses, the temperature parameter is fixed at a low level, which can degrade the performance of SelfCheckGPT. We therefore propose a novel methodology that utilizes data augmentation (adding random strings or back-translation) during sampling to detect hallucinations in Japanese LLMs under the fixed-low-temperature constraint. Our experimental results reveal that the proposed methodology outperforms SelfCheckGPT under the fixed-low-temperature constraint.
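The core idea above — perturbing the prompt during sampling instead of raising the temperature, then scoring the answer's consistency against the samples as SelfCheckGPT does — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the random-string augmentation is one of the two augmentations named in the abstract (back-translation would require an external MT system), and the unigram-overlap consistency score is a simplified stand-in for SelfCheckGPT's scoring variants.

```python
import random
import string

def augment_prompt(prompt: str, n: int, rng: random.Random) -> list[str]:
    """Create n perturbed copies of the prompt by appending short random
    strings, so a deterministic low-temperature LLM still produces varied
    samples. (Hypothetical augmentation format; the tag text is arbitrary.)"""
    variants = []
    for _ in range(n):
        noise = "".join(rng.choices(string.ascii_letters, k=8))
        variants.append(f"{prompt}\n[{noise}]")
    return variants

def consistency_score(answer: str, samples: list[str]) -> float:
    """SelfCheckGPT-style consistency check (simplified): the fraction of
    the answer's tokens that also appear in each sampled response, averaged
    over samples. A low score means the samples do not support the answer,
    which flags a likely hallucination."""
    tokens = answer.split()
    if not tokens or not samples:
        return 0.0
    per_sample = []
    for s in samples:
        bag = set(s.split())
        per_sample.append(sum(t in bag for t in tokens) / len(tokens))
    return sum(per_sample) / len(per_sample)

# Usage with a stubbed LLM (a real pipeline would call the service's
# input-output interface at its fixed low temperature):
rng = random.Random(0)
prompts = augment_prompt("Who wrote 'Kokoro'?", n=3, rng=rng)
samples = ["Natsume Soseki wrote Kokoro ."] * len(prompts)  # stub responses
score = consistency_score("Natsume Soseki wrote Kokoro .", samples)
```

For Japanese text, token overlap would need a morphological analyzer rather than whitespace splitting; the sketch keeps whitespace tokens only to stay self-contained.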

© 2024 The Japanese Society for Artificial Intelligence